Quantum state teleported between quantum dots at telecoms wavelengths

Physicists at the University of Stuttgart, Germany, have teleported a quantum state between photons generated by two different semiconductor quantum dot light sources located several metres apart. Though the distance involved in this proof-of-principle “quantum repeater” experiment is small, members of the team describe the feat as a prerequisite for future long-distance quantum communications networks.

“Our result is particularly exciting because such a quantum Internet will encompass these types of distant quantum nodes and will require quantum states that are transmitted among these different nodes,” explains Tim Strobel, a PhD student at Stuttgart’s Institute of Semiconductor Optics and Functional Interfaces (IHFG) and the lead author of a paper describing the research. “It is therefore an important step in showing that remote sources can be effectively interfaced in this way in quantum teleportation experiments.”

In the Stuttgart study, one of the quantum dots generates a single photon while the other produces a pair of photons that are entangled – meaning that the quantum state of one photon is closely linked to the state of the other, no matter how far apart they are. One of the photons in the entangled pair then travels to the other quantum dot and interferes with the photon there. This process produces a superposition that allows the information encapsulated in the single photon to be transferred to the distant “partner” photon from the pair.
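For readers who want the textbook mechanics behind this description, the standard teleportation identity below shows how a joint (Bell-state) measurement on the input photon and one half of the entangled pair leaves the remote photon carrying the input state, up to a known correction. This is the generic qubit protocol rather than the specific photonic encoding used in the Stuttgart experiment, so the notation is an illustrative assumption.

```latex
% |psi> is the state to be teleported; |Phi+->, |Psi+-> are the four Bell states
|\psi\rangle_1 \otimes |\Phi^+\rangle_{23}
 = \tfrac{1}{2}\Big[\,
     |\Phi^+\rangle_{12}\,|\psi\rangle_3
   + |\Phi^-\rangle_{12}\,\sigma_z|\psi\rangle_3
   + |\Psi^+\rangle_{12}\,\sigma_x|\psi\rangle_3
   + |\Psi^-\rangle_{12}\,\sigma_x\sigma_z|\psi\rangle_3
 \,\Big]
```

Reading off which Bell state photons 1 and 2 were projected onto tells the receiver which (if any) of the corrections σ_z, σ_x or σ_xσ_z to apply to photon 3.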

Quantum frequency converters

Strobel says the most challenging part of the experiment was making photons from two remote quantum dots interfere with each other. Such interference is only possible if the two particles are indistinguishable, meaning they must be identical in every regard, be it their temporal shape, spatial shape or wavelength. Each quantum dot, however, is unique, especially in terms of its spectral properties, and each one emits photons at slightly different wavelengths.

To close the gap, the team used devices called quantum frequency converters to precisely tune the wavelength of the photons and match them spectrally. The researchers also used the converters to shift the original wavelengths of the photons emitted from the quantum dots (around 780 nm) to a wavelength commonly used in telecommunications (1515 nm) without altering the quantum state of the photons. This offers further advantages: “Being at telecommunication wavelengths makes the technology compatible with the existing global optical fibre network, an important step towards real-life applications,” Strobel tells Physics World.
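As a rough illustration of the energy bookkeeping involved – the article does not specify the converter design, so the difference-frequency scheme and the pump wavelength below are assumptions made for the example – a converter that mixes the quantum-dot photon with a strong pump must satisfy

```latex
\frac{1}{\lambda_\mathrm{pump}}
 = \frac{1}{\lambda_\mathrm{in}} - \frac{1}{\lambda_\mathrm{out}}
 = \frac{1}{780\ \mathrm{nm}} - \frac{1}{1515\ \mathrm{nm}}
 \;\;\Rightarrow\;\; \lambda_\mathrm{pump} \approx 1.61\ \mu\mathrm{m}
```

with the energy difference between the input and output photons exchanged with the strong classical pump field rather than with the encoded quantum state.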

Proof-of-principle experiment

In this work, the quantum dots were separated by an optical fibre just 10 m in length. However, the researchers aim to push this to considerably greater distances in the future. Strobel notes that the Stuttgart study was published in Nature Communications back-to-back with an independent work carried out by researchers led by Rinaldo Trotta of Sapienza University in Rome, Italy. The Rome-based group demonstrated quantum state teleportation across the Sapienza University campus at shorter wavelengths, enabled by the brightness of their quantum-dot source.

“These two papers that we published independently strengthen the measurement outcomes, demonstrating the maturity of quantum dot light sources in this domain,” Strobel says. Semiconducting quantum dots are particularly attractive for this application, he adds, because as well as producing both single and entangled photons on demand, they are also compatible with other semiconductor technologies.

Fundamental research pays off

Simone Luca Portalupi, who leads the quantum optics group at IHFG, notes that “several years of fundamental research and semiconductor technology are converging into these quantum teleportation experiments”. For Peter Michler, who led the study team, the next step is to leverage these advances to bring quantum-dot-based teleportation technology out of a controlled laboratory environment and into the real world.

Strobel points out that there is already some precedent for this, as one of the group’s previous studies showed that they could maintain photon entanglement across a 36-km fibre link deployed across the city of Stuttgart. “The natural next step would be to show that we can teleport the state of a photon across this deployed fibre link,” he says. “Our results will stimulate us to improve each building block of the experiment, from the sample to the setup.”

The post Quantum state teleported between quantum dots at telecoms wavelengths appeared first on Physics World.


Quantum metrology at NPL: we explore the challenges and opportunities

This episode of the Physics World Weekly podcast features a conversation with Tim Prior and John Devaney of the National Physical Laboratory (NPL), which is the UK’s national metrology institute.

Prior is NPL’s quantum programme manager and Devaney is its quantum standards manager. They talk about NPL’s central role in the recent launch of NMI-Q, which brings together some of the world’s leading national metrology institutes to accelerate the development and adoption of quantum technologies.

Prior and Devaney describe the challenges and opportunities of developing metrology and standards for rapidly evolving technologies including quantum sensors, quantum computing and quantum cryptography. They talk about the importance of NPL’s collaborations with industry and academia and explore the diverse career opportunities for physicists at NPL. Prior and Devaney also talk about their own careers and share their enthusiasm for working in the cutting-edge and fast-paced field of quantum metrology.

This podcast is sponsored by the National Physical Laboratory.

Further reading

Why quantum metrology is the driving force for best practice in quantum standardization

Performance metrics and benchmarks point the way to practical quantum advantage

End note: NPL retains copyright on this article.

The post Quantum metrology at NPL: we explore the challenges and opportunities appeared first on Physics World.


Mapping electron phases in nanotube arrays

Carbon nanotube arrays are designed to investigate the behaviour of electrons in low‑dimensional systems. By arranging well‑aligned 1D nanotubes into a 2D film, the researchers create a coupled‑wire structure that allows them to study how electrons move and interact as the system transitions between different dimensionalities. Using a gate electrode positioned on top of the array, the researchers were able to tune both the carrier density (the number of electrons and holes per unit area) and the strength of electron–electron interactions, giving them controlled access to several distinct regimes. Depending on these settings, the nanotubes behave as weakly coupled 1D channels in which electrons move along each nanotube; as a 2D Fermi liquid, in which electrons hop between nanotubes and the film behaves like a conventional metal; or, at low carrier densities, as a set of quantum‑dot‑like islands showing Coulomb blockade, in which sections of the nanotubes become isolated.

The dimensional transitions are set by two key temperatures: T₂D, where electrons begin to hop between neighbouring nanotubes, and T₁D, where the system behaves as a Luttinger liquid – a 1D state in which electrons cannot easily pass each other and therefore move in a strongly correlated, collective way. Changing the number of holes in the nanotubes changes how strongly the tubes interact with each other. This controls when the system stops acting like separate 1D wires and when strong interactions make parts of the film break up into isolated regions that show Coulomb blockade.

The researchers built a phase diagram by looking at how the conductance changes with temperature and voltage, and by checking how well it follows power‑law behaviour at different energy ranges. This approach allows them to identify the boundaries between Tomonaga–Luttinger liquid, Fermi liquid and Coulomb blockade phases across a wide range of gate voltages and temperatures.
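A minimal sketch of the kind of power-law check described above – not the authors' analysis code, and using invented numbers purely for illustration – fits the conductance to G ∝ T^α on log–log axes; a good straight-line fit flags the Tomonaga–Luttinger-liquid regime and its exponent, while systematic deviations mark the crossover to Fermi-liquid or Coulomb-blockade behaviour.

```python
import numpy as np

# Hypothetical conductance-vs-temperature data (illustrative values only);
# the published analysis works with the measured G(T) and G(V) curves.
T = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])      # temperature (K)
G = np.array([0.10, 0.17, 0.30, 0.52, 0.90, 1.55])   # conductance (units of e^2/h)

# A Tomonaga-Luttinger liquid predicts G ~ T**alpha over a window of temperatures,
# so the slope of log(G) vs log(T) gives the power-law exponent for that regime.
alpha, log_prefactor = np.polyfit(np.log(T), np.log(G), 1)
print(f"fitted power-law exponent alpha = {alpha:.2f}")
```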

Overall, the work demonstrates a continuous crossover between 2D, 1D and 0D electronic behaviour in a controllable nanotube array. This provides an experimentally accessible platform for studying correlated low‑dimensional physics and offers insights relevant to the development of nanoscale electronic devices and future carbon nanotube technologies.

Read the full article

Dimensionality and correlation effects in coupled carbon nanotube arrays

Xiaosong Deng et al 2025 Rep. Prog. Phys. 88 088001

Do you want to learn more about this topic?

Structural approach to charge density waves in low-dimensional systems: electronic instability and chemical bonding, Jean-Paul Pouget and Enric Canadell (2024)

The post Mapping electron phases in nanotube arrays appeared first on Physics World.


CMS spots hints of a new form of top‑quark matter

The CMS Collaboration has investigated in detail events in which a top quark and an anti‑top quark are produced together in high‑energy proton–proton collisions at √s = 13 TeV, using the full 138 fb⁻¹ dataset collected between 2016 and 2018. The top quark is the heaviest fundamental particle and decays almost immediately after being produced in high-energy collisions; the anti-top quark has the same mass and lifetime but opposite charges. Because both decay so quickly, the formation of a bound top–antitop state was long considered highly unlikely and had never been observed. When a top quark and an anti-top quark are produced together, they are referred to as a top–antitop pair (tt̄).

Focusing on events with two charged leptons (the top and anti-top quarks decay into two electrons, two muons or one electron and one muon) and multiple jets (sprays of particles associated with top-quark decay), the analysis examines the invariant mass of the top–antitop system along with two angular observables that directly probe how the spins of the top and anti‑top quarks are correlated. These measurements allow the team to compare the data with the prediction for non-resonant tt̄ production based on fixed-order perturbative quantum chromodynamics (QCD), which is what physicists normally use to calculate how quarks behave according to the Standard Model of particle physics.

Near the kinematic threshold where the top–antitop pair is produced, CMS observes a significant excess of events relative to the QCD prediction. The number of extra events they see can be translated into a production rate. Using a simplified model based on non‑relativistic QCD, they estimate that this excess corresponds to a cross section of about 8.8 picobarns, with an uncertainty of roughly +1.2/–1.4 picobarns. The pattern of the excess, including its spin‑correlation features, is consistent with the production of a colour singlet pseudoscalar (a top–antitop pair in the 1S₀ state, i.e. the simplest, lowest energy configuration), and therefore with the prediction of non-relativistic QCD near the tt̄ threshold. The statistical significance of the excess exceeds five standard deviations, indicating that the effect is unlikely to be a statistical fluctuation. Researchers want to find a toponium‑like state because it would reveal how the strongest force in nature behaves at the highest energies, test key theories of heavy‑quark physics, and potentially expose new physics beyond the Standard Model.
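To put the quoted cross section into context, a back-of-the-envelope estimate (not a number taken from the paper) multiplies it by the integrated luminosity of the dataset to get the total number of such events produced, before branching fractions and selection efficiencies are applied:

```latex
N \approx \sigma \times L_\mathrm{int}
  = 8.8\ \mathrm{pb} \times 138\ \mathrm{fb}^{-1}
  = 8.8\ \mathrm{pb} \times 1.38\times10^{5}\ \mathrm{pb}^{-1}
  \approx 1.2\times10^{6}\ \mathrm{events}
```

Only a small fraction of these end up in the two-lepton final states selected for the analysis.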

The researchers emphasise that modelling the tt̄ threshold region is theoretically challenging, and that alternative explanations remain possible. Nonetheless, the result aligns with long‑standing predictions from non‑relativistic QCD that heavy quarks could form short‑lived bound states near threshold. The analysis also showcases spin correlation as an effective means to discover and characterise such effects, which were previously considered to be beyond the reach of experimental capabilities. Starting with the confirmation by the ATLAS Collaboration last July, this observation has sparked, and continues to inspire, follow-up theoretical and experimental work, opening up a new field of study involving bound states of heavy quarks and providing new insight into the behaviour of the strong force at high energies.

Read the full article

Observation of a pseudoscalar excess at the top quark pair production threshold

The CMS Collaboration 2025 Rep. Prog. Phys. 88 087801

Do you want to learn more about this topic?

The sea of quarks and antiquarks in the nucleon, D F Geesaman and P E Reimer (2019)

The post CMS spots hints of a new form of top‑quark matter appeared first on Physics World.


Photonics West explores the future of optical technologies

The 2026 SPIE Photonics West meeting takes place in San Francisco, California, from 17 to 22 January. The premier event for photonics research and technology, Photonics West incorporates more than 100 technical conferences covering topics including lasers, biomedical optics, optoelectronics, quantum technologies and more.

As well as the conferences, Photonics West also offers 60 technical courses and a new Career Hub with a co-located job fair. There are also five world-class exhibitions featuring over 1500 companies and incorporating industry-focused presentations, product launches and live demonstrations. The first of these is the BiOS Expo, which begins on 17 January and examines the latest breakthroughs in biomedical optics and biophotonics technologies.

Then starting on 20 January, the main Photonics West Exhibition will host more than 1200 companies and showcase the latest innovative optics and photonics devices, components, systems and services. Alongside, the Quantum West Expo features the best in quantum-enabling technology advances, the AR | VR | MR Expo brings together leading companies in XR hardware and systems and – new for 2026 – the Vision Tech Expo highlights cutting-edge vision, sensing and imaging technologies.

Here are some of the product innovations on show at this year’s event.

Enabling high-performance photonics assembly with SmarAct

As photonics applications increasingly require systems with high complexity and integration density, manufacturers face a common challenge: how to assemble, align and test optical components with nanometre precision – quickly, reliably and at scale. At Photonics West, SmarAct presents a comprehensive technology portfolio addressing exactly these demands, spanning optical assembly, fast photonics alignment, precision motion and advanced metrology.

SmarAct’s photonics assembly portfolio
Rapid and reliable: SmarAct’s technology portfolio enables assembly, alignment and testing of optical components with nanometre precision. (Courtesy: SmarAct)

A central highlight is SmarAct’s Optical Assembly Solution, presented together with a preview of a powerful new software platform planned for release in late-Q1 2026. This software tool is designed to provide exceptional flexibility for implementing automation routines and process workflows into user-specific control applications, laying the foundation for scalable and future-proof photonics solutions.

For high-throughput applications, SmarAct showcases its Fast Photonics Alignment capabilities. By combining high-dynamic motion systems with real-time feedback and controller-based algorithms, SmarAct enables rapid scanning and active alignment of photonic integrated circuits (PICs) and optical components such as fibres, fibre array units, lenses, beam splitters and more. These solutions significantly reduce alignment time while maintaining sub-micrometre accuracy, making them ideal for demanding photonics packaging and assembly tasks.

Both the Optical Assembly Solution and Fast Photonics Alignment are powered by SmarAct’s electromagnetic (EM) positioning axes, which form the dynamic backbone of these systems. The direct-drive EM axes combine high speed, high force and exceptional long-term durability, enabling fast scanning, smooth motion and stable positioning even under demanding duty cycles. Their vibration-free operation and robustness make them ideally suited for high-throughput optical assembly and alignment tasks in both laboratory and industrial environments.

Precision feedback is provided by SmarAct’s advanced METIRIO optical encoder family, designed to deliver high-resolution position feedback for demanding photonics and semiconductor applications. The METIRIO stands out by offering sub-nanometre position feedback in an exceptionally compact and easy-to-integrate form factor. Compatible with linear, rotary and goniometric motion systems – and available in vacuum-compatible designs – the METIRIO is ideally suited for space-constrained photonics setups, semiconductor manufacturing, nanopositioning and scientific instrumentation.

For applications requiring ultimate measurement performance, SmarAct presents the PICOSCALE Interferometer and Vibrometer. These systems provide picometre-level displacement and vibration measurements directly at the point of interest, enabling precise motion tracking, dynamic alignment, and detailed characterization of optical and optoelectronic components. When combined with SmarAct’s precision stages, they form a powerful closed-loop solution for high-yield photonics testing and inspection.

Together, SmarAct’s motion, metrology and automation solutions form a unified platform for next-generation photonics assembly and alignment.

  • Visit SmarAct at booth #3438 at Photonics West and booth #8438 at BiOS to discover how these technologies can accelerate your photonics workflows.

Avantes previews AvaSoftX software platform and new broadband light source

Photonics West 2026 will see Avantes present the first live demonstration of its completely redesigned software platform, AvaSoftX, together with a sneak peek of its new broadband light source, the AvaLight-DH-BAL. The company will also run a series of application-focused live demonstrations, highlighting recent developments in laser-induced breakdown spectroscopy (LIBS), thin-film characterization and biomedical spectroscopy.

AvaSoftX is developed to streamline the path from raw spectra to usable results. The new software platform offers preloaded applications tailored to specific measurement techniques and types, such as irradiance, LIBS, chemometry and Raman. Each application presents the controls and visualizations needed for that workflow, reducing time and the risk of user error.

The new AvaSoftX software platform
Streamlined solution: The new AvaSoftX software platform offers next-generation control and data handling. (Courtesy: Avantes)

Smart wizards guide users step-by-step through the setup of a measurement – from instrument configuration and referencing to data acquisition and evaluation. For more advanced users, AvaSoftX supports customization with scripting and user-defined libraries, enabling the creation of reusable methods and application-specific data handling. The platform also includes integrated instruction videos and online manuals to support the users directly on the platform.

The software features an accessible dark interface optimized for extended use in laboratory and production environments. Improved LIBS functionality will be highlighted through a live demonstration that combines AvaSoftX with the latest Avantes spectrometers and light sources.

Also making its public debut is the AvaLight-DH-BAL, a new and improved deuterium–halogen broadband light source designed to replace the current DH product line. The system delivers continuous broadband output from 215 to 2500 nm and combines a more powerful halogen lamp with a reworked deuterium section for improved optical performance and stability.

A switchable deuterium and halogen optical path is combined with deuterium peak suppression to improve dynamic range and spectral balance. The source is built into a newly developed, more robust housing to improve mechanical and thermal stability. Updated electronics support adjustable halogen output, a built-in filter holder, and both front-panel and remote-controlled shutter operation.

The AvaLight-DH-BAL is intended for applications requiring stable, high-output broadband illumination, including UV–VIS–NIR absorbance spectroscopy, materials research and thin-film analysis. The official launch date for the light source, as well as the software, will be shared in the near future.

Avantes will also run a series of live application demonstrations. These include a LIBS setup for rapid elemental analysis, a thin-film measurement system for optical coating characterization, and a biomedical spectroscopy demonstration focusing on real-time measurement and analysis. Each demo will be operated using the latest Avantes hardware and controlled through AvaSoftX, allowing visitors to assess overall system performance and workflow integration. Avantes’ engineering team will be available throughout the event.

  • For product previews, live demonstrations and more, meet Avantes at booth #1157.

HydraHarp 500: high-performance time tagger redefines precision and scalability

One year after its successful market introduction, the HydraHarp 500 continues to be a standout highlight at PicoQuant’s booth at Photonics West. Designed to meet the growing demands of advanced photonics and quantum optics, the HydraHarp 500 sets benchmarks in timing performance, scalability and flexible interfacing.

At its core, the HydraHarp 500 delivers exceptional timing precision combined with ultrashort jitter and dead time, enabling reliable photon timing measurements even at very high count rates. With support for up to 16 fully independent input channels plus a common sync channel, the system allows true simultaneous multichannel data acquisition without cross-channel dead time, making it ideal for complex correlation experiments and high-throughput applications.

The HydraHarp 500
At the forefront of photon timing: The high-resolution multichannel time tagger HydraHarp 500 offers picosecond timing precision. It combines versatile trigger methods with multiple interfaces, making it ideally suited for demanding applications that require many input channels and high data throughput. (Courtesy: PicoQuant)

A key strength of the HydraHarp 500 is its high flexibility in detector integration. Multiple trigger methods support a wide range of detector technologies, from single-photon avalanche diodes (SPADs) to superconducting nanowire single-photon detectors (SNSPDs). Versatile interfaces, including USB 3.0 and a dedicated FPGA interface, ensure seamless data transfer and easy integration into existing experimental setups. For distributed and synchronized systems, White Rabbit compatibility enables precise cross-device timing coordination.

Engineered for speed and efficiency, the HydraHarp 500 combines ultrashort per-channel dead time with industry-leading timing performance, ensuring complete datasets and excellent statistical accuracy even under demanding experimental conditions.

Looking ahead, PicoQuant is preparing to expand the HydraHarp family with the upcoming HydraHarp 500 L. This new variant will set new standards for data throughput and scalability. With outstanding timing resolution, excellent timing precision and up to 64 flexible channels, the HydraHarp 500 L is engineered for the highest-throughput applications and is powered – for the first time – by USB 3.2 Gen 2×2, making it ideal for rapid, large-volume data acquisition.

With the HydraHarp 500 and the forthcoming HydraHarp 500 L, PicoQuant continues to redefine what is possible in photon timing, delivering precision, scalability and flexibility for today’s and tomorrow’s photonics research. For more information, visit www.picoquant.com or contact us at info@picoquant.com.

  • Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.

 

The post Photonics West explores the future of optical technologies appeared first on Physics World.


Mission to Mars: from biological barriers to ethical impediments

“It’s hard to say when exactly sending people to Mars became a goal for humanity,” ponders author Scott Solomon in his new book Becoming Martian: How Living in Space Will Change Our Bodies and Minds – and I think we’d all agree. Ten years ago, I’m not sure any of us thought even returning to the Moon was seriously on the cards. Yet here we are, suddenly living in a second space age, where the first people to purchase one-way tickets to the Red Planet have likely already been born.

The technology required to ship humans to Mars, and the infrastructure required to keep them alive, is well constrained, at least in theory. One could write thousands of words discussing the technical details of reusable rocket boosters and underground architectures. However, Becoming Martian is not that book. Instead, it deals with the effect Martian life will have on the human body – both in the short term, across a single lifetime, and in the long term, on evolutionary timescales.

This book’s strength lies in its authorship: it is not written by a physicist enthralled by the engineering challenge of Mars, nor by an astronomer predisposed to romanticizing space exploration. Instead, Solomon is a research biologist who teaches ecology, evolutionary biology and scientific communication at Rice University in Houston, Texas.

Becoming Martian starts with a whirlwind, stripped-down tour of Mars across mythology, astronomy, culture and modern exploration. This effectively sets out the core issue: Mars is fundamentally different from Earth, and life there is going to be very difficult. Solomon goes on to describe the effects of space travel and microgravity on humans that we know of so far: anaemia, muscle wastage, bone density loss and increased radiation exposure, to name just a few.

Where the book really excels, though, is when Solomon uses his understanding of evolutionary processes to extend these findings and conclude how Martian life would be different. For example, childbirth becomes a very risky business on a planet with about one-third of Earth’s gravity. The loss of bone density translates into increased pelvic fractures, and the muscle wastage into an inability for the uterus to contract strongly enough. The result? All Martian births will likely need to be C-sections.

Solomon applies his expertise to the whole human body, including our “entourage” of micro-organisms. The indoor life of a Martian is likely to affect the immune system to the degree that contact with an Earthling would be immensely risky. “More than any other factor, the risk of disease transmission may be the wedge that drives the separation between people on the two planets,” he writes. “It will, perhaps inevitably, cause the people on Mars to truly become Martians.” Since many diseases are harboured or spread by animals, there is a compelling argument that Martians would be vegan and – a dealbreaker for some I imagine – unable to have any pets. So no dogs, no cats, no steak and chips on Mars.

Let’s get physical

The most fascinating part of the book for me is how Solomon repeatedly links the biological and psychological research with the more technical aspects of designing a mission to Mars. For example, the first exploratory teams should have an odd number of members, to make decisions easier and us-versus-them rifts less likely. The first colonies will also need to number between 10,000 and 11,000 individuals to ensure enough genetic diversity to protect against evolutionary hazards such as genetic drift and population crashes.

Amusingly, the one part of human activity most important for a sustainable colony – procreation – is the most understudied. When a NASA scientist suggested that a colony would need private spaces with soundproof walls, the backlash was so severe that NASA had to reassure Congress that taxpayer dollars were not being “wasted” encouraging sexual activity among astronauts.

Solomon’s writing is concise yet extraordinarily thorough – there is always just enough for you to feel you can understand the importance and nuance of topics ranging from Apollo-era health studies to evolution, and from AI to genetic engineering. The book is impeccably researched, and he presents conflicting ethical viewpoints so deftly, and without apparent judgement, that you are left with plenty of space to imprint your own opinions. So much so that when Solomon shares his own stance on the colonization of Mars in the epilogue, it comes as a bit of a surprise.

In essence, this book lays out a convincing argument that it might be our biology, not our technology, that limits humanity’s expansion to Mars. And if we are able to overcome those limitations, either with purposeful genetic engineering or passive evolutionary change, this could mean we have shed our humanity.

Becoming Martian is one of the best popular-science books I have read within the field, and it is an uplifting read, despite dealing with some of the heaviest ethical questions in space sciences. Whether you’re planning your future as a Martian or just wondering if humans can have sex in space, this book should be on your wish list.

  • February 2026 MIT Press 264pp £27hb

The post Mission to Mars: from biological barriers to ethical impediments appeared first on Physics World.


Solar storms could be forecast by monitoring cosmic rays

Using incidental data collected by the BepiColombo mission, an international research team has made the first detailed measurements of how coronal mass ejections (CMEs) reduce cosmic-ray intensity at varying distances from the Sun. Led by Gaku Kinoshita at the University of Tokyo, the team hopes that their approach could help improve the accuracy of space weather forecasts following CMEs.

CMEs are dramatic bursts of plasma originating from the Sun’s outer atmosphere. In particularly violent events, this plasma can travel through interplanetary space, sometimes interacting with Earth’s magnetic field to produce powerful geomagnetic storms. These storms result in vivid aurorae in Earth’s polar regions and can also damage electronics on satellites and spacecraft. Extreme storms can even affect electrical grids on Earth.

To prevent such damage, astronomers aim to predict the path and intensity of CME plasma as accurately as possible – allowing endangered systems to be temporarily shut down with minimal disruption. According to Kinoshita’s team, one source of information has so far been largely unexplored.

Pushing back cosmic rays

Within interplanetary space, CME plasma interacts with cosmic rays, which are energetic charged particles of extrasolar origin that permeate the solar system with a roughly steady flux. When an interplanetary CME (ICME) passes by, it temporarily pushes back these cosmic rays, creating a local decrease in their intensity.

“This phenomenon is known as the Forbush decrease effect,” Kinoshita explains. “It can be detected even with relatively simple particle detectors, and reflects the properties and structure of the passing ICME.”

In principle, cosmic-ray observations can provide detailed insights into the physical profile of a passing ICME. But despite their relative ease of detection, Forbush decreases had not yet been observed simultaneously by detectors at multiple distances from the Sun, leaving astronomers unclear on how propagation distance affects their severity.

Now, Kinoshita’s team have explored this spatial relationship using BepiColombo, a European and Japanese mission that will begin orbiting Mercury in November 2026. While the mission focuses on Mercury’s surface, interior, and magnetosphere, it also carries non-scientific equipment capable of monitoring cosmic rays and solar plasma in its surrounding environment.

“Such radiation monitoring instruments are commonly installed on many spacecraft for engineering purposes,” Kinoshita explains. “We developed a method to observe Forbush decreases using a non-scientific radiation monitor onboard BepiColombo.”

Multiple missions

The team combined these measurements with data from specialized radiation-monitoring missions, including ESA’s Solar Orbiter, which is currently probing the inner heliosphere from inside Mercury’s orbit, as well as a network of near-Earth spacecraft. Together, these instruments allowed the researchers to build a detailed, distance-dependent profile of a week-long ICME that occurred in March 2022.

Just as predicted, the measurements revealed a clear relationship between the Forbush decrease effect and distance from the Sun.

“As the ICME evolved, the depth and gradient of its associated cosmic-ray decrease changed accordingly,” Kinoshita says.

With this method now established, the team hopes it can be applied to non-scientific radiation monitors on other missions throughout the solar system, enabling a more complete picture of the distance dependence of ICME effects.

“An improved understanding of ICME propagation processes could contribute to better forecasting of disturbances such as geomagnetic storms, leading to further advances in space weather prediction,” Kinoshita says. In particular, this approach could help astronomers model the paths and intensities of solar plasma as soon as a CME erupts, improving preparedness for potentially damaging events.

The research is described in The Astrophysical Journal.

The post Solar storms could be forecast by monitoring cosmic rays appeared first on Physics World.


CERN team solves decades-old mystery of light nuclei formation

When particle colliders smash particles into each other, the resulting debris cloud sometimes contains a puzzling ingredient: light atomic nuclei. Such nuclei have relatively low binding energies, and they would normally break down at temperatures far below those found in high-energy collisions. Somehow, though, their signature remains. This mystery has stumped physicists for decades, but researchers in the ALICE collaboration at CERN have now figured it out. Their experiments showed that light nuclei form via a process called resonance-decay formation – a result that could pave the way towards searches for physics beyond the Standard Model.

Baryon resonance

The ALICE team studied deuterons (a bound proton and neutron) and antideuterons (a bound antiproton and antineutron) that form in experiments at CERN’s Large Hadron Collider. Both deuterons and antideuterons are fragile, and their binding energies of 2.2 MeV would seemingly make it hard for them to form in collisions with energies that can exceed 100 MeV – 100 000 times hotter than the centre of the Sun.

The collaboration found that roughly 90% of the deuterons seen after such collisions form in a three-phase process. In the first phase, an initial collision creates a so-called baryon resonance, which is an excited state of a particle made of three quarks (such as a proton or neutron). This particle is called a Δ baryon and is highly unstable, so it rapidly decays into a pion and a nucleon (a proton or a neutron) during the second phase of the process. Then, in the third (and, crucially, much later) phase, the nucleon cools down to a point where its energy properties allow it to bind with another nucleon to form a deuteron.
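Written out schematically – a simplified sketch that suppresses electric charges and the other particles produced in the collision, rather than the full set of channels analysed by ALICE – the three-phase mechanism reads:

```latex
pp \;\rightarrow\; \Delta + X, \qquad
\Delta \;\rightarrow\; N + \pi, \qquad
N + N' \;\rightarrow\; d \quad (\text{later, once the nucleon has cooled})
```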

Smoking gun

Measuring such a complex process is not easy, especially as everything happens on a length scale of femtometres (10⁻¹⁵ m). To tease out the details, the collaboration performed precision measurements to correlate the momenta of the pions and deuterons. When they analysed the momentum difference between these particle pairs, they observed a peak in the data corresponding to the mass of the Δ baryon. This peak shows that the pion and the deuteron are kinematically linked because they share a common ancestor: the pion came from the same Δ decay that provided one of the deuteron’s nucleons.

Panos Christakoglou, a member of the ALICE collaboration based at the Netherlands’ Maastricht University, says the experiment is special because in contrast to most previous attempts, where results were interpreted in light of models or phenomenological assumptions, this technique is model-independent. He adds that the results of this study could be used to improve models of high energy proton-proton collisions in which light nuclei (and maybe hadrons more generally) are formed. Other possibilities include improving our interpretations of cosmic-ray studies that measure the fluxes of (anti)nuclei in the galaxy – a useful probe for astrophysical processes.

The hunt is on

Intriguingly, Christakoglou suggests that the team’s technique could also be used to search for indirect signs of dark matter. Many models predict that dark-matter candidates such as Weakly Interacting Massive Particles (WIMPs) will decay or annihilate in processes that also produce Standard Model particles, including (anti)deuterons. “If for example one measures the flux of (anti)nuclei in cosmic rays being above the ‘Standard Model based’ astrophysical background, then this excess could be attributed to new physics which might be connected to dark matter,” Christakoglou tells Physics World.

Michael Kachelriess, a physicist at the Norwegian University of Science and Technology in Trondheim, Norway, who was not involved in this research, says the debate over the correct formation mechanism for light nuclei (and antinuclei) has divided particle physicists for a long time. In his view, the data collected by the ALICE collaboration decisively resolves this debate by showing that light nuclei form in the late stages of a collision via the coalescence of nucleons. Kachelriess calls this a “great achievement” in itself, and adds that similar approaches could make it possible to address other questions, such as whether thermal plasmas form in proton-proton collisions as well as in collisions between heavy ions.

The post CERN team solves decades-old mystery of light nuclei formation appeared first on Physics World.


Anyon physics could explain coexistence of superconductivity and magnetism

New calculations by physicists in the US provide deeper insights into an exotic material in which superconductivity and magnetism can coexist. Using a specialized effective field theory, Zhengyan Shi and Todadri Senthil at the Massachusetts Institute of Technology show how this coexistence can emerge from the collective states of mobile anyons in certain 2D materials.

An anyon is a quasiparticle with statistical properties that lie somewhere between those of bosons and fermions. First observed in 2D electron gases in strong magnetic fields, anyons are known for their fractional electrical charge and fractional exchange statistics, which alter the quantum state of two identical anyons when they are exchanged for each other.
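The defining property can be written in one line: exchanging two identical quasiparticles multiplies their joint wavefunction by a phase factor, and for anyons that phase lies between the two values allowed in three dimensions. This is the standard textbook statement for Abelian anyons, quoted here for orientation:

```latex
\Psi(r_2, r_1) = e^{i\theta}\,\Psi(r_1, r_2), \qquad
\theta = 0 \ \text{(bosons)}, \quad \theta = \pi \ \text{(fermions)}, \quad
0 < \theta < \pi \ \text{(anyons)}
```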

Unlike ordinary electrons, anyons produced in these early experiments could not move freely, preventing them from forming complex collective states. Yet in 2023, experiments with a twisted bilayer of molybdenum ditelluride provided the first evidence for mobile anyons through observations of fractional quantum anomalous Hall (FQAH) insulators. This effect appears as fractionally quantized electrical resistance in 2D electron systems at zero applied magnetic field.

Remarkably, these experiments revealed that molybdenum ditelluride can exhibit superconductivity and magnetism at the same time. Since superconductivity usually relies on electron pairing that can be disrupted by magnetism, this coexistence was previously thought impossible.

Anyonic quantum matter

“This then raises a new set of theoretical questions,” explains Shi. “What happens when a large number of mobile anyons are assembled together? What kind of novel ‘anyonic quantum matter’ can emerge?”

In their study, Shi and Senthil explored these questions using a new effective field theory for an FQAH insulator. Effective field theories are widely used in physics to approximate complex phenomena without modelling every microscopic detail. In this case, the duo’s model captured the competition between anyon mobility, interactions, and fractional exchange statistics in a many-body system of mobile anyons.

To test their model, the researchers considered the doping of an FQAH insulator – adding mobile anyons beyond the plateau in Hall resistance, where the existing anyons were effectively locked in place. This allowed the quasiparticles to move freely and form new collective phases.

“Crucially, we recognized that the fate of the doped state depends on the energetic hierarchy of different types of anyons,” Shi explains. “This observation allowed us to develop a powerful heuristic for predicting whether the doped state becomes a superconductor without any detailed calculations.”

In their model, Shi and Senthil focused on a specific FQAH insulator called a Jain state, which hosts two types of anyon excitations: one carries an electrical charge of 1/3 that of an electron, the other 2/3. In a perfectly clean system, doping the insulator with 2/3-charge anyons produced a chiral topological superconductor, a phase that is robust against disorder and features edge currents flowing in only one direction. In contrast, doping with 1/3-charge anyons produced a metal with broken translation symmetry – still conducting, but with non-uniform patterns in its electron density.

Anomalous vortex glass

“In the presence of impurities, we showed that the chiral superconductor near the superconductor–insulator transition is a novel phase of matter dubbed the ‘anomalous vortex glass’, in which patches of swirling supercurrents are sprinkled randomly across the sample,” Shi describes. “Observing this vortex glass phase would be smoking-gun evidence for the anyonic mechanism for superconductivity.”

The results suggest that even when adding the simplest kind of anyons – like those in the Jain state – the collective behaviour of these quasiparticles can enable the coexistence of magnetism and superconductivity. In future studies, the duo hopes that more advanced methods for introducing mobile anyons could reveal even more exotic phases.

“Remarkably, our theory provides a qualitative account of the phase diagram of a particular 2D material (twisted molybdenum ditelluride), although many more tests are needed to rule out other possible explanations,” Shi says. “Overall, these findings highlight the vast potential of anyonic quantum matter, suggesting a fertile ground for future discoveries.”

The research is described in PNAS.

The post Anyon physics could explain coexistence of superconductivity and magnetism appeared first on Physics World.


Can entrepreneurship be taught? An engineer’s viewpoint

I am intrigued by entrepreneurship. Is it something we all innately possess – or can entrepreneurship be taught to anyone (myself included) for whom it doesn’t come naturally? Could we all – with enough time, training and support – become the next Jeff Bezos, Richard Branson or Martha Lane Fox?

In my professional life as an engineer in industry, we often talk about the importance of invention and innovation. Without them, products will become dated and firms will lose their competitive edge. However, inventions don’t necessarily sell themselves, which is where entrepreneurs have a key influence.

So what’s the difference between inventors, innovators and entrepreneurs? An inventor, to me, is someone who creates a new process, application or machine. An innovator is a person who introduces something new or does something for the first time. An entrepreneur, however, is someone who sets up a business or takes on a venture, embracing financial risks with the aim of profit.

Scientists and engineers are naturally good inventors and innovators. We like to solve problems, improve how we do things, and make the world more ordered and efficient. In fact, many of the greatest inventors and innovators of all time were scientists and engineers – think James Watt, George Stephenson and Frank Whittle.

But entrepreneurship requires different, additional qualities. Many entrepreneurs come from a variety of backgrounds – not just science and engineering – and tend to have finance in their blood. They embrace risk and have unlimited amounts of courage and business acumen – skills I’d need to pick up if I wanted to be an entrepreneur myself.

Risk and reward

Engineers are encouraged to take risks, exploring new technologies and designs; in fact, it’s critical for companies seeking to stay competitive. But we take risks in a calculated and professional manner that prioritizes safety, quality, regulations and ethics, and project success. We balance risk taking with risk management, spotting and assessing potential risks – and mitigating or removing them if they’re big.

Courage is not something I’ve always had professionally. Over time, I have learned to speak up if I feel I have something to say that’s important to the situation or contributes to our overall understanding. Still, there’s always a fear of saying something silly in front of other people or being unable to articulate a view adequately. But entrepreneurs have courage in their DNA.

So can entrepreneurship be taught? Specifically, can it be taught to people like me with a technical background – and, if so, how? Some of the most famous innovators, like Henry Ford, Thomas Edison, Steve Jobs, James Dyson and Benjamin Franklin, had scientific or engineering backgrounds, so is there a formula for making more people like them?

Skill sets and gaps

Let’s start by listing the skills that most engineers have that could be beneficial for entrepreneurship. In no particular order, these include:

  • problem-solving ability: essential for designing effective solutions and identifying market gaps;
  • innovative mindset: critical for building a successful business venture;
  • analytical thinking: engineers make decisions based on data and logic, which is vital for business planning and decision making;
  • persistence: a pre-requisite for delivering engineering projects and needed to overcome the challenges of starting a business;
  • technical expertise: a significant competitive advantage that provides credibility, especially relevant for tech start-ups.

However, there are mindset differences between engineers and entrepreneurs that any training would need to overcome. These include:

  • risk tolerance: engineers typically focus on improving reliability and reducing risk, whilst entrepreneurs are more comfortable with embracing greater uncertainty;
  • focus: engineers concentrate on delivering to requirements, whilst entrepreneurs focus on consumer needs and speed to market;
  • business acumen: a typical engineering education doesn’t cover essential business skills such as marketing, sales and finance, all of which are vital for running a company.

Such skills may not always come naturally to engineers and scientists, but they can be incorporated into our teaching and learning. Some great examples of how to do this were covered in Physics World last year. In addition, there is a growing number of UK universities offering science and engineering degrees combined with entrepreneurship.

The message is that whilst some scientists and engineers become entrepreneurs, not all do. Simply having a science or engineering background is no guarantee of becoming an entrepreneur, nor is it a requirement. Nevertheless, the problem-solving and technical skills developed by scientists and engineers are powerful assets that, when combined with business acumen and entrepreneurial drive, can lead to business success.

Of course, entrepreneurship may not suit everybody – and that’s perfectly fine. No-one should be forced to become an entrepreneur if they don’t want to. We all need to play to our core strengths and interests and build well-rounded teams with complementary skillsets – something that every successful business needs. But surely there’s a way of teaching entrepreneurship too?

The post Can entrepreneurship be taught? An engineer’s viewpoint appeared first on Physics World.


Shapiro steps spotted in ultracold bosonic and fermionic gases

Shapiro steps – a series of abrupt jumps in the voltage–current characteristic of a Josephson junction that is exposed to microwave radiation – have been observed for the first time in ultracold gases by groups in Germany and Italy. Their work on atomic Josephson junctions provides new insights into the phenomenon, and could lead to a standard for chemical potential.

In 1962 Brian Josephson of the University of Cambridge calculated that, if two superconductors were separated by a thin insulating barrier, the phase difference between the wavefunctions on either side should induce quantum tunnelling, leading to a current at zero potential difference.

A year later, Sidney Shapiro and colleagues at the consultants Arthur D. Little showed that inducing an alternating electric current using a microwave field causes the phase of the wavefunction on either side of a Josephson junction to evolve at different rates, leading to quantized increases in potential difference across the junction. The height of these “Shapiro steps” depends only on the applied frequency of the field and the electrical charge. This is now used as a reference standard for the volt.
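For reference, the step heights follow the standard Josephson relation: under a microwave drive of frequency f, the nth Shapiro step sits at a voltage

```latex
V_n = \frac{n\,h f}{2e}
```

which works out to about 21 μV per step for a 10 GHz drive – so measuring the frequency, which can be done extremely precisely, pins down the voltage in terms of fundamental constants. This is the basis of the Josephson voltage standard mentioned above.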

Researchers have subsequently developed analogues of Josephson junctions in other systems such as liquid helium and ultracold atomic gases. In the new work, two groups have independently observed Shapiro steps in ultracold quantum gases. Instead of placing a fixed insulator in the centre and driving the system with a field, the researchers used focused laser beams to create potential barriers that divided the traps into two. Then they moved the positions of the barriers to alter the potentials of the atoms on either side.

Current emulation

“If we move the atoms with a constant velocity, that means there’s a constant velocity of atoms through the barrier,” says Herwig Ott of RPTU University Kaiserslautern-Landau in Germany, who led one of the groups. “This is how we emulate a DC current. Now for the Shapiro protocol you have to apply an AC current, and the AC current you simply get by modulating your barrier in time.”

Ott and colleagues in Kaiserslautern, in collaboration with researchers in Hamburg and the United Arab Emirates (UAE), used a Bose–Einstein condensate (BEC) of rubidium-87 atoms. Meanwhile in Italy, Giulia Del Pace of the European Laboratory for Nonlinear Spectroscopy at the University of Florence and colleagues (including the same UAE collaborators) studied ultracold lithium-6 atoms, which are fermions.

Both groups observed the theoretically-predicted Shapiro steps, but Ott and Del Pace explain that these observations do not simply confirm predictions. “The message is that no matter what your microscopic mechanism is, the phenomenon of Shapiro steps is universal,” says Ott. In superconductors, the Shapiro steps are caused by the breaking of Cooper pairs; in ultracold atomic gases, vortex rings are created. Nevertheless, the same mathematics applies. “This is really quite remarkable,” says Ott.

Del Pace says it was unclear whether Shapiro steps would be seen in strongly-interacting fermions, which are “way more interacting than the electrons in superconductors”. She asks, “Is it a limitation to have strong interactions or is it something that actually helps the dynamics to happen? It turns out it’s the latter.”

Magnetic tuning

Del Pace’s group applied a variable magnetic field to tune their system between a BEC of molecules, a system dominated by Cooper pairs, and a unitary Fermi gas in which the particles were as strongly interacting as permitted by quantum mechanics. The size of the Shapiro steps was dependent on the strength of the interparticle interaction.

Ott and Del Pace both suggest that this effect could be used to create a reference standard for chemical potential – a measure of the strength of the atomic interaction (or equation of state) in a system.

“This equation of state is very well known for a BEC or for a strongly interacting Fermi gas…but there is a range of interaction strengths where the equation of state is completely unknown, so one can imagine taking inspiration from the way Josephson junctions are used in superconductors and using atomic Josephson junctions to study the equation of state in systems where the equation of state is not known,” explains Del Pace.

The two papers are published side by side in Science: Del Pace and Ott.

Rocío Jáuregui Renaud of the National Autonomous University of Mexico is impressed, especially by the demonstration in both bosons and fermions. “The two papers are important, and they are congruent in their results, but the platform is different,” she says. “At this point, the idea is not to give more information directly about superconductivity, but to learn more about phenomena that sometimes you are not able to see in electronic systems but you would probably see in neutral atoms.”

The post Shapiro steps spotted in ultracold bosonic and fermionic gases appeared first on Physics World.


Watching how grasshoppers glide inspires new flying robot design

While much insight has been gleaned from how grasshoppers hop, their gliding prowess has mostly been overlooked. Now researchers at Princeton University have studied how these gangly insects deploy and retract their wings to inspire a new approach to flying robots.

Insect-inspired robot designs are often based on bees and flies. They feature constant flapping motion, yet that requires a lot of power, so the robots either carry heavy batteries or are tethered to a power supply.

Grasshoppers, however, are able to jump and glide as well as flap their wings. And while they are not the best gliders among insects, they have another trick: they are able to retract and unfurl their wings.

Grasshoppers have two sets of wings, the forewings and hindwings. The front wing is mainly used for protection and camouflage while the hindwing is used for flight. The hindwing is corrugated, which allows it to fold in neatly like an accordion.

A team of engineers, biologists and entomologists analysed the wings of the American grasshopper – also known as the bird grasshopper thanks to its superior flying skills. They took CT scans of the insects and then used the findings to 3D-print model wings. They attached these wings to small frames to create grasshopper-inspired gliders, finding that their performance was on par with that of actual grasshoppers.

The team also tweaked certain wing features such as the shape, camber and corrugation, finding that a smooth wing produced gliding that was more efficient and repeatable than one with corrugations. “This showed us that these corrugations might have evolved for other reasons,” notes Princeton engineer Aimy Wissa, who adds that “very little” is known about how grasshoppers deploy their wings.

The researchers say that further work could result in new ways to extend the flight time for insect-sized robots without the need for heavy batteries or tethering. “This grasshopper research opens up new possibilities not only for flight, but also for multimodal locomotion,” adds Lee. “By combining biology with engineering, we’re able to build and ideate on something completely new.”

The post Watching how grasshoppers glide inspires new flying robot design appeared first on Physics World.


Cracking the limits of clocks: a new uncertainty relation for time itself

What if a chemical reaction, ocean waves or even your heartbeat could all be used as clocks? That’s the starting point of a new study by Kacper Prech, Gabriel Landi and collaborators, who uncovered a fundamental, universal limit to how precisely time can be measured in noisy, fluctuating systems. Their discovery – the clock uncertainty relation (CUR) – doesn’t just refine existing theory, it reframes timekeeping as an information problem embedded in the dynamics of physical processes, from nanoscale biology to engineered devices.

The foundation of this work contains a simple but powerful reframing: anything that “clicks” regularly is a clock. In the research paper’s opening analogy, a castaway tries to cook a fish without a wristwatch. They could count bird calls, ocean waves, or heartbeats – each a potential timekeeper with different cadence and regularity. But questions remain: given real-world fluctuations, what’s the best way to estimate time, and what are the inescapable limits?

The authors answer both. They show that for a huge class of systems – those described by classical, Markovian jump processes (systems where the future depends only on the present state, not on the past history – a standard model across statistical physics and biophysics) – there is a tight, achievable bound on timekeeping precision. The bound is controlled not by how often the system jumps on average (the traditional “dynamical activity”), but by a subtler quantity: the mean residual time, or the average time you’d wait for the next event if you started observing at a random moment. That distinction matters.

The inspection paradox
The graphic illustrates the mean residual time used in the CUR and how it connects to the so-called inspection paradox – a counterintuitive bias where randomly arriving observers are more likely to land in longer gaps between events. Buses arrive in clusters (gaps of 5 min) separated by long intervals (15 min), so while the average time between buses might seem moderate, a randomly arriving passenger (represented by the coloured figures) is statistically more likely to land in one of the long 15-min gaps than in a short 5-min one. The mean residual time is the average time a passenger waits for their bus if they arrive at the bus stop at a random time. Counterintuitively, this can be much longer than the average time between buses. The visual also demonstrates why the mean residual time captures more information than the simple average interval, since it accounts for the uneven distribution of gaps that biases your real waiting experience. (Courtesy: IOP Publishing)
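
To put numbers on the bus-stop example in the caption, here is a minimal Python sketch: the 5- and 15-minute gaps come from the caption, while the alternating timetable and sample sizes are purely illustrative. It compares the naive guess – half the average gap – with the mean residual time, obtained both from the standard renewal-theory expression E[T²]/(2E[T]) and by dropping simulated passengers into the timetable at random.

```python
import random
from bisect import bisect_right
from itertools import accumulate

# Gap pattern from the bus-stop example: short 5-minute gaps alternating with long 15-minute ones
gaps = [5, 15]

mean_gap = sum(gaps) / len(gaps)                              # 10 min
naive_wait = mean_gap / 2                                     # 5 min: the guess that ignores length-biasing
mean_residual = sum(g * g for g in gaps) / (2 * sum(gaps))    # E[T^2] / (2 E[T]) = 6.25 min

# Monte Carlo check: drop passengers at random times into a long alternating schedule
random.seed(0)
schedule = gaps * 100_000                                     # ..., 5, 15, 5, 15, ...
arrivals = list(accumulate(schedule))                         # cumulative bus arrival times
total = arrivals[-1]

waits = []
for _ in range(100_000):
    t = random.random() * total                               # uniform arrival time in [0, total)
    next_bus = arrivals[bisect_right(arrivals, t)]            # first bus strictly after t
    waits.append(next_bus - t)

print(f"average gap between buses      : {mean_gap:.2f} min")
print(f"naive wait (half the gap)      : {naive_wait:.2f} min")
print(f"mean residual time (formula)   : {mean_residual:.2f} min")
print(f"mean residual time (simulated) : {sum(waits) / len(waits):.2f} min")
```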

The study introduces the CUR, a universal, tight bound on timekeeping precision that – unlike earlier bounds – can be saturated, and the researchers identify the exact observables that achieve this limit. Surprisingly, the optimal strategy for estimating time from a noisy process is remarkably simple: sum the expected waiting times of each observed state along the trajectory, rather than relying on complex fitting methods. The work also reveals that the true limiting factor for precision isn’t the traditional dynamical activity, but rather the inverse of the mean residual time. This makes the CUR provably tighter than the earlier kinetic uncertainty relation, especially in systems far from equilibrium.
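
The prescription “sum the expected waiting times of each observed state” is easy to try on a toy model. The sketch below uses a hypothetical two-state Markov jump process with invented rates (not a system from the paper): a Gillespie simulation generates a trajectory, the elapsed time is estimated by adding up the mean dwell time of every state visited, and the spread of that estimate around the true elapsed time is what limits the clock’s precision.

```python
import random

random.seed(1)

# Hypothetical two-state Markov jump process (the rates are invented for illustration)
r_exit = {0: 2.0, 1: 0.5}                              # exit rate of each state (per unit time)
mean_wait = {s: 1.0 / r for s, r in r_exit.items()}    # expected dwell time in each state

def run_trajectory(n_jumps=200):
    """Gillespie simulation: return (true elapsed time, estimated elapsed time)."""
    state, t_true, t_est = 0, 0.0, 0.0
    for _ in range(n_jumps):
        t_true += random.expovariate(r_exit[state])    # actual (random) dwell time
        t_est += mean_wait[state]                      # estimator: add the *expected* dwell time
        state = 1 - state                              # jump to the other state
    return t_true, t_est

errors = []
for _ in range(2000):
    t_true, t_est = run_trajectory()
    errors.append(t_est - t_true)

mean_err = sum(errors) / len(errors)
rms_err = (sum(e * e for e in errors) / len(errors)) ** 0.5
print(f"mean error of the time estimate : {mean_err:+.3f} (unbiased, up to sampling noise)")
print(f"rms spread of the estimate      : {rms_err:.3f} (this spread limits the clock precision)")
```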

The team also connects precision to two practical clock metrics: resolution (how often a clock ticks) and accuracy (how many ticks it delivers before drifting by one tick). The upshot is a trade-off: achieving steadier ticks comes at the cost of accepting fewer of them per unit of time.

This framework offers practical tools across several domains. It can serve as a diagnostic for detecting hidden states in complex biological or chemical systems: if measured event statistics violate the CUR, that signals the presence of hidden transitions or memory effects. For nanoscale and molecular clocks – like biomolecular oscillators (cellular circuits that produce rhythmic chemical signals) and molecular motors (protein machines that walk along cellular tracks) – the CUR sets fundamental performance limits and guides the design of optimal estimators. Finally, while this work focuses on classical systems, it establishes a benchmark for quantum clocks, pointing toward potential quantum advantages and opening new questions about what trade-offs emerge in the quantum regime.

Landi, an associate professor of theoretical quantum physics at the University of Rochester, emphasizes the conceptual shift: that clocks aren’t just pendulums and quartz crystals. “Anything is a clock,” he notes. The team’s framework “gives the recipe for constructing the best possible clock from whatever fluctuations you have,” and tells you “what the best noise-to-signal ratio” can be. In everyday terms, the Sun is accurate but low-resolution for cooking; ocean waves are higher resolution but noisier. The CUR puts that intuition on firm mathematical ground.

Looking forward, the group is exploring quantum generalizations and leveraging CUR violations to infer hidden structure in biological data. A tantalizing foundational question lingers: can robust biological timekeeping emerge from many bad, noisy clocks synchronizing into a good one?

Ultimately, this research doesn’t just sharpen a bound; it reframes timekeeping as a universal inference task grounded in the flow of events. Whether you’re a cell sensing a chemical signal, a molecular motor stepping along a track or an engineer building a nanoscale device, the message is clear: to tell time well, count cleverly – and respect the gaps.

The research is detailed in Physical Review X.

The post Cracking the limits of clocks: a new uncertainty relation for time itself appeared first on Physics World.

  •  

Bidirectional scattering microscope detects micro- and nanoscale structures simultaneously

A new microscope that can simultaneously measure both forward- and backward-scattered light from a sample could allow researchers to image both micro- and nanoscale objects at the same time. The device could be used to observe structures as small as individual proteins, as well as the environment in which they move, say the researchers at the University of Tokyo who developed it.

“Our technique could help us link cell structures with the motion of tiny particles inside and outside cells,” explains Kohki Horie of the University of Tokyo’s department of physics, who led this research effort. “Because it is label-free, it is gentler on cells and better for long observations. In the future, it could help quantify cell states, holding potential for drug testing and quality checks in the biotechnology and pharmaceutical industries.”

Detecting forward- and backward-scattered light at the same time

The new device combines two powerful imaging techniques routinely employed in biomedical applications: quantitative phase microscopy (QPM) and interferometric scattering (iSCAT).

QPM measures forward-scattered (FS) light – that is, light waves that travel in the same direction as before they were scattered. This technique is excellent at imaging structures in the Mie scattering region (greater than 100 nm, referred to as microscale in this study). This makes it ideal for visualizing complex structures such as biological cells. It falls short, however, when it comes to imaging structures in the Rayleigh scattering region (smaller than 100 nm, referred to as nanoscale in this study).

The second technique, iSCAT, detects backward-scattered (BS) light. This is light that’s reflected back towards the direction from which it came and which predominantly contains Rayleigh scattering. As such, iSCAT exhibits high sensitivity for detecting nanoscale objects. Indeed, the technique has recently been used to image single proteins, intracellular vesicles and viruses. It cannot, however, image microscale structures because of its limited ability to detect in the Mie scattering region.

The team’s new bidirectional quantitative scattering microscope (BiQSM) is able to detect both FS and BS light at the same time, thereby overcoming these previous limitations.

Cleanly separating the signals from FS and BS

The BiQSM system illuminates a sample through an objective lens from two opposite directions and detects both the FS and BS light using a single image sensor. The researchers use the spatial-frequency multiplexing method of off-axis digital holography to capture both images simultaneously. The biggest challenge, says Horie, was to cleanly separate the signals from FS and BS light in the images while keeping noise low and avoiding mixing between them.
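
To illustrate the general idea of spatial-frequency multiplexing – two interferograms sharing one camera frame and being pulled apart again in Fourier space – here is a rough numpy sketch. It is a generic off-axis holography demultiplexing recipe with invented sample fields, carrier frequencies and filter sizes, not the authors’ BiQSM processing pipeline.

```python
import numpy as np

N = 256
y, x = np.mgrid[0:N, 0:N]

# Two synthetic phase objects standing in for the FS and BS sample fields
field_fs = np.exp(1j * 0.5 * np.exp(-((x - 100)**2 + (y - 120)**2) / 800))
field_bs = np.exp(1j * 0.2 * np.exp(-((x - 160)**2 + (y - 140)**2) / 200))

# Tilted reference beams give each channel its own spatial carrier frequency
kx1, ky1 = 40, 0                      # carrier for the FS channel (invented)
kx2, ky2 = 0, 40                      # carrier for the BS channel (invented)
ref1 = np.exp(2j * np.pi * (kx1 * x + ky1 * y) / N)
ref2 = np.exp(2j * np.pi * (kx2 * x + ky2 * y) / N)

# Single intensity frame recorded by the camera: both interferograms added together
hologram = np.abs(field_fs + ref1)**2 + np.abs(field_bs + ref2)**2

def demodulate(holo, kx, ky, radius=15):
    """Select one sideband in Fourier space and shift it back to zero frequency,
    recovering the complex field (amplitude and phase) encoded on that carrier."""
    F = np.fft.fftshift(np.fft.fft2(holo))
    cy, cx = N // 2 + ky, N // 2 + kx
    mask = (x - cx)**2 + (y - cy)**2 < radius**2
    side = np.where(mask, F, 0)
    side = np.roll(side, (-ky, -kx), axis=(0, 1))      # move the carrier back to DC
    return np.fft.ifft2(np.fft.ifftshift(side))

recovered_fs = demodulate(hologram, kx1, ky1)
recovered_bs = demodulate(hologram, kx2, ky2)
print("FS channel phase range:", np.ptp(np.angle(recovered_fs)))
print("BS channel phase range:", np.ptp(np.angle(recovered_bs)))
```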

Horie and colleagues Keiichiro Toda, Takuma Nakamura and team leader Takuro Ideguchi tested their technique by imaging live cells. They were able to visualize micron-sized cell structures, including the nucleus, nucleoli and lipid droplets, as well as nanoscale particles. They compared the FS and BS results using the scattering-field amplitude (SA), defined as the ratio of the scattered wave’s amplitude to that of the incident illumination wave.

“SA characterizes the light scattered in both the forward and backward directions within a unified framework,” says Horie, “so allowing for a direct comparison between FS and BS light images.”

Spurred on by their findings, which are detailed in Nature Communications, the researchers say they now plan to study even smaller particles such as exosomes and viruses.

The post Bidirectional scattering microscope detects micro- and nanoscale structures simultaneously appeared first on Physics World.

  •  

Quantum information theory sheds light on quantum gravity

This episode of the Physics World Weekly podcast features Alex May, whose research explores the intersection of quantum gravity and quantum information theory. Based at Canada’s Perimeter Institute for Theoretical Physics, May explains how ideas being developed in the burgeoning field of quantum information theory could help solve one of the most enduring mysteries in physics – how to reconcile quantum mechanics with Einstein’s general theory of relativity, creating a viable theory of quantum gravity.

This interview was recorded in autumn 2025 when I had the pleasure of visiting the Perimeter Institute and speaking to four physicists about their research. This is the last of those conversations to appear on the podcast.

The first interview in this series from the Perimeter Institute was with Javier Toledo-Marín, “Quantum computing and AI join forces for particle physics”; the second was with Bianca Dittrich, “Quantum gravity: we explore spin foams and other potential solutions to this enduring challenge”; and the third was with Tim Hsieh, “Building a quantum future using topological phases of matter and error correction”.

This episode is supported by the APS Global Physics Summit, which takes place on 15–20 March 2026 in Denver, Colorado, and online.

The post Quantum information theory sheds light on quantum gravity appeared first on Physics World.

  •  

Chess960 still results in white having an advantage, finds study

Chess is a seemingly simple game, but one that hides incredible complexity. In the standard game, the starting positions of the pieces are fixed, so top players rely on memorizing a plethora of opening moves, which can sometimes result in boring, predictable games. It’s also the case that playing as white, and therefore going first, offers an advantage.

In the 1990s, former chess world champion Bobby Fischer proposed another way to play chess to encourage more creative play.

This form of the game – dubbed Chess960 – keeps the pawns in their usual positions but randomizes where the pieces at the back of the board – the knights, bishops, rooks, king and queen – are placed at the start, while keeping the rest of the rules the same. It is named after the 960 legal starting arrangements this shuffling produces, given that the bishops must sit on opposite-coloured squares and the king must stand between the rooks.

It was thought that, with so many possible set-ups, Chess960 could make the game fairer for both players. Yet research by physicist Marc Barthelemy at Paris-Saclay University suggests it’s not as simple as this.

Initial advantage

He used the open-source chess engine Stockfish to analyse each of the 960 starting positions, and developed a statistical method to measure decision-making complexity by calculating how much “information” a player needs to identify the best moves.
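
A back-of-the-envelope version of the engine-evaluation step can be assembled with the python-chess library and a local Stockfish binary. The sketch below is purely illustrative: the engine path, search depth and sampled positions are placeholders, the position numbering used by python-chess need not match the paper’s, and none of Barthelemy’s information-theoretic complexity analysis is reproduced here.

```python
import chess
import chess.engine

ENGINE_PATH = "stockfish"   # assumed path to a local Stockfish binary; adjust for your system

def scan_positions(numbers, depth=18):
    """Print Stockfish's evaluation of white's advantage (in centipawns)
    for a selection of Chess960 starting positions."""
    engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
    try:
        for n in numbers:
            board = chess.Board.from_chess960_pos(n)          # Scharnagl numbering, 0-959
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            cp = info["score"].white().score(mate_score=10_000)
            print(f"position {n:3d}: {cp:+d} centipawns for white")
    finally:
        engine.quit()

# 518 is the familiar rook-knight-bishop-queen-king-bishop-knight-rook set-up
# in python-chess's numbering; the other entries are arbitrary samples.
scan_positions([518, 0, 100, 959])
```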

He found that the standard game can be unfair, as the player with the black pieces, who moves second, constantly has to keep up with the moves of the player with white.

Yet regardless of the starting positions at the back, Barthelemy discovered that white still has an advantage in almost all – 99.6% – of the 960 positions. He also found that the standard set-up – rook, knight, bishop, queen, king, bishop, knight, rook – is nothing special: it is presumably a historical accident, perhaps favoured because the visually symmetrical arrangement is easy to remember.

“Standard chess, despite centuries of cultural evolution, does not occupy an exceptional location in this landscape: it exhibits a typical initial advantage and moderate total complexity, while displaying above-average asymmetry in decision difficulty,” writes Barthelemy.

For a fairer, more balanced match, Barthelemy suggests playing position #198, whose back rank runs queen, knight, bishop, rook, king, bishop, knight, rook.

The post Chess960 still results in white having an advantage, finds study appeared first on Physics World.

  •  

Tetraquark measurements could shed more light on the strong nuclear force

The Compact Muon Solenoid (CMS) Collaboration has made the first measurements of the quantum properties of a family of three “all-charm” tetraquarks recently discovered at the Large Hadron Collider (LHC) at CERN. The findings could shed more light on the properties of the strong nuclear force, which holds protons and neutrons together in nuclei, and could ultimately help us better understand how ordinary matter forms.

In recent years, the LHC has discovered dozens of massive particles called hadrons, which are made of quarks bound together by the strong force. Quarks come in six types: up, down, charm, strange, top and bottom. Most observed hadrons comprise two or three quarks (called mesons and baryons, respectively). Physicists have also observed exotic hadrons that comprise four or five quarks – the tetraquarks and pentaquarks, respectively. Those seen so far usually contain a charm quark and its antimatter counterpart (a charm antiquark), with the remaining two or three quarks being up, down or strange quarks, or their antiquarks.

Identifying and studying tetraquarks and pentaquarks helps physicists to better understand how the strong force binds quarks together. This force also binds protons and neutrons in atomic nuclei.

Physicists are still divided as to the nature of these exotic hadrons. Some models suggest that their quarks are tightly bound via the strong force, so making these hadrons compact objects. Others say that the quarks are only loosely bound. To confuse things further, there is evidence that in some exotic hadrons, the quarks might be both tightly and loosely bound at the same time.

Now, new findings from the CMS Collaboration suggest that tetraquarks are tightly bound, but they do not completely rule out other models.

Measuring quantum numbers

In their work, which is detailed in Nature, CMS physicists studied all-charm tetraquarks. These comprise two charm quarks and two charm antiquarks, and were produced by colliding protons at high energies at the LHC. Three states of this tetraquark have been identified at the LHC: X(6900), X(6600) and X(7100), where the numbers denote their approximate masses in millions of electron volts. The team measured the fundamental properties of these tetraquarks, including their quantum numbers: parity (P), charge conjugation (C) and total angular momentum, or spin (J). P determines whether a particle has the same properties as its spatial mirror image; C whether it has the same properties as its antiparticle; and J is the total angular momentum of the hadron. These numbers provide information on the internal structure of a tetraquark.

The researchers used a version of a well-known technique called angular analysis, which is similar to the technique used to characterize the Higgs boson. This approach focuses on the angles at which the decay products of the all-charm tetraquarks are scattered.

“We call this technique quantum state tomography,” explains CMS team member Chiara Mariotti of INFN Torino in Italy. “Here, we deduce the quantum state of an exotic state X from the analysis of its decay products. In particular, the angular distributions in the decay X → J/ψJ/ψ, followed by J/ψ decays into two muons, serve as analysers of polarization of two J/ψ particles,” she explains.

The researchers analysed all-charm tetraquarks produced at the CMS experiment between 2016 and 2018. They calculated that J is likely to be 2 and that P and C are both +1. This combination of properties is expressed as 2++.

Result favours tightly-bound quarks

“This result favours models in which all four quarks are tightly bound,” says particle physicist Timothy Gershon of the UK’s University of Warwick, who was not involved in this study. “However, the question is not completely put to bed. The sample size in the CMS analysis is not sufficient to exclude fully other possibilities, and additionally certain assumptions are made that will require further testing in future.”

Gershon adds, “These include assumptions that all three states have the same quantum numbers, and that all correspond to tetraquark decays to two J/ψ mesons with no additional particles not included in the reconstruction (for example there could be missing photons that have been radiated in the decay).”

Further studies with larger data samples are warranted, he adds. “Fortunately, CMS as well as both the LHCb and the ATLAS collaborations [at CERN] already have larger samples in hand, so we should not have to wait too long for updates.”

Indeed, the CMS Collaboration is now gathering more data and exploring additional decay modes of these exotic tetraquarks. “This will ultimately improve our understanding of how this matter forms, which, in turn, could help refine our theories of how ordinary matter comes into being,” Mariotti tells Physics World.

The post Tetraquark measurements could shed more light on the strong nuclear force appeared first on Physics World.

  •  

Reinforcement learning could help airborne wind energy take off

When people think of wind energy, they usually think of windmill-like turbines dotted among hills or lined up on offshore platforms. But there is also another kind of wind energy, one that replaces stationary, earthbound generators with tethered kites that harvest energy as they soar through the sky.

This airborne form of wind energy, or AWE, is not as well-developed as the terrestrial version, but in principle it has several advantages. Power-generating kites are much less massive than ground-based turbines, which reduces both their production costs and their impact on the landscape. They are also far easier to install in areas that lack well-developed road infrastructure. Finally, and perhaps most importantly, wind speeds are many times greater at high altitudes than they are near the ground, significantly enhancing the power densities available for kites to harvest.

There is, however, one major technical challenge for AWE, and it can be summed up in a single word: control. AWE technology is operationally more complex than conventional turbines, and the traditional method of controlling kites (known as model-predictive control) struggles to adapt to turbulent wind conditions. At best, this reduces the efficiency of energy generation. At worst, it makes it challenging to keep devices safe, stable and airborne.

In a paper published in EPL, Antonio Celani and his colleagues Lorenzo Basile and Maria Grazia Berni of the University of Trieste, Italy, and the Abdus Salam International Centre for Theoretical Physics (ICTP) propose an alternative control method based on reinforcement learning. In this form of machine learning, an agent learns to make decisions by interacting with its environment and receiving feedback in the form of “rewards” for good performance. This form of control, they say, should be better at adapting to the variable and uncertain conditions that power-generating kites encounter while airborne.
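
For readers unfamiliar with how reinforcement learning works in practice, the sketch below shows the bare bones of the idea in a deliberately crude setting: a tabular Q-learning agent picks from a handful of tether actions and is rewarded according to the energy gained or spent at each step. Every state, action, probability and reward here is an invented placeholder – the actual study uses far richer turbulent-flow simulations and control variables.

```python
import random
from collections import defaultdict

random.seed(0)

ACTIONS = ["reel_in", "hold", "reel_out"]        # crude tether controls (invented)
WIND_STATES = ["lull", "steady", "gust"]         # coarse wind observation (invented)

def step(wind, action):
    """Toy environment: reeling out line in strong wind 'generates' energy,
    reeling in costs energy, and reeling out in a lull wastes line."""
    reward = {"reel_in": -1.0,
              "hold": 0.0,
              "reel_out": {"lull": -0.5, "steady": 1.0, "gust": 3.0}[wind]}[action]
    next_wind = random.choices(WIND_STATES, weights=[0.2, 0.6, 0.2])[0]
    return next_wind, reward

# Tabular Q-learning: learn the value Q[(wind, action)] of each action in each wind state
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.95, 0.1           # learning rate, discount, exploration

wind = "steady"
for _ in range(50_000):
    if random.random() < epsilon:                                # explore occasionally
        action = random.choice(ACTIONS)
    else:                                                        # otherwise exploit
        action = max(ACTIONS, key=lambda a: Q[(wind, a)])
    next_wind, reward = step(wind, action)
    best_next = max(Q[(next_wind, a)] for a in ACTIONS)
    Q[(wind, action)] += alpha * (reward + gamma * best_next - Q[(wind, action)])
    wind = next_wind

for w in WIND_STATES:
    best = max(ACTIONS, key=lambda a: Q[(w, a)])
    print(f"learned action in '{w}' wind: {best}")
```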

What was your motivation for doing this work?

Our interest originated from some previous work where we studied a fascinating bird behaviour called thermal soaring. Many birds, from the humble seagull to birds of prey and frigatebirds, exploit atmospheric currents to rise in the sky without flapping their wings, and then glide or swoop down. They then repeat this cycle of ascent and descent for hours, or even for weeks if they are migratory birds. They’re able to do this because birds are very effective at extracting energy from the atmosphere to turn it into potential energy, even though the atmospheric flow is turbulent, hence very dynamic and unpredictable.

Antonio Celani. (Courtesy: Antonio Celani)

In those works, we showed that we could use reinforcement learning to train virtual birds and also real toy gliders to soar. That got us wondering whether this same approach could be exported to AWE.

When we started looking at the literature, we saw that in most cases, the goal was to control the kite to follow a predetermined path, irrespective of the changing wind conditions. These cases typically used only simple models of atmospheric flow, and almost invariably ignored turbulence.

This is very different from what we see in birds, which adapt their trajectories on the fly depending on the strength and direction of the fluctuating wind they experience. This led us to ask: can a reinforcement learning (RL) algorithm discover efficient, adaptive ways of controlling a kite in a turbulent environment to extract energy for human consumption?

What is the most important advance in the paper?

We offer a proof of principle that it is indeed possible to do this using a minimal set of sensor inputs and control variables, plus an appropriately designed reward/punishment structure that guides trial-and-error learning. The algorithm we deploy finds a way to manoeuvre the kite such that it generates net energy over one cycle of operation. Most importantly, this strategy autonomously adapts to the ever-fluctuating conditions induced by turbulence.

Lorenzo Basile. (Courtesy: Lorenzo Basile)

The main point of RL is that it can learn to control a system just by interacting with the environment, without requiring any a priori knowledge of the dynamical laws that rule its behaviour. This is extremely useful when the systems are very complex, like the turbulent atmosphere and the aerodynamics of a kite.

What are the barriers to implementing RL in real AWE kites, and how might these barriers be overcome?

The virtual environment that we use in our paper to train the kite controller is very simplified, and in general the gap between simulations and reality is wide. We therefore regard the present work mostly as a stimulus for the AWE community to look deeper into alternatives to model-predictive control, like RL.

On the physics side, we found that some phases of an AWE generating cycle are very difficult for our system to learn, and they require a painful fine-tuning of the reward structure. This is especially true when the kite is close to the ground, where winds are weaker and errors are the most punishing. In those cases, it might be a wise choice to use other heuristic, hard-wired control strategies rather than RL.

Finally, in a virtual environment like the one we used to do the RL training in this work, it is possible to perform many trials. In real power kites, this approach is not feasible – it would take too long. However, techniques like offline RL might resolve this issue by interleaving a few field experiments where data are collected with extensive off-line optimization of the strategy. We successfully used this approach in our previous work to train real gliders for soaring.

What do you plan to do next?

We would like to explore the use of offline RL to optimize energy production for a small, real AWE system. In our opinion, the application to low-power systems is particularly relevant in contexts where access to the power grid is limited or uncertain. A lightweight, easily portable device that can produce even small amounts of energy might make a big difference in the everyday life of remote, rural communities, and more generally in the global south.

The post Reinforcement learning could help airborne wind energy take off appeared first on Physics World.

  •  

Organic LED can electrically switch the handedness of emitted light

Circularly polarized (CP) light is encoded with information through its photon spin and can be utilized in applications such as low-power displays, encrypted communications and quantum technologies. Organic light emitting diodes (OLEDs) produce CP light with a left or right “handedness”, depending on the chirality of the light-emitting molecules used to create the device.

While OLEDs usually emit only left- or right-handed CP light, researchers have now developed OLEDs that can electrically switch between emitting left- and right-handed CP light – without needing different molecules for each handedness.

“We had recently identified an alternative mechanism for the emission of circularly polarized light in OLEDs, using our chiral polymer materials, which we called anomalous circularly polarized electroluminescence,” says lead author Matthew Fuchter from the University of Oxford. “We set about trying to better understand the interplay between this new mechanism and the generally established mechanism for circularly polarized emission in the same chiral materials”.

Light handedness controlled by molecular chirality

The CP light handedness of an organic emissive molecule is controlled by its chirality. A chiral molecule is one that cannot be superimposed on its mirror image; the two mirror-image forms are called enantiomers, and each will absorb, emit and refract CP light with a defined spin angular momentum. Each enantiomer produces CP light with a different handedness, through a mechanism called normal circularly polarized electroluminescence (NCPE).

OLED designs typically require access to both enantiomers, but most chemical synthesis processes produce racemic mixtures (equal amounts of the two enantiomers) that are difficult to separate. Extracting each enantiomer so that it can be used individually is complex and expensive, but the Oxford research simplifies matters by using a single enantiomer in a device that can switch between emitting left- and right-handed CP light.

The molecule in question is a helical molecule called (P)-aza[6]helicene, which is the right-handed enantiomer. Even though this is just a one-handed form, the researchers found a way to control the handedness of the OLED’s emission, enabling the device to switch between both forms of CP light.

Switching handedness without changing the structure

The researchers designed the helicene molecules so that the handedness of the light could be switched electrically, without needing to change the structure of the material itself. “Our work shows that either handedness can be accessed from a single-handed chiral material without changing the composition or thickness of the emissive layer,” says Fuchter. “From a practical standpoint, this approach could have advantages in future circularly polarized OLED technologies.”

Instead of making a structural change, the researchers changed the way that electric charges recombine in the device, using interlayers to alter the recombination position and the charge-carrier mobility. Depending on where the recombination zone sits, the charge transport is either balanced or unbalanced, which in turn determines the handedness of the CP light the device emits.

When the recombination zone is located in the centre of the emissive layer, the charge transport is balanced, which generates an NCPE mechanism. In these situations, the helicene adopts its normal handedness (right handedness).

However, when the recombination zone is located close to one of the transport layers, the charge transport becomes unbalanced, giving rise to a mechanism called anomalous circularly polarized electroluminescence (ACPE). The ACPE overrides the NCPE mechanism and inverts the handedness of the emitted light to left-handed by altering the balance of induced orbital angular momentum in electrons versus holes. The presence of these two electroluminescence mechanisms in the device enables it to be controlled electrically by tuning the charge-carrier mobility and the position of the recombination zone.

The research allows the creation of OLEDs with controllable spin angular momentum information using a single emissive enantiomer, while probing the fundamental physics of chiral optoelectronics. “This work contributes to the growing body of evidence suggesting further rich physics at the intersection of chirality, charge and spin. We have many ongoing projects to try and understand and exploit such interplay,” Fuchter concludes.

The researchers describe their findings in Nature Photonics.

The post Organic LED can electrically switch the handedness of emitted light appeared first on Physics World.

  •  

Francis Crick: a life of twists and turns

Physicist, molecular biologist, neuroscientist: Francis Crick’s scientific career took many turns. And now, he is the subject of zoologist Matthew Cobb’s new book, Crick: a Mind in Motion – from DNA to the Brain.

Born in 1916, Crick studied physics at University College London in the mid-1930s, before working for the Admiralty Research Laboratory during the Second World War. But after reading physicist Erwin Schrödinger’s 1944 book What Is Life? The Physical Aspect of the Living Cell, and a 1946 article on the structure of biological molecules by chemist Linus Pauling, Crick left his career in physics and switched to molecular biology in 1947.

Six years later, while working at the University of Cambridge, he played a key role in decoding the double-helix structure of DNA, working in collaboration with biologist James Watson, biophysicist Maurice Wilkins and other researchers including chemist and X-ray crystallographer Rosalind Franklin. Crick, alongside Watson and Wilkins, went on to receive the 1962 Nobel Prize in Physiology or Medicine for the discovery.

Finally, Crick’s career took one more turn in the mid-1970s. After experiencing a mental health crisis, Crick left Britain and moved to California. He took up neuroscience in an attempt to understand the roots of human consciousness, as discussed in his 1994 book, The Astonishing Hypothesis: the Scientific Search for the Soul.

Parallel lives

When he died in 2004, Crick’s office wall at the Salk Institute in La Jolla, US, carried portraits of Charles Darwin and Albert Einstein, as Cobb notes on the final page of his deeply researched and intellectually fascinating biography. But curiously, there is not a single other reference to Einstein in Cobb’s massive book. Furthermore, there is no reference at all to Einstein in the equally large 2009 biography of Crick, Francis Crick: Hunter of Life’s Secrets, by historian of science Robert Olby, who – unlike Cobb – knew Crick personally.

Nevertheless, a comparison of Crick and Einstein is illuminating. Crick’s family background (in the shoe industry), and his childhood and youth are in some ways reminiscent of Einstein’s. Both physicists came from provincial business families of limited financial success, with some interest in science yet little intellectual distinction. Both did moderately well at school and college, but were not academic stars. And both were exposed to established religion, but rejected it in their teens; they had little intrinsic respect for authority, without being open rebels until later in life.

The similarities continue into adulthood, with the two men following unconventional early scientific careers. Both of them were extroverts who loved to debate ideas with fellow scientists (at times devastatingly), although they were equally capable of long, solitary periods of concentration throughout their careers. In middle age, they migrated from their home countries – Germany (Einstein) and Britain (Crick) – to take up academic positions in the US, where they were much admired and inspiring to other scientists, but failed to match their earlier scientific achievements.

In their personal lives, both Crick and Einstein had a complicated history with women. Having divorced their first wives, they had a variety of extramarital affairs – as discussed by Cobb without revealing the names of these women – while remaining married to their second wives. Interestingly, Crick’s second wife, Odile Crick (whom he was married to for 55 years) was an artist, and drew the famous schematic drawing of the double helix published in Nature in 1953.

Stories of friendships

Although Cobb misses this fascinating comparison with Einstein, many other vivid stories light up his book. For example, he recounts Watson’s claim that just after their success with DNA in 1953, “Francis winged into the Eagle [their local pub in Cambridge] to tell everyone within hearing distance that we had found the secret of life” – a story that later appeared on a plaque outside the pub.

“Francis always denied he said anything of the sort,” notes Cobb, “and in 2016, at a celebration of the centenary of Crick’s birth, Watson publicly admitted that he had made it up for dramatic effect (a few years earlier, he had confessed as much to Kindra Crick, Francis’s granddaughter).” No wonder Watson’s much-read 1968 book The Double Helix caused a furious reaction from Crick and a temporary breakdown in their friendship, as Cobb dissects in excoriating detail.

Watson’s deprecatory comments on Franklin helped to provoke the current widespread belief that Crick and Watson succeeded by stealing Franklin’s data. After an extensive analysis of the available evidence, however, Cobb argues that the data was willingly shared with them by Franklin, but that they should have formally asked her permission to use it in their published work – “Ambition, or thoughtlessness, stayed their hand.”

In fact, it seems Crick and Franklin were friends in 1953, and remained so – with Franklin asking Crick for his advice on her draft scientific papers – until her premature death from ovarian cancer in 1958. Indeed, after her first surgery in 1956, Franklin went to stay with Crick and his wife at their house in Cambridge, and then returned to them after her second operation. There certainly appears to be no breakdown in trust between the two. When Crick was nominated for the Nobel prize in 1961, he openly stated, “The data which really helped us obtain the structure was mainly obtained by Rosalind Franklin.”

As for Crick’s later study of consciousness, Cobb comments, “It would be easy to dismiss Crick’s switch to studying the brain as the quixotic project of an ageing scientist who did not know his limits. After all, he did not make any decisive breakthrough in understanding the brain – nothing like the double helix… But then again, nobody else did, in Crick’s lifetime or since.” One is perhaps reminded once again of Einstein, and his preoccupation during later life with his unified field theory, which remains an open line of research today.

  • 2025 Profile Books £30.00hb 595pp

The post Francis Crick: a life of twists and turns appeared first on Physics World.

  •  

Physicists overcome ‘acoustic collapse’ to levitate multiple objects with sound

Sound waves can make small objects hover in the air, but applying this acoustic levitation technique to an array of objects is difficult because the objects tend to clump together. Physicists at the Institute of Science and Technology Austria (ISTA) have now overcome this problem thanks to hybrid structures that emerge from the interplay between attractive acoustic forces and repulsive electrostatic ones. By proving that it is possible to levitate many particles while keeping them separated, the finding could pave the way for advances in acoustic-levitation-assisted 3D printing, mid-air chemical synthesis and micro-robotics.

In acoustic levitation, particles ranging in size from tens of microns to millimetres are drawn up into the air and confined by an acoustic force. The origins of this force lie in the momentum that the applied acoustic field transfers to a particle as sound waves scatter off its surface. While the technique works well for single particles, multiple particles tend to aggregate into a single dense object in mid-air because the sound they scatter can, collectively, create an attractive interaction between them.

Keeping particles separated

Led by Scott Waitukaitis, the ISTA researchers found a way to avoid this so-called “acoustic collapse” by using a tuneable repulsive electrostatic force to counteract the attractive acoustic one. They began by levitating a single silver-coated poly(methyl methacrylate) (PMMA) microsphere 250‒300 µm in diameter above a reflector plate coated with a transparent and conductive layer of indium tin oxide (ITO). They then imbued the particle with a precisely controlled amount of electrical charge by letting it rest on the ITO plate with the acoustic field off, but with a high-voltage DC potential applied between the plate and a transducer. This produces a capacitive build-up of charge on the particle, and the amount of charge can be estimated from Maxwell’s solutions for two contacting conductive spheres (assuming, in the calculations, that the lower plate acts like a sphere with infinite radius).

The next step in the process is to switch on the acoustic field and, after just 10 ms, add the electric field to it. During the short period in which both fields are on, and provided the electric field is strong enough, either field is capable of launching the particle towards the centre of the levitation setup. The electric field is then switched off. A few seconds later, the particle levitates stably in the trap, with a charge given, in principle, by Maxwell’s approximations.

A visually mesmerizing dance of particles

This charging method works equally well for multiple particles, allowing the researchers to load particles into the trap with high efficiency and virtually any charge they want, limited only by the breakdown voltage of the surrounding air. Indeed, the physicists found they could tune the charge to levitate particles separately or collapse them into a single, dense object. They could even create hybrid states that mix separated and collapsed particles.

And that wasn’t all. According to team member Sue Shi, a PhD student at ISTA and the lead author of a paper in PNAS about the research, the most exciting moment came when they saw the compact parts of the hybrid structures spontaneously begin to rotate, while the expanded parts remained in one place while oscillating in response to the rotation. The result was “a visually mesmerizing dance,” Shi says, adding that “this is the first time that such acoustically and electrostatically coupled interactions have been observed in an acoustically levitated system.”

As well as having applications in areas such as materials science and micro-robotics, Shi says the technique developed in this work could be used to study non-reciprocal effects that lead to the particles rotating or oscillating. “This would pave the way for understanding more elusive and complex non-reciprocal forces and many-body interactions that likely influence the behaviours of our system,” Shi tells Physics World.

The post Physicists overcome ‘acoustic collapse’ to levitate multiple objects with sound appeared first on Physics World.

  •  

When heat moves sideways

Heat travels through a metal via the movement of electrons. In an insulator, however, there are no free charge carriers; instead, vibrations of the atomic lattice (phonons) carry heat from hot regions to cool regions in a straight path. In some materials, when a magnetic field is applied, the phonons begin to move sideways – a phenomenon known as the phonon Hall effect. Quantised collective excitations of the spin structure, called magnons, can do the same via the magnon Hall effect. A combined effect occurs when magnons and phonons interact strongly and travel sideways together, in the magnon–polaron Hall effect.

The transverse heat flow is usually attributed to a quantum mechanical property known as Berry curvature. Yet in some materials the effect is larger than Berry curvature alone can explain. In this research, an exceptionally large thermal Hall effect is recorded in MnPS₃, an insulating antiferromagnetic material with strong magnetoelastic coupling and a spin-flop transition. The thermal Hall angle remains large down to 4 K and cannot be accounted for by standard Berry-curvature-based models.

This work provides an in-depth analysis of the role of the spin-flop transition in MnPS₃’s thermal properties and highlights the need for new theoretical approaches to understand magnon–phonon coupling and scattering. Materials with large thermal Hall effects could be used to control heat in nanoscale devices such as thermal diodes and transistors.

Read the full article

Large thermal Hall effect in MnPS3

Mohamed Nawwar et al 2025 Rep. Prog. Phys. 88 080503

Do you want to learn more about this topic?

Quantum-Hall physics and three dimensions by Johannes Gooth, Stanislaw Galeski and Tobias Meng (2023)

The post When heat moves sideways appeared first on Physics World.

  •  

Symmetry‑preserving route to higher‑order insulators

Topological insulators are materials that are insulating in the bulk within the bandgap, yet exhibit conductive states on their surface at frequencies within that same bandgap. These surface states are topologically protected, meaning they cannot be easily disrupted by local perturbations. In general, a material of n dimensions can host (n-1)-dimensional topological boundary states. If the symmetry protecting these states is further broken, a bandgap can open between the (n-1)-dimensional states, enabling the emergence of (n-2)-dimensional topological states. For example, a 3D material can host 2D protected surface states, and breaking additional symmetry can create a bandgap between these surface states, allowing for protected 1D edge states. A material undergoing such a process is known as a higher-order topological insulator. In general, higher-order topological states appear in dimensions one lower than the parent topological phase, owing to the further reduction of unit-cell symmetry. This requires at least a 2D lattice for second-order states, with the maximal order in 3D systems being three.

The researchers here introduce a new method for repeatedly opening the bandgap between topological states and generating new states within those gaps in an unbounded manner – without breaking symmetries or reducing dimensions. Their approach creates hierarchical topological insulators by repositioning domain walls between different topological regions. This process opens bandgaps between original topological states while preserving symmetry, enabling the formation of new hierarchical states within the gaps. Using one‑ and two‑dimensional Su–Schrieffer–Heeger models, they show that this procedure can be repeated to generate multiple, even infinite, hierarchical levels of topological states, exhibiting fractal-like behavior reminiscent of a Matryoshka doll. These higher-level states are characterized by a generalized winding number that extends conventional topological classification and maintains bulk-edge correspondence across hierarchies.
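
For the ordinary, first-level Su–Schrieffer–Heeger chain mentioned above, the winding-number bookkeeping is short enough to compute directly. The numpy sketch below evaluates the standard winding number of the off-diagonal Bloch element h(k) = t1 + t2·e^(ik) for hypothetical intracell and intercell hoppings t1 and t2, giving 0 in the trivial phase and 1 in the topological phase; the generalized winding number that the authors define for their hierarchical states extends this idea but is not reproduced here.

```python
import numpy as np

def ssh_winding_number(t1, t2, nk=2001):
    """Winding number of h(k) = t1 + t2 * exp(i k) for the SSH chain.
    Counts how many times h(k) circles the origin as k sweeps the Brillouin zone."""
    k = np.linspace(-np.pi, np.pi, nk)
    h = t1 + t2 * np.exp(1j * k)
    phase = np.unwrap(np.angle(h))                  # continuous phase of h(k)
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

# Hypothetical hopping strengths
print("trivial phase     (t1 > t2):", ssh_winding_number(t1=1.0, t2=0.5))   # -> 0
print("topological phase (t1 < t2):", ssh_winding_number(t1=0.5, t2=1.0))   # -> 1
```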

The researchers confirm the existence of second‑ and third-level domain‑wall and edge states and demonstrate that these states remain robust against perturbations. Their approach is scalable to higher dimensions and applicable not only to quantum systems but also to classical waves such as phononics. This broadens the definition of topological insulators and provides a flexible way to design complex networks of protected states. Such networks could enable advances in electronics, photonics, and phonon‑based quantum information processing, as well as engineered structures for vibration control. The ability to design complex, robust, and tunable hierarchical topological states could lead to new types of waveguides, sensors, and quantum devices that are more fault-tolerant and programmable.

Read the full article

Hierarchical topological states without dimension reduction

Joel R Pyfrom et al 2025 Rep. Prog. Phys. 88 118003

Do you want to learn more about this topic?

Interacting topological insulators: a review by Stephan Rachel (2018)

The post Symmetry‑preserving route to higher‑order insulators appeared first on Physics World.

  •  

New hybrid state of matter is a mix of solid and liquid

The boundary between a substance’s liquid and solid phases may not be as clear-cut as previously believed. A new state of matter that is a hybrid of both has emerged in research by scientists at the University of Nottingham, UK and the University of Ulm, Germany, and they say the discovery could have applications in catalysis and other thermally-activated processes.

In liquids, atoms move rapidly, sliding over and around each other in a random fashion. In solids, they are fixed in place. The transition between the two states, solidification, occurs when random atomic motion gives way to an ordered crystalline structure.

At least, that’s what we thought. Thanks to a specialist microscopy technique, researchers led by Nottingham’s Andrei Khlobystov found that this simple picture isn’t entirely accurate. In fact, liquid metal nanoparticles can contain stationary atoms – and as the liquid cools, their number and position play a significant role in solidification.

Some atoms remain stationary

The team used a method called spherical and chromatic aberration-corrected high-resolution transmission electron microscopy (Cc/Cs-corrected HRTEM) at the low-voltage SALVE instrument at Ulm to study melted metal nanoparticles (such as platinum, gold and palladium) deposited on an atomically thin layer of graphene. This carbon-based material acted as a sort of “hob” for heating the particles, says team member Christopher Leist, who was in charge of the HRTEM experiments. “As they melted, the atoms in the nanoparticles began to move rapidly, as expected,” Leist says. “To our surprise, however, we found that some atoms remained stationary.”

At high temperatures, these static atoms bind strongly to point defects in the graphene support. When the researchers used the electron beam from the transmission microscope to increase the number of these defects, the number of stationary atoms within the liquid increased, too. Khlobystov says that this had a knock-on effect on how the liquid solidified: when the stationary atoms are few in number, a crystal forms directly from the liquid and continues to grow until the entire particle has solidified. When their numbers increase, the crystallization process cannot take place and no crystals form.

“The effect is particularly striking when stationary atoms create a ring (corral) that surrounds and confines the liquid,” he says. “In this unique state, the atoms within the liquid droplet are in motion, while the atoms forming the corral remain motionless, even at temperatures well below the freezing point of the liquid.”

Unprecedented level of detail

The researchers chose to use Cc/Cs-corrected HRTEM in their study because minimizing spherical and chromatic aberrations through specialized hardware installed on the microscope enabled them to resolve single atoms in their images.

“Additionally, we can control both the energy of the electron beam and the sample temperature (the latter using MEMS-heated chip technology),” Khlobystov explains. “As a result, we can study metal samples at temperatures of up to 800 °C, even in a molten state, without sacrificing atomic resolution. We can therefore observe atomic behaviour during crystallization while actively manipulating the environment around the metal particles using the electron beam or by cooling the particles. This level of detail under such extreme conditions is unprecedented.”

Effect could be harnessed for catalysis

The Nottingham-Ulm researchers, who report their work in ACS Nano, say they obtained their results by chance while working on an EPSRC-funded project on 1-2 nm metal particles for catalysis applications. “Our approach involves assembling catalysts from individual metal atoms, utilizing on-surface phenomena to control their assembly and dynamics,” explains Khlobystov. “To gain this control, we needed to investigate the behaviour of metal atoms at varying temperatures and within different local environments on a support material.

“We suspected that the interplay between vacancy defects in the support and the sample temperature creates a powerful mechanism for controlling the size and structure of the metal particles,” he tells Physics World. “Indeed, this study revealed the fundamental mechanisms behind this process with atomic precision.”

The experiments were far from easy, he recalls, with one of the key challenges being to identify a thin, robust and thermally conductive support material for the metal. Happily, graphene meets all these criteria.

“Another significant hurdle to overcome was to be able to control the number of defect sites surrounding each particle,” he adds. “We successfully accomplished this by using the TEM’s electron beam not just as an imaging tool, but also as a means to modify the environment around the particles by creating defects.”

The researchers say they would now like to explore whether the effect can be harnessed for catalysis. To do this, Khlobystov says it will be essential to improve control over defect production and its scale. “We also want to image the corralled particles in a gas environment to understand how the phenomenon is influenced by reaction conditions, since our present measurements were conducted in a vacuum,” he adds.

The post New hybrid state of matter is a mix of solid and liquid appeared first on Physics World.

  •  