This episode of the Physics World Weekly podcast features a conversation with the plasma physicist Debbie Callahan, who is chief strategy officer at Focused Energy – a California- and Germany-based fusion-energy startup. Prior to that, she spent 35 years working at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in the US.
Focused Energy is developing a commercial system for generating energy from the laser-driven fusion of hydrogen isotopes. Callahan describes LightHouse, which is the company’s design for a laser-fusion power plant, and Pearl, which is the firm’s deuterium–tritium fuel capsule.
Callahan talks about the challenges and rewards of working in the fusion industry and also calls on early-career physicists to consider careers in this burgeoning sector.
Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing long-haul transportation sectors. The development path of sub-components for HDV fuel-cell applications is guided by the total cost of ownership (TCO) analysis of the truck.
TCO analysis suggests that, because trucks typically operate over very high mileages (~a million miles), the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates the fuel-cell stack capital expense (CapEx). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely determined by fuel-cell efficiency and catalyst durability.
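As a rough illustration of why fuel spend dominates, here is a back-of-the-envelope comparison. The lifetime mileage is taken from the article, but the fuel economy, hydrogen price and stack cost are hypothetical placeholders, not figures from the text.

```python
# Illustrative TCO comparison: all figures below except the mileage are assumptions.
lifetime_miles = 1_000_000        # "~a million miles", per the article
miles_per_kg_h2 = 9.0             # assumed fuel economy for an HDV fuel-cell truck
h2_price_per_kg = 6.0             # assumed delivered hydrogen price, USD/kg
stack_capex = 50_000              # assumed fuel-cell stack cost, USD

lifetime_fuel_cost = lifetime_miles / miles_per_kg_h2 * h2_price_per_kg
print(f"lifetime fuel cost ~ ${lifetime_fuel_cost/1e6:.1f}M vs stack CapEx ~ ${stack_capex/1e3:.0f}k")
# Even with generous assumptions, fuel spend dwarfs the stack cost, which is why
# TCO pushes development toward efficiency and durability rather than CapEx alone.
```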
This article is written to bridge the gap between the industrial requirements and academic activity for advanced cathode catalysts with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, Pt-alloy crystal structure, stability of the alloying element, cathode ionomer volume fraction, and catalyst–ionomer interface play a critical role in improving performance and durability.
We provide our perspective on four major approaches that are currently being pursued, namely mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting the ionomer volume fraction, and shell-protection strategies. While each approach has its merits and demerits, we highlight their key developmental needs for the future.
Nagappan Ramaswamy
Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University in Boston, specialising in fuel-cell electrocatalysis.
He then spent 13 years working in industrial R&D – two years at Nissan North America in Michigan, USA, focusing on lithium-ion batteries, followed by 11 years at General Motors, also in Michigan, focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects funded by the US Department of Energy focused on the development of proton-exchange membrane fuel cells for automotive applications.
At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.
Much of my time is spent trying to build and refine models in quantum optics, usually with just a pencil, paper and a computer. This requires an ability to sit with difficult concepts for a long time, sometimes far longer than is comfortable, until they finally reveal their structure.
Good communication is equally essential – I teach students; collaborate with colleagues from different subfields; and translate complex ideas into accessible language for the broader public. Modern physics connects with many different fields, so being flexible and open-minded matters as much as knowing the technical details. Above all, curiosity drives everything. When I don’t understand something, that uncertainty becomes my strongest motivation to keep going.
What do you like best and least about your job?
What I like the best is the sense of discovery – the moment when a problem that has evaded understanding for weeks suddenly becomes clear. Those flashes of insight feel like hearing the quiet whisper of nature itself. They are rare, but they bring along a joy that is hard to find elsewhere.
I also value the opportunity to guide the next generation of physicists, whether in the university classroom or through public science communication. Teaching brings a different kind of fulfilment: witnessing students develop confidence, curiosity and a genuine love for physics.
What I like the least is the inherent uncertainty of research. Questions do not promise favourable answers, and progress is rarely linear. Fortunately, I have come to see this lack of balance not as a weakness but as a source of power that forces growth, new perspectives, and ultimately deeper understanding.
What do you know today that you wish you knew when you were starting out in your career?
I wish I had known that feeling lost is not a sign of inadequacy but a natural part of doing physics at a high level. Not understanding something can be the greatest motivator, provided one is willing to invest time and effort. Passion and curiosity matter far more than innate brilliance. If I had realized earlier that steady dedication can carry you farther than talent alone, I would have embraced uncertainty with much more confidence.
With this famous remark at the 1927 Solvay Conference, Albert Einstein set the tone for one of physics’ most enduring debates. At the heart of his dispute with Niels Bohr lay a question that continues to shape the foundations of physics: does the apparently probabilistic nature of quantum mechanics reflect something fundamental, or is it simply due to lack of information about some “hidden variables” of the system that we cannot access?
Physicists at University College London, UK (UCL) have now addressed this question via the concept of quantum state diffusion (QSD). In QSD, the wavefunction does not collapse abruptly. Instead, wavefunction collapse is modelled as a continuous interaction with the environment that causes the system to evolve gradually toward a definite state, restoring some degree of intuition to the counterintuitive quantum world.
A quantum coin toss
To appreciate the distinction (and the advantages it might bring), imagine tossing a coin. While the coin is spinning in midair, it is neither fully heads nor fully tails – its state represents a blend of both possibilities. This mirrors a quantum system in superposition.
When the coin eventually lands, the uncertainty disappears and we obtain a definite outcome. In quantum terms, this corresponds to wavefunction collapse: the superposition resolves into a single state upon measurement.
In the standard interpretation of quantum mechanics, wavefunction collapse is considered instantaneous. However, this abrupt transition is challenging from a thermodynamic perspective because uncertainty is closely tied to entropy. Before measurement, a system in superposition carries maximal uncertainty, and thus maximum entropy. After collapse, the outcome is definite and our uncertainty about the system is reduced, thereby reducing the entropy.
This apparent reduction in entropy immediately raises a deeper question. If the system suddenly becomes more ordered at the moment of measurement, where does the “missing” entropy go?
From instant jumps to continuous flows
Returning to the coin analogy, imagine that instead of landing cleanly and instantly revealing heads or tails, the coin wobbles, leans, slows and gradually settles onto one face. The outcome is the same, but the transition is continuous rather than abrupt.
This gradual settling captures the essence of QSD. Instead of an instantaneous “collapse”, the quantum state unfolds continuously over time. This makes it possible to track various parameters of thermodynamic change, including a quantity called environmental stochastic entropy production that measures how irreversible the process is.
Another benefit is that whereas standard projective measurements describe an abrupt “yes/no” outcome, QSD models a broader class of generalized or “weak” measurements, revealing the subtle ways quantum systems evolve. It also allows physicists to follow individual trajectories rather than just average outcomes, uncovering details that the standard framework smooths over.
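The continuous picture lends itself to simple numerical experiments. Below is a minimal sketch of a diffusive stochastic Schrödinger equation in the Gisin–Percival (QSD) form for a single qubit, with a Hermitian "measurement" operator σz; the coupling strength, time step and omission of a Hamiltonian are illustrative assumptions, and this is not the UCL team's actual model. Running it shows ⟨σz⟩ wandering away from zero and settling near +1 or −1: a gradual, trajectory-by-trajectory version of "collapse".

```python
import numpy as np

rng = np.random.default_rng(1)

sz = np.array([[1, 0], [0, -1]], dtype=complex)     # "measurement" operator
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition of up/down

gamma, dt, steps = 1.0, 1e-3, 5000                  # assumed coupling, step, duration
L_op = np.sqrt(gamma) * sz                          # Lindblad/measurement operator

record = []
for _ in range(steps):
    expL = np.vdot(psi, L_op @ psi).real            # <L>, real because L is Hermitian
    drift = (expL * L_op - 0.5 * L_op @ L_op - 0.5 * expL**2 * np.eye(2)) @ psi
    dxi = (rng.normal() + 1j * rng.normal()) * np.sqrt(dt / 2)   # complex Wiener increment
    psi = psi + drift * dt + (L_op @ psi - expL * psi) * dxi     # Euler step (H = 0 here)
    psi /= np.linalg.norm(psi)                      # renormalise the approximate step
    record.append(np.vdot(psi, sz @ psi).real)

print(f"<sigma_z> after {steps} steps: {record[-1]:+.3f}")       # ends close to +1 or -1
```

Averaging many such trajectories recovers the usual smooth decoherence; the individual runs are where the "collapse-like" purification shows up.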
“The QSD framework helps us understand how unpredictable environmental influences affect quantum systems,” explains Sophia Walls, a PhD student at UCL and the first author of a paper in Physical Review A on the research. Environmental noise, Walls adds, is particularly important for quantum technologies, making the study’s insights valuable for quantum error correction, control protocols and feedback mechanisms.
Bridging determinism and probability
At first glance, QSD might seem to resemble decoherence, which also arises from system–environment interactions such as noise. But the two differ in scope. “Decoherence explains how a system becomes a classical mixed state,” Walls clarifies, “but not how it ultimately purifies into a single eigenstate.” QSD, with its stochastic term, describes this final purification – the point where the coin’s faint shimmer sharpens into heads or tails.
In this view, measurement is not a single act but a continuous, entropy-producing flow of information between system and environment – a process that gradually results in manifestation of one of the possible quantum states, rather than an abrupt “collapse”.
“Standard quantum mechanics separates two kinds of dynamics – the deterministic Schrödinger evolution and the probabilistic, instantaneous collapse,” Walls notes. “QSD connects both in a single dynamical equation, offering a more unified description of measurement.”
This continuous evolution makes otherwise intractable quantities, such as entropy production, measurable and meaningful. It also breathes life into the wavefunction itself. By simulating individual realizations, QSD distinguishes between two seemingly identical mixed states: one genuinely entangled with its environment, and another that simply represents our ignorance. Only in the first case does the system dynamically evolve – a distinction invisible in the orthodox picture.
A window on quantum gravity?
Could this diffusion-based framework also illuminate other fundamental questions beyond the nature of measurement? Walls thinks it’s possible. Recent work suggests that stochastic processes could provide experimental clues about how gravity behaves at the quantum scale. QSD may one day offer a way to formalize or test such ideas. “If the nature of quantum gravity can be studied through a diffusive or stochastic process, then QSD would be a relevant framework to explore it,” Walls says.
A miniature version of an atomic fountain clock has been unveiled by researchers at the UK’s National Physical Laboratory (NPL). Their timekeeper occupies just 5% of the volume of a conventional atomic fountain clock while delivering a time signal with a stability that is on par with a full-sized system. The team is now honing its design to create compact fountain clocks that could be used in portable systems and remote locations.
The ticking of an atomic clock is defined by the frequency of the electromagnetic radiation that is absorbed and emitted by a specific transition between atomic energy levels. Today, the second is defined using a transition in caesium atoms that involves microwave radiation. Caesium atoms are placed in a microwave cavity and a measurement-and-feedback mechanism is used to tune the frequency of the cavity radiation to the atomic transition – creating a source of microwaves with a very narrow frequency range centred at the clock frequency.
The first atomic clocks sent a fast-moving beam of atoms through a microwave cavity. The precision of such a beam clock is limited by the relatively short time that individual atoms spend in the cavity. Also, the speed of the atoms means that the measured frequency peak is shifted and broadened by the Doppler effect.
Launching atoms
These problems were addressed by the development of the fountain clock, in which the atoms are cooled (slowed down) by laser light, which also launches them upwards. The atoms pass through a microwave cavity on the way up, and again as they fall back down. Because they travel much more slowly than in a beam clock, the atoms spend much more time in the cavity, so the time signal from a fountain clock is much more precise than that from a beam clock. However, longer times also allow greater thermal spread of the atomic cloud – which degrades clock performance. Trading off measurement time against thermal spread means that the caesium fountain clocks that currently define the second have drops of about 30 cm.
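To see roughly how the drop height sets the precision, here is a back-of-the-envelope sketch. The 30 cm figure is taken from the article, but interpreting it as the ballistic height above the cavity, and the simple Ramsey fringe-width relation, are simplifying assumptions.

```python
import numpy as np

g = 9.81      # m/s^2
h = 0.30      # m: the ~30 cm "drop" quoted for caesium fountain clocks

# Time spent above the cavity (up and back down) for a ballistic flight of height h
T = 2 * np.sqrt(2 * h / g)          # ~0.49 s of Ramsey interrogation time

# The width of the central Ramsey fringe scales roughly as 1/(2T)
print(f"T ~ {T:.2f} s, fringe width ~ {1/(2*T):.1f} Hz")

# A thermal beam clock, by contrast, might give atoms only a few milliseconds of
# interrogation time, broadening the fringe to the ~100 Hz level, hence the fountain's gain.
```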
Other components are also needed to operate fountain clocks – including a vacuum system and laser and microwave instrumentation. This pushes the height of a typical clock to about 2 m, and makes it a complex and expensive instrument that cannot be easily transported.
Now, Sam Walby and colleagues at NPL have shrunk the overall height of a rubidium-based fountain clock to 80 cm, while retaining the 30 cm drop. The result is an instrument that is 5% the volume of one of NPL’s conventional caesium atomic fountain clocks.
Precise yet portable
“That’s taking it from barely being able to fit through a doorway, to something one could pick up and carry with one arm,” says Walby.
Despite the miniaturization, the mini-fountain achieved a stability of one part in 10¹⁵ after several days of operation – which NPL says is comparable to full-sized clocks.
Walby told Physics World that the NPL team achieved miniaturization by eliminating two conventional components from their clock design. One is a dedicated chamber used to measure the quantum states of the atoms. Instead, this measurement is made within the clock’s cooling chamber. Also eliminated is a dedicated state-selection microwave cavity, which puts the atoms into the quantum state from which the clock transition occurs.
“The mini-fountain also does this [state] selection,” explains Walby, “but instead of using a dedicated cavity, we use a coax-to-waveguide adapter that is directed into the cooling chamber, which creates a travelling wave of microwaves at the correct frequency.”
The NPL team also reduced the amount of magnetic shielding used, which meant that the edge effects of the magnetic field had to be considered more carefully. The optics system of the clock was greatly simplified and, according to NPL, the use of commercial components means that the clock is low maintenance and easy to operate.
Radical simplification
“By radically simplifying and shrinking the atomic fountain, we’re making ultra-precise timing technology available beyond national labs,” said Walby. “This opens new possibilities for resilient infrastructure and next-generation navigation.”
According to Walby, one potential use of a miniature atomic fountain clock is as a holdover clock. These are devices that produce a very stable time signal when not synchronized with other atomic clocks. This is important for creating resilience in infrastructure that relies on precision timing – such as communications networks, global navigation satellite systems (GNSS), including GPS, and power grids. Synchronization is usually done using GNSS signals, but these can be jammed or spoofed to disrupt timing systems.
Holdover clocks require time errors of just a few nanoseconds over a month, which the new NPL clock can deliver. The miniature atomic clock could also be used as a secondary frequency standard for the SI second.
The small size of the clock also lends itself to portable and even mobile applications, according to Walby: “The adaptation of the mini-fountain technology to mobile platforms will be [the] subject of further developments”.
However, the mini-clock is large when compared to more compact or chip-based clocks – which do not perform as well. Therefore, he believes that the technology is more likely to be implemented on ships or ground vehicles than aircraft.
“At a minimum, it should be easily transportable compared to the current solutions of similar performance,” he says.
“Highly innovative”
Atomic-clock expert Elizabeth Donley tells Physics World, “NPL has been highly innovative in recent years in standardizing fountain clock designs and even supplying caesium fountains to other national standards labs and organizations around the world for timekeeping purposes. This new compact rubidium fountain is a continuation of this work and can provide a smaller frequency standard with comparable performance to the larger fountains based on caesium.”
Donley spent more than two decades developing atomic clocks at the US National Institute of Standards and Technology (NIST) and now works as a consultant in the field. She agrees that miniature fountain clocks would be useful for holding-over timing information when time signals are interrupted.
She adds, “Once the international community decides to redefine the second to be based on an optical transition, it won’t matter if you use rubidium or caesium. So I see this work as more of a practical achievement than a ground-breaking one. Practical achievements are what drives progress most of the time.”
Researchers in Switzerland have found an unexpected new use for an optical technique commonly used in silicon chip manufacturing. By shining a focused laser beam onto a sample of material, a team at the Paul Scherrer Institute (PSI) and ETH Zürich showed that it was possible to change the material’s magnetic properties on a scale of nanometres – essentially “writing” these magnetic properties into the sample in the same way as photolithography etches patterns onto wafers. The discovery could have applications for novel forms of computer memory as well as fundamental research.
In standard photolithography – the workhorse of the modern chip manufacturing industry – a light beam passes through a transmission mask and projects an image of the mask’s light-absorption pattern onto a (usually silicon) wafer. The wafer itself is covered with a photosensitive polymer called a resist. Changing the intensity of the light leads to different exposure levels in the resist-covered material, making it possible to create finely detailed structures.
In the new work, Laura Heyderman and colleagues in PSI-ETH Zürich’s joint Mesoscopic Systems group began by placing a thin film of a magnetic material in a standard photolithography machine, but without a photoresist. They then scanned a focused laser beam with a wavelength of 405 nm over the surface of the sample while modulating its intensity to deliver varying doses of light. This process is known as direct write laser annealing (DWLA), and it makes it possible to heat areas of the sample that measure just 150 nm across.
In each heated area, thermal energy from the laser is deposited at the surface and partially absorbed by the film down to a depth of around 100 nm. The remainder dissipates through a silicon substrate coated with a 300-nm-thick layer of silicon oxide. However, the thermal conductivity of this substrate is low, which maximizes the temperature increase in the film for a given laser fluence. The researchers also sought to keep the temperature increase as uniform as possible by using thin-film heterostructures with a total thickness of less than 20 nm.
Crystallization and interdiffusion effects
Members of the PSI-ETH Zürich team applied this technique to several technologically important magnetic thin-film systems, including ferromagnetic CoFeB/MgO, ferrimagnetic CoGd and synthetic antiferromagnets composed of Co/Cr, Co/Ta or CoFeB/Pt/Ru. They found that DWLA induces both crystallization and interdiffusion effects in these materials. During crystallization, the orientation of the sample’s magnetic moments gradually changes, while interdiffusion alters the magnetic exchange coupling between the layers of the structures.
The researchers say that both phenomena could have interesting applications. The magnetized regions in the structures could be used in data storage, for example, with the direction of the magnetization (“up” or “down”) corresponding to the “1” or “0” of a bit of data. In conventional data-storage systems, these bits are switched with a magnetic field, but team member Jeffrey Brock explains that the new technique allows electric currents to be used instead. This is advantageous because electric currents are easier to produce than magnetic fields, while data storage devices switched with electricity are both faster and capable of packing more data into a given space.
Team member Lauren Riddiford says the new work builds on previous studies by members of the same group, which showed it was possible to make devices suitable for computer memory by locally patterning magnetic properties. “The trick we used here was to locally oxidize the topmost layer in a magnetic multilayer,” she explains. “However, we found that this works only in a few systems and only produces abrupt changes in the material properties. We were therefore brainstorming possible alternative methods to create gradual, smooth gradients in material properties, which would open possibilities to even more exciting applications, and realized that we could perform local annealing with a laser originally made for patterning polymer resist layers for photolithography.”
Riddiford adds that the method proved so fast and simple to implement that the team’s main challenge was to investigate all the material changes it produced. Physical characterization methods for ultrathin films can be slow and difficult, she tells Physics World.
The researchers, who describe their technique in Nature Communications, now hope to use it to develop structures that are compatible with current chip-manufacturing technology. “Beyond magnetism, our approach can be used to locally modify the properties of any material that undergoes changes when heated, so we hope researchers using thin films for many different devices – electronic, superconducting, optical, microfluidic and so on – could use this technique to design desired functionalities,” Riddiford says. “We are looking forward to seeing where this method will be implemented next, whether in magnetic or non-magnetic materials, and what kind of applications it might bring.”
“The Straton Model of elementary particles had very limited influence in the West,” said Jinyan Liu as she sat with me in a quiet corner of the CERN cafeteria. Liu, who I caught up with during a break in a recent conference on the history of particle physics, was referring to a particular model of elementary particle physics first put together in China in the mid-1960s. The Straton Model was, and still largely is, unknown outside that country. “But it was an essential step forward,” Liu added, “for Chinese physicists in joining the international community.”
Liu was at CERN to give a talk on how Chinese theorists redirected their research efforts in the years after the Cultural Revolution, which ended in 1976. They switched from the Straton Model, which was a politically infused theory of matter favoured by Mao Zedong, the founder of the People’s Republic of China, to mainstream particle physics as practised by the rest of the world. It’s easy to portray the move as the long-overdue moment when Chinese scientists resumed their “real” physics research. But, Liu told me, “actually it was much more complicated”.
The model is essentially a theory of the structure of hadrons – either baryons (such as protons and neutrons) or mesons (such as pions and kaons). But the model’s origins are as improbable as they are labyrinthine. Mao, who had a keen interest in natural science, was convinced that matter was infinitely divisible, and in 1963 he came across an article by the Marxist-inspired Japanese physicist Shoichi Sakata (1911–1970).
First published in Japanese in 1961 and later translated into Russian, Sakata’s paper was entitled “Dialogues concerning a new view of elementary particles”. It restated Sakata’s belief, which he had been working on since the 1950s, that hadrons are made of smaller constituents – “elementary particles are not the ultimate elements of matter” as he put it. With some Chinese scholars back then still paying close attention to publications from the Soviet Union, their former political and ideological ally, that paper was then translated into Chinese.
Mao Zedong was engrossed in Shoichi Sakata’s paper, for it seemed to offer scientific support for his own views.
This version appeared in the Bulletin of the Studies of Dialectics of Nature in 1963. Mao, who received an issue of that bulletin from his son-in-law, was engrossed in Sakata’s paper, for it seemed to offer scientific support for his own views. Sakata’s article – both in the original Japanese and now in Chinese – cited Friedrich Engels’ view that matter has numerous stages of discrete but qualitatively different parts. In addition, it quoted Lenin’s remark that “even the electron is inexhaustible”.
A wider dimension
“International politics now also entered,” Liu told me, as we discussed the issue further at CERN. A split between China and the Soviet Union had begun to open up in the late 1950s, with Mao breaking off relations with the Soviet Union and starting to establish non-governmental science and technology exchanges between China and Japan. Indeed, when China hosted the Peking Symposium of foreign scientists in 1964, Japan brought the biggest delegation, with Sakata as its leader.
At the event, Mao personally congratulated Sakata on his theory. It was, Sakata later recalled, “the most unforgettable moment of my journey to China”. In 1965, Sakata’s paper was retranslated from the Japanese original, with an annotated version published in Red Flag and the newspaper Renmin ribao, or “People’s Daily”, both official organs of the Chinese Communist Party.
Chinese physicists realized that they could capitalize on Mao’s enthusiasm to make elementary particle physics a legitimate research direction.
Chinese physicists, who had been assigned to work on the atomic bomb and other research deemed important by the Communist Party, now started to take note. Uninterested in philosophy, they realized that they could capitalize on Mao’s enthusiasm to make elementary particle physics a legitimate research direction.
As a result, 39 members of the Chinese Academy of Sciences (CAS), Peking University and the University of Science and Technology of China formed the Beijing Elementary Particle Group. Between 1965 and 1966, they wrote dozens of papers on a model of hadrons inspired by both Sakata’s work and quark theory, based on the available experimental data. It was dubbed the Straton Model because it involved layers or “strata” of particles nested in each other.
Liu has interviewed most surviving members of the group and studied details of the model. It differed from the model being developed at the time by the US theorist Murray Gell-Mann, which saw quarks as not physical but mathematical elements. As Liu discovered, Chinese particle physicists were now given resources they’d never had before. In particular, they could use computers, which until then had been devoted to urgent national defence work. “To be honest,” Liu chuckled, “the elementary particle physicists didn’t use computers much, but at least they were made available.”
The high-water mark for the Straton Model occurred in July 1966 when members of the Beijing Elementary Particle Group presented it at a summer physics colloquium organized by the China Association for Science and Technology. The opening ceremony was held in Tiananmen Square, in what was then China’s biggest conference centre, with attendees including Abdus Salam from Imperial College London. The only high-profile figure to be invited from the West, Salam was deemed acceptable because he was science advisor to the president of Pakistan, a country considered outside the western orbit.
The proceedings of the colloquium were later published as “Research on the theory of elementary particles carried out under the brilliant illumination of Mao Tse-Tung’s thought”. Its introduction was what Liu calls a “militant document” – designed to reinforce the idea that the authors were carrying Mao’s thought into scientific research to repudiate “decadent feudal, bourgeois and revisionist ideologies”.
Participants in Beijing had expected to make their advances known internationally by publishing the proceedings in English. But the Cultural Revolution had begun just two months before, and publications in English were forbidden. “As a result,” Liu told me, “the model had very limited influence outside China.” Sakata, however, had an important influence on Japanese theorists, having co-authored the key paper on neutrino flavour oscillation (Prog. Theor. Phys. 28 870).
A resurfaced effort
In recent years, Liu has shed new light on the Straton Model, writing a paper in the journal Chinese Annals of History of Science and Technology (2 85). She also published a 2022 Chinese-language book entitled Constructing a Theory of Hadron Structure: Chinese Physicists’ Straton Model, which describes the downfall of the model after 1966. None of its predicted material particles appeared, though a candidate event once occurred in a cosmic-ray observatory in the south of China.
By 1976, quantum chromodynamics (QCD) had convincingly emerged as the established model of hadrons. The effective end of the Straton Model took place at a conference in January 1980 in Conghua, near Hong Kong. Hung-Yuan Tzu, one of the key leaders of the Beijing Group, gave a paper entitled “Reminiscences of the Straton Model”, signalling that physics had moved on.
During our meeting at CERN, Liu showed me photos of the 1980 event. “It was a very important conference in the history of Chinese physics,” she said, “the first opening to Chinese physicists in the West”. Visits by Chinese expatriates were organized by Tsung-Dao Lee and Chen-Ning Yang, who shared the 1957 Nobel Prize for Physics for their work on parity violation.
The critical point
It is easy for westerners to mock the Straton Model; Sheldon Glashow once joked that it was a theory of “Maons”. But Liu sees it as significant research that had many unexpected consequences, such as helping to advance physics research in China. “It gave physicists a way to pursue quantum field theory without having to do national defence work”.
The model also trained young researchers in particle physics and honed their research competence. After the post-Cultural Revolution reform and its opening to the West, these physicists could then integrate into the international community. “The story,” Liu said, “shows how ingeniously the Chinese physicists adapted to the political situation.”
Nematics are materials made of rod-like particles that tend to align in the same direction. In active nematics, this alignment is constantly disrupted and renewed because the particles are driven by internal biological or chemical energy. As the orientation field twists and reorganises, it creates topological defects: points where the alignment breaks down. These defects are central to the collective behaviour of active matter, shaping flows, patterns and self-organisation.
In this work, the researchers identify an active topological phase transition that separates two distinct regimes of defect organisation. As the system approaches this transition from below, the dynamics slow dramatically: the relaxation of defect density becomes sluggish, fluctuations in the number of defects grow in amplitude and lifetime, and the system becomes increasingly sensitive to small changes in activity. At the critical point, defects begin to interact over long distances, with correlation lengths that grow with system size. This behaviour produces a striking dual-scaling pattern: defect fluctuations appear uniform at small scales but become anti-hyperuniform at larger scales, meaning that the number of defects varies far more than expected from a random distribution.
A key finding is that this anti-hyperuniformity originates from defect clustering. Rather than forming ordered structures or undergoing phase separation, defects tend to appear near existing defects, creating multiscale clusters. This distinguishes the transition from well-known defect-unbinding processes such as the Berezinskii–Kosterlitz–Thouless transition in passive nematics or the nematic–isotropic transition in screened active systems. Above the critical activity, the system enters a defect-laden turbulent state where defects are more uniformly distributed and correlations become short-ranged and negative.
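For readers who want a feel for the number-variance diagnostic behind terms like "anti-hyperuniform", here is a toy sketch (entirely illustrative, not the authors' analysis): it compares how the variance of counts in circular windows grows with window size for an uncorrelated (Poisson) point pattern versus a clustered one generated by a simple parent-offspring process. The box size, point numbers and cluster parameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 100.0, 5000                       # box size and number of points (arbitrary)

# Uncorrelated reference: a Poisson (completely random) point pattern
poisson = rng.uniform(0, L, size=(N, 2))

# Clustered pattern: offspring scattered around random parents (Thomas-type process)
parents = rng.uniform(0, L, size=(100, 2))
clustered = parents[rng.integers(0, 100, N)] + rng.normal(0, 1.5, size=(N, 2))
clustered %= L                           # periodic wrap

def number_variance(points, R, n_windows=300):
    """Variance of the number of points falling in randomly placed circles of radius R."""
    centres = rng.uniform(0, L, size=(n_windows, 2))
    counts = []
    for c in centres:
        d = points - c
        d -= L * np.round(d / L)         # minimum-image distances in the periodic box
        counts.append(np.count_nonzero(np.hypot(d[:, 0], d[:, 1]) < R))
    return np.var(counts)

for R in (2, 4, 8, 16):
    print(f"R={R:2d}  Poisson var={number_variance(poisson, R):8.1f}  "
          f"clustered var={number_variance(clustered, R):8.1f}")
# For the Poisson pattern the variance tracks the mean count (~pi*R^2*N/L^2); for the
# clustered pattern it grows much faster with R, the signature of anti-hyperuniform scaling.
```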
The researchers confirm these behaviours experimentally using large-field-of-view measurements of endothelial cell monolayers, made up of the cells that line blood vessels. The same dual-scaling behaviour, long-range correlations and clustering appear in these living tissues, demonstrating that the transition is robust across system sizes, parameter variations, frictional damping and boundary conditions.
Topological quantum computing is a proposed approach to building quantum computers that aims to solve one of the biggest challenges in quantum technology: error correction.
In conventional quantum systems, qubits are extremely sensitive to their environment and even tiny disturbances can cause errors. Topological quantum computing addresses this by encoding information in the global properties of a system: the topology of certain quantum states.
These systems rely on the use of non-Abelian anyons, exotic quasiparticles that can exist in two-dimensional materials under special conditions.
The main challenge faced by this approach to quantum computing is the creation and control of these quasiparticles.
One possible source of non-Abelian anyons is the fractional quantum Hall (FQH) state: an exotic state of matter that can exist at very low temperatures and high magnetic fields.
These states come in two forms: even-denominator and odd-denominator. Here, we’re interested in the even-denominator states – the more interesting but less well understood of the two.
In this latest work, researchers have observed this exotic state in gallium arsenide (GaAs) two-dimensional hole systems.
Typically, FQH states are isotropic, showing no preferred direction. Here, however, the team found states that are strongly anisotropic, suggesting that the system spontaneously breaks rotational symmetry.
This means that it forms a nematic phase – similar to liquid crystals – where molecules align along a direction without forming a rigid structure.
This spontaneous symmetry breaking adds complexity to the state and can influence how quasiparticles behave, interact, and move.
This observation of spontaneous nematicity in an even-denominator fractional quantum Hall state is the first of its kind.
Although there are many questions left to be answered, the properties of this system could be hugely important for topological quantum computers as well as other novel quantum technologies.
With a PhD in physics from the Technical University of Darmstadt, Germany, Holtkamp has managed large scientific projects throughout his career.
Holtkamp is the former deputy director of the SLAC National Accelerator Laboratory at Stanford University, where he managed the construction of the Linac Coherent Light Source upgrade, the world’s most powerful X-ray laser, along with more than $2bn of onsite construction projects.
Holtkamp also previously served as the principal deputy director general for the international fusion project ITER, which is currently under construction in Cadarache, France.
Holtkamp worked at Fermilab between 1998 and 2001, where he worked on commissioning the Main Injector and also led a study on the feasibility of an intense neutrino source based on a muon storage ring.
LBNF-DUNE will study the properties of neutrinos in unprecedented detail, as well as the differences in behaviour between neutrinos and antineutrinos. The DUNE detector, which lies about 1300 km from Fermilab, will measure the neutrinos that are generated by Fermilab’s accelerator complex, which is just outside Chicago.
In a statement, Holtkamp said he is “deeply honoured” to lead the lab. “Fermilab has done so much to advance our collective understanding of the fundamentals of our universe,” he says. “I am committed to ensuring the laboratory remains the neutrino capital of the world, and the safe and successful completion of LBNF-DUNE is key to that goal. I’m excited to rejoin Fermilab at this pivotal moment to guide this project and our other important modernization efforts to prepare the lab for a bright future.”
Then in October that year, a new organization – Fermi Forward Discovery Group – was announced to manage the lab for the US Department of Energy. That move came under scrutiny given it is dominated by the University of Chicago and Universities Research Association (URA), a consortium of research universities, which had already been part of the management since 2007. Then a month later, almost 2.5% of Fermilab’s employees were laid off.
“We’re excited to welcome Norbert, who brings a wealth of scientific and managerial experience to Fermilab,” noted University of Chicago president Paul Alivisatos, who is also chair of the board of directors of Fermi Forward Discovery Group.
Alivisatos thanked Kim for her “tireless service” as director. “[Kim] played a critical role in strengthening relationships with Fermilab’s leading stakeholders, driving the lab’s modernization efforts, and positioning Fermilab to amplify DOE’s broader goals in areas like quantum science and AI,” added Alivisatos.
The CERN particle-physics lab near Geneva has received $1bn from private donors towards the construction of the Future Circular Collider (FCC). The cash marks the first time in the lab’s 72-year history that individuals and philanthropic foundations have agreed to support a major CERN project. If built, the FCC would be the successor to the Large Hadron Collider (LHC), where the Higgs boson was discovered.
CERN originally released a four-volume conceptual design report for the FCC in early 2019, with more detail included in a three-volume feasibility study that came out last year. It calls for a giant tunnel some 90.7 km in circumference – roughly three times as long as the LHC – that would be built about 200 m underground on average.
The FCC has been recommended as the preferred option for the next flagship collider at CERN in the ongoing process to update the European Strategy for Particle Physics, which will be passed over to the CERN Council in May 2026. If the plans are given the green light by the CERN Council in 2028, construction of the FCC electron-positron machine, dubbed FCC-ee, would begin in 2030. It would start operations in 2047, a few years after the High Luminosity LHC (HL-LHC) closes down, and run for about 15 years until the early 2060s.
The FCC-ee would focus on creating a million Higgs particles in total to allow physicists to study the boson’s properties with an accuracy an order of magnitude better than is possible with the LHC. The FCC feasibility study then calls for a hadron machine, dubbed FCC-hh, to replace the FCC-ee in the same 91 km tunnel. It would be a “discovery machine”, smashing together protons at high energy – about 85 TeV – with the aim of creating new particles. If built, the FCC-hh would begin operation in 2073 and run to the end of the century.
The funding model for the FCC-ee, which is expected to have a price tag of about $18bn, is still a work in progress. But it is estimated that at least two-thirds of the construction costs will come from CERN’s 24 member states with the rest needing to be found elsewhere. One option to plug that gap is private donations and in late December CERN received a significant boost from several organizations including the Breakthrough Prize Foundation, the Eric and Wendy Schmidt Fund for Strategic Innovation, and the entrepreneurs John Elkann and Xavier Niel. Together, they pledged a total of $1bn towards the FCC-ee.
Costas Fountas, president of the CERN Council, says CERN is “extremely grateful” for the interest. “This once again demonstrates CERN’s relevance and positive impact on society, and the strong interest in CERN’s future that exists well beyond our own particle physics community,” he notes.
Eric Schmidt, the former chief executive of Google, says that he and Wendy Schmidt were “inspired by the ambition of this project and by what it could mean for the future of humanity”. The FCC, he believes, is an instrument that “could push the boundaries of human knowledge and deepen our understanding of the fundamental laws of the Universe” and could lead to technologies that could benefit society “in profound ways” from medicine to computing to sustainable energy.
The cash promised has been welcomed by outgoing CERN director-general Fabiola Gianotti. “It’s the first time in history that private donors wish to partner with CERN to build an extraordinary research instrument that will allow humanity to take major steps forward in our understanding of fundamental physics and the universe,” she said. “I am profoundly grateful to them for their generosity, vision, and unwavering commitment to knowledge and exploration.”
Further boost
The cash comes a few months after the Circular Electron–Positron Collider (CEPC) – a rival collider to the FCC-ee that also involves building a huge 100 km tunnel to study the Higgs in unprecedented detail – was not considered for inclusion in China’s next five-year plan, which runs from 2026 to 2030. There has been much discussion in China about whether the CEPC is the right project for the country, with the collider facing criticism from particle physicist and Nobel laureate Chen-Ning Yang, before he died last year.
Wang Yifang of the Institute of High Energy Physics (IHEP) in Beijing says they will submit the CEPC for consideration again in 2030 unless the FCC is officially approved before then. But for particle theorist John Ellis from King’s College London, China’s decision to effectively put the CEPC on the back burner “certainly simplifies the FCC discussion”. “However, an opportunity for growing the world particle physics community has been lost, or at least deferred [by the decision],” Ellis told Physics World.
Ellis adds, however, that he would welcome China’s participation in the FCC. “Their accelerator and detector [technical design reviews] show that they could bring a lot to the table, if the political obstacles can be overcome,” he says.
However, if the FCC-ee goes ahead, China could perhaps make significant “in-kind” contributions, rather like those that occur with the ITER experimental fusion reactor, which is currently being built in France. In this case, instead of cash payments, the countries provide components, equipment and other materials.
Those considerations and more will now fall to the British physicist Mark Thomson, who took over from Gianotti as CERN director-general on 1 January for a five-year term. As well as working on funding requirements for the FCC-ee, top of his in-tray will be shutting down the LHC in June to make way for further work on the HL-LHC, which involves installing powerful new superconducting magnets and improving the detectors.
About 90% of the 27 km LHC accelerator will be affected by the upgrade, with a major part being the replacement of the magnets in the final-focus systems of the two large experiments, ATLAS and CMS. These magnets will take the incoming beams and focus them down to less than 10 µm in cross section. The upgrade includes the installation of brand-new, state-of-the-art niobium–tin (Nb₃Sn) superconducting focusing magnets.
The HL-LHC will probably not turn on until 2030, at which time Thomson’s term will nearly be over, but that doesn’t deter him from leading the world’s foremost particle-physics lab. “It’s an incredibly exciting project,” Thomson told the Guardian. “It’s more interesting than just sitting here with the machine hammering away.”
Imaging tissue fibrosis (a) Mid-infrared dichroism-sensitive photoacoustic microscopy (MIR-DS-PAM) images of cell-induced fibrosis (CIF) and normal control (NC) tissue; (c) MIR-DS-PAM images of drug-induced fibrosis (DIF) and NC tissue; (b) and (d) show the corresponding confocal fluorescence microscopy (CFM) images. Scale bars: 500 µm. (Courtesy: CC-BY 4.0/Light Sci. Appl. 10.1038/s41377-025-02117-0)
Many of the tissues in the human body rely upon highly organized microstructures to function effectively. If the collagen fibres in heart muscle become disordered, for instance, this can lead to or reflect disorders such as fibrosis and cancer. To image and analyse such structural changes, researchers at Pohang University of Science and Technology (POSTECH) in Korea have developed a new label-free microscopy technique and demonstrated its use in engineered heart tissue.
The ability to assess the alignment of microstructures such as protein fibres within tissue’s extracellular matrix provides a valuable tool for diagnosing disease, monitoring therapy response and evaluating tissue engineering models. Currently, however, this is achieved using histological imaging methods based on immunofluorescent staining, which can be labour-intensive and sensitive to the imaging conditions and antibodies used.
Instead, a team headed up by Chulhong Kim and Jinah Jang is investigating photoacoustic microscopy (PAM), a label-free imaging modality that relies on light absorption by endogenous tissue chromophores to reveal structural and functional information. In particular, PAM with mid-infrared (MIR) incident light provides bond-selective, high-contrast imaging of proteins, lipids and carbohydrates. The researchers also incorporated dichroism-sensitive (DS) functionality, resulting in a technique referred to as MIR-DS-PAM.
“Dichroism-sensitivity enables the quantitative assessment of fibre alignment by detecting the polarization-dependent absorption of anisotropic materials like collagen,” explains first author Eunwoo Park. “This adds a new contrast mechanism to conventional photoacoustic imaging, allowing simultaneous visualization of molecular content and microstructural organization without any labelling.”
Park and colleagues constructed a MIR-DS-PAM system using a pulsed quantum cascade laser as the light source. They tuned the laser to a centre wavelength of 6.0 µm to correspond with an absorption peak from the C=O stretching vibration in proteins. The laser beam was linearly polarized, modulated by a half-wave plate and used to illuminate the target tissue.
Tissue analysis
To validate the functionality of their MIR-DS-PAM technique, the researchers used it to image a formalin-fixed section of engineered heart tissue (EHT). They obtained images at four incident angles and used the acquired photoacoustic data to calculate the photoacoustic amplitude, which visualizes the protein content, as well as the degree of linear dichroism (DoLD) and the orientation angle of linear dichroism (AoLD), which reveal the extracellular matrix alignment.
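As a rough illustration of how fibre orientation can be extracted from a handful of polarization measurements, here is a sketch using a standard Stokes-like four-angle analysis. The cosine response model, the choice of angles (0°, 45°, 90°, 135°) and the normalization are assumptions for illustration, not necessarily the exact definitions used by the POSTECH team.

```python
import numpy as np

def dichroism_from_four_angles(A0, A45, A90, A135):
    """
    Estimate linear-dichroism parameters from photoacoustic amplitudes measured at
    four incident polarisation angles (0, 45, 90, 135 degrees), assuming
    A(theta) ~ A_iso + A_mod * cos(2*(theta - phi)).
    """
    s1 = A0 - A90                               # cos(2*theta) component
    s2 = A45 - A135                             # sin(2*theta) component
    a_iso = 0.5 * (A0 + A90)                    # polarisation-averaged amplitude
    dold = np.hypot(s1, s2) / (2 * a_iso)       # degree of linear dichroism
    aold = 0.5 * np.degrees(np.arctan2(s2, s1)) # orientation angle of linear dichroism
    return a_iso, dold, aold

# Synthetic "fibre" oriented at 30 degrees with 20% polarisation-dependent modulation
theta = np.radians([0, 45, 90, 135])
A = 1.0 + 0.2 * np.cos(2 * (theta - np.radians(30)))
print(dichroism_from_four_angles(*A))           # recovers ~0.2 and ~30 degrees
```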
“Cardiac tissue features highly aligned extracellular matrix with complex fibre orientation and layered architecture, which are critical to its mechanical and electrical function,” Park explains. “These properties make it an ideal model for demonstrating the ability of MIR-DS-PAM to detect physiologically relevant histostructural and fibrosis-related changes.”
The researchers also used MIR-DS-PAM to quantify the structural integrity of EHT during development, using specimens cultured for one to five days before fixing. Analysis of the label-free images revealed that as the tissue matured, the DoLD gradually increased, while the standard deviation of the AoLD decreased – indicating increased protein accumulation with more uniform fibre alignment over time. They note that these results agree with those from immunofluorescence-stained confocal fluorescence microscopy.
Next, they examined diseased EHT with two types of fibrosis: cell-induced fibrosis (CIF) and drug-induced fibrosis (DIF). In the CIF sample, the average photoacoustic amplitude and AoLD uniformity were both lower than found in normal EHT, indicating reduced protein density and disrupted fibre alignment. DIF exhibited a higher photoacoustic amplitude and lower AoLD uniformity than normal EHT, suggesting extensive extracellular matrix accumulation with disorganized orientation.
Both CIF and DIF showed a slight reduction in DoLD, again signifying a disorganized tissue structure, a common hallmark of fibrosis. The two fibrosis types, however, exhibited diverse biochemical profiles and different levels of mechanical dysfunction. The findings demonstrate the ability of MIR-DS-PAM to distinguish diseased from healthy tissue and identify different types of fibrosis. The researchers also imaged a tissue assembly containing both normal and fibrotic EHT to show that MIR-DS-PAM can capture features in a composite sample.
They conclude that MIR-DS-PAM enables label-free monitoring of both tissue development and fibrotic remodelling. As such, the technique shows potential for use within tissue engineering research, as well as providing a diagnostic tool for assessing tissue fibrosis or remodelling in biopsied samples. “Its ability to visualize both biochemical composition and structural alignment could aid in identifying pathological changes in cardiological, musculoskeletal or ocular tissues,” says Park.
“We are currently expanding the application of MIR-DS-PAM to disease contexts where extracellular matrix remodelling plays a central role,” he adds. “Our goal is to identify label-free histological biomarkers that capture both molecular and structural signatures of fibrosis and degeneration, enabling multiparametric analysis in pathological conditions.”
Jaffe joins the Giant Magellan Telescope Corporation as it aims to secure the funding necessary to complete the $2.5bn telescope. (Courtesy: Giant Magellan Telescope – GMTO Corporation)
Astronomer Daniel Jaffe has been appointed the next president of the Giant Magellan Telescope Corporation – the international consortium building the $2.5bn Giant Magellan Telescope (GMT). He succeeds Robert Shelton, who announced his retirement last year after eight years in the role.
Head of astronomy at the University of Texas at Austin from 2011 to 2015, Jaffe then served as the university’s vice-president for research from 2016 to 2025 and as interim provost from 2020 to 2021.
Jaffe has sat on the board of directors of the Association of Universities for Research in Astronomy and the Gemini Observatory and played a role in establishing the University of Texas at Austin’s partnership in the GMT.
Under construction in Chile and expected to be completed in the 2030s, the GMT combines seven mirrors to create a 25.4 m telescope. From the ground it will produce images 4–16 times sharper than those from the James Webb Space Telescope, and will investigate the origins of the chemical elements and search for signs of life on distant planets.
“I am honoured to lead the GMT at this exciting stage,” notes Jaffe. “[It] represents a profound leap in our ability to explore the universe and employ a host of new technologies to make fundamental discoveries.”
“[Jaffe] brings decades of leadership in research, astronomy instrumentation, public-private partnerships, and academia,” noted Taft Armandroff, board chair of the GMTO Corporation. “His deep understanding of the Giant Magellan Telescope, combined with his experience leading large research enterprises and cultivating a collaborative environment, make him exceptionally well suited to lead the observatory through its next phase of construction and toward operations.”
Jaffe joins the GMT at a pivotal time, as it aims to secure the funding necessary to complete the telescope, with just over $1bn in private funds having been pledged so far. The collaboration recently added Northwestern University and the Massachusetts Institute of Technology to its international consortium, taking the number of members to 16 universities and research institutions.
In June 2025 the GMT, which is already 40% completed, received NSF approval confirming that the observatory will advance into its “major facilities final design phase”, one of the final steps before becoming eligible for federal construction funding.
Yet it faces competition from another next-generation telescope – the Thirty Meter Telescope (TMT), whose segmented primary mirror will consist of 492 zero-expansion glass elements forming a 30 m-diameter aperture.
India has been involved in nuclear energy and power for decades, but now the country is turning to small modular nuclear reactors (SMRs) as part of a new, long-term push towards nuclear and renewable energy. In December 2025 the country’s parliament passed a bill that allows private companies for the first time to participate in India’s nuclear programme, which could see them involved in generating power, operating plants and making equipment.
Some commentators are unconvinced that the move will be enough to help meet India’s climate pledge to achieve 500 GW of non-fossil-fuel-based energy generation by 2030. Interestingly, however, India has now joined other nations, such as Russia and China, in taking an interest in SMRs. They could help stem the overall decline in nuclear power, which now accounts for just 9% of electricity generated around the world – down from 17.5% in 1996.
Last year India’s finance minister Nirmala Sitharaman announced a nuclear energy mission funded with 200 billion Indian rupees ($2.2bn) to develop at least five indigenously designed and operational SMRs by 2033. Unlike huge, conventional nuclear plants, such as pressurized heavy-water reactors (PHWRs), most or all components of an SMR are manufactured in factories before being assembled at the reactor site.
SMRs typically generate less than 300 MW of electrical power but – being modular – additional capacity can be brought online quickly and easily, thanks to their lower capital costs, shorter construction times, ability to work with lower-capacity grids and lower carbon emissions. Despite their promise, there are only two fully operating SMRs in the world – both in Russia – although two further high-temperature gas-cooled SMRs are currently being built in China. In June 2025 Rolls-Royce SMR was selected as the preferred bidder by Great British Nuclear to build the UK’s first fleet of SMRs, with plans to provide 470 MW of low-carbon electricity.
Cost benefit analysis
An official at the Department of Atomic Energy told Physics World that part of that mix of five new SMRs in India could be the 200 MW Bharat small modular reactor, which is based on pressurized-water reactor technology and uses slightly enriched uranium as a fuel. Other options are 55 MW small modular reactors, and the Indian government also plans to partner with the private sector to deploy 220 MW Bharat small reactors.
Despite such moves, some are unconvinced that small nuclear reactors could help India scale its nuclear ambitions. “SMRs are still to demonstrate that they can supply electricity at scale,” says Karthik Ganesan, a fellow and director of partnerships at the Council on Energy, Environment and Water (CEEW), a non-profit policy research think-tank based in New Delhi. “SMRs are a great option for captive consumption, where large investment that will take time to start generating is at a premium.”
Ganesan, however, says it is too early to comment on the commercial viability of SMRs as cost reductions from SMRs depend on how much of the technology is produced in a factory and in what quantities. “We are yet to get to that point and any test reactors deployed would certainly not be the ones to benchmark their long-term competitiveness,” he says. “[But] even at a higher tariff, SMRs will still have a use case for industrial consumers who want certainty in long-term tariffs and reliable continuous supply in a world where carbon dioxide emissions will be much smaller than what we see from the power sector today.”
M V Ramana from the University of British Columbia, Vancouver, who works in international security and energy supply, is concerned over the cost efficiency of SMRs compared to their traditional counterparts. “Larger reactors are cheaper on a per-megawatt basis because their material and work requirements do not scale linearly with power capacity,” says Ramana. This, according to Ramana, means that the electricity SMRs produce will be more expensive than nuclear energy from large reactors, which are already far more expensive than renewables such as solar and wind energy.
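Ramana's scaling argument can be made concrete with the classic power-law rule of thumb for plant capital costs; the exponent below (n ≈ 0.6) is a generic engineering assumption used for illustration, not a figure from the article or from Ramana.

```python
# Illustrative only: the "economies of scale" argument with an assumed cost
# exponent. Capital cost is taken to scale as C ~ P**n with n < 1, so the
# cost per MW scales as P**(n - 1) and rises as the plant gets smaller.
def relative_cost_per_mw(p_small, p_large, n=0.6):
    return (p_small / p_large) ** (n - 1)

print(f"{relative_cost_per_mw(300, 1000):.1f}x")   # ~1.6x higher cost per MW for a 300 MW unit
# SMR advocates counter that factory production and learning effects could
# offset this penalty, which is the open question Ganesan alludes to above.
```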
Clean or unclean?
Even if SMRs take over from PHWRs, there is still the question of what to do with their nuclear waste. As Ramana points out, all activities linked to the nuclear fuel chain – from mining uranium to dealing with the radioactive wastes produced – have significant health and environmental impacts. “The nuclear fuel chain is polluting, albeit in a different way from that of fossil fuels,” he says, adding that those pollutants remain hazardous for hundreds of thousands of years. “There is no demonstrated solution to managing these radioactive wastes – nor can there be, given the challenge of trying to ensure that these materials do not come into contact with living beings,” says Ramana.
Ganesan, however, thinks that nuclear energy is still clean as it produces electricity with a much lower environmental footprint, especially when it comes to so-called “criteria pollutants”: ozone; particulate matter; carbon monoxide; lead; sulphur dioxide; and nitrogen dioxide. While nuclear waste still needs to be managed, Ganesan says the associated costs are already included in the price of setting up a reactor. “In due course, with technological development, the burn up will [be] significantly higher and waste generated a lot lesser.”
By studying how light from eight distant quasars is gravitationally lensed as it propagates towards Earth, astronomers have calculated a new value for the Hubble constant – a parameter that describes the rate at which the universe is expanding. The result agrees more closely with previous “late-universe” probes of this constant than it does with calculations based on observations of the cosmic microwave background (CMB) in the early universe, strengthening the notion that we may be misunderstanding something fundamental about how the universe works.
The universe has been expanding ever since the Big Bang nearly 14 billion years ago. We know this, in part, because of observations made in the 1920s by the American astronomer Edwin Hubble. By measuring the redshift of various galaxies, Hubble discovered that galaxies further away from Earth are moving away faster than galaxies that are closer to us. The constant of proportionality between this recession speed and a galaxy’s distance is known as the Hubble constant, H0.
Astronomers have developed several techniques for measuring H0. The problem is that different techniques deliver different values. According to measurements made by the European Space Agency’s Planck satellite of CMB radiation “left over” from the Big Bang, the value of H0 is about 67 kilometres per second per megaparsec (km/s/Mpc), where one Mpc is 3.3 million light years. In contrast, “distance-ladder” measurements, such as those made by the SH0ES collaboration using observations of type Ia supernovae, yield a value of about 73 km/s/Mpc. This discrepancy is known as the Hubble tension.
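To get a feel for what these numbers mean, the short Python sketch below applies Hubble’s law, v = H0 × d, to a hypothetical galaxy 100 Mpc away under the two competing values of H0. The distance is chosen purely for illustration and is not taken from either measurement.

```python
# Hypothetical illustration of Hubble's law, v = H0 * d, and the size of the
# Hubble tension. The galaxy distance chosen here is arbitrary.

H0_planck = 67.0   # km/s/Mpc, early-universe (CMB) value
H0_sh0es = 73.0    # km/s/Mpc, late-universe (distance-ladder) value

distance_mpc = 100.0  # distance to a hypothetical galaxy, in megaparsecs

v_planck = H0_planck * distance_mpc  # recession speed in km/s
v_sh0es = H0_sh0es * distance_mpc

print(f"Recession speed (Planck H0): {v_planck:.0f} km/s")
print(f"Recession speed (SH0ES H0):  {v_sh0es:.0f} km/s")
print(f"Relative difference: {100 * (v_sh0es - v_planck) / v_planck:.1f}%")
```

The roughly 9% difference between the two predictions is the Hubble tension in miniature.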
Time-delay cosmography
In the latest work, the TDCOSMO collaboration, which includes astronomers Kenneth Wong and Eric Paic of the University of Tokyo, Japan, measured H0 using a technique called time-delay cosmography. This well-established method dates back to 1964 and uses the fact that massive galaxies can act as lenses, deflecting the light from objects behind them so that from our perspective, these objects appear distorted.
“This is called gravitational lensing, and if the circumstances are right, we’ll actually see multiple distorted images, each of which will have taken a slightly different pathway to get to us, taking different amounts of time,” Wong explains.
By looking for identical changes in these images that appear slightly out of sync, astronomers can measure the differences in the time taken for the light forming each image to reach Earth. Then, by combining these data with estimates of the distribution of the mass of the distorting galactic lens, they can calculate H0.
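The essence of this step can be captured in a few lines: for a fixed lens-mass model, the predicted delay between images scales inversely with H0, so comparing a measured delay with the delay predicted at a fiducial H0 rescales the constant. The Python sketch below shows this scaling with invented numbers; it is a schematic illustration, not the TDCOSMO analysis pipeline.

```python
# Schematic sketch of the core logic of time-delay cosmography: for a fixed
# lens mass model, the predicted time delay between images scales as 1/H0
# (through the "time-delay distance"). Comparing a measured delay with the
# delay predicted at a fiducial H0 therefore rescales H0. All numbers below
# are invented for illustration.

H0_fiducial = 70.0        # km/s/Mpc assumed when building the lens model
delay_predicted = 30.0    # days, predicted by the lens model at H0_fiducial
delay_measured = 29.3     # days, from monitoring the lensed quasar images

# delay is proportional to 1/H0, so H0_inferred = H0_fiducial * (predicted / measured)
H0_inferred = H0_fiducial * delay_predicted / delay_measured
print(f"Inferred H0 ~ {H0_inferred:.1f} km/s/Mpc")
```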
Based on these measurements, they obtained an H0 value of roughly 71.6 km/s/Mpc, which is more consistent with current-day observations (such as those from SH0ES) than early-universe ones (such as those from Planck). Wong explains that this discrepancy supports the idea that the Hubble tension arises from real physics, not just some unknown error in the various methods. “Our measurement is completely independent of other methods, both early- and late-universe, so if there are any systematic uncertainties in those, we should not be affected by them,” he says.
The astronomers say that the SLACS and SL2S sample data are in excellent agreement with the new TDCOSMO-2025 sample, while the new measurements improve the precision of H0 to 4.6%. However, Paic notes that nailing down the value of H0 to a level that would “definitely confirm” the Hubble tension will require a precision of 1–2%. “This could be possible by increasing the number of objects observed as well as ruling out any systematic errors as yet unaccounted for,” he says.
Wong adds that while the TDCOSMO-2025 dataset contains its own uncertainties, multiple independent measurements should, in principle, strengthen the result. “One of the largest sources of uncertainty is the fact that we don’t know exactly how the mass in the lens galaxies is distributed,” he explains. “It is usually assumed that the mass follows some simple profile that is consistent with observations, but it is hard to be sure and this uncertainty can directly influence the values we calculate.”
The biggest hurdle, Wong adds, will “probably be addressing potential sources of systematic uncertainty, making sure we have thought of all the possible ways that our result could be wrong or biased and figuring out how to handle those uncertainties.”
Taking medication as and when prescribed is crucial for it to have the desired effect. But nearly half of people with chronic conditions don’t adhere to their medication regimes, a serious problem that leads to preventable deaths, drug resistance and increased healthcare costs. So how can medical professionals ensure that patients are taking their medicine as prescribed?
A team at Massachusetts Institute of Technology (MIT) has come up with a solution: a drug capsule containing an RFID tag that uses radiofrequency (RF) signals to communicate that it has been swallowed, and then bioresorbs into the body.
“Medication non-adherence remains a major cause of preventable morbidity and cost, but existing ingestible tracking systems rely on non-degradable electronics,” explains project leader Giovanni Traverso. “Our motivation was to create a passive, battery-free adherence sensor that confirms ingestion while fully biodegrading, avoiding long-term safety and environmental concerns associated with persistent electronic devices.”
The device – named SAFARI (smart adherence via Faraday cage and resorbable ingestible) – incorporates an RFID tag with a zinc foil RF antenna and an RF chip, as well as the drug payload, inside an ingestible gelatin capsule. The capsule is coated with a mixture of cellulose and molybdenum particles, which blocks the transit of any RF signals.
SAFARI capsules Photos of the capsules with (left) and without (right) the RF-blocking coating. (Courtesy: Mehmet Say)
Once swallowed, however, this shielding layer breaks down in the stomach. The RFID tag (which can be preprogrammed with information such as dose metadata, manufacturing details and unique ID) can then be wirelessly queried by an external reader and return a signal from inside the body confirming that the medication has been ingested.
The capsule itself dissolves upon exposure to digestive fluids, releasing the desired medication; the metal antenna components also dissolve completely in the stomach. The use of biodegradable materials is key as it eliminates the need for device retrieval and minimizes the risk of gastrointestinal (GI) blockage. The tiny (0.16 mm²) RFID chip remains intact and should safely leave the body through the GI tract.
Traverso suggests that the first clinical applications for the SAFARI capsule will likely be high-risk settings in which objective ingestion confirmation is particularly valuable. “[This includes] tuberculosis, HIV, transplant immunosuppression or cardiovascular therapies, where missed doses can have serious clinical consequences,” he tells Physics World.
In vivo demonstration
To assess the degradation of the SAFARI capsule and its components in vitro, Traverso and colleagues placed the capsule into simulated gastric fluid at physiological temperature (37 °C). The RF shielding coating dissolved in 10–20 min, while the capsule and the zinc layer in the RFID tag disintegrated into pieces after one day.
Next, the team endoscopically delivered the SAFARI capsules into the stomachs of sedated pigs, chosen as they have a similarly sized GI tract to humans. Once in contact with gastric fluid in the stomach, the capsule coating swelled and then partially dissolved (as seen using endoscopic images), exposing the RFID tag. The researchers found that, in general, the tag and capsule parts disintegrated in the stomach within about 24 h.
A panel antenna positioned 10 cm from the animal captured the tag data. Even with the RFID tags immersed in gastric fluid, the external receiver could effectively record signals in the frequency range of 900–925 MHz, with RSSI (received signal strength indicator) values ranging from 65 to 78 dB – demonstrating that the tag could effectively transmit RF signals from inside the stomach.
The researchers conclude that this successful use of SAFARI in swine indicates the potential for translation to clinical research. They note that the device should be safe for human ingestion as its composite materials meet established dietary and biomedical exposure limits, with levels of zinc and molybdenum orders of magnitude below those associated with toxicity.
“We have demonstrated robust performance and safety in large-animal models, which is an important translational milestone,” explains first author Mehmet Girayhan Say. “Before human studies, further work is needed on chronic exposure with characterization of any material accumulation upon repeated dosing, as well as user-centred integration of external readers to support real-world clinical workflows.”
Physicists at the University of Stuttgart, Germany have teleported a quantum state between photons generated by two different semiconductor quantum dot light sources located several metres apart. Though the distance involved in this proof-of-principle “quantum repeater” experiment is small, members of the team describe the feat as a prerequisite for future long-distance quantum communications networks.
“Our result is particularly exciting because such a quantum Internet will encompass these types of distant quantum nodes and will require quantum states that are transmitted among these different nodes,” explains Tim Strobel, a PhD student at Stuttgart’s Institute of Semiconductor Optics and Functional Interfaces (IHFG) and the lead author of a paper describing the research. “It is therefore an important step in showing that remote sources can be effectively interfaced in this way in quantum teleportation experiments.”
In the Stuttgart study, one of the quantum dots generates a single photon while the other produces a pair of photons that are entangled – meaning that the quantum state of one photon is closely linked to the state of the other, no matter how far apart they are. One of the photons in the entangled pair then travels to the other quantum dot and interferes with the photon there. This process produces a superposition that allows the information encapsulated in the single photon to be transferred to the distant “partner” photon from the pair.
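For readers who want to see the logic of teleportation spelled out, the NumPy sketch below simulates the textbook gate-based protocol: a shared entangled pair, a joint measurement on the unknown state and one half of the pair, and a classically conditioned correction on the other half. It illustrates the information flow only; it is not a model of the photonic experiment described here.

```python
import numpy as np

# Textbook simulation of quantum teleportation. This illustrates the protocol's
# information flow (shared entangled pair + joint measurement + classically
# conditioned correction); it is NOT a model of the photonic setup in the article.

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Qubit 0: the unknown state to teleport; qubits 1 and 2: a shared Bell pair
psi = np.array([0.6, 0.8j])                              # arbitrary normalized state
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)                               # 3-qubit state, 8 amplitudes

# CNOT with qubit 0 as control and qubit 1 as target, acting on the 3-qubit space
CNOT01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    b = [(i >> k) & 1 for k in (2, 1, 0)]                # bits of qubits 0, 1, 2
    if b[0] == 1:
        b[1] ^= 1
    j = (b[0] << 2) | (b[1] << 1) | b[2]
    CNOT01[j, i] = 1

state = kron(H, I, I) @ (CNOT01 @ state)                 # rotate into the Bell basis

# For each possible measurement outcome (m0, m1) on qubits 0 and 1, project,
# apply the correction Z^m0 X^m1 to qubit 2 and compare with the input state.
for m0 in (0, 1):
    for m1 in (0, 1):
        idx = [(m0 << 2) | (m1 << 1) | q2 for q2 in (0, 1)]
        branch = state[idx]                              # unnormalized state of qubit 2
        branch = branch / np.linalg.norm(branch)
        corrected = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ branch
        fidelity = abs(np.vdot(psi, corrected)) ** 2     # should be 1 in every branch
        print(f"outcome ({m0},{m1}): fidelity = {fidelity:.6f}")
```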
Quantum frequency converters
Strobel says the most challenging part of the experiment was making photons from two remote quantum dots interfere with each other. Such interference is only possible if the two particles are indistinguishable, meaning they must be similar in every regard, be it in their temporal shape, spatial shape or wavelength. In contrast, each quantum dot is unique, especially in terms of its spectral properties, and each one emits photons at slightly different wavelengths.
To close the gap, the team used devices called quantum frequency converters to precisely tune the wavelength of the photons and match them spectrally. The researchers also used the converters to shift the original wavelengths of the photons emitted from the quantum dots (around 780 nm) to a wavelength commonly used in telecommunications (1515 nm) without altering the quantum state of the photons. This offers further advantages: “Being at telecommunication wavelengths makes the technology compatible with the existing global optical fibre network, an important step towards real-life applications,” Strobel tells Physics World.
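As a rough back-of-the-envelope check, if the conversion is a three-wave-mixing (difference-frequency) process that conserves photon energy, the pump wavelength needed to bridge 780 nm and 1515 nm follows directly, as in the Python snippet below. The article does not quote the pump wavelength, so the value printed here is only an inference for illustration.

```python
# Back-of-the-envelope check of the wavelength shift via quantum frequency
# conversion. Assuming a difference-frequency process that conserves photon
# energy, 1/lam_in = 1/lam_out + 1/lam_pump. The pump wavelength is an
# inference for illustration only; it is not quoted in the article.

lam_in = 780e-9    # m, quantum-dot emission wavelength
lam_out = 1515e-9  # m, telecom-band target wavelength

lam_pump = 1 / (1 / lam_in - 1 / lam_out)
print(f"Implied pump wavelength: {lam_pump * 1e9:.0f} nm")  # roughly 1608 nm
```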
Proof-of-principle experiment
In this work, the quantum dots were separated by an optical fibre just 10 m in length. However, the researchers aim to push this to considerably greater distances in the future. Strobel notes that the Stuttgart study was published in Nature Communications back-to-back with an independent work carried out by researchers led by Rinaldo Trotta of Sapienza University in Rome, Italy. The Rome-based group demonstrated quantum state teleportation across the Sapienza University campus at shorter wavelengths, enabled by the brightness of their quantum-dot source.
“These two papers that we published independently strengthen the measurement outcomes, demonstrating the maturity of quantum dot light sources in this domain,” Strobel says. Semiconducting quantum dots are particularly attractive for this application, he adds, because as well as producing both single and entangled photons on demand, they are also compatible with other semiconductor technologies.
Fundamental research pays off
Simone Luca Portalupi, who leads the quantum optics group at IHFG, notes that “several years of fundamental research and semiconductor technology are converging into these quantum teleportation experiments”. For Peter Michler, who led the study team, the next step is to leverage these advances to bring quantum-dot-based teleportation technology out of a controlled laboratory environment and into the real world.
Strobel points out that there is already some precedent for this, as one of the group’s previous studies showed that they could maintain photon entanglement across a 36-km fibre link deployed across the city of Stuttgart. “The natural next step would be to show that we can teleport the state of a photon across this deployed fibre link,” he says. “Our results will stimulate us to improve each building block of the experiment, from the sample to the setup.”
This episode of the Physics World Weekly podcast features a conversation with Tim Prior and John Devaney of the National Physical Laboratory (NPL), which is the UK’s national metrology institute.
Prior is NPL’s quantum programme manager and Devaney is its quantum standards manager. They talk about NPL’s central role in the recent launch of NMI-Q, which brings together some of the world’s leading national metrology institutes to accelerate the development and adoption of quantum technologies.
Prior and Devaney describe the challenges and opportunities of developing metrology and standards for rapidly evolving technologies including quantum sensors, quantum computing and quantum cryptography. They talk about the importance of NPL’s collaborations with industry and academia and explore the diverse career opportunities for physicists at NPL. Prior and Devaney also talk about their own careers and share their enthusiasm for working in the cutting-edge and fast-paced field of quantum metrology.
This podcast is sponsored by the National Physical Laboratory.
Carbon nanotube arrays are designed to investigate the behaviour of electrons in low-dimensional systems. By arranging well-aligned 1D nanotubes into a 2D film, the researchers create a coupled-wire structure that allows them to study how electrons move and interact as the system transitions between different dimensionalities. Using a gate electrode positioned on top of the array, the researchers were able to tune both the carrier density (the number of electrons and holes in a unit area) and the strength of electron–electron interactions, enabling controlled access to different transport regimes. Depending on the regime, the nanotubes behave as weakly coupled 1D channels, in which electrons move along each nanotube; as a 2D Fermi liquid, in which electrons can move between nanotubes and the film behaves like a conventional metal; or, at low carrier densities, as a set of quantum-dot-like islands showing Coulomb blockade, in which sections of the nanotubes become isolated.
The dimensional transitions are set by two key temperatures: T₂D, where electrons begin to hop between neighbouring nanotubes, and T₁D, where the system behaves as a Luttinger liquid – a 1D state in which electrons cannot easily pass each other and therefore move in a strongly correlated, collective way. Changing the number of holes in the nanotubes changes how strongly the tubes interact with each other. This controls when the system stops acting like separate 1D wires and when strong interactions make parts of the film break up into isolated regions that show Coulomb blockade.
The researchers built a phase diagram by looking at how the conductance changes with temperature and voltage, and by checking how well it follows power‑law behaviour at different energy ranges. This approach allows them to identify the boundaries between Tomonaga–Luttinger liquid, Fermi liquid and Coulomb blockade phases across a wide range of gate voltages and temperatures.
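A typical signature of the Tomonaga–Luttinger liquid phase is a power-law dependence of the conductance on temperature, G ∝ T^α. The Python sketch below, using synthetic data, shows how such an exponent can be read off from a straight-line fit on a log–log scale; it is illustrative only and is not the researchers’ analysis code.

```python
import numpy as np

# Illustrative extraction of a power-law exponent from conductance-versus-
# temperature data, G ~ T**alpha, as used to diagnose Tomonaga-Luttinger
# liquid behaviour. The data below are synthetic.

rng = np.random.default_rng(0)
T = np.geomspace(1.0, 50.0, 20)                            # temperature, K
alpha_true = 0.6
G = 1e-6 * T**alpha_true * rng.normal(1.0, 0.02, T.size)   # conductance in siemens, with noise

# A power law is a straight line on a log-log plot; its slope is alpha.
slope, intercept = np.polyfit(np.log(T), np.log(G), 1)
print(f"Fitted exponent alpha ~ {slope:.2f} (true value {alpha_true})")
```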
Overall, the work demonstrates a continuous crossover between 2D, 1D and 0D electronic behaviour in a controllable nanotube array. This provides an experimentally accessible platform for studying correlated low‑dimensional physics and offers insights relevant to the development of nanoscale electronic devices and future carbon nanotube technologies.
The CMS Collaboration investigated in detail events in which a top quark and an anti‑top quark are produced together in high‑energy proton–proton collisions at √s = 13 TeV, using the full 138 fb⁻¹ dataset collected between 2016 and 2018. The top quark is the heaviest fundamental particle and decays almost immediately after being produced in high-energy collisions. As a consequence, the formation of a bound top–antitop state was long considered highly unlikely and had never been observed. The anti-top quark has the same mass and lifetime as the top quark but opposite charges. When a top quark and an anti-top quark are produced together, they form a top-antitop pair (tt̄).
Focusing on events with two charged leptons (the decays of the top and anti-top quarks yield two electrons, two muons or one electron and one muon) and multiple jets (sprays of particles associated with top-quark decay), the analysis examines the invariant mass of the top–antitop system along with two angular observables that directly probe how the spins of the top and anti-top quarks are correlated. These measurements allow the team to compare the data with the prediction for non-resonant tt̄ production based on fixed-order perturbative quantum chromodynamics (QCD), which is what physicists normally use to calculate how quarks behave according to the standard model of particle physics.
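The invariant mass at the heart of this analysis is standard four-vector arithmetic: m² = (E₁ + E₂)² − |p₁ + p₂|² in natural units. The Python snippet below evaluates it for a pair of made-up top-quark four-momenta; the real CMS reconstruction from leptons, jets and missing energy is, of course, far more involved.

```python
import numpy as np

# Illustration of the invariant mass of a two-particle system from its
# four-momenta, m^2 = (E1 + E2)^2 - |p1 + p2|^2 (natural units, GeV).
# The momenta below are invented for illustration only.

m_top = 172.5  # GeV, top-quark mass used to build the example four-vectors

def four_momentum(mass, px, py, pz):
    p = np.array([px, py, pz], dtype=float)
    energy = np.sqrt(mass**2 + p @ p)
    return energy, p

E1, p1 = four_momentum(m_top, 40.0, 10.0, 85.0)    # top quark (made up)
E2, p2 = four_momentum(m_top, -35.0, -12.0, -60.0)  # anti-top quark (made up)

E_tot = E1 + E2
p_tot = p1 + p2
m_ttbar = np.sqrt(E_tot**2 - p_tot @ p_tot)
print(f"Invariant mass of the ttbar pair: {m_ttbar:.1f} GeV")  # lies just above the ~345 GeV threshold
```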
Near the kinematic threshold where the top–antitop pair is produced, CMS observes a significant excess of events relative to the QCD prediction. The number of extra events they see can be translated into a production rate. Using a simplified model based on non-relativistic QCD, they estimate that this excess corresponds to a cross section of about 8.8 picobarns, with an uncertainty of roughly +1.2/–1.4 picobarns. The pattern of the excess, including its spin-correlation features, is consistent with the production of a colour-singlet pseudoscalar (a top–antitop pair in the ¹S₀ state, i.e. the simplest, lowest-energy configuration), and therefore with the prediction of non-relativistic QCD near the tt̄ threshold. The statistical significance of the excess exceeds five standard deviations, indicating that the effect is unlikely to be a statistical fluctuation. Researchers want to find a toponium-like state because it would reveal how the strongest force in nature behaves at the highest energies, test key theories of heavy-quark physics, and potentially expose new physics beyond the Standard Model.
The researchers emphasise that modelling the tt̄ threshold region is theoretically challenging, and that alternative explanations remain possible. Nonetheless, the result aligns with long-standing predictions from non-relativistic QCD that heavy quarks could form short-lived bound states near threshold. The analysis also showcases spin correlation as an effective means to discover and characterise such effects, which were previously considered to be beyond the reach of experimental capabilities. Starting with the confirmation by the ATLAS Collaboration last July, this observation has sparked, and continues to inspire, follow-up theoretical and experimental work, opening up a new field of study involving bound states of heavy quarks and providing new insight into the behaviour of the strong force at high energies.
The 2026 SPIE Photonics West meeting takes place in San Francisco, California, from 17 to 22 January. The premier event for photonics research and technology, Photonics West incorporates more than 100 technical conferences covering topics including lasers, biomedical optics, optoelectronics, quantum technologies and more.
As well as the conferences, Photonics West also offers 60 technical courses and a new Career Hub with a co-located job fair. There are also five world-class exhibitions featuring over 1500 companies and incorporating industry-focused presentations, product launches and live demonstrations. The first of these is the BiOS Expo, which begins on 17 January and examines the latest breakthroughs in biomedical optics and biophotonics technologies.
Then starting on 20 January, the main Photonics West Exhibition will host more than 1200 companies and showcase the latest innovative optics and photonics devices, components, systems and services. Alongside, the Quantum West Expo features the best in quantum-enabling technology advances, the AR | VR | MR Expo brings together leading companies in XR hardware and systems and – new for 2026 – the Vision Tech Expo highlights cutting-edge vision, sensing and imaging technologies.
Here are some of the product innovations on show at this year’s event.
Enabling high-performance photonics assembly with SmarAct
As photonics applications increasingly require systems with high complexity and integration density, manufacturers face a common challenge: how to assemble, align and test optical components with nanometre precision – quickly, reliably and at scale. At Photonics West, SmarAct presents a comprehensive technology portfolio addressing exactly these demands, spanning optical assembly, fast photonics alignment, precision motion and advanced metrology.
Rapid and reliable SmarAct’s technology portfolio enables assembly, alignment and testing of optical components with nanometre precision. (Courtesy: SmarAct)
A central highlight is SmarAct’s Optical Assembly Solution, presented together with a preview of a powerful new software platform planned for release in late-Q1 2026. This software tool is designed to provide exceptional flexibility for implementing automation routines and process workflows into user-specific control applications, laying the foundation for scalable and future-proof photonics solutions.
For high-throughput applications, SmarAct showcases its Fast Photonics Alignment capabilities. By combining high-dynamic motion systems with real-time feedback and controller-based algorithms, SmarAct enables rapid scanning and active alignment of PICs and optical components such as fibres, fibre array units, lenses, beam splitters and more. These solutions significantly reduce alignment time while maintaining sub-micrometre accuracy, making them ideal for demanding photonics packaging and assembly tasks.
Both the Optical Assembly Solution and Fast Photonics Alignment are powered by SmarAct’s electromagnetic (EM) positioning axes, which form the dynamic backbone of these systems. The direct-drive EM axes combine high speed, high force and exceptional long-term durability, enabling fast scanning, smooth motion and stable positioning even under demanding duty cycles. Their vibration-free operation and robustness make them ideally suited for high-throughput optical assembly and alignment tasks in both laboratory and industrial environments.
Precision feedback is provided by SmarAct’s advanced METIRIO optical encoder family, designed to deliver high-resolution position feedback for demanding photonics and semiconductor applications. The METIRIO stands out by offering sub-nanometre position feedback in an exceptionally compact and easy-to-integrate form factor. Compatible with linear, rotary and goniometric motion systems – and available in vacuum-compatible designs – the METIRIO is ideally suited for space-constrained photonics setups, semiconductor manufacturing, nanopositioning and scientific instrumentation.
For applications requiring ultimate measurement performance, SmarAct presents the PICOSCALE Interferometer and Vibrometer. These systems provide picometre-level displacement and vibration measurements directly at the point of interest, enabling precise motion tracking, dynamic alignment, and detailed characterization of optical and optoelectronic components. When combined with SmarAct’s precision stages, they form a powerful closed-loop solution for high-yield photonics testing and inspection.
Together, SmarAct’s motion, metrology and automation solutions form a unified platform for next-generation photonics assembly and alignment.
Visit SmarAct at booth #3438 at Photonics West and booth #8438 at BiOS to discover how these technologies can accelerate your photonics workflows.
Avantes previews AvaSoftX software platform and new broadband light source
Photonics West 2026 will see Avantes present the first live demonstration of its completely redesigned software platform, AvaSoftX, together with a sneak peek of its new broadband light source, the AvaLight-DH-BAL. The company will also run a series of application-focused live demonstrations, highlighting recent developments in laser-induced breakdown spectroscopy (LIBS), thin-film characterization and biomedical spectroscopy.
AvaSoftX is developed to streamline the path from raw spectra to usable results. The new software platform offers preloaded applications tailored to specific measurement techniques and types, such as irradiance, LIBS, chemometry and Raman. Each application presents the controls and visualizations needed for that workflow, reducing time and the risk of user error.
Streamlined solution The new AvaSoftX software platform offers next-generation control and data handling. (Courtesy: Avantes)
Smart wizards guide users step-by-step through the setup of a measurement – from instrument configuration and referencing to data acquisition and evaluation. For more advanced users, AvaSoftX supports customization with scripting and user-defined libraries, enabling the creation of reusable methods and application-specific data handling. The platform also includes integrated instruction videos and online manuals to support the users directly on the platform.
The software features an accessible dark interface optimized for extended use in laboratory and production environments. Improved LIBS functionality will be highlighted through a live demonstration that combines AvaSoftX with the latest Avantes spectrometers and light sources.
Also making its public debut is the AvaLight-DH-BAL, a new and improved deuterium–halogen broadband light source designed to replace the current DH product line. The system delivers continuous broadband output from 215 to 2500 nm and combines a more powerful halogen lamp with a reworked deuterium section for improved optical performance and stability.
A switchable deuterium and halogen optical path is combined with deuterium peak suppression to improve dynamic range and spectral balance. The source is built into a newly developed, more robust housing to improve mechanical and thermal stability. Updated electronics support adjustable halogen output, a built-in filter holder, and both front-panel and remote-controlled shutter operation.
The AvaLight-DH-BAL is intended for applications requiring stable, high-output broadband illumination, including UV–VIS–NIR absorbance spectroscopy, materials research and thin-film analysis. The official launch date for the light source, as well as the software, will be shared in the near future.
Avantes will also run a series of live application demonstrations. These include a LIBS setup for rapid elemental analysis, a thin-film measurement system for optical coating characterization, and a biomedical spectroscopy demonstration focusing on real-time measurement and analysis. Each demo will be operated using the latest Avantes hardware and controlled through AvaSoftX, allowing visitors to assess overall system performance and workflow integration. Avantes’ engineering team will be available throughout the event.
For product previews, live demonstrations and more, meet Avantes at booth #1157.
HydraHarp 500: high-performance time tagger redefines precision and scalability
One year after its successful market introduction, the HydraHarp 500 continues to be a standout highlight at PicoQuant’s booth at Photonics West. Designed to meet the growing demands of advanced photonics and quantum optics, the HydraHarp 500 sets benchmarks in timing performance, scalability and flexible interfacing.
At its core, the HydraHarp 500 delivers exceptional timing precision combined with ultrashort jitter and dead time, enabling reliable photon timing measurements even at very high count rates. With support for up to 16 fully independent input channels plus a common sync channel, the system allows true simultaneous multichannel data acquisition without cross-channel dead time, making it ideal for complex correlation experiments and high-throughput applications.
At the forefront of photon timing The high-resolution multichannel time tagger HydraHarp 500 offers picosecond timing precision. It combines versatile trigger methods with multiple interfaces, making it ideally suited for demanding applications that require many input channels and high data throughput. (Courtesy: PicoQuant)
A key strength of the HydraHarp 500 is its high flexibility in detector integration. Multiple trigger methods support a wide range of detector technologies, from single-photon avalanche diodes (SPADs) to superconducting nanowire single-photon detectors (SNSPDs). Versatile interfaces, including USB 3.0 and a dedicated FPGA interface, ensure seamless data transfer and easy integration into existing experimental setups. For distributed and synchronized systems, White Rabbit compatibility enables precise cross-device timing coordination.
Engineered for speed and efficiency, the HydraHarp 500 combines ultrashort per-channel dead time with industry-leading timing performance, ensuring complete datasets and excellent statistical accuracy even under demanding experimental conditions.
Looking ahead, PicoQuant is preparing to expand the HydraHarp family with the upcoming HydraHarp 500 L. This new variant will set new standards for data throughput and scalability. With outstanding timing resolution, excellent timing precision and up to 64 flexible channels, the HydraHarp 500 L is engineered for the highest-throughput applications and is powered – for the first time – by USB 3.2 Gen 2×2, making it ideal for rapid, large-volume data acquisition.
With the HydraHarp 500 and the forthcoming HydraHarp 500 L, PicoQuant continues to redefine what is possible in photon timing, delivering precision, scalability and flexibility for today’s and tomorrow’s photonics research. For more information, visit www.picoquant.com or contact us at info@picoquant.com.
Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.
“It’s hard to say when exactly sending people to Mars became a goal for humanity,” ponders author Scott Solomon in his new book Becoming Martian: How Living in Space Will Change Our Bodies and Minds – and I think we’d all agree. Ten years ago, I’m not sure any of us thought even returning to the Moon was seriously on the cards. Yet here we are, suddenly living in a second space age, where the first people to purchase one-way tickets to the Red Planet have likely already been born.
The technology required to ship humans to Mars, and the infrastructure required to keep them alive, is well constrained, at least in theory. One could write thousands of words discussing the technical details of reusable rocket boosters and underground architectures. However, Becoming Martian is not that book. Instead, it deals with the effect Martian life will have on the human body – both in the short term across a single lifetime; and in the long term, on evolutionary timescales.
This book’s strength lies in its authorship: it is not written by a physicist enthralled by the engineering challenge of Mars, nor by an astronomer predisposed to romanticizing space exploration. Instead, Solomon is a research biologist who teaches ecology, evolutionary biology and scientific communication at Rice University in Houston, Texas.
Becoming Martian starts with a whirlwind, stripped-down tour of Mars across mythology, astronomy, culture and modern exploration. This effectively sets out the core issue: Mars is fundamentally different from Earth, and life there is going to be very difficult. Solomon goes on to describe the effects of space travel and microgravity on humans that we know of so far: anaemia, muscle wastage, bone density loss and increased radiation exposure, to name just a few.
Where the book really excels, though, is when Solomon uses his understanding of evolutionary processes to extend these findings and conclude how Martian life would be different. For example, childbirth becomes a very risky business on a planet with about one-third of Earth’s gravity. The loss of bone density translates into increased pelvic fractures, and the muscle wastage into an inability for the uterus to contract strongly enough. The result? All Martian births will likely need to be C-sections.
Solomon applies his expertise to the whole human body, including our “entourage” of micro-organisms. The indoor life of a Martian is likely to affect the immune system to the degree that contact with an Earthling would be immensely risky. “More than any other factor, the risk of disease transmission may be the wedge that drives the separation between people on the two planets,” he writes. “It will, perhaps inevitably, cause the people on Mars to truly become Martians.” Since many diseases are harboured or spread by animals, there is a compelling argument that Martians would be vegan and – a dealbreaker for some I imagine – unable to have any pets. So no dogs, no cats, no steak and chips on Mars.
Let’s get physical
The most fascinating part of the book for me is how Solomon repeatedly links the biological and psychological research with the more technical aspects of designing a mission to Mars. For example, the first exploratory teams should have odd numbers, to make decisions easier and us-versus-them rifts less likely. The first colonies will also need to number between 10,000 and 11,000 individuals to ensure enough genetic diversity to protect against evolutionary concepts such as genetic drift and population crashes.
Amusingly, the one part of human activity most important for a sustainable colony – procreation – is the most understudied. When a NASA scientist suggested that a colony would need private spaces with soundproof walls, the backlash was so severe that NASA had to reassure Congress that taxpayer dollars were not being “wasted” encouraging sexual activity among astronauts.
Solomon’s writing is concise yet extraordinarily thorough – there is always just enough for you to feel you can understand the importance and nuance of topics ranging from Apollo-era health studies to evolution, and from AI to genetic engineering. The book is impeccably researched, and he presents conflicting ethical viewpoints so deftly, and without apparent judgement, that you are left with plenty of space to imprint your own opinions. So much so that when Solomon shares his own stance on the colonization of Mars in the epilogue, it comes as a bit of a surprise.
In essence, this book lays out a convincing argument that it might be our biology, not our technology, that limits humanity’s expansion to Mars. And if we are able to overcome those limitations, either with purposeful genetic engineering or passive evolutionary change, this could mean we have shed our humanity.
Becoming Martian is one of the best popular-science books I have read within the field, and it is an uplifting read, despite dealing with some of the heaviest ethical questions in space sciences. Whether you’re planning your future as a Martian or just wondering if humans can have sex in space, this book should be on your wish list.
Using incidental data collected by the BepiColombo mission, an international research team has made the first detailed measurements of how coronal mass ejections (CMEs) reduce cosmic-ray intensity at varying distances from the Sun. Led by Gaku Kinoshita at the University of Tokyo, the team hopes that their approach could help improve the accuracy of space weather forecasts following CMEs.
CMEs are dramatic bursts of plasma originating from the Sun’s outer atmosphere. In particularly violent events, this plasma can travel through interplanetary space, sometimes interacting with Earth’s magnetic field to produce powerful geomagnetic storms. These storms result in vivid aurorae in Earth’s polar regions and can also damage electronics on satellites and spacecraft. Extreme storms can even affect electrical grids on Earth.
To prevent such damage, astronomers aim to predict the path and intensity of CME plasma as accurately as possible – allowing endangered systems to be temporarily shut down with minimal disruption. According to Kinoshita’s team, one source of information has so far been largely unexplored.
Pushing back cosmic rays
Within interplanetary space, CME plasma interacts with cosmic rays, which are energetic charged particles of extrasolar origin that permeate the solar system with a roughly steady flux. When an interplanetary CME (ICME) passes by, it temporarily pushes back these cosmic rays, creating a local decrease in their intensity.
“This phenomenon is known as the Forbush decrease effect,” Kinoshita explains. “It can be detected even with relatively simple particle detectors, and reflects the properties and structure of the passing ICME.”
In principle, cosmic-ray observations can provide detailed insights into the physical profile of a passing ICME. But despite their relative ease of detection, Forbush decreases had not yet been observed simultaneously by detectors at multiple distances from the Sun, leaving astronomers unclear on how propagation distance affects their severity.
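In practice, a Forbush decrease is quantified as the percentage drop in a detector’s cosmic-ray count rate relative to the pre-event baseline. The Python snippet below shows the arithmetic using invented hourly count rates, not BepiColombo data.

```python
import numpy as np

# Illustration of how a Forbush decrease is quantified: the percentage drop
# in a detector's cosmic-ray count rate relative to the pre-event baseline.
# The hourly count rates below are synthetic, not BepiColombo data.

baseline_rate = 120.0  # counts per hour before the ICME arrives
rates_during_event = np.array([118.0, 110.0, 104.0, 101.0, 103.0, 108.0, 114.0])

decrease_percent = 100.0 * (baseline_rate - rates_during_event) / baseline_rate
print(f"Maximum Forbush decrease: {decrease_percent.max():.1f}%")
```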
Now, Kinoshita’s team have explored this spatial relationship using BepiColombo, a European and Japanese mission that will begin orbiting Mercury in November 2026. While the mission focuses on Mercury’s surface, interior, and magnetosphere, it also carries non-scientific equipment capable of monitoring cosmic rays and solar plasma in its surrounding environment.
“Such radiation monitoring instruments are commonly installed on many spacecraft for engineering purposes,” Kinoshita explains. “We developed a method to observe Forbush decreases using a non-scientific radiation monitor onboard BepiColombo.”
Multiple missions
The team combined these measurements with data from specialized radiation-monitoring missions, including ESA’s Solar Orbiter, which is currently probing the inner heliosphere from inside Mercury’s orbit, as well as a network of near-Earth spacecraft. Together, these instruments allowed the researchers to build a detailed, distance-dependent profile of a week-long ICME that occurred in March 2022.
Just as predicted, the measurements revealed a clear relationship between the Forbush decrease effect and distance from the Sun.
“As the ICME evolved, the depth and gradient of its associated cosmic-ray decrease changed accordingly,” Kinoshita says.
With this method now established, the team hopes it can be applied to non-scientific radiation monitors on other missions throughout the solar system, enabling a more complete picture of the distance dependence of ICME effects.
“An improved understanding of ICME propagation processes could contribute to better forecasting of disturbances such as geomagnetic storms, leading to further advances in space weather prediction,” Kinoshita says. In particular, this approach could help astronomers model the paths and intensities of solar plasma as soon as a CME erupts, improving preparedness for potentially damaging events.
When particle colliders smash particles into each other, the resulting debris cloud sometimes contains a puzzling ingredient: light atomic nuclei. Such nuclei have relatively low binding energies, and they would normally break down at temperatures far below those found in high-energy collisions. Somehow, though, their signature remains. This mystery has stumped physicists for decades, but researchers in the ALICE collaboration at CERN have now figured it out. Their experiments showed that light nuclei form via a process called resonance-decay formation – a result that could pave the way towards searches for physics beyond the Standard Model.
Baryon resonance
The ALICE team studied deuterons (a bound proton and neutron) and antideuterons (a bound antiproton and antineutron) that form in experiments at CERN’s Large Hadron Collider. Both deuterons and antideuterons are fragile, and their binding energies of 2.2 MeV would seemingly make it hard for them to form in collisions with energies that can exceed 100 MeV – 100 000 times hotter than the centre of the Sun.
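The temperature comparison is easy to check: converting a collision “temperature” of around 100 MeV into kelvin via T = E/k_B and dividing by the Sun’s core temperature of roughly 15 million K gives a factor of order 10⁵, in line with the figure quoted above. The Python lines below spell out the arithmetic.

```python
# Quick arithmetic behind the comparison in the text: convert a collision
# "temperature" of ~100 MeV to kelvin via T = E / k_B and compare with the
# Sun's core temperature (~1.5e7 K). Rough, order-of-magnitude numbers only.

k_B = 8.617e-11      # Boltzmann constant in MeV per kelvin
E_collision = 100.0  # MeV
T_collision = E_collision / k_B  # kelvin
T_sun_core = 1.5e7               # kelvin

print(f"Collision temperature: {T_collision:.2e} K")
print(f"Ratio to Sun's core:   {T_collision / T_sun_core:.0f}x")  # ~8e4, of order the 100 000 quoted
```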
The collaboration found that roughly 90% of the deuterons seen after such collisions form in a three-phase process. In the first phase, an initial collision creates a so-called baryon resonance, which is an excited state of a particle made of three quarks (such as a proton or neutron). This particle is called a Δ baryon and is highly unstable, so it rapidly decays into a pion and a nucleon (a proton or a neutron) during the second phase of the process. Then, in the third (and, crucially, much later) phase, the nucleon cools down to a point where its energy properties allow it to bind with another nucleon to form a deuteron.
Smoking gun
Measuring such a complex process is not easy, especially as everything happens on a length scale of femtometres (10⁻¹⁵ m). To tease out the details, the collaboration performed precision measurements to correlate the momenta of the pions and deuterons. When they analysed the momentum difference between these particle pairs, they observed a peak in the data corresponding to the mass of the Δ baryon. This peak shows that the pion and the deuteron are kinematically linked because they share a common ancestor: the pion came from the same Δ decay that provided one of the deuteron’s nucleons.
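The general idea behind such a correlation measurement can be sketched in a few lines of Python: build the distribution of a pair variable (here, simply the magnitude of the pion–deuteron momentum difference) for pairs from the same event, compare it with a “mixed-event” reference built from uncorrelated pairs, and look for an excess. The toy data below are synthetic and carry no ALICE-specific detail.

```python
import numpy as np

# Schematic two-particle correlation analysis: compare the distribution of a
# pair variable for same-event pairs with that of "mixed" pairs built from
# different events. An excess in the same-event distribution signals a genuine
# kinematic correlation, such as a shared resonance ancestor. All numbers are
# synthetic toy data, not ALICE measurements.

rng = np.random.default_rng(1)
n_pairs = 100_000

# Uncorrelated (combinatorial) pairs: a smooth distribution of momentum difference
mixed = rng.gamma(shape=3.0, scale=0.4, size=n_pairs)  # GeV/c

# Same-event pairs: the same smooth shape plus a correlated excess in a
# narrow window, mimicking pairs that share a common decay ancestor
same = np.concatenate([
    rng.gamma(shape=3.0, scale=0.4, size=int(0.9 * n_pairs)),
    rng.normal(loc=1.2, scale=0.1, size=int(0.1 * n_pairs)),
])

bins = np.linspace(0.0, 4.0, 41)
same_counts, _ = np.histogram(same, bins=bins)
mixed_counts, _ = np.histogram(mixed, bins=bins)

# Correlation function: ratio of same-event to mixed-event yields,
# normalized so that it sits near 1 where there is no correlation
ratio = same_counts / np.maximum(mixed_counts, 1)
ratio /= np.median(ratio)
peak_bin = np.argmax(ratio)
print(f"Correlation peaks near {0.5 * (bins[peak_bin] + bins[peak_bin + 1]):.2f} GeV/c "
      f"(C ~ {ratio[peak_bin]:.2f})")
```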
Panos Christakoglou, a member of the ALICE collaboration based at the Netherlands’ Maastricht University, says the experiment is special because, in contrast to most previous attempts, where results were interpreted in light of models or phenomenological assumptions, this technique is model-independent. He adds that the results of this study could be used to improve models of high-energy proton–proton collisions in which light nuclei (and maybe hadrons more generally) are formed. Other possibilities include improving our interpretations of cosmic-ray studies that measure the fluxes of (anti)nuclei in the galaxy – a useful probe for astrophysical processes.
The hunt is on
Intriguingly, Christakoglou suggests that the team’s technique could also be used to search for indirect signs of dark matter. Many models predict that dark-matter candidates such as Weakly Interacting Massive Particles (WIMPs) will decay or annihilate in processes that also produce Standard Model particles, including (anti)deuterons. “If for example one measures the flux of (anti)nuclei in cosmic rays being above the ‘Standard Model based’ astrophysical background, then this excess could be attributed to new physics which might be connected to dark matter,” Christakoglou tells Physics World.
Michael Kachelriess, a physicist at the Norwegian University of Science and Technology in Trondheim, Norway, who was not involved in this research, says the debate over the correct formation mechanism for light nuclei (and antinuclei) has divided particle physicists for a long time. In his view, the data collected by the ALICE collaboration decisively resolves this debate by showing that light nuclei form in the late stages of a collision via the coalescence of nucleons. Kachelriess calls this a “great achievement” in itself, and adds that similar approaches could make it possible to address other questions, such as whether thermal plasmas form in proton-proton collisions as well as in collisions between heavy ions.