
The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t

A tense particle-physics showdown will reach new heights in 2025. Over the past 25 years researchers have seen a persistent and growing discrepancy between the theoretical predictions and experimental measurements of an inherent property of the muon – its anomalous magnetic moment. Known as the “muon g-2”, this property serves as a robust test of our understanding of particle physics.

Theoretical predictions of the muon g-2 are based on the Standard Model of particle physics (SM). This is our current best theory of fundamental forces and particles, but it does not agree with everything observed in the universe. While the tensions between g-2 theory and experiment have challenged the foundations of particle physics and potentially offer a tantalizing glimpse of new physics beyond the SM, it turns out that there is more than one way to make SM predictions.

In recent years, a new SM prediction of the muon g-2 has emerged that questions whether the discrepancy exists at all, suggesting that there is no new physics in the muon g-2. For the particle-physics community, the stakes are higher than ever.

Rising to the occasion?

To understand how this discrepancy in the value of the muon g-2 arises, imagine you’re baking some cupcakes. A well-known and trusted recipe tells you that by accurately weighing the ingredients using your kitchen scales you will make enough batter to give you 10 identical cupcakes of a given size. However, to your surprise, after portioning out the batter, you end up with 11 cakes of the expected size instead of 10.

What has happened? Maybe your scales are imprecise. You check and find that you’re confident that your measurements are accurate to 1%. This means each of your 10 cupcakes could be 1% larger than they should be, or you could have enough leftover mixture to make 1/10th of an extra cupcake, but there’s no way you should have a whole extra cupcake.

You repeat the process several times, always with the same outcome. The recipe clearly states that you should have batter for 10 cupcakes, but you always end up with 11. Not only do you now have a worrying number of cupcakes to eat but, thanks to all your repeated experiments, you’re more confident that you are following all the steps and measurements accurately. You start to wonder whether something is missing from the recipe itself.

Before you jump to conclusions, it’s worth checking that there isn’t something systematically wrong with your scales. You ask several friends to follow the same recipe using their own scales. Amazingly, when each friend follows the recipe, they all end up with 11 cupcakes. You are more sure than ever that the cupcake recipe isn’t quite right.

You’re really excited now, as you have corroborating evidence that something is amiss. This is unprecedented, as the recipe is considered sacrosanct. Cupcakes have never been made differently and if this recipe is incomplete there could be other, larger implications. What if all cake recipes are incomplete? These claims are causing a stir, and people are starting to take notice.

Food for thought Just as a trusted cake recipe can be relied on to produce reliable results, so the Standard Model has been incredibly successful at predicting the behaviour of fundamental particles and forces. However, there are instances where the Standard Model breaks down, prompting scientists to hunt for new physics that will explain this mystery. (Courtesy: iStock/Shutter2U)

Then, a new friend comes along and explains that they checked the recipe by simulating baking the cupcakes using a computer. This approach doesn’t need physical scales, but it uses the same recipe. To your shock, the simulation produces 11 cupcakes of the expected size, with a precision as good as when you baked them for real.

There is no explaining this. You were certain that the recipe was missing something crucial, but now a computer simulation is telling you that the recipe has always predicted 11 cupcakes.

Of course, one extra cupcake isn’t going to change the world. But what if instead of cake, the recipe was particle physics’ best and most-tested theory of everything, and the ingredients were the known particles and forces? And what if the number of cupcakes was a measurable outcome of those particles interacting, one hurtling towards a pivotal bake-off between theory and experiment?

What is the muon g-2?

Muons are elementary particles in the SM with half-integer spin; they are similar to electrons but some 207 times heavier. Muons interact directly with other SM particles via electromagnetism (photons) and the weak force (W and Z bosons, and the Higgs particle). All quarks and leptons – such as electrons and muons – have a magnetic moment due to their intrinsic angular momentum or “spin”. Quantum theory dictates that the magnetic moment is related to the spin by a quantity known as the “g-factor”. Initially, this value was predicted to be exactly g = 2 for both the electron and the muon.

However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increase g by about 0.1%. This seemingly minute difference is captured by the “anomalous g-factor”, aµ = (g – 2)/2. As well as the electromagnetic and weak interactions, the muon’s magnetic moment also receives contributions from the strong force, even though the muon does not itself participate in strong interactions. The strong contributions arise through the muon’s interaction with the photon, which in turn interacts with quarks. The quarks then themselves interact via the strong-force mediator, the gluon.
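As a quick check of that 0.1% figure, the measured anomaly is about aµ ≈ 0.00116, which translates into a g-factor of

```latex
a_\mu = \frac{g-2}{2} \approx 0.00116
\quad\Longrightarrow\quad
g = 2\,(1 + a_\mu) \approx 2.00233
```

– a shift of roughly one part in a thousand above the “bare” prediction g = 2.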

This effect – and any discrepancy in it – is of particular interest to physicists because the g-factor acts as a probe of the existence of other particles: both known particles such as electrons and photons, and other, as yet undiscovered, particles that are not part of the SM.

“Virtual” particles

Illustration of the subatomic particles in the Standard Model (Courtesy: CERN)

The Standard Model of particle physics (SM) describes the basic building blocks – the particles and forces – of our universe. It includes the elementary particles – quarks and leptons – that make up all known matter as well as the force-carrying particles, or bosons, that influence the quarks and leptons. The SM also explains three of the four fundamental forces that govern the universe – electromagnetism, the strong force and the weak force. Gravity, however, is not adequately explained within the model.

“Virtual” particles arise from the universe’s underlying, non-zero background energy, known as the vacuum energy. Heisenberg’s uncertainty principle – in its energy–time form – permits energy fluctuations of size ΔE provided they last no longer than a time of order ħ/ΔE. A non-zero energy is therefore always available for “something” to arise from “nothing”, as long as the “something” returns to “nothing” in a very short interval – before it can be observed. Therefore, at every point in space and time, virtual particles are rapidly created and annihilated.
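To get a feel for the timescales involved, the energy–time uncertainty relation sets the maximum lifetime of a virtual particle. Using the Z boson (mass about 91 GeV) as an example:

```latex
\Delta E\,\Delta t \gtrsim \frac{\hbar}{2}
\quad\Longrightarrow\quad
\Delta t \sim \frac{\hbar}{m_Z c^2}
\approx \frac{6.6\times10^{-25}\ \text{GeV·s}}{91\ \text{GeV}}
\approx 7\times10^{-27}\ \text{s}
```

Fluctuations this brief can never be observed directly, but they leave their imprint on measurable quantities such as the g-factor.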

The “g-factor” in muon g-2 represents the total value of the magnetic moment of the muon, including all corrections from the vacuum. If there were no virtual interactions, the muon’s g-factor would be exactly g = 2. The first confirmation of g > 2 came in 1948 when Julian Schwinger calculated the simplest contribution from a virtual photon interacting with an electron (Phys. Rev. 73 416). His famous result explained a measurement from the same year that found the electron’s g-factor to be slightly larger than 2 (Phys. Rev. 74 250). This confirmed the existence of virtual particles and paved the way for the invention of relativistic quantum field theories like the SM.
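For reference, the one-loop correction Schwinger computed is simply

```latex
a_\ell = \frac{\alpha}{2\pi} \approx \frac{1}{2\pi \times 137.04} \approx 0.00116
```

where α is the fine-structure constant. This single term accounts for the bulk of the anomaly for both the electron and the muon.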

The muon, the (lighter) electron and the (heavier) tau lepton all have an anomalous magnetic moment.  However, because the muon is heavier than the electron, the impact of heavy new particles on the muon g-2 is amplified. While tau leptons are even heavier than muons, tau leptons are extremely short-lived (muons have a lifetime of 2.2 μs, while the lifetime of tau leptons is 0.29 ns), making measurements impracticable with current technologies. Neither too light nor too heavy, the muon is the perfect tool to search for new physics.
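The amplification can be made quantitative: in many BSM scenarios a heavy new particle of mass Λ shifts a lepton’s anomalous moment by an amount proportional to (mℓ/Λ)². Relative to the electron, the muon’s sensitivity is therefore enhanced by

```latex
\left(\frac{m_\mu}{m_e}\right)^{2} \approx 207^{2} \approx 43\,000
```

which is why the muon, despite being measured less precisely than the electron, is far more sensitive to heavy new particles.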

New physics beyond the Standard Model (commonly known as BSM physics) is sorely needed because, despite its many successes, the SM does not provide the answers to all that we observe in the universe, such as the existence of dark matter. “We know there is something beyond the predictions of the Standard Model, we just don’t know where,” says Patrick Koppenburg, a physicist at the Dutch National Institute for Subatomic Physics (Nikhef) in the Netherlands, who works on the LHCb Experiment at CERN and on future collider experiments. “This new physics will provide new particles that we haven’t observed yet. The LHC collider experiments are actively searching for such particles but haven’t found anything to date.”

Testing the Standard Model: experiment vs theory

In 2021 the Muon g-2 experiment at Fermilab in the US captured the world’s attention with the release of its first result (Phys. Rev. Lett. 126 141801). It had directly measured the muon g-2 to an unprecedented precision of 460 parts per billion (ppb). While the LHC experiments attempt to produce and detect BSM particles directly, the Muon g-2 experiment takes a different, complementary approach – it compares precision measurements of particles with SM predictions to expose discrepancies that could be due to new physics. In the Muon g-2 experiment, muons travel round and round a circular ring, confined by a strong magnetic field. In this field, the muons precess like spinning tops (see image at the top of this article). The frequency of this precession is proportional to the anomalous magnetic moment, and it can be extracted by detecting where and when the muons decay.
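A minimal sketch of the scale involved, assuming the storage ring’s nominal 1.45 T field (a value taken from published descriptions of the experiment, not from this article) and the idealized relation ωa = aµeB/mµ that holds at the experiment’s “magic” muon momentum:

```python
# Back-of-the-envelope estimate of the anomalous precession frequency.
# The 1.45 T field and the idealized formula are assumptions for
# illustration; the real analysis involves many corrections.
import math

E_CHARGE = 1.602176634e-19   # elementary charge (C)
M_MU = 1.883531627e-28       # muon mass (kg)
A_MU = 0.00116592            # muon anomalous magnetic moment (approx.)
B_FIELD = 1.45               # storage-ring magnetic field (T), assumed

omega_a = A_MU * E_CHARGE * B_FIELD / M_MU   # angular frequency (rad/s)
f_a = omega_a / (2 * math.pi)                # frequency (Hz)

print(f"anomalous precession frequency ~ {f_a / 1e3:.0f} kHz")  # ~229 kHz
```

The spin direction thus rotates relative to the momentum a few hundred thousand times per second, and mapping the decay positrons in space and time reveals this frequency.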

Magnetic muons The Muon g-2 experiment at the Fermi National Accelerator Laboratory. (Courtesy: Reidar Hahn/Fermilab, US Department of Energy)

Muon g-2 is an awe-inspiring feat of science and engineering, involving more than 200 scientists from 35 institutions in seven countries. Having led the experiment as manager and run co-ordinator, I have been involved in both the operation of the experiment and the analysis of results. “A lot of my favourite memories from g-2 are ‘firsts’,” says Saskia Charity, a researcher at the University of Liverpool in the UK and a principal analyser of the Muon g-2 experiment’s results. “The first time we powered the magnet; the first time we stored muons and saw particles in the detectors; and the first time we released a result in 2021.”

The Muon g-2 result turned heads because the measured value was significantly higher than the best SM prediction (at that time) of the muon g-2 (Phys. Rep. 887 1). This SM prediction was the culmination of years of collaborative work by the Muon g-2 Theory Initiative, an international consortium of roughly 200 theoretical physicists (myself among them). In 2020 the collaboration published one community-approved number for the muon g-2. This value had a precision comparable to the Fermilab experiment – resulting in a deviation between the two that has only a 1 in 40,000 chance of being a statistical fluke – making the discrepancy all the more intriguing.
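In particle physics that fluke probability is usually quoted in Gaussian “sigma” units. A short conversion, assuming the conventional two-sided definition (an assumption on my part; one-sided conventions also appear in the literature):

```python
# Convert a fluke probability into an equivalent Gaussian significance.
from scipy.stats import norm

p_fluke = 1 / 40000
sigma = norm.isf(p_fluke / 2)   # two-sided: split the probability between tails
print(f"{sigma:.1f} sigma")     # ~4.2 sigma
```

This matches the widely quoted ~4.2σ significance of the 2021 comparison – tantalizingly short of the 5σ threshold physicists usually demand before claiming a discovery.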

While much of the SM prediction, including contributions from virtual photons and leptons, can be calculated from first principles alone, the strong force contributions involving quarks and gluons are more difficult. However, there is a mathematical link between the strong force contributions to muon g-2 and the probability of experimentally producing hadrons (composite particles made of quarks) from electron–positron annihilation. These so-called “hadronic processes” are something we can observe with existing particle colliders; much like weighing cupcake ingredients, these measurements determine how much each hadronic process contributes to the SM correction to the muon g-2. This is the approach used to calculate the 2020 result, producing what is called a “data-driven” prediction.

Measurements were performed at many experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) in the US, the BESIII Experiment at the Beijing Electron–Positron Collider II in China, the KLOE Experiment at the DAFNE Collider in Italy, and the SND and CMD-2 experiments at the VEPP-2000 electron–positron collider in Russia. These different experiments measured a complete catalogue of hadronic processes in different ways over several decades. Other members of the Muon g-2 Theory Initiative and I combined these findings to produce the data-driven SM prediction of the muon g-2. There was (and still is) strong, corroborating evidence that this SM prediction is reliable.

The discrepancy between this prediction and the Fermilab measurement indicated, to a high level of confidence, the existence of new physics. It seemed more likely than ever that BSM physics had finally been detected in a laboratory.

1 Eyes on the prize

Chart of muon g-2 results from five different experiments (Courtesy: Muon g-2 collaboration/IOP Publishing)

Over the last two decades, direct experimental measurements of the muon g-2 have become much more precise. The predecessor to the Fermilab experiment was based at Brookhaven National Laboratory in the US, and when that experiment ended, the magnetic ring in which the muons are confined was transported to its current home at Fermilab.

That was until the release of the first SM prediction of the muon g-2 using an alternative method called lattice QCD (Nature 593 51). Like the data-driven prediction, lattice QCD is a way to tackle the tricky hadronic contributions, but it doesn’t use experimental results as a basis for the calculation. Instead, it treats the universe as a finite box containing a grid of points (a lattice) that represent points in space and time. Virtual quarks and gluons are simulated inside this box, and the results are extrapolated to a universe of infinite size and continuous space and time. This method requires a huge amount of computing power to arrive at an accurate, physical result, but it is a powerful tool that directly simulates the strong-force contributions to the muon g-2.
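The extrapolation step can be illustrated with a toy example (the numbers below stand in for genuinely simulated lattice data; real analyses fit the volume and spacing dependence with far more care):

```python
# Toy continuum extrapolation: take an "observable" computed at several
# finite lattice spacings and extrapolate to zero spacing. Leading
# discretization errors typically scale as a^2, so fit y = y0 + c*a^2.
import numpy as np

spacings = np.array([0.12, 0.09, 0.06, 0.045])        # lattice spacings a (fm), assumed
observable = np.array([1.052, 1.029, 1.013, 1.007])   # fake simulated values

slope, intercept = np.polyfit(spacings**2, observable, 1)
print(f"continuum-extrapolated value ~ {intercept:.3f}")  # the a -> 0 limit
```

The physics all lives in producing those input numbers, which is where the enormous computing demands come in.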

The researchers who published this new result are also part of the Muon g-2 Theory Initiative. Several other groups within the consortium have since published lattice-QCD calculations, producing values for g-2 that are in good agreement with each other and the experiment at Fermilab. “Striking agreement, to better than 1%, is seen between results from multiple groups,” says Christine Davies of the University of Glasgow in the UK, a member of the High-precision lattice QCD (HPQCD) collaboration within the Muon g-2 Theory Initiative. “A range of methods have been developed to improve control of uncertainties, meaning further, more complete, lattice QCD calculations are now appearing. The aim is for several results with 0.5% uncertainty in the near future.”

If these lattice QCD predictions are the true SM value, there is no muon g-2 discrepancy between experiment and theory. However, this would conflict with the decades of experimental measurements of hadronic processes that were used to produce the data-driven SM prediction.

To make the situation even more confusing, a new experimental measurement of the muon g-2’s dominant hadronic process was released in 2023 by the CMD-3 experiment (Phys. Rev. D 109 112002). This result is significantly larger than all the other, older measurements of the same process, including its own predecessor experiment, CMD-2 (Phys. Lett. B 648 28). With this new value, the data-driven SM prediction of aµ = (g – 2)/2 is in agreement with the Muon g-2 experiment and lattice QCD. Over the last few years, the CMD-3 measurements (and all older measurements) have been scrutinized in great detail, but the source of the difference between the measurements remains unknown.

2 Which Standard Model?

Summary of the four values of the anomalous magnetic moment of the muon aμ that have been obtained from different experiments and models. The 2020 and CMD-3 predictions were both obtained using a data-driven approach. The lattice QCD value is a theoretical prediction and the Muon g-2 experiment value was measured at Fermilab in the US. The positions of the points with respect to the y axis have been chosen for clarity only. (Courtesy: Alex Keshavarzi/IOP Publishing)

Since then, the Muon g-2 experiment at Fermilab has confirmed and improved on that first result to a precision of 200 ppb (Phys. Rev. Lett. 131 161802). “Our second result based on the data from 2019 and 2020 has been the first step in increasing the precision of the magnetic anomaly measurement,” says Peter Winter of Argonne National Laboratory in the US and co-spokesperson for the Muon g-2 experiment.

The new result is in full agreement with the SM predictions from lattice QCD and the data-driven prediction based on CMD-3’s measurement. However, with the increased precision, it now disagrees with the 2020 SM prediction by even more than in 2021.

The community therefore faces a conundrum. The muon g-2 either heralds a much-needed discovery of BSM physics, or represents a remarkable, multi-method confirmation of the Standard Model.

On your marks, get set, bake!

In 2025 the Muon g-2 experiment at Fermilab will release its final result. “It will be exciting to see our final result for g-2 in 2025 that will lead to the ultimate precision of 140 parts-per-billion,” says Winter. “This measurement of g-2 will be a benchmark result for years to come for any extension to the Standard Model of particle physics.” Assuming this agrees with the previous results, it will further widen the discrepancy with the 2020 data-driven SM prediction.

For the lattice QCD SM prediction, the many groups calculating the muon’s anomalous magnetic moment have since corroborated and improved the precision of the first lattice QCD result. Their next task is to combine the results from the various lattice QCD predictions to arrive at one SM prediction from lattice QCD. While this is not a trivial task, the agreement between the groups means a single lattice QCD result with improved precision is likely within the next year, increasing the tension with the 2020 data-driven SM prediction.

New, robust experimental measurements of the muon g-2’s dominant hadronic processes are also expected over the next couple of years. The previous experiments will update their measurements with more precise results and a newcomer measurement is expected from the Belle-II experiment in Japan. It is hoped that they will confirm either the catalogue of older hadronic measurements or the newer CMD-3 result. Should they confirm the older data, the potential for new physics in the muon g-2 lives on, but the discrepancy with the lattice QCD predictions will still need to be investigated. If the CMD-3 measurement is confirmed, it is likely the older data will be superseded, and the muon g-2 will have once again confirmed the Standard Model as the best and most resilient description of the fundamental nature of our universe.

International consensus The Muon g-2 Theory Initiative pictured at their seventh annual plenary workshop at the KEK laboratory, Japan, in September 2024. (Courtesy: KEK-IPNS)

The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.

It’s going to be an exciting few years. Being part of both the experiment and the theory means I have been privileged to see the process from both sides. For the SM prediction, much work is still to be done but science with this much at stake cannot be rushed and it will be fascinating work. I’m looking forward to the journey just as much as the outcome.


Low-temperature plasma halves cancer recurrence in mice

Treatment with low-temperature plasma is emerging as a novel cancer therapy. Previous studies have shown that plasma can deactivate cancer cells in vitro, suppress tumour growth in vivo and potentially induce anti-tumour immunity. Researchers at the University of Tokyo are investigating another promising application – the use of plasma to inhibit tumour recurrence after surgery.

Lead author Ryo Ono and colleagues demonstrated that treating cancer resection sites with streamer discharge – a type of low-temperature atmospheric plasma – significantly reduced the recurrence rate of melanoma tumours in mice.

“We believe that plasma is more effective when used as an adjuvant therapy rather than as a standalone treatment, which led us to focus on post-surgical treatment in this study,” says Ono.

In vivo experiments

To create the streamer discharge, the team applied a high-voltage pulse (25 kV, 20 ns, 100 pulses/s) to a 3 mm-diameter rod electrode with a hemispherical tip. The rod was placed in a quartz tube with a 4 mm inner diameter, and the working gas – humid oxygen mixed with ambient air – was flowed through the tube. As electrons in the plasma collide with molecules in the gas, the mixture generates cytotoxic reactive oxygen and nitrogen species.
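Some quick arithmetic on those pulse parameters (taken directly from the text) shows how little total discharge time a treatment involves:

```python
# Duty-cycle arithmetic for the quoted pulse parameters.
pulse_width_s = 20e-9        # 20 ns pulse width
rep_rate_hz = 100            # 100 pulses per second
treatment_s = 10 * 60        # 10 min in vivo treatment

duty_cycle = pulse_width_s * rep_rate_hz             # fraction of time "on"
n_pulses = rep_rate_hz * treatment_s                 # pulses per treatment
on_time_ms = n_pulses * pulse_width_s * 1e3          # cumulative discharge time

print(f"duty cycle: {duty_cycle:.0e}")               # 2e-06
print(f"pulses in 10 min: {n_pulses}")               # 60000
print(f"total discharge time: {on_time_ms:.1f} ms")  # 1.2 ms
```

Despite the 25 kV peaks, the discharge is “on” for just over a millisecond in a 10-minute treatment, which helps explain why such plasmas can remain at low temperature.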

The researchers performed three experiments on mice with melanoma, a skin cancer with a local recurrence rate of up to 10%. In the first experiment, they injected 11 mice with mouse melanoma cells, resecting the resulting tumours eight days later. They then treated five of the mice with streamer discharge for 10 min, with the mouse placed on a grounded plate and the electrode tip 10 mm above the resection site.

Experimental setup Streamer discharge generation and treatment. (Courtesy: J. Phys. D: Appl. Phys. 10.1088/1361-6463/ada98c)

Tumour recurrence occurred in five of the six control mice (no plasma treatment) and two of the five plasma-treated mice, corresponding to recurrence rates of 83% and 40%, respectively. In a second experiment with the same parameters, recurrence rates were 44% in nine control mice and 25% in eight plasma-treated mice.

In a third experiment, the researchers delayed the surgery until 12 days after cell injection, increasing the size of the tumour before resection. This led to a 100% recurrence rate in the control group of five mice. Only one recurrence was seen in the five plasma-treated mice, although a second mouse that died of unknown causes was also counted as a recurrence, giving a recurrence rate of 40%.

All of the experiments showed that plasma treatment reduced the recurrence rate by roughly 50%. The researchers note that the plasma treatment did not affect the animals’ overall health.
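Pulling the three experiments’ numbers together (all reproduced from the text) makes the “roughly 50%” summary concrete:

```python
# Recurrence counts for control vs plasma-treated groups, per experiment.
experiments = [
    ("experiment 1", 5, 6, 2, 5),   # recurrences/total: control, then treated
    ("experiment 2", 4, 9, 2, 8),
    ("experiment 3", 5, 5, 2, 5),   # includes the death counted as recurrence
]

for name, rec_c, n_c, rec_t, n_t in experiments:
    control, treated = rec_c / n_c, rec_t / n_t
    reduction = 1 - treated / control
    print(f"{name}: control {control:.0%}, treated {treated:.0%}, "
          f"relative reduction {reduction:.0%}")
# experiment 1: control 83%, treated 40%, relative reduction 52%
# experiment 2: control 44%, treated 25%, relative reduction 44%
# experiment 3: control 100%, treated 40%, relative reduction 60%
```

With group sizes this small the individual percentages carry large statistical uncertainties, which is presumably why the effect is summarized as an approximate halving.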

Cytotoxic mechanisms

To further confirm the cytotoxicity of streamer discharge, Ono and colleagues treated cultured melanoma cells for between 0 and 250 s, at an electrode–surface distance of 10 mm. The cells were then incubated for 3, 6 or 24 h. Following plasma treatments of up to 100 s, most cells were still viable 24 h later. But between 100 and 150 s of treatment, the cell survival rate decreased rapidly.

The experiment also revealed a rapid transition from apoptosis (natural programmed cell death) to late apoptosis/necrosis (cell death due to external toxins) between 3 and 24 h post-treatment. Indeed, 24 h after a 150 s plasma treatment, 95% of the dead cells were in the late stages of apoptosis/necrosis. This finding suggests that the observed cytotoxicity may arise from direct induction of apoptosis and necrosis, combined with inhibition of cell growth at extended time points.

In a previous experiment, the researchers used streamer discharge to treat tumours in mice before resection. This treatment delayed tumour regrowth by at least six days, but all mice still experienced local recurrence. In contrast, in the current study, plasma treatment reduced the recurrence rate.

The difference may be due to different mechanisms by which plasma inhibits tumour recurrence: cytotoxic reactive species killing residual cancer cells at the resection site; or reactive species triggering immunogenic cell death. The team note that either or both of these mechanisms may be occurring in the current study.

“Initially, we considered streamer discharge as the main contributor to the therapeutic effect, as it is the primary source of highly reactive short-lived species,” explains Ono. “However, recent experiments suggest that the discharge within the quartz tube also generates a significant amount of long-lived reactive species (with lifetimes typically exceeding 0.1 s), which may contribute to the therapeutic effect.”

One advantage of the streamer discharge device is that it uses only room air and oxygen, without requiring the noble gases employed in other cold atmospheric plasmas. “Additionally, since different plasma types generate different reactive species, we hypothesized that streamer discharge could produce a unique therapeutic effect,” says Ono. “Conducting in vivo experiments with different plasma sources will be an important direction for future research.”

Looking ahead to use in the clinic, Ono believes that the low cost of the device and its operation should make it feasible to use plasma treatment immediately after tumour resection to reduce recurrence risk. “Currently, we have only obtained preliminary results in mice,” he tells Physics World. “Clinical application remains a long-term goal.”

The study is reported in Journal of Physics D: Applied Physics.


Ultra-high-energy neutrino detection opens a new window on the universe

Using an observatory located deep beneath the Mediterranean Sea, an international team has detected an ultra-high-energy cosmic neutrino with an energy greater than 100 PeV, which is well above the previous record. Made by the KM3NeT neutrino observatory, such detections could enhance our understanding of cosmic neutrino sources or reveal new physics.

“We expect neutrinos to originate from very powerful cosmic accelerators that also accelerate other particles, but which have never been clearly identified in the sky. Neutrinos may provide the opportunity to identify these sources,” explains Paul de Jong, a professor at the University of Amsterdam and spokesperson for the KM3NeT collaboration. “Apart from that, the properties of neutrinos themselves have not been studied as well as those of other particles, and further studies of neutrinos could open up possibilities to detect new physics beyond the Standard Model.”

Neutrinos are subatomic particles with masses less than a millionth that of the electron. They are electrically neutral and interact only rarely with matter via the weak force. As a result, neutrinos can travel vast cosmic distances without being deflected by magnetic fields or absorbed by interstellar material. “[This] makes them very good probes for the study of energetic processes far away in our universe,” de Jong explains.

Scientists expect high-energy neutrinos to come from powerful astrophysical accelerators – objects that are expected to produce high-energy cosmic rays and gamma rays. These objects include active galactic nuclei powered by supermassive black holes, gamma-ray bursts, and other extreme cosmic events. However, pinpointing such accelerators remains challenging because their cosmic rays are deflected by magnetic fields as they travel to Earth, while their gamma rays can be absorbed on their journey. Neutrinos, however, move in straight lines and this makes them unique messengers that could point back to astrophysical accelerators.

Underwater detection

Because they rarely interact, neutrinos are studied using large-volume detectors. The largest observatories use natural environments such as deep water or ice, which are shielded from most background noise including cosmic rays.

The KM3NeT observatory is situated on the Mediterranean seabed, with detectors more than 2000 m below the surface. Occasionally, a high-energy neutrino will collide with a water molecule, producing a secondary charged particle. This particle moves faster than the speed of light in water, creating a faint flash of Cherenkov radiation. The detector’s array of optical sensors captures these flashes, allowing researchers to reconstruct the neutrino’s direction and energy.
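The Cherenkov condition behind those flashes is a standard result; taking a refractive index of n ≈ 1.35 for seawater (an assumed representative value):

```latex
v > \frac{c}{n}, \qquad
\cos\theta_C = \frac{1}{n\beta}
\;\;\xrightarrow{\;\beta \to 1\;}\;\;
\theta_C = \arccos\!\left(\frac{1}{1.35}\right) \approx 42^\circ
```

The light is emitted on a cone of fixed opening angle around the particle’s track, which is what allows the timing pattern across the sensor array to be inverted into a direction.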

KM3NeT has already identified many high-energy neutrinos, but in 2023 it detected a neutrino with an energy far in excess of any previously detected cosmic neutrino. Now, analysis by de Jong and colleagues puts this neutrino’s energy at about 30 times higher than that of the previous record-holder, which was spotted by the IceCube observatory at the South Pole. “It is a surprising and unexpected event,” he says.

Scientists suspect that such a neutrino could originate from the most powerful cosmic accelerators, such as blazars. The neutrino could also be cosmogenic, being produced when ultra-high-energy cosmic rays interact with the cosmic microwave background radiation.

New class of astrophysical messengers

While this single neutrino has not been traced back to a specific source, it opens the possibility of studying ultra-high-energy neutrinos as a new class of astrophysical messengers. “Regardless of what the source is, our event is spectacular: it tells us that either there are cosmic accelerators that result in these extreme energies, or this could be the first cosmogenic neutrino detected,” de Jong notes.

Neutrino experts not associated with KM3NeT agree on the significance of the observation. Elisa Resconi at the Technical University of Munich tells Physics World, “This discovery confirms that cosmic neutrinos extend to unprecedented energies, suggesting that somewhere in the universe, extreme astrophysical processes – or even exotic phenomena like decaying dark matter – could be producing them”.

Francis Halzen at the University of Wisconsin-Madison, who is IceCube’s principal investigator, adds, “Observing neutrinos with a million times the energy of those produced at Fermilab (ten million for the KM3NeT event!) is a great opportunity to reveal the physics beyond the Standard Model associated with neutrino mass.”

With ongoing upgrades to KM3NeT and other neutrino observatories, scientists hope to detect more of these rare but highly informative particles, bringing them closer to answering fundamental questions in astrophysics.

Resconi explains, “With a global network of neutrino telescopes, we will detect more of these ultrahigh-energy neutrinos, map the sky in neutrinos, and identify their sources. Once we do, we will be able to use these cosmic messengers to probe fundamental physics in energy regimes far beyond what is possible on Earth.”

The observation is described in Nature.


Threads of fire: uncovering volcanic secrets with Pele’s hair and tears

Volcanoes are awe-inspiring beasts. They spew molten rivers, towering ash plumes, and – in rarer cases – delicate glassy formations known as Pele’s hair and Pele’s tears. These volcanic materials, named after the Hawaiian goddess of volcanoes and fire, are the focus of the latest Physics World Stories podcast, featuring volcanologists Kenna Rubin (University of Rhode Island) and Tamsin Mather (University of Oxford).

Pele’s hair is striking: fine, golden filaments of volcanic glass that shimmer like spider silk in the sunlight. Formed when lava is ejected explosively and rapidly stretched into thin strands, these fragile fibres range from 1 to 300 µm thick – similar to human hair. Meanwhile, Pele’s tears – small, smooth droplets of solidified lava – can preserve tiny bubbles of volcanic gas trapped in cavities within them.

These materials are more than just geological curiosities. By studying their structure and chemistry, researchers can infer crucial details about past eruptions. Understanding these “fossil” samples provides insights into the history of volcanic activity and its role in shaping planetary environments.

Rubin and Mather describe what it’s like working in extreme volcanic landscapes. One day, you might be near the molten slopes of active craters, and then on another trip you could be exploring the murky depths of underwater eruptions via deep-sea research submersibles like Alvin.

For a deeper dive into Pele’s hair and tears, listen to the podcast and explore our recent Physics World feature on the subject.



Modelling the motion of confined crowds could help prevent crushing incidents

Researchers led by Denis Bartolo, a physicist at the École Normale Supérieure (ENS) of Lyon, France, have constructed a theoretical model that forecasts the movements of confined, densely packed crowds. The study could help predict potentially life-threatening crowd behaviour in confined environments. 

To investigate what makes some confined crowds safe and others dangerous, Bartolo and colleagues – also from the Université Claude Bernard Lyon 1 in France and the Universidad de Navarra in Pamplona, Spain – studied the Chupinazo opening ceremony of the San Fermín Festival in Pamplona in four different years (2019, 2022, 2023 and 2024).

The team analysed high-resolution video captured from two locations above the gathering of around 5000 people as the crowd grew in the 50 × 20 m city plaza: swelling from two to six people per square metre, and ultimately peaking at local densities of nine per square metre. A machine-learning algorithm enabled automated detection of the position of each person’s head, from which localized crowd density was then calculated.
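The density-estimation step can be sketched in a few lines. The sketch below assumes the head detector has already produced an array of (x, y) positions in plaza coordinates; the 1 m grid cell and all names are illustrative choices, not details from the paper:

```python
# Turn detected head positions into a local-density map (people per m^2).
import numpy as np

PLAZA_X, PLAZA_Y, CELL = 50.0, 20.0, 1.0   # 50 x 20 m plaza, 1 m cells (assumed)

def local_density(heads: np.ndarray) -> np.ndarray:
    """heads: (N, 2) array of head positions in metres."""
    counts, _, _ = np.histogram2d(
        heads[:, 0], heads[:, 1],
        bins=(int(PLAZA_X / CELL), int(PLAZA_Y / CELL)),
        range=[[0, PLAZA_X], [0, PLAZA_Y]],
    )
    return counts / CELL**2

# Example: 5000 people spread uniformly gives ~5 people per square metre
rng = np.random.default_rng(0)
heads = rng.uniform([0, 0], [PLAZA_X, PLAZA_Y], size=(5000, 2))
print(local_density(heads).mean())   # ~5.0
```

Tracked over time, cells crossing the four-people-per-square-metre threshold identified in the study would stand out immediately.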

“The Chupinazo is an ideal experimental platform to study the spontaneous motion of crowds, as it repeats from one year to the next with approximately the same amount of people, and the geometry of the plaza remains the same,” says theoretical physicist Benjamin Guiselin, a study co-author formerly from ENS Lyon and now at the Université de Montpellier.

In a first for crowd studies, the researchers treated the densely packed crowd as a continuum like water, and “constructed a mechanics theory for the crowd movement without making any behavioural assumptions on the motion of individuals,” Guiselin tells Physics World.

Their studies, recently described in Nature, revealed a change in behaviour akin to a phase change when the crowd density passed a critical threshold of four individuals per square metre. Below this density the crowd remained relatively inactive. But above that threshold it started moving, exhibiting localized oscillations that were periodic over about 18 s, and occurred without any external guiding such as corralling.

Unlike a back-and-forth oscillation, this motion – which involves hundreds of people moving over several metres – has an almost circular trajectory that shows chirality (or handedness) and a 50:50 chance of turning to either the right or left. “Our model captures the fact that the chirality is not fixed. Instead it emerges in the dynamics: the crowd spontaneously decides between clockwise or counter-clockwise circular motion,” explains Guiselin, who worked on the mathematical modelling.

“The dynamics is complicated because if the crowd is pushed, then it will react by creating a propulsion force in the direction in which it is pushed: we’ve called this the windsock effect. But the crowd also has a resistance mechanism, a counter-reactive effect, which is a propulsive force opposite to the direction of motion: what we have called the weathercock effect,” continues Guiselin, adding that it is these two competing mechanisms in conjunction with the confined situation that gives rise to the circular oscillations.

The team observed similar oscillations in footage of the 2010 tragedy at the Love Parade music festival in Duisburg, Germany, in which 21 people died and several hundred were injured during a crush.

Early results suggest that the oscillation period for such crowds is proportional to the size of the space they are confined in. But the team want to test their theory at other events, and learn more about both the circular oscillations and the compression waves they observed when people started pushing their way into the already crowded square at the Chupinazo.

If their model is proven to work for all densely packed, confined crowds, it could in principle form the basis for a crowd management protocol. “You could monitor crowd motion with a camera, and as soon as you detect these oscillations emerging try to evacuate the space, because we see these oscillations well before larger amplitude motions set in,” Guiselin explains.
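As a minimal sketch of what such a monitoring protocol might look like, the code below watches the mean crowd velocity for a growing spectral peak near the observed ~18 s period. The Fourier-transform approach, the threshold and the window length are all illustrative guesses, not values from the study:

```python
# Flag an emerging periodic oscillation in a crowd-velocity time series.
import numpy as np

def oscillation_alarm(v: np.ndarray, fps: float,
                      period_s: float = 18.0, threshold: float = 5.0) -> bool:
    """v: mean crowd velocity component sampled at fps frames per second."""
    v = v - v.mean()
    power = np.abs(np.fft.rfft(v))**2
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fps)
    target = np.argmin(np.abs(freqs - 1.0 / period_s))
    # Alarm if the power near (1/18) Hz dominates the typical background bin
    return power[target] > threshold * np.median(power[1:])

# Example: a weak 18 s oscillation buried in noise trips the alarm
t = np.arange(0, 300, 0.5)   # five minutes sampled at 2 frames per second
v = 0.3 * np.sin(2 * np.pi * t / 18.0) \
    + np.random.default_rng(1).normal(0.0, 0.2, t.size)
print(oscillation_alarm(v, fps=2.0))   # True
```

Because the oscillations appear well before dangerous large-amplitude motion sets in, even a simple detector of this kind could buy organizers time to act.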

