The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t
A tense particle-physics showdown will reach new heights in 2025. Over the past 25 years researchers have seen a persistent and growing discrepancy between the theoretical predictions and experimental measurements of an inherent property of the muon – its anomalous magnetic moment. Known as the “muon g-2”, this property serves as a robust test of our understanding of particle physics.
Theoretical predictions of the muon g-2 are based on the Standard Model of particle physics (SM). This is our current best theory of fundamental forces and particles, but it does not agree with everything observed in the universe. While the tensions between g-2 theory and experiment have challenged the foundations of particle physics and potentially offer a tantalizing glimpse of new physics beyond the SM, it turns out that there is more than one way to make SM predictions.
In recent years, a new SM prediction of the muon g-2 has emerged that questions whether the discrepancy exists at all, suggesting that there is no new physics in the muon g-2. For the particle-physics community, the stakes are higher than ever.
Rising to the occasion?
To understand how this discrepancy in the value of the muon g-2 arises, imagine you’re baking some cupcakes. A well-known and trusted recipe tells you that by accurately weighing the ingredients using your kitchen scales you will make enough batter to give you 10 identical cupcakes of a given size. However, to your surprise, after portioning out the batter, you end up with 11 cakes of the expected size instead of 10.
What has happened? Maybe your scales are imprecise. You check and find that you’re confident that your measurements are accurate to 1%. This means each of your 10 cupcakes could be 1% larger than they should be, or you could have enough leftover mixture to make 1/10th of an extra cupcake, but there’s no way you should have a whole extra cupcake.
You repeat the process several times, always with the same outcome. The recipe clearly states that you should have batter for 10 cupcakes, but you always end up with 11. Not only do you now have a worrying number of cupcakes to eat but, thanks to all your repeated experiments, you’re more confident that you are following all the steps and measurements accurately. You start to wonder whether something is missing from the recipe itself.
Before you jump to conclusions, it’s worth checking that there isn’t something systematically wrong with your scales. You ask several friends to follow the same recipe using their own scales. Amazingly, when each friend follows the recipe, they all end up with 11 cupcakes. You are more sure than ever that the cupcake recipe isn’t quite right.
You’re really excited now, as you have corroborating evidence that something is amiss. This is unprecedented, as the recipe is considered sacrosanct. Cupcakes have never been made differently and if this recipe is incomplete there could be other, larger implications. What if all cake recipes are incomplete? These claims are causing a stir, and people are starting to take notice.
[Image: close-up of a weighing scale with small cakes on top]
Then, a new friend comes along and explains that they checked the recipe by simulating baking the cupcakes using a computer. This approach doesn’t need physical scales, but it uses the same recipe. To your shock, the simulation produces 11 cupcakes of the expected size, with a precision as good as when you baked them for real.
There is no explaining this. You were certain that the recipe was missing something crucial, but now a computer simulation is telling you that the recipe has always predicted 11 cupcakes.
Of course, one extra cupcake isn’t going to change the world. But what if instead of cake, the recipe was particle physics’ best and most-tested theory of everything, and the ingredients were the known particles and forces? And what if the number of cupcakes was a measurable outcome of those particles interacting, one hurtling towards a pivotal bake-off between theory and experiment?
What is the muon g-2?
The muon is an elementary particle in the SM with half-integer spin; it is similar to the electron but some 207 times heavier. Muons interact directly with other SM particles via electromagnetism (photons) and the weak force (W and Z bosons, and the Higgs particle). All quarks and leptons – such as electrons and muons – have a magnetic moment due to their intrinsic angular momentum, or “spin”. Quantum theory dictates that the magnetic moment is related to the spin by a quantity known as the “g-factor”. Initially, this value was predicted to be exactly g = 2 for both the electron and the muon.
However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increase g by about 0.1%. This seemingly minute difference is quantified by the “anomalous g-factor”, aµ = (g – 2)/2. As well as the electromagnetic and weak interactions, the muon’s magnetic moment also receives contributions from the strong force, even though the muon does not itself participate in strong interactions. These strong contributions arise because the muon interacts with the photon, which in turn interacts with quarks; the quarks then interact with each other via the strong-force mediator, the gluon.
This effect, and any discrepancies, are of particular interest to physicists because the g-factor acts as a probe of the existence of other particles – both known particles such as electrons and photons, and other, as yet undiscovered, particles that are not part of the SM.
“Virtual” particles
The Standard Model of particle physics (SM) describes the basic building blocks – the particles and forces – of our universe. It includes the elementary particles – quarks and leptons – that make up all known matter as well as the force-carrying particles, or bosons, that influence the quarks and leptons. The SM also explains three of the four fundamental forces that govern the universe – electromagnetism, the strong force and the weak force. Gravity, however, is not adequately explained within the model.
“Virtual” particles arise from the universe’s underlying, non-zero background energy, known as the vacuum energy. Heisenberg’s uncertainty principle – in its energy–time form – allows the energy at any point to fluctuate over sufficiently short intervals. “Something” can therefore arise from “nothing”, provided the “something” returns to “nothing” in a very short interval – before it can be observed. As a result, at every point in space and time, virtual particles are rapidly created and annihilated.
The “g-factor” in muon g-2 represents the total value of the magnetic moment of the muon, including all corrections from the vacuum. If there were no virtual interactions, the muon’s g-factor would be exactly g = 2. The first confirmation of g > 2 came in 1948 when Julian Schwinger calculated the simplest contribution from a virtual photon interacting with an electron (Phys. Rev. 73 416). His famous result explained a measurement from the same year that found the electron’s g-factor to be slightly larger than 2 (Phys. Rev. 74 250). This confirmed the existence of virtual particles and paved the way for the invention of relativistic quantum field theories like the SM.
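In the notation used here, Schwinger’s leading-order correction takes a strikingly simple form (this is only the first term of what is now a very long series of calculated corrections):

```latex
a = \frac{g-2}{2} = \frac{\alpha}{2\pi} \approx 0.00116 ,
```

where α ≈ 1/137 is the fine-structure constant – consistent with the roughly 0.1% increase in g quoted above.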
The muon, the (lighter) electron and the (heavier) tau lepton all have an anomalous magnetic moment. However, because the muon is heavier than the electron, the impact of heavy new particles on the muon g-2 is amplified. While tau leptons are even heavier than muons, tau leptons are extremely short-lived (muons have a lifetime of 2.2 μs, while the lifetime of tau leptons is 0.29 ns), making measurements impracticable with current technologies. Neither too light nor too heavy, the muon is the perfect tool to search for new physics.
New physics beyond the Standard Model (commonly known as BSM physics) is sorely needed because, despite its many successes, the SM does not provide the answers to all that we observe in the universe, such as the existence of dark matter. “We know there is something beyond the predictions of the Standard Model, we just don’t know where,” says Patrick Koppenburg, a physicist at the Dutch National Institute for Subatomic Physics (Nikhef) in the Netherlands, who works on the LHCb Experiment at CERN and on future collider experiments. “This new physics will provide new particles that we haven’t observed yet. The LHC collider experiments are actively searching for such particles but haven’t found anything to date.”
Testing the Standard Model: experiment vs theory
In 2021 the Muon g-2 experiment at Fermilab in the US captured the world’s attention with the release of its first result (Phys. Rev. Lett. 126 141801). It had directly measured the muon g-2 to an unprecedented precision of 460 parts per billion (ppb). While the LHC experiments attempt to produce and detect BSM particles directly, the Muon g-2 experiment takes a different, complementary approach – it compares precision measurements of particles with SM predictions to expose discrepancies that could be due to new physics. In the Muon g-2 experiment, muons travel round and round a circular ring, confined by a strong magnetic field. In this field, the muons precess like spinning tops (see image at the top of this article). The frequency of this precession is proportional to the anomalous magnetic moment, and it can be extracted by detecting where and when the muons decay.
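In idealized form – a perfectly uniform magnetic field, neglecting the electric-field and beam-dynamics corrections the experiment must apply in practice – the anomalous precession frequency being measured is

```latex
\omega_a = \omega_{\mathrm{spin}} - \omega_{\mathrm{cyclotron}} = a_\mu \, \frac{eB}{m_\mu} ,
```

the difference between the rate at which the muon’s spin turns and the rate at which the muon itself circles the ring. Measuring this frequency and the magnetic field B therefore yields aµ directly.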
[Image: the Muon g-2 experiment]
Muon g-2 is an awe-inspiring feat of science and engineering, involving more than 200 scientists from 35 institutions in seven countries. Having led the experiment as manager and run co-ordinator, I have been involved in both its operation and the analysis of its results. “A lot of my favourite memories from g-2 are ‘firsts’,” says Saskia Charity, a researcher at the University of Liverpool in the UK and a principal analyser of the Muon g-2 experiment’s results. “The first time we powered the magnet; the first time we stored muons and saw particles in the detectors; and the first time we released a result in 2021.”
The Muon g-2 result turned heads because the measured value was significantly higher than the best SM prediction (at that time) of the muon g-2 (Phys. Rep. 887 1). This SM prediction was the culmination of years of collaborative work by the Muon g-2 Theory Initiative, an international consortium of roughly 200 theoretical physicists (myself among them). In 2020 the collaboration published one community-approved number for the muon g-2. This value had a precision comparable to the Fermilab experiment – resulting in a deviation between the two that has a chance of 1 in 40,000 of being a statistical fluke – making the discrepancy all the more intriguing.
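That “1 in 40,000” figure maps onto the “sigma” significance language physicists often use. A minimal sketch of the conversion using only the Python standard library (the collaboration’s own statistical treatment is more careful; these numbers are purely illustrative):

```python
import math

def two_sided_p(sigma: float) -> float:
    """Probability of a Gaussian fluctuation at least `sigma`
    standard deviations from the mean, in either direction."""
    return math.erfc(sigma / math.sqrt(2))

# The 2021 discrepancy was reported at 4.2 standard deviations,
# corresponding to odds of roughly 1 in 40,000 of a statistical fluke.
p = two_sided_p(4.2)
print(f"p = {p:.1e}, i.e. about 1 in {1 / p:,.0f}")
```

By the same conversion, the conventional five-sigma “discovery” threshold corresponds to odds of roughly 1 in 1.7 million.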
While much of the SM prediction, including contributions from virtual photons and leptons, can be calculated from first principles alone, the strong force contributions involving quarks and gluons are more difficult. However, there is a mathematical link between the strong force contributions to muon g-2 and the probability of experimentally producing hadrons (composite particles made of quarks) from electron–positron annihilation. These so-called “hadronic processes” are something we can observe with existing particle colliders; much like weighing cupcake ingredients, these measurements determine how much each hadronic process contributes to the SM correction to the muon g-2. This is the approach used to calculate the 2020 result, producing what is called a “data-driven” prediction.
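Schematically, that mathematical link is a dispersion integral (conventions vary between references): the leading-order hadronic contribution is obtained by weighting the measured hadron-production ratio R(s) with a known kernel function K(s),

```latex
a_\mu^{\mathrm{HVP,\,LO}} \;=\; \frac{\alpha^2}{3\pi^2} \int_{s_{\mathrm{th}}}^{\infty} \frac{\mathrm{d}s}{s}\, K(s)\, R(s) ,
```

where R(s) is the ratio of the e⁺e⁻ → hadrons and e⁺e⁻ → µ⁺µ⁻ cross-sections at centre-of-mass energy squared s, and s_th is the hadronic threshold. Because the weight falls with energy, the low-energy hadronic processes dominate both the contribution and its uncertainty.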
Measurements were performed at many experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) in the US, the BESIII Experiment at the Beijing Electron–Positron Collider II in China, the KLOE Experiment at the DAFNE Collider in Italy, and the SND and CMD-2 experiments at the VEPP-2000 electron–positron collider in Russia. These different experiments measured a complete catalogue of hadronic processes in different ways over several decades. Other members of the Muon g-2 Theory Initiative and I combined these findings to produce the data-driven SM prediction of the muon g-2. There was (and still is) strong, corroborating evidence that this SM prediction is reliable.
Taken at face value, the discrepancy pointed to the existence of new physics with a very high level of confidence. It seemed more likely than ever that BSM physics had finally been detected in a laboratory.
1 Eyes on the prize
Over the last two decades, direct experimental measurements of the muon g-2 have become much more precise. The predecessor to the Fermilab experiment was based at Brookhaven National Laboratory in the US, and when that experiment ended, the magnetic ring in which the muons are confined was transported to its current home at Fermilab.
That was until the release of the first SM prediction of the muon g-2 using an alternative method called lattice QCD (Nature 593 51). Like the data-driven prediction, lattice QCD is a way to tackle the tricky hadronic contributions, but it doesn’t use experimental results as a basis for the calculation. Instead, it treats the universe as a finite box containing a grid of points (a lattice) that represent points in space and time. Virtual quarks and gluons are simulated inside this box, and the results are extrapolated to a universe of infinite size and continuous space and time. This method requires a huge amount of computer power to arrive at an accurate, physical result but it is a powerful tool that directly simulates the strong-force contributions to the muon g-2.
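That final extrapolation step can be illustrated with a toy example (purely illustrative – the numbers and the simple a² fit below are invented, not taken from any real lattice calculation). Results computed at several non-zero lattice spacings a are fitted with the expected leading discretization artefact, proportional to a², and read off at a = 0:

```python
# Toy continuum extrapolation: invented "results" at three lattice
# spacings approach their continuum value with errors proportional to a^2.
spacings = [0.12, 0.09, 0.06]        # hypothetical lattice spacings (fm)
xs = [a * a for a in spacings]       # leading artefact scales as a^2
ys = [1.0 + 0.5 * x for x in xs]     # invented data: continuum value is 1.0

# Least-squares straight-line fit of the results against a^2.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar      # the a -> 0 (continuum) value

print(f"continuum-extrapolated value: {intercept:.4f}")
```

Real calculations must also extrapolate to infinite volume and to physical quark masses, and propagate statistical and systematic uncertainties through every step – which is where the huge amount of computer power goes.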
The researchers who published this new result are also part of the Muon g-2 Theory Initiative. Several other groups within the consortium have since published lattice QCD calculations, producing values for g-2 that are in good agreement with each other and with the experiment at Fermilab. “Striking agreement, to better than 1%, is seen between results from multiple groups,” says Christine Davies of the University of Glasgow in the UK, a member of the High-precision lattice QCD (HPQCD) collaboration within the Muon g-2 Theory Initiative. “A range of methods have been developed to improve control of uncertainties, meaning further, more complete, lattice QCD calculations are now appearing. The aim is for several results with 0.5% uncertainty in the near future.”
If these lattice QCD predictions are the true SM value, there is no muon g-2 discrepancy between experiment and theory. However, this would conflict with the decades of experimental measurements of hadronic processes that were used to produce the data-driven SM prediction.
To make the situation even more confusing, a new experimental measurement of the muon g-2’s dominant hadronic process was released in 2023 by the CMD-3 experiment (Phys. Rev. D 109 112002). This result is significantly larger than all the other, older measurements of the same process, including its own predecessor experiment, CMD-2 (Phys. Lett. B 648 28). With this new value, the data-driven SM prediction of aµ = (g – 2)/2 is in agreement with the Muon g-2 experiment and lattice QCD. Over the last few years, the CMD-3 measurements (and all older measurements) have been scrutinized in great detail, but the source of the difference between the measurements remains unknown.
2 Which Standard Model?
Summary of the four values of the anomalous magnetic moment of the muon aμ that have been obtained from different experiments and models. The 2020 and CMD-3 predictions were both obtained using a data-driven approach. The lattice QCD value is a theoretical prediction and the Muon g-2 experiment value was measured at Fermilab in the US. The positions of the points with respect to the y axis have been chosen for clarity only.
Since then, the Muon g-2 experiment at Fermilab has confirmed and improved on that first result to a precision of 200 ppb (Phys. Rev. Lett. 131 161802). “Our second result based on the data from 2019 and 2020 has been the first step in increasing the precision of the magnetic anomaly measurement,” says Peter Winter of Argonne National Laboratory in the US and co-spokesperson for the Muon g-2 experiment.
The new result is in full agreement with the SM predictions from lattice QCD and the data-driven prediction based on CMD-3’s measurement. However, with the increased precision, it now disagrees with the 2020 SM prediction by even more than in 2021.
The community therefore faces a conundrum. The muon g-2 either represents a much-needed discovery of BSM physics or a remarkable, multi-method confirmation of the Standard Model.
On your marks, get set, bake!
In 2025 the Muon g-2 experiment at Fermilab will release its final result. “It will be exciting to see our final result for g-2 in 2025 that will lead to the ultimate precision of 140 parts-per-billion,” says Winter. “This measurement of g-2 will be a benchmark result for years to come for any extension to the Standard Model of particle physics.” Assuming this agrees with the previous results, it will further widen the discrepancy with the 2020 data-driven SM prediction.
For the lattice QCD SM prediction, the many groups calculating the muon’s anomalous magnetic moment have since corroborated and improved the precision of the first lattice QCD result. Their next task is to combine the results from the various lattice QCD predictions to arrive at one SM prediction from lattice QCD. While this is not a trivial task, the agreement between the groups means a single lattice QCD result with improved precision is likely within the next year, increasing the tension with the 2020 data-driven SM prediction.
New, robust experimental measurements of the muon g-2’s dominant hadronic processes are also expected over the next couple of years. The previous experiments will update their measurements with more precise results and a newcomer measurement is expected from the Belle-II experiment in Japan. It is hoped that they will confirm either the catalogue of older hadronic measurements or the newer CMD-3 result. Should they confirm the older data, the potential for new physics in the muon g-2 lives on, but the discrepancy with the lattice QCD predictions will still need to be investigated. If the CMD-3 measurement is confirmed, it is likely the older data will be superseded, and the muon g-2 will have once again confirmed the Standard Model as the best and most resilient description of the fundamental nature of our universe.
[Image: a large group of people holding a banner that says “Muon g-2”]
The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.
It’s going to be an exciting few years. Being part of both the experiment and the theory means I have been privileged to see the process from both sides. For the SM prediction, much work is still to be done but science with this much at stake cannot be rushed and it will be fascinating work. I’m looking forward to the journey just as much as the outcome.
The post The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t appeared first on Physics World.