Physicists working on the ATLAS experiment at CERN’s Large Hadron Collider (LHC) are the first to report the production of top quark–antiquark pairs in collisions involving heavy nuclei. By colliding lead ions, the LHC creates a fleeting state of matter called the quark–gluon plasma – an extremely hot and dense soup of subatomic particles that includes deconfined quarks and gluons and is believed to have filled the universe in the first microseconds after the Big Bang.
“Heavy-ion collisions at the LHC recreate the quark–gluon plasma in a laboratory setting,” says Anthony Badea, a postdoctoral researcher at the University of Chicago and one of the lead authors of a paper describing the research. As well as boosting our understanding of the early universe, studying the quark–gluon plasma at the LHC could also provide insights into quantum chromodynamics (QCD), which is the theory of how quarks and gluons interact.
Although the quark–gluon plasma at the LHC vanishes after about 10⁻²³ s, scientists can study it by analysing how other particles produced in collisions move through it. The top quark is the heaviest known elementary particle, and its short lifetime and distinct decay pattern offer a unique way to explore the quark–gluon plasma. This is because the top quark decays before the quark–gluon plasma dissipates.
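As a rough order-of-magnitude check (using standard values that are not quoted in the ATLAS study), the top quark’s decay width of about 1.4 GeV corresponds to a lifetime of

$$\tau_{\rm top} \approx \frac{\hbar}{\Gamma_{\rm top}} \approx \frac{6.6\times10^{-25}\ \mathrm{GeV\,s}}{1.4\ \mathrm{GeV}} \approx 5\times10^{-25}\ \mathrm{s} \;\ll\; \tau_{\rm QGP}\sim10^{-23}\ \mathrm{s},$$

so a top quark produced in a lead–lead collision decays while the plasma is still present.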
“The top quark decays into lighter particles that subsequently further decay,” explains Stefano Forte at the University of Milan, who was not involved in the research. “The time lag between these subsequent decays is modified if they happen within the quark–gluon plasma, and thus studying them has been suggested as a way to probe [quark–gluon plasma’s] structure. In order for this to be possible, the very first step is to know how many top quarks are produced in the first place, and determining this experimentally is what is done in this [ATLAS] study.”
First observations
The ATLAS team analysed data from lead–lead collisions and searched for events in which a top quark and its antimatter counterpart were produced. These particles can then decay in several different ways and the researchers focused on a less frequent but more easily identifiable mode known as the di-lepton channel. In this scenario, each top quark decays into a bottom quark and a W boson, which is a weak force-carrying particle that then transforms into a detectable lepton and an invisible neutrino.
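In symbols, the di-lepton chain described above is

$$t \to W^{+} b \to \ell^{+}\nu\, b, \qquad \bar{t} \to W^{-}\bar{b} \to \ell^{-}\bar{\nu}\,\bar{b},$$

giving a signature of two oppositely charged leptons, two jets from the bottom quarks and missing energy from the undetected neutrinos.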
The results not only confirmed that top quarks are created in this complex environment but also showed that their production rate matches predictions based on our current understanding of the strong nuclear force.
“This is a very important study,” says Juan Rojo, a theoretical physicist at the Free University of Amsterdam who did not take part in the research. “We have studied the production of top quarks, the heaviest known elementary particle, in the relatively simple proton–proton collisions for decades. This work represents the first time that we observe the production of these very heavy particles in a much more complex environment, with two lead nuclei colliding among them.”
As well as confirming QCD’s prediction of heavy-quark production in heavy-nuclei collisions, Rojo explains that “we have a novel probe to resolve the structure of the quark–gluon plasma”. He also says that future studies will enable us “to understand novel phenomena in the strong interactions such as how much gluons in a heavy nucleus differ from gluons within the proton”.
Crucial first step
“This is a first step – a crucial one – but further studies will require larger samples of top quark events to explore more subtle effects,” adds Rojo.
The number of top quarks created in the ATLAS lead–lead collisions agrees with theoretical expectations. In the future, more detailed measurements could help refine our understanding of how quarks and gluons behave inside nuclei. Eventually, physicists hope to use top quarks not just to confirm existing models, but to reveal entirely new features of the quark–gluon plasma.
Rojo says we could “learn about the time structure of the quark–gluon plasma; measurements which are ‘finer’ would be better, but for this we need to wait until more data is collected, in particular during the upcoming high-luminosity run of the LHC”.
Badea agrees that ATLAS’s observation opens the door to deeper explorations. “As we collect more nuclei collision data and improve our understanding of top-quark processes in proton collisions, the future will open up exciting prospects”.
Great mind Grete Hermann, pictured here in 1955, was one of the first scientists to consider the philosophical implications of quantum mechanics. (Photo: Lohrisch-Achilles. Courtesy: Bremen State Archives)
In the early days of quantum mechanics, physicists found its radical nature difficult to accept – even though the theory had its successes. In particular, Werner Heisenberg developed the first comprehensive formulation of quantum mechanics in 1925, while the following year Erwin Schrödinger was able to predict the spectrum of light emitted by hydrogen using his eponymous equation. Satisfying though these achievements were, there was trouble in store.
Long accustomed to Isaac Newton’s mechanical view of the universe, physicists had assumed that identical systems always evolve with time in exactly the same way, that is to say “deterministically”. But Heisenberg’s uncertainty principle and the probabilistic nature of Schrödinger’s wave function suggested worrying flaws in this notion. Those doubts were famously expressed by Albert Einstein, Boris Podolsky and Nathan Rosen in their “EPR” paper of 1935 (Phys. Rev. 47 777) and in debates between Einstein and Niels Bohr.
But the issues at stake went deeper than just a disagreement among physicists. They also touched on long-standing philosophical questions about whether we inhabit a deterministic universe, the related question of human free will, and the centrality of cause and effect. One person who rigorously addressed the questions raised by quantum theory was the German mathematician and philosopher Grete Hermann (1901–1984).
Hermann stands out in an era when it was rare for women to contribute to physics or philosophy, let alone to both. Writing in The Oxford Handbook of the History of Quantum Interpretations, published in 2022, the City University of New York philosopher of science Elise Crull has called Hermann’s work “one of the first, and finest, philosophical treatments of quantum mechanics”.
Grete Hermann upended the famous ‘proof’, developed by the Hungarian-American mathematician and physicist John von Neumann, that ‘hidden variables’ are impossible in quantum mechanics
What’s more, Hermann upended the famous “proof”, developed by the Hungarian-American mathematician and physicist John von Neumann, that “hidden variables” are impossible in quantum mechanics. But why have Hermann’s successes in studying the roots and meanings of quantum physics been so often overlooked? With 2025 being the International Year of Quantum Science and Technology, it’s time to find out.
Free thinker
Hermann was born on 2 March 1901 in the north German port city of Bremen, one of seven children. Her mother was deeply religious, while her father was a merchant, a sailor and later an itinerant preacher. According to the 2016 book Grete Hermann: Between Physics and Philosophy by Crull and Guido Bacciagaluppi, she was raised according to her father’s maxim: “I train my children in freedom!” Essentially, he enabled Hermann to develop a wide range of interests and benefit from the best that the educational system could offer a woman at the time.
She was eventually admitted as one of a handful of girls at the Neue Gymnasium – a grammar school in Bremen – where she took a rigorous and broad programme of subjects. In 1921 Hermann earned a certificate to teach high-school pupils – an interest in education that reappeared in her later life – and began studying mathematics, physics and philosophy at the University of Göttingen.
In just four years, Hermann earned a PhD under the exceptional Göttingen mathematician Emmy Noether (1882–1935), famous for her groundbreaking theorem linking symmetry to physical conservation laws. Hermann’s final oral exam in 1925 featured not just mathematics, which was the subject of her PhD, but physics and philosophy too. She had specifically requested to be examined in the latter by the Göttingen philosopher Leonard Nelson, whose “logical sharpness” in lectures had impressed her.
Mutual interconnections Grete Hermann was fascinated by the fundamental overlap between physics and philosophy. (Courtesy: iStock/agsandrew)
By this time, Hermann’s interest in philosophy was starting to dominate her commitment to mathematics. Although Noether had found a mathematics position for her at the University of Freiburg, Hermann instead decided to become Nelson’s assistant, editing his books on philosophy. “She studies mathematics for four years,” Noether declared, “and suddenly she discovers her philosophical heart!”
Hermann found Nelson to be demanding and sometimes overbearing but benefitted from the challenges he set. “I gradually learnt to eke out, step by step,” she later declared, “the courage for truth that is necessary if one is to utterly place one’s trust, also within one’s own thinking, in a method of thought recognized as cogent.” Hermann, it appeared, was searching for a path to the internal discovery of truth, rather like Einstein’s Gedankenexperimente.
After Nelson died in 1927 aged just 45, Hermann stayed in Göttingen, where she continued editing and expanding his philosophical work and related political ideas. Espousing a form of socialism based on ethical reasoning to produce a just society, Nelson had co-founded a political action group and set up the associated Philosophical-Political Academy (PPA) to teach his ideas. Hermann contributed to both and also wrote for the PPA’s anti-Nazi newspaper.
Hermann’s involvement in the organizations Nelson had founded later saw her move to other locations in Germany, including Berlin. But after Hitler came to power in 1933, the Nazis banned the PPA, and Hermann and her socialist associates drew up plans to leave Germany. Initially, she lived at a PPA “school-in-exile” in neighbouring Denmark. As the Nazis began to arrest socialists, Hermann feared that Germany might occupy Denmark (as it indeed later did) and so moved again, first to Paris and then London.
Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics
Arriving in Britain in early 1938, Hermann became acquainted with Edward Henry, another socialist, whom she later married. It was, however, merely a marriage of convenience that gave Hermann British citizenship and – when the Second World War started in 1939 – stopped her from being interned as an enemy alien. (The couple divorced after the war.) Amid all these disruptions, Hermann continued to bring her dual philosophical and mathematical perspectives to physics, and especially to quantum mechanics.
Mixing philosophy and physics
A major stimulus for Hermann’s work came from discussions she had in 1934 with Heisenberg and Carl Friedrich von Weizsäcker, who was then his research assistant at the Institute for Theoretical Physics in Leipzig. The previous year Hermann had written an essay entitled “Determinism and quantum mechanics”, which analysed whether the indeterminate nature of quantum mechanics – central to the “Copenhagen interpretation” of quantum behaviour – challenged the concept of causality.
Much cherished by physicists, causality says that every event has a cause, and that a given cause always produces a single specific event. Causality was also a tenet of the 18th-century German philosopher Immanuel Kant, best known for his famous 1781 treatise Critique of Pure Reason. He believed that causality is fundamental for how humans organize their experiences and make sense of the world.
Hermann, like Nelson, was a “neo-Kantian” who believed that Kant’s ideas should be treated with scientific rigour. In her 1933 essay, Hermann examined how the Copenhagen interpretation undermines Kant’s principle of causality. Although the article was not published at the time, she sent copies to Heisenberg, von Weizsäcker, Bohr and also Paul Dirac, who was then at the University of Cambridge in the UK.
In fact, we only know of the essay’s existence because Crull and Bacciagaluppi discovered a copy in Dirac’s archives at Churchill College, Cambridge. They also found a 1933 letter to Hermann from Gustav Heckmann, a physicist who said that Heisenberg, von Weizsäcker and Bohr had all read her essay and took it “absolutely and completely seriously”. Heisenberg added that Hermann was a “fabulously clever woman”.
Heckmann then advised Hermann to discuss her ideas more fully with Heisenberg, who he felt would be more open than Bohr to new ideas from an unexpected source. In 1934 Hermann visited Heisenberg and von Weizsäcker in Leipzig, with Heisenberg later describing the interaction in his 1971 memoir Physics and Beyond: Encounters and Conversations.
In that book, Heisenberg relates how rigorously Hermann wanted to treat philosophical questions. “[She] believed she could prove that the causal law – in the form Kant had given it – was unshakable,” Heisenberg recalled. “Now the new quantum mechanics seemed to be challenging the Kantian conception, and she had accordingly decided to fight the matter out with us.”
Their interaction was no fight, but a spirited discussion, with some sharp questioning from Hermann. When Heisenberg suggested, for instance, that a particular radium atom emitting an electron is an example of an unpredictable random event that has no cause, Hermann countered by saying that just because no cause has been found, it didn’t mean no such cause exists.
Significantly, this was a reference to what we now call “hidden variables” – the idea that quantum mechanics is being steered by additional parameters that we possibly don’t know anything about. Heisenberg then argued that even with such causes, knowing them would lead to complications in other experiments because of the wave nature of electrons.
Forward thinker Grete Hermann was one of the first people to study the notion that quantum mechanics might be steered by mysterious additional parameters – now dubbed “hidden variables” – that we know nothing about. (Courtesy: iStock/pobytov)
Suppose, using a hidden variable, we could predict exactly which direction an electron would move. The electron wave wouldn’t then be able to split and interfere with itself, resulting in an extinction of the electron. But such electron interference effects are experimentally observed, which Heisenberg took as evidence that no additional hidden variables are needed to make quantum mechanics complete. Once again, Hermann pointed out a discrepancy in Heisenberg’s argument.
In the end, neither side fully convinced the other, but inroads were made, with Heisenberg concluding in his 1971 book that “we had all learned a good deal about the relationship between Kant’s philosophy and modern science”. Hermann herself paid tribute to Heisenberg in a 1935 paper “Natural-philosophical foundations of quantum mechanics”, which appeared in a relatively obscure philosophy journal called Abhandlungen der Fries’schen Schule (6 69). In it, she thanked Heisenberg “above all for his willingness to discuss the foundations of quantum mechanics, which was crucial in helping the present investigations”.
Quantum indeterminacy versus causality
In her 1933 paper, Hermann aimed to understand if the indeterminacy of quantum mechanics threatens causality. Her overall finding was that wherever indeterminacy is invoked in quantum mechanics, it is not logically essential to the theory. So without claiming that quantum theory actually supports causality, she left the possibility open that it might.
To illustrate her point, Hermann considered Heisenberg’s uncertainty principle, which says that there’s a limit to the accuracy with which complementary variables, such as position, q, and momentum, p, can be measured, namely ΔqΔp ≥ h where h is Planck’s constant. Does this principle, she wondered, truly indicate quantum indeterminism?
Hermann asserted that this relation can mean only one of two possible things. One is that measuring one variable leaves the value of the other undetermined. Alternatively, the result of measuring the other variable can’t be precisely predicted. Hermann dismissed the first option because its very statement implies that exact values exist, and so it cannot be logically used to argue against determinism. The second choice could be valid, but that does not exclude the possibility of finding new properties – hidden variables – that give an exact prediction.
Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics
In making her argument about hidden variables, Hermann used her mathematical training to point out a flaw in von Neumann’s famous 1932 proof, which said that no hidden-variable theory can ever reproduce the features of quantum mechanics. Quantum mechanics, according to von Neumann, is complete and no extra deterministic features need to be added.
For decades, his result was cited as “proof” that any deterministic addition to quantum mechanics must be wrong. Indeed, von Neumann had such a well-deserved reputation as a brilliant mathematician that few people had ever bothered to scrutinize his analysis. But in 1964 the Northern Irish theorist John Bell famously showed that a valid hidden-variable theory could indeed exist, though only if it’s “non-local” (Physics 1 195).
Non-locality means that measurements made at widely separated locations can be correlated more strongly than any local mechanism could explain, yet without allowing faster-than-light communication. Despite being a notion that Einstein never liked, non-locality has been widely confirmed experimentally. In fact, non-locality is a defining feature of quantum physics and one that’s eminently useful in quantum technology.
Then, in 1966 Bell examined von Neumann’s reasoning and found an error that decisively refuted the proof (Rev. Mod. Phys. 38 447). Bell, in other words, showed that quantum mechanics could permit hidden variables after all – a finding that opened the door to alternative interpretations of quantum mechanics. However, Hermann had reported the very same error in her 1933 paper, and again in her 1935 essay, with an especially lucid exposition that almost exactly foresees Bell’s objection.
She had got there first, more than three decades earlier (see box).
Grete Hermann: 30 years ahead of John Bell
(Courtesy: iStock/Chayanan)
According to Grete Hermann, John von Neumann’s 1932 proof that quantum mechanics doesn’t need hidden variables “stands or falls” on his assumption concerning “expectation values” – an expectation value being the sum of all possible outcomes weighted by their respective probabilities. In the case of two quantities, say, r and s, von Neumann supposed that the expectation value of (r + s) is the same as the expectation value of r plus the expectation value of s. In other words, <(r + s)> = <r> + <s>.
This is clearly true in classical physics, Hermann writes, but the truth is more complicated in quantum mechanics. Suppose r and s are conjugate variables in an uncertainty relationship, such as position q and momentum p, which obey ΔqΔp ≥ h. By definition, a precise measurement of q rules out a precise measurement of p, so it is impossible to measure them simultaneously and verify the relation <q + p> = <q> + <p>.
Further analysis, which Hermann supplied and Bell presented more fully, shows exactly why this invalidates or at least strongly limits the applicability of von Neumann’s proof; but Hermann caught the essence of the error first. Bell did not recognize or cite Hermann’s work, most probably because it was hardly known to the physics community until years after his 1966 paper.
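A standard illustration of the gap – the spin-1/2 example Bell later used, rather than Hermann’s own – makes the point compactly. For quantum averages, additivity always holds,

$$\langle \sigma_x + \sigma_z \rangle = \langle \sigma_x \rangle + \langle \sigma_z \rangle,$$

but a hypothetical hidden-variable (“dispersion-free”) state would have to assign each observable one of its eigenvalues: ±1 for σx, ±1 for σz, yet ±√2 for σx + σz, which can never equal a sum of the first two. Additivity is therefore harmless for averages but cannot be imposed on the individual values a hidden-variable theory assigns – and that is the assumption von Neumann’s proof required.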
A new view of causality
After rebutting von Neumann’s proof in her 1935 essay, Hermann didn’t actually turn to hidden variables. Instead, Hermann went in a different and surprising direction, probably as a result of her discussions with Heisenberg. She accepted that quantum mechanics is a complete theory that makes only statistical predictions, but proposed an alternative view of causality within this interpretation.
We cannot foresee precise causal links in a quantum mechanics that is statistical, she wrote. But once a measurement has been made with a known result, we can work backwards to identify a cause that led to that result. In fact, Hermann showed exactly how to do this with various examples. In this way, she maintained, quantum mechanics does not refute the general Kantian category of causality.
Not all philosophers have been satisfied by the idea of retroactive causality. But writing in The Oxford Handbook of the History of Quantum Interpretations, Crull says that Hermann “provides the contours of a neo-Kantian interpretation of quantum mechanics”. “With one foot squarely on Kant’s turf and the other squarely on Bohr’s and Heisenberg’s,” Crull concludes, “[Hermann’s] interpretation truly stands on unique ground.”
Grete Hermann’s 1935 paper shows a deep and subtle grasp of elements of the Copenhagen interpretation.
But Hermann’s 1935 paper did more than just upset von Neumann’s proof. In the article, she shows a deep and subtle grasp of elements of the Copenhagen interpretation such as its correspondence principle, which says that – in the limit of large quantum numbers – answers derived from quantum physics must approach those from classical physics.
The paper also shows that Hermann was fully aware of – and indeed extended – the implications of the thought experiment Heisenberg used to illustrate the uncertainty principle. Heisenberg envisaged a photon colliding with an electron; after that contact, she writes, the wave function of the physical system is a linear combination of terms, each being “the product of one wave function describing the electron and one describing the light quantum”.
As she went on to say, “The light quantum and the electron are thus not described each by itself, but only in their relation to each other. Each state of the one is associated with one of the other.” Remarkably, this amounts to an early perception of quantum entanglement, which Schrödinger described and named later in 1935. There is no evidence, however, that Schrödinger knew of Hermann’s insights.
Hermann’s legacy
On the centenary of the birth of a full theory of quantum mechanics, how should we remember Hermann? According to Crull, the early founders of quantum mechanics were “asking philosophical questions about the implications of their theory [but] none of these men were trained in both physics and philosophy”. Hermann, however, was an expert in the two. “[She] composed a brilliant philosophical analysis of quantum mechanics, as only one with her training and insight could have done,” Crull says.
Had Hermann’s 1935 paper been more widely known, it could have altered the early development of quantum mechanics
Sadly for Hermann, few physicists at the time were aware of her 1935 paper even though she had sent copies to some of them. Had it been more widely known, her paper could have altered the early development of quantum mechanics. Reading it today shows how Hermann’s style of incisive logical examination can bring new understanding.
Hermann leaves other legacies too. As the Second World War drew to a close, she started writing about the ethics of science, especially the way in which it was carried out under the Nazis. After the war, she returned to Germany, where she devoted herself to pedagogy and teacher training. She disseminated Nelson’s views as well as her own through the reconstituted PPA, and took on governmental positions where she worked to rebuild the German educational system, apparently to good effect according to contemporary testimony.
Hermann also became active in politics as an adviser to the Social Democratic Party. She continued to have an interest in quantum mechanics, but it is not clear how seriously she pursued it in later life, which saw her move back to Bremen to care for an ill comrade from her early socialist days.
Hermann’s achievements first came to light in 1974 when the physicist and historian Max Jammer revealed her 1935 critique of von Neumann’s proof in his book The Philosophy of Quantum Mechanics. Following Hermann’s death in Bremen on 15 April 1984, interest slowly grew, culminating in Crull and Bacciagaluppi’s 2016 landmark study Grete Hermann: Between Physics and Philosophy.
The life of this deep thinker, who also worked to educate others and to achieve worthy societal goals, remains an inspiration for any scientist or philosopher today.
Oh, balls A record-breaking 34-ball, 12-storey tower with three balls per layer (photo a); a 21-ball, six-storey tower with four balls per layer (photo b); an 11-ball, three-storey tower with five balls per layer (photo c); and why a tower with six balls per layer would be impossible as the “locker” ball just sits in the middle (photo d). (Courtesy: Andria Rogava)
A few years ago, I wrote in Physics World about various bizarre structures I’d built from tennis balls, the most peculiar of which I termed “tennis-ball towers”. They consisted of a series of three-ball layers topped by a single ball (“the locker”) that keeps the whole tower intact. Each tower had (3n + 1) balls, where n is the number of triangular layers. The tallest tower I made was a seven-storey, 19-ball structure (n = 6). Shortly afterwards, I made an even bigger, nine-storey, 25-ball structure (n = 8).
Now, in the latest exciting development, I have built a new, record-breaking tower with 34 balls (n = 11), in which all 30 balls from the second to the eleventh layer are kept in equilibrium by the locker on the top (see photo a). The three balls in the bottom layer aren’t influenced by the locker as they stay in place by virtue of being on the horizontal surface of a table.
I tried going even higher but failed to build a structure that would stay intact without supporting “scaffolds”. Now in case you think I’ve just glued the balls together, watch the video below to see how the incredible 34-ball structure collapses spontaneously, probably due to a slight vibration as I walked around the table.
Even more unexpectedly, I have been able to make tennis-ball towers consisting of layers of four balls (4n + 1) and five balls too (5n + 1). Their equilibria are more delicate and, in the case of four-ball structures, so far I have only managed to build (photo b) a 21-ball, six-storey tower (n = 5). You can also see the tower in the video below.
The (5n + 1) towers are even trickier to make and (photo c) I have only got up to a three-storey structure with 11 balls (n = 2): two layers of five balls with a single locker ball on top. In case you’re wondering, towers with six balls in each layer are physically impossible to build because the six balls form a regular hexagon. You can’t just use another ball as a locker because it would simply sit between the other six (photo d).
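For the record, all the ball counts above follow the same kn + 1 rule, where k is the number of balls per layer and n the number of layers beneath the locker – a trivial check, but it confirms the numbers:

```python
def balls(k, n):
    """Total balls in a tower with n layers of k balls plus one 'locker' on top."""
    return k * n + 1

print(balls(3, 11))  # 34 -> the record-breaking 12-storey tower
print(balls(4, 5))   # 21 -> the six-storey tower
print(balls(5, 2))   # 11 -> the three-storey tower
```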
Researchers from the Karlsruhe Tritium Neutrino experiment (KATRIN) have announced the most precise upper limit yet on the neutrino’s mass. Thanks to new data and upgraded techniques, the new limit – 0.45 electron volts (eV) at 90% confidence – is half that of the previous tightest constraint, and marks a step toward answering one of particle physics’ longest-standing questions.
Neutrinos are ghostlike particles that barely interact with matter, slipping through the universe almost unnoticed. They come in three types, or flavours: electron, muon, and tau. For decades, physicists assumed all three were massless, but that changed in the late 1990s when experiments revealed that neutrinos can oscillate between flavours as they travel. This flavour-shifting behaviour is only possible if neutrinos have mass.
Although neutrino oscillation experiments confirmed that neutrinos have mass, and showed that the masses of the three flavours are different, they did not divulge the actual scale of these masses. Doing so requires an entirely different approach.
Looking for clues in electrons
In KATRIN’s case, that means focusing on a process called tritium beta decay, where a tritium nucleus (a proton and two neutrons) decays into a helium-3 nucleus (two protons and one neutron) by releasing an electron and an electron antineutrino. Due to energy conservation, the total energy from the decay is shared between the electron and the antineutrino. The neutrino’s mass determines the balance of the split.
“If the neutrino has even a tiny mass, it slightly lowers the energy that the electron can carry away,” explains Christoph Wiesinger, a physicist at the Technical University of Munich, Germany and a member of the KATRIN collaboration. “By measuring that [electron] spectrum with extreme precision, we can infer how heavy the neutrino is.”
Because the subtle effects of neutrino mass are most visible in decays where the neutrino carries away very little energy (most of it bound up in mass), KATRIN concentrates on measuring electrons that have taken the lion’s share. From these measurements, physicists can calculate neutrino mass without having to detect these notoriously weakly-interacting particles directly.
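The shape being fitted is, in simplified textbook form (natural units; the details of the KATRIN analysis are omitted),

$$\frac{\mathrm{d}N}{\mathrm{d}E} \;\propto\; F(Z,E)\,p\,(E+m_e)\,(E_0-E)\,\sqrt{(E_0-E)^2-m_\nu^2}\;\;\Theta(E_0-E-m_\nu),$$

where E is the electron’s kinetic energy, E0 ≈ 18.6 keV is the tritium endpoint, F(Z,E) is the Fermi function and mν is the effective neutrino mass. A non-zero mν both lowers the maximum electron energy and distorts the spectrum in the last few electron volts below E0 – the narrow region that KATRIN scans.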
Improvements over previous results
The new neutrino mass limit is based on data taken between 2019 and 2021, with 259 days of operations yielding over 36 million electron measurements. “That’s six times more than the previous result,” Wiesinger says.
Other improvements include better temperature control in the tritium source and a new calibration method using a monoenergetic krypton source. “We were able to reduce background noise rates by a factor of two, which really helped the precision,” he adds.
Keeping track: Laser system for the analysis of the tritium gas composition at KATRIN’s Windowless Gaseous Tritium Source. Improvements to temperature control in this source helped raise the precision of the neutrino mass limit. (Courtesy: Tritium Laboratory, KIT)
At 0.45 eV, the new limit means the neutrino is at least a million times lighter than the electron. “This is a fundamental number,” Wiesinger says. “It tells us that neutrinos are the lightest known massive particles in the universe, and maybe that their mass has origins beyond the Standard Model.”
Despite the new tighter limit, however, definitive answers about the neutrino’s mass are still some ways off. “Neutrino oscillation experiments tell us that the lower bound on the neutrino mass is about 0.05 eV,” says Patrick Huber, a theoretical physicist at Virginia Tech, US, who was not involved in the experiment. “That’s still about 10 times smaller than the new KATRIN limit… For now, this result fits comfortably within what we expect from a Standard Model that includes neutrino mass.”
Model independence
Though Huber emphasizes that there are “no surprises” in the latest measurement, KATRIN has a key advantage over its rivals. Unlike cosmological methods, which infer neutrino mass based on how it affects the structure and evolution of the universe, KATRIN’s direct measurement is model-independent, relying only on energy and momentum conservation. “That makes it very powerful,” Wiesinger argues. “If another experiment sees a measurement in the future, it will be interesting to check if the observation matches something as clean as ours.”
KATRIN’s own measurements are ongoing, with the collaboration aiming for 1000 days of operations by the end of 2025 and a final sensitivity approaching 0.3 eV. Beyond that, the plan is to repurpose the instrument to search for sterile neutrinos – hypothetical heavier particles that don’t interact via the weak force and could be candidates for dark matter.
“We’re testing things like atomic tritium sources and ultra-precise energy detectors,” Wiesinger says. “There are exciting ideas, but it’s not yet clear what the next-generation experiment after KATRIN will look like.”
The high-street bank HSBC has worked with the NQCC, hardware provider Rigetti and the Quantum Software Lab to investigate the advantages that quantum computing could offer for detecting the signs of fraud in transactional data. (Courtesy: Shutterstock/Westend61 on Offset)
Rapid technical innovation in quantum computing is expected to yield an array of hardware platforms that can run increasingly sophisticated algorithms. In the real world, however, such technical advances will remain little more than a curiosity if they are not adopted by businesses and the public sector to drive positive change. As a result, one key priority for the UK’s National Quantum Computing Centre (NQCC) has been to help companies and other organizations to gain an early understanding of the value that quantum computing can offer for improving performance and enhancing outcomes.
To meet that objective the NQCC has supported several feasibility studies that enable commercial organizations in the UK to work alongside quantum specialists to investigate specific use cases where quantum computing could have a significant impact within their industry. One prime example is a project involving the high-street bank HSBC, which has been exploring the potential of quantum technologies for spotting the signs of fraud in financial transactions. Such fraudulent activity, which affects millions of people every year, now accounts for about 40% of all criminal offences in the UK and in 2023 generated total losses of more than £2.3 bn across all sectors of the economy.
Banks like HSBC currently exploit classical machine learning to detect fraudulent transactions, but these techniques require a large computational overhead to train the models and deliver accurate results. Quantum specialists at the bank have therefore been working with the NQCC, along with hardware provider Rigetti and the Quantum Software Lab at the University of Edinburgh, to investigate the capabilities of quantum machine learning (QML) for identifying the tell-tale indicators of fraud.
“HSBC’s involvement in this project has brought transactional fraud detection into the realm of cutting-edge technology, demonstrating our commitment to pushing the boundaries of quantum-inspired solutions for near-term benefit,” comments Philip Intallura, Group Head of Quantum Technologies at HSBC. “Our philosophy is to innovate today while preparing for the quantum advantage of tomorrow.”
Another study focused on a key problem in the aviation industry that has a direct impact on fuel consumption and the amount of carbon emissions produced during a flight. In this logistical challenge, the aim was to find the optimal way to load cargo containers onto a commercial aircraft. One motivation was to maximize the amount of cargo that can be carried; the other was to balance the weight of the cargo to reduce drag and improve fuel efficiency.
“Even a small shift in the centre of gravity can have a big effect,” explains Salvatore Sinno of technology solutions company Unisys, who worked on the project along with applications engineers at the NQCC and mathematicians at the University of Newcastle. “On a Boeing 747 a displacement of just 75 cm can increase the carbon emissions on a flight of 10,000 miles by four tonnes, and also increases the fuel costs for the airline company.”
A hybrid quantum–classical solution has been used to optimize the configuration of air freight, which can improve fuel efficiency and lower carbon emissions. (Courtesy: Shutterstock/supakitswn)
With such a large number of possible loading combinations, classical computers cannot produce an exact solution for the optimal arrangement of cargo containers. In their project the team improved the precision of the solution by combining quantum annealing with high-performance computing, a hybrid approach that Unisys believes can offer immediate value for complex optimization problems. “We have reached the limit of what we can achieve with classical computing, and with this work we have shown the benefit of incorporating an element of quantum processing into our solution,” explains Sinno.
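To make the formulation concrete, the sketch below builds the kind of quadratic unconstrained binary optimization (QUBO) problem that a quantum annealer consumes, for a toy version of the loading task: binary variables assign containers to slots, with penalty terms for double assignments and for shifting the centre of gravity. All numbers and penalty weights are illustrative assumptions, not the Unisys/NQCC model.

```python
import numpy as np

# Toy data (illustrative only): container values, weights and slot positions along the fuselage
values  = np.array([10.0, 7.0, 8.0, 4.0])   # benefit of carrying each container
weights = np.array([3.0, 2.0, 2.5, 1.0])    # container weights (tonnes)
offsets = np.array([-2.0, -1.0, 1.0, 2.0])  # slot distances from the target centre of gravity (m)
A, B = 50.0, 5.0                            # penalty strengths (assumed)

n_c, n_s = len(values), len(offsets)
N = n_c * n_s
idx = lambda i, p: i * n_s + p              # flatten (container i, slot p) to a variable index

Q = np.zeros((N, N))                        # upper-triangular QUBO matrix

for i in range(n_c):
    for p in range(n_s):
        k = idx(i, p)
        Q[k, k] += -values[i]                          # reward for loading container i
        Q[k, k] += B * (weights[i] * offsets[p]) ** 2  # diagonal part of the balance penalty

for i in range(n_c):                        # each container goes in at most one slot
    for p in range(n_s):
        for q in range(p + 1, n_s):
            Q[idx(i, p), idx(i, q)] += A

for p in range(n_s):                        # each slot holds at most one container
    for i in range(n_c):
        for j in range(i + 1, n_c):
            Q[idx(i, p), idx(j, p)] += A

for i in range(n_c):                        # off-diagonal part of B * (sum of w_i * d_p * x_ip)^2
    for p in range(n_s):
        for j in range(n_c):
            for q in range(n_s):
                k, l = idx(i, p), idx(j, q)
                if k < l:
                    Q[k, l] += 2 * B * weights[i] * offsets[p] * weights[j] * offsets[q]

def energy(x):
    """QUBO cost x^T Q x for a binary assignment vector x."""
    return float(x @ Q @ x)

# Brute-force this tiny instance; in practice an annealer or hybrid solver handles this step
best = min(range(2 ** N), key=lambda b: energy(np.array([(b >> k) & 1 for k in range(N)])))
print("loaded (container, slot) pairs:", [(k // n_s, k % n_s) for k in range(N) if (best >> k) & 1])
```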
The HSBC project team also found that a hybrid quantum–classical solution could provide an immediate performance boost for detecting anomalous transactions. In this case, a quantum simulator running on a classical computer was used to run quantum algorithms for machine learning. “These simulators allow us to execute simple QML programmes, even though they can’t be run to the same level of complexity as we could achieve with a physical quantum processor,” explains Marco Paini, the project lead for Rigetti. “These simulations show the potential of these low-depth QML programmes for fraud detection in the near term.”
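As an indication of what running low-depth QML on a simulator involves, the sketch below evaluates a two-qubit variational circuit with a hand-rolled state-vector simulation in NumPy: two transaction features are angle-encoded, a single entangling gate and a trainable layer follow, and the expectation value of Z on the first qubit serves as an anomaly score. It is a minimal toy (parameter training and real fraud features are omitted), not the HSBC/Rigetti pipeline.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y-axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def anomaly_score(features, params):
    """Forward pass of a low-depth variational circuit on a 2-qubit state vector."""
    state = np.array([1.0, 0.0, 0.0, 0.0])                      # start in |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state   # angle-encode the features
    state = CNOT @ state                                         # entangling gate
    state = np.kron(ry(params[0]), ry(params[1])) @ state        # trainable layer
    return float(state @ np.kron(Z, I2) @ state)                 # <Z> on the first qubit

# Score a toy "transaction"; a threshold on the score would flag it as anomalous or not
print(anomaly_score(features=[0.3, 1.2], params=[0.5, -0.8]))
```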
The team also simulated more complex QML approaches using a similar but smaller-scale problem, demonstrating a further improvement in performance. This outcome suggests that running deeper QML algorithms on a physical quantum processor could deliver an advantage for detecting anomalies in larger datasets, even though the hardware does not yet provide the performance needed to achieve reliable results. “This initiative not only showcases the near-term applicability of advanced fraud models, but it also equips us with the expertise to leverage QML methods as quantum computing scales,” comments Intallura.
Indeed, the results obtained so far have enabled the project partners to develop a roadmap that will guide their ongoing development work as the hardware matures. One key insight, for example, is that even a fault-tolerant quantum computer would struggle to process the huge financial datasets produced by a bank like HSBC, since a finite amount of time is needed to run the quantum calculation for each data point. “From the simulations we found that the hybrid quantum–classical solution produces more false positives than classical methods,” says Paini. “One approach we can explore would be to use the simulations to flag suspicious transactions and then run the deeper algorithms on a quantum processor to analyse the filtered results.”
This particular project also highlighted the need for agreed protocols to navigate the strict rules on data security within the banking sector. For this project the HSBC team was able to run the QML simulations on its existing computing infrastructure, avoiding the need to share sensitive financial data with external partners. In the longer term, however, banks will need reassurance that their customer information can be protected when processed using a quantum computer. Anticipating this need, the NQCC has already started to work with regulators such as the Financial Conduct Authority, which is exploring some of the key considerations around privacy and data security, with that initial work feeding into international initiatives that are starting to consider the regulatory frameworks for using quantum computing within the financial sector.
For the cargo-loading project, meanwhile, Sinno says that an important learning point has been the need to formulate the problem in a way that can be tackled by the current generation of quantum computers. In practical terms that means defining constraints that reduce the complexity of the problem, but that still reflect the requirements of the real-world scenario. “Working with the applications engineers at the NQCC has helped us to understand what is possible with today’s quantum hardware, and how to make the quantum algorithms more viable for our particular problem,” he says. “Participating in these studies is a great way to learn and has allowed us to start using these emerging quantum technologies without taking a huge risk.”
Indeed, one key feature of these feasibility studies is the opportunity they offer for different project partners to learn from each other. Each project includes an end-user organization with a deep knowledge of the problem, quantum specialists who understand the capabilities and limitations of present-day solutions, and academic experts who offer an insight into emerging theoretical approaches as well as methodologies for benchmarking the results. The domain knowledge provided by the end users is particularly important, says Paini, to guide ongoing development work within the quantum sector. “If we only focused on the hardware for the next few years, we might come up with a better technical solution but it might not address the right problem,” he says. “We need to know where quantum computing will be useful, and to find that convergence we need to develop the applications alongside the algorithms and the hardware.”
Another major outcome from these projects has been the ability to make new connections and identify opportunities for future collaborations. As a national facility, the NQCC has played an important role in providing networking opportunities that bring diverse stakeholders together, creating a community of end users and technology providers, and supporting project partners with an expert and independent view of emerging quantum technologies. The NQCC has also helped the project teams to share their results more widely, generating positive feedback from the wider community that has already sparked new ideas and interactions.
“We have been able to network with start-up companies and larger enterprise firms, and with the NQCC we are already working with them to develop some proof-of-concept projects,” says Sinno. “Having access to that wider network will be really important as we continue to develop our expertise and capability in quantum computing.”
Through new experiments, researchers in Switzerland have tested models of how microwaves affect low-temperature chemical reactions between ions and molecules. Using an innovative setup, Valentina Zhelyazkova and colleagues at ETH Zurich showed for the first time how the application of microwave pulses can slow down reaction rates via nonthermal mechanisms.
Physicists have been studying chemical reactions between ions and neutral molecules for some time. At close to room temperature, classical models can closely predict how the electric fields emanating from ions will induce dipoles in nearby neutral molecules, allowing researchers to calculate these reaction rates with impressive accuracy. Yet as temperatures drop close to absolute zero, a wide array of more complex effects come into play, which have gradually been incorporated into the latest theoretical models.
“At low temperatures, models of reactivity must include the effects of the permanent electric dipoles and quadrupole moments of the molecules, the effect of their vibrational and rotational motion,” Zhelyazkova explains. “At extremely low temperatures, even the quantum-mechanical wave nature of the reactants must be considered.”
Rigorous experiments
Although these low-temperature models have steadily improved in recent years, the ability to put them to the test through rigorous experiments has so far been hampered by external factors.
In particular, stray electric fields in the surrounding environment can heat the ions and molecules, so that any important quantum effects are quickly drowned out by noise. “Consequently, it is only in the past few years that experiments have provided information on the rates of ion–molecule reactions at very low temperatures,” Zhelyazkova explains.
In their study, Zhelyazkova’s team improved on these past experiments with an innovative approach that cools the molecules internally and shields the reaction from the heating caused by stray electric fields. Their experiment involved a reaction between positively charged helium ions and neutral molecules of carbon monoxide (CO), which produces neutral helium and oxygen atoms along with a positively charged carbon ion.
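In symbols, the reaction under study is

$$\mathrm{He^{+} + CO \;\longrightarrow\; He + C^{+} + O}.$$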
To initiate the reaction, the researchers created separate but parallel supersonic beams of helium and CO that were combined in a reaction cell. “In order to overcome the problem of heating the ions by stray electric fields, we study the reactions within the distant orbit of a highly excited electron, which makes the overall system electrically neutral without affecting the ion–molecule reaction taking place within the electron orbit,” explains ETH’s Frédéric Merkt.
Giant atoms
In such a “Rydberg atom”, the highly excited electron is some distance from the helium nucleus and its other electron. As a result, a Rydberg helium atom can be considered an ion with a “spectator” electron, which has little influence over how the reaction unfolds. To ensure the best possible accuracy, “we use a printed circuit board device with carefully designed surface electrodes to deflect one of the two beams,” explains ETH’s Fernanda Martins. “We then merged this beam with the other, and controlled the relative velocity of the two beams.”
Altogether, this approach enabled the researchers to cool the molecules internally to temperatures below 10 K – where their quantum effects can dominate over externally induced noise. With this setup, Zhelyazkova, Merkt, Martins, and their colleagues could finally put the latest theoretical models to the test.
According to the latest low-temperature models, the rate of the CO–helium ion reaction should be determined by the quantized rotational states of the CO molecule – whose energies lie within the microwave range. In this case, the team used microwave pulses to put the CO into different rotational states, allowing them to directly probe their influence on the overall reaction rate.
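For reference (textbook values, not numbers from the paper), the rotational ladder being addressed is

$$E_J = hBJ(J+1), \qquad B(\mathrm{CO}) \approx 57.6\ \mathrm{GHz},$$

so the J = 0 → 1 transition driven by the microwave pulse lies near 2B ≈ 115 GHz.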
Three important findings
Altogether, their experiment yielded three important findings. First, it confirmed that the reaction rate varies depending on the rotational state of the CO molecule. Second, it showed that this reactivity can be modified by using a short microwave pulse to excite the CO molecule from its ground rotational state to its first excited state – with the excited state being less reactive than the ground state.
The third and most counterintuitive finding is that microwaves can slow down the reaction rate, via mechanisms unrelated to the heat they impart on the molecules absorbing them. “In most applications of microwaves in chemical synthesis, the microwaves are used as a way to thermally heat the molecules up, which always makes them more reactive,” Zhelyazkova says.
Building on the success of their experimental approach, the team now hopes to investigate these nonthermal mechanisms in more detail, with the aim of shedding new light on how microwaves can influence chemical reactions via effects other than heating. In turn, their results could ultimately pave the way for advanced new techniques for fine-tuning the rate of reactions between ions and neutral molecules.
Superpositions of quantum states known as Schrödinger cat states can be created in “hot” environments with temperatures up to 1.8 K, say researchers in Austria and Spain. By reducing the restrictions involved in obtaining ultracold temperatures, the work could benefit fields such as quantum computing and quantum sensing.
In 1935, Erwin Schrödinger used a thought experiment now known as “Schrödinger’s cat” to emphasize what he saw as a problem with some interpretations of quantum theory. His gedankenexperiment involved placing a quantum system (a cat in a box with a radioactive sample and a flask of poison) in a state that is a superposition of two states (“alive cat” if the sample has not decayed and “dead cat” if it has). These superposition states are now known as Schrödinger cat states (or simply cat states) and are useful in many fields, including quantum computing, quantum networks and quantum sensing.
Creating a cat state, however, was thought to require quantum particles to be in their ground state, meaning they must be cooled to extremely low temperatures. Even marginally higher temperatures were thought to destroy the fragile nature of these states, rendering them useless for applications. But the need for ultracold temperatures comes with its own challenges: it restricts the range of possible applications and hinders the development of large-scale systems such as powerful quantum computers.
Cat on a hot tin…microwave cavity?
The new work, which was carried out by researchers at the University of Innsbruck and IQOQI in Austria together with colleagues at the ICFO in Spain, challenges the idea that ultralow temperatures are a must for generating cat states. Instead of starting from the ground state, they used thermally excited states to show that quantum superpositions can exist at temperatures of up to 1.8 K – an environment that might as well be an oven in the quantum world.
Team leader Gerhard Kirchmair, a physicist at the University of Innsbruck and the IQOQI, says the study evolved from one of those “happy accidents” that characterize work in a collaborative environment. During a coffee break with a colleague, he realized he was well-equipped to prove the hypothesis of another colleague, Oriol Romero-Isart, who had shown theoretically that cat states can be generated out of a thermal state.
The experiment involved creating cat states inside a microwave cavity that acts as a quantum harmonic oscillator. This cavity is coupled to a superconducting transmon qubit that behaves as a two-level system where the superposition is generated. While the overall setup is cooled to 30 mK, the cavity mode itself is heated by equilibrating it with amplified Johnson-Nyquist noise from a resistor, making it 60 times hotter than its environment.
To establish the existence of quantum correlations at this higher temperature, the team directly measured the Wigner functions of the states. Doing so revealed the characteristic interference patterns of Schrödinger cat states.
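Schematically (normalization and sign conventions vary), the Wigner function of an ideal cat state |α⟩ ± |−α⟩ has the form

$$W(\beta) \;\propto\; e^{-2|\beta-\alpha|^{2}} + e^{-2|\beta+\alpha|^{2}} \pm 2\,e^{-2|\beta|^{2}}\cos\!\big(4\,\mathrm{Im}(\beta\alpha^{*})\big),$$

where the two Gaussian lobes correspond to the individual coherent states and the oscillating third term produces the interference fringes – the feature whose survival in the hot cavity mode the team was looking for.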
Benefits for quantum sensing and error correction
According to Kirchmair, being able to realize cat states without ground-state cooling could bring benefits for quantum sensing. The mechanical oscillator systems used to sense acceleration or force, for example, are normally cooled to the ground state to achieve the necessary high sensitivity, but such extreme cooling may not be necessary. He adds that quantum error correction schemes could also benefit, as they rely on being able to create cat states reliably; the team’s work shows that a residual thermal population places fewer limitations on this than previously thought.
“For next steps we will use the system for what it was originally designed, i.e. to mediate interactions between multiple qubits for novel quantum gates,” he tells Physics World.
Yiwen Chu, a quantum physicist from ETH Zürich in Switzerland who was not involved in this research, praises the “creativeness of the idea”. She describes the results as interesting and surprising because they seem to counter the common view that lack of purity in a quantum state degrades quantum features. She also agrees that the work could be important for quantum sensing, adding that many systems – including some more suited for sensing – are difficult to prepare in the ground state.
However, Chu notes that, for reasons stemming from the system’s parameters and the protocols the team used to generate the cat states, it should be possible to cool this particular system very efficiently to the ground state. This, she says, somewhat diminishes the argument that the method will be useful for systems where this isn’t the case. “However, these parameters and the protocols they showed might not be the only way to prepare such states, so on a fundamental level it is still very interesting,” she concludes.
With increased water scarcity and global warming looming, electrochemical technology offers low-energy mitigation pathways via desalination and carbon capture. This webinar will demonstrate how the less than 5 molar solid-state concentration swings afforded by cation intercalation materials – used originally in rocking-chair batteries – can effect desalination using Faradaic deionization (FDI). We show how the salt depletion/accumulation effect – that plagues Li-ion battery capacity under fast charging conditions – is exploited in a symmetric Na-ion battery to achieve seawater desalination, exceeding by an order of magnitude the limits of capacitive deionization with electric double layers. While initial modeling that introduced such an architecture blazed the trail for the development of new and old intercalation materials in FDI, experimental demonstration of seawater-level desalination using Prussian blue analogs required cell engineering to overcome the performance-degrading processes that are unique to the cycling of intercalation electrodes in the presence of flow, leading to innovative embedded, micro-interdigitated flow fields with broader application toward fuel cells, flow batteries, and other flow-based electrochemical devices. Similar symmetric FDI architectures using proton intercalation materials are also shown to facilitate direct-air capture of carbon dioxide with unprecedentedly low energy input by reversibly shifting pH within aqueous electrolyte.
Kyle Smith
Kyle C Smith joined the faculty of Mechanical Science and Engineering at the University of Illinois Urbana-Champaign (UIUC) in 2014 after completing his PhD in mechanical engineering (Purdue, 2012) and his post-doc in materials science and engineering (MIT, 2014). His group uses understanding of flow, transport, and thermodynamics in electrochemical devices and materials to innovate toward separations, energy storage, and conversion. For his research he was awarded the 2018 ISE-Elsevier Prize in Applied Electrochemistry of the International Society of Electrochemistry and the 2024 Dean’s Award for Early Innovation as an associate professor by UIUC’s Grainger College. Among his 59 journal papers and 14 patents and patents pending, his work that introduced Na-ion battery-based desalination using porous electrode theory [Smith and Dmello, J. Electrochem. Soc., 163, p. A530 (2016)] was among the top ten most downloaded in the Journal of the Electrochemical Society for five months in 2016. His group was also the first to experimentally demonstrate seawater-level salt removal using this approach [Do et al., Energy Environ. Sci., 16, p. 3025 (2023); Rahman et al., Electrochimica Acta, 514, p. 145632 (2025)], introducing flow fields embedded in electrodes to do so.
A model that could help explain how heavy elements are forged within collapsing stars has been unveiled by Matthew Mumpower at Los Alamos National Laboratory and colleagues in the US. The team suggests that energetic photons generated by newly forming black holes or neutron stars transmute protons within ejected stellar material into neutrons, thereby providing ideal conditions for heavy elements to form.
Astrophysicists believe that elements heavier than iron are created in violent processes such as the explosions of massive stars and the mergers of neutron stars. One way that this is thought to occur is the rapid neutron-capture process (r-process), whereby lighter nuclei created in stars capture neutrons in rapid succession. However, exactly where the r-process occurs is not well understood.
As Mumpower explains, the r-process must be occurring in environments where free neutrons are available in abundance. “But there’s a catch,” he says. “Free neutrons are unstable and decay in about 15 min. Only a few places in the universe have the right conditions to create and use these neutrons quickly enough. Identifying those places has been one of the toughest open questions in physics.”
Intense flashes of light
In their study, Mumpower’s team – which included researchers from the Los Alamos and Argonne national laboratories – looked at how lots of neutrons could be created within massive stars that are collapsing to become neutron stars or black holes. Their idea focuses on the intense flashes of light that are known to be emitted from the cores of these objects.
This radiation is emitted at wavelengths across the electromagnetic spectrum – including highly energetic gamma rays. Furthermore, the light is emitted along a pair of narrow jets, which blast outward above each pole of the star’s collapsing core. As they form, these jets plough through the envelope of stellar material surrounding the core, which had been previously ejected by the star. This is believed to create a “cocoon” of hot, dense material surrounding each jet.
In this environment, Mumpower’s team suggest that energetic photons in a jet collide with protons to create a neutron and a pion. Since these neutrons have no electric charge, many of them could dissolve into the cocoon, providing ideal conditions for the r-process to occur.
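The relevant photoproduction channel and its threshold photon energy in the proton’s rest frame (standard kinematics, not figures from the team’s simulations) are

$$\gamma + p \;\to\; n + \pi^{+}, \qquad E_{\gamma}^{\rm th} = \frac{(m_{n}+m_{\pi^{+}})^{2}-m_{p}^{2}}{2m_{p}} \approx 150\ \mathrm{MeV},$$

so only the most energetic photons in the jet can convert protons into neutrons.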
To test their hypothesis, the researchers carried out detailed computer simulations to predict the number of free neutrons entering the cocoon due to this process.
Gold and platinum
“We found that this light-based process can create a large number of neutrons,” Mumpower says. “There may be enough neutrons produced this way to build heavy elements, from gold and platinum all the way up to the heaviest elements in the periodic table – and maybe even beyond.”
If their model is correct, it suggests that the origin of some heavy elements involves processes associated with the high-energy particle physics studied at facilities such as the Large Hadron Collider.
“This process connects high-energy physics – which usually focuses on particles like quarks – with low-energy astrophysics, which studies stars and galaxies,” Mumpower says. “These are two areas that rarely intersect in the context of forming heavy elements.”
Kilonova explosions
The team’s findings also shed new light on some other astrophysical phenomena. “Our study offers a new explanation for why certain cosmic events, like long gamma-ray bursts, are often followed by kilonova explosions – the glow from the radioactive decay of freshly made heavy elements,” Mumpower continues. “It also helps explain why the pattern of heavy elements in old stars across the galaxy looks surprisingly similar.”
The findings could also improve our understanding of the chemical makeup of deep-sea deposits on Earth. The presence of both iron and plutonium in this material suggests that both elements may have been created in the same type of event, before coalescing into the newly forming Earth.
For now, the team will aim to strengthen their model through further simulations – which could better reproduce the complex, dynamic processes taking place as massive stars collapse.
US universities are in the firing line of the Trump administration, which is seeking to revoke the visas of foreign students, threatening to withdraw grants and demanding control over academic syllabuses. “The voice of science must not be silenced,” the letter writers say. “We all benefit from science, and we all stand to lose if the nation’s research enterprise is destroyed.”
Particularly hard hit are the country’s eight Ivy League universities, which have been accused of downplaying antisemitism exhibited in campus demonstrations in support of Gaza. Columbia University in New York, for example, has been trying to regain $400m in federal funds that the Trump administration threatened to cancel.
Columbia initially reached an agreement with the government on issues such as banning facemasks on its campus and taking control of its department responsible for courses on the Middle East. But on 8 April, according to reports, the National Institutes of Health, under orders from the Department of Health and Human Services, blocked all of its grants to Columbia.
Harvard University, meanwhile, has announced plans to privately borrow $750m after the Trump administration announced that it would review $9bn in the university’s government funding. Brown University in Rhode Island faces a loss of $510m, while the government has suspended several dozen research grants for Princeton University.
The administration also continues to oppose the use of diversity, equity and inclusion (DEI) programmes in universities. The University of Pennsylvania, from which Donald Trump graduated, faces the suspension of $175m in grants for offences against the government’s DEI policy.
Brain drain
Researchers in medical and social sciences are bearing the brunt of government cuts, with physics departments seeing relatively little impact on staffing and recruitment so far. “Of course we are concerned,” Peter Littlewood, chair of the University of Chicago’s physics department, told Physics World. “Nonetheless, we have made a deliberate decision not to halt faculty recruiting and stand by all our PhD offers.”
David Hsieh, executive officer for physics at California Institute of Technology, told Physics World that his department has also not taken any action so far. “I am sure that each institution is preparing in ways that make the most sense for them,” he says. “But I am not aware of any collective response at the moment.”
Yet universities are already bracing themselves for a potential brain drain. “The faculty and postdoc market is international, and the current sentiment makes the US less attractive for reasons beyond just finance,” warns Littlewood at Chicago.
That sentiment is echoed by Maura Healey, governor of Massachusetts, who claims that Europe, the Middle East and China are already recruiting the state’s best and brightest. “[They’re saying] we’ll give you a lab; we’ll give you staff. We’re giving away assets to other countries instead of training them, growing them [and] supporting them here.”
Science agencies remain under pressure too. The Department of Government Efficiency, run by Elon Musk, has already ended $420m in “unneeded” NASA contracts. The administration aims to cut the year’s National Science Foundation (NSF) construction budget, with data indicating that the agency has roughly halved its number of new grants since Trump became president.
Yet a threat to reduce the percentage of ancillary costs related to scientific grants appears to be on hold, at least for now. “NSF awardees may continue to budget and charge indirect costs using either their federally negotiated indirect cost rate agreement or the ‘de minimis’ rate of 15%, as authorized by the uniform guidance and other Federal regulations,” says an NSF spokesperson.
A quantum computer has been used for the first time to generate strings of certifiably random numbers. The protocol for doing this, which was developed by a team that included researchers at JPMorganChase and the quantum computing firm Quantinuum, could have applications in areas ranging from lotteries to cryptography – leading Quantinuum to claim it as quantum computing’s first commercial application, though other firms have made similar assertions. Separately, Quantinuum and its academic collaborators used the same trapped-ion quantum computer to explore problems in quantum magnetism and knot theory.
Genuinely random numbers are important in several fields, but classical computers cannot create them. The best they can do is to generate apparently random or “pseudorandom” numbers. Randomness is inherent in the laws of quantum mechanics, however, so quantum computers are naturally suited to random number generation. In fact, random circuit sampling – in which all qubits are initialized in a given state and allowed to evolve via quantum gates before having their states measured at the output – is often used to benchmark their power.
Of course, not everyone who wants to produce random numbers will have their own quantum computer. However, in 2023 Scott Aaronson of the University of Texas at Austin, US, and his then-PhD student Shi-Han Hung suggested that a client could send a series of pseudorandomly chosen “challenge” circuits to a central server. There, a quantum computer could perform random circuit sampling before sending the readouts to the client.
If these readouts are truly the product of random circuit sampling measurements performed on a quantum computer, they will be truly random numbers. “Certifying the ‘quantumness’ of the output guarantees its randomness,” says Marco Pistoia, JPMorganChase’s head of global technology applied research.
Importantly, this certification is something a classical computer can do. The way this works is that the client samples a subset of the bit strings in the readouts and performs a test called cross-entropy benchmarking. This test measures the probability that the numbers could have come from a non-quantum source. If the client is satisfied with this measurement, they can trust that the samples were genuinely the result of random circuit sampling. Otherwise, they may conclude that the data could have been generated by “spoofing” – that is, using a classical algorithm to mimic a quantum computer. The degree of confidence in this test, and the number of bits they are willing to settle for to achieve this confidence, is up to the client.
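As a rough illustration of what that certification involves, the sketch below computes a linear cross-entropy score for a single challenge circuit. The function name, data layout and acceptance logic are illustrative assumptions rather than the protocol’s actual implementation, and in practice computing the ideal probabilities is the step that demands serious classical hardware.

```python
import numpy as np

def linear_xeb_score(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark for one challenge circuit (illustrative sketch).

    samples:     measured bitstrings returned by the server, as integers in [0, 2**n_qubits)
    ideal_probs: the circuit's ideal output distribution (length 2**n_qubits),
                 computed classically by the verifier -- the expensive step
    Returns a value near 1 for faithful quantum sampling and near 0 for uniform guessing.
    """
    d = 2 ** n_qubits
    return d * np.mean([ideal_probs[s] for s in samples]) - 1.0

# A client would average this score over many challenge circuits and accept the bits
# as certifiably random only if the average clears a pre-agreed threshold and the
# samples arrived within the agreed time limit.
```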
High-fidelity quantum computing
In the new work, Pistoia, Aaronson, Hung and colleagues sent challenge circuits to the 56-qubit Quantinuum H2-1 quantum computer over the Internet. The attraction of the Quantinuum H2-1, Pistoia explains, is its high fidelity: “Somebody could say ‘Well, when it comes to randomness, why would you care about accuracy – it’s random anyway’,” he says. “But we want to measure whether the number that we get from Quantinuum really came from a quantum computer, and a low-fidelity quantum computer makes it more difficult to ascertain that with confidence… That’s why we needed to wait all these years, because a low-fidelity quantum computer wouldn’t have given us the certification part.”
The team then certified the randomness of the bits they got back by performing cross-entropy benchmarking using four of the world’s most powerful supercomputers, including Frontier at the US Department of Energy’s Oak Ridge National Laboratory. The results showed that it would have been impossible for a dishonest adversary with similar classical computing power to spoof a quantum computer – provided the client set a short enough time limit.
One drawback is that at present, the computational cost of verifying that random numbers have not been spoofed is similar to the computational cost of spoofing them. “New work is needed to develop approaches for which the certification process can run on a regular computer,” Pistoia says. “I think this will remain an active area of research in the future.”
A more important difference, argues Quantinuum’s Michael Foss-Feig, is that whereas the other groups used a partly analogue approach to simulating their quantum magnetic system, with all quantum gates activated simultaneously, Quantinuum’s approach divided time into a series of discrete steps, with operations following in a sequence similar to that of a classical computer. This digitization meant the researchers could perform a discrete gate operation as required, between any of the ionic qubits in their lattice. “This digital architecture is an extremely convenient way to compile a very wide range of physical problems,” Foss-Feig says. “You might think, for example, of simulating not just spins but also fermions or bosons.”
While the researchers say it would be just possible to reproduce these simulations using classical computers, they plan to study larger models soon. A 96-qubit version of their device, called Helios, is slated for launch later in 2025.
“We’ve gone through a shift”
Quantum information scientist Barry Sanders of the University of Calgary, Canada is impressed by all three works. “The real game changer here is Quantinuum’s really nice 56-qubit quantum computer,” he says. “Instead of just being bigger in its number of qubits, it’s hit multiple important targets.”
In Sanders’ view, the computer’s fully digital architecture is important for scalability, although he notes that many in the field would dispute that. The most important development, he adds, is that the research frames the value of a quantum computer in terms of its accomplishments.
“We’ve gone through a shift: when you buy a normal computer, you want to know what that computer can do for you, not how good is the transistor,” he says. “In the old days, we used to say ‘I made a quantum computer and my components are better than your components – my two-qubit gate is better’… Now we say, ‘I made a quantum computer and I’m going to brag about the problem I solved’.”
The random number generation paper is published in Nature. The others are available on the arXiv pre-print server.
A ground-breaking method to create “audible enclaves” – localized zones where sound is perceptible while remaining completely unheard outside – has been unveiled by researchers at Pennsylvania State University and Lawrence Livermore National Laboratory. Their innovation could transform personal audio experiences in public spaces and improve secure communications.
“One of the biggest challenges in sound engineering is delivering audio to specific listeners without disturbing others,” explains Penn State’s Jiaxin Zhong. “Traditional speakers broadcast sound in all directions, and even directional sound technologies still generate audible sound along their entire path. We aimed to develop a method that allows sound to be generated only at a specific location, without any leakage along the way. This would enable applications such as private speech zones, immersive audio experiences, and spatially controlled sound environments.”
To achieve precise audio targeting, the researchers used a phenomenon known as difference-frequency wave generation. This process involves emitting two ultrasonic beams – sound waves with frequencies beyond the range of human hearing – that intersect at a chosen point. At their intersection, these beams interact to produce a lower-frequency sound wave within the audible range. In their experiments, the team used ultrasonic waves at frequencies of 40 kHz and 39.5 kHz. Where these waves converged, they generated an audible sound at 500 Hz, which falls within the typical human hearing range of approximately 20 Hz–20 kHz.
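For the frequencies quoted above, the audible tone sits at the difference of the two ultrasonic frequencies:

```latex
f_{\text{audible}} = f_1 - f_2 = 40\,\text{kHz} - 39.5\,\text{kHz} = 0.5\,\text{kHz} = 500\,\text{Hz}.
```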
To prevent obstacles like human bodies from blocking the sound beams, the researchers used self-bending beams that follow curved paths instead of travelling in straight lines. They did this by passing ultrasound waves through specially designed metasurfaces, which redirected the waves along controlled trajectories, allowing them to meet at a specific point where the sound is generated.
Manipulative metasurfaces
“Metasurfaces are engineered materials that manipulate wave behaviour in ways that natural materials cannot,” said Zhong. “In our study, we use metasurfaces to precisely control the phase of ultrasonic waves, shaping them into self-bending beams. This is similar to how an optical lens bends light.”
The researchers began with computer simulations to model how ultrasonic waves would travel around obstacles, such as a human head, to determine the optimal design for the sound sources and metasurfaces. These simulations confirmed the feasibility of creating an audible enclave at the intersection of the curved beams. Subsequently, the team constructed a physical setup in a room-sized environment to validate their findings experimentally. The results closely matched their simulations, demonstrating the practical viability of their approach.
“Our method allows sound to be produced only in an intended area while remaining completely silent everywhere else,” says Zhong. “By using acoustic metasurfaces, we direct ultrasound along curved paths, making it possible to ‘place’ sound behind objects without a direct line of sight. A person standing inside the enclave can hear the sound, but someone just a few centimetres away will hear almost nothing.”
Initially, the team produced a steady 500 Hz sound within the enclave. By varying the frequencies of the two ultrasonic sources, they generated a broader range of audible sounds, covering frequencies from 125 Hz to 4 kHz. This expanded range includes much of the human auditory spectrum, increasing the potential applications of the technique.
The ability to generate sound in a confined space without any audible leakage opens up many possible applications. Museums and exhibitions could provide visitors with personalized audio experiences without the need for headphones, allowing individuals to hear different information depending on their location. In cars, drivers could receive navigation instructions without disturbing passengers, who could simultaneously listen to music or other content. Virtual and augmented reality applications could benefit from more immersive soundscapes that do not require bulky headsets.
The technology could also enhance secure communications, creating localized zones where sensitive conversations remain private even in shared spaces. In noisy environments, future adaptations of this method might allow for targeted noise cancellation, reducing unwanted sound in specific areas while preserving important auditory information elsewhere.
Future challenges
While their results are promising, the researchers acknowledge several challenges that must be addressed before the technology can be widely implemented. One concern is the intensity of the ultrasonic beams required to generate audible sound at a practical volume. Currently, achieving sufficient sound levels necessitates ultrasonic intensities that may have unknown effects on human health.
Another challenge is ensuring high-quality sound reproduction. The relationship between the ultrasonic beam parameters and the resulting audible sound is complex, making it difficult to produce clear audio across a wide range of frequencies and volumes.
“We are currently working on improving sound quality and efficiency,” Zhong said. “We are exploring deep learning and advanced nonlinear signal processing methods to optimize sound clarity. Another area of development is power efficiency — ensuring that the ultrasound-to-audio conversion is both effective and safe for practical use. In the long run, we hope to collaborate with industry partners to bring this technology to consumer electronics, automotive audio, and immersive media applications.”
Agrivoltaics is an interdisciplinary research area that lies at the intersection of photovoltaics (PVs) and agriculture. Traditional PV systems used in agricultural settings are made from silicon materials and are opaque. The opaque nature of these solar cells can block sunlight reaching plants and hinder their growth. As such, there’s a need for advanced semi-transparent solar cells that can provide sufficient power but still enable plants to grow instead of casting a shadow over them.
In a recent study headed up at the Institute for Microelectronics and Microsystems (IMM) in Italy, Alessandra Alberti and colleagues investigated the potential of semi-transparent perovskite solar cells as coatings on the roof of a greenhouse housing radicchio seedlings.
Solar cell shading an issue for plant growth
Opaque solar cells are known to induce shade avoidance syndrome in plants. This can cause morphological adaptations, including changes in chlorophyll content and an increased leaf area, as well as a change in the plant’s metabolite profile. Lower UV exposure can also reduce the content of polyphenols – antioxidant and anti-inflammatory molecules that humans get from plants.
Addressing these issues requires the development of semi-transparent PV panels with high enough efficiencies to be commercially feasible. Common panels that can be made thin enough to be semi-transparent include organic and dye-sensitized solar cells (DSSCs). Although these have been used to provide power while growing tomatoes and lettuces, they typically have a power conversion efficiency (PCE) of only a few percent – a more efficient energy harvester is still required.
A semi-transparent perovskite solar cell greenhouse
Perovskite PVs are seen as the future of the solar cell industry and show a lot of promise in terms of PCE, even if they are not yet up to the level of silicon. However, perovskite PVs can also be made semi-transparent.
Experimental set-up The laboratory-scale greenhouse. (Courtesy: CNR-IMM)
In this latest study, the researchers designed a laboratory-scale greenhouse using a semi-transparent europium (Eu)-enriched CsPbI3 perovskite-coated rooftop and investigated how radicchio seeds grew in the greenhouse for 15 days. They chose this Eu-enriched perovskite composition because CsPbI3 has superior thermal stability compared with other perovskites, making it ideal for long exposures to the Sun’s rays. The addition of Eu into the CsPbI3 structure improved the perovskite stability by minimizing the number of intrinsic defects and increasing the surface-to-volume ratio of perovskite grains.
Alongside this stability, the perovskite has no volatile components that could effuse at high surface temperatures. It also typically possesses a high PCE – the record for this composition is 21.15%, far higher (and much closer to commercial feasibility) than has been achieved with organic PVs and DSSCs. This perovskite therefore provides a good trade-off between the PCE that can be achieved and transmitting enough light to allow the seedlings to grow.
Low light conditions promote seedling growth
Even though the seedlings were exposed to lower light levels than natural sunlight, the team found that they grew more quickly, and with bigger leaves, than those under glass panels. This is attributed to the perovskite acting as a filter that lets mainly red light pass through – and red light is known to improve the photosynthetic efficiency and light absorption capabilities of plants, as well as increase the levels of sucrose and hexose within them.
The researchers also found that seedlings grown under these conditions had different gene expression patterns compared with those grown under glass. These expression patterns were associated with environmental stress responses, growth regulation, metabolism and light perception, suggesting that the seedlings naturally adapted to different light conditions – although further research is needed to see whether these adaptations will improve the crop yield.
Overall, the use of perovskite PVs strikes a good balance: it provides enough power to cover the annual energy needs for irrigation, lighting and air conditioning, while still allowing the seedlings to grow – indeed, to grow more quickly. The team suggests that the perovskite solar cells could help with indoor food production in the agricultural sector as a potentially affordable solution, although more work now needs to be done at much larger scales to test the technology’s commercial feasibility.
The first results from the Dark Energy Spectroscopic Instrument (DESI) are a cosmological bombshell, suggesting that the strength of dark energy has not remained constant throughout history. Instead, it appears to be weakening at the moment, and in the past it seems to have existed in an extreme form known as “phantom” dark energy.
The new findings have the potential to change everything we thought we knew about dark energy, a hypothetical entity that is used to explain the accelerating expansion of the universe.
“The subject needed a bit of a shake-up, and we’re now right on the boundary of seeing a whole new paradigm,” says Ofer Lahav, a cosmologist from University College London and a member of the DESI team.
DESI is mounted on the Nicholas U Mayall four-metre telescope at Kitt Peak National Observatory in Arizona, and has the primary goal of shedding light on the “dark universe”. The term dark universe reflects our ignorance of the nature of about 95% of the mass–energy of the cosmos.
Intrinsic energy density
Today’s favoured Standard Model of cosmology is the lambda–cold dark matter (CDM) model. Lambda refers to a cosmological constant, which was first introduced by Albert Einstein in 1917 to keep the universe in a steady state by counteracting the effect of gravity. We now know that the universe is expanding at an accelerating rate, so lambda is used to quantify this acceleration. It can be interpreted as an intrinsic energy density that is driving expansion. Now, DESI’s findings imply that this energy density is erratic and even more mysterious than previously thought.
DESI is creating a humungous 3D map of the universe. Its first full data release comprises 270 terabytes of data and was made public in March. The data include distance and spectral information about 18.7 million objects, including 12.1 million galaxies and 1.6 million quasars. The spectral details of about four million nearby stars are also included.
This is the largest 3D map of the universe ever made, bigger even than all the previous spectroscopic surveys combined. DESI scientists are already working with even more data that will be part of a second public release.
DESI can observe patterns in the cosmos called baryonic acoustic oscillations (BAOs). These were created after the Big Bang, when the universe was filled with a hot plasma of atomic nuclei and electrons. Density waves associated with quantum fluctuations in the Big Bang rippled through this plasma until about 379,000 years after the Big Bang, when the temperature dropped sufficiently to allow the atomic nuclei to sweep up all the electrons. This froze the plasma density waves into regions of high mass density (where galaxies formed) and low density (intergalactic space). These density fluctuations are the BAOs, and they can be mapped by doing statistical analyses of the separation between pairs of galaxies and quasars.
The BAOs grow as the universe expands, and therefore they provide a “standard ruler” that allows cosmologists to study the expansion of the universe. DESI has observed galaxies and quasars going back 11 billion years in cosmic history.
Density fluctuations DESI observations showing nearby bright galaxies (yellow), luminous red galaxies (orange), emission-line galaxies (blue), and quasars (green). The inset shows the large-scale structure of a small portion of the universe. (Courtesy: Claire Lamman/DESI collaboration)
“What DESI has measured is that the distance [between pairs of galaxies] is smaller than what is predicted,” says team member Willem Elbers of the UK’s University of Durham. “We’re finding that dark energy is weakening, so the acceleration of the expansion of the universe is decreasing.”
As co-chair of DESI’s Cosmological Parameter Estimation Working Group, it is Elbers’ job to test different models of cosmology against the data. The results point to a bizarre form of “phantom” dark energy that boosted the expansion acceleration in the past, but is not present today.
The puzzle is related to dark energy’s equation of state, which describes the ratio of the pressure of the universe to its energy density. In a universe with an accelerating expansion, the equation of state has a value less than –1/3. A value of exactly –1 characterizes the lambda–CDM model.
However, some alternative cosmological models allow the equation of state to be lower than –1. This means that the universe would expand faster than the cosmological constant would have it do. This points to a “phantom” dark energy that grew in strength as the universe expanded, but then petered out.
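To summarize the regimes in the standard notation (stated here for reference, with ρ the energy density; this shorthand is not taken from the DESI papers):

```latex
w \equiv \frac{p}{\rho}, \qquad
\text{accelerating expansion: } w < -\tfrac{1}{3}, \qquad
\text{cosmological constant } \Lambda:\; w = -1, \qquad
\text{phantom dark energy: } w < -1.
```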
“It seems that dark energy was ‘phantom’ in the past, but it’s no longer phantom today,” says Elbers. “And that’s interesting because the simplest theories about what dark energy could be do not allow for that kind of behaviour.”
Dark energy takes over
The universe began expanding because of the energy of the Big Bang. We already know that for the first few billion years of cosmic history this expansion was slowing, because the universe was smaller and denser then, and the gravity of all the matter it contains was strong enough to put the brakes on the expansion. As the universe expanded and its density dropped, gravity’s influence waned and dark energy was able to take over. What DESI is telling us is that at the point when dark energy became more influential than matter, it was in its phantom guise.
“This is really weird,” says Lahav; and it gets weirder. The energy density of dark energy reached a peak at a redshift of 0.4, which equates to about 4.5 billion years ago. At that point, dark energy ceased its phantom behaviour and since then the strength of dark energy has been decreasing. The expansion of the universe is still accelerating, but not as rapidly. “Creating a universe that does that, which gets to a peak density and then declines, well, someone’s going to have to work out that model,” says Lahav.
Scalar quantum field
Unlike the unchanging dark-energy density described by the cosmological constant, an alternative concept called quintessence describes dark energy as a scalar quantum field that can have different values at different times and locations.
However, Elbers explains that a single field such as quintessence is incompatible with phantom dark energy. Instead, he says that “there might be multiple fields interacting, which on their own are not phantom but together produce this phantom equation of state,” adding that “the data seem to suggest that it is something more complicated.”
Before cosmology is overturned, however, more data are needed. On its own, the DESI data’s departure from the Standard Model of cosmology has a statistical significance of 1.7σ. This is well below 5σ, which is considered a discovery in cosmology. However, when combined with independent observations of the cosmic microwave background and type Ia supernovae, the significance jumps to 4.2σ.
“Big rip” avoided
Confirmation of a phantom era and a current weakening would mean that dark energy is far more complex than previously thought – deepening the mystery surrounding the expansion of the universe. Indeed, had dark energy continued on its phantom course, it would have caused a “big rip” in which cosmic expansion becomes so extreme that space itself is torn apart.
“Even if dark energy is weakening, the universe will probably keep expanding, but not at an accelerated rate,” says Elbers. “Or it could settle down in a quiescent state, or if it continues to weaken in the future we could get a collapse” into a big crunch. With a form of dark energy that seems to do what it wants as its equation of state changes with time, it’s impossible to say what it will do in the future until cosmologists have more data.
Lahav, however, will wait until 5σ before changing his views on dark energy. “Some of my colleagues have already sold their shares in lambda,” he says. “But I’m not selling them just yet. I’m too cautious.”
The observations are reported in a series of papers on the arXiv server. Links to the papers can be found here.
Core physics This apple tree at Woolsthorpe Manor is believed to have been the inspiration for Isaac Newton. (Courtesy: Bs0u10e01/CC BY-SA 4.0)
Physicists in the UK have drawn up plans for an International Year of Classical Physics (IYC) in 2027 – exactly three centuries after the death of Isaac Newton. Following successful international years devoted to astronomy (2009), light (2015) and quantum science (2025), they want more recognition for a branch of physics that underpins much of everyday life.
A bright green Flower of Kent apple has now been picked as the official IYC logo in tribute to Newton, who is seen as the “father of classical physics”. Newton, who died in 1727, famously developed our understanding of gravity – one of the fundamental forces of nature – after watching an apple fall from a tree of that variety in his home town of Woolsthorpe, Lincolnshire, in 1666.
“Gravity is central to classical physics and contributes an estimated $270bn to the global economy,” says Crispin McIntosh-Smith, chief classical physicist at the University of Lincoln. “Whether it’s rockets escaping Earth’s pull or skiing down a mountain slope, gravity is loads more important than quantum physics.”
McIntosh-Smith, who also works in cosmology having developed the Cosmic Crisp theory of the universe during his PhD, will now be leading attempts to get endorsement for IYC from the United Nations. He is set to take a 10-strong delegation from Bramley, Surrey, to Paris later this month.
An official gala launch ceremony is being pencilled in for the Travelodge in Grantham, which is the closest hotel to Newton’s birthplace. A parallel scientific workshop will take place in the grounds of Woolsthorpe Manor, with a plenary lecture from TV physicist Brian Cox. Evening entertainment will feature a jazz band.
Numerous outreach events are planned for the year, including the world’s largest demonstration of a wooden block on a ramp balanced by a crate on a pulley. It will involve schoolchildren pouring Golden Delicious apples into the crate to illustrate Newton’s laws of motion. Physicists will also be attempting to break the record for the tallest tower of stacked Braeburn apples.
But there is envy from those behind the 2025 International Year of Quantum Science and Technology. “Of course, classical physics is important but we fear this year will peel attention away from the game-changing impact of quantum physics,” says Anne Oyd from the start-up firm Qrunch, who insists she will only play a cameo role in events. “I believe the impact of classical physics is over-hyped.”
FLASH irradiation, an emerging cancer treatment that delivers radiation at ultrahigh dose rates, has been shown to significantly reduce acute skin toxicity in laboratory mice compared with conventional radiotherapy. Having demonstrated this effect using proton-based FLASH treatments, researchers from Aarhus University in Denmark have now repeated their investigations using electron-based FLASH (eFLASH).
Reporting their findings in Radiotherapy and Oncology, the researchers note a “remarkable similarity” between eFLASH and proton FLASH with respect to acute skin sparing.
Principal investigator Brita Singers Sørensen and colleagues quantified the dose–response modification of eFLASH irradiation for acute skin toxicity and late fibrotic toxicity in mice, using similar experimental designs to those previously employed for their proton FLASH study. This enabled the researchers to make direct quantitative comparisons of acute skin response between electrons and protons. They also compared the effectiveness of the two modalities to determine whether radiobiological differences were observed.
Over four months, the team examined 197 female mice across five irradiation experiments. After being weighed, earmarked and given an ID number, each mouse was randomized to receive either eFLASH irradiation (average dose rate of 233 Gy/s) or conventional electron radiotherapy (average dose rate of 0.162 Gy/s) at various doses.
For the treatment, two unanaesthetized mice (one from each group) were restrained in a jig with their right legs placed in a water bath and irradiated by a horizontal 16 MeV electron beam. The animals were placed on opposite sides of the field centre and irradiated simultaneously, with their legs at a 3.2 cm water-equivalent depth, corresponding to the dose maximum.
The researchers used a diamond detector to measure the absolute dose at the target position in the water bath and assumed that the mouse foot target received the same dose. The resulting foot doses were 19.2–57.6 Gy for eFLASH treatments and 19.4–43.7 Gy for conventional radiotherapy, chosen to cover the entire range of acute skin response.
FLASH confers skin protection
To evaluate the animals’ response to irradiation, the researchers assessed acute skin damage daily from seven to 28 days post-irradiation using an established assay. They weighed the mice weekly, and one of three observers blinded to previous grades and treatment regimens assessed skin toxicity. Photographs were taken whenever possible. Skin damage was also graded using an automated deep-learning model, generating a dose–response curve independent of observer assessments.
The researchers also assessed radiation-induced fibrosis in the leg joint, biweekly from weeks nine to 52 post-irradiation. They defined radiation-induced fibrosis as a permanent reduction of leg extensibility by 75% or more in the irradiated leg compared with the untreated left leg.
To assess the tissue-sparing effect of eFLASH, the researchers used dose–response curves to derive TD50 – the toxic dose eliciting a skin response in 50% of mice. They then determined a dose modification factor (DMF), defined as the ratio of eFLASH TD50 to conventional TD50. A DMF larger than one suggests that eFLASH reduces toxicity.
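In symbols, this is simply a ratio of the two toxic doses (a restatement of the definition above, not a formula quoted from the paper):

```latex
\mathrm{DMF} = \frac{TD_{50}^{\,\mathrm{eFLASH}}}{TD_{50}^{\,\mathrm{conventional}}},
\qquad \text{so } \mathrm{DMF} = 1.5 \text{ means a 50\% higher eFLASH dose gives the same toxicity.}
```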
The eFLASH treatments had a DMF of 1.45–1.54 – in other words, a 45–54% higher dose was needed to cause comparable skin toxicity to that caused by conventional radiotherapy. “The DMF indicated a considerable acute skin sparing effect of eFLASH irradiation,” the team explain. Radiation-induced fibrosis was also reduced using eFLASH, with a DMF of 1.15.
Reducing skin damage Dose-response curves for acute skin toxicity (left) and fibrotic toxicity (right) for conventional electron radiotherapy and electron FLASH treatments. (Courtesy: CC BY 4.0/adapted from Radiother. Oncol. 10.1016/j.radonc.2025.110796)
For DMF-based equivalent doses, the development of skin toxicity over time was similar for eFLASH and conventional treatments throughout the dose groups. This supports the hypothesis that eFLASH modifies the dose–response rather than changing the underlying biological mechanism. The team also notes that the difference in DMF between the fibrotic response and acute skin damage suggests that FLASH sparing depends on tissue type and may differ between acute- and late-responding tissues.
Similar skin damage between electrons and protons
Sørensen and colleagues compared their findings to previous studies of normal-tissue damage from proton irradiation, both in the entrance plateau and using the spread-out Bragg peak (SOBP). DMF values for electrons (1.45–1.54) were similar to those of transmission protons (1.44–1.50) and slightly higher than for SOBP protons (1.35–1.40). “Despite dose rate and pulse structure differences, the response to electron irradiation showed substantial similarity to transmission and SOBP damage,” they write.
Although the average eFLASH dose rate (233 Gy/s) was higher than that of the proton studies (80 and 60 Gy/s), it did not appear to influence the biological response. This supports the hypothesis that beyond a certain dose rate threshold, the tissue-sparing effect of FLASH does not increase notably.
The researchers point out that previous studies also found biological similarities in the FLASH effect for electrons and protons, with this latest work adding data on similar comparable and quantifiable effects. They add, however, that “based on the data of this study alone, we cannot say that the biological response is identical, nor that the electron and proton irradiation elicit the same biological mechanisms for DNA damage and repair. This data only suggests a similar biological response in the skin.”
Last year the UK government placed a new cap of £9535 on annual tuition fees, a figure that will likely rise in the coming years as universities tackle a funding crisis. Indeed, shortfalls are already affecting institutions, with some saying they will run out of money in the next few years. The past couple of months alone have seen several universities announce plans to shed academic staff and even shut departments.
Whether you agree with tuition fees or not, the fact is that students will continue to pay a significant sum for a university education. Value for money is part of the university proposition and lecturers can play a role by conveying the excitement of their chosen field. But what are the key requirements to help do so? In the late 1990s we carried out a study aimed at improving the long-term performance of students who initially struggled with university-level physics.
With funding from the Higher Education Funding Council for Wales, the study involved structured interviews with 28 students and 17 staff. An internal report – The Rough Guide to Lecturing – was written which, while not published, informed the teaching strategy of Cardiff University’s physics department for the next quarter of a century.
From the findings we concluded that lecture courses can be significantly enhanced by simply focusing on three principles, which we dub the three “E”s. The first “E” is enthusiasm. If a lecturer appears bored with the subject – perhaps they have given the same course for many years – why should their students be interested? This might sound obvious, but a bit of reading, or examining the latest research, can do wonders to freshen up a lecture that has been given many times before.
For both old and new courses it is usually possible to highlight at least one current research paper in a semester’s lectures. Students are not going to understand all of the paper, but that is not the point – it is the sharing in contemporary progress that will elicit excitement. Commenting on a nifty experiment in the work, or on the elegance of the theory, can help to inspire both teacher and student.
As well as freshening up the lecture course’s content, another tip is to set the subject in its wider context, perhaps by mentioning its history or possible exciting applications. Be inventive – we have evidence of a lecturer “live” translating parts of Louis de Broglie’s classic 1925 paper “La relation du quantum et la relativité” during a lecture. It may seem unlikely, but the students responded rather well to that.
Supporting students
The second “E” is engagement. The role of the lecturer as a guide is obvious, but it should also be emphasized that the learner’s desire is to share the lecturer’s passion for, and mastery of, a subject. Styles of lecturing and visual aids can vary greatly between people, but the important thing is to keep students thinking.
Don’t succumb to the apocryphal definition of a lecture as merely a means of transferring the lecturer’s notes to the student’s pad without passing through the minds of either person. In our study, when the students were asked “What do you expect from a lecture?”, they responded simply that they wanted to learn something new – though we might extend this to a desire to learn how to do something new.
Simple demonstrations can be effective for engagement. Large foam dice, for example, can illustrate the non-commutation of 3D rotations. Fidget-spinners in the hands of students can help explain the vector nature of angular momentum. Lecturers should also ask rhetorical questions that make students think, but do not expect or demand answers, particularly in large classes.
More importantly, if a student asks a question, never insult them – there is no such thing as a “stupid” question. After all, what may seem a trivial point could eliminate a major conceptual block for them. If you cannot answer a technical query, admit it and say you will find out for next time – but make sure you do. Indeed, seeing that the lecturer has to work at the subject too can be very encouraging for students.
The final “E” is enablement. Make sure that students have access to supporting material. This could be additional notes; a carefully curated reading list of papers and books; or sets of suitable interesting problems with hints for solutions, worked examples they can follow, and previous exam papers. Explain what amount of self-study will be needed if they are going to benefit from the course.
Have clear and accessible statements concerning the course content and learning outcomes – in particular, what students will be expected to be able to do as a result of their learning. In our study, the general feeling was that a limited amount of continuous assessment (10–20% of the total lecture course mark) encourages both participation and overall achievement, provided students are given good feedback to help them improve.
Next time you are planning to teach a new course, or looking through those decades-old notes, remember enthusiasm, engagement and enablement. It’s not rocket science, but it will certainly help the students learn it.
Researchers in China have unveiled a 105-qubit quantum processor that can solve in minutes a quantum computation problem that would take billions of years using the world’s most powerful classical supercomputers. The result sets a new benchmark for claims of so-called “quantum advantage”, though some previous claims have faded after classical algorithms improved.
The fundamental promise of quantum computation is that it will reduce the computational resources required to solve certain problems. More precisely, it promises to reduce the rate at which resource requirements grow as problems become more complex. Evidence that a quantum computer can solve a problem faster than a classical computer – quantum advantage – is therefore a key measure of success.
The first claim of quantum advantage came in 2019, when researchers at Google reported that their 53-qubit Sycamore processor had solved a problem known as random circuit sampling (RCS) in just 200 seconds. Xiaobo Zhu, a physicist at the University of Science and Technology of China (USTC) in Hefei who co-led the latest work, describes RCS as follows: “First, you initialize all the qubits, then you run them in single-qubit and two-qubit gates and finally you read them out,” he says. “Since this process includes every key element of quantum computing, such as initializing the gate operations and readout, unless you have really good fidelity at each step you cannot demonstrate quantum advantage.”
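As a toy version of that three-step recipe – initialize, apply layers of random single- and two-qubit gates, then measure – the sketch below uses the open-source Qiskit library purely for illustration; it is not the circuit family used by either group, and real experiments run far deeper circuits on far more qubits.

```python
import numpy as np
from qiskit import QuantumCircuit

rng = np.random.default_rng(seed=1)
n_qubits, depth = 5, 4

qc = QuantumCircuit(n_qubits, n_qubits)              # all qubits start in |0...0>
for layer in range(depth):
    for q in range(n_qubits):                        # random single-qubit rotations
        theta, phi, lam = rng.uniform(0, 2 * np.pi, size=3)
        qc.u(theta, phi, lam, q)
    for q in range(layer % 2, n_qubits - 1, 2):      # alternating two-qubit entangling gates
        qc.cx(q, q + 1)
qc.measure(range(n_qubits), range(n_qubits))         # read out every qubit

# Sampling the outputs of many such pseudorandomly chosen circuits is the task
# benchmarked in these quantum-advantage experiments.
```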
At the time, the Google team claimed that the best supercomputers would take 10,000 years to solve this problem. However, subsequent improvements to classical algorithms reduced this to less than 15 seconds. This pattern has continued ever since, with experimentalists pushing quantum computing forward even as information theorists make quantum advantage harder to achieve by improving techniques used to simulate quantum algorithms on classical computers.
Recent claims of quantum advantage
In October 2024, Google researchers announced that their 67-qubit Sycamore processor had solved an RCS problem that would take an estimated 3600 years for the Frontier supercomputer at the US’s Oak Ridge National Laboratory to complete. In the latest work, published in Physical Review Letters, Jian-Wei Pan, Zhu and colleagues set the bar even higher. They show that their new Zuchongzhi 3.0 processor can complete in minutes an RCS calculation that they estimate would take Frontier billions of years using the best classical algorithms currently available.
To achieve this, they redesigned the readout circuit of their earlier Zuchongzhi processor to improve its efficiency, modified the structures of the qubits to increase their coherence times and increased the total number of superconducting qubits to 105. “We really upgraded every aspect and some parts of it were redesigned,” Zhu says.
Google’s latest processor, Willow, also uses 105 superconducting qubits, and in December 2024 researchers there announced that they had used it to demonstrate quantum error correction. This achievement, together with complementary advances in Rydberg atom qubits from Harvard University’s Mikhail Lukin and colleagues, was named Physics World’s Breakthrough of the Year in 2024. However, Zhu notes that Google has not yet produced any peer-reviewed research on using Willow for RCS, making it hard to compare the two systems directly.
The USTC team now plans to demonstrate quantum error correction on Zuchongzhi 3.0. This will involve using an error correction code such as the surface code to combine multiple physical qubits into a single “logical qubit” that is robust to errors. “The requirements for error-correction readout are much more difficult than for RCS,” Zhu notes. “RCS only needs one readout, whereas error-correction needs readout many times with very short readout times…Nevertheless, RCS can be a benchmark to show we have the tools to run the surface code. I hope that, in my lab, within a few months we can demonstrate a good-quality error correction code.”
“How progress gets made”
Quantum information theorist Bill Fefferman of the University of Chicago in the US praises the USTC team’s work, describing it as “how progress gets made”. However, he offers two caveats. The first is that recent demonstrations of quantum advantage do not have efficient classical verification schemes – meaning, in effect, that classical computers cannot check the quantum computer’s work. While the USTC researchers simulated a smaller problem on both classical and quantum computers and checked that the answers matched, Fefferman doesn’t think this is sufficient. “With the current experiments, at the moment you can’t simulate it efficiently, the verification doesn’t work anymore,” he says.
The second caveat is that the rigorous hardness arguments proving that the classical computational power needed to solve an RCS problem grows exponentially with the problem’s complexity apply only to situations with no noise. This is far from the case in today’s quantum computers, and Fefferman says this loophole has been exploited in many past quantum advantage experiments.
Still, he is upbeat about the field’s prospects. “The fact that the original estimates the experimentalists gave did not match some future algorithm’s performance is not a failure: I see that as progress on all fronts,” he says. “The theorists are learning more and more about how these systems work and improving their simulation algorithms and, based on that, the experimentalists are making their systems better and better.”
Sometimes, you just have to follow your instincts and let serendipity take care of the rest.
North Ronaldsay, a remote island north of mainland Orkney, has a population of about 50 and a lot of sheep. In the early 19th century, it thrived on the kelp ash industry, producing sodium carbonate (soda ash), potassium salts and iodine for soap and glass making.
But when cheaper alternatives became available, the island turned to its unique breed of seaweed-eating sheep. In 1832 islanders built a 12-mile-long dry stone wall around the island to keep the sheep on the shore, preserving inland pasture for crops.
My connection with North Ronaldsay began last summer when my partner, Sue Bowler, and I volunteered for the island’s Sheep Festival, where teams of like-minded people rebuild sections of the crumbling wall. That experience made us all the more excited when we learned that North Ronaldsay also had a science festival.
This year’s event took place on 14–16 March, and getting there was no small undertaking. From our base in Leeds, the journey involved a 500-mile drive to a ferry, a crossing to the Orkney mainland and, finally, a flight in a light aircraft. With the island home to just 50 people, we had no idea how many would turn up, but instinct told us it was worth the trip.
Sue, who works for the Royal Astronomical Society (RAS), presented Back to the Moon, while together we ran hands-on maker activities, a geology walk and a trip to the lighthouse, where we explored light beams and Fresnel lenses.
The Yorkshire Branch of the Institute of Physics (IOP) provided laser-cut hoist kits to demonstrate levers and concepts like mechanical advantage, while the RAS shared Connecting the Dots – a modern LED circuit version of a Victorian after-dinner card set illustrating constellations.
Hands-on science Participants get stuck into maker activities at the festival. (Courtesy: @Lazy.Photon on Instagram)
Despite the island’s small size, the festival drew attendees from neighbouring islands, with 56 people participating in person and another 41 joining online. Across multiple events, the total accumulated attendance reached 314.
One thing I’ve always believed in science communication is to listen to your audience and never make assumptions. Orkney has a rich history of radio and maritime communications, shaped in part by the strategic importance of Scapa Flow during the Second World War.
Stars in their eyes Making a constellation board at the North Ronaldsay Science Festival. (Courtesy: @Lazy.Photon on Instagram)
The Orkney Wireless Museum is a testament to this legacy, and one of our festival guests had even reconstructed a working 1930s Baird television receiver for the museum.
Leaving North Ronaldsay was hard. The festival sparked fascinating conversations, and I hope we inspired a few young minds to explore physics and astronomy.
The author would like to thank Alexandra Wright (festival organizer), Lucinda Offer (education, outreach and events officer at the RAS) and Sue Bowler (editor of Astronomy & Geophysics).
Cell separation Illustration of the fabricated optimal acousto-microfluidic chip. (Courtesy: Afshin Kouhkord and Naserifar Naser)
Analysing circulating tumour cells (CTCs) in the blood could help scientists detect cancer in the body. But separating CTCs from blood is a difficult, laborious process and requires large sample volumes.
Researchers at the K N Toosi University of Technology (KNTU) in Tehran, Iran, believe that ultrasonic waves could separate CTCs from red blood cells accurately, in an energy-efficient way and in real time. The study is published in the journal Physics of Fluids.
“In a broader sense, we asked: ‘How can we design a microfluidic, lab-on-a-chip device powered by SAWs [standing acoustic waves] that remains simple enough for medical experts to use easily, while still delivering precise and efficient cell separation?’,” says senior author Naser Naserifar, an assistant professor in mechanical engineering at KNTU. “We became interested in acoustofluidics because it offers strong, biocompatible forces that effectively handle cells with minimal damage.”
Acoustic waves can deliver enough force to move cells over small distances without damaging them. The researchers used dual pressure acoustic fields at critical positions in a microchannel to separate CTCs from other cells. The CTCs are gathered at an outlet for further analyses, cultures and laboratory procedures.
In the process of designing the chip, the researchers integrated computational modelling, experimental analysis and artificial intelligence (AI) algorithms to analyse acoustofluidic phenomena and generate datasets that predict CTC migration in the body.
“We introduced an acoustofluidic microchannel with two optimized acoustic zones, enabling fast, accurate separation of CTCs from RBCs [red blood cells],” explains Afshin Kouhkord, who performed the work while a master’s student in the Advance Research in Micro And Nano Systems Lab at KNTU. “Despite the added complexity under the hood, the resulting chip is designed for simple operation in a clinical environment.”
So far, the researchers have evaluated the device with numerical simulations and tested it using a physical prototype. Simulations modelled fluid flow, acoustic pressure fields and particle trajectories. The physical prototype was made of lithium niobate, with polystyrene microspheres used as surrogates for red blood cells and CTCs. Results from the prototype agreed with numerical simulations to within 3.5%.
“This innovative approach in laboratory-on-chip technology paves the way for personalized medicine, real-time molecular analysis and point-of-care diagnostics,” Kouhkord and Naserifar write.
The researchers are now refining their design, aiming for a portable device that could be operated with a small battery pack in resource-limited and remote environments.
D-Wave Systems has used quantum annealing to do simulations of quantum magnetic phase transitions. The company claims that some of their calculations would be beyond the capabilities of the most powerful conventional (classical) computers – an achievement referred to as quantum advantage. This would mark the first time quantum computers had achieved such a feat for a practical physics problem.
However, the claim has been challenged by two independent groups of researchers in Switzerland and the US, who have published papers on the arXiv preprint server that report that similar calculations could be done using classical computers. D-Wave’s experts believe these classical results fall well short of the company’s own accomplishments, and some independent experts agree with D-Wave.
While most companies trying to build practical quantum computers are developing “universal” or “gate model” quantum systems, US-based D-Wave has principally focused on quantum annealing devices. Although such systems are less programmable than gate-model systems, the approach has allowed D-Wave to build machines with many more quantum bits (qubits) than any of its competitors. Whereas researchers at Google Quantum AI and researchers in China have each recently unveiled 105-qubit universal quantum processors, some of D-Wave’s processors have more than 5000 qubits. Moreover, D-Wave’s systems are already in practical use, with hardware owned by the Japanese mobile phone company NTT Docomo being used to optimize cell tower operations. Systems are also being used for network optimization at motor companies, food producers and elsewhere.
Trevor Lanting, the chief development officer at D-Wave, explains the central principles behind quantum-annealing computation: “You have a network of qubits with programmable couplings and weights between those devices and then you program in a certain configuration – a certain bias on all of the connections in the annealing processor,” he says. The quantum annealing algorithm places the system in a superposition of all possible states of the system. When the couplings are slowly switched off, the system settles into its most energetically favoured state – which is the desired solution.
Quantum hiking
Lanting compares this to a hiker in the mountains searching for the lowest point on a landscape: “As a classical hiker all you can really do is start going downhill until you get to a minimum,” he explains. “The problem is that, because you’re not doing a global search, you could get stuck in a local valley that isn’t at the minimum elevation.” By starting out in a quantum superposition of all possible states (or locations in the mountains), however, quantum annealing is able to find the global minimum.
In the new work, researchers at D-Wave and elsewhere set out to show that their machines could use quantum annealing to solve practical physics problems beyond the reach of classical computers. The researchers used two different 1200-qubit processors to model magnetic quantum phase transitions. This is a similar problem to one studied in gate-model systems by researchers at Google and Harvard University in independent work announced in February.
“When water freezes into ice, you can sometimes see patterns in the ice crystal, and this is a result of the dynamics of the phase transition,” explains Andrew King, who is senior distinguished scientist at D-Wave and the lead author of a paper describing the work. “The experiments that we’re demonstrating shed light on a quantum analogue of this phenomenon taking place in a magnetic material that has been programmed into our quantum processors and a phase transition driven by a magnetic field.” Understanding such phase transitions is important in the discovery and design of new magnetic materials.
Quantum versus classical
The researchers studied multiple configurations, comprising ever-more spins arranged in ever-more complex lattice structures. The company says that its system performed the most complex simulation in minutes. They also ascertained how long it would take to do the simulations using several leading classical computation techniques, including neural network methods, and how the time to achieve a solution grew with the complexity of the problem. Based on this, they extrapolated that the most complex lattices would require almost a million years on Frontier, which is one of the world’s most powerful supercomputers.
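The extrapolation step is conceptually straightforward: measure how the classical time-to-solution grows with problem size for the cases that can be run, fit a scaling law, then evaluate the fit at the largest lattices. The sketch below is a purely hypothetical illustration of that procedure – the numbers and the assumed exponential scaling are invented for the example and are not taken from the D-Wave paper or its critics.

import numpy as np

# Hypothetical benchmark data: problem size vs classical runtime (seconds).
# These values are made up purely to illustrate the extrapolation procedure.
sizes = np.array([100, 200, 300, 400])
runtimes = np.array([1e1, 1e3, 1e5, 1e7])

# Assume exponential scaling: log(runtime) is linear in problem size.
slope, intercept = np.polyfit(sizes, np.log(runtimes), 1)

# Extrapolate to a much larger problem size.
big_size = 1200
predicted_seconds = np.exp(slope * big_size + intercept)
print(f"Extrapolated runtime: {predicted_seconds / 3.15e7:.2e} years")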
However, two independent groups – one at EPFL in Switzerland and one at the Flatiron Institute in the US – have posted papers on the arXiv preprint server claiming to have done some of the less complex calculations using classical computers. They argue that their results should scale simply to larger sizes; the implication being that classical computers could solve the more complicated problems addressed by D-Wave.
King has a simple response: “You don’t just need to do the easy simulations, you need to do the hard ones as well, and nobody has demonstrated that.” Lanting adds that “I see this as a healthy back and forth between quantum and classical methods, but I really think that, with these results, we’re pulling ahead of classical methods on the biggest scales we can calculate”.
Very interesting work
Frank Verstraete of the University of Cambridge is unsurprised by some scientists’ scepticism. “D-Wave have historically been the absolute champions at overselling what they did,” he says. “But now it seems they’re doing something nobody else can reproduce, and in that sense it’s very interesting.” He does note, however, that the specific problem chosen is not, in his view, an interesting one from a physics perspective, and has been chosen purely to be difficult for a classical computer.
Daniel Lidar of the University of Southern California, who has previously collaborated with D-Wave on similar problems but was not involved in the current work, says “I do think this is quite the breakthrough…The ability to anneal very fast on the timescales of the coherence times of the qubits has now become possible, and that’s really a game changer here.” He concludes that “the arms race is destined to continue between quantum and classical simulations, and because, in all likelihood, these are problems that are extremely hard classically, I think the quantum win is going to become more and more indisputable.”
Scientists who have been publicly accused of sexual misconduct see a significant and immediate decrease in the rate at which their work is cited, according to a study by behavioural scientists in the US. However, researchers who are publicly accused of scientific misconduct do not suffer the same drop in citations (PLOS One 20 e0317736). Despite their flaws, citation rates are often seen as a marker of impact and quality.
The study was carried out by a team led by Giulia Maimone from the University of California, Los Angeles, who collected data from the Web of Science covering 31,941 scientific publications across 18 disciplines. They then analysed the citation rates for 5888 papers authored by 30 researchers accused of either sexual or scientific misconduct, the latter including data fabrication, falsification and plagiarism.
Maimone told Physics World that they used strict selection criteria to ensure that the two groups of academics were comparable and that the accusations against them were public. This meant her team only included scholars whose misconduct allegations had been reported in the media and had “detailed accounts of the allegations online”.
Maimone’s team concluded that papers by scientists accused of sexual misconduct experienced a significant drop in citations in the three years after the allegations became public, compared with a “control” group of academics of similar professional standing. Those accused of scientific fraud, meanwhile, saw no statistically significant change in the citation rates of their papers.
Further work
To further explore attitudes towards sexual and scientific misconduct, the researchers surveyed 231 non-academics and 240 academics. The non-academics considered sexual misconduct more reprehensible than scientific misconduct and more deserving of punishment, while the academics claimed they would be more likely to keep citing researchers accused of sexual misconduct than those accused of scientific misconduct. “Exactly the opposite of what we observe in the real data,” adds Maimone.
According to the researchers, there are two possible explanations for this discrepancy. One is that academics, according to Maimone, “overestimate their ability to disentangle the scientists from the science”. Another is that scientists are aware that they would not cite sexual harassers, but they are unwilling to admit it because they feel they should take a harsher professional approach towards scientific misconduct.
Maimone says they would now like to explore the longer-term consequences of misconduct as well as the psychological mechanisms behind the citation drop for those accused of sexual misconduct. “Do [academics] simply want to distance themselves from these allegations or are they actively trying to punish these scholars?” she asks.
Researchers have demonstrated that they can remotely detect radioactive material from 10 m away using short-pulse CO2 lasers – a distance more than ten times greater than that achieved with previous methods.
Conventional radiation detectors, such as Geiger counters, detect particles that are emitted by the radioactive material, typically limiting their operational range to the material’s direct vicinity. The new method, developed by a research team headed up at the University of Maryland, instead leverages the ionization in the surrounding air, enabling detection from much greater distances.
The study may one day lead to remote sensing technologies that could be used in nuclear disaster response and nuclear security.
Using atmospheric ionization
Radioactive materials emit particles – such as alpha, beta or gamma particles – that can ionize air molecules, creating free electrons and negative ions. These charged particles are typically present at very low concentrations, making them difficult to detect.
Senior author Howard Milchberg and colleagues – also from Brookhaven National Laboratory, Los Alamos National Laboratory and Lawrence Livermore National Laboratory – demonstrated that CO2 lasers could accelerate these charged particles, causing them to collide with neutral gas molecules, in turn creating further ionization. These additional free charges would then undergo the same laser-induced accelerations and collisions, leading to a cascade of charged particles.
This effect, known as “electron avalanche breakdown”, can create microplasmas that scatter laser light. By measuring the profile of the backscattered light, researchers can detect the presence of radioactive material.
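Schematically, avalanche breakdown is an exponential multiplication process. If each free electron driven by the laser field ionizes further molecules at an effective rate \nu, the electron population grows roughly as

N(t) \approx N_0 \, e^{\nu t},

so the handful of seed electrons N_0 produced near a radioactive source can multiply into a detectable microplasma within a single laser pulse. (This is the generic cascade-ionization picture, included for orientation rather than the detailed model used in the study.)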
The team tested their technique using a 3.6-mCi polonium-210 alpha particle source at a standoff distance of 10 m, significantly longer than previous experiments that used different types of lasers and electromagnetic radiation sources.
“The researchers successfully demonstrated 10-m standoff detection of radioactive material, significantly surpassing the previous range of approximately 1 m,” says Choi.
Milchberg and collaborators had previously used a mid-infrared laser in a similar experiment in 2019. Changing to a long-wavelength (9.2 μm) CO2 laser brought significant advantages, he says.
“You can’t use any laser to do this cascading breakdown process,” Milchberg explains. The CO2 laser’s wavelength was able to enhance the avalanche process, while being low energy enough to not create its own ionization sources. “CO2 is sort of the limit for long wavelengths on powerful lasers and it turns out CO2 lasers are very, very efficient as well,” he says. “So this is like a sweet spot.”
Imaging microplasmas
The team also used a CMOS camera to capture visible-light emissions from the microplasmas. Milchberg says that this fluorescence around radioactive sources resembled balls of plasma, indicating the localized regions where electron avalanche breakdowns had occurred.
By counting these “plasma balls” and calibrating them against the backscattered laser signal, the researchers could link fluorescence intensity to the density of ionization in the air, and use that to determine the type of radiation source.
The CMOS imagers, however, had to be placed close to the measured radiation source, reducing their applicability to remote sensing. “Although fluorescence imaging is not practical for field deployment due to the need for close-range cameras, it provides a valuable calibration tool,” Milchberg says.
Scaling to longer distances
The researchers believe their method can be extended to standoff distances exceeding 100 m. The primary limitation is the laser’s focusing geometry, which would affect the regions in which it could trigger an avalanche breakdown. A longer focal length would require a larger laser aperture but could enable kilometre-scale detection.
Choi points out, however, that deploying a CO2 laser may be difficult in real-world applications. “A CO₂ laser is a bulky system, making it challenging to deploy in a portable manner in the field,” she says, adding that mounting the laser for long-range detection may be a solution.
Milchberg says that the next steps will be to continue developing a technique that can differentiate between different types of radioactive sources completely remotely. Choi agrees, noting that accurately quantifying both the amount and type of radioactive material continues to be a significant hurdle to realising remote sensing technologies in the field.
“There’s also the question of environmental conditions,” says Milchberg, explaining that it is critical to ensure that detection techniques are robust against the noise introduced by aerosols or air turbulence.
The Square Kilometre Array (SKA) Observatory has released the first images from its partially built low-frequency telescope in Australia, known as SKA-Low.
The new SKA-Low image was created using 1024 two-metre-high antennas. It shows an area of the sky that would be obscured by a person’s clenched fist held at arm’s length.
Observed at 150 MHz to 175 MHz, the image contains 85 of the brightest known galaxies in that region, each with a black hole at its centre.
“We are demonstrating that the system as a whole is working,” notes SKA Observatory director-general Phil Diamond. “As the telescopes grow, and more stations and dishes come online, we’ll see the images improve in leaps and bounds and start to realise the full power of the SKAO.”
SKA-Low will ultimately have 131 072 two-metre-high antennas that will be clumped together in arrays to act as a single instrument.
These arrays collect the relatively quiet signals from space and combine them to produce radio images of the sky with the aim of answering some of cosmology’s most enigmatic questions, including what dark matter is, how galaxies form, and if there is other life in the universe.
When the full SKA-Low gazes at the same portion of sky as captured in the image released yesterday, it will be able to observe more than 600,000 galaxies.
“The bright galaxies we can see in this image are just the tip of the iceberg,” says George Heald, lead commissioning scientist for SKA-Low. “With the full telescope we will have the sensitivity to reveal the faintest and most distant galaxies, back to the early universe when the first stars and galaxies started to form.”
‘Milestone’ achieved
SKA-Low is one of two telescopes under construction by the observatory. The other, SKA-Mid, which will observe in the mid-frequency range, will include 197 three-storey dishes and is being built in South Africa.
The telescopes, with a combined price tag of £1bn, are projected to begin making science observations in 2028. They are being funded through a consortium of member states, including China, Germany and the UK.
University of Cambridge astrophysicist Eloy de Lera Acedo, who is principal investigator at his institution for the observatory’s science data processor, says the first image from SKA-Low is an “important milestone” for the project.
“It is worth remembering that these images now require a lot of work, and a lot more data to be captured with the telescope as it builds up, to reach the science quality level we all expect and hope for,” he adds.
Rob Fender, an astrophysicist at the University of Oxford, who is not directly involved in the SKA Observatory, says that the first image “hints at the enormous potential” for the array that will eventually “provide humanity’s deepest ever view of the universe at wavelengths longer than a metre”.
A new study probing quantum phenomena in neurons as they transmit messages in the brain could provide fresh insight into how our brains function.
In this project, described in the Computational and Structural Biotechnology Journal, theoretical physicist Partha Ghose from the Tagore Centre for Natural Sciences and Philosophy in India, together with theoretical neuroscientist Dimitris Pinotsis from City St George’s, University of London and the MillerLab of MIT, proved that established equations describing the classical physics of brain responses are mathematically equivalent to equations describing quantum mechanics. Ghose and Pinotsis then derived a Schrödinger-like equation specifically for neurons.
Our brains process information via a vast network containing many millions of neurons, which can each send and receive chemical and electrical signals. Information is transmitted by nerve impulses that pass from one neuron to the next, thanks to a flow of ions across the neuron’s cell membrane. This results in an experimentally detectable change in electrical potential difference across the membrane known as the “action potential” or “spike”.
When this potential passes a threshold value, the impulse is passed on. But below the threshold for a spike, a neuron’s action potential randomly fluctuates in a similar way to classical Brownian motion – the continuous random motion of tiny particles suspended in a fluid – due to interactions with its surroundings. This creates the so-called “neuronal noise” that the researchers investigated in this study.
Previously, “both physicists and neuroscientists have largely dismissed the relevance of standard quantum mechanics to neuronal processes, as quantum effects are thought to disappear at the large scale of neurons,” says Pinotsis. But some researchers studying quantum cognition hold an alternative to this prevailing view, explains Ghose.
“They have argued that quantum probability theory better explains certain cognitive effects observed in the social sciences than classical probability theory,” Ghose tells Physics World. “[But] most researchers in this field treat quantum formalism [the mathematical framework describing quantum behaviour] as a purely mathematical tool, without assuming any physical basis in quantum mechanics. I found this perspective rather perplexing and unsatisfactory, prompting me to explore a more rigorous foundation for quantum cognition – one that might be physically grounded.”
As such, Ghose and Pinotsis began their work by taking ideas from American mathematician Edward Nelson, who in 1966 derived the Schrödinger equation – which predicts the position and motion of particles in terms of a probability wave known as a wavefunction – using classical Brownian motion.
Firstly, they proved that the variables in the classical equations for Brownian motion that describe the random neuronal noise seen in brain activity also obey quantum mechanical equations, deriving a Schrödinger-like equation for a single neuron. This equation describes neuronal noise by revealing the probability of a neuron having a particular value of membrane potential at a specific instant. Next, the researchers showed how the FitzHugh-Nagumo equations, which are widely used for modelling neuronal dynamics, could be re-written as a Schrödinger equation. Finally, they introduced a neuronal constant in these Schrödinger-like equations that is analogous to Planck’s constant (which defines the amount of energy in a quantum).
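For orientation, the FitzHugh-Nagumo model in its standard textbook form couples a fast membrane-potential variable v to a slow recovery variable w,

\frac{dv}{dt} = v - \frac{v^3}{3} - w + I_{\mathrm{ext}}, \qquad \frac{dw}{dt} = \epsilon\,(v + a - b\,w),

while a Schrödinger-like equation for a neuron would take the generic form i\hbar_{\mathrm{n}}\,\partial_t \psi = \hat{H}\psi, with \hbar_{\mathrm{n}} denoting the neuronal constant that the authors introduce in place of Planck’s constant. The precise mapping between the two descriptions is the content of the paper; the equations above are only the standard forms being referred to.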
“I got excited when the mathematical proof showed that the FitzHugh-Nagumo equations are connected to quantum mechanics and the Schrödinger equation,” enthuses Pinotsis. “This suggested that quantum phenomena, including quantum entanglement, might survive at larger scales.”
“Penrose and Hameroff have suggested that quantum entanglement might be related to lack of consciousness, so this study could shed light on how anaesthetics work,” he explains, adding that their work might also connect oscillations seen in recordings of brain activity to quantum phenomena. “This is important because oscillations are considered to be markers of diseases: the brain oscillates differently in patients and controls and by measuring these oscillations we can tell whether a person is sick or not.”
Going forward, Ghose hopes that “neuroscientists will get interested in our work and help us design critical neuroscience experiments to test our theory”. Measuring the energy levels for neurons predicted in this study, and ultimately confirming the existence of a neuronal constant along with quantum effects including entanglement would, he says, “represent a big step forward in our understanding of brain function”.
1 When the Event Horizon Telescope imaged a black hole in 2019, what was the total mass of all the hard drives needed to store the data? A 1 kg B 50 kg C 500 kg D 2000 kg
2 In 1956 MANIAC I became the first computer to defeat a human being in chess, but because of its limited memory and power, the pawns and which other pieces had to be removed from the game? A Bishops B Knights C Queens D Rooks
3 The logic behind the Monty Hall problem, which involves a car and two goats behind different doors, is one of the cornerstones of machine learning. On which TV game show is it based? A Deal or No Deal B Family Fortunes C Let’s Make a Deal D Wheel of Fortune
4 In 2023 CERN broke which barrier for the amount of data stored on devices at the lab? A 10 petabytes (10^16 bytes) B 100 petabytes (10^17 bytes) C 1 exabyte (10^18 bytes) D 10 exabytes (10^19 bytes)
5 What was the world’s first electronic computer? A Atanasoff–Berry Computer (ABC) B Electronic Discrete Variable Automatic Computer (EDVAC) C Electronic Numerical Integrator and Computer (ENIAC) D Small-Scale Experimental Machine (SSEM)
6 What was the outcome of the chess match between astronaut Frank Poole and the HAL 9000 computer in the movie 2001: A Space Odyssey? A Draw B HAL wins C Poole wins D Match abandoned
7 Which of the following physics breakthroughs used traditional machine learning methods? A Discovery of the Higgs boson (2012) B Discovery of gravitational waves (2016) C Multimessenger observation of a neutron-star collision (2017) D Imaging of a black hole (2019)
8 The physicist John Hopfield shared the 2024 Nobel Prize for Physics with Geoffrey Hinton for their work underpinning machine learning and artificial neural networks – but what did Hinton originally study? A Biology B Chemistry C Mathematics D Psychology
9 Put the following data-driven discoveries in chronological order. A Johann Balmer’s discovery of a formula computing wavelength from Anders Ångström’s measurements of the hydrogen lines B Johannes Kepler’s laws of planetary motion based on Tycho Brahe’s astronomical observations C Henrietta Swan Leavitt’s discovery of the period-luminosity relationship for Cepheid variables D Ole Rømer’s estimation of the speed of light from observations of the eclipses of Jupiter’s moon Io
10 Inspired by Alan Turing’s “Imitation Game” – in which an interrogator tries to distinguish between a human and machine – when did Joseph Weizenbaum develop ELIZA, the world’s first “chatbot”? A 1964 B 1984 C 2004 D 2024
11 What does the CERN particle-physics lab use to store data from the Large Hadron Collider? A Compact discs B Hard-disk drives C Magnetic tape D Solid-state drives
12 In preparation for the High Luminosity Large Hadron Collider, CERN tested a data link to the Nikhef lab in Amsterdam in 2024 that ran at what speed? A 80 Mbps B 8 Gbps C 80 Gbps D 800 Gbps
13 When complete, the Square Kilometre Array telescope will be the world’s largest radio telescope. How many petabytes of data is it expected to archive per year? A 15 B 50 C 350 D 700
This quiz is for fun and there are no prizes. Answers will be published in April.
Helium deep within the Earth could bond with iron to form stable compounds – according to experiments done by scientists in Japan and Taiwan. The work was carried out by Haruki Takezawa and Kei Hirose at the University of Tokyo and colleagues, who suggest that Earth’s core could host a vast reservoir of primordial helium-3 – reshaping our understanding of the planet’s interior.
Noble gases including helium are normally chemically inert. But under extreme pressures, heavier members of the group (including xenon and krypton) can form a variety of compounds with other elements. To date, however, less is known about compounds containing helium – the lightest noble gas.
Beyond the synthesis of disodium helide (Na2He) in 2016, and a handful of molecules in which helium forms weak van der Waals bonds with other atoms, the existence of other helium compounds has remained purely theoretical.
As a result, the conventional view is that any primordial helium-3 present when our planet first formed would have quickly diffused through Earth’s interior, before escaping into the atmosphere and then into space.
Tantalizing clues
However, there are tantalizing clues that helium compounds could exist in some volcanic rocks on Earth’s surface. These rocks contain unusually high isotopic ratios of helium-3 to helium-4. “Unlike helium-4, which is produced through radioactivity, helium-3 is primordial and not produced in planetary interiors,” explains Hirose. “Based on volcanic rock measurements, helium-3 is known to be enriched in hot magma, which originally derives from hot plumes coming from deep within Earth’s mantle.” The mantle is the region between Earth’s core and crust.
The fact that the isotope can still be found in rock and magma suggests that it must have somehow become trapped in the Earth. “This argument suggests that helium-3 was incorporated into the iron-rich core during Earth’s formation, some of which leaked from the core to the mantle,” Hirose explains.
It could be that the extreme pressures present in Earth’s iron-rich core enabled primordial helium-3 to bond with iron to form stable molecular lattices. To date, however, this possibility has never been explored experimentally.
Now, Takezawa, Hirose and colleagues have triggered reactions between iron and helium within a laser-heated diamond-anvil cell. Such cells crush small samples to extreme pressures – in this case as high as 54 GPa. While this is less than the pressure in the core (about 350 GPa), the reactions created molecular lattices of iron and helium. These structures remained stable even when the diamond-anvil’s extreme pressure was released.
To determine the molecular structures of the compounds, the researchers did X-ray diffraction experiments at Japan’s SPring-8 synchrotron. The team also used secondary ion mass spectrometry to determine the concentration of helium within their samples.
Synchrotron and mass spectrometer
“We also performed first-principles calculations to support experimental findings,” Hirose adds. “Our calculations also revealed a dynamically stable crystal structure, supporting our experimental findings.” Altogether, this combination of experiments and calculations showed that the reaction could form two distinct lattices (face-centred cubic and distorted hexagonal close packed), each with differing ratios of iron to helium atoms.
These results suggest that similar reactions between helium and iron may have occurred within Earth’s core shortly after its formation, trapping much of the primordial helium-3 in the material that coalesced to form Earth. This would have created a vast reservoir of helium in the core, which is gradually making its way to the surface.
However, further experiments are needed to confirm this thesis. “For the next step, we need to see the partitioning of helium between iron in the core and silicate in the mantle under high temperatures and pressures,” Hirose explains.
Observing this partitioning would help rule out the lingering possibility that unbonded helium-3 could be more abundant than expected within the mantle – where it could be trapped by some other mechanism. Either way, further studies would improve our understanding of Earth’s interior composition – and could even tell us more about the gases present when the solar system formed.
Two months into Donald Trump’s second presidency and many parts of US science – across government, academia, and industry – continue to be hit hard by the new administration’s policies. Science-related government agencies are seeing budgets and staff cut, especially in programmes linked to climate change and diversity, equity and inclusion (DEI). Elon Musk’s Department of Government Efficiency (DOGE) is also causing havoc as it seeks to slash spending.
In mid-February, DOGE fired more than 300 employees at the National Nuclear Security Administration, which is part of the US Department of Energy, many of whom were responsible for reassembling nuclear warheads at the Pantex plant in Texas. A day later, the agency was forced to rescind all but 28 of the sackings amid concerns that their absence could jeopardise national security.
A judge has also reinstated workers who were laid off at the National Science Foundation (NSF) as well as at the Centers for Disease Control and Prevention. The judge said the government’s Office of Personnel Management, which sacked the staff, did not have the authority to do so. However, the NSF rehiring applies mainly to military veterans and staff with disabilities, with the overall workforce down by about 140 people – or roughly 10%.
The NSF has also announced a reduction, the size of which is unknown, in its Research Experiences for Undergraduates programme. Over the last 38 years, the initiative has given thousands of college students – many with backgrounds that are underrepresented in science – the opportunity to carry out original research at institutions during the summer holidays. NSF staff are also reviewing thousands of grants containing such words as “women” and “diversity”.
NASA, meanwhile, is to shut its office of technology, policy and strategy, along with its chief-scientist office, and the DEI and accessibility branch of its diversity and equal opportunity office. “I know this news is difficult and may affect us all differently,” admitted acting administrator Janet Petro in an all-staff e-mail. Affecting about 20 staff, the move is on top of plans to reduce NASA’s overall workforce. Reports also suggest that NASA’s science budget could be slashed by as much as 50%.
Hundreds of “probationary employees” have also been sacked by the National Oceanic and Atmospheric Administration (NOAA), which provides weather forecasts that are vital for farmers and people in areas threatened by tornadoes and hurricanes. “If there were to be large staffing reductions at NOAA there will be people who die in extreme weather events and weather-related disasters who would not have otherwise,” warns climate scientist Daniel Swain from the University of California, Los Angeles.
Climate concerns
In his first cabinet meeting on 26 February, Trump suggested that officials “use scalpels” when trimming their departments’ spending and personnel – rather than Musk’s figurative chainsaw. But bosses at the Environmental Protection Agency (EPA) still plan to cut its budget by about two-thirds. “[W]e fear that such cuts would render the agency incapable of protecting Americans from grave threats in our air, water, and land,” wrote former EPA administrators William Reilly, Christine Todd Whitman and Gina McCarthy in the New York Times.
The White House’s attack on climate science goes beyond just the EPA. In January, the US Department of Agriculture removed almost all data on climate change from its website. The action resulted in a lawsuit in March from the Northeast Organic Farming Association of New York and two non-profit organizations – the Natural Resources Defense Council and the Environmental Working Group. They say that the removal hinders research and “agricultural decisions”.
The Trump administration has also barred NASA’s now former chief scientist Katherine Calvin and members of the State Department from travelling to China for a planning meeting of the Intergovernmental Panel on Climate Change. Meanwhile, in a speech to African energy ministers in Washington on 7 March, US energy secretary Chris Wright claimed that coal has “transformed our world and made it better”, adding that climate change, while real, is not on his list of the world’s top 10 problems. “We’ve had years of Western countries shamelessly saying ‘don’t develop coal’,” he said. “That’s just nonsense.”
At the National Institutes of Health (NIH), staff are being told to cancel hundreds of research grants that involve DEI and transgender issues. The Trump administration also wants to cut the allowance for indirect costs of NIH’s and other agencies’ research grants to 15% of research contracts, although a district court judge has put that move on hold pending further legal arguments. On 8 March, the Trump administration also threatened to cancel $400m in funding to Columbia purportedly due to its failure to tackle anti-semitism on the campus.
A Trump policy of removing “undocumented aliens” continues to alarm universities that have overseas students. Some institutions have already advised overseas students against travelling abroad during holidays, in case immigration officers do not let them back in when they return. Others warn that their international students should carry their immigration documents with them at all times. Universities have also started to rein in spending with Harvard and the Massachusetts Institute of Technology, for example, implementing a hiring freeze.
Falling behind
Amid the turmoil, the US scientific community is beginning to fight back. Individual scientists have supported court cases that have overturned sackings at government agencies, while a letter to Congress signed by the Union of Concerned Scientists and 48 scientific societies asserts that the administration has “already caused significant harm to American science”. On 7 March, more than 30 US cities also hosted “Stand Up for Science” rallies attended by thousands of demonstrators.
Elsewhere, a group of government, academic and industry leaders – known collectively as Vision for American Science and Technology – has released a report warning that the US could fall behind China and other competitors in science and technology. Entitled Unleashing American Potential, it calls for increased public and private investment in science to maintain US leadership. “The more dollars we put in from the feds, the more investment comes in from industry, and we get job growth, we get economic success, and we get national security out of it,” notes Sudip Parikh, chief executive of the American Association for the Advancement of Science, who was involved in the report.
Marcia McNutt, president of the National Academy of Sciences, meanwhile, has called on the community to continue to highlight the benefit of science. “We need to underscore the fact that stable federal funding of research is the main mode by which radical new discoveries have come to light – discoveries that have enabled the age of quantum computing and AI and new materials science,” she said. “These are areas that I am sure are very important to this administration as well.”
New for 2025, the American Physical Society (APS) is combining its March Meeting and April Meeting into a joint event known as the APS Global Physics Summit. The largest physics research conference in the world, the Global Physics Summit brings together 14,000 attendees across all disciplines of physics. The meeting takes place in Anaheim, California (as well as virtually) from 16 to 21 March.
Uniting all disciplines of physics in one joint event reflects the increasingly interdisciplinary nature of scientific research and enables everybody to participate in any session. The meeting includes cross-disciplinary sessions and collaborative events, where attendees can meet to connect with others, discuss new ideas and discover groundbreaking physics research.
The meeting will take place in three adjacent venues. The Anaheim Convention Center will host March Meeting sessions, while the April Meeting sessions will be held at the Anaheim Marriott. The Hilton Anaheim will host SPLASHY (soft, polymeric, living, active, statistical, heterogeneous and yielding) matter and medical physics sessions. Cross-disciplinary sessions and networking events will take place at all sites and in the connecting outdoor plaza.
With programming aligned with the 2025 International Year of Quantum Science and Technology, the meeting also celebrates all things quantum with a dedicated Quantum Festival. Designed to “inspire and educate”, the festival incorporates events at the intersection of art, science and fun – with multimedia performances, science demonstrations, circus performers, and talks by Nobel laureates and a NASA astronaut.
Finally, there’s the exhibit hall, where more than 200 exhibitors will showcase products and services for the physics community. Here, delegates can also attend poster sessions, a career fair and a graduate school fair. Read on to find out about some of the innovative product offerings on show at the technical exhibition.
Precision motion drives innovative instruments for physics applications
For over 25 years Mad City Labs has provided precision instrumentation for research and industry, including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes and atomic force microscopes (AFMs).
This product portfolio, coupled with the company’s expertise in custom design and manufacturing, enables Mad City Labs to provide solutions for nanoscale motion for diverse applications such as astronomy, biophysics, materials science, photonics and quantum sensing.
Mad City Labs’ piezo nanopositioners feature the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution and motion control down to the single picometre level. The performance of the nanopositioners is central to the company’s instrumentation solutions, as well as the diverse applications that it can serve.
Within the scanning probe microscopy solutions, the nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yields high positioning performance and control. Uniquely, Mad City Labs offers both optical deflection AFMs and resonant probe AFM models.
Product portfolio Mad City Labs provides precision instrumentation for applications ranging from astronomy and biophysics, to materials science, photonics and quantum sensing. (Courtesy: Mad City Labs)
The MadAFM is a sample scanning AFM in a compact, tabletop design. Designed for simple user-led installation, the MadAFM is a multimodal optical deflection AFM and includes software. The resonant probe AFM products include the AFM controllers MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs micro- and nanopositioners. All AFM instruments are ideal for material characterization, but resonant probe AFMs are uniquely well suited for quantum sensing and nano-magnetometry applications.
Stop by the Mad City Labs booth and ask about the new do-it-yourself quantum scanning microscope based on the company’s AFM products.
Mad City Labs also offers standalone micropositioning products such as optical microscope stages, compact positioners and the Mad-Deck XYZ stage platform. These products employ proprietary intelligent control to optimize stability and precision. These micropositioning products are compatible with the high-resolution nanopositioning systems, enabling motion control across micro–picometre length scales.
The new MMP-UHV50 micropositioning system offers 50 mm travel with 190 nm step size and maximum vertical payload of 2 kg, and is constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks. Uniquely, the MMP-UHV50 incorporates a zero power feature when not in motion to minimize heating and drift. Safety features include limit switches and overheat protection, a critical item when operating in vacuum environments.
For advanced microscopy techniques for biophysics, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multicolour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques. Finally, new motorized micromirrors enable easier alignment and stored setpoints.
Visit Mad City Labs at the APS Global Summit, at booth #401
New lasers target quantum, Raman spectroscopy and life sciences
HÜBNER Photonics, manufacturer of high-performance lasers for advanced imaging, detection and analysis, is highlighting a large range of exciting new laser products at this year’s APS event. With these new lasers, the company responds to market trends specifically within the areas of quantum research and Raman spectroscopy, as well as fluorescence imaging and analysis for life sciences.
Dedicated to the quantum research field, a new series of CW ultralow-noise single-frequency fibre amplifier products – the Ampheia Series lasers – offer output powers of up to 50 W at 1064 nm and 5 W at 532 nm, with an industry-leading low relative intensity noise. The Ampheia Series lasers ensure unmatched stability and accuracy, empowering researchers and engineers to push the boundaries of what’s possible. The lasers are specifically suited for quantum technology research applications such as atom trapping, semiconductor inspection and laser pumping.
Ultralow-noise operation The Ampheia Series lasers are particularly suitable for quantum technology research applications. (Courtesy: HÜBNER Photonics)
In addition to the Ampheia Series, the new Cobolt Qu-T Series of single-frequency, tunable lasers addresses atom cooling. With wavelengths of 707, 780 and 813 nm, coarse tunability of greater than 4 nm, narrow mode-hop-free tuning of below 5 GHz, a linewidth of below 50 kHz and powers of 500 mW, the Cobolt Qu-T Series is perfect for atom cooling of rubidium, strontium and other atoms used in quantum applications.
For the Raman spectroscopy market, HÜBNER Photonics announces the new Cobolt Disco single-frequency laser with available power of up to 500 mW at 785 nm, in a perfect TEM00 beam. This new wavelength is an extension of the Cobolt 05-01 Series platform, which with excellent wavelength stability, a linewidth of less than 100 kHz and spectral purity better than 70 dB, provides the performance needed for high-resolution, ultralow-frequency Raman spectroscopy measurements.
For life science applications, a number of new wavelengths and higher power levels are available, including 553 nm with 100 mW and 594 nm with 150 mW. These new wavelengths and power levels are available on the Cobolt 06-01 Series of modulated lasers, which offer versatile and advanced modulation performance with perfect linear optical response, true OFF states and stable illumination from the first pulse – for any duty cycles and power levels across all wavelengths.
The company’s unique multi-line laser, Cobolt Skyra, is now available with laser lines covering the full green–orange spectral range, including 594 nm, with up to 100 mW per line. This makes this multi-line laser highly attractive as a compact and convenient illumination source in most bioimaging applications, and now also specifically suitable for excitation of AF594, mCherry, mKate2 and other red fluorescent proteins.
In addition, with the Cobolt Kizomba laser, the company is introducing a new UV wavelength that specifically addresses the flow cytometry market. The Cobolt Kizomba laser offers 349 nm output at 50 mW with the renowned performance and reliability of the Cobolt 05-01 Series lasers.
Visit HÜBNER Photonics at the APS Global Summit, at booth #359.
Researchers from the Amazon Web Services (AWS) Center for Quantum Computing have announced what they describe as a “breakthrough” in quantum error correction. Their method uses so-called cat qubits to reduce the total number of qubits required to build a large-scale, fault-tolerant quantum computer, and they claim it could shorten the time required to develop such machines by up to five years.
Quantum computers are promising candidates for solving complex problems that today’s classical computers cannot handle. Their main drawback is the tendency for errors to crop up in the quantum bits, or qubits, they use to perform computations. Just like classical bits, the states of qubits can erroneously flip from 0 to 1, which is known as a bit-flip error. In addition, qubits can suffer from inadvertent changes to their phase, which is a parameter that characterizes their quantum superposition (phase-flip errors). A further complication is that whereas classical bits can be copied in order to detect and correct errors, the quantum nature of qubits makes copying impossible. Hence, errors need to be dealt with in other ways.
One error-correction scheme involves building physical or “measurement” qubits around each logical or “data” qubit. The job of the measurement qubits is to detect phase-flip or bit-flip errors in the data qubits without destroying their quantum nature. In 2024, a team at Google Quantum AI showed that this approach is scalable in a system of a few dozen qubits. However, a truly powerful quantum computer would require around a million data qubits and an even larger number of measurement qubits.
Cat qubits to the rescue
The AWS researchers showed that it is possible to reduce this total number of qubits. They did this by using a special type of qubit called a cat qubit. Named after the Schrödinger’s cat thought experiment that illustrates the concept of quantum superposition, cat qubits use the superposition of coherent states to encode information in a way that resists bit flips. Doing so may increase the number of phase-flip errors, but special error-correction algorithms can deal with these efficiently.
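In the standard cat-qubit encoding, the logical states are superpositions of two coherent states |\alpha\rangle and |-\alpha\rangle of an oscillator,

|\pm\rangle_{\mathrm{cat}} \propto |\alpha\rangle \pm |-\alpha\rangle,

and because the two coherent states become nearly orthogonal as the mean photon number |\alpha|^2 grows, bit-flip errors are suppressed exponentially in |\alpha|^2 while phase-flip errors increase only roughly linearly. (This is the generic encoding; the specific parameters used on the AWS chip are those reported in the paper.)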
The AWS team got this result by building a microchip containing an array of five cat qubits. These are connected to four transmon qubits, which are a type of superconducting qubit with a reduced sensitivity to charge noise (a major source of errors in quantum computations). Here, the cat qubits serve as data qubits, while the transmon qubits measure and correct phase-flip errors. The cat qubits were further stabilized by connecting each of them to a buffer mode that uses a non-linear process called two-photon dissipation to ensure that their noise bias is maintained over time.
According to Harry Putterman, a senior research scientist at AWS, the team’s foremost challenge (and innovation) was to ensure that the system did not introduce too many bit-flip errors. This was important because the system uses a classical repetition code as its “outer layer” of error correction, which left it with no redundancy against residual bit flips. With this aspect under control, the researchers demonstrated that their superconducting quantum circuit suppressed errors from 1.75% per cycle for a three-cat qubit array to 1.65% per cycle for a five-cat qubit array. Achieving this degree of error suppression with larger error-correcting codes previously required tens of additional qubits.
On a scalable path
AWS’s director of quantum hardware, Oskar Painter, says the result will reduce the development time for a full-scale quantum computer by 3-5 years. This is, he says, a direct outcome of the system’s simple architecture as well as its 90% reduction in the “overhead” required for quantum error correction. The team does, however, need to reduce the error rates of the error-corrected logical qubits. “The two most important next steps towards building a fault-tolerant quantum computer at scale is that we need to scale up to several logical qubits and begin to perform and study logical operations at the logical qubit level,” Painter tells Physics World.
According to David Schlegel, a research scientist at the French quantum computing firm Alice & Bob, which specializes in cat qubits, this work marks the beginning of a shift from noisy, classically simulable quantum devices to fully error-corrected quantum chips. He says the AWS team’s most notable achievement is its clever hybrid arrangement of cat qubits for quantum information storage and traditional transmon qubits for error readout.
However, while Schlegel calls the research “innovative”, he says it is not without limitations. Because the AWS chip incorporates transmons, it still needs to address both bit-flip and phase-flip errors. “Other cat qubit approaches focus on completely eliminating bit flips, further reducing the qubit count by more than a factor of 10,” Schlegel says. “But it remains to be seen which approach will prove more effective and hardware-efficient for large-scale error-corrected quantum devices in the long run.”
Physicists in Serbia have begun strike action today in response to what they say is government corruption and social injustice. The one-day strike, called by the country’s official union for researchers, is expected to result in thousands of scientists joining students who have already been demonstrating for months over conditions in the country.
The student protests, which began in November, were triggered by a railway station canopy collapse that killed 15 people. Since then, the movement has grown into an ongoing mass protest seen by many as indirectly seeking to change the government, currently led by president Aleksandar Vučić.
The Serbian government, however, claims it has met all student demands such as transparent publication of all documents related to the accident and the prosecution of individuals who have disrupted the protests. The government has also accepted the resignation of prime minister Miloš Vučević as well as transport minister Goran Vesić and trade minister Tomislav Momirović, who previously held the transport role during the station’s reconstruction.
“The students are championing noble causes that resonate with all citizens,” says Igor Stanković, a statistical physicist at the Institute of Physics (IPB) in Belgrade, who is joining today’s walkout. In January, around 100 employees from the IPB in Belgrade signed a letter in support of the students, one of many from various research institutions since December.
Stanković believes that the corruption and lack of accountability that students are protesting against “stem from systemic societal and political problems, including entrenched patronage networks and a lack of transparency”.
“I believe there is no turning back now,” adds Stanković. “The students have gained support from people across the academic spectrum – including those I personally agree with and others I believe bear responsibility for the current state of affairs. That, in my view, is their strength: standing firmly behind principles, not political affiliations.”
Meanwhile, Miloš Stojaković, a mathematician at the University of Novi Sad, says that the faculty at the university have backed the students from the start, especially given that they are making “a concerted effort to minimize disruptions to our scientific work”.
Many university faculties in Serbia have been blockaded by protesting students, who have been using them as a base for their demonstrations. “The situation will have a temporary negative impact on research activities,” admits Dejan Vukobratović, an electrical engineer from the University of Novi Sad. However, most researchers are “finding their way through this situation”, he adds, with “most teams keeping their project partners and funders informed about the situation, anticipating possible risks”.
Missed exams
Amidst the continuing disruptions, the Serbian national science foundation has twice delayed a deadline for the award of €24m of research grants, citing “circumstances that adversely affect the collection of project documentation”. The foundation adds that 96% of its survey participants requested an extension. The researchers’ union has also called on the government to freeze the work status of PhD students employed as research assistants or interns to accommodate the months-long pause to their work. The government has promised to look into it.
Meanwhile, universities are setting up expert groups to figure out how to deal with the delays to studies and missed exams. Physics World approached Serbia’s government for comment, but did not receive a reply.
Researchers in Australia have developed a nanosensor that can detect the onset of gestational diabetes with 95% accuracy. Demonstrated by a team led by Carlos Salomon at the University of Queensland, the superparamagnetic “nanoflower” sensor could enable doctors to detect a variety of complications in the early stages of pregnancy.
Many complications in pregnancy can have profound and lasting effects on both the mother and the developing foetus. Today, these conditions are detected using methods such as blood tests, ultrasound screening and blood pressure monitoring. In many cases, however, their sensitivity is severely limited in the earliest stages of pregnancy.
“Currently, most pregnancy complications cannot be identified until the second or third trimester, which means it can sometimes be too late for effective intervention,” Salomon explains.
To tackle this challenge, Salomon and his colleagues are investigating the use of specially engineered nanoparticles to isolate and detect biomarkers in the blood associated with complications in early pregnancy. Specifically, they aim to detect the protein molecules carried by extracellular vesicles (EVs) – tiny, membrane-bound particles released by the placenta, which play a crucial role in cell signalling.
In their previous research, the team pioneered the development of superparamagnetic nanostructures that selectively bind to specific EV biomarkers. Superparamagnetism occurs specifically in small, ferromagnetic nanoparticles, causing their magnetization to randomly flip direction under the influence of temperature. When proteins are bound to the surfaces of these nanostructures, their magnetic responses are altered detectably, providing the team with a reliable EV sensor.
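The temperature-driven flipping behind superparamagnetism is conventionally described by the Néel relaxation time,

\tau = \tau_0 \exp\!\left(\frac{K V}{k_B T}\right),

where K is the particle’s magnetic anisotropy constant, V its volume, T the temperature and \tau_0 an attempt time typically of order nanoseconds. For sufficiently small particles \tau falls below the measurement time, so the ensemble shows no remanent magnetization, and anything that modifies the particle surface – such as bound proteins – shifts the measured magnetic response. (The formula is the standard description of superparamagnetism, given here for context rather than taken from the study.)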
“This technology has been developed using nanomaterials to detect biomarkers at low concentrations,” explains co-author Mostafa Masud. “This is what makes our technology more sensitive than current testing methods, and why it can pick up potential pregnancy complications much earlier.”
Previous versions of the sensor used porous nanocubes that efficiently captured EVs carrying a key placental protein named PLAP. By detecting unusual levels of PLAP in the blood of pregnant women, this approach enabled the researchers to detect complications far more easily than with existing techniques. However, the method generally required detection times lasting several hours, making it unsuitable for on-site screening.
In their latest study, reported in Science Advances, Salomon’s team started with a deeper analysis of the EV proteins carried by these blood samples. Through advanced computer modelling, they discovered that complications can be linked to changes in the relative abundance of PLAP and another placental protein, CD9.
Based on these findings, they developed a new superparamagnetic nanosensor capable of detecting both biomarkers simultaneously. Their design features flower-shaped nanostructures made of nickel ferrite, which were embedded into specialized testing strips to boost their sensitivity even further.
Using this sensor, the researchers collected blood samples from 201 pregnant women at 11 to 13 weeks’ gestation. “We detected possible complications, such as preterm birth, gestational diabetes and preeclampsia, which is high blood pressure during pregnancy,” Salomon describes. For gestational diabetes, the sensor demonstrated 95% sensitivity in identifying at-risk cases, and 100% specificity in ruling out healthy cases.
Based on these results, the researchers are hopeful that further refinements to their nanoflower sensor could lead to a new generation of EV protein detectors, enabling the early diagnosis of a wide range of pregnancy complications.
“With this technology, pregnant women will be able to seek medical intervention much earlier,” Salomon says. “This has the potential to revolutionize risk assessment and improve clinical decision-making in obstetric care.”
A counterintuitive result from Einstein’s special theory of relativity has finally been verified more than 65 years after it was predicted. The prediction states that objects moving near the speed of light will appear rotated to an external observer, and physicists in Austria have now observed this experimentally using a laser and an ultrafast stop-motion camera.
A central postulate of special relativity is that the speed of light is the same in all reference frames. An observer who sees an object travelling close to the speed of light and makes simultaneous measurements of its front and back (in the direction of travel) will therefore find that, because photons coming from each end of the object both travel at the speed of light, the object is measurably shorter than it would be for an observer in the object’s reference frame. This is the long-established phenomenon of Lorentz contraction.
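Quantitatively, an object of rest length L_0 moving at speed v is measured to have length

L = L_0 \sqrt{1 - v^2/c^2} = \frac{L_0}{\gamma},

where \gamma is the Lorentz factor; at about 87% of the speed of light the measured length is already halved.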
In 1959, however, two physicists, James Terrell and the future Nobel laureate Roger Penrose, independently noted something else. If the object has any significant optical depth relative to its length – in other words, if its extension parallel to the observer’s line of sight is comparable to its extension perpendicular to this line of sight, as is the case for a cube or a sphere – then photons from the far side of the object (from the observer’s perspective) will take longer to reach the observer than photons from its near side. Hence, if a camera takes an instantaneous snapshot of the moving object, it will collect photons from the far side that were emitted earlier at the same time as it collects photons from the near side that were emitted later.
This time difference stretches the image out, making the object appear longer even as Lorentz contraction makes its measured length shorter. Because the stretching and the contraction cancel out, the photographed object will not appear to change length at all.
But that isn’t the whole story. For the cancellation to work, the photons reaching the observer from the part of the object facing its direction of travel must have been emitted later than the photons that come from its trailing edge. This is because photons from the far and back sides come from parts of the object that would normally be obscured by the front and near sides. However, because the object moves in the time it takes photons to propagate, it creates a clear passage for trailing-edge photons to reach the camera.
The cumulative effect, Terrell and Penrose showed, is that instead of appearing to contract – as one would naïvely expect – a three-dimensional object photographed travelling at nearly the speed of light will appear rotated.
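The standard textbook illustration – not the Austrian team’s own derivation – uses a cube of side L moving past the camera at speed v = βc. Photons from points deeper in the scene must set off earlier to arrive at the same instant, and in the time light takes to cross the cube the object moves on by βL. The rear face, which would otherwise be hidden, is therefore smeared across an apparent width of βL in the image, while the face presented to the camera is Lorentz-contracted to L√(1 − β²). These are exactly the projected widths of a stationary cube rotated through an angle θ with sin θ = β.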
The Terrell effect in the lab
While multiple computer models have been constructed to illustrate this “Terrell effect” rotation, it has largely remained a thought experiment. In the new work, however, Peter Schattschneider of the Technical University of Vienna and colleagues realized it in an experimental setup. To do this, they shone pulsed laser light onto one of two moving objects: a sphere or a cube. The laser pulses were synchronized to a picosecond camera that collected light scattered off the object.
The researchers programmed the camera to produce a series of images at each position of the moving object. They then allowed the object to move to the next position and, when the laser pulsed again, recorded another series of ultrafast images with the camera. By linking together images recorded from the camera in response to different laser pulses, the researchers were able to, in effect, reduce the speed of light to less than 2 m/s.
When they did so, they observed that the object rotated rather than contracted, just as Terrell and Penrose predicted. While their results did deviate somewhat from theoretical predictions, this was unsurprising given that the predictions rest on certain assumptions. One of these is that incoming rays of light should be parallel to the observer, which is only true if the distance from object to observer is infinite. Another is that each image should be recorded instantaneously, whereas the shutter speed of real cameras is inevitably finite.
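Those idealizations are easy to explore numerically. The short Python sketch below is our own illustration rather than the group’s analysis code: it constructs the idealized “photograph” of a square cross-section of side L under exactly those assumptions – parallel rays and an instantaneous shutter – and recovers the rotated-square face widths (L sin θ and L cos θ, with sin θ = v/c) expected from the Terrell–Penrose argument. The speed, camera distance and sampling are made-up parameters.

import numpy as np

# Idealized Terrell "photograph" of a square cross-section (side L) moving
# along x at speed beta*c, viewed along -y with parallel rays and an
# instantaneous shutter. Illustrative sketch only.
c, beta, L = 1.0, 0.8, 1.0
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# points along the outline in the object's rest frame: (x0, y0)
t = np.linspace(0.0, 1.0, 200)
near_face = np.stack([t * L, np.full_like(t, L)], axis=1)    # face towards the camera
trailing_face = np.stack([np.zeros_like(t), t * L], axis=1)  # rear face

def apparent_x(points, D=100.0, T=0.0):
    # Apparent x position of each point in a snapshot taken at camera time T.
    # A photon recorded at T from depth y was emitted at t_e = T - (D - y)/c,
    # by which time the (Lorentz-contracted) object sits at x0/gamma + beta*c*t_e.
    x0, y0 = points[:, 0], points[:, 1]
    t_emit = T - (D - y0) / c
    return x0 / gamma + beta * c * t_emit

near = apparent_x(near_face)
rear = apparent_x(trailing_face)
print(f"apparent width of near face:     {np.ptp(near):.3f}  (L*cos = {L/gamma:.3f})")
print(f"apparent width of trailing face: {np.ptp(rear):.3f}  (L*sin = {beta*L:.3f})")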
Because their research is awaiting publication by a journal with an embargo policy, Schattschneider and colleagues were unavailable for comment. However, the Harvard University astrophysicist Avi Loeb, who suggested in 2017 that the Terrell effect could have applications for measuring exoplanet masses, is impressed: “What [the researchers] did here is a very clever experiment where they used very short pulses of light from an object, then moved the object, and then looked again at the object and then put these snapshots together into a movie – and because it involves different parts of the body reflecting light at different times, they were able to get exactly the effect that Terrell and Penrose envisioned,” he says. Though Loeb notes that there’s “nothing fundamentally new” in the work, he nevertheless calls it “a nice experimental confirmation”.
The research is available on the arXiv pre-print server.
The integrity of science could be threatened by publishers changing scientific papers after they have been published – but without making any formal public notification. That’s the verdict of a new study by an international team of researchers, who dub such changes “stealth corrections”. They want publishers to publicly log all changes that are made to published scientific research (Learned Publishing 38 e1660).
When corrections are made to a paper after publication, it is standard practice for a notice to be added to the article explaining what has been changed and why. This transparent record keeping is designed to retain trust in the scientific record. But last year, René Aquarius, a neurosurgery researcher at Radboud University Medical Center in the Netherlands, noticed this does not always happen.
After spotting an issue with an image in a published paper, he raised concerns with the authors, who acknowledged them and stated that they were “checking the original data to figure out the problem” and would keep him updated. A month later, however, Aquarius was surprised to see that the figure had been updated without any correction notice stating that the paper had been changed.
Teaming up with colleagues from Belgium, France, the UK and the US, Aquarius began to identify and document similar stealth corrections. They did so by recording instances that they and other “science sleuths” had already found and by searching online for terms such as “no erratum”, “no corrigendum” and “stealth” on PubPeer – an online platform where users discuss and review scientific publications.
Sustained vigilance
The researchers define a stealth correction as at least one post-publication change made to a scientific article without a correction notice or any other indication that the publication has been temporarily or permanently altered. They identified 131 stealth corrections spread across 10 scientific publishers and a range of research fields. In 92 of the cases, the stealth correction involved a change in the content of the article, such as to figures, data or text.
The remaining unrecorded changes covered three categories: “author information” such as the addition of authors or changes in affiliation; “additional information”, including edits to ethics and conflict of interest statements; and “the record of editorial process”, for instance alterations to editor details and publication dates. “For most cases, we think that the issue was big enough to have a correction notice that informs the readers what was happening,” Aquarius says.
After the authors began drawing attention to the stealth corrections, five of the papers received an official correction notice, nine were given expressions of concern, 17 reverted to the original version and 11 were retracted. Aquarius says he believes it is “important” that readers know what has happened to a paper “so they can make up their own mind whether they want to trust [it] or not”.
The researchers would now like to see publishers implementing online correction logs that make it impossible to change anything in a published article without it being transparently reported, however small the edit. They also say that clearer definitions and guidelines are required concerning what constitutes a correction and needs a correction notice.
“We need to have sustained vigilance in the scientific community to spot these stealth corrections and also register them publicly, for example on PubPeer,” Aquarius says.
The story begins with the startling event that gives the book its unusual moniker: the firing of a Colt revolver in the famous London cathedral in 1951. A similar experiment was also performed in the Royal Festival Hall in the same year (see above photo). Fortunately, this was simply a demonstration for journalists of an experiment to understand and improve the listening experience in a space notorious for its echo and other problematic acoustic features.
St Paul’s was completed in 1711 and Smyth, a historian of architecture, science and construction at the University of Cambridge in the UK, explains that until the turn of the last century, the only way to evaluate the quality of sound in such a building was by ear. The book then reveals how this changed. Over five decades of innovative experiments, scientists and architects built a quantitative understanding of how a building’s shape, size and interior furnishings determine the quality of speech and music through reflection and absorption of sound waves.
The evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers
We are first taken back to the dawn of the 20th century and shown how the evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers. This includes architect and pioneering acoustician Hope Bagenal, along with several physicists, notably Harvard-based US physicist Wallace Clement Sabine.
Details of Sabine’s career, alongside those of Bagenal, whose personal story forms the backbone for much of the book, deftly put a human face on the research that transformed these public spaces. Perhaps Sabine’s most significant contribution was the derivation of a formula to predict the time taken for sound to fade away in a room. Known as the “reverberation time”, this became a foundation of architectural acoustics, and his mathematical work still forms the basis for the field today.
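In its modern metric form, Sabine’s relation reads T60 ≈ 0.161 V/A, where T60 is the reverberation time in seconds (the time for a sound to decay by 60 decibels), V is the room volume in cubic metres and A is the total absorption – the area of each surface multiplied by its absorption coefficient, summed over the room.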
The presence of people, objects and reflective or absorbing surfaces all affect a room’s acoustics. Smyth describes how materials ranging from rugs and timber panelling to specially developed acoustic plaster and tiles have all been investigated for their acoustic properties. She also vividly details the venues where acoustics interventions were added – such as the reflective teak flooring and vast murals painted on absorbent felt in the Henry Jarvis Memorial Hall of the Royal Institute of British Architects in London.
Other locations featured include the Royal Albert Hall, Abbey Road Studios, White Rock Pavilion at Hastings, and the Assembly Chamber of the Legislative Building in New Delhi, India. Temporary structures and spaces for musical performance are highlighted too. These include the National Gallery while it was cleared of paintings during the Second World War and the triumph of acoustic design that was the Glasgow Empire Exhibition concert hall – built for the 1938 event and sadly dismantled that same year.
Unsurprisingly, much of this acoustic work was either punctuated or heavily influenced by the two world wars. While in the trenches during the First World War, Bagenal wrote a journal paper on cathedral acoustics that detailed his pre-war work at St Paul’s Cathedral, Westminster Cathedral and Westminster Abbey. His paper discussed timbre, resonant frequency “and the effects of interference and delay on clarity and harmony”.
In 1916, back in England recovering from a shellfire injury, Bagenal started what would become a long-standing research collaboration with the commandant of the hospital where he was recuperating – who happened to be Alex Wood, a physics lecturer at Cambridge. Equally fascinating is hearing about the push in the wake of the First World War for good speech acoustics in public spaces used for legislative and diplomatic purposes.
Smyth also relates tales of the wrangling that sometimes took place over funding for acoustic experiments on public buildings, and how, as the 20th century progressed, companies specializing in acoustic materials sprang up – and in some cases made dubious claims about the merits of their products. Meanwhile, new technologies such as tape recorders and microphones helped bring a more scientific approach to architectural acoustics research.
The author concludes by describing how the acoustic research from the preceding decades influenced the auditorium design of the Royal Festival Hall on the South Bank in London, which, as Smyth states, was “the first building to have been designed from the outset as a manifestation of acoustic science”.
As evidenced by the copious notes, the wealth of contemporary quotes, and the captivating historical photos and excerpts from archive documents, this book is well-researched. But while I enjoyed the pace and found myself hooked into the story, I found the text repetitive in places, and felt that more details about the physics of acoustics would have enhanced the narrative.
But these are minor grumbles. Overall Smyth paints an evocative picture, transporting us into these legendary auditoria. I have always found it a rather magical experience attending concerts at the Royal Albert Hall. Now, thanks to this book, the next time I have that pleasure I will do so with a far greater understanding of the role physics and physicists played in shaping the music I hear. For me at least, listening will never be quite the same again.
2024 Manchester University Press 328pp £25.00/$36.95
As service lifetimes of electric vehicle (EV) and grid storage batteries continually improve, it has become increasingly important to understand how Li-ion batteries perform after extensive cycling. Using a combination of spatially resolved synchrotron x-ray diffraction and computed tomography, the complex kinetics and spatially heterogeneous behavior of extensively cycled cells can be mapped and characterized under both near-equilibrium and non-equilibrium conditions.
This webinar shows examples of commercial cells with thousands (even tens of thousands) of cycles over many years. The behavior of such cells can be surprisingly complex and spatially heterogeneous, requiring a different approach to analysis and modelling than what is typically used in the literature. Using this approach, we investigate the long-term behavior of Ni-rich NMC cells and examine ways to prevent degradation. This work also showcases the incredible durability of single-crystal cathodes, which show very little evidence of mechanical or kinetic degradation after more than 20,000 cycles – equivalent to driving an EV for 8 million km!
Toby Bond
Toby Bond is a senior scientist in the Industrial Science group at the Canadian Light Source (CLS), Canada’s national synchrotron facility. A specialist in x-ray imaging and diffraction, he focuses on in-situ and operando analysis of batteries and fuel cells for industry clients of the CLS. Bond is an electrochemist by training, who completed his MSc and PhD in Jeff Dahn’s laboratory at Dalhousie University with a focus on developing methods and instrumentation to characterize long-term degradation in Li-ion batteries.
The Superconducting Quantum Materials and Systems (SQMS) Center, led by Fermi National Accelerator Laboratory (Chicago, Illinois), is on a mission “to develop beyond-the-state-of-the-art quantum computers and sensors applying technologies developed for the world’s most advanced particle accelerators”. SQMS director Anna Grassellino talks to Physics World about the evolution of a unique multidisciplinary research hub for quantum science, technology and applications.
What’s the headline take on SQMS?
Established as part of the US National Quantum Initiative (NQI) Act of 2018, SQMS is one of the five National Quantum Information Science Research Centers run by the US Department of Energy (DOE). With funding of $115m through its initial five-year funding cycle (2020-25), SQMS represents a coordinated, at-scale effort – comprising 35 partner institutions – to address pressing scientific and technological challenges for the realization of practical quantum computers and sensors, as well as exploring how novel quantum tools can advance fundamental physics.
Our mission is to tackle one of the biggest cross-cutting challenges in quantum information science: the lifetime of superconducting quantum states – also known as the coherence time (the length of time that a qubit can effectively store and process information). Understanding and mitigating the physical processes that cause decoherence – and, by extension, limit the performance of superconducting qubits – is critical to the realization of practical and useful quantum computers and quantum sensors.
How is the centre delivering versus the vision laid out in the NQI?
SQMS has brought together an outstanding group of researchers who, collectively, have utilized a suite of enabling technologies from Fermilab’s accelerator science programme – and from our network of partners – to realize breakthroughs in qubit chip materials and fabrication processes; design and development of novel quantum devices and architectures; as well as the scale-up of complex quantum systems. Central to this endeavour are superconducting materials, superconducting radiofrequency (SRF) cavities and cryogenic systems – all workhorse technologies for particle accelerators employed in high-energy physics, nuclear physics and materials science.
Collective endeavour At the core of SQMS success are top-level scientists and engineers leading the centre’s cutting-edge quantum research programmes. From left to right: Alexander Romanenko, Silvia Zorzetti, Tanay Roy, Yao Lu, Anna Grassellino, Akshay Murthy, Roni Harnik, Hank Lamm, Bianca Giaccone, Mustafa Bal, Sam Posen. (Courtesy: Hannah Brumbaugh/Fermilab)
Take our research on decoherence channels in quantum devices. SQMS has made significant progress in the fundamental science and mitigation of losses in the oxides, interfaces, substrates and metals that underpin high-coherence qubits and quantum processors. These advances – the result of wide-ranging experimental and theoretical investigations by SQMS materials scientists and engineers – led, for example, to the demonstration of transmon qubits (a type of charge qubit exhibiting reduced sensitivity to noise) with systematic improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.
How are you building on these breakthroughs?
First of all, we have worked on technology transfer. By developing novel chip fabrication processes together with quantum computing companies, we have helped our industry partners achieve up to a 2.5x improvement in the error performance of their superconducting chip-based quantum processors.
We have combined these qubit advances with Fermilab’s ultrahigh-coherence 3D SRF cavities: advancing our efforts to build a cavity-based quantum processor and, in turn, demonstrating the longest-lived superconducting multimode quantum processor unit ever built (coherence times in excess of 20 ms). These systems open the path to a more powerful qudit-based quantum computing approach. (A qudit is a multilevel quantum unit that can occupy more than two states.) What’s more, SQMS has already put these novel systems to use as quantum sensors within Fermilab’s particle physics programme – probing for the existence of dark-matter candidates, for example, as well as enabling precision measurements and fundamental tests of quantum mechanics.
Elsewhere, we have been pushing early-stage societal impacts of quantum technologies and applications – including the use of quantum computing methods to enhance data analysis in magnetic resonance imaging (MRI). Here, SQMS scientists are working alongside clinical experts at New York University Langone Health to apply quantum techniques to quantitative MRI, an emerging diagnostic modality that could one day provide doctors with a powerful tool for evaluating tissue damage and disease.
What technologies pursued by SQMS will be critical to the scale-up of quantum systems?
There are several important examples, but I will highlight two of specific note. For starters, there’s our R&D effort to efficiently scale millikelvin-regime cryogenic systems. SQMS teams are currently developing technologies for larger and higher-cooling-power dilution refrigerators. We have designed and prototyped novel systems allowing over 20x higher cooling power, a necessary step to enable the scale-up to thousands of superconducting qubits per dilution refrigerator.
Materials insights The SQMS collaboration is studying the origins of decoherence in state-of-the-art qubits (above) using a raft of advanced materials characterization techniques – among them time-of-flight secondary-ion mass spectrometry, cryo electron microscopy and scanning probe microscopy. With a parallel effort in materials modelling, the centre is building a hierarchy of loss mechanisms that is informing how to fabricate the next generation of high-coherence qubits and quantum processors. (Courtesy: Dan Svoboda/Fermilab)
Also, we are working to optimize microwave interconnects with very low energy loss, taking advantage of SQMS expertise in low-loss superconducting resonators and materials in the quantum regime. (Quantum interconnects are critical components for linking devices together to enable scaling to large quantum processors and systems.)
How important are partnerships to the SQMS mission?
Partnerships are foundational to the success of SQMS. The DOE National Quantum Information Science Research Centers were conceived and built as mini-Manhattan projects, bringing together the power of multidisciplinary and multi-institutional groups of experts. SQMS is a leading example of building bridges across the “quantum ecosystem” – with other national and federal laboratories, with academia and industry, and across agency and international boundaries.
In this way, we have scaled up unique capabilities – multidisciplinary know-how, infrastructure and a network of R&D collaborations – to tackle the decoherence challenge and to harvest the power of quantum technologies. A case study in this regard is Ames National Laboratory, a specialist DOE centre for materials science and engineering on the campus of Iowa State University.
Ames is a key player in a coalition of materials science experts – coordinated by SQMS – seeking to unlock fundamental insights about qubit decoherence at the nanoscale. Through Ames, SQMS and its partners get access to powerful analytical tools – modalities like terahertz spectroscopy and cryo transmission electron microscopy – that aren’t routinely found in academia or industry.
What are the drivers for your engagement with the quantum technology industry?
The SQMS strategy for industry engagement is clear: to work hand-in-hand to solve technological challenges utilizing complementary facilities and expertise; to abate critical performance barriers; and to bring bidirectional value. I believe that even large companies do not have the ability to achieve practical quantum computing systems working exclusively on their own. The challenges at hand are vast and often require R&D partnerships among experts across diverse and highly specialized disciplines.
I also believe that DOE National Laboratories – given their depth of expertise and ability to build large-scale and complex scientific instruments – are, and will continue to be, key players in the development and deployment of the first useful and practical quantum computers. This means not only as end-users, but as technology developers. Our vision at SQMS is to lay the foundations of how we are going to build these extraordinary machines in partnership with industry. It’s about learning to work together and leveraging our mutual strengths.
How do Rigetti and IBM, for example, benefit from their engagement with SQMS?
The partnership with IBM, although more recent, is equally significant. Together with IBM researchers, we are interested in developing quantum interconnects – including the development of high-Q cables to make them less lossy – for the high-fidelity connection and scale-up of quantum processors into large and useful quantum computing systems.
At the same time, SQMS scientists are exploring simulations of problems in high-energy physics and condensed-matter physics using quantum computing cloud services from Rigetti and IBM.
Presumably, similar benefits accrue to suppliers of ancillary equipment to the SQMS quantum R&D programme?
Correct. We challenge our suppliers of advanced materials and fabrication equipment to go above and beyond, working closely with them on continuous improvement and new product innovation. In this way, for example, our suppliers of silicon and sapphire substrates and nanofabrication platforms – key technologies for advanced quantum circuits – benefit from SQMS materials characterization tools and fundamental physics insights that would simply not be available in isolation. These technologies are still at a stage where we need fundamental science to help define the ideal materials specifications and standards.
We are also working with companies developing quantum control boards and software, collaborating on custom solutions to unique hardware architectures such as the cavity-based qudit platforms in development at Fermilab.
How is your team building capacity to support quantum R&D and technology innovation?
We’ve pursued a twin-track approach to the scaling of SQMS infrastructure. On the one hand, we have augmented – very successfully – a network of pre-existing facilities at Fermilab and at SQMS partners, spanning accelerator technologies, materials science and cryogenic engineering. In aggregate, this covers hundreds of millions of dollars’ worth of infrastructure that we have re-employed or upgraded for studying quantum devices, including access to a host of leading-edge facilities via our R&D partners – for example, microkelvin-regime quantum platforms at Royal Holloway, University of London, and underground quantum testbeds at INFN’s Gran Sasso Laboratory.
Thinking big in quantum The SQMS Quantum Garage (above) houses a suite of R&D testbeds to support granular studies of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects. (Courtesy: Ryan Postel/Fermilab)
In parallel, we have invested in new and dedicated infrastructure to accelerate our quantum R&D programme. The Quantum Garage here at Fermilab is the centrepiece of this effort: a 560 square-metre laboratory with a fleet of six additional dilution refrigerators for cryogenic cooling of SQMS experiments as well as test, measurement and characterization of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects.
What is the vision for the future of SQMS?
SQMS is putting together an exciting proposal in response to a DOE call for the next five years of research. Our efforts on coherence will remain paramount. We have come a long way, but the field still needs to make substantial advances in terms of noise reduction of superconducting quantum devices. There’s great momentum and we will continue to build on the discoveries made so far.
We have also demonstrated significant progress regarding our 3D SRF cavity-based quantum computing platform. So much so that we now have a clear vision of how to implement a mid-scale prototype quantum computer with over 50 qudits in the coming years. To get us there, we will be laying out an exciting SQMS quantum computing roadmap by the end of 2025.
It’s equally imperative to address the scalability of quantum systems. Together with industry, we will work to demonstrate practical and economically feasible approaches to be able to scale up to large quantum computing data centres with millions of qubits.
Finally, SQMS scientists will work on exploring early-stage applications of quantum computers, sensors and networks. Technology will drive the science, science will push the technology – a continuous virtuous cycle that I’m certain will lead to plenty more ground-breaking discoveries.
How SQMS is bridging the quantum skills gap
Education, education, education SQMS hosted the inaugural US Quantum Information Science (USQIS) School in summer 2023. Held annually, the USQIS is organized in conjunction with other DOE National Laboratories, academia and industry. (Courtesy: Dan Svoboda/Fermilab)
As with its efforts in infrastructure and capacity-building, SQMS is addressing quantum workforce development on multiple fronts.
Across the centre, Grassellino and her management team have recruited upwards of 150 technical staff and early-career researchers over the past five years to accelerate the SQMS R&D effort. “These ‘boots on the ground’ are a mix of PhD students, postdoctoral researchers plus senior research and engineering managers,” she explains.
Another significant initiative was launched in summer 2023, when SQMS hosted nearly 150 delegates at Fermilab for the inaugural US Quantum Information Science (USQIS) School – now an annual event organized in conjunction with other National Laboratories, academia and industry. The long-term goal is to develop the next generation of quantum scientists, engineers and technicians by sharing SQMS know-how and experimental skills in a systematic way.
“The prioritization of quantum education and training is key to sustainable workforce development,” notes Grassellino. With this in mind, she is currently in talks with academic and industry partners about an SQMS-developed master’s degree in quantum engineering. Such a programme would reinforce the centre’s already diverse internship initiatives, with graduate students benefiting from dedicated placements at SQMS and its network partners.
“Wherever possible, we aim to assign our interns with co-supervisors – one from a National Laboratory, say, another from industry,” adds Grassellino. “This ensures the learning experience shapes informed decision-making about future career pathways in quantum science and technology.”
From its sites in South Africa and Australia, the Square Kilometre Array (SKA) Observatory last year achieved “first light” – producing its first-ever images. When its planned 197 dishes and 131,072 antennas are fully operational, the SKA will be the largest and most sensitive radio telescope in the world.
Under the umbrella of a single observatory, the telescopes at the two sites will work together to survey the cosmos. The Australian side, known as SKA-Low, will focus on low frequencies, while South Africa’s SKA-Mid will observe middle-range frequencies. The £1bn telescopes, which are projected to begin making science observations in 2028, were built to shed light on some of the most intractable problems in astronomy, such as how galaxies form, the nature of dark matter, and whether life exists on other planets.
Three decades in the making, the SKA will stand on the shoulders of many smaller experiments and telescopes – a suite of so-called “precursors” and “pathfinders” that have trialled new technologies and shaped the instrument’s trajectory. The 15 pathfinder experiments dotted around the planet are exploring different aspects of SKA science.
Meanwhile, on the SKA sites in Australia and South Africa, there are four precursor telescopes – MeerKAT and HERA in South Africa, and the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA) in Australia. These precursors are weathering the arid local conditions and are already broadening scientists’ understanding of the universe.
“The SKA was the big, ambitious end game that was going to take decades,” says Steven Tingay, director of the MWA based in Bentley, Australia. “Underneath that umbrella, a huge number of already fantastic things have been done with the precursors, and they’ve all been investments that have been motivated by the path to the SKA.”
Even as technology and science testbeds, “they have far surpassed what anyone reasonably expected of them”, adds Emma Chapman, a radio astronomer at the University of Nottingham, UK.
MeerKAT: glimpsing the heart of the Milky Way
In 2018, radio astronomers in South Africa were scrambling to pull together an image for the inauguration of the 64-dish MeerKAT radio telescope. MeerKAT will eventually form the heart of SKA-Mid, picking up frequencies between 350 megahertz and 15.4 gigahertz, and the researchers wanted to show what it was capable of.
As you’ve never seen it before A radio image of the centre of the Milky Way taken by the MeerKAT telescope. The elongated radio filaments visible emanating from the heart of the galaxy are 10 times more numerous than in any previous image. (Courtesy: I. Heywood, SARAO)
Like all the SKA precursors, MeerKAT is an interferometer, with many dishes acting like a single giant instrument. MeerKAT’s dishes stand about three storeys high and have a diameter of 13.5 m, with the largest separation between dishes being about 8 km. This is part of what gives the instrument its power: the combined collecting area of the dishes provides sensitivity, while long baselines between dishes sharpen the telescope’s angular resolution.
Additional dishes will be integrated into the interferometer to form SKA-Mid. The new dishes will be larger (with diameters of 15 m) and further apart (with baselines of up to 150 km), making it much more sensitive than MeerKAT on its own. Nevertheless, using just the provisional data from MeerKAT, the researchers were able to mark the unveiling of the telescope with the clearest radio image yet of our galactic centre.
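As a rough guide, an interferometer’s angular resolution is θ ≈ λ/B, where λ is the observing wavelength and B the longest baseline. At a wavelength of 21 cm, for example, an 8 km baseline resolves features of roughly 5 arcseconds, while a 150 km baseline sharpens that to about 0.3 arcseconds – illustrative numbers rather than the telescopes’ formal specifications.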
Now, we finally see the big picture – a panoramic view filled with an abundance of filaments…. This is a watershed in furthering our understanding of these structures
Farhad Yusef-Zadeh
Four years later, an international team used the MeerKAT data to produce an even more detailed image of the centre of the Milky Way (ApJL 949 L31). The image (above) shows radio-emitting filaments up to 150 light-years long unspooling from the heart of the galaxy. These structures, whose origin remains unknown, were first observed in 1984, but the new image revealed 10 times more than had ever been seen before.
“We have studied individual filaments for a long time with a myopic view,” Farhad Yusef-Zadeh, an astronomer at Northwestern University in the US and an author on the image paper, said at the time. “Now, we finally see the big picture – a panoramic view filled with an abundance of filaments. This is a watershed in furthering our understanding of these structures.”
The image resembles a “glorious artwork, conveying how bright black holes are in radio waves, but with the busyness of the galaxy going on around it”, says Chapman. “Runaway pulsars, supernovae remnant bubbles, magnetic field lines – it has it all.”
In a different area of astronomy, MeerKAT “has been a surprising new contender in the field of pulsar timing”, says Natasha Hurley-Walker, an astronomer at the Curtin University node of the International Centre for Radio Astronomy Research in Bentley. Pulsars are rotating neutron stars that produce periodic pulses of radiation, in some cases hundreds of times a second. MeerKAT’s sensitivity, combined with its precise time-stamping, allows it to accurately map these powerful radio sources.
An experiment called the MeerKAT Pulsar Timing Array has been observing a group of 80 pulsars once a fortnight since 2019 and is using them as “cosmic clocks” to create a map of gravitational-wave sources. “If we see pulsars in the same direction in the sky lose time in a connected way, we start suspecting that it is not the pulsars that are acting funny but rather a gravitational wave background that has interfered,” says Marisa Geyer, an astronomer at the University of Cape Town and a co-author on several papers about the array published last year.
HERA: the first stars and galaxies
When astronomers dreamed up the idea for the SKA about 30 years ago, they wanted an instrument that could not only capture a wide view of the universe but was also sensitive enough to look far back in time. In the first billion years after the Big Bang, the universe cooled enough for hydrogen and helium to form, eventually clumping into stars and galaxies.
When these early stars began to shine, their light stripped electrons from the primordial hydrogen that still populated most of the cosmos – a period of cosmic history known as the Epoch of Reionization. The hydrogen that had not yet been ionized gave off a faint signal, and catching glimpses of this ancient radiation remains one of the major science goals of the SKA.
The task of developing methods to identify these primordial hydrogen signals falls to the Hydrogen Epoch of Reionization Array (HERA) – a collection of hundreds of 14 m dishes, packed closely together as they watch the sky, like bowls made of wire mesh (see image below). They have been specifically designed to observe fluctuations in primordial hydrogen in the low-frequency range of 100 MHz to 200 MHz.
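Assuming the target is the redshifted 21 cm line of neutral hydrogen – the standard tracer in reionization studies – the observed frequency is ν = 1420 MHz/(1 + z), so HERA’s 100–200 MHz band corresponds to redshifts of roughly 6 to 13, comfortably within the first billion years after the Big Bang.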
Echoes of the early universe The HERA telescope is listening for the faint signals from the first primordial hydrogen that formed after the Big Bang. (Courtesy: South African Radio Astronomy Observatory (SARAO))
Understanding this mysterious epoch sheds light on how young cosmic objects influenced the formation of larger ones and later seeded other objects in the universe. Scientists using HERA data have already reported the most sensitive power limits on the reionization signal (ApJ 945 124), bringing us closer to pinning down what the early universe looked like and how it evolved, and will eventually guide SKA observations. “It always helps to be able to target things better before you begin to build and operate a telescope,” explains HERA project manager David de Boer, an astronomer at the University of California, Berkeley in the US.
MWA: “unexpected” new objects
Over in Australia, meanwhile, the MWA’s 4096 antennas crouch on the red desert sand like spiders (see image below). This interferometer has a particularly wide-field view because, unlike its mid-frequency precursor cousins, it has no moving parts, allowing it to view large parts of the sky at the same time. Each antenna also contains a low-noise amplifier in its centre, boosting the relatively weak low-frequency signals from space. “In a single observation, you cover an enormous fraction of the sky”, says Tingay. “That’s when you can start to pick up rare events and rare objects.”
Sharp eyes With its wide field of view and low-noise signal amplifiers, the MWA telescope in Australia is poised to spot brief and rare cosmic events, and it has already discovered a new class of mysterious radio transients. (Courtesy: Marianne Annereau, 2015 Murchison Widefield Array (MWA))
Hurley-Walker and colleagues discovered one such object a few years ago – repeated, powerful blasts of radio waves that occurred every 18 minutes and lasted about a minute. These signals were an example of a “radio transient” – an astrophysical phenomenon that can last from milliseconds to years, and may repeat or occur just once. Radio transients have been attributed to many sources including pulsars, but the period of this event was much longer than had ever been observed before.
New transients are challenging our current models of stellar evolution
Cathryn Trott, Curtin Institute of Radio Astronomy in Bentley, Australia
After the researchers first noticed this signal, they followed up with other telescopes and searched archival data from other observatories going back 30 years to confirm the peculiar time scale. “This has spurred observers around the world to look through their archival data in a new way, and now many new similar sources are being discovered,” Hurley-Walker says.
The discovery of new transients, including this one, are “challenging our current models of stellar evolution”, according to Cathryn Trott, a radio astronomer at the Curtin Institute of Radio Astronomy in Bentley, Australia. “No one knows what they are, how they are powered, how they generate radio waves, or even whether they are all the same type of object,” she adds.
This is something that the SKA – both SKA-Mid and SKA-Low – will investigate. The Australian SKA-Low antennas detect frequencies between 50 MHz and 350 MHz. They build on some of the techniques trialled by the MWA, such as the efficacy of using low-frequency antennas and how to combine their received signals into a digital beam. SKA-Low, with its similarly wide field of view, will offer a powerful new perspective on this developing area of astronomy.
ASKAP: giant sky surveys
The 36-dish ASKAP saw first light in 2012, the same year it was decided to split the SKA between Australia and South Africa. ASKAP was part of Australia’s efforts to prove that it could host the massive telescope, but it has since become an important instrument in its own right. These dishes use a technology called a phased array feed which allows the telescope to view different parts of the sky simultaneously.
Each dish contains one of these phased array feeds, which consists of 188 receivers arranged like a chessboard. With this technology, ASKAP can produce 36 concurrent beams that together cover about 30 square degrees of sky. This means it has a wide field of view, says de Boer, who was ASKAP’s inaugural director in 2010. In its first large-area survey, published in 2020, astronomers stitched together 903 images and identified more than 3 million sources of radio emissions in the southern sky, many of which were new (PASA 37 e048).
Down under The ASKAP telescope array in Australia was used to demonstrate Australia’s capability to host the SKA. Able to rapidly take wide surveys of the sky, it is also a valuable scientific instrument in its own right, and has made significant discoveries in the study of Fast Radio Bursts. (Courtesy: CSIRO)
Because it can quickly survey large areas of the sky, the telescope has shown itself to be particularly adept at identifying and studying new fast radio bursts (FRBs). Discovered in 2007, FRBs are another kind of radio transient. They have been observed in many galaxies, and though some have been observed to repeat, most are detected only once.
This work is also helping scientists to understand one of the universe’s biggest mysteries. For decades, researchers have puzzled over the fact that the ordinary matter we can detect in the universe adds up to only about half of what we know was produced in the Big Bang. The dispersion of FRBs by this “missing matter” allows us to weigh all of the normal matter between us and the distant galaxies hosting the bursts.
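The standard relation behind the technique is that a burst arrives later at lower frequencies, delayed by Δt ≈ 4.15 ms × DM × [(ν1/GHz)⁻² − (ν2/GHz)⁻²], where the dispersion measure DM is the column density of free electrons along the line of sight in units of pc cm⁻³. Measuring that delay effectively counts the electrons – and hence the ordinary matter – between us and the burst.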
By combing through ASKAP data, researchers in 2020 also discovered a new class of radio sources, which they dubbed “odd radio circles” (PASA 38 e003). These are giant rings of radiation that are observed only in radio waves. Five years later their origins remain a mystery, but some scientists maintain they are flashes from ancient star formation.
The precursors are so important. They’ve given us new questions. And it’s incredibly exciting
Philippa Hartley, SKAO, Manchester
While SKA has many concrete goals, it is these unexpected discoveries that Philippa Hartley, a scientist at the SKAO, based near Manchester, is most excited about. “We’ve got so many huge questions that we’re going to use the SKA to try and answer, but then you switch on these new telescopes, you’re like, ‘Whoa! We didn’t expect that.’” That is why the precursors are so important. “They’ve given us new questions. And it’s incredibly exciting,” she adds.
Trouble on the horizon
As well as pushing the boundaries of astronomy and shaping the design of the SKA, the precursors have made a discovery much closer to home – one that could be a significant issue for the telescope. In a development that SKA’s founders would not have foreseen, the race to fill the skies with constellations of satellites is a problem both for the precursors and for the SKA itself.
Large corporations, including SpaceX in Hawthorne, California, OneWeb in London, UK, and Amazon’s Project Kuiper in Seattle, Washington, have launched more than 6000 communications satellites into space. Many others are also planned, including more than 12,000 from the Shanghai Spacecom Satellite Technology’s G60 Starlink based in Shanghai. These satellites, as well as global positioning satellites, are “photobombing” astronomy observatories and affecting observations across the electromagnetic spectrum.
The wild, wild west Satellite constellations are causing interference with ground-based observatories. (Courtesy: iStock/yucelyilmaz)
ASKAP, MeerKAT and the MWA have all flagged the impact of satellites on their observations. “The likelihood of a beam of a satellite being within the beam of our telescopes is vanishingly small and is easily avoided,” says Robert Braun, SKAO director of science. However, because they are everywhere, these satellites still introduce background radio interference that contaminates observations, he says.
Although the SKA Observatory is engaging with individual companies to devise engineering solutions, “we really can’t be in a situation where we have bespoke solutions with all of these companies”, SKAO director-general Phil Diamond told a side event at the IAU general assembly in Cape Town last year. “That’s why we’re pursuing the regulatory and policy approach so that there are systems in place,” he said. “At the moment, it’s a bit like the wild, wild west and we do need a sheriff to stride into town to help put that required protection in place.”
In this, too, SKA precursors are charting a path forward, identifying ways to observe even with mega satellite constellations staring down at them. When the full SKA telescopes finally come online in 2028, the discoveries they make will, in large part, be thanks to the telescopes that came before them.
The internal temperature of a building is important – particularly in offices and work environments – for maximizing comfort and productivity. Managing the temperature is also essential for reducing the energy consumption of a building. In the US, buildings account for around 29% of total end-use energy consumption, with more than 40% of this energy dedicated to managing the internal temperature of a building via heating and cooling.
The human body is sensitive to both radiative and convective heat. The convective part revolves around humidity and air temperature, whereas radiative heat depends upon the surrounding surface temperatures inside the building. Understanding both thermal aspects is key for balancing energy consumption with occupant comfort. However, there are not many practical methods available for measuring the impact of radiative heat inside buildings. Researchers from the University of Minnesota Twin Cities have developed an optical sensor that could help solve this problem.
Limitation of thermostats for radiative heat
Room thermostats are used in almost every building today to regulate the internal temperature and improve the comfort levels for the occupants. However, modern thermostats only measure the local air temperature and don’t account for the effects of radiant heat exchange between surfaces and occupants, resulting in suboptimal comfort levels and inefficient energy use.
Finding a way to measure the mean radiant temperature in real time inside buildings could provide a more efficient way of heating the building – leading to more advanced and efficient thermostat controls. Currently, radiant temperature can be measured using either radiometers or black globe sensors. But radiometers are too expensive for commercial use, and black globe sensors are slow, bulky and prone to error in many indoor environments.
In search of a new approach, first author Fatih Evren (now at Pacific Northwest National Laboratory) and colleagues used low-resolution, low-cost infrared sensors to measure the longwave mean radiant temperature inside buildings. These sensors eliminate the pan/tilt mechanism (where sensors rotate periodically to measure the temperature at different points and an algorithm determines the surface temperature distribution) required by many other sensors used to measure radiative heat. The new optical sensor also requires 4.5 times less computation power than pan/tilt approaches with the same resolution.
Integrating optical sensors to improve room comfort
The researchers tested infrared thermal array sensors with 32 x 32 pixels in four real-world environments (three living spaces and an office) with different room sizes and layouts. They examined three sensor configurations: one sensor on each of the room’s four walls; two sensors; and a single-sensor setup. The sensors measured the mean radiant temperature for 290 h at internal temperatures of between 18 and 26.8 °C.
The optical sensors capture raw 2D thermal data containing temperature information for adjacent walls, floor and ceiling. To determine surface temperature distributions from these raw data, the researchers used projective homographic transformations – mappings between two different geometric planes. By marking the corners of the room, each surface in the raw image is associated with a homography matrix; applying the corresponding transformation yields the temperature distribution across that wall, floor or ceiling. The surface temperatures can then be used to calculate the mean radiant temperature.
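As a rough numpy sketch of the idea – our illustration, not the team’s code, with made-up corner coordinates, grid sizes and helper names, and a plain average standing in for the proper view-factor weighting – a four-point homography can be solved and applied as follows:

import numpy as np

def homography_from_corners(src, dst):
    # Direct linear transform for a four-point homography.
    # src, dst: (4, 2) arrays of corresponding corner coordinates.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def rectify_surface(thermal, H, out_shape):
    # Resample the low-resolution thermal frame onto a regular grid on one
    # surface plane, using nearest-neighbour lookup through the inverse map.
    Hinv = np.linalg.inv(H)
    rows, cols = out_shape
    v, u = np.mgrid[0:rows, 0:cols]
    x, y, w = Hinv @ np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    xi = np.clip(np.rint(x / w).astype(int), 0, thermal.shape[1] - 1)
    yi = np.clip(np.rint(y / w).astype(int), 0, thermal.shape[0] - 1)
    return thermal[yi, xi].reshape(out_shape)

# hypothetical example: the four marked corners of one wall as they appear in
# a 32 x 32 thermal frame, mapped onto a 20 x 30 grid representing that wall
thermal = 20.0 + 2.0 * np.random.rand(32, 32)                  # fake temperatures (deg C)
img_corners = np.array([[3, 4], [28, 2], [29, 20], [2, 24]], float)
wall_corners = np.array([[0, 0], [29, 0], [29, 19], [0, 19]], float)
H = homography_from_corners(img_corners, wall_corners)
wall_T = rectify_surface(thermal, H, (20, 30))
# crude stand-in for the mean radiant temperature: a plain average of the
# rectified surface temperatures (the real calculation weights by view factors)
print(f"mean temperature of this wall: {wall_T.mean():.2f} C")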
The team compared the temperatures measured by their sensors against ground truth measurements obtained via the net-radiometer method. The optical sensor was found to be repeatable and reliable for different room sizes, layouts and temperature sensing scenarios, with most approaches agreeing within ±0.5 °C of the ground truth measurement, and a maximum error (arising from a single-sensor configuration) of only ±0.96 °C. The optical sensors were also more accurate than the black globe sensor method, which tends to have higher errors due to under/overestimating solar effects.
The researchers conclude that the sensors are repeatable, scalable and predictable, and that they could be integrated into room thermostats to improve human comfort and energy efficiency – especially for controlling the radiant heating and cooling systems now commonly used in high-performance buildings. They also note that a future direction could be to integrate machine learning and other advanced algorithms to improve the calibration of the sensors.
A new technique for using frequency combs to measure trace concentrations of gas molecules has been developed by researchers in the US. The team reports single-digit parts-per-trillion detection sensitivity and extremely broad spectral coverage, spanning more than 1000 cm-1 in wavenumber. This record-level sensing performance could open up a variety of hitherto inaccessible applications in fields such as medicine, environmental chemistry and chemical kinetics.
Each molecular species will absorb light at a specific set of frequencies. So, shining light through a sample of gas and measuring this absorption can reveal the molecular composition of the gas.
Cavity ringdown spectroscopy is an established way to increase the sensitivity of absorption spectroscopy, and it needs no calibration. Laser light is injected into a cavity formed by two highly reflective mirrors, creating an optical standing wave. A sample of gas is introduced into the cavity so that the light passes through it, normally many thousands of times. The absorption of light by the gas is then determined by the rate at which the intracavity light intensity “rings down” – in other words, the rate at which the standing wave decays away.
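The measurement boils down to comparing two exponential decays. The light leaking out of the cavity falls off as I(t) = I(0) exp(−t/τ), and loading the cavity with an absorbing gas speeds up the decay according to 1/τ = 1/τ0 + cα, where τ0 is the empty-cavity ringdown time, c is the speed of light and α is the gas’s absorption coefficient. Because α follows from the difference between two measured decay rates, no intensity calibration is required.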
Researchers have used this method with frequency comb lasers to probe the absorption of gas samples at a range of different light frequencies. A frequency comb produces light at a series of very sharp intensity peaks that are equidistant in frequency – resembling the teeth of a comb.
Shifting resonances
However, the more reflective the mirrors become (the higher the cavity finesse), the narrower each cavity resonance becomes. Because the resonance frequencies are not evenly spaced and can be shifted substantially by the loaded gas, researchers normally oscillate the length of the cavity so that all of the cavity resonances sweep back and forth across the comb lines. Multiple resonances are sequentially excited, and the transient comb intensity dynamics are captured by a camera after the light has been spatially separated by an optical grating.
“That experimental scheme works in the near-infrared, but not in the mid-infrared,” says Qizhong Liang. “Mid-infrared cameras are not fast enough to capture those dynamics yet.” This is a problem because the mid-infrared is where many molecules can be identified by their unique absorption spectra.
Liang is a member of Jun Ye’s group at JILA in Colorado, which has shown that it is possible to measure these transient comb dynamics simply with a Michelson interferometer. The spectrometer entails only beam splitters, a delay stage and photodetectors. The researchers worked out that the periodically generated intensity dynamics arising from each tooth of the frequency comb can be detected as a set of Fourier components offset by Doppler frequency shifts. Absorption by the loaded gas can thus be determined.
Dithering the cavity
This process of reading out transient dynamics from “dithering” the cavity by a passive Michelson interferometer is much simpler than previous setups and thus can be used by people with little experience with combs, says Liang. It also places no restrictions on the finesse of the cavity, spectral resolution, or spectral coverage. “If you’re dithering the cavity resonances, then no matter how narrow the cavity resonance is, it’s guaranteed that the comb lines can be deterministically coupled to the cavity resonance twice per cavity round trip modulation,” he explains.
The researchers reported detections of various molecules at concentrations as low as parts-per-billion with parts-per-trillion uncertainty in exhaled air from volunteers. This included biomedically relevant molecules such as acetone, which is a sign of diabetes, and formaldehyde, which is diagnostic of lung cancer. “Detection of molecules in exhaled breath in medicine has been done in the past,” explains Liang. “The more important point here is that, even if you have no prior knowledge about what the gas sample composition is, be it in industrial applications, environmental science applications or whatever you can still use it.”
Konstantin Vodopyanov of the University of Central Florida in Orlando comments: “This achievement is remarkable, as it integrates two cutting-edge techniques: cavity ringdown spectroscopy, where a high-finesse optical cavity dramatically extends the laser beam’s path to enhance sensitivity in detecting weak molecular resonances, and frequency combs, which serve as a precise frequency ruler composed of ultra-sharp spectral lines. By further refining the spectral resolution to the Doppler broadening limit of less than 100 MHz and referencing the absolute frequency scale to a reliable frequency standard, this technology holds great promise for applications such as trace gas detection and medical breath analysis.”