This episode of the Physics World Weekly podcast features William Phillips, who shared the 1997 Nobel Prize for Physics for his work on cooling and trapping atoms using laser light.
In a wide-ranging conversation with Physics World’s Margaret Harris, Phillips talks about his long-time fascination with quantum physics – which began with an undergraduate project on electron spin resonance. He chats about quirky quantum phenomena such as entanglement and superposition and explains how they are exploited in atomic clocks and quantum computing. He also looks to the future of quantum technologies and stresses the importance of curiosity-led research.
Phillips has spent much of his career at the US National Institute of Standards and Technology (NIST) in Maryland, and he is also a professor of physics at the University of Maryland.
This podcast is supported by Atlas Technologies, specialists in custom aluminium and titanium vacuum chambers as well as bonded bimetal flanges and fittings used everywhere from physics labs to semiconductor fabs.
Scientists in the US have developed a new type of photovoltaic battery that runs on the energy given off by nuclear waste. The battery uses a scintillator crystal to transform the intense gamma rays from radioisotopes into electricity and can produce more than a microwatt of power. According to its developers at Ohio State University and the University of Toledo, it could be used to power microelectronic devices such as microchips.
The idea of a nuclear waste battery is not new. Indeed, Raymond Cao, the Ohio State nuclear engineer who led the new research effort, points out that the first experiments in this field date back to the early 1950s. These studies, he explains, used a 50 millicurie (mCi) 90Sr–90Y source to produce electricity via the electron-voltaic effect in p–n junction devices.
However, the maximum power output of these devices was just 0.8 μW, and their power conversion efficiency (PCE) was an abysmal 0.4%. Since then, the PCE of nuclear voltaic batteries has remained low, typically in the 1–3% range, and even the most promising devices have produced, at best, a few hundred nanowatts of power.
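As a rough back-of-the-envelope check, that 0.4% figure can be reproduced to within a factor of two from the decay power of the source. The mean beta energies used below are assumed typical literature values, not numbers taken from the original studies:

```python
# Back-of-the-envelope PCE estimate for the 1950s 90Sr-90Y devices.
# Assumed mean beta energies (typical values, not from the original study):
# 90Sr ~0.196 MeV plus its daughter 90Y ~0.93 MeV, emitted per decay chain.
MEV_TO_J = 1.602e-13

activity_bq = 50e-3 * 3.7e10                       # 50 mCi in becquerels
energy_per_decay_j = (0.196 + 0.93) * MEV_TO_J     # per 90Sr decay chain

decay_power_w = activity_bq * energy_per_decay_j   # power carried by the betas
pce = 0.8e-6 / decay_power_w                       # 0.8 uW electrical output

print(f"decay power ~ {decay_power_w * 1e6:.0f} uW")
print(f"PCE ~ {pce * 100:.2f}%")
```

With these assumptions the source emits a few hundred microwatts, and the implied efficiency lands in the few-tenths-of-a-percent range – the same order as the quoted 0.4%.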
Exploiting the nuclear photovoltaic effect
Cao is confident that his team’s work will change this. “Our yet-to-be-optimized battery has already produced 1.5 μW,” he says, “and there is much room for improvement.”
To achieve this benchmark, Cao and colleagues focused on a different physical process called the nuclear photovoltaic effect. This effect captures the energy from highly penetrating gamma rays indirectly, by coupling a photovoltaic solar cell to a scintillator crystal that emits visible light when it absorbs radiation. This radiation can come from several possible sources, including nuclear power plants, storage facilities for spent nuclear fuel, space- and submarine-based nuclear reactors or, really, anywhere that happens to have large amounts of gamma ray-producing radioisotopes on hand.
The scintillator crystal Cao and colleagues used is gadolinium aluminium gallium garnet (GAGG), and they attached it to a solar cell made from polycrystalline CdTe. The resulting device measures around 2 × 2 × 1 cm, and they tested it using intense gamma rays emitted by two different radioactive sources, 137Cs and 60Co, which produced dose rates of 1.5 krad/h and 10 krad/h, respectively. 137Cs is the most common fission product found in spent nuclear fuel, while 60Co is an activation product.
Enough power for a microsensor
The Ohio-Toledo team found that the maximum power output of their battery was around 288 nW with the 137Cs source. Using the 60Co irradiator boosted this to 1.5 μW. “The greater the radiation intensity, the more light is produced, resulting in increased electricity generation,” Cao explains.
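A quick ratio check shows the two measurements are roughly consistent with that linear picture (the dose rates are the figures quoted above for the two sources):

```python
# Do the reported powers scale roughly linearly with gamma dose rate?
p_cs, p_co = 288e-9, 1.5e-6   # W: outputs with the 137Cs and 60Co sources
d_cs, d_co = 1.5, 10.0        # krad/h: the corresponding dose rates

power_ratio = p_co / p_cs     # ~5.2x more power...
dose_ratio = d_co / d_cs      # ...for ~6.7x more dose rate
print(f"power up {power_ratio:.1f}x for dose up {dose_ratio:.1f}x")
```

The scaling is close to, though slightly below, proportional – plausible given that the two sources also emit gamma rays of different energies.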
The higher figure is already enough to power a microsensor, he says, and he and his colleagues aim to scale the system up to milliwatts in future efforts. However, they acknowledge that doing so presents several challenges. Scaling up the technology will be expensive, and gamma radiation gradually damages both the scintillator and the solar cell. To overcome the latter problem, Cao says they will need to replace the materials in their battery with new ones. “We are interested in finding alternative scintillator and solar cell materials that are more radiation-hard,” he tells Physics World.
The researchers are optimistic, though, arguing that optimized nuclear photovoltaic batteries could be a viable option for harvesting ambient radiation that would otherwise be wasted. They report their work in Optical Materials X.
Researchers in the Netherlands, Austria, and France have created what they describe as the first operating system for networking quantum computers. Called QNodeOS, the system was developed by a team led by Stephanie Wehner at Delft University of Technology. It has been tested with several different types of quantum processor and could help boost the accessibility of quantum computing for people without expert knowledge of the field.
In the 1960s, the development of early operating systems such as OS/360 and UNIX represented a major leap forward in computing. By providing a level of abstraction in its user interface, an operating system enables users to program and run applications without having to worry about how to reconfigure the transistors in the computer processors. This advance laid the groundwork for many of the digital technologies that have revolutionized our lives.
“If you needed to directly program the chip installed in your computer in order to use it, modern information technologies would not exist,” Wehner explains. “As such, the ability to program and run applications without needing to know what the chip even is has been key in making networks like the Internet actually useful.”
Quantum and classical
The users of nascent quantum computers would also benefit from an operating system that allows quantum (and classical) computers to be connected in networks – not least because most people are not familiar with the intricacies of quantum information processing.
However, quantum computers are fundamentally different from their classical counterparts, and this creates a host of new challenges for those developing network operating systems.
“These include the need to execute hybrid classical–quantum programs, merging high-level classical processing (such as sending messages over a network) with quantum operations (such as executing gates or generating entanglement),” Wehner explains.
Within these hybrid programs, quantum computing resources would only be used when specifically required. Otherwise, routine computations would be offloaded to classical systems, making it significantly easier for developers to program and run their applications.
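That division of labour can be sketched in a few lines of Python. Everything here – the `submit_quantum_job` helper and the gate-list format – is hypothetical and purely illustrative; it is not the QNodeOS interface:

```python
import random

def submit_quantum_job(gates):
    """Hypothetical stand-in for dispatching a gate sequence to a quantum
    processor; here it simply returns a random measurement outcome."""
    return random.choice([0, 1])

def hybrid_program(trials=100):
    outcomes = []
    for _ in range(trials):
        # Classical step: bookkeeping, messaging, choosing parameters...
        gates = ["H 0", "CNOT 0 1", "MEASURE 1"]  # illustrative circuit
        # Quantum step: the quantum device is invoked only here
        outcomes.append(submit_quantum_job(gates))
    # Classical post-processing runs on ordinary hardware
    return sum(outcomes) / trials

print(hybrid_program())  # fraction of trials that measured 1
```

The point of the sketch is the structure: the quantum resource is touched only inside the loop body, while everything else stays on classical hardware.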
No standardized architecture
In addition, Wehner’s team considered that, unlike the transistor circuits used in classical systems, quantum operations currently lack a standardized architecture – and can be carried out using many different types of qubits.
Wehner’s team addressed these design challenges by creating QNodeOS, a hybrid network operating system that combines classical and quantum “blocks” to provide users with a platform for performing quantum operations.
“We implemented this architecture in a software system, and demonstrated that it can work with different types of quantum hardware,” Wehner explains. The qubit types used by the team included the electronic spin states of nitrogen–vacancy defects in diamond and the energy levels of individual trapped ions.
Multi-tasking operation
“We also showed how QNodeOS can perform advanced functions such as multi-tasking. This involved the execution of several programs at once, including compilers and scheduling algorithms.”
QNodeOS is still a long way from having the same impact as UNIX and other early operating systems. However, Wehner’s team is confident that QNodeOS will accelerate the development of future quantum networks.
“It will allow for easier software development, including the ability to develop new applications for a quantum Internet,” she says. “This could open the door to a new area of quantum computer science research.”
The nervous system is often considered the body’s wiring, sending electrical signals to communicate needs and hazards between different parts of the body. However, researchers at the University of Massachusetts Amherst have now also measured bioelectric signals propagating from cultured epithelial cells as they respond to a critical injury.
“Cells are pretty amazing in terms of how they are making collective decisions, because it seems like there is no centre, like a brain,” says researcher Sunmin Yu, who likens epithelial cells to ants in the way that they gather information and solve problems. Alongside lab leader Steve Granick, Yu reports this latest finding in Proceedings of the National Academy of Sciences, suggesting a means for the communication between cells that enables them to coordinate with each other.
Neurons function by bioelectric signals, and punctuated rhythmic bioelectrical signals allow heart muscle cells to keep the heart pumping blood throughout our body. When it comes to intercell signals for any other type of cell, however, the most common hypothesis is the exchange of chemical cues. Yet Yu had noted from previous work by other groups that the process of “extruding” wounded epithelial cells to get rid of them involves increased expression of the relevant proteins at some distance from the wound itself.
“Our thought process was to inquire about the mechanism by which information could be transmitted over the necessary long distance,” says Yu. She realised that common molecular signalling mechanisms, such as extracellular signal-regulated kinase 1/2 (ERK), which has a speed of around 1 mm/s, would be rather slow as a potential conduit.
Epithelial signals measure up
Yu and Granick grew a layer of epithelial cells on a microelectrode array (MEA). While other approaches to measuring electrical activity in cultured cells exist, an MEA has the advantage of combining electrical sensitivity with a long range, enabling the researchers to collect both temporal and spatial information on electrical activity. They then “wounded” the cells by exposing them to an intense focused laser beam.
Following the wound, the researchers observed electrical potential changes with comparable amplitudes and similar shapes to those observed in neurons, but over much longer periods of time. “The signal propagation speed we measured is about 1000 times slower than neurons and 10 times faster than ERK,” says Yu, expressing great interest in whether the “high-pitch speaking” neurons and heart tissue cells communicate with these “low-pitch speaking” epithelial cells, and if so, how.
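Combining those two ratios with the ~1 mm/s figure quoted earlier for ERK gives a quick sense of the implied scales:

```python
# Implied propagation speeds from the ratios quoted by Yu.
erk_speed = 1.0                          # mm/s, quoted for ERK signalling
epithelial_speed = 10 * erk_speed        # "10 times faster than ERK"
neuron_speed = 1000 * epithelial_speed   # "1000 times slower than neurons"

print(f"epithelial: ~{epithelial_speed:.0f} mm/s")
print(f"neuronal:   ~{neuron_speed / 1000:.0f} m/s")
```

An implied neuronal speed of ~10 m/s sits comfortably in the typical 1–100 m/s range for nerve conduction, so the three figures hang together.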
The researchers noted an apparent threshold in the amplitude of the generated signal required for it to propagate. But for those that met this threshold, propagation of the electric signals spanned regions up to 600 µm for as long as measurements could be recorded, which was 5 h. Given the mechanical forces generated during “cell extrusion”, the researchers hypothesized the likely role of mechanosensitive proteins in generating the signals. Sure enough, inhibiting the mechanosensitive ion channels shut down the generation of electrical signals.
Yu and Granick highlight previous suggestions that electrical potentials in epithelial cells may be important for regulating the coordinated changes that take place during embryogenesis and regeneration, as well as being implicated in cancer. However, this is the first observation of such electrical potentials being generated and propagating across epithelial tissue.
“Yu and Granick have discovered a remarkable new form of electrical signalling emitted by wounded epithelial cells – cells traditionally viewed as electrically passive,” says Seth Fraden, whose lab at Brandeis University in Massachusetts in the US investigates a range of soft matter topics but was not involved in this research.
Fraden adds that it raises an “intriguing” question: “What is the signal’s target? In light of recent findings by Nathan Belliveau and colleagues, identifying the protein Galvanin as a specific electric-field sensor in immune cells, a compelling hypothesis emerges: epithelial cells send these electric signals as distress calls and immune cells – nature’s healers – receive them to rapidly locate and respond to tissue injuries. Such insights may have profound implications for developing novel regenerative therapies and bioelectric devices aimed at accelerating wound healing.”
Adam Ezra Cohen, whose team at Harvard University in the US focuses on innovative technology for probing molecules and cells, and who was not directly involved in this research, also finds the research “intriguing” but raises numerous questions: “What are the underlying membrane voltage dynamics? What are the molecular mechanisms that drive these spikes? Do similar things happen in intact tissues or live animals?” he asks, adding that techniques such as patch clamp electrophysiology and voltage imaging could address these questions.
A new technique could reduce the risk of blood clots associated with medical implants, making them safer for patients. The technique, which was developed by researchers at the University of Sydney, Australia, involves coating the implants with highly hydrophilic molecules known as zwitterions, thereby inhibiting the build-up of clot-triggering proteins.
Proteins in blood can stick to the surfaces of medical implants such as heart valves and vascular stents. When this happens, it produces a cascade effect in which multiple mechanisms lead to the formation of extensive clots and fibrous networks. These clots and networks can impair the function of implanted medical devices so much that invasive surgery may be required to remove or replace the implant.
To prevent this from happening, the surfaces of implants are often treated with polymeric coatings that resist biofouling. Hydrophilic polymeric coatings such as polyethylene glycol are especially useful, as their water-loving nature allows a thin layer of water to form between them and the surface of the implants, held in place via hydrogen and/or electrostatic bonds. This water layer forms a barrier that prevents proteins from sticking, or adsorbing, to the implant.
An extra layer of zwitterions
Recently, researchers discovered that polymers coated with an extra layer of small molecules called zwitterions provided even more protection against protein adsorption. “Zwitter” means “hybrid” in German; hence, zwitterions are molecules that carry both positive and negative charge, making them neutrally charged overall. These molecules are also very hydrophilic and easily form tight bonds with water molecules. The resulting layer of water has a structure that is similar to that of bulk water, which is energetically stable.
A further attraction of zwitterionic coatings for medical implants is that zwitterions are naturally present in our bodies. In fact, they make up the hydrophilic phospholipid heads of mammalian cell membranes, which play a vital role in regulating interactions between biological cells and the extracellular environment.
Plasma functionalization
In the new work, researchers led by Sina Naficy grafted nanometre-thick zwitterionic coatings onto the surfaces of implant materials using a technique called plasma functionalization. They found that the resulting structures reduce the amount of fibrinogen proteins that adsorb onto the implants by roughly nine-fold and decrease blood clot formation (thrombosis) by almost 75%.
Naficy and colleagues achieved their results by optimizing the density, coverage and thickness of the coating. This was critical for realizing the full potential of these materials, they say, because a coating that is not fully optimized would not reduce clotting.
Naficy tells Physics World that the team’s main goal is to enhance the surface properties of medical devices. “These devices when implanted are in contact with blood and can readily cause thrombosis or infection if the surface initiates certain biological cascade reactions,” he explains. “Most such reactions begin when specific proteins adsorb on the surface and activate the next stage of the cascade. Optimizing surface properties with the aid of zwitterions can control/inhibit protein adsorption, hence reducing the severity of adverse body reactions.”
The researchers say they will now be evaluating the long-term stability of the zwitterion-polymer coatings and trying to scale up their grafting process. They report their work in Communications Materials and Cell Biomaterials.
The CERN particle-physics lab near Geneva has released plans for the 15bn SwFr (£13bn) Future Circular Collider (FCC) – a huge 91 km circumference machine. The three-volume feasibility study, released on 31 March, calls for the giant accelerator to collide electrons with positrons to study the Higgs boson in unprecedented detail. If built, the FCC would replace the 27 km Large Hadron Collider (LHC), which will come to an end in the early 2040s.
Work on the FCC feasibility study began in 2020 and the report examines the physics objectives, geology, civil engineering, technical infrastructure and territorial and environmental impact. It also looks at the R&D needed for the accelerators and detectors as well as the socioeconomic benefits and cost.
The study, involving some 150 institutes in over 30 countries, took into account some 100 different scenarios for the collider before landing on a ring circumference of 90.7 km that would be built underground at a depth of about 200 m, on average.
The FCC would also contain eight surface sites to access the tunnel with seven in France and one in Switzerland, and four main detectors. “The design is such that there is minimal impact on the surface, but with the best possible physics output,” says FCC study leader Michael Benedikt.
The funding model for the FCC is still a work in progress, but it is estimated that at least two-thirds of the cost of building the FCC-ee will come from CERN’s 24 member states.
Four committees will now review the feasibility study, beginning with CERN’s scientific committee in July. It will then go to a cost-review panel before being reviewed by the CERN council’s scientific and finance committees. In November, the CERN council will then examine the proposal with a decision to go ahead taken in 2028.
If given the green light, construction on the FCC electron–positron machine, dubbed FCC-ee, would begin in 2030 and it would start operations in 2047, a few years after the High Luminosity LHC (HL-LHC) closes down, and run for about 15 years. Its main aim would be to study the Higgs boson with much better precision than the LHC.
To the energy frontier: if built, the FCC-hh would begin operation in 2073 and run to the end of the century (courtesy: PIXELRISE)
The FCC feasibility study then calls for a hadron machine, dubbed FCC-hh, to replace the FCC-ee in the existing 91 km tunnel. It would be a “discovery machine”, smashing together protons at high energy with the aim of creating new particles. If built, the FCC-hh will begin operation in 2073 and run to the end of the century.
The original design energy for the FCC-hh was to reach 100 TeV but that has now been reduced to 85 TeV. That is mostly due to the uncertainty in magnet technology. The HL-LHC will use 12 T superconducting quadrupole magnets made from niobium-tin (Nb3Sn) to squeeze the beams to boost the luminosity.
CERN engineers think it is possible to push the same niobium-tin technology to 14 T, and if such magnets were used for the FCC-hh it would result in a collision centre-of-mass energy of about 85 TeV. “It’s a prudent approach at this stage,” noted Fabiola Gianotti, current CERN director-general, adding that the FCC would be “the most extraordinary instrument ever built.”
The original design called for high-temperature superconducting magnets, such as so-called ReBCO tapes, and CERN is looking into such technology. If it came to fruition on the necessary timescale and were implemented in the FCC-hh, it could push the energy to 120 TeV.
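Both quoted energies are consistent with the standard bending relation for a storage ring, p [TeV] ≈ 0.3 B [T] ρ [km]. The ~70% dipole fill factor and the 20 T HTS field below are assumed round numbers, not figures from the feasibility study:

```python
import math

def cm_energy_tev(b_field_t, circumference_km=90.7, dipole_fill=0.70):
    """Centre-of-mass energy of a two-beam proton ring.
    Uses p [TeV] ~ 0.3 * B [T] * rho [km], with an assumed dipole fill factor."""
    rho_km = dipole_fill * circumference_km / (2 * math.pi)
    return 2 * 0.3 * b_field_t * rho_km  # factor 2: both beams contribute

print(f"14 T Nb3Sn dipoles: ~{cm_energy_tev(14):.0f} TeV")  # ~85 TeV baseline
print(f"20 T HTS dipoles:   ~{cm_energy_tev(20):.0f} TeV")  # ~120 TeV option
```

Only the dipoles that bend the beam count toward ρ, which is why the effective bending radius is noticeably smaller than the 90.7 km circumference divided by 2π.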
China plans
One potential spanner in the works is China’s plans for a very similar machine called the Circular Electron–Positron Collider (CEPC). A decision on the CEPC could come this year with construction beginning in 2027.
Yet officials at CERN are not concerned. They point to the fact that many different colliders have been built by CERN, which has the expertise as well as infrastructure to build such a huge collider. “Even if China goes ahead, I hope the decision is to compete,” says CERN council president Costas Fountas. “Just like Europe did with the LHC when the US started to build the [cancelled] Superconducting Super Collider.”
If the CERN council decides, however, not to go ahead with the FCC, then Gianotti says that other designs to replace the LHC are still on the table such as a linear machine or a demonstrator muon collider.
I was unprepared for the Roger Penrose that I met in The Impossible Man. As a PhD student training in relativity and quantum gravity at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, I once got to sit next to Penrose. Unsure of what to say to the man whose ideas about black-hole singularities are arguably why I took an interest in becoming a physicist, I asked him how he had come up with the idea for the space-time diagrams now known as “Penrose diagrams”.
Penrose explained to me that he simply couldn’t make sense of space-time without them, that was all. He spoke in kind terms, something I wasn’t quite used to. I was more familiar with people reacting as if my questions were stupid or impertinent. What I felt from Penrose – who eventually shared the 2020 Nobel Prize for Physics with Reinhard Genzel and Andrea Ghez for his work on singularities – was humility and generosity.
The Penrose of The Impossible Man isn’t so much humble as oblivious and, in my reading, quite spoiled
In hindsight, I wonder if I overread him, or if, having been around too many brusque theoretical physicists, my bar as a PhD student was simply too low. The Penrose of The Impossible Man isn’t so much humble as oblivious and, in my reading, quite spoiled. As a teenager he was especially good at taking care of his sister and her friends, generous with his time and thoughtfulness. But it ends there.
As we learn in this biography – written by the Canadian journalist Patchen Barss – one of those young friends, Judith Daniels, later became the object of Penrose’s affection when he was a distinguished faculty member at the University of Oxford in his 40s. A significant fraction of the book is centred on Penrose’s relationship with Daniels, with whom he became reacquainted in the early 1970s when she was an undergraduate studying mathematics at John Cass College in London.
At the time Penrose was unhappily married to Joan, an American he’d met in 1958 when he was a postdoc at the University of Cambridge. In Barss’s telling, Penrose essentially forces Daniels into the position of muse. He writes her copious letters explaining his intellectual ideas and communicating his inability to get his work done without replies from her, which he expects to contain critical analyses of his scientific proposals.
The letters are numerous and obsessive, even when her replies are thin and distant. Eventually, Penrose also begins to request something more – affection and even love. He wants a relationship with her. Barss never exactly states that this was against Daniels’s will, but he offers readers sufficient details of her letters to Penrose that it’s hard to draw another conclusion.
Unanswered questions
Barss was able to read her letters because they had been returned to Penrose after Daniels’s death in 2005. Penrose, however, never re-examined any of them until Barss interviewed him for this biography. This raises a lot of questions that remain unanswered by the end of the book. In particular, why did Daniels continue to participate in a correspondence that was eventually thousands of pages long on Penrose’s side?
Judith Daniels was a significant figure in Penrose’s life, yet her death and memory seem to have been unremarkable to him for much of his later life
My theory is that Daniels felt she owed it to this great man of science. She also confesses at one point that she had a childhood crush on him. Her affection was real, even if not romantic; it is as if she was trapped in the dynamic. Penrose’s lack of curiosity about the letters after her death is also strange to me. Daniels was a significant figure in his life, yet her death and memory seem to have been unremarkable to him for much of his later life.
By the mid-1970s, when Daniels was finally able to separate herself from what was – on Penrose’s side – an extramarital emotional affair, Penrose went seeking new muses. They were always female students of mathematics and physics.
Just when it seems like we’ve met the worst of Penrose’s treatment of women, we’re told about his “physical aggression” toward his eventual ex-wife Joan and his partial abandonment of the three sons they had together. This is glossed over very quickly. And it turns out there is even more.
Penrose, like many of his contemporaries, primarily trained male students. Eventually he did take on one woman, Vanessa Thomas, who was a PhD student in his group at Oxford’s Mathematical Institute, where he’d moved in 1972.
Thomas never finished her PhD; Penrose pursued her romantically and that was the end of her doctorate. As scandalous as this is, I didn’t find the fact of the romance especially shocking because it is common enough in physics, even if it is increasingly frowned upon and, in my opinion, generally inappropriate. For better or worse, I can think of other examples of men in physics who fell in love with women advisees.
But in all the cases I know of, the woman has gone on to complete her degree either under his or someone else’s supervision. In these same cases, the age difference was usually about a decade. What happened with Thomas – who married Penrose in 1988 – seems like the worst-case scenario: a 40-year age difference and a budding star of mathematics, reduced to serving her husband’s career. Professional boundaries were not just transgressed, but obliterated.
Barss chooses not to offer much in the way of judgement about the impact that Penrose had on the women in science whom he made into muses and objects of romantic affection. The only exception is Ivette Fuentes, who was already a star theoretical physicist in her own right when Penrose first met her in 2012. Interview snippets with Fuentes reveal that the one time Penrose spoke of her as a muse, she rejected him and their friendship until he apologized.
No woman, it seems, had ever been able to hold him to the fire before. Fuentes does, however, describe how Penrose gave her an intellectual companion, something she’d previously been denied by the way the physics community is structured around insider “families” and pedigrees. It is interesting to read this in the context of Penrose’s own upbringing as landed gentry.
Gilded childhood
An intellectually precocious child growing up in 1930s England, Penrose is handed every resource for his intellectual potential to blossom. When he notices a specific pattern linking addition and multiplication, an older sibling is on hand to show him there’s a general rule from number theory that explains the pattern. The family at this point, we’re told, has a cook and a maid who doubles as a nanny. Even in a community of people from well-resourced backgrounds, Penrose stands out as an especially privileged example.
When the Second World War starts, his family readily secures safe passage to a comfortable home in Canada – a privilege related to their status as welcomed members of Britain’s upper class, and one that was not afforded to many continental European Jewish families at the time (Penrose’s mother was Jewish, and therefore so, by descent, was Penrose). Indeed, Canada admitted the fewest Jewish refugees of any Allied nation and famously denied entry to the St Louis, which was sent back to Europe, where a third of its 937 Jewish passengers were murdered in the Holocaust.
In Ontario, the Penrose children have a relatively idyllic experience. Throughout the rest of his childhood and his adult life, the path was continuously smoothed for Penrose, either by his parents (who bought him multiple homes) or by mentors and colleagues who believed in his genius. One is left wondering how many other people might have had such a distinguished career if, from birth, they had been handed everything on a silver platter and never been required to take responsibility for anything.
To tell these and later stories, Barss relies heavily on interviews with Penrose. Access to one’s subject is tricky for any biographer. While it creates a real opportunity for the author, there is also the challenge of having a relationship with someone whose memories you need to question. Barss doesn’t really interrogate Penrose’s memories but seems to take them as gospel.
During the first half of the book, I wondered repeatedly if The Impossible Man is effectively a memoir told in the third person. Eventually, Barss does allow other voices to tell the story. Ultimately, though, this is firmly a book told from Penrose’s point of view. Even the inclusion of Daniels’s story was at least in part at Penrose’s behest.
I found myself wanting to hear more from the women in Penrose’s life. Penrose often saw himself following a current determined by these women. He came, for example, to believe his first wife had essentially trapped him in their relationship by falling for him.
Penrose never takes responsibility for any of his own actions towards the women in his life. So I wondered: how did they see it? What were their lives like? His ex-wife Joan (who died in 2019) and estranged wife Vanessa, who later became a mathematics teacher, both gave interviews for the book. But we learn little about their perspective on the man whose proclivities and career dominated their own lives.
One day there will be another biography of Penrose that will necessarily have distance from its subject because he will no longer be with us. The Impossible Man will be an important document for any future biographer, containing as it does such a close rendering of Penrose’s perspective on his own life.
The cost of genius
When it comes to describing Penrose’s contributions to mathematics and physics, the science writing, especially in the early pages, sings. Barss has a knack for writing up difficult ideas – whether it’s Penrose’s Nobel-prize-winning work on singularities or his attempt at quantum gravity, twistor theory. Overall, the luxurious prose makes the book highly readable.
Sometimes Barss indulges cosmic flourishes in a way that appears to reinforce Penrose’s perspective that the universe is happening to him rather than one over which he has any influence. In the end, I don’t know if we learn the cost of genius, but we certainly learn the cost of not recognizing that we are a part of the universe that has agency.
The final chapter is really Barss writing about himself and Penrose, and the conversations they have together. Penrose has macular degeneration now, so during a visit to the Perimeter Institute in 2019, Barss reads some of his letters to Judith back to him. Apparently, Penrose becomes quite emotional in a way that, it seems, no-one had ever witnessed – he weeps.
After that, he asks Barss to include the story about Judith. So, on some level, he knows he has erred.
The end of The Impossible Man is devastating. Barss describes how he eventually gains access to two of Penrose’s sons (three with Joan and one with Vanessa). In those interviews, he hears from children who have been traumatized by witnessing what they call “physical aggression” toward their mother. Even so, they both say they’d like to improve their relationship with their father.
Barss then asks a 92-year-old Penrose if he wants to patch things up with his family. His reply: “I feel my life is busy enough and if I get involved with them, it just distracts from other things.” As Barss concludes, Penrose is persistently unwilling to accept that in his life, he has been in the driver’s seat. He has had choices and doesn’t want to take responsibility for that. This, as much as Penrose’s intellectual interests and achievements, is the throughline of the text.
Penrose has shown that he doesn’t really care what others think, as long as he gets what he wants scientifically
The Penrose we meet at the end of The Impossible Man has shown that he doesn’t really care what others think, as long as he gets what he wants scientifically. It’s clear that Barss has a real affection for him, which makes his honesty about the Penrose he finds in the archives all the more remarkable. Perhaps motivated by generosity toward Penrose, Barss also lets the reader do a lot of the analysis.
I wonder, though, how many physicists who are steeped in this culture, and don’t concern themselves with gender equity issues, will miss how bad some of Penrose’s behaviour has been, as his colleagues at the time clearly did. The only documented objections to his behaviour seem more about him going off the deep end with his research into consciousness, cyclic theory and attacks on cosmic inflation.
As I worked on this review, I considered whether a different reviewer would have simply complained that the book has lots of stuff about Penrose’s personal messes that we don’t need to know. Maybe, to other readers, Penrose doesn’t come off quite as badly. For me, I prefer the hero I met in person rather than in the pages of this book. The Impossible Man is an important text, but it’s heartbreaking in the end.
Agrivoltaics is an interdisciplinary research area that lies at the intersection of photovoltaics (PVs) and agriculture. Traditional PV systems used in agricultural settings are made from silicon materials and are opaque. The opaque nature of these solar cells can block sunlight reaching plants and hinder their growth. As such, there’s a need for advanced semi-transparent solar cells that can provide sufficient power but still enable plants to grow instead of casting a shadow over them.
In a recent study headed up at the Institute for Microelectronics and Microsystems (IMM) in Italy, Alessandra Alberti and colleagues investigated the potential of semi-transparent perovskite solar cells as coatings on the roof of a greenhouse housing radicchio seedlings.
Solar cell shading an issue for plant growth
Opaque solar cells are known to induce shade avoidance syndrome in plants. This can cause morphological adaptations, including changes in chlorophyll content and an increased leaf area, as well as a change in the metabolite profile of the plant. Lower UV exposure can also reduce polyphenol content – antioxidant and anti-inflammatory molecules that humans get from plants.
Addressing these issues requires the development of semi-transparent PV panels with high enough efficiencies to be commercially feasible. Some common panels that can be made thin enough to be semi-transparent include organic and dye-sensitized solar cells (DSSCs). While these have been used to provide power while growing tomatoes and lettuces, they typically only have a power conversion efficiency (PCE) of a few percent – a more efficient energy harvester is still required.
A semi-transparent perovskite solar cell greenhouse
Perovskite PVs are seen as the future of the solar cell industry and show a lot of promise in terms of PCE, even if they are not yet up to the level of silicon. Crucially, perovskite PVs can also be made semi-transparent.
Experimental set-up The laboratory-scale greenhouse. (Courtesy: CNR-IMM)
In this latest study, the researchers designed a laboratory-scale greenhouse using a semi-transparent europium (Eu)-enriched CsPbI3 perovskite-coated rooftop and investigated how radicchio seeds grew in the greenhouse for 15 days. They chose this Eu-enriched perovskite composition because CsPbI3 has superior thermal stability compared with other perovskites, making it ideal for long exposures to the Sun’s rays. The addition of Eu into the CsPbI3 structure improved the perovskite stability by minimizing the number of intrinsic defects and increasing the surface-to-volume ratio of perovskite grains.
Alongside this stability, this perovskite contains no volatile components that could effuse under high surface temperatures. It also typically possesses a high PCE – the record for this composition is 21.15%, far higher than organic PVs and DSSCs have achieved, and much closer to commercial feasibility. This perovskite, therefore, offers a good trade-off, achieving a useful PCE while transmitting enough light for the seedlings to grow.
Low light conditions promote seedling growth
Even though the seedlings received less light than they would under natural conditions, the team found that they grew more quickly, and with bigger leaves, than those under glass panels. This is attributed to the perovskite acting as a filter that passes mainly red light, which is known to improve the photosynthetic efficiency and light absorption capabilities of plants, as well as increase the levels of sucrose and hexose within them.
The researchers also found that seedlings grown under these conditions had different gene expression patterns compared with those grown under glass. These expression patterns were associated with environmental stress responses, growth regulation, metabolism and light perception, suggesting that the seedlings naturally adapted to different light conditions – although further research is needed to see whether these adaptations will improve the crop yield.
Overall, perovskite PVs strike a good balance: they can provide enough power to cover the annual energy needs for irrigation, lighting and air conditioning, while still allowing the seedlings to grow – and to grow faster than they would under glass. The team suggests that perovskite solar cells could offer the agricultural sector a potentially affordable route to indoor food production, although the technology must now be tested at much larger scales to establish its commercial feasibility.
The first results from the Dark Energy Spectroscopic Instrument (DESI) are a cosmological bombshell, suggesting that the strength of dark energy has not remained constant throughout history. Instead, it appears to be weakening at the moment, and in the past it seems to have existed in an extreme form known as “phantom” dark energy.
The new findings have the potential to change everything we thought we knew about dark energy, a hypothetical entity that is used to explain the accelerating expansion of the universe.
“The subject needed a bit of a shake-up, and we’re now right on the boundary of seeing a whole new paradigm,” says Ofer Lahav, a cosmologist from University College London and a member of the DESI team.
DESI is mounted on the Nicholas U Mayall four-metre telescope at Kitt Peak National Observatory in Arizona, and has the primary goal of shedding light on the “dark universe”. The term dark universe reflects our ignorance of the nature of about 95% of the mass–energy of the cosmos.
Intrinsic energy density
Today’s favoured Standard Model of cosmology is the lambda–cold dark matter (CDM) model. Lambda refers to the cosmological constant, which Albert Einstein first introduced in 1917 to keep the universe in a steady state by counteracting the effect of gravity. We now know that the universe is expanding at an accelerating rate, so lambda is instead used to quantify this acceleration. It can be interpreted as an intrinsic energy density that drives the expansion. Now, DESI’s findings imply that this energy density is erratic and even more mysterious than previously thought.
DESI is creating a humungous 3D map of the universe. Its first full data release comprises 270 terabytes of data and was made public in March. The data include distance and spectral information about 18.7 million objects, including 12.1 million galaxies and 1.6 million quasars, as well as spectral details of about four million nearby stars.
This is the largest 3D map of the universe ever made, bigger even than all the previous spectroscopic surveys combined. DESI scientists are already working with even more data that will be part of a second public release.
DESI can observe patterns in the cosmos called baryonic acoustic oscillations (BAOs). These were created after the Big Bang, when the universe was filled with a hot plasma of atomic nuclei and electrons. Density waves associated with quantum fluctuations in the Big Bang rippled through this plasma until about 379,000 years after the Big Bang, when the temperature dropped sufficiently for the atomic nuclei to sweep up all the electrons. This froze the plasma density waves into regions of high mass density (where galaxies formed) and low density (intergalactic space). These density fluctuations are the BAOs, and they can be mapped by doing statistical analyses of the separations between pairs of galaxies and quasars.
The BAOs grow as the universe expands, and therefore they provide a “standard ruler” that allows cosmologists to study the expansion of the universe. DESI has observed galaxies and quasars going back 11 billion years in cosmic history.
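As a back-of-the-envelope illustration of the standard-ruler idea, the sketch below computes the angle that the BAO scale subtends on the sky at redshift 1. The parameter values (a ~147 Mpc comoving sound horizon and Planck-like density parameters) are illustrative assumptions, not the DESI collaboration’s fitted cosmology:

```python
import math

# The BAO "standard ruler": the sound-horizon scale r_d (~147 Mpc, comoving)
# imprinted in the early universe subtends an angle theta ≈ r_d / D_M(z),
# where D_M is the comoving distance. Measuring theta at many redshifts
# traces the expansion history. Parameters are illustrative Planck-like values.
H0, OMEGA_M, R_D = 67.4, 0.315, 147.0   # km/s/Mpc, dimensionless, Mpc
C = 299_792.458                          # speed of light, km/s

def comoving_distance_mpc(z, steps=10_000):
    """D_M = c * integral of dz'/H(z') from 0 to z, flat universe (trapezoid rule)."""
    def inv_h(zp):
        return 1.0 / (H0 * math.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M)))
    h = z / steps
    total = 0.5 * (inv_h(0) + inv_h(z))
    total += sum(inv_h(i * h) for i in range(1, steps))
    return C * h * total

theta_deg = math.degrees(R_D / comoving_distance_mpc(1.0))
print(f"BAO angular scale at z = 1: about {theta_deg:.1f} degrees")
```

With these assumed parameters the ruler spans a couple of degrees at z = 1; DESI’s measurement consists of finding this preferred separation statistically in millions of galaxy pairs.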
Density fluctuations DESI observations showing nearby bright galaxies (yellow), luminous red galaxies (orange), emission-line galaxies (blue), and quasars (green). The inset shows the large-scale structure of a small portion of the universe. (Courtesy: Claire Lamman/DESI collaboration)
“What DESI has measured is that the distance [between pairs of galaxies] is smaller than what is predicted,” says team member Willem Elbers of the UK’s University of Durham. “We’re finding that dark energy is weakening, so the acceleration of the expansion of the universe is decreasing.”
As co-chair of DESI’s Cosmological Parameter Estimation Working Group, it is Elbers’ job to test different models of cosmology against the data. The results point to a bizarre form of “phantom” dark energy that boosted the expansion acceleration in the past, but is not present today.
The puzzle is related to dark energy’s equation of state, which describes the ratio of dark energy’s pressure to its energy density. In a universe whose expansion is accelerating, the equation of state has a value less than about –1/3; a value of exactly –1 characterizes the lambda–CDM model.
However, some alternative cosmological models allow the equation of state to be lower than –1. This means that the universe would expand faster than the cosmological constant would have it do. This points to a “phantom” dark energy that grew in strength as the universe expanded, but then petered out.
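A minimal sketch of why values below –1 are called “phantom”: for a constant equation-of-state parameter w, the dark-energy density scales with the cosmic scale factor a as rho ∝ a^(–3(1+w)), so a cosmological constant (w = –1) stays fixed while a phantom component grows as the universe expands. (This toy calculation assumes a constant w, which is precisely what the DESI results call into question.)

```python
# Dark-energy density evolution for a constant equation of state w:
# rho(a) ∝ a**(-3 * (1 + w)), with a the cosmic scale factor (a = 1 today).
def dark_energy_density(a, w):
    """Density relative to today for equation-of-state parameter w."""
    return a ** (-3.0 * (1.0 + w))

for w in (-1.0, -1.2):  # cosmological constant vs an illustrative phantom value
    past = dark_energy_density(0.5, w)   # when the universe was half its size
    print(f"w = {w}: rho(a=0.5)/rho(today) = {past:.3f}")
```

For w = –1 the ratio is exactly 1 (constant density), while for w = –1.2 the density was lower in the past and has grown with the expansion, which is the hallmark of phantom behaviour.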
“It seems that dark energy was ‘phantom’ in the past, but it’s no longer phantom today,” says Elbers. “And that’s interesting because the simplest theories about what dark energy could be do not allow for that kind of behaviour.”
Dark energy takes over
The universe began expanding because of the energy of the Big Bang. For the first few billion years of cosmic history this expansion was slowing, because the universe was smaller and the gravity of all the matter it contains was strong enough to act as a brake. As the universe expanded and its density fell, gravity’s influence waned and dark energy was able to take over. What DESI is telling us is that at the point when dark energy became more influential than matter, it was in its phantom guise.
“This is really weird,” says Lahav; and it gets weirder. The energy density of dark energy reached a peak at a redshift of 0.4, which equates to about 4.5 billion years ago. At that point, dark energy ceased its phantom behaviour and since then the strength of dark energy has been decreasing. The expansion of the universe is still accelerating, but not as rapidly. “Creating a universe that does that, which gets to a peak density and then declines, well, someone’s going to have to work out that model,” says Lahav.
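To see roughly where the “about 4.5 billion years ago” figure for redshift 0.4 comes from, one can evaluate the standard lookback-time integral. The sketch below assumes a flat lambda–CDM cosmology with illustrative Planck-like parameters, not the DESI team’s fitted values:

```python
import math

# Lookback time in a flat LCDM universe: t = integral of dz' / [(1+z') H(z')]
# from 0 to z. H0 = 67.4 km/s/Mpc and Omega_m = 0.315 are illustrative
# Planck-like assumptions.
H0 = 67.4
OMEGA_M = 0.315
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 in Gyr (977.8 converts from km/s/Mpc)

def lookback_time_gyr(z, steps=10_000):
    """Numerically integrate the lookback-time integral (trapezoid rule)."""
    def integrand(zp):
        e = math.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
        return 1.0 / ((1 + zp) * e)
    h = z / steps
    total = 0.5 * (integrand(0) + integrand(z))
    total += sum(integrand(i * h) for i in range(1, steps))
    return HUBBLE_TIME_GYR * h * total

print(f"Lookback time at z = 0.4: {lookback_time_gyr(0.4):.1f} Gyr")
```

With these parameters the integral gives roughly four and a half billion years, consistent with the figure quoted in the text.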
Scalar quantum field
Unlike the unchanging dark-energy density described by the cosmological constant, an alternative concept called quintessence describes dark energy as a scalar quantum field that can take different values at different times and locations.
However, Elbers explains that a single field such as quintessence is incompatible with phantom dark energy. Instead, he says that “there might be multiple fields interacting, which on their own are not phantom but together produce this phantom equation of state,” adding that “the data seem to suggest that it is something more complicated.”
Before cosmology is overturned, however, more data are needed. On its own, the DESI data’s departure from the Standard Model of cosmology has a statistical significance of 1.7σ. This is well below 5σ, which is considered a discovery in cosmology. However, when combined with independent observations of the cosmic microwave background and type Ia supernovae, the significance jumps to 4.2σ.
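For readers unfamiliar with the sigma convention: a significance of n sigma corresponds to a two-sided Gaussian tail probability p = erfc(n/√2), i.e. the chance of the data fluctuating this far from the Standard Model by accident. A quick sketch of the values quoted in the text:

```python
import math

# Convert a significance quoted in "sigma" into a two-sided Gaussian p-value.
def p_value(n_sigma):
    return math.erfc(n_sigma / math.sqrt(2))

for n in (1.7, 4.2, 5.0):   # DESI alone, combined, and the discovery threshold
    print(f"{n} sigma  ->  p = {p_value(n):.1e}")
```

Roughly speaking, 1.7σ is a one-in-eleven chance fluctuation, whereas 4.2σ is closer to one in forty thousand – which is why the combined result is drawing attention even though it falls short of the 5σ discovery threshold.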
“Big rip” avoided
Confirmation of a phantom era and a present-day weakening would mean that dark energy is far more complex than previously thought – deepening the mystery surrounding the expansion of the universe. Indeed, had dark energy continued on its phantom course, it would have caused a “big rip” in which cosmic expansion becomes so extreme that space itself is torn apart.
“Even if dark energy is weakening, the universe will probably keep expanding, but not at an accelerated rate,” says Elbers. “Or it could settle down in a quiescent state, or if it continues to weaken in the future we could get a collapse” into a big crunch. With a form of dark energy that seems to do what it wants as its equation of state changes with time, it’s impossible to say what it will do in the future until cosmologists have more data.
Lahav, however, will wait until 5σ before changing his views on dark energy. “Some of my colleagues have already sold their shares in lambda,” he says. “But I’m not selling them just yet. I’m too cautious.”
The observations are reported in a series of papers on the arXiv server.
At a conference in 2014, bioengineer Jeffrey Fredberg of Harvard University presented pictures of asthma cells. To most people, the images would have been indistinguishable – they all showed tightly packed layers of cells from the airways of people with asthma. But as a physicist, Lisa Manning saw something no one else had spotted; she could tell, just by looking, that some of the model tissues were solid and some were fluid.
Animal tissues must be able to rearrange and flow but also switch to a state where they can withstand mechanical stress. However, whereas solid-liquid transitions are generally associated with a density change, many cellular systems, including asthma cells, can change from rigid to fluid-like at a constant packing density.
Many of a tissue’s properties depend on biochemical processes in its constituent cells, but some collective behaviours can be captured by mathematical models, which is the focus of Manning’s research. At the time, she was working with postdoctoral associate Dapeng Bi on a theory that a tissue’s rigidity depends on the shape of the cells, with cells in a rigid state touching more neighbouring cells than those in a fluid-like one. When she saw the pictures of the asthma cells she knew she was right. “That was a very cool moment,” she says.
Manning – now the William R Kenan, Jr Professor of Physics at Syracuse University in the US – began her research career in theoretical condensed-matter physics, completing a PhD at the University of California, Santa Barbara, in 2008. The thesis was on the mechanical properties of amorphous solids – materials that don’t have long-ranged order like a crystal but are nevertheless rigid. Amorphous solids include many plastics, soils and foods, but towards the end of her graduate studies, Manning started thinking about where else she could apply her work.
I was looking for a project where I could use some of the skills that I had been developing as a graduate student in an orthogonal way
“I was looking for a project where I could use some of the skills that I had been developing as a graduate student in an orthogonal way,” Manning recalls. Inspiration came from a series of talks on tissue dynamics at the Kavli Institute for Theoretical Physics, where she recognized that the theories she had worked on could also apply to biological systems. “I thought it was amazing that you could apply physical principles to those systems,” she says.
The physics of life
Manning has been at Syracuse since completing a postdoc at Princeton University, and although she has many experimental collaborators, she is happy to still be a theorist. Whereas experimentalists in the biological sciences generally specialize in just one or two experimental models, she looks for “commonalities across a wide range of developmental systems”. That principle has led Manning to study everything from cancer to congenital disease and the development of embryos.
“In animal development, pretty universally one of the things that you must do is change from something that’s the shape of a ball of cells into something that is elongated,” says Manning, who is working to understand how this happens. With her collaborator Karen Kasza at Columbia University, she has demonstrated that rather than stretching as a solid, it is more energy-efficient for embryos to change shape by undergoing a phase transition to a fluid, and many of their predictions have been confirmed in fruit-fly embryo models.
More recently, Manning has been looking at how ideas from AI and machine learning can be applied to embryogenesis. Unlike most condensed-matter systems, tissues continuously tune individual interactions between cells, and it’s these localized forces that drive complex shape changes during embryonic development. Together with Andrea Liu of the University of Pennsylvania, Manning is now developing a framework that treats cell–cell interactions like weights in a neural network that can be adjusted to produce a desired outcome.
“I think you really need almost a new type of statistical physics that we don’t have yet to describe systems where you have these individually tunable degrees of freedom,” she says, “as opposed to systems where you have maybe one control parameter, like a temperature or a pressure.”
Developing the next generation
Manning’s transition to biophysics was spurred by an unexpected encounter with scientists outside her field. Between 2019 and 2023, she was director of the Bio-inspired Institute at Syracuse University, which supported similar opportunities for other researchers, including PhD students and postdocs. “As a graduate student, it’s a little easy to get focused on the one project that you know about, in the corner of the universe that your PhD is in,” she says.
As well as supporting science, one of the first things Manning spearheaded at the institute was a professional development programme for early-career researchers. “During our graduate schools, we’re typically mostly trained on how to do the academic stuff,” she says, “and then later in our careers, we’re expected to do a lot of other types of things like manage groups and manage funding.” To support their wider careers, participants in the programme build non-technical skills in areas such as project management, intellectual property and graphic design.
What I realized is that I did have implicit expectations that were based on my culture and background, and that they were distinct from those of some of my students
Manning’s senior role has also brought opportunities to build her own skills, with the COVID-19 pandemic in particular making her reflect and reevaluate how she approached mentorship. One of the appeals of academia is the freedom to explore independent research, but Manning began to see that her fear of micromanaging her students was sometimes creating confusion.
“What I realized is that I did have implicit expectations that were based on my culture and background, and that they were distinct from those of some of my students,” she says. “Because I didn’t name them, I was actually doing my students a disservice.” If she could give advice to her younger self, it would be that the best way to support early-career researchers as equals is to set clear expectations as soon as possible.
When Manning started at Syracuse, most of her students wanted to pursue research in academia, and she would often encourage them to think about other career options, such as working in industry. However, now she thinks academia is perceived as the poorer choice. “Some students have really started to get this idea that academia is too challenging and it’s really hard and not at all great and not rewarding.”
Manning doesn’t want anyone to be put off pursuing their interests, and she feels a responsibility to be outspoken about why she loves her job. For her, the best thing about being a scientist is encapsulated by the moment with the asthma cells: “The thrill of discovering something is a joy,” she says, “being for just a moment, the only person in the world that understands something new.”
Core physics This apple tree at Woolsthorpe Manor is believed to have been the inspiration for Isaac Newton. (Courtesy: Bs0u10e01/CC BY-SA 4.0)
Physicists in the UK have drawn up plans for an International Year of Classical Physics (IYC) in 2027 – exactly three centuries after the death of Isaac Newton. Following successful international years devoted to astronomy (2009), light (2015) and quantum science (2025), they want more recognition for a branch of physics that underpins much of everyday life.
A bright green Flower of Kent apple has now been picked as the official IYC logo in tribute to Newton, who is seen as the “father of classical physics”. Newton, who died in 1727, famously developed our understanding of gravity – one of the fundamental forces of nature – after watching an apple fall from a tree of that variety in his home town of Woolsthorpe, Lincolnshire, in 1666.
“Gravity is central to classical physics and contributes an estimated $270bn to the global economy,” says Crispin McIntosh-Smith, chief classical physicist at the University of Lincoln. “Whether it’s rockets escaping Earth’s pull or skiing down a mountain slope, gravity is loads more important than quantum physics.”
McIntosh-Smith, who also works in cosmology having developed the Cosmic Crisp theory of the universe during his PhD, will now be leading attempts to get endorsement for IYC from the United Nations. He is set to take a 10-strong delegation from Bramley, Surrey, to Paris later this month.
An official gala launch ceremony is being pencilled in for the Travelodge in Grantham, which is the closest hotel to Newton’s birthplace. A parallel scientific workshop will take place in the grounds of Woolsthorpe Manor, with a plenary lecture from TV physicist Brian Cox. Evening entertainment will feature a jazz band.
Numerous outreach events are planned for the year, including the world’s largest demonstration of a wooden block on a ramp balanced by a crate on a pulley. It will involve schoolchildren pouring Golden Delicious apples into the crate to illustrate Newton’s laws of motion. Physicists will also be attempting to break the record for the tallest tower of stacked Braeburn apples.
But there is envy from those behind the 2025 International Year of Quantum Science and Technology. “Of course, classical physics is important but we fear this year will peel attention away from the game-changing impact of quantum physics,” says Anne Oyd from the start-up firm Qrunch, who insists she will only play a cameo role in events. “I believe the impact of classical physics is over-hyped.”
A new artificial intelligence/machine learning method rapidly and accurately characterizes binary neutron star mergers based on the gravitational wave signature they produce. Though the method has not yet been tested on new mergers happening “live”, it could enable astronomers to make quicker estimates of properties such as the location of mergers and the masses of the neutron stars. This information, in turn, could make it possible for telescopes to target and observe the electromagnetic signals that accompany such mergers.
When massive objects such as black holes and neutron stars collide and merge, they emit ripples in spacetime known as gravitational waves (GWs). In 2015 scientists on Earth began observing these ripples using kilometre-scale interferometers that measure the minuscule expansion and contraction of space–time that occurs when a gravitational wave passes through our planet. These interferometers are located in the US, Italy and Japan and are known collectively as the LVK observatories after their initials: the Laser Interferometer GW Observatory (LIGO), the Virgo GW Interferometer (Virgo) and the Kamioka GW Detector (KAGRA).
When two neutron stars in a binary pair merge, they emit electromagnetic waves as well as GWs. While both types of wave travel at the speed of light, certain poorly understood processes that occur within and around the merging pair cause the electromagnetic signal to be slightly delayed. This means that the LVK observatories can detect the GW signal coming from a binary neutron star (BNS) merger seconds, or even minutes, before its electromagnetic counterpart arrives. Being able to identify GWs quickly and accurately therefore increases the chances of detecting other signals from the same event.
This is no easy task, however. GW signals are long and complex, and the main technique currently used to interpret them, Bayesian inference, is slow. While faster alternatives exist, they often make algorithmic approximations that negatively affect their accuracy.
Trained with millions of GW simulations
Physicists led by Maximilian Dax of the Max Planck Institute for Intelligent Systems in Tübingen, Germany, have now developed a machine-learning (ML) framework that accurately characterizes and localizes BNS mergers within a second of a GW being detected, without resorting to such approximations. To do this, they trained a deep neural network with millions of GW simulations.
Once trained, the neural network can take fresh GW data as input and predict corresponding properties of the merging BNSs – for example, their masses, locations and spins – based on its training dataset. Crucially, this neural network output includes a sky map. This map, Dax explains, provides a fast and accurate estimate for where the BNS is located.
The new work built on the group’s previous studies, which used ML systems to analyse GWs from binary black hole (BBH) mergers. “Fast inference is more important for BNS mergers, however,” Dax says, “to allow for quick searches for the aforementioned electromagnetic counterparts, which are not emitted by BBH mergers.”
The researchers, who report their work in Nature, hope their method will help astronomers to observe electromagnetic counterparts for BNS mergers more often and detect them earlier – that is, closer to when the merger occurs. Being able to do this could reveal important information on the underlying processes that occur during these events. “It could also serve as a blueprint for dealing with the increased GW signal duration that we will encounter in the next generation of GW detectors,” Dax says. “This could help address a critical challenge in future GW data analysis.”
So far, the team has focused on data from current GW detectors (LIGO and Virgo) and has only briefly explored next-generation ones. They now plan to apply their method to these new GW detectors in more depth.
Waseem completed his DPhil in physics at the University of Oxford in the UK, where he worked on applied process-relational philosophy and employed string diagrams to study interpretations of quantum theory, constructor theory, wave-based logic, quantum computing and natural language processing. At Oxford, Waseem continues to teach mathematics and physics at Magdalen College, the Mathematical Institute, and the Department of Computer Science.
Waseem has played a key role in organizing the Lahore Science Mela, the largest annual science festival in Pakistan. He also co-founded Spectra, an online magazine dedicated to training popular-science writers in Pakistan. For his work popularizing science he received the 2021 Diana Award, was highly commended at the 2021 SEPnet Public Engagement Awards, and won an impact award in 2024 from Oxford’s Mathematical, Physical and Life Sciences (MPLS) division.
What skills do you use every day in your job?
I’m a theoretical physicist, so if you’re thinking about what I do every day, I use chalk and a blackboard, and maybe a pen and paper. However, for theoretical physics, I believe the most important skill is creativity, and the ability to dream and imagine.
What do you like best and least about your job?
That’s a difficult one because I’ve only been in this job for a few weeks. What I like about my job is the academic freedom and the opportunity to work on both education and research. My role is divided 50/50, so 50% of the time I’m thinking about the structure of natural languages like English and Urdu, and how to use quantum computers for natural language processing. The other half is spent using our diagrammatic formalism called “quantum picturalism” to make quantum physics accessible to everyone in the world. So, I think that’s the best part. On the other hand, when you have a lot of smart people together in the same room or building, there can be interpersonal issues. So, the worst part of my job is dealing with those conflicts.
What do you know today, that you wish you knew when you were starting out in your career?
It’s a cynical view, but I think scientists are not always very rational or fair in their dealings with other people and their work. If I could go back and give myself one piece of advice, it would be that sometimes even rational and smart people make naive mistakes. It’s good to recognize that, at the end of the day, we are all human.
Disabled people in science must be recognized and given better support to help stem the numbers dropping out of science. That is the conclusion of a new report released today by the National Association of Disabled Staff Networks (NADSN). It also calls on funders to stop supporting institutions that have toxic research cultures, and for a change in equality law to recognize the impact of discrimination on disabled people, including those who are neurodivergent.
About 22% of working-age adults in the UK are disabled. Yet it is estimated that only 6.4% of people in science have a disability, falling to just 4% for senior academic positions. What’s more, barely 1% of research grant applications to UK Research and Innovation – the umbrella organization for the UK’s main funding councils – come from researchers who disclose being disabled. Disabled researchers who do win grants receive less than half the funding awarded to their non-disabled counterparts.
NADSN is an umbrella organization for disabled staff networks, with a focus on higher education. It includes the STEMM Action Group, which was founded in 2020 and consists of nine people at universities across the UK who work in science and have lived experience of disability, chronic illness or neurodivergence. The group develops recommendations to funding bodies, learned societies and higher-education institutions to address barriers faced by those who are marginalised due to disability.
In 2021 the group published a “problem statement” that identified issues facing disabled people in science. They range from digital problems, such as the need for accessible fonts in reports and presentations, to physical concerns such as needing access ramps for people in wheelchairs or automatic doors to open heavy fire doors. Other issues include the need for adjustable desks in offices and wheelchair accessible labs.
“Many of these physical issues tend to be afterthoughts in the planning process,” says Francesca Doddato, a physicist from Lancaster University, who co-wrote the latest report. “But at that point they are much harder, and more costly, to implement.”
We need to have this big paradigm shift in terms of how we see disability inclusion
Francesca Doddato
Workplace attitudes and cultures can also be a big problem for disabled people in science, some 62% of whom report having been bullied and harassed compared to 43% of all scientists. “Unfortunately, in research and academia there is generally a toxic culture in which you are expected to be hyper productive, move all over the world, and have a focus on quantity over quality in terms of research output,” says Doddato. “This, coupled with society-wide attitudes towards disabilities, means that many disabled people struggle to get promoted and drop out of science.”
The action group spent the past four years compiling their latest report – Towards a fully inclusive environment for disabled people in STEMM – to present solutions to these issues. They hope it will raise awareness of the inequity and discrimination experienced by disabled people in science and highlight the benefits of having an inclusive environment.
The report identifies three main areas that will have to be reformed to make science fully inclusive for disabled scientists: enabling inclusive cultures and practices; enhancing accessible physical and digital environments; and accessible and proactive funding.
In the short term, it calls on people to recognize the challenges and barriers facing disabled researchers and to improve work-based training for managers. “One of the best things is just being willing to listen and ask what can I do to help?” notes Doddato. “Being an ally is vitally important.”
Doddato says that sharing meeting agendas and documents ahead of time, ensuring that documents are presented in accessible formats, and acknowledging that tasks such as getting around campus can take longer are all useful steps. “All of these little things can really go a long way in shifting those attitudes and being an ally, and those things don’t need policies; people just need to be willing to listen and be willing to change.”
Medium- and long-term goals in the report involve holding organisations responsible for their working-practice policies and stopping the promotion and funding of toxic research cultures. “We hope that the report encourages funding bodies to put pressure on institutions if they are demonstrating toxicity and being discriminatory,” adds Doddato. The report also calls for a change to equality law to recognize the impact of intersectional discrimination, although it admits that this will be a “large undertaking” and will be the subject of a further NADSN report.
Doddato adds that disabled people’s voices need to be heard “loud and clear” as part of any changes. “What we are trying to address with the report is to push universities, research institutions and societies to stop only talking about doing something and actually implement change,” says Doddato. “We need to have a big paradigm shift in terms of how we see disability inclusion. It’s time for change.”
Neutron-activated gold Novel activation imaging technique enables real-time visualization of gold nanoparticles in the body without the use of external tracers. (Courtesy: Nanase Koshikawa from Waseda University)
Gold nanoparticles are promising vehicles for targeted delivery of cancer drugs, offering biocompatibility plus a tendency to accumulate in tumours. To fully exploit their potential, it’s essential to be able to track the movement of these nanoparticles in the body. To date, however, methods for directly visualizing their pharmacokinetics have not yet been established. Aiming to address this shortfall, researchers in Japan are using neutron-activated gold radioisotopes to image nanoparticle distribution in vivo.
The team, headed up by Nanase Koshikawa and Jun Kataoka from Waseda University, are investigating the use of radioactive gold nanoparticles based on 198Au, which they create by irradiating stable gold (197Au) with low-energy neutrons. The radioisotope 198Au has a half-life of 2.7 days and emits 412 keV gamma rays, enabling a technique known as activation imaging.
“Our motivation was to visualize gold nanoparticles without labelling them with tracers,” explains Koshikawa. “Radioactivation allows gold nanoparticles themselves to become detectable from outside the body. We used neutron activation because it does not change the atomic number, ensuring the chemical properties of gold nanoparticles remain unchanged.”
In vivo studies
The researchers – also from Osaka University and Kyoto University – synthesized 198Au-based nanoparticles and injected them into tumours in four mice. They used a hybrid Compton camera (HCC) to detect the emitted 412 keV gamma rays and determine the in vivo nanoparticle distribution, on the day of injection and three and five days later.
The HCC, which incorporates two pixelated scintillators, a scatterer with a central pinhole, and an absorber, can detect radiation with energies from tens of keV to nearly 1 MeV. For X-rays and low-energy gamma rays, the scatterer enables pinhole-mode imaging. For gamma rays over 200 keV, the device functions as a Compton camera.
The researchers reconstructed the 412 keV gamma signals into images, using an energy window of 412±30 keV. With the HCC located 5 cm from the animals’ abdomens, the spatial resolution was 7.9 mm, roughly comparable to the tumour size on the day of injection (7.7 × 11 mm).
In vivo distribution Images of 198Au nanoparticles in the bodies of two mice obtained with the HCC on the day of administration. (Courtesy: CC BY 4.0/Appl. Phys. Lett. 10.1063/5.0251048)
Overlaying the images onto photographs of the mice revealed that the nanoparticles accumulated in both the tumour and liver. In mice 1 and 2, high pixel values were observed primarily in the tumour, while mice 3 and 4 also had high pixel values in the liver region.
After imaging, the mice were euthanized and the team used a gamma counter to measure the radioactivity of each organ. The measured activity concentrations were consistent with the imaging results: mice 1 and 2 had higher nanoparticle concentrations in the tumour than the liver, and mice 3 and 4 had higher concentrations in the liver.
Tracking drug distribution
Next, Koshikawa and colleagues used the 198Au nanoparticles to label astatine-211 (211At), a promising alpha-emitting drug. They note that although 211At emits 79 keV X-rays, allowing in vivo visualization, its short half-life of just 7.2 h precludes its use for long-term tracking of drug pharmacokinetics.
The researchers injected the 211At-labelled nanoparticles into three tumour-bearing mice and used the HCC to simultaneously image 211At and 198Au, on the day of injection and one or two days later. Comparing energy spectra recorded just after injection with those two days later showed that the 211At peak at 79 keV significantly decreased in height owing to its decay, while the 412 keV 198Au peak maintained its height.
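The contrast between the two isotopes’ peaks follows directly from simple exponential decay. A minimal sketch using the half-lives quoted above (the function name is ours, not the researchers’):

```python
def fraction_remaining(t_hours, half_life_hours):
    # N/N0 = 2^(-t / T_half): fraction of a radioisotope left after time t
    return 2 ** (-t_hours / half_life_hours)

t = 48.0  # two days after injection, in hours
at211 = fraction_remaining(t, 7.2)       # astatine-211, half-life 7.2 h
au198 = fraction_remaining(t, 2.7 * 24)  # gold-198, half-life 2.7 days

print(f"211At remaining: {at211:.1%}")  # about 1% -> its 79 keV peak collapses
print(f"198Au remaining: {au198:.0%}")  # about 60% -> the 412 keV peak barely changes
```

This is why the 198Au signal remains usable for long-term tracking while the 211At signal fades within days.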
The team reconstructed images using energy windows of 79±10 and 412±30 keV, for pinhole- and Compton-mode reconstruction, respectively. In these experiments, the HCC was placed 10 cm from the mouse, giving a spatial resolution of 16 mm – larger than the initial tumour size and insufficient to clearly distinguish tumours from small organs. Nevertheless, the researchers point out that the rough distribution of the drug was still observable.
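As a rough consistency check, the spatial resolution of such a camera scales approximately linearly with source distance, which links the two numbers reported above (a back-of-envelope sketch, not the authors’ calibration):

```python
def expected_resolution_mm(res_ref_mm, dist_ref_cm, dist_cm):
    # Assume spatial resolution grows linearly with camera-to-source distance
    return res_ref_mm * dist_cm / dist_ref_cm

# 7.9 mm at 5 cm (tumour-imaging setup) extrapolated to the 10 cm drug study
print(expected_resolution_mm(7.9, 5.0, 10.0))  # 15.8 -> close to the 16 mm reported
```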
On the day of injection, the drug distribution could be visualized using both the 211At and 198Au signals. Two days later, imaging using 211At was no longer possible. In contrast, the distribution of the drug could still be observed via the 412 keV gamma rays.
With further development, the technique may prove suitable for future clinical use. “We assume that the gamma ray exposure dose would be comparable to that of clinical imaging techniques using X-rays or gamma rays, such as SPECT and PET, and that activation imaging is not harmful to humans,” Koshikawa says.
Activation imaging could also be applied to more than just gold nanoparticles. “We are currently working on radioactivation of platinum-based anticancer drugs to enable their visualization from outside the body,” Koshikawa tells Physics World. “Additionally, we are developing new detectors to image radioactive drugs with higher spatial resolution.”
Edinburgh researchers filmed ants and the sequence of movements they make when picking up seeds and other objects. They then used this to build a robot gripper.
The device consists of two aluminium plates that each contain four rows of “hairs” made from thermoplastic polyurethane.
The hairs are 20 mm long and 1 mm in diameter, protruding in a V-shape. This allows the hairs to surround circular objects, which can be particularly difficult to grasp and hold onto using parallel plates.
In tests picking up 30 different household items including a jam jar and shampoo bottle (see video), adding hairs to the gripper increased the prototype’s grasp success rate from 64% to 90%.
The researchers think that such a device could be used in environmental clean-up as well as in construction and agriculture.
Barbara Webb from the University of Edinburgh, who led the research, says the work is “just the first step”.
“Now we can see how [ants’] antennae, front legs and jaws combine to sense, manipulate, grasp and move objects – for instance, we’ve discovered how much ants rely on their front legs to get objects in position,” she adds. “This will inform further development of our technology.”
Researchers at the EMBL in Germany have dramatically reduced the time required to create images using Brillouin microscopy, making it possible to study the viscoelastic properties of biological samples far more quickly and with less damage than ever before. Their new technique can image samples with a field of view of roughly 10 000 pixels at a speed of 0.1 Hz – a 1000-fold improvement in speed and throughput compared to standard confocal techniques.
Mechanical properties such as the elasticity and viscosity of biological cells are closely tied to their function. These properties also play critical roles in processes such as embryo and tissue development and can even dictate how diseases such as cancer evolve. Measuring these properties is therefore important, but it is not easy since most existing techniques to do so are invasive and thus inherently disruptive to the systems being imaged.
Non-destructive, label- and contact-free
In recent years, Brillouin microscopy has emerged as a non-destructive, label- and contact-free optical spectroscopy method for probing the viscoelastic properties of biological samples with high resolution in three dimensions. It relies on Brillouin scattering, which occurs when light interacts with the phonons (or collective vibrational modes) that are present in all matter. This interaction produces two additional peaks, known as Stokes and anti-Stokes Brillouin peaks, in the spectrum of the scattered light. The position of these peaks (the Brillouin shift) and their linewidth (the Brillouin width) are related to the elastic and viscous properties, respectively, of the sample.
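For context, the Brillouin shift follows a standard textbook relation, ν_B = 2nv·sin(θ/2)/λ, where n is the refractive index, v the sound velocity and λ the laser wavelength. A quick sketch with generic values for water in backscattering (illustrative assumptions, not figures from the EMBL study):

```python
import math

def brillouin_shift_hz(n, v_sound_m_s, wavelength_m, theta_deg=180.0):
    # nu_B = 2 n v sin(theta/2) / lambda; theta = 180 deg for backscattering
    return 2 * n * v_sound_m_s * math.sin(math.radians(theta_deg) / 2) / wavelength_m

# Water probed at 532 nm: n ~ 1.33, speed of sound ~ 1500 m/s (typical values)
shift = brillouin_shift_hz(1.33, 1500.0, 532e-9)
print(f"Brillouin shift ~ {shift / 1e9:.1f} GHz")  # a few GHz, as in real experiments
```

The shift sits in the gigahertz range, which is why resolving it demands high-resolution spectrometry and why the per-point signal is so weak.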
The downside is that standard Brillouin microscopy approaches analyse just one point in a sample at a time. Because the scattering signal from a single point is weak, imaging speeds are slow, yielding long light exposure times that can damage photosensitive components within biological cells.
“Light sheet” Brillouin imaging
To overcome this problem, EMBL researchers led by Robert Prevedel began exploring ways to speed up the rate at which Brillouin microscopy can acquire two- and three-dimensional images. In the early days of their project, they were only able to visualize one pixel at a time. With typical measurement times of tens to hundreds of milliseconds for a single data point, it therefore took several minutes, or even hours, to obtain two-dimensional images of 50–250 square pixels.
In 2022, however, they succeeded in expanding the field of view to include an entire spatial line — that is, acquiring image data from more than 100 points in parallel. In their latest work, which they describe in Nature Photonics, they extended the technique further to allow them to view roughly 10 000 pixels in parallel over the full plane of a sample. They then used the new approach to study mechanical changes in live zebrafish larvae.
“This advance enables much faster Brillouin imaging, and in terms of microscopy, allows us to perform ‘light sheet’ Brillouin imaging,” says Prevedel. “In short, we are able to ‘under-sample’ the spectral output, which leads to around 1000 fewer individual measurements than normally needed.”
Towards a more widespread use of Brillouin microscopy
Prevedel and colleagues hope their result will lead to more widespread use of Brillouin microscopy, particularly for photosensitive biological samples. “We wanted to speed up Brillouin imaging to make it a much more useful technique in the life sciences, yet keep overall light dosages low. We succeeded in both aspects,” he tells Physics World.
Looking ahead, the researchers plan to further optimize the design of their approach and merge it with microscopes that enable more robust and straightforward imaging. “We then want to start applying it to various real-world biological structures and so help shed more light on the role mechanical properties play in biological processes,” Prevedel says.
FLASH irradiation, an emerging cancer treatment that delivers radiation at ultrahigh dose rates, has been shown to significantly reduce acute skin toxicity in laboratory mice compared with conventional radiotherapy. Having demonstrated this effect using proton-based FLASH treatments, researchers from Aarhus University in Denmark have now repeated their investigations using electron-based FLASH (eFLASH).
Reporting their findings in Radiotherapy and Oncology, the researchers note a “remarkable similarity” between eFLASH and proton FLASH with respect to acute skin sparing.
Principal investigator Brita Singers Sørensen and colleagues quantified the dose–response modification of eFLASH irradiation for acute skin toxicity and late fibrotic toxicity in mice, using similar experimental designs to those previously employed for their proton FLASH study. This enabled the researchers to make direct quantitative comparisons of acute skin response between electrons and protons. They also compared the effectiveness of the two modalities to determine whether radiobiological differences were observed.
Over four months, the team examined 197 female mice across five irradiation experiments. After being weighed, earmarked and given an ID number, each mouse was randomized to receive either eFLASH irradiation (average dose rate of 233 Gy/s) or conventional electron radiotherapy (average dose rate of 0.162 Gy/s) at various doses.
For the treatment, two unanaesthetized mice (one from each group) were restrained in a jig with their right legs placed in a water bath and irradiated by a horizontal 16 MeV electron beam. The animals were placed on opposite sides of the field centre and irradiated simultaneously, with their legs at a 3.2 cm water-equivalent depth, corresponding to the dose maximum.
The researchers used a diamond detector to measure the absolute dose at the target position in the water bath and assumed that the mouse foot target received the same dose. The resulting foot doses were 19.2–57.6 Gy for eFLASH treatments and 19.4–43.7 Gy for conventional radiotherapy, chosen to cover the entire range of acute skin response.
FLASH confers skin protection
To evaluate the animals’ response to irradiation, the researchers assessed acute skin damage daily from seven to 28 days post-irradiation using an established assay. They weighed the mice weekly, and one of three observers blinded to previous grades and treatment regimens assessed skin toxicity. Photographs were taken whenever possible. Skin damage was also graded using an automated deep-learning model, generating a dose–response curve independent of observer assessments.
The researchers also assessed radiation-induced fibrosis in the leg joint, biweekly from weeks nine to 52 post-irradiation. They defined radiation-induced fibrosis as a permanent reduction of leg extensibility by 75% or more in the irradiated leg compared with the untreated left leg.
To assess the tissue-sparing effect of eFLASH, the researchers used dose–response curves to derive TD50 – the toxic dose eliciting a skin response in 50% of mice. They then determined a dose modification factor (DMF), defined as the ratio of eFLASH TD50 to conventional TD50. A DMF larger than one suggests that eFLASH reduces toxicity.
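The DMF arithmetic is simple enough to sketch; the TD50 values below are hypothetical, chosen only to land in the range the study reports:

```python
def dose_modification_factor(td50_flash_gy, td50_conventional_gy):
    # DMF = TD50(FLASH) / TD50(conventional); DMF > 1 means FLASH spares tissue
    return td50_flash_gy / td50_conventional_gy

# Hypothetical TD50 values (Gy), not the study's fitted numbers
dmf = dose_modification_factor(td50_flash_gy=36.0, td50_conventional_gy=24.0)
print(f"DMF = {dmf:.2f}")  # 1.50 -> 50% more FLASH dose for the same toxicity
```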
The eFLASH treatments had a DMF of 1.45–1.54 – in other words, a 45–54% higher dose was needed to cause comparable skin toxicity to that caused by conventional radiotherapy. “The DMF indicated a considerable acute skin sparing effect of eFLASH irradiation,” the team explain. Radiation-induced fibrosis was also reduced using eFLASH, with a DMF of 1.15.
Reducing skin damage Dose-response curves for acute skin toxicity (left) and fibrotic toxicity (right) for conventional electron radiotherapy and electron FLASH treatments. (Courtesy: CC BY 4.0/adapted from Radiother. Oncol. 10.1016/j.radonc.2025.110796)
For DMF-based equivalent doses, the development of skin toxicity over time was similar for eFLASH and conventional treatments across all dose groups. This supports the hypothesis that eFLASH modifies the dose–response relationship rather than triggering a different biological mechanism. The team also notes that the difference in DMF between the fibrotic response and the acute skin damage suggests that FLASH sparing depends on tissue type and may differ between acute- and late-responding tissues.
Similar skin damage between electrons and protons
Sørensen and colleagues compared their findings to previous studies of normal-tissue damage from proton irradiation, both in the entrance plateau and using the spread-out Bragg peak (SOBP). DMF values for electrons (1.45–1.54) were similar to those of transmission protons (1.44–1.50) and slightly higher than for SOBP protons (1.35–1.40). “Despite dose rate and pulse structure differences, the response to electron irradiation showed substantial similarity to transmission and SOBP damage,” they write.
Although the average eFLASH dose rate (233 Gy/s) was higher than that of the proton studies (80 and 60 Gy/s), it did not appear to influence the biological response. This supports the hypothesis that beyond a certain dose rate threshold, the tissue-sparing effect of FLASH does not increase notably.
The researchers point out that previous studies also found biological similarities in the FLASH effect for electrons and protons, with this latest work adding data on similar comparable and quantifiable effects. They add, however, that “based on the data of this study alone, we cannot say that the biological response is identical, nor that the electron and proton irradiation elicit the same biological mechanisms for DNA damage and repair. This data only suggests a similar biological response in the skin.”
New data from the NOvA experiment at Fermilab in the US contain no evidence for so-called “sterile” neutrinos, in line with results from most – though not all – other neutrino detectors to date. As well as being consistent with previous experiments, the finding aligns with standard theoretical models of neutrino oscillation, in which three active types, or flavours, of neutrino convert into each other. The result also sets more stringent limits on how much an additional sterile type of neutrino could affect the others.
“The global picture on sterile neutrinos is still very murky, with a number of experiments reporting anomalous results that could be attributed to sterile neutrinos on one hand and a number of null results on the other,” says NOvA team member Adam Lister of the University of Wisconsin, Madison, US. “Generally, these anomalous results imply we should see large amounts of sterile-driven neutrino disappearance at NOvA, but this is not consistent with our observations.”
Neutrinos were first proposed in 1930 by Wolfgang Pauli as a way to account for missing energy and spin in the beta decay of nuclei. They were observed in the laboratory in 1956, and we now know that they come in (at least) three flavours: electron, muon and tau. We also know that these three flavours oscillate, changing from one to another as they travel through space, and that this oscillation means they are not massless (as was initially thought).
Significant discrepancies
Over the past few decades, physicists have used underground detectors to probe neutrino oscillation more deeply. A few of these detectors, including the LSND at Los Alamos National Laboratory, BEST in Russia, and Fermilab’s own MiniBooNE, have observed significant discrepancies between the number of neutrinos they detect and the number that mainstream theories predict.
One possible explanation for this excess, which appears in some extensions of the Standard Model of particle physics, is the existence of a fourth flavour of neutrino. Neutrinos of this “sterile” type do not interact with the other flavours via the weak nuclear force. Instead, they interact only via gravity.
Detecting sterile neutrinos would fundamentally change our understanding of particle physics. Indeed, some physicists think sterile neutrinos could be a candidate for dark matter – the mysterious substance that is thought to make up around 85% of the matter in the universe but has so far only made itself known through the gravitational force it exerts.
Near and far detectors
The NOvA experiment uses two liquid scintillator detectors to monitor a stream of neutrinos created by firing protons at a carbon target. The near detector is located at Fermilab, approximately 1 km from the target, while the far detector is 810 km away in northern Minnesota. In the new study, the team measured how many muon-type neutrinos survive the journey through the Earth’s crust from the near detector to the far one. The idea is that if fewer neutrinos survive than the conventional three-flavour oscillation picture predicts, some of them could have oscillated into sterile neutrinos.
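NOvA’s full analysis fits several oscillation channels at once, but the underlying idea can be illustrated with the standard two-flavour survival probability (the parameter values below are generic atmospheric-sector numbers, not NOvA’s fit results):

```python
import math

def muon_neutrino_survival(L_km, E_gev, sin2_2theta, dm2_ev2):
    # Two-flavour approximation: P = 1 - sin^2(2 theta) * sin^2(1.27 dm^2 L / E)
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# 810 km baseline (near to far detector), ~2 GeV beam, generic mixing parameters
p = muon_neutrino_survival(L_km=810.0, E_gev=2.0, sin2_2theta=0.95, dm2_ev2=2.5e-3)
print(f"Expected muon-neutrino survival: {p:.2f}")
```

Any sterile-neutrino admixture would push the observed survival below this three-flavour expectation, which is the deficit NOvA looked for and did not find.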
The experimenters studied two different interactions between neutrinos and normal matter, says team member V Hewes of the University of Cincinnati, US. “We looked for both charged current muon neutrino and neutral current interactions, as a sterile neutrino would manifest differently in each,” Hewes explains. “We then compared our data across those samples in both detectors to simulations of neutrino oscillation models with and without the presence of a sterile neutrino.”
No excess of neutrinos seen
Writing in Physical Review Letters, the researchers state that they found no evidence of neutrinos oscillating into sterile neutrinos. What is more, introducing a fourth, sterile neutrino did not provide better agreement with the data than sticking with the standard model of three active neutrinos.
This result is in line with several previous experiments that looked for sterile neutrinos, including those performed at T2K, Daya Bay, RENO and MINOS+. However, Lister says it places much stricter constraints on active-sterile neutrino mixing than these earlier results. “We are really tightening the net on where sterile neutrinos could live, if they exist,” he tells Physics World.
The NOvA team now hopes to tighten the net further by reducing systematic uncertainties. “To that end, we are developing new data samples that will help us better understand the rate at which neutrinos interact with our detector and the composition of our beam,” says team member Adam Aurisano, also at the University of Cincinnati. “This will help us better distinguish between the potential imprint of sterile neutrinos and more mundane causes of differences between data and prediction.”
NOvA co-spokesperson Patricia Vahle, a physicist at the College of William & Mary in Virginia, US, sums up the results. “Neutrinos are full of surprises, so it is important to check when anomalies show up,” she says. “So far, we don’t see any signs of sterile neutrinos, but we still have some tricks up our sleeve to extend our reach.”
Last week I had the pleasure of attending the Global Physics Summit (GPS) in Anaheim, California, where I rubbed shoulders with 15,000 fellow physicists. The best part of being there was chatting with lots of different people, and in this podcast I share two of those conversations.
First up is Chetan Nayak, who is a senior researcher at Microsoft’s Station Q quantum computing research centre here in California. In February, Nayak and colleagues claimed a breakthrough in the development of topological quantum bits (qubits) based on Majorana zero modes. In principle, such qubits could enable the development of practical quantum computers, but not all physicists were convinced, and the announcement remains controversial – despite further results presented by Nayak in a packed session at the GPS.
I caught up with Nayak after his talk and asked him about the challenges of achieving Microsoft’s goal of a superconductor-based topological qubit. That conversation is the first segment of today’s podcast.
Distinctive jumping technique
Up next, I chat with Atharva Lele about the physics of manu jumping, which is a competitive aquatic sport that originates from the Māori and Pasifika peoples of New Zealand. Jumpers are judged by the height of their splash when they enter the water, and the best competitors use a very distinctive technique.
Lele is an undergraduate student at the Georgia Institute of Technology in the US, and is part of a team that analysed manu techniques in a series of clever experiments that included plunging robots. He explains how to make a winning manu jump while avoiding the pain of a belly flop.
The first direct evidence for auroras on Neptune has been spotted by the James Webb Space Telescope (JWST) and the Hubble Space Telescope.
Auroras happen when energetic particles from the Sun become trapped in a planet’s magnetic field and eventually strike the upper atmosphere, with the energy released creating a signature glow.
Auroral activity has previously been seen on Jupiter, Saturn and Uranus, but not on Neptune, despite hints from a 1989 flyby of the planet by NASA’s Voyager 2.
“Imaging the auroral activity on Neptune was only possible with [the JWST’s] near-infrared sensitivity,” notes Henrik Melin from Northumbria University. “It was so stunning to not just see the auroras, but the detail and clarity of the signature really shocked me.”
The data was taken by JWST’s Near-Infrared Spectrograph as well as Hubble’s Wide Field Camera 3. The cyan regions in the image above represent auroral activity, shown together with white clouds on the multi-hued blue orb of Neptune.
While auroras on Earth occur at the poles, on Neptune they happen elsewhere. This is due to the nature of Neptune’s magnetic field, which is tilted by 47 degrees from the planet’s rotational axis.
As well as the visible imagery, the JWST also detected an emission line from trihydrogen cation (H3+), which can be created in auroras.
Physicists in Germany have found an alternative explanation for an anomaly that had previously been interpreted as potential evidence for a mysterious “dark force”. Originally spotted in ytterbium atoms, the anomaly turns out to have a more mundane cause. However, the investigation, which involved high-precision measurements of shifts in ytterbium’s energy levels and the mass ratios of its isotopes, could help us better understand the structure of heavy atomic nuclei and the physics of neutron stars.
Isotopes are forms of an element that have the same number of protons and electrons, but different numbers of neutrons. These different numbers of neutrons produce shifts in the atom’s electronic energy levels. Measuring these so-called isotope shifts is therefore a way of probing the interactions between electrons and neutrons.
In 2020, a team of physicists at the Massachusetts Institute of Technology (MIT) in the US observed an unexpected deviation in the isotope shift of ytterbium. One possible explanation for this deviation was the existence of a new “dark force” that would interact with both ordinary, visible matter and dark matter via hypothetical new force-carrying particles (bosons).
Although dark matter is thought to make up about 85% of the universe’s total matter, and its presence can be inferred from the way light bends as it travels towards us from distant galaxies, it has never been detected directly. Evidence for a new, fifth force (in addition to the known strong, weak, electromagnetic and gravitational forces) that acts between ordinary and dark matter would therefore be very exciting.
Measuring ytterbium isotope shifts and atomic masses
Mehlstäubler, Blaum and colleagues came to this conclusion after measuring shifts in the atomic energy levels of five different ytterbium isotopes: 168,170,172,174,176Yb. They did this by trapping ions of these isotopes in an ion trap at the PTB and then using an ultrastable laser to drive certain electronic transitions. This allowed them to pin down the frequencies of specific transitions (2S1/2→2D5/2 and 2S1/2→2F7/2) with a precision of 4 × 10−9, the highest to date.
They also measured the atomic masses of the ytterbium isotopes by trapping individual highly charged Yb42+ ions in the cryogenic PENTATRAP Penning-trap mass spectrometer at the MPIK. In the strong magnetic field of this trap, team member and study lead author Menno Door explains, the ions are bound to follow a circular orbit. “We measure the rotational frequency of this orbit by amplifying the minuscule induced current in surrounding electrodes,” he says. “The measured frequencies allowed us to very precisely determine the related mass ratios of the various isotopes with a precision of 4 × 10−12.”
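The frequency-to-mass-ratio step can be sketched in one line: for ions with the same charge in the same magnetic field, ν_c = qB/(2πm), so the mass ratio is the inverse of the cyclotron-frequency ratio (the numbers below are illustrative, not PENTATRAP data):

```python
def mass_ratio(nu_c_heavy, nu_c_light):
    # nu_c = qB / (2*pi*m): same q and B cancel, so m_heavy/m_light = nu_light/nu_heavy
    return nu_c_light / nu_c_heavy

# Illustrative cyclotron frequencies (Hz) for two Yb ions of the same charge state
ratio = mass_ratio(nu_c_heavy=25.30e6, nu_c_light=25.59e6)
print(f"m_heavy / m_light = {ratio:.4f}")  # ~1.0115, i.e. about 1.15% heavier
```

The experimental difficulty lies entirely in measuring those frequencies precisely enough that the ratio is good to parts in 10^12.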
From these data, the researchers were able to extract new parameters that describe how the ytterbium nucleus deforms. To back up their findings, a group at TU Darmstadt led by Achim Schwenk simulated the ytterbium nuclei on large supercomputers, calculating their structure from first principles based on our current understanding of the strong and electromagnetic interactions. “These calculations confirmed that the leading signal we measured was due to the evolving nuclear structure of ytterbium isotopes, not a new fifth force,” says team member Matthias Heinz.
“Our work complements a growing body of research that aims to place constraints on a possible new interaction between electrons and neutrons,” team member Chih-Han Yeh tells Physics World. “In our work, the unprecedented precision of our experiments refined existing constraints.”
The researchers say they would now like to measure other isotopes of ytterbium, including rare isotopes with high or low neutron numbers. “Doing this would allow us to control for uncertain ‘higher-order’ nuclear structure effects and further improve the constraints on possible new physics,” says team member Fiona Kirk.
Door adds that isotope chains of other elements such as calcium, tin and strontium would also be worth investigating. “These studies would allow us to further test our understanding of nuclear structure and neutron-rich matter, and with this understanding allow us to probe for possible new physics again,” he says.
Located about 40 light years from us, the exoplanet Trappist-1 b, which orbits an ultracool dwarf star, has perplexed astronomers with its atmospheric mysteries. Recent observations made by the James Webb Space Telescope (JWST) at two mid-infrared bands (12.8 and 15 µm) suggest that the exoplanet could be either a bare, airless rock like Mercury or a planet shrouded by a hazy carbon dioxide (CO2) atmosphere like Titan.
The research, reported in Nature Astronomy, provides the first thermal emission measurements for Trappist-1 b, suggesting two plausible yet contradictory scenarios. This paradox challenges our current understanding of atmospheric models and highlights the need for further investigation – both theoretical and observational.
Scenario one: airless rock
An international team of astronomers, co-led by Elsa Ducrot and Pierre-Olivier Lagage from the Commissariat aux Énergies Atomiques (CEA) in Paris, France, obtained mid-infrared observations for Trappist-1 b for 10 secondary eclipse measurements (recorded as the exoplanet moves behind the star) using the JWST Mid-Infrared Instrument (MIRI). They recorded emission data at 12.8 and 15 µm and compared the findings with various surface and atmospheric models.
The thermal emission at 15 µm was consistent with Trappist-1 b being a bare rock with almost zero albedo; however, the emission at 12.8 µm refuted this model. At this wavelength, the exoplanet’s measured flux was most consistent with a surface of ultramafic rock – an igneous rock with low silica content – with the model assuming the surface to be geologically unweathered.
Trappist-1 b, the innermost planet in the Trappist-1 system, experiences strong tidal interaction and induction heating from its host star. This could trigger volcanic activity and continuous resurfacing, which could lead to a young surface like that of Jupiter’s volcanic moon Io. The researchers argue that these scenarios support the idea that Trappist-1 b is an airless rocky planet with a young ultramafic surface.
The team next explored atmospheric models for the exoplanet, which unfolded a different story.
Scenario two: haze-rich CO2 atmosphere
Ducrot and colleagues fitted the measured flux data with hazy atmospheric models centred around 15 µm. The results showed that Trappist-1 b could have a thick CO2-rich atmosphere with photochemical haze – but with a twist. In an atmosphere dominated by a strongly absorbing greenhouse gas such as CO2, temperature is expected to increase with increasing pressure (that is, at lower altitudes). Consequently, the team anticipated that the brightness temperature would be lower at 15 µm (which probes high in the atmosphere) than at 12.8 µm. But the observations showed otherwise. They proposed that this discrepancy could be explained by a thermal inversion, in which the upper atmosphere is hotter than the layers below.
In our solar system, Titan’s atmosphere also shows a thermal inversion, caused by heating through haze absorption. Haze is an efficient absorber of stellar radiation, so it can soak up radiation high in the atmosphere, heating the upper layers and cooling those below. Indeed, this model is consistent with the team’s measurements. However, it raises another question: what forms this haze?
Trappist-1 b’s close proximity to Trappist-1 and the strong X-ray and ultraviolet radiation from the host star could create haze in the exoplanet’s atmosphere via photodissociation. While Titan’s hydrocarbon haze arises from photodissociation of methane, the same is not possible for Trappist-1 b as methane and CO2 cannot coexist photochemically and thermodynamically.
One plausible scenario is that the photochemical haze forms due to the presence of hydrogen sulphide (H2S). The volcanic activity in an oxidized, CO2-dominated atmosphere could supply H2S, but it is unlikely that it could sustain the levels needed for the haze. Additionally, as the innermost planet around an active star, Trappist-1 b is subjected to constant space weathering, raising the question of the sustainability of its atmosphere.
The researchers note that although the modelled atmospheric scenario appears less plausible than the airless bare-rock model, more theoretical and experimental work is needed to create a conclusive model.
What is the true nature of Trappist-1 b?
The two plausible surface and atmospheric models for Trappist-1 b provide an enigma. How could a planet be simultaneously an airless, young ultramafic rock and have a haze-filled CO2-rich atmosphere? The resolution might come not from theoretical models but from additional measurements.
Currently, the available data capture only the dayside thermal flux in two infrared bands, which is insufficient to distinguish decisively between an airless surface and a CO2-rich atmosphere. To solve this planetary paradox, astronomers advocate broader spectral coverage and photometric phase-curve measurements, which would reveal the heat-redistribution patterns needed to confirm an atmosphere.
JWST’s observations of Trappist-1 b demonstrate its power to detect thermal emission from exoplanets precisely. However, the contradictory interpretations of the data also highlight its limitations and emphasize the need for higher-resolution spectroscopy. Since two thermal flux measurements alone cannot give a definitive answer, future JWST observations of Trappist-1 b might yet uncover its true nature.
Co-author Michaël Gillon, an astrophysicist at the University of Liège, emphasizes the importance of the results. “The agreement between our two measurements of the planet’s dayside fluxes at 12.8 and 15 microns and a haze-rich CO2-dominated atmosphere is an important finding,” he tells Physics World. “It shows that dayside flux measurements in one or a couple of broadband filters is not enough to fully discriminate airless versus atmosphere models. Additional phase curve and transit transmission data are necessary, even if for the latter, the interpretation of the measurements is complicated by the inhomogeneity of the stellar surface.”
For now, Trappist-1 b hides its secrets, either an airless, barren world scorched by its star or one concealed beneath a thick, hazy CO2 veil.
Last year the UK government placed a new cap of £9535 on annual tuition fees, a figure that will likely rise in the coming years as universities tackle a funding crisis. Indeed, shortfalls are already affecting institutions, with some saying they will run out of money in the next few years. The past couple of months alone have seen several universities announce plans to shed academic staff and even shut departments.
Whether you agree with tuition fees or not, the fact is that students will continue to pay a significant sum for a university education. Value for money is part of the university proposition and lecturers can play a role by conveying the excitement of their chosen field. But what are the key requirements to help do so? In the late 1990s we carried out a study aimed at improving the long-term performance of students who initially struggled with university-level physics.
With funding from the Higher Education Funding Council for Wales, the study involved structured interviews with 28 students and 17 staff. An internal report – The Rough Guide to Lecturing – was written which, while not published, informed the teaching strategy of Cardiff University’s physics department for the next quarter of a century.
From the findings we concluded that lecture courses can be significantly enhanced by simply focusing on three principles, which we dub the three “E”s. The first “E” is enthusiasm. If a lecturer appears bored with the subject – perhaps they have given the same course for many years – why should their students be interested? This might sound obvious, but a bit of reading, or examining the latest research, can do wonders to freshen up a lecture that has been given many times before.
For both old and new courses it is usually possible to highlight at least one current research paper in a semester’s lectures. Students are not going to understand all of the paper, but that is not the point – it is the sharing in contemporary progress that will elicit excitement. Commenting on a nifty experiment in the work, or the elegance of the theory, can help to inspire both teacher and student.
As well as freshening up the lecture course’s content, another tip is to place the subject being taught in its wider context, perhaps by touching on its history or possible exciting applications. Be inventive – we have evidence of a lecturer “live” translating parts of Louis de Broglie’s classic 1925 paper “La relation du quantum et la relativité” during a lecture. It may seem unlikely, but the students responded rather well to that.
Supporting students
The second “E” is engagement. The role of the lecturer as a guide is obvious, but it should also be emphasized that the learner’s desire is to share the lecturer’s passion for, and mastery of, a subject. Styles of lecturing and visual aids can vary greatly between people, but the important thing is to keep students thinking.
Don’t succumb to the apocryphal definition of a lecture as a means of transferring the lecturer’s notes to the students’ pads without passing through the minds of either. In our study, when the students were asked “What do you expect from a lecture?”, they responded simply that they wanted to learn something new – though we might extend this to a desire to learn how to do something new.
Simple demonstrations can be effective for engagement. Large foam dice, for example, can illustrate the non-commutation of 3D rotations. Fidget-spinners in the hands of students can help explain the vector nature of angular momentum. Lecturers should also ask rhetorical questions that make students think, but do not expect or demand answers, particularly in large classes.
More importantly, if a student asks a question, never insult them – there is no such thing as a “stupid” question. After all, what may seem a trivial point could eliminate a major conceptual block for them. If you cannot answer a technical query, admit it and say you will find out for next time – but make sure you do. Indeed, seeing that the lecturer has to work at the subject too can be very encouraging for students.
The final “E” is enablement. Make sure that students have access to supporting material. This could be additional notes; a carefully curated reading list of papers and books; or sets of suitable interesting problems with hints for solutions, worked examples they can follow, and previous exam papers. Explain what amount of self-study will be needed if they are going to benefit from the course.
Have clear and accessible statements concerning the course content and learning outcomes – in particular, what students will be expected to be able to do as a result of their learning. In our study, the general feeling was that a limited amount of continuous assessment (10–20% of the total lecture course mark) encourages both participation and overall achievement, provided students are given good feedback to help them improve.
Next time you are planning to teach a new course, or looking through those decades-old notes, remember enthusiasm, engagement and enablement. It’s not rocket science, but it will certainly help the students learn it.
Orthopaedic implants that bear loads while bones heal, then disappear once they’re no longer needed, could become a reality thanks to a new technique for enhancing the mechanical properties of zinc alloys. Developed by researchers at Monash University in Australia, the technique involves controlling the orientation and size of microscopic grains in these strong yet biodegradable materials.
Implants such as plates and screws provide temporary support for fractured bones until they knit together again. Today, these implants are mainly made from sturdy materials such as stainless steel or titanium that remain in the body permanently. Such materials can, however, cause discomfort and bone loss, and subsequent injuries to the same area risk additional damage if the permanent implants warp or twist.
To address these problems, scientists have developed biodegradable alternatives that dissolve once the bone has healed. These alternatives include screws made from magnesium-based materials such as MgYREZr (trade name MAGNEZIX), MgYZnMn (NOVAMag) and MgCaZn (RESOMET). However, these materials have compressive yield strengths of just 50 to 260 MPa, which is too low to support bones that need to bear a patient’s weight. They also produce hydrogen gas as they degrade, possibly affecting how biological tissues regenerate.
Zinc alloys do not suffer from the hydrogen gas problem. They are biocompatible, dissolving slowly and safely in the body. There is even evidence that Zn2+ ions can help the body heal by stimulating bone formation. But again, their mechanical strength is low: at less than 30 MPa, they are even worse than magnesium in this respect.
Making zinc alloys strong enough for load-bearing orthopaedic implants is not easy. Mechanical strategies such as hot-extruding binary alloys have not helped much. And methods that focus on reducing the materials’ grain size (to hamper effects like dislocation slip) have run up against a discouraging problem: at body temperature (37 °C), ultrafine-grained Zn alloys become mechanically weaker as their so-called “creep resistance” decreases.
Grain size goes bigger
In the new work, a team led by materials scientist and engineer Jian-Feng Nie tried a different approach. By increasing the grain size in Zn alloys rather than decreasing it, the Monash team was able to balance the alloys’ strength and creep resistance – something they say could offer a route to stronger zinc alloys for biodegradable implants.
In compression tests of extruded Zn–0.2 wt% Mg alloy samples with grain sizes of 11 μm, 29 μm and 47 μm, the team measured stress-strain curves that show a markedly higher yield strength for coarse-grained samples than for fine-grained ones. What is more, the compressive yield strengths of these coarser-grained zinc alloys are notably higher than those of MAGNEZIX, NOVAMag and RESOMET biodegradable magnesium alloys. At the upper end, they even rival those of high-strength medical-grade stainless steels.
The researchers attribute this increased compressive yield to a phenomenon called the inverse Hall–Petch effect. This effect comes about because larger grains favour metallurgical effects such as intra-granular pyramidal slip as well as a variation of a well-known metal phenomenon called twinning, in which a specific kind of defect forms when part of the material’s crystal structure flips its orientation. Larger grains also make the alloys more flexible, allowing them to better adapt to surrounding biological tissues. This is the opposite of what happens with smaller grains, which facilitate inter-granular grain boundary sliding and make alloys more rigid.
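The contrast with the classical picture can be made concrete. In the textbook Hall–Petch relation, yield strength scales as σy = σ0 + k·d−1/2, so finer grains should mean a stronger metal – the opposite of the trend measured here. A rough sketch, with σ0 and k assumed purely for illustration (not fitted to the paper's data):

```python
# A back-of-the-envelope look at the classical Hall-Petch relation,
# sigma_y = sigma_0 + k / sqrt(d), which predicts that FINER grains give
# HIGHER yield strength -- the opposite of the trend measured here.
# sigma_0 and k below are assumed for illustration, not fitted to the paper.
sigma_0 = 30.0   # MPa, friction stress (assumed)
k = 0.25         # MPa * m^0.5, Hall-Petch coefficient (assumed)

def hall_petch_mpa(d_um: float) -> float:
    """Classical Hall-Petch yield strength for grain size d in micrometres."""
    d_m = d_um * 1e-6
    return sigma_0 + k * d_m ** -0.5

for d in (11, 29, 47):   # the grain sizes tested in the study
    print(f"d = {d:2d} um -> classical prediction {hall_petch_mpa(d):6.1f} MPa")

# The classical relation says strength should FALL as grains coarsen from
# 11 um to 47 um; the Monash zinc alloys instead get stronger, because
# coarse grains favour pyramidal slip and twinning over boundary sliding.
```

The sketch shows the classical prediction dropping steadily with grain size, which is exactly what the "inverse" behaviour in these zinc alloys overturns.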
The new work, which is detailed in Nature, could aid the development of advanced biodegradable implants for orthopaedics, cardiovascular applications and other devices, says Nie. “With improved biocompatibility, these implants could be safer and do away with the need for removal surgeries, lowering patient risk and healthcare costs,” he tells Physics World. “What is more, new alloys and processing techniques could allow for more personalized treatments by tailoring materials to specific medical needs, ultimately improving patient outcomes.”
The Monash team now aims to improve the composition of the alloys and achieve more control over how they degrade. “Further studies on animals and then clinical trials will test their strength, safety and compatibility with the body,” says Nie. “After that, regulatory approvals will ensure that the biodegradable metals meet medical standards for orthopaedic implants.”
The team is also setting up a start-up company with the goal of developing and commercializing the materials, he adds.
Researchers in China have unveiled a 105-qubit quantum processor that can solve in minutes a quantum computation problem that would take billions of years using the world’s most powerful classical supercomputers. The result sets a new benchmark for claims of so-called “quantum advantage”, though some previous claims have faded after classical algorithms improved.
The fundamental promise of quantum computation is that it will reduce the computational resources required to solve certain problems. More precisely, it promises to reduce the rate at which resource requirements grow as problems become more complex. Evidence that a quantum computer can solve a problem faster than a classical computer – quantum advantage – is therefore a key measure of success.
The first claim of quantum advantage came in 2019, when researchers at Google reported that their 53-qubit Sycamore processor had solved a problem known as random circuit sampling (RCS) in just 200 seconds. Xiaobo Zhu, a physicist at the University of Science and Technology of China (USTC) in Hefei who co-led the latest work, describes RCS as follows: “First, you initialize all the qubits, then you run them through single-qubit and two-qubit gates and finally you read them out,” he says. “Since this process includes every key element of quantum computing, such as initialization, gate operations and readout, unless you have really good fidelity at each step you cannot demonstrate quantum advantage.”
At the time, the Google team claimed that the best supercomputers would take 10,000 years to solve this problem. However, subsequent improvements to classical algorithms reduced this to less than 15 seconds. This pattern has continued ever since, with experimentalists pushing quantum computing forward even as information theorists make quantum advantage harder to achieve by improving the techniques used to simulate quantum algorithms on classical computers.
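A quick way to see why brute-force classical simulation falls behind: a "state-vector" simulation of n qubits must store 2^n complex amplitudes, so memory alone grows exponentially with qubit count. (Practical classical attacks use cleverer tensor-network contractions, which is exactly why the quantum-advantage estimates keep being revised.) A simple sketch:

```python
# Why random circuit sampling gets classically hard: a brute-force
# "state-vector" simulation of n qubits stores 2**n complex amplitudes,
# at 16 bytes each in double precision. Memory alone grows exponentially.
# (Practical classical attacks use cleverer tensor-network contractions,
# which is exactly why quantum-advantage estimates keep being revised.)

def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to hold a full n-qubit state vector."""
    return (2 ** n_qubits) * 16

for n in (30, 53, 105):
    b = statevector_bytes(n)
    print(f"{n:3d} qubits -> {b:.3e} bytes (~{b / 2**40:.3e} TiB)")
```

At 53 qubits the state vector already needs over a hundred petabytes; at 105 qubits the figure is astronomically beyond any conceivable memory, which is why classical competitors must resort to approximate or tensor-network methods rather than direct simulation.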
Recent claims of quantum advantage
In October 2024, Google researchers announced that their 67-qubit Sycamore processor had solved an RCS problem that would take an estimated 3600 years for the Frontier supercomputer at the US’s Oak Ridge National Laboratory to complete. In the latest work, published in Physical Review Letters, Jian-Wei Pan, Zhu and colleagues set the bar even higher. They show that their new Zuchongzhi 3.0 processor can complete in minutes an RCS calculation that they estimate would take Frontier billions of years using the best classical algorithms currently available.
To achieve this, they redesigned the readout circuit of their earlier Zuchongzhi processor to improve its efficiency, modified the structures of the qubits to increase their coherence times and increased the total number of superconducting qubits to 105. “We really upgraded every aspect and some parts of it were redesigned,” Zhu says.
Google’s latest processor, Willow, also uses 105 superconducting qubits, and in December 2024 researchers there announced that they had used it to demonstrate quantum error correction. This achievement, together with complementary advances in Rydberg atom qubits from Harvard University’s Mikhail Lukin and colleagues, was named Physics World’s Breakthrough of the Year in 2024. However, Zhu notes that Google has not yet produced any peer-reviewed research on using Willow for RCS, making it hard to compare the two systems directly.
The USTC team now plans to demonstrate quantum error correction on Zuchongzhi 3.0. This will involve using an error correction code such as the surface code to combine multiple physical qubits into a single “logical qubit” that is robust to errors. “The requirements for error-correction readout are much more difficult than for RCS,” Zhu notes. “RCS only needs one readout, whereas error-correction needs readout many times with very short readout times…Nevertheless, RCS can be a benchmark to show we have the tools to run the surface code. I hope that, in my lab, within a few months we can demonstrate a good-quality error correction code.”
“How progress gets made”
Quantum information theorist Bill Fefferman of the University of Chicago in the US praises the USTC team’s work, describing it as “how progress gets made”. However, he offers two caveats. The first is that recent demonstrations of quantum advantage do not have efficient classical verification schemes – meaning, in effect, that classical computers cannot check the quantum computer’s work. While the USTC researchers simulated a smaller problem on both classical and quantum computers and checked that the answers matched, Fefferman doesn’t think this is sufficient. “With the current experiments, at the moment you can’t simulate it efficiently, the verification doesn’t work anymore,” he says.
The second caveat is that the rigorous hardness arguments proving that the classical computational power needed to solve an RCS problem grows exponentially with the problem’s complexity apply only to situations with no noise. This is far from the case in today’s quantum computers, and Fefferman says this loophole has been exploited in many past quantum advantage experiments.
Still, he is upbeat about the field’s prospects. “The fact that the original estimates the experimentalists gave did not match some future algorithm’s performance is not a failure: I see that as progress on all fronts,” he says. “The theorists are learning more and more about how these systems work and improving their simulation algorithms and, based on that, the experimentalists are making their systems better and better.”
Sometimes, you just have to follow your instincts and let serendipity take care of the rest.
North Ronaldsay, a remote island north of mainland Orkney, has a population of about 50 and a lot of sheep. In the early 19th century, it thrived on the kelp ash industry, producing sodium carbonate (soda ash), potassium salts and iodine for soap and glass making.
But when cheaper alternatives became available, the island turned to its unique breed of seaweed-eating sheep. In 1832 islanders built a 12-mile-long dry stone wall around the island to keep the sheep on the shore, preserving inland pasture for crops.
My connection with North Ronaldsay began last summer when my partner, Sue Bowler, and I volunteered for the island’s Sheep Festival, where teams of like-minded people rebuild sections of the crumbling wall. That experience made us all the more excited when we learned that North Ronaldsay also had a science festival.
This year’s event took place on 14–16 March, and getting there was no small undertaking. From our base in Leeds, the journey involved a 500-mile drive to a ferry, a crossing to the Orkney mainland and, finally, a flight in a light aircraft. With just 50 inhabitants on the island, we had no idea how many people would turn up, but instinct told us it was worth the trip.
Sue, who works for the Royal Astronomical Society (RAS), presented Back to the Moon, while together we ran hands-on maker activities, a geology walk and a trip to the lighthouse, where we explored light beams and Fresnel lenses.
The Yorkshire Branch of the Institute of Physics (IOP) provided laser-cut hoist kits to demonstrate levers and concepts like mechanical advantage, while the RAS shared Connecting the Dots – a modern LED circuit version of a Victorian after-dinner card set illustrating constellations.
Hands-on science Participants get stuck into maker activities at the festival. (Courtesy: @Lazy.Photon on Instagram)
Despite the island’s small size, the festival drew attendees from neighbouring islands, with 56 people participating in person and another 41 joining online. Across multiple events, the total accumulated attendance reached 314.
One thing I’ve always believed in science communication is to listen to your audience and never make assumptions. Orkney has a rich history of radio and maritime communications, shaped in part by the strategic importance of Scapa Flow during the Second World War.
Stars in their eyes Making a constellation board at the North Ronaldsay Science Festival. (Courtesy: @Lazy.Photon on Instagram)
The Orkney Wireless Museum is a testament to this legacy, and one of our festival guests had even reconstructed a working 1930s Baird television receiver for the museum.
Leaving North Ronaldsay was hard. The festival sparked fascinating conversations, and I hope we inspired a few young minds to explore physics and astronomy.
The author would like to thank Alexandra Wright (festival organizer), Lucinda Offer (education, outreach and events officer at the RAS) and Sue Bowler (editor of Astronomy & Geophysics).
I’m standing next to Yang Fugui in front of the High Energy Photon Source (HEPS) in Beijing’s Huairou District, about 50 km north of the centre of the Chinese capital. The HEPS isn’t just another synchrotron light source: when it opens later this year, it will be the world’s most advanced facility of its type. Construction of this giant device started in 2019 and for Yang – a physicist in charge of designing the machine’s beamlines – this is a critical point.
“This machine has many applications, but now is the time to make sure it does new science,” says Yang, who is a research fellow at the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences (CAS), which is building the new machine. With the ring completed, optimizing the beamlines will be vital if the facility is to open up new research areas.
From the air – Google will show you photos – the HEPS looks like a giant magnifying glass lying in a grassy field. But I’ve come by land, and from my perspective it resembles a large and gleaming low-walled silver sports stadium, surrounded by well-kept bushes, flowers and fountains.
I was previously in Beijing in 2019, when ground was broken for the HEPS and the site was literally a green field. Back then, I was told, the HEPS would take six-and-a-half years to build. The project is still on schedule and, if all continues to run as planned, the facility will come online in December 2025.
Lighting up the world
There are more than 50 synchrotron radiation sources around the world, producing intense, coherent beams of electromagnetic radiation used for experiments in everything from condensed-matter physics to biology. Three significant hardware breakthroughs, one after the other, have created natural divisions among synchrotron sources, leading them to be classed by their generation.
Along with Max IV in Sweden, SIRIUS in Brazil and the Extremely Brilliant Source at the European Synchrotron Radiation Facility (ESRF) in France, the HEPS is a fourth-generation source. These days such devices are vital and prestigious pieces of scientific infrastructure, but synchrotron radiation began life as an unexpected nuisance (Phys. Perspect. 10 438).
Classical electrodynamics says that charged particles undergoing acceleration – changing their momentum or velocity – radiate energy tangentially to their trajectories. Early accelerator builders assumed they could ignore the resulting energy losses. But in 1947, scientists building electron synchrotrons at the General Electric (GE) Research Laboratory in Schenectady, New York, were dismayed to find the phenomenon was real, sapping the energies of their devices.
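The scale of the effect follows from classical electrodynamics: the energy an electron radiates per turn grows as the fourth power of its energy. Using the standard approximation U0 ≈ 88.5·E⁴/ρ keV (with E in GeV and bending radius ρ in metres), a quick sketch with illustrative, assumed machine parameters:

```python
# The GE "nuisance" made quantitative: an electron of energy E (in GeV) on
# a circular orbit of bending radius rho (in metres) radiates roughly
# U0 ~ 88.5 * E**4 / rho keV per turn -- the standard classical result.
# The fourth-power dependence on energy is what blindsided early builders.
# Machine parameters below are illustrative assumptions.
def energy_loss_per_turn_kev(E_gev: float, rho_m: float) -> float:
    return 88.5 * E_gev ** 4 / rho_m

# A small 1947-style electron synchrotron (~70 MeV, ~0.3 m radius, assumed)
print(f"70 MeV, rho = 0.3 m: {energy_loss_per_turn_kev(0.07, 0.3):.5f} keV/turn")

# A modern 6 GeV storage ring with an assumed ~100 m bending radius
print(f"6 GeV, rho = 100 m: {energy_loss_per_turn_kev(6.0, 100.0):.0f} keV/turn")
```

With these assumed numbers a 6 GeV machine sheds on the order of an MeV per electron per turn – energy that must be continuously replaced by the accelerating cavities, and that modern facilities harvest as synchrotron light.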
Where it all began Synchrotron light is created whenever charged particles are accelerated. It gets its name because it was first observed in 1947 by scientists at the General Electric Research Laboratory in New York, who saw a bright speck of light through their synchrotron accelerator’s glass vacuum chamber – the visible portion of that energy. (Courtesy: AIP Emilio Segrè Visual Archives, John P. Blewett Collection)
Nuisances of physics, however, have a way of turning into treasured tools. By the early 1950s, scientists were using synchrotron light to study absorption spectra and other phenomena. By the mid-1960s, they were using it to examine the surface structures of materials. But a lot of this work was eclipsed by seemingly much sexier physics.
High-energy particle accelerators, such as CERN’s Proton Synchrotron and Brookhaven’s Alternating Gradient Synchrotron, were regarded as the most exciting, well-funded and biggest instruments in physics. They were the symbols of physics for politicians, press and the public – the machines that studied the fundamental structure of the world.
Researchers who had just discovered the uses of synchrotron light were forced to scavenge parts for their instruments. These “first-generation” synchrotron sources, such as “Tantalus” in Wisconsin, the Stanford Synchrotron Radiation Project in California, and the Cambridge Electron Accelerator in Massachusetts, were cobbled together from discarded pieces of high-energy accelerators or grafted onto them. They were known as “parasites”.
Early adopter A drawing of plans for the Stanford Synchrotron Radiation Project in the US, which became one of the “first generation” of dedicated synchrotron-light sources when it opened in 1974. (Courtesy: SLAC – Zawojski)
In the 1970s, accelerator physicists realized that synchrotron sources could become more useful by shrinking the angular divergence of the electron beam, thereby improving the “brightness”. Renate Chasman and Kenneth Green devised a magnet array to maximize this property. Dubbed the “Chasman–Green lattice”, it begat a second generation of dedicated light sources – built, not borrowed.
Hard on the heels of the UK’s Synchrotron Radiation Source, which opened in 1981, the National Synchrotron Light Source (NSLS I) at Brookhaven was the first second-generation source to use such a lattice. China’s oldest light source, the Beijing Synchrotron Radiation Facility, which opened to users early in 1991, had a Chasman–Green lattice but also had to skim photons off an accelerator; it was a first-generation machine with a second-generation lattice. China’s first fully second-generation machine was the Hefei Light Source, which opened later that year.
By then, instruments called “undulators” were already starting to be incorporated into light sources. They increased brightness hundreds-fold by wiggling the electron beam up and down, causing the light emitted at each wiggle to add coherently. While undulators had been inserted into second-generation sources, the third generation built them in from the start.
Bright thinking Consisting of a periodic array of dipole magnets (red and green blocks), undulators have a static magnetic field that alternates with a wavelength λu. An electron beam passing through the magnets is forced to oscillate, emitting light hundreds of times brighter than would otherwise be possible (orange). Such undulators were added to second-generation synchrotron sources – but third-generation facilities had them built in from the start. (Courtesy: Creative Commons Attribution-Share Alike 3.0 Unported license)
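The wavelength an undulator emits on axis can be estimated from the standard relation λ = (λu/2γ²)(1 + K²/2), where γ is the electron's Lorentz factor and K its dimensionless deflection parameter. A short sketch with typical assumed numbers for a 6 GeV ring (these are illustrative values, not HEPS specifications):

```python
# On-axis fundamental wavelength of undulator radiation:
#   lambda = (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)
# where lambda_u is the magnet period, gamma the electron Lorentz factor
# and K the dimensionless deflection parameter. Values below are typical
# assumed numbers for a 6 GeV ring, not HEPS specifications.
M_E_GEV = 0.000511  # electron rest energy in GeV

def undulator_wavelength_m(E_gev: float, lambda_u_m: float, K: float) -> float:
    gamma = E_gev / M_E_GEV
    return (lambda_u_m / (2 * gamma ** 2)) * (1 + K ** 2 / 2)

lam = undulator_wavelength_m(6.0, 0.020, 1.0)  # 6 GeV, 20 mm period, K = 1
print(f"fundamental wavelength: {lam * 1e10:.2f} angstrom")
```

The 2γ² factor is why a centimetre-scale magnet period in a GeV-class ring yields ångström-scale light: relativistic compression turns a 20 mm wiggle into hard X-rays.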
The first of these third-generation light sources was the ESRF, which opened to users in 1994. It was followed by the Advanced Photon Source (APS) at Argonne National Laboratory in 1996 and SPring-8 in Japan in 1997. The first third-generation source on the Chinese mainland was the Shanghai Synchrotron Radiation Facility, which opened in 2009.
In the 2010s, “multi-bend achromat” magnet lattices drastically shrank the size of beam elements, further increasing brilliance. Several third-generation machines, including the APS, have been upgraded with achromats, turning them into fourth-generation machines. SIRIUS, which has an energy of 3 GeV, was the first fourth-generation machine to be built from scratch.
Next in sequence The Advanced Photon Source at the Argonne National Laboratory in the US, which is a third-generation synchrotron-light source. (Courtesy: Argonne National Laboratory)
Set to operate at 6 GeV, the HEPS will be the first high-energy fourth-generation machine built from scratch. It is a step nearer to the “diffraction limit” ultimately imposed by the way the uncertainty principle restricts the simultaneous specification of certain properties: beyond it, further shrinking of the beam is possible only at the expense of brilliance. That limit is still on the horizon, but the HEPS draws it closer.
The HEPS is being built next to a mountain range north of Beijing, where the bedrock provides a stable platform for the extraordinarily sensitive beams. Next door to the HEPS is a smaller stadium-like building for experimental labs and offices, and a yet smaller building for housing behind that.
Staff at the HEPS successfully stored the machine’s first electron beam in August 2024 and are now enhancing and optimizing parameters such as electron beam current strength and lifetime. When it opens at the end of the year, the HEPS will have 14 beamlines but is designed eventually to have around 90 experimental stations. “Our task right now is to build more beamlines,” Yang told me.
Looking around
After studying physics at the University of Science and Technology of China in Hefei, Yang took his first job as a beamline designer at the HEPS. On my visit, the machine was still more than a year from being operational and the experimental hall surrounding the ring was open. It is spacious, unlike many US light sources I’ve been to, which tend to be cramped thanks to numerous upgrades of the machine and beamlines.
As with any light source, the main feature of the HEPS is its storage ring, which consists of alternating straight sections and bends. At the bends, the electrons shed X-rays like rain off a spinning umbrella. Intense, energetic and finely tunable, the X-rays are carried off down beamlines, where they are put to use in almost everything from materials science to biomedicine.
New science Fourth-generation sources, such as the High Energy Photon Source (HEPS), need to attract academic and business users from home and abroad. But only time will tell what kind of new science might be made possible. (Courtesy: IHEP)
We pass other stations optimized for 2D, 3D and nanoscale structures. Occasionally, a motorized vehicle loaded with equipment whizzes by, or workers pass us on bicycles. Every so often, I see an overhead red banner in Chinese with white lettering. Translating, Yang says the banners promote safety, care and the need for precision in doing high-quality work, signs of the renowned Chinese work ethic.
We then come to what is labelled a “pink” beam. Unlike a “white” beam, which has a broad spread of wavelengths, or a monochromatic beam of a very specific colour such as red, a pink beam has a spread of wavelengths that is neither broad nor narrow. This allows a much higher flux – typically two orders of magnitude more than a monochromatic beam – letting a researcher collect diffraction patterns quickly.
Another beamline, meanwhile, is labelled “tender” because its energy falls between 2 keV (“soft” X-rays) and 10 keV (“hard” X-rays). It’s for materials “somewhere between grilled steak and Jell-O”, one HEPS researcher quips to me, referring to the wobbly American dessert. A tender beam is for purposes that don’t require atomic-scale resolution, such as probing the magnetic behaviour of atoms.
Three beam pipes pass over the experimental hall to end stations that lie outside the building. They will be used, among other things, for applications in nanoscience, with a monochromator throwing out much of the X-ray beam to make it extremely coherent. We also pass a boxy, glass structure that is a clean room for making parts, as well as a straight pipe about 100 m long that will be used to measure the tiny vibrations in the Earth that might affect the precision of the beam.
Challenging times
I once spoke to a director of the NSLS who would begin each day by walking around the facility, seeing what the experimentalists were up to and asking if they needed help. His circuit usually took about 5–10 minutes; my tour with Yang took an hour.
But fourth-generation sources such as the HEPS face two daunting challenges. One is to cultivate a community of global users. Near the HEPS is CAS’s new Yanqi Lake campus, which lies on the other side of the mountains from Beijing and from where I can see the Great Wall meandering through the nearby hills. Faculty and students at CAS will form part of the HEPS’s academic user base, but how will the lab bring in researchers from abroad?
The HEPS will also need to attract users from business, convincing companies of the value of its machine. SPring-8 in Japan has industrial beamlines, including one sponsored by car giant Toyota, while China’s Shanghai machine has beamlines built by the China Petroleum and Chemical Corporation (Sinopec).
Yang is certainly open to collaboration with business partners. “We welcome industries – if they can make full use of the machine, that would be enough,” he says. “If they contribute to building the beamlines, even better.”
The other big challenge for fourth-generation sources is to discover what new things are made possible by the vastly increased flux and brightness. A new generation of improved machines doesn’t necessarily produce breakthrough science; you can’t simply switch on a brighter machine and watch a field of new capabilities unfold before you.
Going fourth The BM18 beamline on the Extremely Brilliant Source (EBS) at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. The EBS is a dedicated fourth-generation light source, with the BM18 beamline being ideal for monitoring very slowly changing systems. (Courtesy: ESRF)
Instead, what can happen is that techniques that are demonstrations or proof-of-concept research in one generation of synchrotron become applied in niche areas in the next, and routine in the generation after that. A good example is speckle spectrometry – an interference-based technique that needs a sufficiently coherent light source – which should become widely used at fourth-generation sources like the HEPS.
For the HEPS, the challenge will be to discover what new research in materials, chemistry, engineering and biomedicine these techniques will make possible. Whenever I ask experimentalists at light sources what kinds of new science the fourth-generation machines will allow, the inevitable answer is something like, “Ask me in 10 years!”
Yang can’t wait that long. “I started my career here,” he says, gesturing excitedly to the machine. “Now is the time – at the beginning – to try to make this machine do new science. If it can, I’ll end my career here!”
Cell separation Illustration of the fabricated optimal acousto-microfluidic chip. (Courtesy: Afshin Kouhkord and Naser Naserifar)
Analysing circulating tumour cells (CTCs) in the blood could help scientists detect cancer in the body. But separating CTCs from blood is a difficult, laborious process and requires large sample volumes.
Researchers at the K N Toosi University of Technology (KNTU) in Tehran, Iran, believe that ultrasonic waves could separate CTCs from red blood cells accurately, energy-efficiently and in real time. The study is published in the journal Physics of Fluids.
“In a broader sense, we asked: ‘How can we design a microfluidic, lab-on-a-chip device powered by SAWs [standing acoustic waves] that remains simple enough for medical experts to use easily, while still delivering precise and efficient cell separation?’,” says senior author Naser Naserifar, an assistant professor in mechanical engineering at KNTU. “We became interested in acoustofluidics because it offers strong, biocompatible forces that effectively handle cells with minimal damage.”
Acoustic waves can deliver enough force to move cells over small distances without damaging them. The researchers used dual pressure acoustic fields at critical positions in a microchannel to separate CTCs from other cells. The CTCs are gathered at an outlet for further analyses, cultures and laboratory procedures.
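The size of that push can be estimated from the standard acoustofluidics result for the radiation force on a small sphere in an acoustic standing wave. This is the generic textbook expression, not the KNTU team's specific model, and all parameter values below are illustrative:

```python
import math

def contrast_factor(kappa_p, kappa_0, rho_p, rho_0):
    # acoustic contrast factor Phi = f1/3 + f2/2 (standard Gor'kov-type result)
    f1 = 1 - kappa_p / kappa_0                               # compressibility term
    f2 = 2 * (rho_p / rho_0 - 1) / (2 * rho_p / rho_0 + 1)   # density term
    return f1 / 3 + f2 / 2

def radiation_force(radius, wavelength, energy_density, phi, z):
    # F = 4*pi * Phi * k * a^3 * E_ac * sin(2kz) on a sphere at position z
    k = 2 * math.pi / wavelength
    return 4 * math.pi * phi * k * radius**3 * energy_density * math.sin(2 * k * z)

# rough, illustrative values for a cell-like particle in water
phi = contrast_factor(kappa_p=3.0e-10, kappa_0=4.5e-10,
                      rho_p=1050.0, rho_0=998.0)
# ~8 um cell, ~2 MHz standing wave (750 um wavelength), modest field energy
F = radiation_force(radius=8e-6, wavelength=750e-6,
                    energy_density=10.0, phi=phi, z=750e-6 / 8)
print(phi, F)  # positive contrast factor: cells collect at pressure nodes
```

A force of tens of piconewtons is enough to steer a cell across a microchannel without damaging it, which is why the technique is considered gentle.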
In the process of designing the chip, the researchers integrated computational modelling, experimental analysis and artificial intelligence (AI) algorithms to analyse acoustofluidic phenomena and generate datasets that predict CTC migration in the body.
“We introduced an acoustofluidic microchannel with two optimized acoustic zones, enabling fast, accurate separation of CTCs from RBCs [red blood cells],” explains Afshin Kouhkord, who performed the work while a master’s student in the Advance Research in Micro And Nano Systems Lab at KNTU. “Despite the added complexity under the hood, the resulting chip is designed for simple operation in a clinical environment.”
So far, the researchers have evaluated the device with numerical simulations and tested it using a physical prototype. Simulations modelled fluid flow, acoustic pressure fields and particle trajectories. The physical prototype was made of lithium niobate, with polystyrene microspheres used as surrogates for red blood cells and CTCs. Results from the prototype agreed with numerical simulations to within 3.5%.
“This innovative approach in laboratory-on-chip technology paves the way for personalized medicine, real-time molecular analysis and point-of-care diagnostics,” Kouhkord and Naserifar write.
The researchers are now refining their design, aiming for a portable device that could be operated with a small battery pack in resource-limited and remote environments.
D-Wave Systems has used quantum annealing to simulate quantum magnetic phase transitions. The company claims that some of its calculations would be beyond the capabilities of the most powerful conventional (classical) computers – an achievement referred to as quantum advantage. This would mark the first time a quantum computer has achieved such a feat for a practical physics problem.
However, the claim has been challenged by two independent groups of researchers in Switzerland and the US, who have published papers on the arXiv preprint server that report that similar calculations could be done using classical computers. D-Wave’s experts believe these classical results fall well short of the company’s own accomplishments, and some independent experts agree with D-Wave.
While most companies trying to build practical quantum computers are developing “universal” or “gate model” quantum systems, US-based D-Wave has principally focused on quantum annealing devices. While such systems are less programmable than gate model systems, the approach has allowed D-Wave to build machines with many more quantum bits (qubits) than any of its competitors. Whereas researchers at Google Quantum AI and in China have recently and independently unveiled 105-qubit universal quantum processors, some of D-Wave’s processors have more than 5000 qubits. Moreover, D-Wave’s systems are already in practical use, with hardware owned by the Japanese mobile phone company NTT Docomo being used to optimize cell tower operations. Systems are also being used for network optimization at motor companies, food producers and elsewhere.
Trevor Lanting, the chief development officer at D-Wave, explains the central principle behind quantum-annealing computation: “You have a network of qubits with programmable couplings and weights between those devices and then you program in a certain configuration – a certain bias on all of the connections in the annealing processor,” he says. The algorithm starts the system in a superposition of all its possible states. As the quantum fluctuations are slowly switched off, the system settles into its most energetically favoured state – which is the desired solution.
Quantum hiking
Lanting compares this to a hiker in the mountains searching for the lowest point in a landscape: “As a classical hiker all you can really do is start going downhill until you get to a minimum,” he explains. “The problem is that, because you’re not doing a global search, you could get stuck in a local valley that isn’t at the minimum elevation.” By starting out in a quantum superposition of all possible states (or locations in the mountains), however, quantum annealing is able to find the global minimum.
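Lanting's hiker picture can be illustrated with a purely classical analogue – simulated annealing – which likewise escapes local valleys by occasionally accepting uphill moves. The landscape f and all parameters below are invented for illustration; this is not D-Wave's quantum algorithm, which relies on superposition and tunnelling rather than thermal kicks:

```python
import math, random

def f(x):
    # toy mountain landscape: a shallow local valley near x = +2.1
    # and a deeper global valley near x = -2.3
    return 0.1 * x**4 - x**2 + 0.5 * x

def greedy_descent(x, step=0.01, iters=10000):
    # the classical hiker: always walk downhill, stop in the first valley
    for _ in range(iters):
        for nxt in (x - step, x + step):
            if f(nxt) < f(x):
                x = nxt
                break
    return x

def simulated_annealing(x, t0=2.0, cooling=0.999, iters=10000, seed=1):
    # occasionally accept uphill moves while the "temperature" t is high,
    # so the walker can climb out of a local valley
    rng = random.Random(seed)
    t, best = t0, x
    for _ in range(iters):
        nxt = x + rng.uniform(-1.0, 1.0)
        if f(nxt) < f(x) or rng.random() < math.exp((f(x) - f(nxt)) / t):
            x = nxt
            if f(x) < f(best):
                best = x
        t *= cooling
    return best

start = 2.5                        # begin on the slope above the local valley
stuck = greedy_descent(start)      # ends in the shallow valley
escaped = simulated_annealing(start)
print(f(stuck), f(escaped))        # annealing reaches a lower point
```

The greedy hiker halts in the first valley it reaches; the annealer, allowed some uphill wandering early on, settles into the deeper one.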
In the new work, researchers at D-Wave and elsewhere set out to show that their machines could use quantum annealing to solve practical physics problems beyond the reach of classical computers. The researchers used two different 1200-qubit processors to model magnetic quantum phase transitions. This is a similar problem to one studied in gate-model systems by researchers at Google and Harvard University in independent work announced in February.
“When water freezes into ice, you can sometimes see patterns in the ice crystal, and this is a result of the dynamics of the phase transition,” explains Andrew King, who is senior distinguished scientist at D-Wave and the lead author of a paper describing the work. “The experiments that we’re demonstrating shed light on a quantum analogue of this phenomenon taking place in a magnetic material that has been programmed into our quantum processors and a phase transition driven by a magnetic field.” Understanding such phase transitions is important in the discovery and design of new magnetic materials.
Quantum versus classical
The researchers studied multiple configurations, comprising ever-more spins arranged in ever-more complex lattice structures. The company says that its system performed the most complex simulation in minutes. They also ascertained how long it would take to do the simulations using several leading classical computation techniques, including neural network methods, and how the time to achieve a solution grew with the complexity of the problem. Based on this, they extrapolated that the most complex lattices would require almost a million years on Frontier, which is one of the world’s most powerful supercomputers.
However, two independent groups – one at EPFL in Switzerland and one at the Flatiron Institute in the US – have posted papers on the arXiv preprint server claiming to have done some of the less complex calculations using classical computers. They argue that their results should scale simply to larger sizes; the implication being that classical computers could solve the more complicated problems addressed by D-Wave.
King has a simple response: “You don’t just need to do the easy simulations, you need to do the hard ones as well, and nobody has demonstrated that.” Lanting adds that “I see this as a healthy back and forth between quantum and classical methods, but I really think that, with these results, we’re pulling ahead of classical methods on the biggest scales we can calculate”.
Very interesting work
Frank Verstraete of the University of Cambridge is unsurprised by some scientists’ scepticism. “D-Wave have historically been the absolute champions at overselling what they did,” he says. “But now it seems they’re doing something nobody else can reproduce, and in that sense it’s very interesting.” He does note, however, that the specific problem chosen is not, in his view, an interesting one from a physics perspective, and has been chosen purely to be difficult for a classical computer.
Daniel Lidar of the University of Southern California, who has previously collaborated with D-Wave on similar problems but was not involved in the current work, says “I do think this is quite the breakthrough…The ability to anneal very fast on the timescales of the coherence times of the qubits has now become possible, and that’s really a game changer here.” He concludes that “the arms race is destined to continue between quantum and classical simulations, and because, in all likelihood, these are problems that are extremely hard classically, I think the quantum win is going to become more and more indisputable.”
Artificial intelligence is transforming physics at an unprecedented pace. In the latest episode of Physics World Stories, host Andrew Glester is joined by three expert guests to explore AI’s impact on discovery, research and the future of the field.
Tony Hey, a physicist who worked with Richard Feynman and Murray Gell-Mann at Caltech in the 1970s, shares his perspective on AI’s role in computation and discovery. A former vice-president of Microsoft Research Connections, he also edited the Feynman Lectures on Computation (Anniversary Edition), a key text on physics and computing.
Caterina Doglioni, a particle physicist at the University of Manchester and part of CERN’s ATLAS collaboration, explains how AI is unlocking new physics at the Large Hadron Collider. She sees big potential but warns against relying too much on AI’s “black box” models without truly understanding nature’s behaviour.
Felice Frankel, a science photographer and MIT research scientist, discusses AI’s promise for visualizing science. However, she is concerned about its potential to manipulate scientific data and imagery – distorting reality. Frankel wrote about the need for an ethical code of conduct for AI in science imagery in this recent Nature essay.
The episode also questions the environmental cost of AI’s vast energy demands. As AI becomes central to physics, should researchers worry about its sustainability? What responsibility do physicists have in managing its impact?
While the physics of uncorking a bottle of champagne has been well documented, less is known about the mechanisms at play when opening a swing-top bottle of beer.
Physicist Max Koch from the University of Göttingen in Germany decided to find out more.
Koch, who is a keen homebrewer, and colleagues used a high-speed camera and a microphone, together with computational fluid-dynamics simulations, to capture what was going on.
When opening a carbonated bottle under pressure, the difference between the gas pressure in the bottleneck and ambient pressure as it opens results in a rapid escape of gas from the bottle, which can reach the speed of sound.
In a champagne bottle, this results in the creation of a Mach disc as well as the classic “pop” sound as it is uncorked.
To investigate the gas dynamics in swing-top bottles, Koch and colleagues examined transparent 0.33 litre bottles containing home-brewed ginger beer under 2–5 bars of pressure.
The team found that the sound emitted when opening the bottles – which can be described as an “ah” sound – wasn’t due to a single shockwave but rather to condensation in the bottleneck forming a standing wave.
“The pop’s frequency is much lower than the resonation if you blow on the full bottle like a whistle,” notes Koch. “This is caused by the sudden expansion of the carbon dioxide and air mixture in the bottle, as well as a strong cooling effect to about minus 50 degrees Celsius, which reduces sound speed.”
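One way to see why cooling lowers the pitch is to treat the headspace and bottleneck as a Helmholtz resonator, whose frequency scales with the sound speed, which in turn scales as the square root of temperature. This is a generic textbook estimate, not the team's simulation; the bottle dimensions and gas properties below are rough guesses:

```python
import math

def speed_of_sound(T_kelvin, gamma=1.3, M=0.040):
    # ideal-gas sound speed c = sqrt(gamma * R * T / M); gamma and molar
    # mass M are rough values assumed for a CO2-rich gas mixture
    R = 8.314
    return math.sqrt(gamma * R * T_kelvin / M)

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length):
    # gas in the bottleneck oscillating above the headspace:
    # f = (c / 2*pi) * sqrt(A / (V * L))
    return (c / (2 * math.pi)) * math.sqrt(neck_area / (cavity_volume * neck_length))

A = math.pi * 0.009**2   # ~18 mm diameter neck
V = 30e-6                # ~30 ml headspace
L = 0.05                 # ~5 cm effective neck length

f_warm = helmholtz_frequency(speed_of_sound(293), A, V, L)   # room temperature
f_cold = helmholtz_frequency(speed_of_sound(223), A, V, L)   # ~ -50 degrees C
print(round(f_warm), round(f_cold))  # the cold gas gives a lower pitch
```

The sudden expansion cools the gas, slowing the sound speed and dropping the resonant frequency, which is consistent with the low "ah" the team recorded.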
The team also investigated the sloshing of the beverage as it is opened. First, dissolved carbon dioxide being released from the beer causes the level of the liquid to rise, while the motion of the bottle as it opens also makes the liquid slosh.
Another effect during opening is the bottle-top hitting the glass with its sharp edge. This triggers further “gushing” in the liquid due to the enhanced formation of bubbles.
There are still some unanswered questions, however, which will require further work. “One thing we didn’t resolve is that our numerical simulations showed an initial strong peak in the acoustic emission before the short ‘ah’ resonance, but this peak was absent in the experimentation,” adds Koch.
Scientists who have been publicly accused of sexual misconduct see a significant and immediate decrease in the rate at which their work is cited, according to a study by behavioural scientists in the US. However, researchers who are publicly accused of scientific misconduct do not suffer the same drop in citations (PLOS One 20 e0317736). Despite their flaws, citation rates are often seen as a marker of impact and quality.
The study was carried out by a team led by Giulia Maimone from the University of California, Los Angeles, who collected data from the Web of Science covering 31,941 scientific publications across 18 disciplines. The team then analysed the citation rates for 5888 papers authored by 30 researchers accused of either sexual or scientific misconduct, the latter including data fabrication, falsification and plagiarism.
Maimone told Physics World that they used strict selection criteria to ensure that the two groups of academics were comparable and that the accusations against them were public. This meant her team only included scholars whose misconduct allegations had been reported in the media and for which there were “detailed accounts of the allegations online”.
Maimone’s team concluded that papers by scientists accused of sexual misconduct experienced a significant drop in citations in the three years after allegations became public compared with a “control” group of academics of a similar professional standing. Those accused of scientific fraud, meanwhile, saw no statistically significant change in the citation rates of their papers.
Further work
To further explore attitudes towards sexual and scientific misconduct, the researchers surveyed 231 non-academics and 240 academics. The non-academics considered sexual misconduct more reprehensible than scientific misconduct and more deserving of punishment, while academics claimed that they would be more likely to continue citing researchers accused of sexual misconduct than those accused of scientific misconduct. “Exactly the opposite of what we observe in the real data,” adds Maimone.
According to the researchers, there are two possible explanations for this discrepancy. One is that academics, according to Maimone, “overestimate their ability to disentangle the scientists from the science”. Another is that scientists are aware that they would not cite sexual harassers, but they are unwilling to admit it because they feel they should take a harsher professional approach towards scientific misconduct.
Maimone says they would now like to explore the longer-term consequences of misconduct as well as the psychological mechanisms behind the citation drop for those accused of sexual misconduct. “Do [academics] simply want to distance themselves from these allegations or are they actively trying to punish these scholars?” she asks.
From the Global Physics Summit in Anaheim, California
Some of the most fascinating people that you meet at American Physical Society meetings are not actually physicists, and Bruce Rosenbaum is no exception. Based in Massachusetts, Rosenbaum is a maker of beautiful steampunk objects and he is in Anaheim with a quantum-related creation (see figure).
At first glance Rosenbaum’s sculpture of a “quantum engine” fits in nicely at a conference exhibition that features gleaming vacuum chambers and other such things. However, this lovely artistic object is meant to be admired, rather than being a functioning machine.
At the centre of the object is a small vacuum chamber that could hold a single trapped ion – which could be operated as a quantum engine. Lasers are pointed at the ion through the chamber windows, and the chamber is surrounded by a spherical structure that represents both the Bloch sphere of quantum physics and an armillary sphere – a device used to demonstrate the motions of celestial objects in the days before computers. But to someone who, many years ago, did some electron spectroscopy, the rings are more reminiscent of the Helmholtz coils that would screen the ion from the Earth’s magnetic field.
I should make it clear that neither the vacuum chamber nor the lasers are real – and there is no trapped ion. However, a real quantum engine based on a trapped ion has been created in a real physics lab. So, in principle, the sculpture could be made into a functional device by using “real components”.
Past and future connections
In my mind, the object symbolizes the connection between the state-of-the-art today (the trapped-ion qubit) and the many technologies that have come before (armillary sphere).
While Rosenbaum does not have a background in physics, I think he has a kinship with the thousands of experimental physicists who have built devices that bear a striking resemblance to this object. But some physicists were involved in the development of this beautiful object. They include Nicole Yunger Halpern of the University of Maryland, a theorist who uses the ideas of quantum information to study thermodynamics. She describes the field as “quantum steampunk” because, like the artistic genre of steampunk, it combines 19th-century concepts (thermodynamics) with the 21st-century concepts of quantum science and technology.
I had a lovely chat with Rosenbaum and he had some very interesting things to say about the intersection of creativity and technology – things that are highly relevant to physicists. I hope to have him and perhaps one of his physicist colleagues on a future episode of the Physics World Weekly podcast.
When physicists got their first insights into the quantum world more than a century ago, they found it puzzling to say the least. But gradually, and through clever theoretical and experimental work, a consistent quantum theory emerged.
Two physicists who played crucial roles in this evolution were Albert Einstein and John Bell. In this episode of the Physics World Weekly podcast, the theoretical crypto-physicist Artur Ekert explains how a quantum paradox identified by Einstein and colleagues in 1935 inspired a profound theoretical breakthrough by Bell three decades later.
Ekert, who splits his time between the University of Oxford and the National University of Singapore, describes how he used Bell’s theorem to create a pioneering quantum cryptography protocol and he also chats about current research in quantum physics and beyond.
The European Space Agency (ESA) has released the first batch of survey data from its €1.4bn Euclid mission. The release includes a preview of its ‘deep field’ observations: in just one week, Euclid spotted 26 million galaxies, as well as many transient phenomena such as supernovae and gamma-ray bursts. The dataset is published along with 27 scientific papers that will be submitted to the journal Astronomy & Astrophysics.
The dataset also features a catalogue of 380 000 galaxies that have been detected by artificial intelligence or “citizen-science” efforts. They include those with spiral arms, central bars and “tidal tails” that indicate merging galaxies.
There has also been the discovery of 500 gravitational-lens candidates thanks to AI and citizen science. Gravitational lensing is when light from more distant galaxies is bent around closer galaxies due to gravity; it can help identify where dark matter is located and what its properties are.
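The angular scale of the effect is set by the lens's Einstein radius, a textbook lensing result in which M is the mass of the foreground galaxy and D_l, D_s and D_ls are the distances to the lens, to the source, and between the two:

```latex
\theta_{\mathrm{E}} = \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{\mathrm{ls}}}{D_{\mathrm{l}}D_{\mathrm{s}}}}
```

Background galaxies lying within roughly this angle of the lens's line of sight are stretched into the arcs and rings that the AI and citizen-science searches look for.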
“For the past decade, my research has been defined by painstakingly analyzing the same 50 strong gravitational lenses, but with the data release, I was handed 500 new strong lenses in under a week” says astronomer James Nightingale from Newcastle University. “It’s a seismic shift — transforming how I do science practically overnight.”
The data released today still represents only 0.4% of the total number of galaxies that Euclid is expected to image over its lifetime. Euclid will capture images of more than 1.5 billion galaxies over six years, sending back around 100 GB of data every day.
Light curves: a collection of gravitational lenses that Euclid captured in its first observations of the deep field areas. (Courtesy: ESA/Euclid/Euclid Consortium/NASA, image processing by M Walmsley, M Huertas-Company, J-C Cuillandre)
“Euclid shows itself once again to be the ultimate discovery machine. It is surveying galaxies on the grandest scale, enabling us to explore our cosmic history and the invisible forces shaping our universe,” says ESA’s science director, Carole Mundell. “With the release of the first data from Euclid’s survey, we are unlocking a treasure trove of information for scientists to dive into and tackle some of the most intriguing questions in modern science”.
More data to come
Euclid was launched in July 2023 and is currently located in a spot in space called Lagrange Point 2 – a gravitational balance point some 1.5 million kilometres beyond the Earth’s orbit around the Sun. The Euclid Consortium comprises some 2600 members from more than 15 countries.
Euclid has a 1.2 m-diameter telescope, a camera and a spectrometer that it uses to plot a 3D map of the distribution of galaxies. The images it takes are about four times as sharp as those from current ground-based telescopes.
Researchers have demonstrated that they can remotely detect radioactive material from 10 m away using short-pulse CO2 lasers – a distance over ten times farther than achieved via previous methods.
Conventional radiation detectors, such as Geiger counters, detect particles that are emitted by the radioactive material, typically limiting their operational range to the material’s direct vicinity. The new method, developed by a research team headed up at the University of Maryland, instead leverages the ionization in the surrounding air, enabling detection from much greater distances.
The study may one day lead to remote sensing technologies that could be used in nuclear disaster response and nuclear security.
Using atmospheric ionization
Radioactive materials emit particles – such as alpha, beta or gamma particles – that can ionize air molecules, creating free electrons and negative ions. These charged particles are typically present at very low concentrations, making them difficult to detect.
Senior author Howard Milchberg and colleagues – also from Brookhaven National Laboratory, Los Alamos National Laboratory and Lawrence Livermore National Laboratory – demonstrated that CO2 lasers could accelerate these charged particles, causing them to collide with neutral gas molecules, in turn creating further ionization. These additional free charges would then undergo the same laser-induced accelerations and collisions, leading to a cascade of charged particles.
This effect, known as “electron avalanche breakdown”, can create microplasmas that scatter laser light. By measuring the profile of the backscattered light, researchers can detect the presence of radioactive material.
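The cascade logic can be sketched with a toy calculation: if each laser-driven collision cycle multiplies the free-electron population by a fixed gain, then air near a radioactive source, which starts with far more seed electrons, reaches a detectable microplasma density in fewer cycles. All numbers here are invented purely for illustration:

```python
import math

def cycles_to_breakdown(n_seed, n_critical, gain=2.0):
    # number of multiplication cycles for the avalanche to reach a critical
    # (microplasma) electron density: smallest k with n_seed * gain**k >= n_critical
    return math.ceil(math.log(n_critical / n_seed, gain))

# illustrative seed densities only: radioactivity ionizes the surrounding
# air, so it holds many more free electrons than ordinary air does
plain_air = cycles_to_breakdown(n_seed=1e3, n_critical=1e19)
near_source = cycles_to_breakdown(n_seed=1e9, n_critical=1e19)
print(plain_air, near_source)  # the seeded air breaks down much sooner
```

Because the growth is exponential, even a modest difference in seed density translates into a clear difference in when and how strongly the air breaks down, which is what the backscattered laser light reveals.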
The team tested their technique using a 3.6-mCi polonium-210 alpha particle source at a standoff distance of 10 m, significantly longer than previous experiments that used different types of lasers and electromagnetic radiation sources.
“The researchers successfully demonstrated 10-m standoff detection of radioactive material, significantly surpassing the previous range of approximately 1 m,” says Choi.
Milchberg and collaborators had previously used a mid-infrared laser in a similar experiment in 2019. Changing to a long-wavelength (9.2 μm) CO2 laser brought significant advantages, he says.
“You can’t use any laser to do this cascading breakdown process,” Milchberg explains. The CO2 laser’s wavelength was able to enhance the avalanche process, while being low energy enough to not create its own ionization sources. “CO2 is sort of the limit for long wavelengths on powerful lasers and it turns out CO2 lasers are very, very efficient as well,” he says. “So this is like a sweet spot.”
Imaging microplasmas
The team also used a CMOS camera to capture visible-light emissions from the microplasmas. Milchberg says that this fluorescence around radioactive sources resembled balls of plasma, indicating the localized regions where electron avalanche breakdowns had occurred.
By counting these “plasma balls” and calibrating them against the backscattered laser signal, the researchers could link fluorescence intensity to the density of ionization in the air, and use that to determine the type of radiation source.
The CMOS imagers, however, had to be placed close to the measured radiation source, reducing their applicability to remote sensing. “Although fluorescence imaging is not practical for field deployment due to the need for close-range cameras, it provides a valuable calibration tool,” Milchberg says.
Scaling to longer distances
The researchers believe their method can be extended to standoff distances exceeding 100 m. The primary limitation is the laser’s focusing geometry, which would affect the regions in which it could trigger an avalanche breakdown. A longer focal length would require a larger laser aperture but could enable kilometre-scale detection.
Choi points out, however, that deploying a CO2 laser may be difficult in real-world applications. “A CO₂ laser is a bulky system, making it challenging to deploy in a portable manner in the field,” she says, adding that mounting the laser for long-range detection may be a solution.
Milchberg says that the next steps will be to continue developing a technique that can differentiate between different types of radioactive sources completely remotely. Choi agrees, noting that accurately quantifying both the amount and type of radioactive material continues to be a significant hurdle to realising remote sensing technologies in the field.
“There’s also the question of environmental conditions,” says Milchberg, explaining that it is critical to ensure that detection techniques are robust against the noise introduced by aerosols or air turbulence.
The Square Kilometre Array (SKA) Observatory has released the first images from its partially built low-frequency telescope in Australia, known as SKA-Low.
The new SKA-Low image was created using 1024 two-metre-high antennas. It shows an area of the sky that would be obscured by a person’s clenched fist held at arm’s length.
Observed at 150 MHz to 175 MHz, the image contains 85 of the brightest known galaxies in that region, each with a black hole at its centre.
“We are demonstrating that the system as a whole is working,” notes SKA Observatory director-general Phil Diamond. “As the telescopes grow, and more stations and dishes come online, we’ll see the images improve in leaps and bounds and start to realise the full power of the SKAO.”
SKA-Low will ultimately have 131 072 two-metre-high antennas that will be clumped together in arrays to act as a single instrument.
These arrays collect the relatively quiet signals from space and combine them to produce radio images of the sky with the aim of answering some of cosmology’s most enigmatic questions, including what dark matter is, how galaxies form, and if there is other life in the universe.
When the full SKA-Low gazes at the same portion of sky as captured in the image released yesterday, it will be able to observe more than 600,000 galaxies.
“The bright galaxies we can see in this image are just the tip of the iceberg,” says George Heald, lead commissioning scientist for SKA-Low. “With the full telescope we will have the sensitivity to reveal the faintest and most distant galaxies, back to the early universe when the first stars and galaxies started to form.”
‘Milestone’ achieved
SKA-Low is one of two telescopes under construction by the observatory. The other, SKA-Mid, which observes in the mid-frequency range, will include 197 three-storey dishes and is being built in South Africa.
The telescopes, with a combined price tag of £1bn, are projected to begin making science observations in 2028. They are being funded through a consortium of member states, including China, Germany and the UK.
University of Cambridge astrophysicist Eloy de Lera Acedo, who is principal investigator at his institution for the observatory’s science data processor, says the first image from SKA-Low is an “important milestone” for the project.
“It is worth remembering that these images now require a lot of work, and a lot more data to be captured with the telescope as it builds up, to reach the science quality level we all expect and hope for,” he adds.
Rob Fender, an astrophysicist at the University of Oxford, who is not directly involved in the SKA Observatory, says that the first image “hints at the enormous potential” for the array that will eventually “provide humanity’s deepest ever view of the universe at wavelengths longer than a metre”.
“I could have sworn I put it somewhere safe,” is something we’ve all said when looking for our keys, but the frustration of searching for lost objects is also a common, and very costly, headache for civil engineers. The few metres of earth under our feet are a tangle of pipes and cables that provide water, electricity, broadband and waste disposal. However, once this infrastructure is buried, it’s often difficult to locate it again.
“We damage pipes and cables in the ground roughly 60,000 times a year, which costs the country about 2.4 billion pounds,” explains Nicole Metje, a civil engineer at the University of Birmingham in the UK. “The ground is such a high risk, but also such a significant opportunity.”
The standard procedure for imaging the subsurface is to use electromagnetic waves. This is done either with ground penetrating radar (GPR), where the signal reflects off interfaces between objects in the ground, or with locators that use electromagnetic induction to find objects. Though they are stalwarts of the civil engineering toolbox, the performance of both these techniques is limited by many factors, including the soil type and moisture.
Physics at work Damage to underground infrastructure costs millions of pounds a year in the UK alone. That’s why there is a need to develop new methods to image the subsurface that don’t require holes to be dug or rely on electromagnetic pulses whose penetration depth is highly variable. (Courtesy: iStock/mikeuk)
Metje and her team in Birmingham have participated in several research projects improving subsurface mapping. But her career took an unexpected turn in 2009 when one of her colleagues was contacted out of the blue by Kai Bongs – a researcher in the Birmingham school of physics. Bongs, who became the director of the Institute for Quantum Technologies at the German Aerospace Centre (DLR) in 2023, explained that his group was building quantum devices to sense tiny changes in gravity and thought this might be just what the civil engineers needed. However, there was a problem. The device required a high-stability, low-noise environment – rarely compatible with the location of engineering surveys. But as Bongs spoke to more engineers he became more interested. “I understood why tunnels and sewers are very interesting,” he says, and saw an opportunity to “do something really meaningful and impactful”.
What lies beneath
Although most physicists are happy to treat g, the acceleration due to gravity, as 9.81 m/s², it actually varies across the surface of Earth. Changes in g indicate the presence of buried objects and varying soil composition and can even signal the movement of tectonic plates and oceans. The engineers in Birmingham were well aware of this; classical devices that measure changes in gravity using the extension of springs are already used in engineering surveys, though they aren’t as widely adopted as electromagnetic signals. These machines – called gravimeters – don’t require holes to be dug and the measurement isn’t limited by soil conditions, but changes in the properties of the spring over time cause drift, requiring frequent recalibration.
More sensitive devices have been developed that use a levitating superconducting sphere. These devices have been used for long-term monitoring of geophysical phenomena such as tides, volcanos and seismic activity, but they are less appropriate for engineering surveys where speed and portability are of the essence.
The perfect test mass would be a single atom – it has no moving mechanical parts, can be swapped out for any of the same isotope, and its mass will never change. “Today or tomorrow or in 100 years’ time, it’ll be exactly the same,” says physicist Michael Holynski, the principal investigator of the UK Quantum Technology Hub for Sensors and Timing led by the University of Birmingham.
Falling atoms
The gravity sensing project in Birmingham uses a technique called cold-atom interferometry, first demonstrated in 1991 by Steven Chu and Mark Kasevich at Stanford University in the US (Phys. Rev. Lett. 67 181). In the cold-atom interferometer, two atomic test masses fall from different heights, and g is calculated by comparing their displacement in a given time.
Because it’s a quantum object, a single atom can act as both test masses at once. To do this, the interferometer uses three laser pulses that send the atom on two trajectories. First, a laser pulse puts the atom in a superposition of two states, where one state gets a momentum “kick” and recoils away from the other. This means that when the atom is allowed to freefall, the state nearest the centre of the Earth accelerates faster. Halfway through the freefall, a second laser pulse then switches the state with the momentum kick. The two states start to catch up with each other, both still falling under gravity.
Finally, another laser pulse, identical to the first, is applied. If the acceleration due to gravity were constant everywhere in space, the two states would fall exactly the same distance and overlap at the end of the sequence. In this case, the final pulse would effectively reverse the first, and the atom would end up back in the ground state. However, because in the real world the atom’s acceleration changes as it falls through the gravity gradient, the two states don’t quite find each other at the end. Since the atom is wavelike, this spatial separation is equivalent to a phase difference. Now, the outcome of the final laser pulse is less certain; sometimes it will return the atom to the ground state, but sometimes it will collapse the wavefunction to the excited state instead.
If a cloud of millions of atoms is dropped at once, the proportion that finishes in each state (which is measured by making the atoms fluoresce) can be used to calculate the phase difference, which is proportional to the atom’s average gravitational acceleration.
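The phase readout described above can be sketched in a few lines of code. In the standard textbook treatment of a light-pulse atom interferometer (a general relation, not the Birmingham team’s specific analysis), the accumulated phase is Δφ = k_eff g T², where k_eff is the effective two-photon wavevector and T the time between pulses. All parameter values below are illustrative assumptions.

```python
# Illustrative sketch of the interferometer phase relation
# dphi = k_eff * g * T^2 (all numbers are assumed, for illustration).
import math

WAVELENGTH = 780e-9                   # rubidium-87 resonance (m)
K_EFF = 2 * 2 * math.pi / WAVELENGTH  # effective two-photon wavevector (rad/m)
G = 9.81                              # local gravitational acceleration (m/s^2)
T = 0.1                               # time between interferometry pulses (s)

# Phase difference accumulated between the two atomic trajectories
dphi = K_EFF * G * T**2

# Fraction of atoms measured in the excited state after the final pulse
p_excited = 0.5 * (1 - math.cos(dphi))

# Change in g corresponding to a 1 mrad phase resolution
dg = 1e-3 / (K_EFF * T**2)

print(f"phase shift: {dphi:.3e} rad")
print(f"excited-state fraction: {p_excited:.3f}")
print(f"g resolution at 1 mrad: {dg:.2e} m/s^2")
```

With these assumed numbers the phase shift is of order 10⁶ radians, which is why even a milliradian of phase resolution translates into a few parts in 10⁹ in g.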
To measure these phase shifts, the thermal noise of the atoms must be minimized. This can be achieved using a magneto-optical trap and laser cooling, a technique pioneered by Chu, in which spatially varying magnetic fields and lasers trap atoms and cool them close to absolute zero. Chu, along with William D Phillips and Claude Cohen-Tannoudji, was awarded the 1997 Nobel Prize in Physics for his work on laser cooling.
Bad vibrations
Unlike the spring or the superconducting gravimeter, the cold-atom device produces an absolute rather than a relative measurement of g. In their first demonstration, Chu and Kasevich measured the acceleration due to gravity to three parts in 100 million. This was about a million times better than previous attempts with single atoms, but it trailed behind the best absolute measurements, which were made using a macroscopic object in free fall.
“It’s always one thing to do the first demonstration of principle, and then it’s a different thing to really get it to a performance level where it actually is useful and competitive,” says Achim Peters, who started a PhD with Chu in 1992 and is now a researcher at the Humboldt University of Berlin.
Whether spring or quantum-based, gravimeters share the same major source of noise – vibrations. Although we don’t feel it, the ground, which is the test mass’s reference frame, is never completely still. According to the Einstein equivalence principle, we can’t differentiate the acceleration due to these vibrations from the acceleration of the test mass due to gravity.
When Peters was at Stanford he built a sophisticated vibration isolation system where the extension of mechanical springs was controlled by electronic feedback. This brought the quantum device in line with other state-of-the-art measurement techniques, but such a complex apparatus would be difficult to operate outside a laboratory.
However, if a cold-atom gravity sensor could operate outside without being hampered by vibrations it would have an instant advantage over spring devices, where vibrations have to be averaged out by taking longer measurements. “If we want to measure several hectares, you’re talking about three weeks or plus [with spring gravimeters],” explains Metje. “That takes a lot of time and therefore also a lot of cost.”
Enter the gravity gradiometer
A few years after Chu and Kasevich published the first cold-atom interferometer result, the US Navy declassified a technology that had been developed by Bell Aerospace (later acquired by Lockheed Martin) for submarines and which transformed the field of geophysics. This device – called a gravity gradiometer – calculated the gravity gradient by measuring the acceleration of several spinning discs. As well as finding objects, gravity can identify a geographical location, meaning that gravity sensors have applications in GPS-free navigation. Compared to gravimeters, a gradiometer is more sensitive to nearby objects and when the gravity gradiometer was declassified it was seized upon for use in oil and gas exploration. The Lockheed Martin device remains the industry standard – it measures gravity gradient in three dimensions and its sophisticated vibration-isolation system means it can be used in the field, including in airborne surveys – but it is prohibitively costly for most researchers.
In 1998 Kasevich’s group demonstrated a gradiometer built from two cold-atom interferometers stacked one above the other, where the difference between the phases on the atom clouds was used to calculate the gravity gradient (Phys. Rev. Lett. 81 971). In this configuration, the interferometry pulses illuminating the two clouds come from the same laser beams, which means that the vibrations that had previously required a complex damping system are cancelled out. In the laboratory, cold-atom gravity gradiometers have many applications in fundamental physics – they have been used to test the Einstein equivalence principle to one part in a trillion, and a 100 m tall interferometer is currently under construction at Fermilab, where it will be used to hunt for gravitational waves.
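The common-mode vibration rejection that makes this two-cloud configuration attractive can be illustrated with a toy simulation. Because both clouds see the same laser beams, platform vibrations add the same phase noise to both interferometers and cancel in the difference. The wavevector, timing, baseline, gradient and noise level below are all assumed values, not figures from the paper.

```python
# Toy model of common-mode rejection in a two-cloud gravity gradiometer
# (illustrative only; all parameters are assumed).
import math
import random

random.seed(1)

K_EFF = 2 * 2 * math.pi / 780e-9  # effective wavevector (rad/m), assumed
T = 0.1                           # pulse separation (s), assumed
BASELINE = 1.0                    # vertical separation of the clouds (m), assumed

g_bottom = 9.81
g_top = g_bottom - 3.1e-6 * BASELINE  # assumed vertical gravity gradient (s^-2)

diffs = []
for _ in range(1000):
    vib = random.gauss(0.0, 50.0)  # vibration phase noise, identical for both clouds
    phi_bottom = K_EFF * g_bottom * T**2 + vib
    phi_top = K_EFF * g_top * T**2 + vib
    diffs.append(phi_bottom - phi_top)  # the vibration term cancels here

mean_diff = sum(diffs) / len(diffs)
gradient = mean_diff / (K_EFF * T**2 * BASELINE)
print(f"recovered gradient: {gradient:.3e} s^-2")
```

Even with phase noise far larger than the signal, the differential measurement recovers the assumed gradient, which is the essential point of the shared-beam geometry.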
It was around this time, in 2000, when Bongs first encountered cold-atom interferometry, as a postdoc with Kasevich, then at Yale. He explains that the goal was to “get one of the lab-based systems, which were essentially the standard at the time, out into the field”. Even without the problem of vibrational noise, this was a significant challenge. Temperature fluctuations, external magnetic fields and laser stability will all limit the performance of the gradiometer. The portability of the system must also be balanced against the fact that a taller device will allow longer freefall and more sensitive measurements. What’s more, the interferometers will rarely be perfectly directed towards the centre of the Earth, which means the atoms fall slightly sideways relative to the laser beams.
In the summer of 2008, by which time Bongs was in Birmingham, Kasevich’s group, now back at Stanford, mounted a cold-atom gradiometer in a truck and measured the gravity gradient as they drove in and out of a loading bay on the Stanford campus. They measured a peak that coincided with the building’s outer wall, but this demonstration took place with a levelling platform and temperature control inside the truck. The demonstration of the first truly free-standing, outdoor cold-atom gradiometer was still up for grabs.
Ears to the ground
The portable cold-atom gravity sensor project in Birmingham began in earnest in 2011, as a collaboration between the engineers and the physicists. The team knew that building a device that was robust enough to operate outside would be only half the challenge. They also needed to make something cost-effective and easy to operate. “If you can manage to make the laser system small and compact and cheap and robust, then you more or less own quantum technologies,” says Bongs.
When lasers propagate in free space, small knocks and bumps easily misalign the optical components. To make their device portable, the researchers made an early decision to instead use optical fibres, which direct light to the right place even if the device is jolted during transportation or operation.
However, they quickly realized that this was easier said than done. In a standard magneto-optical trap, atoms are cooled by three orthogonal pairs of laser beams that cool and trap them in three dimensions. In the team’s original configuration, this light came from three fibres that were split from a single laser. Bending and temperature fluctuations exert stresses on the optical fibre that alter the polarization of the light as it propagates. Unstable polarizations in the beams meant that the atom clouds were moving around in the optical traps. “It wasn’t very robust,” says Holynski, “we needed a different approach”.
To solve this problem, they adopted a new solution in which light enters the chamber from the top and bottom, where it bounces off a configuration of mirrors to create the two atom traps. Because the beams can’t be individually adjusted, this sacrifices some efficiency, but if it fixed the laser polarization problem, the team decided it was worth a try.
In the world of quantum technologies, 1550 nm is something of a magic number. This is the most common wavelength of telecoms lasers because light of this wavelength propagates furthest in optical fibres. The telecoms industry has therefore invested significant time and money into developing robust lasers operating close to 1550 nm.
By lucky chance, 1550 nm is also almost exactly twice the wavelength of the main resonance of rubidium-87 (780 nm), an alkali metal that is well-suited to atom interferometry. Conveniently close to rubidium-87’s resonant frequency are hyperfine transitions that can be used to cool the atoms, measure their final state and put them into a superposition for interferometry. Frequency doubling using nonlinear crystals is a well-established optical technique, so combining a rubidium interferometer with a frequency-doubled telecoms laser was an ideal solution.
Out and about The quantum-based gravity sensor, pictured outside on the University of Birmingham campus. The blue tube houses the two interferometers and the black box houses the lasers and control electronics. (CC BY 4.0 Nature 602 590)
By 2018, as part of the hub and under contract with the UK Ministry of Defence, the team had assembled a freestanding gradiometer – a 2 m tall tube containing the two interferometers, attached to a box of electronics and the lasers, both mounted on wheels. The researchers performed outdoor trials in 2018 and 2019, including a trip to an underground cave in the Peak District, but they still weren’t getting the performance they wanted. “People get their hopes up,” says Holynski. “This was quite a big journey.”
The researchers worked out that another gamble they had made, this time to reduce the cost of the magnetic shield, wasn’t performing as well as hoped. External magnetic fields shift the atom’s energy levels, but unlike the phase shift due to gravity, this source of error is the same whether the momentum kick is directed up or down. By taking two successive measurements with a downwards and upwards kick, they thought they could remove magnetic noise, enabling them to reduce the cost of the expensive alloy they were using to shield the interferometers.
This worked as expected, but because they were operating outside a controlled laboratory environment, the large variation of the magnetic fields in space and time introduced other errors. It was back to the lab, where the team disassembled the sensor and rebuilt it again with full magnetic shielding.
By 2020 the researchers were ready to take the new device outside. However, the COVID-19 pandemic ground work to a halt and they had to wait until the following year.
Quantum tunnelling
“One of the things that changes about you when you work on gravity gradiometers is you start looking around for potential targets everywhere you go,” says Holynski. In March 2021 a team of physicists and engineers that included Bongs, Metje and Holynski took the newly rebuilt gradiometer for its first outside trial, where they trundled it repeatedly over a road on the University of Birmingham campus. They knew that running under the road was a two-by-two-metre hollow tunnel, built to carry utility lines. They also knew approximately where it was, but wanted to see if the gradiometer could find it.
The first time they did this, they noticed a dip in the gravity gradient that seemed to have the right dimensions for the tunnel, and when they repeated the measurements, they saw it again. Because of their previous unsuccessful attempts, Holynski remained trepidatious. “People get quite excited. And then you have to say to them, ‘Sorry, I don’t think that’s quite conclusive enough yet’.”
(a) A schematic of the 2021 test of the gravity gradiometer, with the hollow utility tunnel pictured to scale. (b) The hourglass configuration of the quantum gravity gradiometer. The atom clouds (green dots) are laser-cooled (red arrows) in magneto-optical traps formed using mirrors (blue). To measure the gravity gradient the atoms are subject to interferometry laser pulses (yellow arrows) under freefall (purple dots).
Elsewhere on campus, another team was busy analysing the data. The results, when they were done, were consistent with a hollow object, about two-by-two metres across, and about a metre below the surface. Millions of people will have walked over that road without thinking once about what’s beneath it, but to the researchers, this was the culmination of a decade of work, and proof that cold-atom gradiometers can operate outside the lab (Nature 602 590).
The valley of death
“It’s one more step in the direction of making quantum sensors available for real-world everyday use,” says Holger Müller, a physicist at the University of California, Berkeley. In 2019 Müller’s group published the results of a gravity survey it had taken with a cold-atom interferometer during a drive through the California hills (Sci. Adv. 5 10.1126/sciadv.aax0800). He is also involved in a NASA project that aims to perform atom interferometry on the International Space Station (Nature Communications 15 6414). Müller thinks that for researchers especially, cold-atom gradiometers could make gravity gradient surveys more accessible than with the Lockheed Martin device.
By now, the Birmingham gravity gradiometer is well travelled. As well as land-based trials, it has been on two ship voyages, one lasting several weeks, to test its performance in different environments and its potential for use in navigation. The project has also become a flagship of the UK’s national quantum technologies programme, garnering industry partners including Network Rail and RSK and spinning out into start-up DeltaG (of which Holynski is a co-founder). Another project in France led by the company iXblue has also built a prototype gravity gradiometer that has been demonstrated indoors (Phys. Rev. A 105 022801).
However, if cold-atom gravity gradiometers are to become an alternative to electromagnetic surveys or spring gravimeters, they must escape the “Valley of Death” – the critical phase in a technology journey when it has been demonstrated but not yet been commercialized.
This won’t be easy. The team has estimated that the gravity gradiometer currently performs about 1.5 times better than the industry-leading spring gravimeter. Spring gravimeters are small, easy to operate and significantly cheaper than the quantum alternative. The cost of the lasers in the quantum gradiometer alone is several hundred thousand pounds, compared with about £100,000 for a spring-based instrument.
The quantum device is also large, requires a team of scientists to operate and maintain it, and consumes much more power than a spring gravimeter. As well as saving time compared to spring gravimeters, a potential advantage of the quantum gravity gradiometer is that because it has no machined moving parts it could be used for passive, long-term environmental monitoring. However, unless the power consumption is reduced it will be tricky to operate it in remote conditions.
In the years since the first test, the team has built another prototype that is about half the size, consumes significantly less power, and delivers the cooling, detection and interferometry using a single laser, which will significantly reduce the total cost. Holynski explains that this system is a “work in progress” that is currently being tested in the laboratory.
A large focus of the group’s efforts has been bringing down the cost of the lasers. “We’ve taken available components from the telecom community and found ways to make them work in our system,” says Holynski. “Now we’re starting to work with the telecom community, the academic and industry community, to think ‘how can we twist their technology and make it cheaper to fit what we need?’”
When Chu and Kasevich demonstrated it for the first time, the idea of atom interferometry was already four decades old, having been proposed by David Bohm and later Eugene Wigner (Am. J. Phys. 31 6). Rather than lasers, this theoretical device was based on the Stern–Gerlach effect, in which an atom in a superposition of spin states is deflected in opposite directions by a magnetic field. Atoms have a much smaller characteristic wavelength than photons, so a practical interferometer requires exquisite control over the atomic wavefronts. In the decades after it was proposed, several theorists, including Julian Schwinger, investigated the idea but found that a useful interferometer would require an extraordinarily controlled low-noise environment that then seemed inaccessible (Found. Phys. 18 1045).
Decades in the making, the mobile cold-atom interferometer is a triumph of practical problem-solving and even if the commercial applications have yet to be realized, one thing is clear: when it comes to pushing the boundaries of quantum physics, sometimes it pays to think like an engineer.
From the Global Physics Summit in Anaheim, California
The greatest pleasure of being at a huge physics conference is learning about the science of something that’s familiar, but also a little bit quirky. That’s why I always try to go to sessions given by undergraduate students, because for some reason they seem to do research projects that are the most fun.
I was not disappointed by the talk given this morning by Atharva Lele, who is at the Georgia Institute of Technology here in the US. He spoke about the physics of manu jumping, a competitive sport that originates from the Māori and Pasifika peoples of New Zealand.
The general idea will be familiar to anyone who messed around at swimming pools as a child: who can make the highest splash when they jump into the water.
Cavity creation
According to Lele, the best manu jumpers enter the water back first, creating a V-shape with their legs and upper body. The highest splashes are made when a jumper creates a deep and wide air cavity that quickly closes, driving water upwards in a jet – often to astonishing heights.
Lele and colleagues discovered that a 45° angle between the legs and torso produced the highest splashes. This is probably because this angle results in a cavity that is both deep and wide. An analysis of videos of manu jumpers revealed that the best ones entered the water at an angle of about 46°, corroborating the team’s findings. This is good news for jumpers, because there is risk of injury at higher angles (think belly flop).
Another important aspect of the study looked at what jumpers did when they entered the water – which is to roll and kick. To study the effect of this motion, the team created a “manu bot”, which unfolded as it entered the water. They found that there was an optimal opening time for making the highest splashes – it is a mere 0.26 s.
I was immediately taken back to my childhood in Canada and realized that we were doing our own version of manu from the high diving board at the local pool. The most successful technique that we discovered was to keep our bodies straight, but enter the water at an angle. This would consistently produce a narrow jet of water. I realize now that by entering the water at an angle, we must have been creating a relatively deep and wide cavity – although probably not as efficiently as manu jumpers. Maybe Lele and colleagues could do a follow-up study looking at alternative versions of manu around the world.
A new study probing quantum phenomena in neurons as they transmit messages in the brain could provide fresh insight into how our brains function.
In this project, described in the Computational and Structural Biotechnology Journal, theoretical physicist Partha Ghose from the Tagore Centre for Natural Sciences and Philosophy in India, together with theoretical neuroscientist Dimitris Pinotsis from City St George’s, University of London and the MillerLab of MIT, proved that established equations describing the classical physics of brain responses are mathematically equivalent to equations describing quantum mechanics. Ghose and Pinotsis then derived a Schrödinger-like equation specifically for neurons.
Our brains process information via a vast network containing many millions of neurons, which can each send and receive chemical and electrical signals. Information is transmitted by nerve impulses that pass from one neuron to the next, thanks to a flow of ions across the neuron’s cell membrane. This results in an experimentally detectable change in electrical potential difference across the membrane known as the “action potential” or “spike”.
When this potential passes a threshold value, the impulse is passed on. But below the threshold for a spike, a neuron’s action potential randomly fluctuates in a similar way to classical Brownian motion – the continuous random motion of tiny particles suspended in a fluid – due to interactions with its surroundings. This creates the so-called “neuronal noise” that the researchers investigated in this study.
Previously, “both physicists and neuroscientists have largely dismissed the relevance of standard quantum mechanics to neuronal processes, as quantum effects are thought to disappear at the large scale of neurons,” says Pinotsis. But some researchers studying quantum cognition hold an alternative to this prevailing view, explains Ghose.
“They have argued that quantum probability theory better explains certain cognitive effects observed in the social sciences than classical probability theory,” Ghose tells Physics World. “[But] most researchers in this field treat quantum formalism [the mathematical framework describing quantum behaviour] as a purely mathematical tool, without assuming any physical basis in quantum mechanics. I found this perspective rather perplexing and unsatisfactory, prompting me to explore a more rigorous foundation for quantum cognition – one that might be physically grounded.”
As such, Ghose and Pinotsis began their work by taking ideas from American mathematician Edward Nelson, who in 1966 derived the Schrödinger equation – which predicts the position and motion of particles in terms of a probability wave known as a wavefunction – using classical Brownian motion.
First, they proved that the variables in the classical equations for Brownian motion that describe the random neuronal noise seen in brain activity also obey quantum mechanical equations, deriving a Schrödinger-like equation for a single neuron. This equation describes neuronal noise by revealing the probability of a neuron having a particular value of membrane potential at a specific instant. Next, the researchers showed how the FitzHugh-Nagumo equations, which are widely used for modelling neuronal dynamics, could be re-written as a Schrödinger equation. Finally, they introduced a neuronal constant in these Schrödinger-like equations that is analogous to Planck’s constant (which defines the amount of energy in a quantum).
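For readers who want to see the classical starting point, here is a minimal forward-Euler integration of the FitzHugh-Nagumo equations in their standard textbook form, with standard parameter choices (the paper's quantum-mechanical rewriting is not reproduced here, and the parameter values are assumptions for illustration).

```python
# Minimal FitzHugh-Nagumo model: a fast membrane potential v and a
# slow recovery variable w, with standard (assumed) parameters.
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)

A, B, EPS = 0.7, 0.8, 0.08  # standard textbook parameter choices
I_EXT = 0.5                 # external stimulus current (assumed)
DT = 0.01                   # Euler time step

def simulate(steps=50000, v=-1.0, w=1.0):
    """Integrate the FitzHugh-Nagumo equations with forward Euler."""
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I_EXT
        dw = EPS * (v + A - B * w)
        v += DT * dv
        w += DT * dw
        trace.append(v)
    return trace

trace = simulate()
# With this stimulus the system settles onto a limit cycle:
# v oscillates (repetitive spiking) rather than resting at a fixed point.
late = trace[len(trace) // 2:]
print(f"v range in steady state: {min(late):.2f} to {max(late):.2f}")
```

Sub-threshold stimulation would instead leave v fluctuating near a fixed point, which is the regime where the neuronal-noise analysis described above applies.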
“I got excited when the mathematical proof showed that the FitzHugh-Nagumo equations are connected to quantum mechanics and the Schrödinger equation,” enthuses Pinotsis. “This suggested that quantum phenomena, including quantum entanglement, might survive at larger scales.”
“Penrose and Hameroff have suggested that quantum entanglement might be related to lack of consciousness, so this study could shed light on how anaesthetics work,” he explains, adding that their work might also connect oscillations seen in recordings of brain activity to quantum phenomena. “This is important because oscillations are considered to be markers of diseases: the brain oscillates differently in patients and controls and by measuring these oscillations we can tell whether a person is sick or not.”
Going forward, Ghose hopes that “neuroscientists will get interested in our work and help us design critical neuroscience experiments to test our theory”. Measuring the energy levels for neurons predicted in this study, and ultimately confirming the existence of a neuronal constant along with quantum effects including entanglement would, he says, “represent a big step forward in our understanding of brain function”.
From the Global Physics Summit in Anaheim, California
I spent most of Saturday travelling between the UK and Anaheim in Southern California, so I was up very early on Sunday with jetlag. Just as the sun was rising over the Santa Ana Mountains on a crisp morning, I went for a run in the suburban neighbourhood just south of the Anaheim Convention Center. As I made my way back to my hotel, the sidewalks were already thronging with physicists on their way to register for the Global Physics Summit (GPS) – which is being held in Anaheim by the American Physical Society (APS).
The GPS combines the APS’s traditional March and April meetings, which focus on condensed-matter physics and on particle and nuclear physics, respectively – and much more. This year, about 14,000 physicists are expected to attend. I popped out at lunchtime and spotted a “physics family” walking along Harbor Boulevard, with parents and kids all wearing vintage APS T-shirts with clever slogans. They certainly stood out from most families, many of which were wearing Mickey Mouse ears (Disneyland is just across the road from the convention centre).
Uniting physicists
The GPS starts in earnest bright and early Monday morning, and I am looking forward to spending a week surrounded by thousands of fellow physicists. While many physicists in the US are facing some pretty dire political and funding issues, I am hoping that the global community can unite in the face of the anti-science forces that have emerged in some countries.
This year is the International Year of Quantum Science and Technology, so it’s not surprising that quantum mechanics will be front and centre here in Anaheim. I am looking forward to the “Quantum Playground”, which runs for much of this week. It promises “themed areas; hands-on interactive experiences; demonstrations and games; art and science installations; mini-performances; and ask the experts”. I’ll report back once I have paid a visit.
Researchers in France have devised a new technique in quantum sensing that uses trapped ultracold atoms to detect acceleration and rotation. They then combined their quantum sensor with a conventional, classical inertial sensor to create a hybrid system that was used to measure acceleration due to Earth’s gravity and the rotational frequency of the Earth. With further development, the hybrid sensor could be deployed in the field for applications such as inertial navigation and geophysical mapping.
Measuring inertial quantities such as acceleration and rotation is at the heart of inertial navigation systems, which operate without information from satellites or other external sources. Such navigation relies on precise knowledge of the position and orientation of the device. Inertial sensors based on classical physics have been available for some time, but quantum devices are showing great promise. Classical sensors using quartz in micro-electro-mechanical systems (MEMS) have gained widespread use thanks to their robustness and speed. However, they suffer from drift – a gradual loss of accuracy over time caused by factors such as temperature sensitivity and material ageing. Quantum sensors using ultracold atoms, by contrast, achieve better stability over long operation times. While such sensors are already commercially available, the technology is still being developed to match the robustness and speed of classical sensors.
Now, the Cold Atom Group of the French Aerospace Lab (ONERA) has devised a new method in atom interferometry that uses ultracold atoms to measure inertial quantities. By launching the atoms using a magnetic field gradient, the researchers demonstrated stabilities below 1 µm/s² for acceleration and 1 µrad/s for rotation over 24 hours. This was done by continuously repeating a 4 s interferometer sequence on the atoms for around 20 min to extract the inertial quantities. That is equivalent to driving a car for 20 min straight and knowing the acceleration and rotation to the µm/s² and µrad/s level.
Cold-atom accelerometer–gyroscope
They built their cold-atom accelerometer–gyroscope using rubidium-87 atoms. By holding the atoms in a magneto-optical trap, the researchers cooled them down to 2 µK, enabling good control over the atoms for further manipulation. When released from the trap, the atoms fall freely under gravity, allowing the researchers to measure their free-fall acceleration using atom interferometry. In their protocol, a series of three light pulses coherently splits an atomic cloud into two paths, then redirects and recombines them so that the cloud interferes with itself. From the phase shift of the resulting interference pattern, the inertial quantities can be deduced.
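The scaling behind this three-pulse sequence can be sketched with the standard Mach–Zehnder atom-interferometer relation Δφ = k_eff·a·T², where k_eff is the effective two-photon wavevector of the interferometry beams, a the acceleration along them and T the time between pulses. The sketch below uses textbook numbers for rubidium-87, not parameters from the ONERA experiment:

```python
import math

# Standard Mach-Zehnder atom-interferometer phase response (a textbook
# relation, not taken from the ONERA paper): delta_phi = k_eff * a * T**2.
# All numbers below are illustrative assumptions.

WAVELENGTH = 780e-9                       # m, rubidium-87 D2 line
K_EFF = 2 * (2 * math.pi / WAVELENGTH)    # rad/m, counter-propagating beams

def interferometer_phase(acceleration, pulse_separation):
    """Phase shift (rad) accumulated across the three-pulse sequence."""
    return K_EFF * acceleration * pulse_separation**2

def acceleration_resolution(phase_resolution, pulse_separation):
    """Smallest resolvable acceleration (m/s^2) for a given phase resolution."""
    return phase_resolution / (K_EFF * pulse_separation**2)

if __name__ == "__main__":
    g = 9.81  # m/s^2
    for T in (0.01, 0.1, 1.0):  # s, illustrative pulse separations
        print(f"T = {T:5.2f} s: phase = {interferometer_phase(g, T):.3e} rad, "
              f"resolution = {acceleration_resolution(1e-3, T):.3e} m/s^2")
```

The 1/T² scaling of the resolution is why long free-fall times – and hence ultracold atoms – pay off.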
Measuring rotation rates, however, requires that the atoms have an initial velocity in the horizontal direction. This is achieved by applying a horizontal magnetic field gradient, which exerts a horizontal force on atoms with magnetic moments. The rubidium atoms are prepared in one of the magnetic states known as Zeeman sublevels. The researchers then use a pair of coils – which they call the “launching coils” – in the horizontal plane to create the magnetic field gradient that gives the atoms a horizontal velocity. The atoms are then transferred back to the non-magnetic ground state using a microwave pulse before the interferometry begins. This avoids any additional magnetic forces that could affect the interferometer’s outcome.
By analysing the launch velocity using laser pulses with tuned frequencies, the researchers are able to discriminate whether the atoms’ velocity comes from the magnetic launching scheme or from other effects. They observe two dominant, symmetric peaks associated with the velocity imparted by the magnetic launch, but also a third, smaller peak in between, caused by an unwanted effect of the laser beams that transfers additional velocity to the atoms. Further improvements in the stability of the laser beams’ polarization – the orientation of the oscillating electric field with respect to the propagation axis – as well as in the current noise of the launching coils, will allow more atoms to be launched.
Using their new launch technique, the researchers operated their cold-atom dual accelerometer–gyroscope for two days straight, averaging down their results to reach an acceleration stability of 7×10⁻⁷ m/s² and a rotation-rate stability of 4×10⁻⁷ rad/s, limited by residual ground-vibration noise.
Best of both worlds
While classical sensors suffer from long-term drifts, they operate continuously, whereas a quantum sensor requires preparation of the atomic sample and an interferometry sequence that together take around half a second. For this reason, a classical–quantum hybrid sensor benefits from both the long-term stability of the quantum sensor and the fast repetition rate of the classical one. By attaching a commercial classical accelerometer and gyroscope to the atom interferometer and implementing a feedback loop on the classical sensors’ outputs, the researchers demonstrated a 100-fold improvement in the acceleration stability and a three-fold improvement in the rotation-rate stability of the classical sensors compared with operating them alone.
Operating this hybrid sensor continuously and using their magnetic launch technique, the researchers report a measurement of the local acceleration due to gravity in their laboratory of 980,881.397 mGal (the milligal is a standard unit of gravimetry). They measured Earth’s rotation rate to be 4.82 × 10⁻⁵ rad/s. Cross-checking with another atomic gravimeter, they find their acceleration value deviates by 2.3 mGal, which they attribute to misalignment of the vertical interferometer beams. Their rotation measurement has a significant error of about 25%, which the team attributes to wavefront distortions of the Raman beams used in their interferometer.
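For readers unfamiliar with gravimetry units: one gal is 1 cm/s², so one milligal is 10⁻⁵ m/s². A quick sanity check (the figures are the team’s; only the standard conversion is added here):

```python
# 1 Gal = 1 cm/s^2, so 1 mGal = 1e-5 m/s^2 (standard unit conversion)
MGAL_TO_MS2 = 1e-5

g_local_mgal = 980_881.397                 # reported local gravity, mGal
g_local = g_local_mgal * MGAL_TO_MS2       # convert to m/s^2
print(f"local g = {g_local:.8f} m/s^2")    # close to the familiar 9.81 m/s^2

deviation = 2.3 * MGAL_TO_MS2              # offset from the reference gravimeter
print(f"deviation = {deviation:.1e} m/s^2")
```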
Yannick Bidel, a researcher working on the project, explains that such an inertial quantum sensor still has room for improvement. Large momentum transfer – a technique that increases the arm separation of the interferometer – is one way to go. He adds that once the team reaches bias stabilities of 10⁻⁹ to 10⁻¹⁰ rad/s in a compact atom interferometer, such a sensor could become transportable and ready for in-field measurement campaigns.
(Courtesy: EHT Collaboration; Los Alamos National Laboratory)
1 When the Event Horizon Telescope imaged a black hole in 2019, what was the total mass of all the hard drives needed to store the data? A 1 kg B 50 kg C 500 kg D 2000 kg
2 In 1956 MANIAC I became the first computer to defeat a human being in chess, but because of its limited memory and power, the pawns and which other pieces had to be removed from the game? A Bishops B Knights C Queens D Rooks
(Courtesy: IOP Publishing; CERN)
3 The logic behind the Monty Hall problem, which involves a car and two goats behind different doors, is one of the cornerstones of machine learning. On which TV game show is it based? A Deal or No Deal B Family Fortunes C Let’s Make a Deal D Wheel of Fortune
4 In 2023 CERN broke which barrier for the amount of data stored on devices at the lab? A 10 petabytes (10¹⁶ bytes) B 100 petabytes (10¹⁷ bytes) C 1 exabyte (10¹⁸ bytes) D 10 exabytes (10¹⁹ bytes)
5 What was the world’s first electronic computer? A Atanasoff–Berry Computer (ABC) B Electronic Discrete Variable Automatic Computer (EDVAC) C Electronic Numerical Integrator and Computer (ENIAC) D Small-Scale Experimental Machine (SSEM)
6 What was the outcome of the chess match between astronaut Frank Poole and the HAL 9000 computer in the movie 2001: A Space Odyssey? A Draw B HAL wins C Poole wins D Match abandoned
7 Which of the following physics breakthroughs used traditional machine learning methods? A Discovery of the Higgs boson (2012) B Discovery of gravitational waves (2016) C Multimessenger observation of a neutron-star collision (2017) D Imaging of a black hole (2019)
8 The physicist John Hopfield shared the 2024 Nobel Prize for Physics with Geoffrey Hinton for their work underpinning machine learning and artificial neural networks – but what did Hinton originally study? A Biology B Chemistry C Mathematics D Psychology
9 Put the following data-driven discoveries in chronological order. A Johann Balmer’s discovery of a formula computing wavelength from Anders Ångström’s measurements of the hydrogen lines B Johannes Kepler’s laws of planetary motion based on Tycho Brahe’s astronomical observations C Henrietta Swan Leavitt’s discovery of the period-luminosity relationship for Cepheid variables D Ole Rømer’s estimation of the speed of light from observations of the eclipses of Jupiter’s moon Io
10 Inspired by Alan Turing’s “Imitation Game” – in which an interrogator tries to distinguish between a human and machine – when did Joseph Weizenbaum develop ELIZA, the world’s first “chatbot”? A 1964 B 1984 C 2004 D 2024
11 What does the CERN particle-physics lab use to store data from the Large Hadron Collider? A Compact discs B Hard-disk drives C Magnetic tape D Solid-state drives
12 In preparation for the High Luminosity Large Hadron Collider, CERN tested a data link to the Nikhef lab in Amsterdam in 2024 that ran at what speed? A 80 Mbps B 8 Gbps C 80 Gbps D 800 Gbps
13 When complete, the Square Kilometre Array telescope will be the world’s largest radio telescope. How many petabytes of data is it expected to archive per year? A 15 B 50 C 350 D 700
This quiz is for fun and there are no prizes. Answers will be published in April.
How would the climate and the environment on our planet change if an asteroid struck? Researchers at the IBS Center for Climate Physics (ICCP) at Pusan National University in South Korea have now tried to answer this question by running several impact simulations with a state-of-the-art Earth system model on their in-house supercomputer. The results show that the climate, atmospheric chemistry and even global photosynthesis would be dramatically disrupted in the three to four years following the event, due to the huge amounts of dust produced by the impact.
Beyond immediate effects such as scorching heat, earthquakes and tsunamis, an asteroid impact would have long-lasting effects on the climate because of the large quantities of aerosols and gases ejected into the atmosphere. Indeed, previous studies of the 10-km Chicxulub asteroid impact, which happened around 66 million years ago, revealed that dust, soot and sulphur led to a global “impact winter” and were very likely responsible for the extinction of the dinosaurs at the Cretaceous–Paleogene boundary.
“This winter is characterized by reduced sunlight, because of the dust filtering it out, cold temperatures and decreased precipitation at the surface,” says Axel Timmermann, director of the ICCP and leader of this new study. “Severe ozone depletion would occur in the stratosphere too because of strong warming caused by the dust particles absorbing solar radiation there.”
These unfavourable climate conditions would inhibit plant growth via a decline in photosynthesis both on land and in the sea and would thus affect food productivity, Timmermann adds.
Something surprising and potentially positive would also happen though, he says: plankton in the ocean would recover within just six months and its abundance could even increase afterwards. Indeed, diatoms (silicate-rich algae) would be more plentiful than before the collision. This might be because the dust created by the asteroid is rich in iron, which would trigger plankton growth as it sinks into the ocean. These phytoplankton “blooms” could help alleviate emerging food crises triggered by the reduction in terrestrial productivity, at least for several years after the impact, explains Timmermann.
The effect of a “Bennu”-sized asteroid impact
In this latest study, published in Science Advances, the researchers simulated the effect of a “Bennu”-sized asteroid impact. Bennu is a so-called medium-sized asteroid, with a diameter of around 500 m. Asteroids of this type are more likely to strike Earth than the larger “planet killers”, but have been studied far less.
There is an estimated 0.037% chance of such an asteroid colliding with Earth in September 2182. While this probability is small, such an impact would be very serious, says Timmermann, and would lead to climate conditions similar to those observed after some of the largest volcanic eruptions in the last 100 000 years. “It is therefore important to assess the risk, which is the product of the probability and the damage that would be caused, rather than just the probability by itself,” he tells Physics World. “Our results can serve as useful benchmarks to estimate the range of environmental effects from future medium-sized asteroid collisions.”
The team ran the simulations on the IBS’ supercomputer Aleph using the Community Earth System Model Version 2 (CESM2) and the Whole Atmosphere Community Climate Model Version 6 (WACCM6). The simulations injected up to 400 million tonnes of dust into the stratosphere.
The climate effects of impact-dust aerosols mainly depend on their abundance in the atmosphere and how they evolve there. The simulations revealed that global mean temperatures would drop by 4 °C, a value comparable with the cooling estimated to have followed the eruption of the Toba volcano around 74 000 years ago (which emitted 2000 Tg (2×10¹⁵ g) of sulphur dioxide). Precipitation would also decrease by 15% worldwide, and ozone would drop by a dramatic 32%, in the first year following the asteroid impact.
Asteroid impacts may have shaped early human evolution
“On average, medium-sized asteroids collide with Earth about every 100 000 to 200 000 years,” says Timmermann. “This means that our early human ancestors may have experienced some of these medium-sized events. These may have impacted human evolution and even affected our species’ genetic makeup.”
The researchers admit that their model has some inherent limitations. For one, CESM2/WACCM6, like other modern climate models, is not designed and optimized to simulate the effects of massive amounts of aerosol injected into the atmosphere. Second, the researchers only focused on the asteroid colliding with the Earth’s land surface. This is obviously less likely than an impact on the ocean, because roughly 70% of Earth’s surface is covered by water, they say. “An impact in the ocean would inject large amounts of water vapour rather than climate-active aerosols such as dust, soot and sulphur into the atmosphere and this vapour needs to be better modelled – for example, for the effect it has on ozone loss,” they say.
The effect of the impact on specific regions on the planet also needs to be better simulated, the researchers add. Whether the asteroid impacts during winter or summer also needs to be accounted for since this can affect the extent of the climate changes that would occur.
Finally, as well as the dust nanoparticles investigated in this study, future work should also look at soot emissions from wildfires ignited by impact spherules, and at sulphur and CO2 released from target evaporites, say Timmermann and colleagues. “The ‘impact winter’ would be intensified and prolonged if other aerosols such as soot and sulphur were taken into account.”
This episode of the Physics World Weekly podcast features Ileana Silvestre Patallo, a medical physicist at the UK’s National Physical Laboratory, and Ruth McLauchlan, consultant radiotherapy physicist at Imperial College Healthcare NHS Trust.
In a wide-ranging conversation with Physics World’s Tami Freeman, Patallo and McLauchlan explain how ionizing radiation such as X-rays and proton beams interact with our bodies and how radiation is being used to treat diseases including cancer.
This episode was created in collaboration with IPEM, the Institute of Physics and Engineering in Medicine. IPEM owns the journal Physics in Medicine & Biology.
Helium deep within the Earth could bond with iron to form stable compounds – according to experiments done by scientists in Japan and Taiwan. The work was done by Haruki Takezawa and Kei Hirose at the University of Tokyo and colleagues, who suggest that Earth’s core could host a vast reservoir of primordial helium-3 – reshaping our understanding of the planet’s interior.
Noble gases including helium are normally chemically inert. But under extreme pressures, heavier members of the group (including xenon and krypton) can form a variety of compounds with other elements. To date, however, less is known about compounds containing helium – the lightest noble gas.
Beyond the synthesis of disodium helide (Na2He) in 2016, and a handful of molecules in which helium forms weak van der Waals bonds with other atoms, the existence of other helium compounds has remained purely theoretical.
As a result, the conventional view is that any primordial helium-3 present when our planet first formed would have quickly diffused through Earth’s interior, before escaping into the atmosphere and then into space.
Tantalizing clues
However, there are tantalizing clues that helium compounds could exist in some volcanic rocks on Earth’s surface. These rocks contain unusually high isotopic ratios of helium-3 to helium-4. “Unlike helium-4, which is produced through radioactivity, helium-3 is primordial and not produced in planetary interiors,” explains Hirose. “Based on volcanic rock measurements, helium-3 is known to be enriched in hot magma, which originally derives from hot plumes coming from deep within Earth’s mantle.” The mantle is the region between Earth’s core and crust.
The fact that the isotope can still be found in rock and magma suggests that it must have somehow become trapped in the Earth. “This argument suggests that helium-3 was incorporated into the iron-rich core during Earth’s formation, some of which leaked from the core to the mantle,” Hirose explains.
It could be that the extreme pressures present in Earth’s iron-rich core enabled primordial helium-3 to bond with iron to form stable molecular lattices. To date, however, this possibility has never been explored experimentally.
Now, Takezawa, Hirose and colleagues have triggered reactions between iron and helium within a laser-heated diamond-anvil cell. Such cells crush small samples to extreme pressures – in this case as high as 54 GPa. While this is less than the pressure in the core (about 350 GPa), the reactions created molecular lattices of iron and helium. These structures remained stable even when the diamond-anvil’s extreme pressure was released.
To determine the molecular structures of the compounds, the researchers did X-ray diffraction experiments at Japan’s SPring-8 synchrotron. The team also used secondary ion mass spectrometry to determine the concentration of helium within their samples.
Synchrotron and mass spectrometer
“We also performed first-principles calculations to support experimental findings,” Hirose adds. “Our calculations also revealed a dynamically stable crystal structure, supporting our experimental findings.” Altogether, this combination of experiments and calculations showed that the reaction could form two distinct lattices (face-centred cubic and distorted hexagonal close packed), each with differing ratios of iron to helium atoms.
These results suggest that similar reactions between helium and iron may have occurred within Earth’s core shortly after its formation, trapping much of the primordial helium-3 in the material that coalesced to form Earth. This would have created a vast reservoir of helium in the core, which is gradually making its way to the surface.
However, further experiments are needed to confirm this thesis. “For the next step, we need to see the partitioning of helium between iron in the core and silicate in the mantle under high temperatures and pressures,” Hirose explains.
Observing this partitioning would help rule out the lingering possibility that unbonded helium-3 could be more abundant than expected within the mantle – where it could be trapped by some other mechanism. Either way, further studies would improve our understanding of Earth’s interior composition – and could even tell us more about the gases present when the solar system formed.
Two months into Donald Trump’s second presidency and many parts of US science – across government, academia, and industry – continue to be hit hard by the new administration’s policies. Science-related government agencies are seeing budgets and staff cut, especially in programmes linked to climate change and diversity, equity and inclusion (DEI). Elon Musk’s Department of Government Efficiency (DOGE) is also causing havoc as it seeks to slash spending.
In mid-February, DOGE fired more than 300 employees at the National Nuclear Security Administration, which is part of the US Department of Energy, many of whom were responsible for reassembling nuclear warheads at the Pantex plant in Texas. A day later, the agency was forced to rescind all but 28 of the sackings amid concerns that their absence could jeopardise national security.
A judge has also reinstated workers who were laid off at the National Science Foundation (NSF) as well as at the Centers for Disease Control and Prevention. The judge said the government’s Office of Personnel Management, which sacked the staff, did not have the authority to do so. However, the NSF rehiring applies mainly to military veterans and staff with disabilities, with the overall workforce down by about 140 people – or roughly 10%.
The NSF has also announced a reduction, the size of which is unknown, in its Research Experiences for Undergraduates programme. Over the last 38 years, the initiative has given thousands of college students – many with backgrounds that are underrepresented in science – the opportunity to carry out original research at institutions during the summer holidays. NSF staff are also reviewing thousands of grants containing such words as “women” and “diversity”.
NASA, meanwhile, is to shut its office of technology, policy and strategy, along with its chief-scientist office, and the DEI and accessibility branch of its diversity and equal opportunity office. “I know this news is difficult and may affect us all differently,” admitted acting administrator Janet Petro in an all-staff e-mail. Affecting about 20 staff, the move is on top of plans to reduce NASA’s overall workforce. Reports also suggest that NASA’s science budget could be slashed by as much as 50%.
Hundreds of “probationary employees” have also been sacked by the National Oceanic and Atmospheric Administration (NOAA), which provides weather forecasts that are vital for farmers and people in areas threatened by tornadoes and hurricanes. “If there were to be large staffing reductions at NOAA there will be people who die in extreme weather events and weather-related disasters who would not have otherwise,” warns climate scientist Daniel Swain from the University of California, Los Angeles.
Climate concerns
In his first cabinet meeting on 26 February, Trump suggested that officials “use scalpels” when trimming their departments’ spending and personnel – rather than Musk’s figurative chainsaw. But bosses at the Environmental Protection Agency (EPA) still plan to cut its budget by about two-thirds. “[W]e fear that such cuts would render the agency incapable of protecting Americans from grave threats in our air, water, and land,” wrote former EPA administrators William Reilly, Christine Todd Whitman and Gina McCarthy in the New York Times.
The White House’s attack on climate science goes beyond just the EPA. In January, the US Department of Agriculture removed almost all data on climate change from its website. The action resulted in a lawsuit in March from the Northeast Organic Farming Association of New York and two non-profit organizations – the Natural Resources Defense Council and the Environmental Working Group. They say that the removal hinders research and “agricultural decisions”.
The Trump administration has also barred NASA’s now former chief scientist Katherine Calvin and members of the State Department from travelling to China for a planning meeting of the Intergovernmental Panel on Climate Change. Meanwhile, in a speech to African energy ministers in Washington on 7 March, US energy secretary Chris Wright claimed that coal has “transformed our world and made it better”, adding that climate change, while real, is not on his list of the world’s top 10 problems. “We’ve had years of Western countries shamelessly saying ‘don’t develop coal’,” he said. “That’s just nonsense.”
At the National Institutes of Health (NIH), staff are being told to cancel hundreds of research grants that involve DEI and transgender issues. The Trump administration also wants to cap the allowance for indirect costs on NIH and other agencies’ research grants at 15%, although a district court judge has put that move on hold pending further legal arguments. On 8 March, the Trump administration also threatened to cancel $400m in funding to Columbia University, purportedly due to its failure to tackle anti-semitism on campus.
A Trump policy of removing “undocumented aliens” continues to alarm universities that have overseas students. Some institutions have already advised overseas students against travelling abroad during holidays, in case immigration officers do not let them back in when they return. Others warn that their international students should carry their immigration documents with them at all times. Universities have also started to rein in spending with Harvard and the Massachusetts Institute of Technology, for example, implementing a hiring freeze.
Falling behind
Amid the turmoil, the US scientific community is beginning to fight back. Individual scientists have supported court cases that have overturned sackings at government agencies, while a letter to Congress signed by the Union of Concerned Scientists and 48 scientific societies asserts that the administration has “already caused significant harm to American science”. On 7 March, more than 30 US cities also hosted “Stand Up for Science” rallies attended by thousands of demonstrators.
Elsewhere, a group of government, academic and industry leaders – known collectively as Vision for American Science and Technology – has released a report warning that the US could fall behind China and other competitors in science and technology. Entitled Unleashing American Potential, it calls for increased public and private investment in science to maintain US leadership. “The more dollars we put in from the feds, the more investment comes in from industry, and we get job growth, we get economic success, and we get national security out of it,” notes Sudip Parikh, chief executive of the American Association for the Advancement of Science, who was involved in the report.
Marcia McNutt, president of the National Academy of Sciences, meanwhile, has called on the community to continue to highlight the benefit of science. “We need to underscore the fact that stable federal funding of research is the main mode by which radical new discoveries have come to light – discoveries that have enabled the age of quantum computing and AI and new materials science,” she said. “These are areas that I am sure are very important to this administration as well.”
New for 2025, the American Physical Society (APS) is combining its March Meeting and April Meeting into a joint event known as the APS Global Physics Summit. The largest physics research conference in the world, the Global Physics Summit brings together 14,000 attendees across all disciplines of physics. The meeting takes place in Anaheim, California (as well as virtually) from 16 to 21 March.
Uniting all disciplines of physics in one joint event reflects the increasingly interdisciplinary nature of scientific research and enables everybody to participate in any session. The meeting includes cross-disciplinary sessions and collaborative events, where attendees can meet to connect with others, discuss new ideas and discover groundbreaking physics research.
The meeting will take place in three adjacent venues. The Anaheim Convention Center will host March Meeting sessions, while the April Meeting sessions will be held at the Anaheim Marriott. The Hilton Anaheim will host SPLASHY (soft, polymeric, living, active, statistical, heterogeneous and yielding) matter and medical physics sessions. Cross-disciplinary sessions and networking events will take place at all sites and in the connecting outdoor plaza.
With programming aligned with the 2025 International Year of Quantum Science and Technology, the meeting also celebrates all things quantum with a dedicated Quantum Festival. Designed to “inspire and educate”, the festival incorporates events at the intersection of art, science and fun – with multimedia performances, science demonstrations, circus performers, and talks by Nobel laureates and a NASA astronaut.
Finally, there’s the exhibit hall, where more than 200 exhibitors will showcase products and services for the physics community. Here, delegates can also attend poster sessions, a career fair and a graduate school fair. Read on to find out about some of the innovative product offerings on show at the technical exhibition.
Precision motion drives innovative instruments for physics applications
For over 25 years Mad City Labs has provided precision instrumentation for research and industry, including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes and atomic force microscopes (AFMs).
This product portfolio, coupled with the company’s expertise in custom design and manufacturing, enables Mad City Labs to provide solutions for nanoscale motion for diverse applications such as astronomy, biophysics, materials science, photonics and quantum sensing.
Mad City Labs’ piezo nanopositioners feature the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution and motion control down to the single picometre level. The performance of the nanopositioners is central to the company’s instrumentation solutions, as well as the diverse applications that it can serve.
Within the scanning probe microscopy solutions, the nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yields high positioning performance and control. Uniquely, Mad City Labs offers both optical deflection AFMs and resonant probe AFM models.
Product portfolio Mad City Labs provides precision instrumentation for applications ranging from astronomy and biophysics, to materials science, photonics and quantum sensing. (Courtesy: Mad City Labs)
The MadAFM is a sample scanning AFM in a compact, tabletop design. Designed for simple user-led installation, the MadAFM is a multimodal optical deflection AFM and includes software. The resonant probe AFM products include the AFM controllers MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs micro- and nanopositioners. All AFM instruments are ideal for material characterization, but resonant probe AFMs are uniquely well suited for quantum sensing and nano-magnetometry applications.
Stop by the Mad City Labs booth and ask about the new do-it-yourself quantum scanning microscope based on the company’s AFM products.
Mad City Labs also offers standalone micropositioning products such as optical microscope stages, compact positioners and the Mad-Deck XYZ stage platform. These products employ proprietary intelligent control to optimize stability and precision. The micropositioning products are compatible with the high-resolution nanopositioning systems, enabling motion control across length scales from micrometres to picometres.
The new MMP-UHV50 micropositioning system offers 50 mm travel with 190 nm step size and maximum vertical payload of 2 kg, and is constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks. Uniquely, the MMP-UHV50 incorporates a zero power feature when not in motion to minimize heating and drift. Safety features include limit switches and overheat protection, a critical item when operating in vacuum environments.
For advanced microscopy techniques for biophysics, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multicolour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques. Finally, new motorized micromirrors enable easier alignment and stored setpoints.
Visit Mad City Labs at the APS Global Summit, at booth #401.
New lasers target quantum, Raman spectroscopy and life sciences
HÜBNER Photonics, manufacturer of high-performance lasers for advanced imaging, detection and analysis, is highlighting a large range of exciting new laser products at this year’s APS event. With these new lasers, the company responds to market trends specifically within the areas of quantum research and Raman spectroscopy, as well as fluorescence imaging and analysis for life sciences.
Dedicated to the quantum research field, a new series of CW ultralow-noise single-frequency fibre amplifier products – the Ampheia Series lasers – offer output powers of up to 50 W at 1064 nm and 5 W at 532 nm, with an industry-leading low relative intensity noise. The Ampheia Series lasers ensure unmatched stability and accuracy, empowering researchers and engineers to push the boundaries of what’s possible. The lasers are specifically suited for quantum technology research applications such as atom trapping, semiconductor inspection and laser pumping.
Ultralow-noise operation The Ampheia Series lasers are particularly suitable for quantum technology research applications. (Courtesy: HÜBNER Photonics)
In addition to the Ampheia Series, the new Cobolt Qu-T Series of single-frequency tunable lasers addresses atom cooling. With wavelengths of 707, 780 and 813 nm, coarse tunability of greater than 4 nm, narrow mode-hop-free tuning of below 5 GHz, linewidth of below 50 kHz and powers of 500 mW, the Cobolt Qu-T Series is perfect for atom cooling of rubidium, strontium and other atoms used in quantum applications.
For the Raman spectroscopy market, HÜBNER Photonics announces the new Cobolt Disco single-frequency laser with available power of up to 500 mW at 785 nm, in a perfect TEM00 beam. This new wavelength is an extension of the Cobolt 05-01 Series platform, which with excellent wavelength stability, a linewidth of less than 100 kHz and spectral purity better than 70 dB, provides the performance needed for high-resolution, ultralow-frequency Raman spectroscopy measurements.
For life science applications, a number of new wavelengths and higher power levels are available, including 553 nm with 100 mW and 594 nm with 150 mW. These new wavelengths and power levels are available on the Cobolt 06-01 Series of modulated lasers, which offer versatile and advanced modulation performance with perfect linear optical response, true OFF states and stable illumination from the first pulse – for any duty cycles and power levels across all wavelengths.
The company’s unique multi-line laser, Cobolt Skyra, is now available with laser lines covering the full green–orange spectral range, including 594 nm, with up to 100 mW per line. This makes this multi-line laser highly attractive as a compact and convenient illumination source in most bioimaging applications, and now also specifically suitable for excitation of AF594, mCherry, mKate2 and other red fluorescent proteins.
In addition, with the Cobolt Kizomba laser, the company is introducing a new UV wavelength that specifically addresses the flow cytometry market. The Cobolt Kizomba laser offers 349 nm output at 50 mW with the renowned performance and reliability of the Cobolt 05-01 Series lasers.
Visit HÜBNER Photonics at the APS Global Summit, at booth #359.
Are we at risk of losing ourselves in the midst of technological advancement? Could the tools we build to reflect our intelligence start distorting our very sense of self? Artificial intelligence (AI) is a technological advancement with huge ethical implications, and in The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, Shannon Vallor offers a philosopher’s perspective on this vital question.
Vallor, who is based at the University of Edinburgh in the UK, argues that artificial intelligence is not just reshaping society but is also subtly rewriting our relationship with knowledge and autonomy. She even goes as far as to say, “Today’s AI mirrors tell us what it is to be human – what we prioritize, find good, beautiful or worth our attention.”
Vallor employs the metaphor of AI as a mirror – a device that reflects human intelligence but lacks independent creativity. According to her, AI systems, which rely on curated sets of training data, cannot truly innovate or solve new challenges. Instead, they mirror our collective past, reflecting entrenched biases and limiting our ability to address unprecedented global problems like climate change. Therefore, unless we carefully consider how we build and use AI, it risks stalling human progress by locking us into patterns of the past.
The book explores how humanity’s evolving relationship with technology – from mechanical automata and steam engines to robotics and cloud computing – has shaped the development of AI. Vallor grounds readers in what AI is and, crucially, what it is not. As she explains, while AI systems appear to “think”, they are fundamentally tools designed to process and mimic human-generated data.
The book’s philosophical underpinnings are enriched by Vallor’s background in the humanities and her ethical expertise. She draws on myths, such as the story of Narcissus, who met a tragic end after being captivated by his reflection, to illustrate the dangers of AI. She gives as an example the effect that AI social-media filters have on the propagation and domination of Western beauty standards.
Vallor also explores the long history of literature grappling with artificial intelligence, self-awareness and what it truly means to be human. These fictional works, which include Do Androids Dream of Electric Sheep? by Philip K Dick, are used not just as examples but as tools to explore the complex relationship between humanity and AI. The emphasis on the ties between AI and popular culture results in writing that is both accessible and profound, deftly weaving complex ideas into a narrative that engages readers from all backgrounds.
One area where I find Vallor’s conclusions contentious is her vision for AI in augmenting science communication and learning. She argues that our current strategies for science communication are inadequate and that improving public and student access to reliable information is critical. In her words: “Training new armies of science communicators is an option, but a less prudent use of scarce public funds than conducting vital research itself. This is one area where AI mirrors will be useful in the future.”
Science communication and teaching are about more than simply summarising papers or presenting data; they require human connection to contextualize findings and make them accessible to broad audiences
In my opinion, this statement warrants significant scrutiny. Science communication and teaching are about more than simply summarising papers or presenting data; they require human connection to contextualize findings and make them accessible to broad audiences. While public distrust of experts is a legitimate issue, delegating science communication to AI risks exacerbating the problem.
AI’s lack of genuine understanding, combined with its susceptibility to bias and detachment from human nuance, could further erode trust and deepen the disconnect between science and society. Vallor’s optimism in this context feels misplaced. AI, as it currently stands, is ill-suited to bridge the gaps that good science communication seeks to address.
Despite its generally critical tone, The AI Mirror is far from a technophobic manifesto. Vallor’s insights are ultimately hopeful, offering a blueprint for reclaiming technology as a tool for human advancement. She advocates for transparency, accountability, and a profound shift in economic and social priorities. Rather than building AI systems to mimic human behaviour, she argues, we should design them to amplify our best qualities – creativity, empathy and moral reasoning – while acknowledging the risk that this technology will devalue these talents as well as amplify them.
The AI Mirror is essential reading for anyone concerned about the future of artificial intelligence and its impact on humanity. Vallor’s arguments are rigorous yet accessible, drawing from philosophy, history and contemporary AI research. She challenges readers to see AI not as a technological inevitability but as a cultural force that we must actively shape.
Her emphasis on the need for a “new language of virtue” for the AI age warrants consideration, particularly in her call to resist the seductive pull of efficiency and automation at the expense of humanity. Vallor argues that as AI systems increasingly influence decision-making in society, we must cultivate a vocabulary of ethical engagement that goes beyond simplistic notions of utility and optimization. As she puts it: “We face a stark choice in building AI technologies. We can use them to strengthen our humane virtues, sustaining and extending our collective capabilities to live wisely and well. By this path, we can still salvage a shared future for human flourishing.”
Vallor’s final call to action is clear: we must stop passively gazing into the AI mirror and start reshaping it to serve humanity’s highest virtues, rather than its worst instincts. If AI is a mirror, then we must decide what kind of reflection we want to see.
Set to operate for two years in a polar orbit about 650 km from the Earth’s surface, SPHEREx will collect data from 450 million galaxies as well as more than 100 million stars to create a 3D map of the cosmos.
It will use this to gain insight into cosmic inflation – the rapid expansion of the universe following the Big Bang.
It will also search the Milky Way for hidden reservoirs of water, carbon dioxide and other ingredients critical for life as well as study the cosmic glow of light from the space between galaxies.
The craft features three concentric shields that surround the telescope to protect it from light and heat. Three mirrors, including a 20 cm primary mirror, collect light before feeding it into filters and detectors. The set-up allows the telescope to resolve 102 different wavelengths of light.
Packing a punch
SPHEREx has been launched together with another NASA mission dubbed Polarimeter to Unify the Corona and Heliosphere (PUNCH). Via a constellation of four satellites in a low-Earth orbit, PUNCH will make 3D observations of the Sun’s corona to learn how the corona’s mass and energy become the solar wind. It will also explore the formation and evolution of space weather events such as coronal mass ejections, which can create storms of energetic particle radiation that can be damaging to spacecraft.
PUNCH will now undergo a three-month commissioning period, in which the four satellites will enter the correct orbital formation and the instruments will be calibrated to operate as a single “virtual instrument”, before it begins studying the solar wind.
“Everything in NASA science is interconnected, and sending both SPHEREx and PUNCH up on a single rocket doubles the opportunities to do incredible science in space,” noted Nicky Fox, associate administrator for NASA’s science mission directorate. “Congratulations to both mission teams as they explore the cosmos from far-out galaxies to our neighbourhood star. I am excited to see the data returned in the years to come.”
A research team headed up at Linköping University in Sweden and Cornell University in the US has succeeded in recycling almost all of the components of perovskite solar cells using simple, non-toxic, water-based solvents. What’s more, the researchers were able to use the recycled components to make new perovskite solar cells with almost the same power conversion efficiency as those created from new materials. This work could pave the way to a sustainable perovskite solar economy, they say.
While solar energy is considered an environmentally friendly source of energy, most of the solar panels available today are based on silicon, which is difficult to recycle. As a result, the first generation of silicon solar panels, now reaching the end of their life cycles, is ending up in landfills, says Xun Xiao, one of the team members at Linköping University.
“When developing emerging solar cell technologies, we therefore need to take recycling into consideration,” adds one of the leaders of the new study, Feng Gao, also at Linköping. “If we don’t know how to recycle them, maybe we shouldn’t put them on the market at all.”
To this end, many countries around the world are imposing legal requirements on photovoltaic manufacturers, to ensure that they collect and recycle any solar cell waste they produce. These initiatives include the WEEE directive 2012/19/EU in the European Union and equivalent legislation in Asia and the US.
Perovskites are one of the most promising materials for making next-generation solar cells. Not only are they relatively inexpensive, they are also easy to fabricate, lightweight, flexible and transparent. This allows them to be placed on top of a variety of surfaces, unlike their silicon counterparts. And since they boast a power conversion efficiency (PCE) of more than 25%, this makes them comparable to existing photovoltaics on the market.
A shorter lifespan
One of their downsides, however, is that perovskite solar cells have a shorter lifespan than silicon solar cells. This means that recycling is even more critical for these materials. Today, perovskite solar cells are disassembled using dangerous solvents such as dimethylformamide, but Gao and colleagues have now developed a technique in which water can be used as the solvent.
Perovskites are crystalline materials with an ABX3 structure, where A is caesium, methylammonium (MA) or formamidinium (FA); B is lead or tin; and X is chlorine, bromine or iodine. Solar cells made of these materials are composed of different layers: the hole/electron transport layers; the perovskite layer; indium tin oxide substrates; and cover glasses.
In their work, which they detail in Nature, the researchers succeeded in delaminating end-of-life devices layer by layer, using water containing three low-cost additives: sodium acetate, sodium iodide and hypophosphorous acid. Despite being able to dissolve organic iodide salts such as methylammonium iodide and formamidinium iodide, water only marginally dissolves lead iodide (about 0.044 g per 100 ml at 20 °C). The researchers therefore developed a way to increase the amount of lead iodide that dissolves in water by introducing acetate ions into the mix. These ions readily coordinate with lead ions, forming highly soluble lead acetate (about 44.31 g per 100 ml at 20 °C).
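The scale of that solubility gain can be checked with quick arithmetic, using the two figures quoted above (the script below is purely illustrative; the variable names are ours, not the study's):

```python
# Solubility figures quoted in the study (g per 100 ml of water at 20 °C)
lead_iodide_solubility = 0.044   # PbI2: only marginally soluble in water
lead_acetate_solubility = 44.31  # lead acetate: highly soluble

# Coordinating lead with acetate ions raises the amount of lead the
# aqueous solution can carry by roughly three orders of magnitude
gain = lead_acetate_solubility / lead_iodide_solubility
print(f"Solubility gain: ~{gain:.0f}x")  # ~1007x
```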
Once the degraded perovskites had dissolved in the aqueous solution, the researchers set about recovering pure and high-quality perovskite crystals from the solution. They did this by providing extra iodide ions to coordinate with lead. This resulted in [PbI]+ transitioning to [PbI2]0 and eventually to [PbI3]− and the formation of the perovskite framework.
To remove the indium tin oxide substrates, the researchers sonicated these layers in a solution of water/ethanol (50%/50% volume ratio) for 15 min. Finally, they delaminated the cover glasses by placing the degraded solar cells on a hotplate preheated to 150 °C for 3 min.
They were able to apply their technology to recycle both MAPbI3 and FAPbI3 perovskites.
New devices made from the recycled perovskites had an average power conversion efficiency of 21.9 ± 1.1%, with the best samples clocking in at 23.4%. This represents an efficiency recovery of more than 99% compared with those prepared using fresh materials (which have a PCE of 22.1 ± 0.9%).
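The "more than 99%" recovery figure follows directly from the two average PCE values (a one-line check; the naming here is ours):

```python
# Average power conversion efficiencies reported for the two device types (%)
recycled_pce = 21.9  # devices made from recycled perovskites
fresh_pce = 22.1     # devices made from fresh materials

# Efficiency recovery: recycled performance as a fraction of fresh performance
recovery = recycled_pce / fresh_pce * 100
print(f"Efficiency recovery: {recovery:.1f}%")  # 99.1%, i.e. more than 99%
```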
Looking forward, Gao and colleagues say they would now like to demonstrate that their technique works on a larger scale. “Our life-cycle assessment and techno-economic analysis has already confirmed that our strategy not only preserves raw materials, but also appreciably lowers overall manufacturing costs of solar cells made from perovskites,” says co-team leader Fengqi You, who works at Cornell University. “In particular, reclaiming the valuable layers in these devices drives down expenses and helps reduce the ‘levelized cost’ of electricity they produce, making the technology potentially more competitive and sustainable at scale,” he tells Physics World.
Quantum technologies are flourishing the world over, with advances across the board in practical applications such as quantum computing, communication, cryptography and sensing. Indeed, the quantum industry is booming – an estimated $42bn was invested in the sector in 2023, and this amount is projected to rise to $106bn by 2040.
With academia, industry and government all looking for professionals to join the future quantum workforce, it’s crucial to have people with the right skills, and from all educational levels. With this in mind, efforts are being made across the US to focus on quantum education and training, with educators working to introduce quantum concepts from the elementary-school level, all the way to tailored programmes at PhD and postgraduate level that meet the needs of potential employers in the area. Efforts are being made to ensure that graduates and early-career physicists are aware of the many roles available in the quantum sphere.
“There are a lot of layers to what has to be done in quantum education,” says Emily Edwards, an electrical and computer engineer at Duke University and co-leader of the National Q-12 Education Partnership. “I like to think of quantum education along different dimensions. One way is to think about what most learners may need in terms of foundational public literacy or student literacy in the space. Towards the top, we have people who are very specialized. Essentially, we have to think about many different learners at different stages – they might need specific tools or might need different barriers removed for them. And so different parts of the economy – from government to industry to academia and professional institutions – will play a role in how to address the needs of a certain group.”
Engaging young minds
To ensure that the US remains a key global player in quantum information science and technology (QIST), the National Q-12 Education Partnership – launched by the White House Office of Science and Technology Policy and the National Science Foundation (NSF) – is focused on ways to engage young minds in quantum, building the necessary tools and strategies to help improve early (K-12) education and outreach.
To achieve this, Q-12 is looking at outreach and education in middle and high school, introducing QIST concepts and providing access to learning materials to inspire the next generation of quantum leaders. Over the next decade, Q-12 also aims to provide quantum-related curricula – developed by professionals in the field – beyond university labs and classrooms, to community colleges and online courses.
Edwards explains that while Q-12 mainly focuses on the K-12 level, there is also an overlap with early undergraduate, two-year colleges – meaning that there is a wide range of requirements, issues and unique challenges to contend with. Such a big space also means that different companies and institutions have varying levels of funding and interests in quantum education research and development.
“Academic organizations, for example, tend to work on educational research or to provide professional development, especially because it’s nascent,” says Edwards. “There is a lot of activity in the academic space and within professional societies. We also work with a number of private companies, some of which are developing curricula, or providing free access to different tools and simulations for learning experiences.”
The role of the APS
The American Physical Society (APS) is strongly involved in quantum education – by making sure that teachers have access to tools and resources for quantum education as well as connecting quantum professionals with K-12 classrooms to discuss careers in quantum. “The APS has been really active in engaging with teachers and connecting them with the vast network of APS members, stakeholders and professionals, to talk about careers,” says Edwards. APS and Q-12 have a number of initiatives – such as Quantum To-Go and QuanTime – that help connect quantum professionals with classrooms and provide teachers with ready-to-use quantum activities.
Role model The Quantum To-Go programme matches scientists, engineers and professionals in quantum information science and technology with classrooms across the US to inspire students to enter the quantum workforce. (Courtesy: APS)
Claudia Fracchiolla, who is the APS’s head of public engagement, points out that while there is growing interest in quantum education, there is a lack of explicit support for high-school teachers, who need to discuss possible careers in quantum with students who will soon be choosing a major.
“We know from our research that while teachers might want to engage in this professional development, they don’t always have the necessary support from their institution and it is not regulated,” explains Fracchiolla. She adds that while there are a “few stellar people in the field who are creating materials for teachers”, there is not a clear standard on how they can be used, or what can be taught at a school level.
Quantum To-Go
To help tackle these issues, the APS and Q-12 launched the Quantum To-Go programme, which pairs educators with quantum-science professionals, who speak to students about quantum concepts and careers. The programme covers students from the first year of school through to undergraduate level, with scientists visiting in person or virtually.
It’s a really great way for quantum professionals in different sectors to visit classrooms and talk about their experiences
Emily Edwards
“I think it’s a really great way for quantum professionals in different sectors to visit classrooms and talk about their experiences,” says Edwards. She adds that this kind of collaboration can be especially useful “because we know that students – particularly young women, or students of colour or those from any marginalized background – self-select out of these areas while they’re still in the K-12 environment.”
Edwards puts this down to a lack of role models in the workplace. “Not only do they not hear about quantum in the classroom or in their curriculum, but they also can’t see themselves working in the field,” she says. “So there’s no hope of achieving a diverse workforce if you don’t connect a diverse set of professionals with the classroom. So we are really proud to be a part of Quantum To-Go.”
Quantum resources
With 2025 being celebrated as the International Year of Quantum Science and Technology (IYQ), both Q-12 and the APS hope to see and host many community-driven activities and events focused on young learners and their families. An example of this is Q-12’s QuanTime initiative, which seeks to help teachers curate informal quantum activities across the US all year round. “Education is local in the US, and so it’s most successful if we can work with locals to help develop their own community resources,” explains Edwards.
A key event in the APS’s annual calendar of activities celebrating IYQ is the Quantum Education and Policy Summit, held in partnership with the Q-SEnSE institute. It aims to bring together key experts in physics education, policymakers and quantum industry leaders, to develop quantum educational resources and policies.
Quantum influencers Testifying before the US House Science Committee on 7 June 2023 were (from left to right) National Quantum Coordination Office director Charles Tahan, former Department of Education under secretary for science Paul Dabbar, NASA quantum scientist Eleanor Rieffel, Quantum Economic Development Consortium executive director Celia Merzbacher, and University of Illinois quantum scientist Emily Edwards (now at Duke University). (Courtesy: House Science Committee)
Another popular resource produced by the APS is its PhysicsQuest kits, which are aimed at middle-school students to help them explore specific physics topics. “We engaged with different APS members who work in quantum to design activities for middle-school students,” says Fracchiolla. “We then worked with some teachers to pilot and test those activities, before finalizing our kits, which are freely available to teachers. Normally, each year we do four activities, but thanks to IYQ, we decided to double that to eight activities that are all related to topics in quantum science and technology.”
To help distribute these kits to teachers, as well as provide them with guidance on how to use all the included materials, the APS is hosting workshops for teachers during the Teachers’ Days at the APS Global Physics Summit in March 2025. Workshops will also be held at the APS Division of Atomic, Molecular and Optical Physics (DAMOP) annual meeting in June.
“A key part of IYQ is creating an awareness of what quantum science and technology entails, because it is also about the people that work in the field,” says Fracchiolla. “Something that was really important when we were writing the proposal to send to the UN for the IYQ was to demonstrate how quantum technologies will support the UN’s sustainable development goals. I hope this also inspires students to pursue careers in quantum, as they realize that it goes beyond quantum computing.”
If we are focusing on quantum technologies to address sustainable development goals, we need to make sure that they are accessible to everyone
Claudia Fracchiolla
Fracchiolla also underlines that having a diverse range of people in the quantum workforce will ensure that these technologies will help to tackle societal and environmental issues, and vice versa. “If we are focusing on quantum technologies to address sustainable development goals, we need to make sure that they are accessible to everyone. And that’s not going to happen if diverse minds are not involved in the process of developing these technologies,” she says, while acknowledging that this is currently not the case.
It is Fracchiolla’s ultimate hope that the IYQ and the APS’s activities taken together will help all students feel empowered that there is a place for them in the field. “Quantum is still a nascent field and we have the opportunity to not repeat the errors of the past that have made many areas of science exclusive. We need to make the field diverse from the get-go.”
The Stand Up for Science demonstration at Washington Square Park in New York City on Friday 7 March 2025 had the most qualified speakers, angriest participants and wickedest signs of any protest I can remember.
Raucous, diverse and loud, it was held in the shadow of looming massive cuts to key US scientific agencies, including the National Institutes of Health (NIH), the National Science Foundation (NSF) and the National Oceanic and Atmospheric Administration (NOAA).
Other anti-science actions have included the appointment of a vaccine opponent as head of the US Department of Health and Human Services and the cancellation of $400m in grants and contracts to Columbia University.
I arrived at the venue half an hour beforehand. Despite the chillingly cold and breezy weather, the park’s usual characters were there, including chess players, tap dancers, people advertising “Revolution Books” and evangelists who handed me a “spiritual credit card”.
But I had come for a more real-world cause that is affecting many of my research colleagues right here, right now. Among the Stand Up For Science demonstrators was Srishti Bose, a fourth-year graduate student in neuroscience at Queens College, who met me underneath the arch at the north of the park, the traditional site of demonstrations.
She had organized the rally together with two other women – a graduate student at Stony Brook University and a postdoc at the Albert Einstein College of Medicine. They had heard that there would be a Stand Up for Science rally on the same day in Washington, DC, and thought that New York City should have one too. In fact, there were 32 across the US in total.
The trio didn’t have much time, and none of them had ever planned a political protest before. “We spent 10 days frantically e-mailing everyone we could think of,” Srishti said, of having to arrange the permits, equipment, insurance, medical and security personnel – and speakers.
Speaking out Two of the protestors in Washington Square in Greenwich Village, New York. (Courtesy: Robert P Crease)
I was astounded at what they accomplished. The first speaker was Harold Varmus, who won the 1989 Nobel Prize for Physiology or Medicine and spent seven years as director of the NIH under President Barack Obama. “People think medicine falls from the sky,” he told protestors, “rather than from academics supported by science funding.”
Another Nobel-prize-winner who spoke was Martin Chalfie from Columbia University, who won the 2008 Nobel Prize for Chemistry.
Speaker after speaker – faculty, foundation directors, lab heads, postdocs, graduate students, New York State politicians – ticked off what was being lost by the budget cuts targeting science.
It included money for research into motor neurone disease, Alzheimer’s, cancer, polio, measles, heart disease and climate science, as well as funding that supports stipends and salaries for postdocs, grad students, university labs and departments.
Lisa Randall, a theoretical physicist at Harvard University, began with a joke: “How many government officials does it take to screw in a light bulb? None: Trump says the job’s done and they stay in the dark.”
Randall continued by enumerating programme and funding cuts that will turn the lights out on important research. “Let’s keep the values that Make America Great – Again,” she concluded.
The crowd of 2000 or so demonstrators was diverse and multi-generational, as is typical for such events in my New York City. I heard at least five different languages being spoken. Everyone was fired up and roared “Boo!” whenever the names of certain politicians were mentioned.
I told Bose about the criticism I had heard that Stand Up for Science was making science look like a special-interest group rather than being carried out in the public interest.
She would have none of it. “They made us an interest group,” Bose insisted. “We grew up thinking that everyone accepted and supported science. This is the first time we’ve had a direct attack on what we do. I can’t think of a single lab that doesn’t have an NSF or NIH grant.”
Seriously funny Many of the demonstrators held messages aloft. (Courtesy: Robert P Crease)
Lots of signs were on display, many fabulously aggressive and angry, ranging from hand-drawn lettering on cardboard to carefully produced placards – some of which I won’t reproduce in a family magazine.
“I shouldn’t have to make a sign saying that ‘Defunding science is wrong’…but here we are,” said one. “Go fact yourself!” and “Science keeps you assholes alive”, said others.
Two female breast-cancer researchers had made a sign that, they told me, put their message in a way that they thought the current US leaders would get: “Science saves boobs.”
I saw others that bitterly mocked the current US president’s apparent ignorance of the distinction between “transgenic” and “transgender”.
“Girls just wanna have funding” said another witty sign. “Executive orders are not peer reviewed”; “Science: because I’d rather not make shit up”; “Science is significant *p<0.05” said others.
The rally ended with 20 minutes of call-and-response chants. Everyone knew the words, thanks to a QR code.
“We will fight?”
“Every day!”
“When science is under attack?”
“Stand up, fight back!”
“What do we want?”
“Answers!”
“When do we want it?”
“After peer review!”
After the spirited chanting, the rally was officially over, but many people stayed, sharing stories, collecting information and seeking ideas for the next moves.
“Obviously,” Bose said, “it’s not going to end here.”
A few months ago, I attended a presentation and reception at the Houses of Parliament in London for companies that had won Business Awards from the Institute of Physics in 2024. What excited me most at the event was hearing about the smaller start-up companies and their innovations. They are developing everything from metamaterials for sound proofing to instruments that can non-invasively measure pressure in the human brain.
The event also reminded me of my own experience working in the small-business sector. After completing my PhD in high-speed aerodynamics at the University of Southampton, I spent a short spell working for what was then the Defence Evaluation and Research Agency (DERA) in Farnborough. But wanting to stay in Southampton, I decided working permanently at DERA wasn’t right for me, so I started looking for a suitable role closer to home.
I soon found myself working as a development engineer at a small engineering company called Stewart Hughes Limited. It was founded in 1980 by Ron Stewart and Tony Hughes, who had been researchers at the Institute of Sound and Vibration Research (ISVR) at Southampton University. Through numerous research contracts, the pair had spent almost a decade developing techniques for monitoring the condition of mechanical machinery from their vibrations.
By attaching accelerometers or vibration sensors to the machines, they discovered that the resulting signals could be processed to determine the physical condition of the devices. Their particular innovation was to find a way to both capture and process the accelerometer signals in near real time to produce indicators relating to the health of the equipment being monitored. It required a combination of hardware and software that was cutting edge at the time.
Exciting times
Although I did not join the firm until early 1994, it still had all the feel of a start-up. We were located in a single office building (in reality it was a repurposed warehouse) with 50 or so staff, about 40 of whom were electronics, software and mechanical engineers. There was a strong emphasis on “systems engineering” – in other words, integrating different disciplines to design and build an overarching solution to a problem.
In its early years, Stewart Hughes had developed a variety of applications for their vibration health monitoring technique. It was used in all sorts of areas, ranging from conveyor belts carrying coal and Royal Navy ships travelling at sea to supersized trucks working on mines. But when I joined, the company was focused on helicopter drivetrains.
In particular, the company had developed a product called Health and Usage Monitoring System (HUMS). The UK’s Civil Aviation Authority required this kind of device to be fitted on all helicopters transporting passengers to and from oil platforms in the North Sea to improve operational safety. Our equipment (and that of rival suppliers – we did not have a monopoly) was used to monitor mechanical parts such as gears, bearings, shafts and rotors.
For someone straight out of university, it was an exciting time. There were lots of technical challenges to be solved, including designing effective ways to process signals in noisy environments and extracting information about critical drivetrain components. We then had to convert the data into indicators that could be monitored to detect and diagnose mechanical issues.
As a physicist, I found myself working closely with the engineers but tended to approach things from a more fundamental angle, helping to explain why certain approaches worked and others didn’t. Don’t forget that the technology developed by Stewart Hughes wasn’t used in the comfort of a physics lab but on a real-life working helicopter. That meant capturing and processing data on the airborne helicopter itself using bespoke electronics to manage high onboard data rates.
After the data were downloaded, they had to be sent on floppy disks or other portable storage devices to ground stations. There the results would be presented in a form to allow customers and our own staff to interpret and diagnose any mechanical problems. We also had to develop ways to monitor an entire fleet of helicopters, continuously learning and developing from experience.
If it all sounds as if working in a small business is plain sailing, well it rarely is. A few years before I joined, Stewart Hughes had ridden out at least one major storm when it was forced to significantly reduce the workforce because anticipated contracts did not materialize. “Black Friday”, as it became known, made the board of directors nervous about taking on additional employees, often relying on existing staff to work overtime instead.
This arrangement actually suited many of the early-career employees, who were keen to quickly expand their work experience and their pay packets. But when I arrived, we were once again up against cash-flow problems – the bane of any small business. Back then there were no digital electronic documents and web portals, which led to some hairy situations.
I can recall several occasions when the company had to book a despatch rider for 2 p.m. on a Friday afternoon to dash a report up the motorway to the Ministry of Defence in London. If we hadn’t got an approval signature and contractual payment before the close of business on the same day, the company literally wouldn’t have been able to open its doors on Monday morning.
Being part of a small company was undoubtedly a formative part of my early career experience
At some stage, however, the company’s bank lost patience with this hand-to-mouth existence and the board of directors was told to put the firm on a more solid financial footing. This edict led to the company structure becoming more formal and the directors being less accessible, with a seasoned professional brought in to help run the business. The resulting change in strategic trajectory eventually led to its sale.
Being part of a small company was undoubtedly a formative part of my early career experience. It was an exciting time and the fact all employees were – literally – under one roof meant that we knew and worked with the decision makers. We always had the opportunity to speak up and influence the future. We got to work on unexpected new projects because there was external funding available. We could be flexible when it came to trying out new software or hardware as part of our product development.
The flip side was that we sometimes had to flex too much, which at times made it hard to stick to a cohesive strategy. We also struggled to find the cash to try out blue-sky or speculative approaches – although there were plenty of good ideas. Such advantages come with being part of a larger corporation, with its bigger budgets and greater overall stability.
That said, I appreciate the diverse and dynamic learning curve I experienced at Stewart Hughes. The founders were innovators whose vision and products have stood the test of time, still being widely used today. The company benefited many people – not just the staff, who went on to successful careers, but also the pilots and passengers on helicopters whose lives may have been saved.
Working in a large corporation is undoubtedly a smoother ride than in a small business. But it’s rarely seat-of-the-pants stuff and I learned so much from my own days at Stewart Hughes. Attending the IOP’s business awards reminded me of the buzz of being in a small firm. It might not be to everyone’s taste, but if you get the chance to work in that environment, do give it serious thought.
Researchers from the Amazon Web Services (AWS) Center for Quantum Computing have announced what they describe as a “breakthrough” in quantum error correction. Their method uses so-called cat qubits to reduce the total number of qubits required to build a large-scale, fault-tolerant quantum computer, and they claim it could shorten the time required to develop such machines by up to five years.
Quantum computers are promising candidates for solving complex problems that today’s classical computers cannot handle. Their main drawback is the tendency for errors to crop up in the quantum bits, or qubits, they use to perform computations. Just like classical bits, the states of qubits can erroneously flip from 0 to 1; this is known as a bit-flip error. Qubits can also suffer inadvertent changes to their phase – a parameter that characterizes their quantum superposition – known as phase-flip errors. A further complication is that whereas classical bits can be copied in order to detect and correct errors, the quantum nature of qubits makes copying impossible. Hence, errors need to be dealt with in other ways.
One error-correction scheme involves surrounding each logical or “data” qubit with “measurement” qubits. The job of the measurement qubits is to detect phase-flip or bit-flip errors in the data qubits without destroying their quantum nature. In 2024, a team at Google Quantum AI showed that this approach is scalable in a system of a few dozen qubits. However, a truly powerful quantum computer would require around a million data qubits and an even larger number of measurement qubits.
Cat qubits to the rescue
The AWS researchers showed that it is possible to reduce this total number of qubits. They did this by using a special type of qubit called a cat qubit. Named after the Schrödinger’s cat thought experiment that illustrates the concept of quantum superposition, cat qubits use the superposition of coherent states to encode information in a way that resists bit flips. Doing so may increase the number of phase-flip errors, but special error-correction algorithms can deal with these efficiently.
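The encoding behind this bit-flip resistance can be written down explicitly. In the standard textbook convention for cat qubits (a general convention, not a detail taken from the AWS paper), the logical states are even and odd superpositions of two coherent states $|\alpha\rangle$ and $|{-\alpha}\rangle$:

```latex
|0_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle,
\qquad
|1_L\rangle \propto |\alpha\rangle - |{-\alpha}\rangle .
```

Because the overlap $\langle \alpha | {-\alpha} \rangle = e^{-2|\alpha|^2}$ shrinks exponentially as the mean photon number $|\alpha|^2$ grows, bit flips between the two coherent states become exponentially rare, while the phase-flip rate grows only roughly linearly with photon number – which is why the scheme trades bit flips for the more easily corrected phase flips.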
The AWS team got this result by building a microchip containing an array of five cat qubits. These are connected to four transmon qubits, which are a type of superconducting qubit with a reduced sensitivity to charge noise (a major source of errors in quantum computations). Here, the cat qubits serve as data qubits, while the transmon qubits measure and correct phase-flip errors. The cat qubits were further stabilized by connecting each of them to a buffer mode that uses a non-linear process called two-photon dissipation to ensure that their noise bias is maintained over time.
According to Harry Putterman, a senior research scientist at AWS, the team’s foremost challenge (and innovation) was to ensure that the system did not introduce too many bit-flip errors. This was important because the system uses a classical repetition code as its “outer layer” of error correction, which left it with no redundancy against residual bit flips. With this aspect under control, the researchers demonstrated that their superconducting quantum circuit suppressed errors from 1.75% per cycle for a three-cat qubit array to 1.65% per cycle for a five-cat qubit array. Achieving this degree of error suppression with larger error-correcting codes previously required tens of additional qubits.
On a scalable path
AWS’s director of quantum hardware, Oskar Painter, says the result will reduce the development time for a full-scale quantum computer by 3-5 years. This is, he says, a direct outcome of the system’s simple architecture as well as its 90% reduction in the “overhead” required for quantum error correction. The team does, however, need to reduce the error rates of the error-corrected logical qubits. “The two most important next steps towards building a fault-tolerant quantum computer at scale is that we need to scale up to several logical qubits and begin to perform and study logical operations at the logical qubit level,” Painter tells Physics World.
According to David Schlegel, a research scientist at the French quantum computing firm Alice & Bob, which specializes in cat qubits, this work marks the beginning of a shift from noisy, classically simulable quantum devices to fully error-corrected quantum chips. He says the AWS team’s most notable achievement is its clever hybrid arrangement of cat qubits for quantum information storage and traditional transmon qubits for error readout.
However, while Schlegel calls the research “innovative”, he says it is not without limitations. Because the AWS chip incorporates transmons, it still needs to address both bit-flip and phase-flip errors. “Other cat qubit approaches focus on completely eliminating bit flips, further reducing the qubit count by more than a factor of 10,” Schlegel says. “But it remains to be seen which approach will prove more effective and hardware-efficient for large-scale error-corrected quantum devices in the long run.”
Physicists in Serbia have begun strike action today in response to what they say is government corruption and social injustice. The one-day strike, called by the country’s official union for researchers, is expected to result in thousands of scientists joining students who have already been demonstrating for months over conditions in the country.
The student protests, which began in November, were triggered by a railway station canopy collapse that killed 15 people. Since then, they have grown into an ongoing mass protest seen by many as indirectly seeking to change the government, currently led by president Aleksandar Vučić.
The Serbian government, however, claims it has met all student demands such as transparent publication of all documents related to the accident and the prosecution of individuals who have disrupted the protests. The government has also accepted the resignation of prime minister Miloš Vučević as well as transport minister Goran Vesić and trade minister Tomislav Momirović, who previously held the transport role during the station’s reconstruction.
“The students are championing noble causes that resonate with all citizens,” says Igor Stanković, a statistical physicist at the Institute of Physics (IPB) in Belgrade, who is joining today’s walkout. In January, around 100 employees from the IPB in Belgrade signed a letter in support of the students, one of many from various research institutions since December.
Stanković believes that the corruption and lack of accountability that students are protesting against “stem from systemic societal and political problems, including entrenched patronage networks and a lack of transparency”.
“I believe there is no turning back now,” adds Stanković. “The students have gained support from people across the academic spectrum – including those I personally agree with and others I believe bear responsibility for the current state of affairs. That, in my view, is their strength: standing firmly behind principles, not political affiliations.”
Meanwhile, Miloš Stojaković, a mathematician at the University of Novi Sad, says that the faculty at the university have backed the students from the start especially given that they are making “a concerted effort to minimize disruptions to our scientific work”.
Many university faculties in Serbia have been blockaded by protesting students, who have been using them as a base for their demonstrations. “The situation will have a temporary negative impact on research activities,” admits Dejan Vukobratović, an electrical engineer from the University of Novi Sad. However, most researchers are “finding their way through this situation”, he adds, with “most teams keeping their project partners and funders informed about the situation, anticipating possible risks”.
Missed exams
Amidst the continuing disruptions, the Serbian national science foundation has twice delayed a deadline for the award of €24m of research grants, citing “circumstances that adversely affect the collection of project documentation”. The foundation adds that 96% of its survey participants requested an extension. The researchers’ union has also called on the government to freeze the work status of PhD students employed as research assistants or interns to accommodate the months-long pause to their work. The government has promised to look into it.
Meanwhile, universities are setting up expert groups to figure out how to deal with the delays to studies and missed exams. Physics World approached Serbia’s government for comment, but did not receive a reply.
Researchers in Australia have developed a nanosensor that can detect the onset of gestational diabetes with 95% accuracy. Demonstrated by a team led by Carlos Salomon at the University of Queensland, the superparamagnetic “nanoflower” sensor could enable doctors to detect a variety of complications in the early stages of pregnancy.
Many complications in pregnancy can have profound and lasting effects on both the mother and the developing foetus. Today, these conditions are detected using methods such as blood tests, ultrasound screening and blood pressure monitoring. In many cases, however, their sensitivity is severely limited in the earliest stages of pregnancy.
“Currently, most pregnancy complications cannot be identified until the second or third trimester, which means it can sometimes be too late for effective intervention,” Salomon explains.
To tackle this challenge, Salomon and his colleagues are investigating the use of specially engineered nanoparticles to isolate and detect biomarkers in the blood associated with complications in early pregnancy. Specifically, they aim to detect the protein molecules carried by extracellular vesicles (EVs) – tiny, membrane-bound particles released by the placenta, which play a crucial role in cell signalling.
In their previous research, the team pioneered the development of superparamagnetic nanostructures that selectively bind to specific EV biomarkers. Superparamagnetism occurs specifically in small, ferromagnetic nanoparticles, causing their magnetization to randomly flip direction under the influence of temperature. When proteins are bound to the surfaces of these nanostructures, their magnetic responses are altered detectably, providing the team with a reliable EV sensor.
“This technology has been developed using nanomaterials to detect biomarkers at low concentrations,” explains co-author Mostafa Masud. “This is what makes our technology more sensitive than current testing methods, and why it can pick up potential pregnancy complications much earlier.”
Previous versions of the sensor used porous nanocubes that efficiently captured EVs carrying a key placental protein named PLAP. By detecting unusual levels of PLAP in the blood of pregnant women, this approach enabled the researchers to detect complications far more easily than with existing techniques. However, the method generally required detection times lasting several hours, making it unsuitable for on-site screening.
In their latest study, reported in Science Advances, Salomon’s team started with a deeper analysis of the EV proteins carried by these blood samples. Through advanced computer modelling, they discovered that complications can be linked to changes in the relative abundance of PLAP and another placental protein, CD9.
Based on these findings, they developed a new superparamagnetic nanosensor capable of detecting both biomarkers simultaneously. Their design features flower-shaped nanostructures made of nickel ferrite, which were embedded into specialized testing strips to boost their sensitivity even further.
Using this sensor, the researchers collected blood samples from 201 pregnant women at 11 to 13 weeks’ gestation. “We detected possible complications, such as preterm birth, gestational diabetes and preeclampsia, which is high blood pressure during pregnancy,” Salomon describes. For gestational diabetes, the sensor demonstrated 95% sensitivity in identifying at-risk cases, and 100% specificity in ruling out healthy cases.
Based on these results, the researchers are hopeful that further refinements to their nanoflower sensor could lead to a new generation of EV protein detectors, enabling the early diagnosis of a wide range of pregnancy complications.
“With this technology, pregnant women will be able to seek medical intervention much earlier,” Salomon says. “This has the potential to revolutionize risk assessment and improve clinical decision-making in obstetric care.”
In this episode of the Physics World Weekly podcast, we explore how computational physics is being used to develop new quantum materials; and we look at how ultrasound can help detect breast cancer.
Our first guest is Bhaskaran Muralidharan, who leads the Computational Nanoelectronics & Quantum Transport Group at the Indian Institute of Technology Bombay. In a conversation with Physics World’s Hamish Johnston, he explains how computational physics is being used to develop new materials and devices for quantum science and technology. He also shares his personal perspective on quantum physics in this International Year of Quantum Science and Technology.
Our second guest is Daniel Sarno of the UK’s National Physical Laboratory, who is an expert in the medical uses of ultrasound. In a conversation with Physics World’s Tami Freeman, Sarno explains why conventional mammography can struggle to detect cancer in patients with higher density breast tissue. This is a particular problem because women with such tissue are at higher risk of developing the disease. To address this problem, Sarno and colleagues have developed an ultrasound technique for measuring tissue density and are commercializing it via a company called sona.
Bhaskaran Muralidharan is an editorial board member of Materials for Quantum Technology. The journal is produced by IOP Publishing, which also brings you Physics World.
A counterintuitive result from Einstein’s special theory of relativity has finally been verified more than 65 years after it was predicted. The prediction states that objects moving near the speed of light will appear rotated to an external observer, and physicists in Austria have now observed this experimentally using a laser and an ultrafast stop-motion camera.
A central postulate of special relativity is that the speed of light is the same in all reference frames. An observer who sees an object travelling close to the speed of light and makes simultaneous measurements of its front and back (in the direction of travel) will therefore find that, because photons coming from each end of the object both travel at the speed of light, the object is measurably shorter than it would be for an observer in the object’s reference frame. This is the long-established phenomenon of Lorentz contraction.
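Quantitatively, an object of rest length $L_0$ moving at speed $v$ is measured in the observer's frame to have the shorter length

```latex
L = \frac{L_0}{\gamma} = L_0 \sqrt{1 - \frac{v^2}{c^2}} ,
```

so at $v = 0.8c$, for example, the measured length is only 60% of the rest length.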
In 1959, however, two physicists, James Terrell and the future Nobel laureate Roger Penrose, independently noted something else. If the object has any significant optical depth relative to its length – in other words, if its extension parallel to the observer’s line of sight is comparable to its extension perpendicular to this line of sight, as is the case for a cube or a sphere – then photons from the far side of the object (from the observer’s perspective) will take longer to reach the observer than photons from its near side. Hence, if a camera takes an instantaneous snapshot of the moving object, it will collect photons from the far side that were emitted earlier at the same time as it collects photons from the near side that were emitted later.
This time difference stretches the image out, making the object appear longer even as Lorentz contraction makes its measured length shorter. Because the stretching and the contraction cancel out, the photographed object will not appear to change length at all.
But that isn’t the whole story. For the cancellation to work, the photons reaching the observer from the part of the object facing its direction of travel must have been emitted later than the photons that come from its trailing edge. This is because photons from the far and back sides come from parts of the object that would normally be obscured by the front and near sides. However, because the object moves in the time it takes photons to propagate, it creates a clear passage for trailing-edge photons to reach the camera.
The cumulative effect, Terrell and Penrose showed, is that instead of appearing to contract – as one would naïvely expect – a three-dimensional object photographed travelling at nearly the speed of light will appear rotated.
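For a cube, this cancellation can be checked with a few lines of arithmetic. The sketch below is our own illustration of the textbook small-object approximation, not the Vienna group's analysis: the leading face appears Lorentz-contracted by a factor 1/γ, the trailing face is revealed with apparent width βL0 by the light-travel delay, and together the two projected widths match those of a cube rigidly rotated by arcsin(β).

```python
import math

def terrell_projection(beta, L0=1.0):
    """Apparent face widths of a cube of side L0 moving at speed beta*c,
    photographed at closest approach (small-object approximation)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    front = L0 / gamma   # leading face, Lorentz-contracted
    side = L0 * beta     # trailing face, revealed by light-travel delay
    # A cube rigidly rotated by angle theta projects face widths
    # L0*cos(theta) and L0*sin(theta), so the photograph matches a
    # rotation by theta = arcsin(beta).
    theta = math.degrees(math.asin(beta))
    return front, side, theta

front, side, theta = terrell_projection(0.8)
# At beta = 0.8: front ≈ 0.6, side = 0.8, theta ≈ 53 degrees –
# the image is a rotated cube, not a squashed one.
```

The consistency check is that front² + side² = (1 − β²) + β² = 1, exactly the signature of a rigid rotation rather than a contraction.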
The Terrell effect in the lab
While multiple computer models have been constructed to illustrate this “Terrell effect” rotation, it has largely remained a thought experiment. In the new work, however, Peter Schattschneider of the Technical University of Vienna and colleagues realized it in an experimental setup. To do this, they shone pulsed laser light onto one of two moving objects: a sphere or a cube. The laser pulses were synchronized to a picosecond camera that collected light scattered off the object.
The researchers programmed the camera to produce a series of images at each position of the moving object. They then allowed the object to move to the next position and, when the laser pulsed again, recorded another series of ultrafast images with the camera. By linking together images recorded from the camera in response to different laser pulses, the researchers were able to, in effect, reduce the speed of light to less than 2 m/s.
When they did so, they observed that the object rotated rather than contracted, just as Terrell and Penrose predicted. While their results did deviate somewhat from theoretical predictions, this was unsurprising given that the predictions rest on certain assumptions. One of these is that incoming rays of light should be parallel to the observer, which is only true if the distance from object to observer is infinite. Another is that each image should be recorded instantaneously, whereas the shutter speed of real cameras is inevitably finite.
Because their research is awaiting publication by a journal with an embargo policy, Schattschneider and colleagues were unavailable for comment. However, the Harvard University astrophysicist Avi Loeb, who suggested in 2017 that the Terrell effect could have applications for measuring exoplanet masses, is impressed: “What [the researchers] did here is a very clever experiment where they used very short pulses of light from an object, then moved the object, and then looked again at the object and then put these snapshots together into a movie – and because it involves different parts of the body reflecting light at different times, they were able to get exactly the effect that Terrell and Penrose envisioned,” he says. Though Loeb notes that there’s “nothing fundamentally new” in the work, he nevertheless calls it “a nice experimental confirmation”.
The research is available on the arXiv pre-print server.
The integrity of science could be threatened by publishers changing scientific papers after they have been published – but without making any formal public notification. That’s the verdict of a new study by an international team of researchers, who term such changes “stealth corrections”. They want publishers to publicly log all changes that are made to published scientific research (Learned Publishing 38 e1660).
When corrections are made to a paper after publication, it is standard practice for a notice to be added to the article explaining what has been changed and why. This transparent record keeping is designed to retain trust in the scientific record. But last year, René Aquarius, a neurosurgery researcher at Radboud University Medical Center in the Netherlands, noticed this does not always happen.
After spotting an issue with an image in a published paper, he raised concerns with the authors, who acknowledged the concerns and stated that they were “checking the original data to figure out the problem” and would keep him updated. However, Aquarius was surprised to see that the figure had been updated a month later, but without a correction notice stating that the paper had been changed.
Teaming up with colleagues from Belgium, France, the UK and the US, Aquarius began to identify and document similar stealth corrections. They did so by recording instances that they and other “science sleuths” had already found and by searching online for terms such as “no erratum”, “no corrigendum” and “stealth” on PubPeer – an online platform where users discuss and review scientific publications.
Sustained vigilance
The researchers define a stealth correction as at least one post-publication change made to a scientific article without a correction note or any other indicator that the publication has been temporarily or permanently altered. The researchers identified 131 stealth corrections spread across 10 scientific publishers and in different fields of research. In 92 of the cases, the stealth correction involved a change in the content of the article, such as to figures, data or text.
The remaining unrecorded changes covered three categories: “author information” such as the addition of authors or changes in affiliation; “additional information”, including edits to ethics and conflict of interest statements; and “the record of editorial process”, for instance alterations to editor details and publication dates. “For most cases, we think that the issue was big enough to have a correction notice that informs the readers what was happening,” Aquarius says.
After the researchers began drawing attention to the stealth corrections, five of the papers received an official correction notice, nine were given expressions of concern, 17 reverted to the original version and 11 were retracted. Aquarius says he believes it is “important” that readers know what has happened to a paper “so they can make up their own mind whether they want to trust [it] or not”.
The researchers would now like to see publishers implementing online correction logs that make it impossible to change anything in a published article without it being transparently reported, however small the edit. They also say that clearer definitions and guidelines are required concerning what constitutes a correction and needs a correction notice.
“We need to have sustained vigilance in the scientific community to spot these stealth corrections and also register them publicly, for example on PubPeer,” Aquarius says.
The story begins with the startling event that gives the book its unusual moniker: the firing of a Colt revolver in the famous London cathedral in 1951. A similar experiment was also performed in the Royal Festival Hall in the same year (see above photo). Fortunately, this was simply a demonstration for journalists of an experiment to understand and improve the listening experience in a space notorious for its echo and other problematic acoustic features.
St Paul’s was completed in 1711 and Smyth, a historian of architecture, science and construction at the University of Cambridge in the UK, explains that until the turn of the last century, the only way to evaluate the quality of sound in such a building was by ear. The book then reveals how this changed. Over five decades of innovative experiments, scientists and architects built a quantitative understanding of how a building’s shape, size and interior furnishings determine the quality of speech and music through reflection and absorption of sound waves.
We are first taken back to the dawn of the 20th century and shown how the evolution of architectural acoustics as a scientific field was driven by a small group of dedicated researchers. The group includes architect and pioneering acoustician Hope Bagenal, along with several physicists, notably the Harvard-based US physicist Wallace Clement Sabine.
Details of Sabine’s career, alongside those of Bagenal, whose personal story forms the backbone for much of the book, deftly put a human face on the research that transformed these public spaces. Perhaps Sabine’s most significant contribution was the derivation of a formula to predict the time taken for sound to fade away in a room. Known as the “reverberation time”, this became a foundation of architectural acoustics, and his mathematical work still forms the basis for the field today.
The presence of people, objects and reflective or absorbing surfaces all affect a room’s acoustics. Smyth describes how materials ranging from rugs and timber panelling to specially developed acoustic plaster and tiles have all been investigated for their acoustic properties. She also vividly details the venues where acoustics interventions were added – such as the reflective teak flooring and vast murals painted on absorbent felt in the Henry Jarvis Memorial Hall of the Royal Institute of British Architects in London.
Other locations featured include the Royal Albert Hall, Abbey Road Studios, White Rock Pavilion at Hastings, and the Assembly Chamber of the Legislative Building in New Delhi, India. Temporary structures and spaces for musical performance are highlighted too. These include the National Gallery while it was cleared of paintings during the Second World War and the triumph of acoustic design that was the Glasgow Empire Exhibition concert hall – built for the 1938 event and sadly dismantled that same year.
Unsurprisingly, much of this acoustic work was either punctuated or heavily influenced by the two world wars. While in the trenches during the First World War, Bagenal wrote a journal paper on cathedral acoustics that detailed his pre-war work at St Paul’s Cathedral, Westminster Cathedral and Westminster Abbey. His paper discussed timbre, resonant frequency “and the effects of interference and delay on clarity and harmony”.
In 1916, back in England recovering from a shellfire injury, Bagenal started what would become a long-standing research collaboration with the commandant of the hospital where he was recuperating – who happened to be Alex Wood, a physics lecturer at Cambridge. Equally fascinating is hearing about the push in the wake of the First World War for good speech acoustics in public spaces used for legislative and diplomatic purposes.
Smyth also relates tales of the wrangling that sometimes took place over funding for acoustic experiments on public buildings, and how, as the 20th century progressed, companies specializing in acoustic materials sprang up – and in some cases made dubious claims about the merits of their products. Meanwhile, new technologies such as tape recorders and microphones helped bring a more scientific approach to architectural acoustics research.
The author concludes by describing how the acoustic research from the preceding decades influenced the auditorium design of the Royal Festival Hall on the South Bank in London, which, as Smyth states, was “the first building to have been designed from the outset as a manifestation of acoustic science”.
As evidenced by the copious notes, the wealth of contemporary quotes, and the captivating historical photos and excerpts from archive documents, this book is well-researched. But while I enjoyed the pace and found myself hooked into the story, I found the text repetitive in places, and felt that more details about the physics of acoustics would have enhanced the narrative.
But these are minor grumbles. Overall Smyth paints an evocative picture, transporting us into these legendary auditoria. I have always found it a rather magical experience attending concerts at the Royal Albert Hall. Now, thanks to this book, the next time I have that pleasure I will do so with a far greater understanding of the role physics and physicists played in shaping the music I hear. For me at least, listening will never be quite the same again.
2024 Manchester University Press 328pp £25.00/$36.95
As service lifetimes of electric vehicle (EV) and grid storage batteries continually improve, it has become increasingly important to understand how Li-ion batteries perform after extensive cycling. Using a combination of spatially resolved synchrotron x-ray diffraction and computed tomography, the complex kinetics and spatially heterogeneous behavior of extensively cycled cells can be mapped and characterized under both near-equilibrium and non-equilibrium conditions.
This webinar shows examples of commercial cells with thousands (even tens of thousands) of cycles over many years. The behaviour of such cells can be surprisingly complex and spatially heterogeneous, requiring a different approach to analysis and modelling than what is typically used in the literature. Using this approach, we investigate the long-term behavior of Ni-rich NMC cells and examine ways to prevent degradation. This work also showcases the incredible durability of single-crystal cathodes, which show very little evidence of mechanical or kinetic degradation after more than 20,000 cycles – the equivalent of driving an EV for 8 million km!
Toby Bond
Toby Bond is a senior scientist in the Industrial Science group at the Canadian Light Source (CLS), Canada’s national synchrotron facility. He is a specialist in x-ray imaging and diffraction, focusing on in-situ and operando analysis of batteries and fuel cells for industry clients of the CLS. Bond is an electrochemist by training, who completed his MSc and PhD in Jeff Dahn’s laboratory at Dalhousie University with a focus on developing methods and instrumentation to characterize long-term degradation in Li-ion batteries.
The Superconducting Quantum Materials and Systems (SQMS) Center, led by Fermi National Accelerator Laboratory (Chicago, Illinois), is on a mission “to develop beyond-the-state-of-the-art quantum computers and sensors applying technologies developed for the world’s most advanced particle accelerators”. SQMS director Anna Grassellino talks to Physics World about the evolution of a unique multidisciplinary research hub for quantum science, technology and applications.
What’s the headline take on SQMS?
Established as part of the US National Quantum Initiative (NQI) Act of 2018, SQMS is one of the five National Quantum Information Science Research Centers run by the US Department of Energy (DOE). With funding of $115m through its initial five-year funding cycle (2020-25), SQMS represents a coordinated, at-scale effort – comprising 35 partner institutions – to address pressing scientific and technological challenges for the realization of practical quantum computers and sensors, as well as exploring how novel quantum tools can advance fundamental physics.
Our mission is to tackle one of the biggest cross-cutting challenges in quantum information science: the lifetime of superconducting quantum states – also known as the coherence time (the length of time that a qubit can effectively store and process information). Understanding and mitigating the physical processes that cause decoherence – and, by extension, limit the performance of superconducting qubits – is critical to the realization of practical and useful quantum computers and quantum sensors.
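One way to see why coherence time matters is as a budget: it caps roughly how many sequential gate operations a processor can run before the quantum information is lost. A back-of-envelope sketch – the 50 ns gate time is an illustrative assumption, not an SQMS figure:

```python
def gate_budget(coherence_time_s, gate_time_s):
    """Rough number of sequential gates that fit within one coherence time."""
    return round(coherence_time_s / gate_time_s)

# Illustrative only: compare a 100 microsecond qubit with a 1 ms qubit,
# assuming a hypothetical 50 ns gate time.
for t_coh in (100e-6, 1e-3):
    print(f"{t_coh * 1e6:.0f} us coherence -> ~{gate_budget(t_coh, 50e-9)} gates")
```

The tenfold improvement in coherence translates directly into a tenfold deeper circuit, which is why decoherence is the cross-cutting challenge the centre targets.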
How is the centre delivering versus the vision laid out in the NQI?
SQMS has brought together an outstanding group of researchers who, collectively, have utilized a suite of enabling technologies from Fermilab’s accelerator science programme – and from our network of partners – to realize breakthroughs in qubit chip materials and fabrication processes; design and development of novel quantum devices and architectures; as well as the scale-up of complex quantum systems. Central to this endeavour are superconducting materials, superconducting radiofrequency (SRF) cavities and cryogenic systems – all workhorse technologies for particle accelerators employed in high-energy physics, nuclear physics and materials science.
Collective endeavour At the core of SQMS success are top-level scientists and engineers leading the centre’s cutting-edge quantum research programmes. From left to right: Alexander Romanenko, Silvia Zorzetti, Tanay Roy, Yao Lu, Anna Grassellino, Akshay Murthy, Roni Harnik, Hank Lamm, Bianca Giaccone, Mustafa Bal, Sam Posen. (Courtesy: Hannah Brumbaugh/Fermilab)
Take our research on decoherence channels in quantum devices. SQMS has made significant progress in the fundamental science and mitigation of losses in the oxides, interfaces, substrates and metals that underpin high-coherence qubits and quantum processors. These advances – the result of wide-ranging experimental and theoretical investigations by SQMS materials scientists and engineers – led, for example, to the demonstration of transmon qubits (a type of charge qubit exhibiting reduced sensitivity to noise) with systematic improvements in coherence, record-breaking lifetimes of over a millisecond, and reductions in performance variation.
How are you building on these breakthroughs?
First of all, we have worked on technology transfer. By developing novel chip fabrication processes together with quantum computing companies, we have helped our industry partners achieve up to a 2.5x improvement in the error performance of their superconducting chip-based quantum processors.
We have combined these qubit advances with Fermilab’s ultrahigh-coherence 3D SRF cavities: advancing our efforts to build a cavity-based quantum processor and, in turn, demonstrating the longest-lived superconducting multimode quantum processor unit ever built (coherence times in excess of 20 ms). These systems open the path to a more powerful qudit-based quantum computing approach. (A qudit is a multilevel quantum unit that can occupy more than two states.) What’s more, SQMS has already put these novel systems to use as quantum sensors within Fermilab’s particle physics programme – probing for the existence of dark-matter candidates, for example, as well as enabling precision measurements and fundamental tests of quantum mechanics.
Elsewhere, we have been pushing early-stage societal impacts of quantum technologies and applications – including the use of quantum computing methods to enhance data analysis in magnetic resonance imaging (MRI). Here, SQMS scientists are working alongside clinical experts at New York University Langone Health to apply quantum techniques to quantitative MRI, an emerging diagnostic modality that could one day provide doctors with a powerful tool for evaluating tissue damage and disease.
What technologies pursued by SQMS will be critical to the scale-up of quantum systems?
There are several important examples, but I will highlight two of specific note. For starters, there’s our R&D effort to efficiently scale millikelvin-regime cryogenic systems. SQMS teams are currently developing technologies for larger and higher-cooling-power dilution refrigerators. We have designed and prototyped novel systems allowing over 20x higher cooling power, a necessary step to enable the scale-up to thousands of superconducting qubits per dilution refrigerator.
Materials insights The SQMS collaboration is studying the origins of decoherence in state-of-the-art qubits (above) using a raft of advanced materials characterization techniques – among them time-of-flight secondary-ion mass spectrometry, cryo electron microscopy and scanning probe microscopy. With a parallel effort in materials modelling, the centre is building a hierarchy of loss mechanisms that is informing how to fabricate the next generation of high-coherence qubits and quantum processors. (Courtesy: Dan Svoboda/Fermilab)
Also, we are working to optimize microwave interconnects with very low energy loss, taking advantage of SQMS expertise in low-loss superconducting resonators and materials in the quantum regime. (Quantum interconnects are critical components for linking devices together to enable scaling to large quantum processors and systems.)
How important are partnerships to the SQMS mission?
Partnerships are foundational to the success of SQMS. The DOE National Quantum Information Science Research Centers were conceived and built as mini-Manhattan projects, bringing together the power of multidisciplinary and multi-institutional groups of experts. SQMS is a leading example of building bridges across the “quantum ecosystem” – with other national and federal laboratories, with academia and industry, and across agency and international boundaries.
In this way, we have scaled up unique capabilities – multidisciplinary know-how, infrastructure and a network of R&D collaborations – to tackle the decoherence challenge and to harvest the power of quantum technologies. A case study in this regard is Ames National Laboratory, a specialist DOE centre for materials science and engineering on the campus of Iowa State University.
Ames is a key player in a coalition of materials science experts – coordinated by SQMS – seeking to unlock fundamental insights about qubit decoherence at the nanoscale. Through Ames, SQMS and its partners get access to powerful analytical tools – modalities like terahertz spectroscopy and cryo transmission electron microscopy – that aren’t routinely found in academia or industry.
What are the drivers for your engagement with the quantum technology industry?
The SQMS strategy for industry engagement is clear: to work hand-in-hand to solve technological challenges utilizing complementary facilities and expertise; to abate critical performance barriers; and to bring bidirectional value. I believe that even large companies do not have the ability to achieve practical quantum computing systems working exclusively on their own. The challenges at hand are vast and often require R&D partnerships among experts across diverse and highly specialized disciplines.
I also believe that DOE National Laboratories – given their depth of expertise and ability to build large-scale and complex scientific instruments – are, and will continue to be, key players in the development and deployment of the first useful and practical quantum computers. This means not only as end-users, but as technology developers. Our vision at SQMS is to lay the foundations of how we are going to build these extraordinary machines in partnership with industry. It’s about learning to work together and leveraging our mutual strengths.
How do Rigetti and IBM, for example, benefit from their engagement with SQMS?
The partnership with IBM, although more recent, is equally significant. Together with IBM researchers, we are interested in developing quantum interconnects – including the development of high-Q cables to make them less lossy – for the high-fidelity connection and scale-up of quantum processors into large and useful quantum computing systems.
At the same time, SQMS scientists are exploring simulations of problems in high-energy physics and condensed-matter physics using quantum computing cloud services from Rigetti and IBM.
Presumably, similar benefits accrue to suppliers of ancillary equipment to the SQMS quantum R&D programme?
Correct. We challenge our suppliers of advanced materials and fabrication equipment to go above and beyond, working closely with them on continuous improvement and new product innovation. In this way, for example, our suppliers of silicon and sapphire substrates and nanofabrication platforms – key technologies for advanced quantum circuits – benefit from SQMS materials characterization tools and fundamental physics insights that would simply not be available in isolation. These technologies are still at a stage where we need fundamental science to help define the ideal materials specifications and standards.
We are also working with companies developing quantum control boards and software, collaborating on custom solutions to unique hardware architectures such as the cavity-based qudit platforms in development at Fermilab.
How is your team building capacity to support quantum R&D and technology innovation?
We’ve pursued a twin-track approach to the scaling of SQMS infrastructure. On the one hand, we have augmented – very successfully – a network of pre-existing facilities at Fermilab and at SQMS partners, spanning accelerator technologies, materials science and cryogenic engineering. In aggregate, this covers hundreds of millions of dollars’ worth of infrastructure that we have re-employed or upgraded for studying quantum devices, including access to a host of leading-edge facilities via our R&D partners – for example, microkelvin-regime quantum platforms at Royal Holloway, University of London, and underground quantum testbeds at INFN’s Gran Sasso Laboratory.
Thinking big in quantum The SQMS Quantum Garage (above) houses a suite of R&D testbeds to support granular studies of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects. (Courtesy: Ryan Postel/Fermilab)
In parallel, we have invested in new and dedicated infrastructure to accelerate our quantum R&D programme. The Quantum Garage here at Fermilab is the centrepiece of this effort: a 560 square-metre laboratory with a fleet of six additional dilution refrigerators for cryogenic cooling of SQMS experiments as well as test, measurement and characterization of superconducting qubits, quantum processors, high-coherence quantum sensors and quantum interconnects.
What is the vision for the future of SQMS?
SQMS is putting together an exciting proposal in response to a DOE call for the next five years of research. Our efforts on coherence will remain paramount. We have come a long way, but the field still needs to make substantial advances in terms of noise reduction of superconducting quantum devices. There’s great momentum and we will continue to build on the discoveries made so far.
We have also demonstrated significant progress regarding our 3D SRF cavity-based quantum computing platform. So much so that we now have a clear vision of how to implement a mid-scale prototype quantum computer with over 50 qudits in the coming years. To get us there, we will be laying out an exciting SQMS quantum computing roadmap by the end of 2025.
It’s equally imperative to address the scalability of quantum systems. Together with industry, we will work to demonstrate practical and economically feasible approaches to be able to scale up to large quantum computing data centres with millions of qubits.
Finally, SQMS scientists will work on exploring early-stage applications of quantum computers, sensors and networks. Technology will drive the science, science will push the technology – a continuous virtuous cycle that I’m certain will lead to plenty more ground-breaking discoveries.
How SQMS is bridging the quantum skills gap
Education, education, education SQMS hosted the inaugural US Quantum Information Science (USQIS) School in summer 2023. Held annually, the USQIS is organized in conjunction with other DOE National Laboratories, academia and industry. (Courtesy: Dan Svoboda/Fermilab)
As with its efforts in infrastructure and capacity-building, SQMS is addressing quantum workforce development on multiple fronts.
Across the centre, Grassellino and her management team have recruited upwards of 150 technical staff and early-career researchers over the past five years to accelerate the SQMS R&D effort. “These ‘boots on the ground’ are a mix of PhD students, postdoctoral researchers plus senior research and engineering managers,” she explains.
Another significant initiative was launched in summer 2023, when SQMS hosted nearly 150 delegates at Fermilab for the inaugural US Quantum Information Science (USQIS) School – now an annual event organized in conjunction with other National Laboratories, academia and industry. The long-term goal is to develop the next generation of quantum scientists, engineers and technicians by sharing SQMS know-how and experimental skills in a systematic way.
“The prioritization of quantum education and training is key to sustainable workforce development,” notes Grassellino. With this in mind, she is currently in talks with academic and industry partners about an SQMS-developed master’s degree in quantum engineering. Such a programme would reinforce the centre’s already diverse internship initiatives, with graduate students benefiting from dedicated placements at SQMS and its network partners.
“Wherever possible, we aim to assign our interns with co-supervisors – one from a National Laboratory, say, another from industry,” adds Grassellino. “This ensures the learning experience shapes informed decision-making about future career pathways in quantum science and technology.”
When a mantis shrimp uses shock waves to strike and kill its prey, how does it prevent those shock waves from damaging its own tissues? Researchers at Northwestern University in the US have answered this question by identifying a structure within the shrimp that filters out harmful frequencies. Their findings, which they obtained by using ultrasonic techniques to investigate surface and bulk wave propagation in the shrimp’s dactyl club, could lead to novel advanced protective materials for military and civilian applications.
Dactyl clubs are hammer-like structures located on each side of a mantis shrimp’s body. They store energy in elastic structures similar to springs that are latched in place by tendons. When the shrimp contracts its muscles, the latch releases, unleashing the stored energy and propelling the club forward with a peak force of up to 1500 N.
This huge force (relative to the animal’s size) creates stress waves in both the shrimp’s target – typically a hard-shelled animal such as a crab or mollusc – and the dactyl club itself, explains biomechanical engineer Horacio Dante Espinosa, who led the Northwestern research effort. The club’s punch also creates bubbles that rapidly collapse to produce shockwaves in the megahertz range. “The collapse of these bubbles (a process known as cavitation collapse), which takes place in just nanoseconds, releases intense bursts of energy that travel through the target and shrimp’s club,” he explains. “This secondary shockwave effect makes the shrimp’s strike even more devastating.”
Protective phononic armour
So how do the shrimp’s own soft tissues escape damage? To answer this question, Espinosa and colleagues studied the animal’s armour using transient grating spectroscopy (TGS) and asynchronous optical sampling (ASOPS). These ultrasonic techniques respectively analyse how stress waves propagate through a material and characterize the material’s microstructure. In this work, Espinosa and colleagues used them to provide high-resolution, frequency-dependent wave propagation characteristics that previous studies had not investigated experimentally.
The team identified three distinct regions in the shrimp’s dactyl club. The outermost layer consists of a hard hydroxyapatite coating approximately 70 μm thick, which is durable and resists damage. Beneath this, an approximately 500 μm-thick layer of mineralized chitin fibres arranged in a herringbone pattern enhances the club’s fracture resistance. Deeper still, Espinosa explains, is a region that features twisted fibre bundles organized in a corkscrew-like arrangement known as a Bouligand structure. Within this structure, each successive layer is rotated relative to its neighbours, giving it a unique and crucial role in controlling how stress waves propagate through the shrimp.
“Our key finding was the existence of phononic bandgaps (frequency ranges within which stress waves cannot propagate) in the Bouligand structure,” Espinosa explains. “These bandgaps filter out harmful stress waves so that they do not propagate back into the shrimp’s club and body. They thus preserve the club’s integrity and protect soft tissue in the animal’s appendage.”
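The Bouligand structure is far more intricate than anything that fits in a few lines, but the simplest system that exhibits a phononic bandgap – a 1D chain of alternating masses coupled by identical springs – shows the same essential physics: between the acoustic and optical dispersion branches lies a band of frequencies for which no propagating mode exists. A sketch in arbitrary units (the masses and spring constant are illustrative, not values from the study):

```python
import numpy as np

def diatomic_chain_branches(m1, m2, K, k, a=1.0):
    """Dispersion of a 1D diatomic mass-spring chain (masses m1 and m2,
    spring constant K, unit-cell length a). Returns the acoustic and
    optical branch frequencies omega(k)."""
    s = K * (1 / m1 + 1 / m2)
    root = np.sqrt(s**2 - 4 * K**2 * np.sin(k * a / 2) ** 2 / (m1 * m2))
    return np.sqrt(s - root), np.sqrt(s + root)

k = np.linspace(0, np.pi, 500)  # first Brillouin zone for a = 1
acoustic, optical = diatomic_chain_branches(1.0, 3.0, 1.0, k)

# No propagating modes exist between the top of the acoustic branch and
# the bottom of the optical branch: that window is the phononic bandgap.
gap = (acoustic.max(), optical.min())
print(gap)  # edges at sqrt(2K/m_heavy) and sqrt(2K/m_light): ~(0.816, 1.414)
```

In the shrimp, the analogous gap sits around the megahertz frequencies produced by cavitation collapse, so those waves are reflected rather than transmitted into the soft tissue.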
The team also employed finite element simulations incorporating so-called Bloch-Floquet analyses and graded mechanical properties to understand the phonon bandgap effects. The most surprising result, Espinosa tells Physics World, was the formation of a flat branch in the 450–480 MHz range, which corresponds to the frequencies generated by bubble collapse during club impact.
Evolution and its applications
For Espinosa and his colleagues, a key goal of their research is to understand how evolution leads to natural composite materials with unique photonic, mechanical and thermal properties. In particular, they seek to uncover how hierarchical structures in natural materials and the chemistry of their constituents produce emergent mechanical properties. “The mantis shrimp’s dactyl club is an example of how evolution leads to materials capable of resisting extreme conditions,” Espinosa says. “In this case, it is the violent impacts the animal uses for predation or protection.”
The properties of the natural “phononic shield” unearthed in this work might inspire advanced protective materials for both military and civilian applications, he says. Examples could include the design of helmets, personnel armour, and packaging for electronics and other sensitive devices.
In this study, which is described in Science, the researchers analysed two-dimensional simulations of wave behaviour. Future research, they say, should focus on more complex three-dimensional simulations to fully capture how the club’s structure interacts with shock waves. “Designing aquatic experiments with state-of-the-art instrumentation would also allow us to investigate how phononic properties function in submerged underwater conditions,” says Espinosa.
The team would also like to use biomimetics to make synthetic metamaterials based on the insights gleaned from this work.
From its sites in South Africa and Australia, the Square Kilometre Array (SKA) Observatory last year achieved “first light” – producing its first-ever images. When its planned 197 dishes and 131,072 antennas are fully operational, the SKA will be the largest and most sensitive radio telescope in the world.
Under the umbrella of a single observatory, the telescopes at the two sites will work together to survey the cosmos. The Australian side, known as SKA-Low, will focus on low frequencies, while South Africa’s SKA-Mid will observe middle-range frequencies. The £1bn telescopes, which are projected to begin making science observations in 2028, were built to shed light on some of the most intractable problems in astronomy, such as how galaxies form, the nature of dark matter, and whether life exists on other planets.
Three decades in the making, the SKA will stand on the shoulders of many smaller experiments and telescopes – a suite of so-called “precursors” and “pathfinders” that have trialled new technologies and shaped the instrument’s trajectory. The 15 pathfinder experiments dotted around the planet are exploring different aspects of SKA science.
Meanwhile, on the SKA sites in Australia and South Africa, there are four precursor telescopes: MeerKAT and HERA in South Africa, and the Australian SKA Pathfinder (ASKAP) and the Murchison Widefield Array (MWA) in Australia. These precursors are weathering the arid local conditions and are already broadening scientists’ understanding of the universe.
“The SKA was the big, ambitious end game that was going to take decades,” says Steven Tingay, director of the MWA based in Bentley, Australia. “Underneath that umbrella, a huge number of already fantastic things have been done with the precursors, and they’ve all been investments that have been motivated by the path to the SKA.”
Even as technology and science testbeds, “they have far surpassed what anyone reasonably expected of them”, adds Emma Chapman, a radio astronomer at the University of Nottingham, UK.
MeerKAT: glimpsing the heart of the Milky Way
In 2018, radio astronomers in South Africa were scrambling to pull together an image for the inauguration of the 64-dish MeerKAT radio telescope. MeerKAT will eventually form the heart of SKA-Mid, picking up frequencies between 350 megahertz and 15.4 gigahertz, and the researchers wanted to show what it was capable of.
As you’ve never seen it before A radio image of the centre of the Milky Way taken by the MeerKAT telescope. The elongated radio filaments visible emanating from the heart of the galaxy are 10 times more numerous than in any previous image. (Courtesy: I. Heywood, SARAO)
Like all the SKA precursors, MeerKAT is an interferometer, with many dishes acting together as a single giant instrument. MeerKAT’s dishes stand about three storeys high, with a diameter of 13.5 m and a maximum dish-to-dish separation of about 8 km. These long baselines are part of what gives the interferometer its power: the larger the separation between dishes, the finer the telescope’s angular resolution, while the dishes’ combined collecting area provides its sensitivity.
Additional dishes will be integrated into the interferometer to form SKA-Mid. The new dishes will be larger (with diameters of 15 m) and further apart (with baselines of up to 150 km), making it much more sensitive than MeerKAT on its own. Nevertheless, using just the provisional data from MeerKAT, the researchers were able to mark the unveiling of the telescope with the clearest radio image yet of our galactic centre.
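The resolution gain from longer baselines follows from the standard diffraction estimate θ ≈ λ/B: an interferometer with maximum baseline B resolves angles of roughly one wavelength divided by B. A back-of-envelope comparison at the 21 cm hydrogen line (the baseline figures are rounded, and real imaging performance depends on much more than this single number):

```python
import math

def angular_resolution_arcsec(wavelength_m, baseline_m):
    """Diffraction-limited resolution of an interferometer,
    theta ~ lambda / B, converted from radians to arcseconds."""
    return math.degrees(wavelength_m / baseline_m) * 3600

wavelength = 0.21  # 21 cm hydrogen line, within SKA-Mid's frequency band
for name, baseline in [("MeerKAT (~8 km)", 8e3), ("SKA-Mid (~150 km)", 150e3)]:
    theta = angular_resolution_arcsec(wavelength, baseline)
    print(f"{name}: {theta:.2f} arcsec")  # ~5.41 and ~0.29 arcsec
```

Stretching the maximum baseline from about 8 km to about 150 km thus sharpens the achievable resolution by roughly a factor of 20 at a given wavelength.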
Four years later, an international team used the MeerKAT data to produce an even more detailed image of the centre of the Milky Way (ApJL 949 L31). The image (above) shows long radio-emitting filaments up to 150 light-years long unspooling from the heart of the galaxy. These structures, whose origin remains unknown, were first observed in 1984, but the new image revealed 10 times more than had ever been seen before.
“We have studied individual filaments for a long time with a myopic view,” Farhad Yusef-Zadeh, an astronomer at Northwestern University in the US and an author on the image paper, said at the time. “Now, we finally see the big picture – a panoramic view filled with an abundance of filaments. This is a watershed in furthering our understanding of these structures.”
The image resembles a “glorious artwork, conveying how bright black holes are in radio waves, but with the busyness of the galaxy going on around it”, says Chapman. “Runaway pulsars, supernovae remnant bubbles, magnetic field lines – it has it all.”
In a different area of astronomy, MeerKAT “has been a surprising new contender in the field of pulsar timing”, says Natasha Hurley-Walker, an astronomer at the Curtin University node of the International Centre for Radio Astronomy Research in Bentley. Pulsars are rotating neutron stars that produce periodic pulses of radiation hundreds of times a second. MeerKAT’s sensitivity, combined with its precise time-stamping, allows it to accurately map these powerful radio sources.
An experiment called the MeerKAT Pulsar Timing Array has been observing a group of 80 pulsars once a fortnight since 2019 and is using them as “cosmic clocks” to create a map of gravitational-wave sources. “If we see pulsars in the same direction in the sky lose time in a connected way, we start suspecting that it is not the pulsars that are acting funny but rather a gravitational wave background that has interfered,” says Marisa Geyer, an astronomer at the University of Cape Town and a co-author on several papers about the array published last year.
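The “connected way” Geyer describes has a precise expected form: for an isotropic gravitational-wave background, the correlation between the timing residuals of two pulsars depends only on their angular separation on the sky, following the Hellings–Downs curve. A minimal sketch of that standard formula (excluding the same-pulsar noise term):

```python
import math

def hellings_downs(gamma_rad: float) -> float:
    """Expected correlation between the timing residuals of two pulsars
    separated by angle gamma on the sky, for an isotropic
    gravitational-wave background (same-pulsar term excluded)."""
    x = (1.0 - math.cos(gamma_rad)) / 2.0
    if x == 0.0:
        return 0.5  # limiting value for co-located lines of sight
    return 1.5 * x * math.log(x) - x / 4.0 + 0.5

# Correlation at a few angular separations (degrees)
for deg in (0, 60, 120, 180):
    print(deg, round(hellings_downs(math.radians(deg)), 3))
```

Pulsar pairs at small separations are positively correlated, pairs near 90° slightly anti-correlated; seeing this characteristic pattern across many pairs is the signature timing arrays hunt for.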
HERA: the first stars and galaxies
When astronomers dreamed up the idea for the SKA about 30 years ago, they wanted an instrument that could not only capture a wide view of the universe but was also sensitive enough to look far back in time. In the first billion years after the Big Bang, the universe cooled enough for hydrogen and helium to form, eventually clumping into stars and galaxies.
When these early stars began to shine, their light stripped electrons from the primordial hydrogen that still populated most of the cosmos – a period of cosmic history known as the Epoch of Reionization. The hydrogen that was still neutral gave off a faint 21 cm signal, and catching glimpses of this ancient radiation remains one of the major science goals of the SKA.
Developing methods to identify primordial hydrogen signals is the job of the Hydrogen Epoch of Reionization Array (HERA) – a collection of hundreds of 14 m dishes, packed closely together as they watch the sky, like bowls made of wire mesh (see image below). They have been specifically designed to observe fluctuations in primordial hydrogen in the low-frequency range of 100 MHz to 200 MHz.
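The link between HERA’s observing band and cosmic history comes from redshift: the 21 cm line of neutral hydrogen, emitted at a rest frequency of about 1420.4 MHz, is stretched to low frequencies by the expansion of the universe. A quick sketch of the mapping:

```python
REST_21CM_MHZ = 1420.4  # rest frequency of the neutral-hydrogen 21 cm line

def redshift(observed_mhz: float) -> float:
    """Redshift at which the 21 cm line appears at the observed frequency."""
    return REST_21CM_MHZ / observed_mhz - 1.0

print(redshift(100.0))  # ~13.2: deep in the cosmic dawn
print(redshift(200.0))  # ~6.1: the tail end of reionization
```

So the 100–200 MHz band spans roughly redshifts 6 to 13, bracketing the reionization era the array is designed to probe.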
Echoes of the early universe The HERA telescope is listening for the faint signals from the first primordial hydrogen that formed after the Big Bang. (Courtesy: South African Radio Astronomy Observatory (SARAO))
Understanding this mysterious epoch sheds light on how young cosmic objects influenced the formation of larger ones and later seeded other objects in the universe. Scientists using HERA data have already reported the most sensitive power limits yet on the reionization signal (ApJ 945 124), bringing us closer to pinning down what the early universe looked like and how it evolved; these limits will eventually guide SKA observations. “It always helps to be able to target things better before you begin to build and operate a telescope,” explains HERA project manager David de Boer, an astronomer at the University of California, Berkeley in the US.
MWA: “unexpected” new objects
Over in Australia, meanwhile, the MWA’s 4096 antennas crouch on the red desert sand like spiders (see image below). This interferometer has a particularly wide-field view because, unlike its mid-frequency precursor cousins, it has no moving parts, allowing it to view large parts of the sky at the same time. Each antenna also contains a low-noise amplifier in its centre, boosting the relatively weak low-frequency signals from space. “In a single observation, you cover an enormous fraction of the sky”, says Tingay. “That’s when you can start to pick up rare events and rare objects.”
Sharp eyes With its wide field of view and low-noise signal amplifiers, the MWA telescope in Australia is poised to spot brief and rare cosmic events, and it has already discovered a new class of mysterious radio transients. (Courtesy: Marianne Annereau, 2015 Murchison Widefield Array (MWA))
Hurley-Walker and colleagues discovered one such object a few years ago – repeated, powerful blasts of radio waves that occurred every 18 minutes and lasted about a minute. These signals were an example of a “radio transient” – an astrophysical phenomenon that lasts for milliseconds to years, and may repeat or occur just once. Radio transients have been attributed to many sources, including pulsars, but the period of this event was much longer than any observed before.
New transients are challenging our current models of stellar evolution
Cathryn Trott, Curtin Institute of Radio Astronomy in Bentley, Australia
After the researchers first noticed this signal, they followed up with other telescopes and searched archival data from other observatories going back 30 years to confirm the peculiar time scale. “This has spurred observers around the world to look through their archival data in a new way, and now many new similar sources are being discovered,” Hurley-Walker says.
The discovery of new transients, including this one, is “challenging our current models of stellar evolution”, according to Cathryn Trott, a radio astronomer at the Curtin Institute of Radio Astronomy in Bentley, Australia. “No one knows what they are, how they are powered, how they generate radio waves, or even whether they are all the same type of object,” she adds.
This is something that the SKA – both SKA-Mid and SKA-Low – will investigate. The Australian SKA-Low antennas detect frequencies between 50 MHz and 350 MHz. They build on techniques trialled by the MWA, such as using low-frequency antennas and combining their received signals into a digital beam. SKA-Low, with its similarly wide field of view, will offer a powerful new perspective on this developing area of astronomy.
ASKAP: giant sky surveys
The 36-dish ASKAP saw first light in 2012, the same year it was decided to split the SKA between Australia and South Africa. ASKAP was part of Australia’s efforts to prove that it could host the massive telescope, but it has since become an important instrument in its own right. These dishes use a technology called a phased array feed which allows the telescope to view different parts of the sky simultaneously.
Each dish contains one of these phased array feeds, which consists of 188 receivers arranged like a chessboard. With this technology, ASKAP can produce 36 concurrent beams covering 30 square degrees of sky, giving it a wide field of view, says de Boer, who was ASKAP’s inaugural director in 2010. In its first large-area survey, published in 2020, astronomers stitched together 903 images and identified more than 3 million sources of radio emission in the southern sky, many of which were new (PASA 37 e048).
Down under The ASKAP telescope array in Australia was used to demonstrate Australia’s capability to host the SKA. Able to rapidly take wide surveys of the sky, it is also a valuable scientific instrument in its own right, and has made significant discoveries in the study of Fast Radio Bursts. (Courtesy: CSIRO)
Because it can quickly survey large areas of the sky, the telescope has shown itself to be particularly adept at identifying and studying new fast radio bursts (FRBs). Discovered in 2007, FRBs are another kind of radio transient. They have been observed in many galaxies, and though some have been observed to repeat, most are detected only once.
This work is also helping scientists to understand one of the universe’s biggest mysteries. For decades, researchers have puzzled over the fact that the detectable mass of the universe is about half the mass that we know existed after the Big Bang. The dispersion of FRBs by this “missing matter” allows us to weigh all of the normal matter between us and the distant galaxies hosting the FRB.
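The “weighing” works because ionized matter delays low radio frequencies more than high ones, by an amount set by the dispersion measure – the column density of free electrons along the line of sight. A sketch using the standard dispersion-delay formula, with an illustrative (not measured) dispersion measure:

```python
K_DM_MS = 4.149  # dispersion constant in ms GHz^2 pc^-1 cm^3 (approximate)

def dispersion_delay_ms(dm: float, f_low_ghz: float, f_high_ghz: float) -> float:
    """Arrival-time delay (ms) of the low-frequency edge of a burst relative
    to the high-frequency edge, for dispersion measure dm in pc cm^-3."""
    return K_DM_MS * dm * (f_low_ghz**-2 - f_high_ghz**-2)

# Hypothetical burst with DM = 500 pc cm^-3 observed across a 1.2-1.4 GHz band
print(dispersion_delay_ms(500.0, 1.2, 1.4))  # a few hundred milliseconds
```

Measuring this frequency-dependent smearing for FRBs at known distances converts the delay into a census of the otherwise invisible electrons – and hence baryons – along the way.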
By combing through ASKAP data, researchers in 2020 also discovered a new class of radio sources, which they dubbed “odd radio circles” (PASA 38 e003). These are giant rings of radiation that are observed only in radio waves. Five years later their origins remain a mystery, but some scientists maintain they are flashes from ancient star formation.
The precursors are so important. They’ve given us new questions. And it’s incredibly exciting
Philippa Hartley, SKAO, Manchester
While SKA has many concrete goals, it is these unexpected discoveries that Philippa Hartley, a scientist at the SKAO, based near Manchester, is most excited about. “We’ve got so many huge questions that we’re going to use the SKA to try and answer, but then you switch on these new telescopes, you’re like, ‘Whoa! We didn’t expect that.’” That is why the precursors are so important. “They’ve given us new questions. And it’s incredibly exciting,” she adds.
Trouble on the horizon
As well as pushing the boundaries of astronomy and shaping the design of the SKA, the precursors have made a discovery much closer to home – one that could be a significant issue for the telescope. In a development that SKA’s founders could not have foreseen, the race to fill the skies with constellations of satellites is a problem for both the precursors and the SKA itself.
Large corporations, including SpaceX in Hawthorne, California, OneWeb in London, UK, and Amazon’s Project Kuiper in Seattle, Washington, have launched more than 6000 communications satellites into space. Many more are planned, including over 12,000 from Shanghai Spacecom Satellite Technology’s G60 Starlink constellation. These satellites, as well as global positioning satellites, are “photobombing” astronomy observatories and affecting observations across the electromagnetic spectrum.
The wild, wild west Satellite constellations are causing interference with ground-based observatories. (Courtesy: iStock/yucelyilmaz)
ASKAP, MeerKAT and the MWA have all flagged the impact of satellites on their observations. “The likelihood of a beam of a satellite being within the beam of our telescopes is vanishingly small and is easily avoided,” says Robert Braun, SKAO director of science. However, because they are everywhere, these satellites still introduce background radio interference that contaminates observations, he says.
Although the SKA Observatory is engaging with individual companies to devise engineering solutions, “we really can’t be in a situation where we have bespoke solutions with all of these companies”, SKAO director-general Phil Diamond told a side event at the IAU general assembly in Cape Town last year. “That’s why we’re pursuing the regulatory and policy approach so that there are systems in place,” he said. “At the moment, it’s a bit like the wild, wild west and we do need a sheriff to stride into town to help put that required protection in place.”
In this, too, SKA precursors are charting a path forward, identifying ways to observe even with mega satellite constellations staring down at them. When the full SKA telescopes finally come online in 2028, the discoveries they make will, in large part, be thanks to the telescopes that came before them.
The US firm Firefly Aerospace has claimed to be the first commercial company to achieve “a fully successful soft landing on the Moon”. Yesterday, the company’s Blue Ghost lunar lander touched down on the Moon’s surface in an “upright, stable configuration”. It will now operate for 14 days, during which it will drill into the lunar soil and image a total eclipse from the Moon, in which the Earth blocks the Sun.
Blue Ghost was launched on 15 January from NASA’s Kennedy Space Center in Florida via a SpaceX Falcon 9 rocket. Following a 45-day trip, the craft landed in Mare Crisium, touching down within its 100 m landing target next to a volcanic feature called Mons Latreille.
The mission is carrying 10 NASA instruments, which includes a lunar subsurface drill, sample collector, X-ray imager and dust-mitigation experiments. “With the hardest part behind us, Firefly looks forward to completing more than 14 days of surface operations, again raising the bar for commercial cislunar capabilities,” notes Shea Ferring, chief technology officer at Firefly Aerospace.
In February 2024 the Houston-based company Intuitive Machines became the first private firm to soft-land on the Moon with its Odysseus mission. Yet it suffered a few hiccups prior to touchdown and, rather than landing vertically, came to rest at a 30-degree angle, which affected radio-transmission rates.
The Firefly mission is part of NASA’s Commercial Lunar Payload Services initiative, which contracts the private sector to develop missions with the aim of reducing costs.
Firefly’s Blue Ghost Mission 2 is expected to launch next year, when it will aim to land on the far side of the Moon. “With annual lunar missions, Firefly is paving the way for a lasting lunar presence that will help unlock access to the rest of the solar system for our nation, our partners, and the world,” notes Jason Kim, chief executive officer of Firefly Aerospace.
Apart from the usual set of mathematical skills ranging from probability theory and linear algebra to aspects of cryptography, the most valuable skill is the ability to think in a critical and dissecting way. Also, one mustn’t be afraid to go in different directions and connect dots. In my particular case, I was lucky enough that I knew the foundations of quantum physics and the problems that cryptographers were facing and I was able to connect the two. So I would say it’s important to have a good understanding of topics outside your narrow field of interest. Nature doesn’t know that we divided all phenomena into physics, chemistry and biology, but we still put ourselves in those silos and don’t communicate with each other.
Flying high and low “Physics – not just quantum mechanics, but all its aspects – deeply shapes my passion for aviation and scuba diving,” says Artur Ekert. “Experiencing and understanding the world above and below brings me great joy and often clarifies the fine line between adventure and recklessness.” (Courtesy: Artur Ekert)
What do you like best and least about your job?
Least is easy, all admin aspects of it. Best is meeting wonderful people. That means not only my senior colleagues – I was blessed with wonderful supervisors and mentors – but also the junior colleagues, students and postdocs that I work with. This job is a great excuse to meet interesting people.
What do you know today that you wish you’d known at the start of your career?
That it’s absolutely fine to follow your instincts and your interests without paying too much attention to practicalities. But of course that is a post-factum statement. Maybe you need to pay attention to certain practicalities to get to the comfortable position where you can make the statement I just expressed.
Globular springtails (Dicyrtomina minuta) are small arthropods about five millimetres long that can be seen crawling through leaf litter and garden soil. While they do not have wings and cannot fly, they more than make up for it with their ability to hop relatively large heights and distances.
This jumping feat is thanks to a tail-like appendage on their abdomen called a furcula, which is folded in beneath their body, held under tension.
When released, it snaps against the ground in as little as 20 milliseconds, flipping the springtail up to 6 cm into the air and 10 cm horizontally.
Researchers have now modified a cockroach-inspired robot to include a latch-mediated spring actuator, in which potential energy is stored in an elastic element – essentially a robotic fork-like furcula.
Via computer simulations and experiments to control the length of the linkages in the furcula as well as the energy stored in them, the team found that the robot could jump some 1.4 m horizontally, or 23 times its body length – the longest of any existing robot relative to body length.
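As a back-of-the-envelope check, a drag-free ballistic jump of range R requires a take-off speed of v = sqrt(Rg / sin 2θ). The 45° launch angle below is an assumption (the drag-free optimum), not a figure from the study:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def takeoff_speed(range_m: float, launch_deg: float = 45.0) -> float:
    """Take-off speed needed for a given ballistic range, ignoring air drag."""
    return math.sqrt(range_m * G / math.sin(2.0 * math.radians(launch_deg)))

# Reported ~1.4 m horizontal jump for the robot
print(takeoff_speed(1.4))  # ~3.7 m/s
```

Reaching a few metres per second from a standing start in milliseconds is precisely what a latched spring is good at: the elastic element releases its stored energy far faster than a motor could deliver it directly.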
The work could help design robots that can traverse places that are hazardous to humans.
“Walking provides a precise and efficient locomotion mode but is limited in terms of obstacle traversal,” notes Harvard’s Robert Wood. “Jumping can get over obstacles but is less controlled. The combination of the two modes can be effective for navigating natural and unstructured environments.”
The internal temperature of a building is important – particularly in offices and work environments – for maximizing comfort and productivity. Managing the temperature is also essential for reducing the energy consumption of a building. In the US, buildings account for around 29% of total end-use energy consumption, with more than 40% of this energy dedicated to managing the internal temperature of a building via heating and cooling.
The human body is sensitive to both radiative and convective heat. The convective part revolves around humidity and air temperature, whereas radiative heat depends upon the surrounding surface temperatures inside the building. Understanding both thermal aspects is key for balancing energy consumption with occupant comfort. However, there are not many practical methods available for measuring the impact of radiative heat inside buildings. Researchers from the University of Minnesota Twin Cities have developed an optical sensor that could help solve this problem.
Limitation of thermostats for radiative heat
Room thermostats are used in almost every building today to regulate the internal temperature and improve the comfort levels for the occupants. However, modern thermostats only measure the local air temperature and don’t account for the effects of radiant heat exchange between surfaces and occupants, resulting in suboptimal comfort levels and inefficient energy use.
Finding a way to measure the mean radiant temperature in real time inside buildings could enable more efficient heating – leading to more advanced and efficient thermostat controls. Currently, radiant temperature can be measured using either radiometers or black globe sensors. But radiometers are too expensive for commercial use, and black globe sensors are slow, bulky and error-prone in many internal environments.
In search of a new approach, first author Fatih Evren (now at Pacific Northwest National Laboratory) and colleagues used low-resolution, low-cost infrared sensors to measure the longwave mean radiant temperature inside buildings. These sensors eliminate the pan/tilt mechanism (where sensors rotate periodically to measure the temperature at different points and an algorithm determines the surface temperature distribution) required by many other sensors used to measure radiative heat. The new optical sensor also requires 4.5 times less computation power than pan/tilt approaches with the same resolution.
Integrating optical sensors to improve room comfort
The researchers tested infrared thermal array sensors with 32 × 32 pixels in four real-world environments (three living spaces and an office) with different room sizes and layouts. They examined three sensor configurations: one sensor on each of the room’s four walls; two sensors; and a single-sensor setup. The sensors measured the mean radiant temperature for 290 h at internal temperatures of between 18 and 26.8 °C.
The optical sensors capture raw 2D thermal data containing temperature information for adjacent walls, floor and ceiling. To determine surface temperature distributions from these raw data, the researchers used projective homographic transformations – mappings between two geometric planes. The room’s surfaces were segmented by marking the corners of the room, defining a homography matrix for each surface. Applying the transformations yields the temperature distribution on each surface, and these surface temperatures can then be used to calculate the mean radiant temperature.
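A projective homography can be estimated from the four marked corners of a surface and then used to map pixels of the raw thermal image onto that surface. The sketch below uses the standard direct linear transform with hypothetical corner coordinates; it is not the authors’ code:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 projective homography mapping src -> dst from
    four point correspondences, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A (last row of V^T from the SVD)
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts = np.hstack([np.asarray(pts, dtype=float), np.ones((len(pts), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:]

# Hypothetical corners of one wall as seen in a 32 x 32 thermal image (pixels)
image_corners = [(3, 4), (28, 2), (30, 29), (1, 27)]
# Target: an undistorted unit square representing the wall surface
wall_corners = [(0, 0), (1, 0), (1, 1), (0, 1)]

H = homography_from_points(image_corners, wall_corners)
print(apply_homography(H, image_corners))  # recovers the unit-square corners
```

Once each pixel is assigned to a position on a known surface, the per-surface temperature maps can be combined into a mean radiant temperature estimate.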
The team compared the temperatures measured by their sensors against ground truth measurements obtained via the net-radiometer method. The optical sensor was found to be repeatable and reliable for different room sizes, layouts and temperature sensing scenarios, with most approaches agreeing within ±0.5 °C of the ground truth measurement, and a maximum error (arising from a single-sensor configuration) of only ±0.96 °C. The optical sensors were also more accurate than the black globe sensor method, which tends to have higher errors due to under/overestimating solar effects.
The researchers conclude that the sensors are repeatable, scalable and predictable, and that they could be integrated into room thermostats to improve human comfort and energy efficiency – especially for controlling the radiant heating and cooling systems now commonly used in high-performance buildings. They also note that a future direction could be to integrate machine learning and other advanced algorithms to improve the calibration of the sensors.
New statistical analyses of the supermassive black hole M87* may explain changes observed since it was first imaged. The findings, from the same Event Horizon Telescope (EHT) that produced the iconic first image of a black hole’s shadow, confirm that M87*’s rotational axis points away from Earth. The analyses also indicate that turbulence within the rotating envelope of gas that surrounds the black hole – the accretion disc – plays a role in changing its appearance.
The first image of M87*’s shadow was based on observations made in 2017, though the image itself was not released until 2019. It resembles a fiery doughnut, with the shadow appearing as a dark region around three times the diameter of the black hole’s event horizon (the point beyond which even light cannot escape its gravitational pull) and the accretion disc forming a bright ring around it.
Because the shadow is caused by the gravitational bending and capture of light at the event horizon, its size and shape can be used to infer the black hole’s mass. The larger the shadow, the higher the mass. In 2019, the EHT team calculated that M87* has a mass of about 6.5 billion times that of our Sun, in line with previous theoretical predictions. Team members also determined that the radius of the event horizon is 3.8 micro-arcseconds; that the black hole is rotating in a clockwise direction; and that its spin points away from us.
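These published numbers are mutually consistent, as a short calculation shows: the gravitational radius GM/c², viewed from the distance of M87 (about 16.8 Mpc, a value assumed here from the literature rather than stated in the article), subtends roughly 3.8 micro-arcseconds, and general relativity predicts a shadow diameter about 2√27 times that:

```python
# Order-of-magnitude check on the published EHT figures
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
MPC = 3.086e22         # megaparsec, m
RAD_TO_UAS = (180 / 3.141592653589793) * 3600 * 1e6  # radians -> micro-arcsec

mass = 6.5e9 * M_SUN   # EHT mass estimate for M87*
distance = 16.8 * MPC  # assumed distance to M87

# Gravitational radius GM/c^2, expressed as an angle on the sky
theta_g_uas = (G * mass / C**2) / distance * RAD_TO_UAS
print(theta_g_uas)                # ~3.8 micro-arcseconds, matching the EHT figure

# Predicted shadow diameter: ~2*sqrt(27) gravitational radii for a black hole
print(2 * 27**0.5 * theta_g_uas)  # ~40 micro-arcseconds
```

The predicted ~40 micro-arcsecond shadow is what the 2017 image showed, which is why the shadow size pins down the mass so directly.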
Hot and violent region
The latest analysis focuses less on the shadow and more on the bright ring outside it. As matter accelerates, it produces huge amounts of light. In the vicinity of the black hole, this acceleration occurs as matter is sucked into the black hole, but it also arises when matter is blasted out in jets. The way these jets form is still not fully understood, but some astrophysicists think magnetic fields could be responsible. Indeed, in 2021, when researchers working on the EHT analysed the polarization of light emitted from the bright region, they concluded that only the presence of a strongly magnetized gas could explain their observations.
The team has now combined an analysis of EHT observations made in 2018 with a re-analysis of the 2017 results using a Bayesian approach. This statistical technique, applied for the first time in this context, treats the two sets of observations as independent experiments. This is possible because the event horizon of M87* is about a light-day across, so the accretion disc should present a new version of itself every few days, explains team member Avery Broderick from the Perimeter Institute and the University of Waterloo, both in Canada. In more technical language, the gap between observations exceeds the correlation timescale of the turbulent environment surrounding the black hole.
New result reinforces previous interpretations
The part of the ring that appears brightest to us stems from the relativistic movement of material in a clockwise direction as seen from Earth. In the original 2017 observations, this bright region was further “south” on the image than the EHT team expected. However, when members of the team compared these observations with those from 2018, they found that the region reverted to its mean position. This result corroborated computer simulations of the general relativistic magnetohydrodynamics of the turbulent environment surrounding the black hole.
Even in the 2018 observations, though, the ring remains brightest at the bottom of the image. According to team member Bidisha Bandyopadhyay, a postdoctoral researcher at the Universidad de Concepción in Chile, this finding provides substantial information about the black hole’s spin and reinforces the EHT team’s previous interpretation of its orientation: the black hole’s rotational axis is pointing away from Earth. The analyses also reveal that the turbulence within the accretion disc can help explain the differences observed in the bright region from one year to the next.
Very long baseline interferometry
To observe M87* in detail, the EHT team needed an instrument with an angular resolution comparable to the black hole’s event horizon, which is around tens of micro-arcseconds across. Achieving this resolution with an ordinary telescope would require a dish the size of the Earth, which is clearly not possible. Instead, the EHT uses very long baseline interferometry, which involves detecting radio signals from an astronomical source using a network of individual radio telescopes and telescopic arrays spread across the globe.
“This work demonstrates the power of multi-epoch analysis at horizon scale, providing a new statistical approach to studying the dynamical behaviour of black hole systems,” says EHT team member Hung-Yi Pu from National Taiwan Normal University. “The methodology we employed opens the door to deeper investigations of black hole accretion and variability, offering a more systematic way to characterize their physical properties over time.”
Looking ahead, the EHT astronomers plan to continue analysing observations made in 2021 and 2022. With these results, they aim to place even tighter constraints on models of black hole accretion environments. “Extending multi-epoch analysis to the polarization properties of M87* will also provide deeper insights into the astrophysics of strong gravity and magnetized plasma near the event horizon,” EHT management team member Rocco Lico tells Physics World.
A new technique for using frequency combs to measure trace concentrations of gas molecules has been developed by researchers in the US. The team reports single-digit parts-per-trillion detection sensitivity and broadband spectral coverage spanning more than 1000 cm−1. This record-level sensing performance could open up a variety of hitherto inaccessible applications in fields such as medicine, environmental chemistry and chemical kinetics.
Each molecular species will absorb light at a specific set of frequencies. So, shining light through a sample of gas and measuring this absorption can reveal the molecular composition of the gas.
Cavity ringdown spectroscopy is an established way to increase the sensitivity of absorption spectroscopy and needs no calibration. A laser is injected between two mirrors, creating an optical standing wave. A sample of gas is then injected into the cavity, so the laser beam passes through it, normally many thousands of times. The absorption of light by the gas is then determined by the rate at which the intracavity light intensity “rings down” – in other words, the rate at which the standing wave decays away.
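In its simplest form, the analysis compares the ring-down time with and without the gas: the extra loss per unit length is α = (1/c)(1/τ_gas − 1/τ_empty). A minimal sketch with illustrative (not measured) decay times:

```python
C = 2.998e8  # speed of light, m/s

def absorption_coefficient(tau_empty_s: float, tau_gas_s: float) -> float:
    """Sample absorption coefficient (per metre) from cavity ring-down
    times without (tau_empty) and with (tau_gas) the gas present."""
    return (1.0 / tau_gas_s - 1.0 / tau_empty_s) / C

# Illustrative numbers: a 10 us empty-cavity decay shortens to 8 us with gas
alpha = absorption_coefficient(10e-6, 8e-6)
print(alpha)  # ~8.3e-5 per metre
```

Because the result depends only on the two decay rates and the speed of light, the measurement needs no intensity calibration – one reason the technique is so attractive for trace sensing.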
Researchers have used this method with frequency comb lasers to probe the absorption of gas samples at a range of different light frequencies. A frequency comb produces light at a series of very sharp intensity peaks that are equidistant in frequency – resembling the teeth of a comb.
Shifting resonances
However, the more reflective the mirrors (that is, the higher the cavity finesse), the narrower each cavity resonance becomes. Because the resonance frequencies are not evenly spaced and can be shifted substantially by the loaded gas, researchers normally oscillate the length of the cavity, sweeping all of the cavity resonances back and forth across the comb lines. Multiple resonances are sequentially excited and the transient comb intensity dynamics are captured by a camera, after an optical grating separates the comb teeth spatially.
“That experimental scheme works in the near-infrared, but not in the mid-infrared,” says Qizhong Liang. “Mid-infrared cameras are not fast enough to capture those dynamics yet.” This is a problem because the mid-infrared is where many molecules can be identified by their unique absorption spectra.
Liang is a member of Jun Ye’s group at JILA in Colorado, which has shown that it is possible to measure transient comb dynamics simply with a Michelson interferometer. The spectrometer entails only beam splitters, a delay stage and photodetectors. The researchers worked out that the periodically generated intensity dynamics arising from each tooth of the frequency comb can be detected as a set of Fourier components offset by Doppler frequency shifts. Absorption by the loaded gas can thus be determined.
Dithering the cavity
This process of reading out transient dynamics from “dithering” the cavity by a passive Michelson interferometer is much simpler than previous setups and thus can be used by people with little experience with combs, says Liang. It also places no restrictions on the finesse of the cavity, spectral resolution, or spectral coverage. “If you’re dithering the cavity resonances, then no matter how narrow the cavity resonance is, it’s guaranteed that the comb lines can be deterministically coupled to the cavity resonance twice per cavity round trip modulation,” he explains.
The researchers reported detections of various molecules at concentrations as low as parts-per-billion, with parts-per-trillion uncertainty, in exhaled air from volunteers. This included biomedically relevant molecules such as acetone, which is a sign of diabetes, and formaldehyde, which is diagnostic of lung cancer. “Detection of molecules in exhaled breath in medicine has been done in the past,” explains Liang. “The more important point here is that, even if you have no prior knowledge about what the gas sample composition is – be it in industrial applications, environmental science applications or whatever – you can still use it.”
Konstantin Vodopyanov of the University of Central Florida in Orlando comments: “This achievement is remarkable, as it integrates two cutting-edge techniques: cavity ringdown spectroscopy, where a high-finesse optical cavity dramatically extends the laser beam’s path to enhance sensitivity in detecting weak molecular resonances, and frequency combs, which serve as a precise frequency ruler composed of ultra-sharp spectral lines. By further refining the spectral resolution to the Doppler broadening limit of less than 100 MHz and referencing the absolute frequency scale to a reliable frequency standard, this technology holds great promise for applications such as trace gas detection and medical breath analysis.”
In this episode of the Physics World Weekly podcast, online editor Margaret Harris chats about her recent trip to CERN. There, she caught up with physicists working on some of the lab’s most exciting experiments and heard from CERN’s current and future leaders.
Founded in Geneva in 1954, today CERN is most famous for the Large Hadron Collider (LHC), which is currently in its winter shutdown. Harris describes her descent 100 m below ground level to visit the huge ATLAS detector and explains why some of its components will soon be updated as part of the LHC’s upcoming high luminosity upgrade.
She explains why new “crab cavities” will boost the number of particle collisions at the LHC. Among other things, this will allow physicists to better study how Higgs bosons interact with each other, which could provide important insights into the early universe.
Harris describes her visit to CERN’s Antimatter Factory, which hosts several experiments that are benefitting from a 2021 upgrade to the lab’s source of antiprotons. These experiments measure properties of antimatter – such as its response to gravity – to see if its behaviour differs from that of normal matter.
Harris also heard about the future of the lab from CERN’s director general Fabiola Gianotti and her successor Mark Thomson, who will take over next year.
Something extraordinary happened on Earth around 10 million years ago, and whatever it was, it left behind a “signature” of radioactive beryllium-10. This finding, which is based on studies of rocks located deep beneath the ocean, could be evidence for a previously unknown cosmic event or major changes in ocean circulation. With further study, the newly discovered beryllium anomaly could also become an independent time marker for the geological record.
Most of the beryllium-10 found on Earth originates in the upper atmosphere, where it forms when cosmic rays interact with oxygen and nitrogen molecules. Afterwards, it attaches to aerosols, falls to the ground and is transported into the oceans. Eventually, it reaches the seabed and accumulates, becoming part of what scientists call one of the most pristine geological archives on Earth.
Because beryllium-10 has a half-life of 1.4 million years, it is possible to use its abundance to pin down the dates of geological samples that are more than 10 million years old. This is far beyond the limits of radiocarbon dating, which relies on an isotope (carbon-14) with a half-life of just 5730 years, and can only date samples less than 50 000 years old.
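As a rough sanity check on those dating limits (a back-of-the-envelope calculation, not one from the study), the surviving fraction of each isotope follows directly from its half-life:

```python
import math  # not strictly needed; 0.5**x suffices for this calculation

def surviving_fraction(age_years, half_life_years):
    """Fraction of a radioisotope remaining after a given time."""
    return 0.5 ** (age_years / half_life_years)

# Beryllium-10 (half-life 1.4 million years) after 10 million years:
be10 = surviving_fraction(10e6, 1.4e6)   # about 0.007, i.e. ~0.7% remains

# Carbon-14 (half-life 5730 years) at its ~50,000-year dating limit:
c14 = surviving_fraction(50_000, 5730)   # about 0.002

print(f"10Be remaining after 10 Myr: {be10:.4f}")
print(f"14C remaining after 50 kyr:  {c14:.4f}")
```

Both isotopes reach a similar surviving fraction at their respective limits, which is why the much longer half-life of beryllium-10 pushes its usable dating range out by three orders of magnitude.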
Almost twice as much 10Be as expected
In the new work, which is detailed in Nature Communications, physicists in Germany and Australia measured the amount of beryllium-10 in geological samples taken from the Pacific Ocean. The samples are primarily made up of iron and manganese and formed slowly over millions of years. To date them, the team used a technique called accelerator mass spectrometry (AMS) at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR). This method can distinguish beryllium-10 from its decay product, boron-10, which has the same mass, and from other beryllium isotopes.
The researchers found that samples dated to around 10 million years ago, a period known as the late Miocene, contained almost twice as much beryllium-10 as they expected to see. The source of this overabundance is a mystery, says team member Dominik Koll, but he offers three possible explanations. The first is that changes to the ocean circulation near the Antarctic, which scientists recently identified as occurring between 10 and 12 million years ago, could have distributed beryllium-10 unevenly across the Earth. “Beryllium-10 might thus have become particularly concentrated in the Pacific Ocean,” says Koll, a postdoctoral researcher at TU Dresden and an honorary lecturer at the Australian National University.
Another possibility is that a supernova exploded in our galactic neighbourhood 10 million years ago, producing a temporary increase in cosmic radiation. The third option is that the Sun’s magnetic shield, which deflects cosmic rays away from the Earth, became weaker through a collision with an interstellar cloud, making our planet more vulnerable to cosmic rays. Both scenarios would have increased the amount of beryllium-10 that fell to Earth without affecting its geographic distribution.
To distinguish between these competing hypotheses, the researchers now plan to analyse additional samples from different locations on Earth. “If the anomaly were found everywhere, then the astrophysics hypothesis would be supported,” Koll says. “But if it were detected only in specific regions, the explanation involving altered ocean currents would be more plausible.”
Whatever the reason for the anomaly, Koll suggests it could serve as a cosmogenic time marker for periods spanning millions of years, the likes of which do not yet exist. “We hope that other research groups will also investigate their deep-ocean samples in the relevant period to eventually come to a definitive answer on the origin of the anomaly,” he tells Physics World.
Update 7 March 2025: In a statement, Intuitive Machines announced that while Athena performed a soft landing on the Moon on 6 March, it came to rest on its side about 250 m from the intended landing spot. Because the lander is unable to recharge its batteries, the firm has declared the mission over, with the team now retrieving the data that was collected.
The private firm Intuitive Machines has launched a lunar lander to test extraction methods for water and volatile gases. The six-legged Moon lander, dubbed Athena, took off yesterday aboard a SpaceX Falcon 9 rocket from NASA’s Kennedy Space Center in Florida. Also aboard the rocket was NASA’s Lunar Trailblazer – a lunar orbiter that will investigate water on the Moon and its geology.
In February 2024, Intuitive Machines’ Odysseus mission became the first US mission to make a soft landing on the Moon since Apollo 17 and the first private craft to do so. After a few hiccups during landing, the mission carried out measurements with an optical and radio telescope before it ended seven days later.
Athena is Intuitive Machines’ second lunar lander and part of the firm’s quest to build the infrastructure on the Moon that will be required for long-term lunar exploration.
The lander, which stands almost five metres tall, aims to touch down in the Mons Mouton region, about 160 km from the lunar south pole.
It will use a drill to bore one metre into the surface and test the extraction of substances – including volatiles such as carbon dioxide as well as water – that it will then analyse with a mass spectrometer.
Athena also contains a “hopper” dubbed Grace that can travel up to 25 kilometres on the lunar surface. Carrying about 10 kg of payloads, the rocket-propelled drone will aim to take images of the lunar surface and explore nearby craters.
As well as Grace, Athena carries two rovers. MAPP, built by Lunar Outpost, will autonomously navigate the lunar surface while a small, lightweight rover dubbed Yaoki, which has been built by the Japanese firm Dymon, will explore the Moon within 50 metres of the lander.
Athena is part of NASA’s $2.6bn Commercial Lunar Payload Services initiative, which contracts the private sector to develop missions with the aim of reducing costs.
Taking the Moon’s temperature
Lunar Trailblazer, meanwhile, will spend two years orbiting the Moon in a polar orbit at an altitude of 100 km. Weighing 200 kg and about the size of a washing machine, it will map the distribution of water on the Moon’s surface about 12 times a day with a resolution of about 50 metres.
While it is known that water exists on the lunar surface, little is known about its form, abundance, distribution or how it arrived. Various hypotheses range from “wet” asteroids crashing into the Moon to volcanic eruptions producing water vapour from the Moon’s interior.
Water hunter: NASA’s Lunar Trailblazer will spend two years mapping the distribution of water on the surface of the Moon (courtesy: Lockheed Martin Space for Lunar Trailblazer)
To help answer that question, the craft will examine water deposits via an imaging spectrometer dubbed the High-resolution Volatiles and Minerals Moon Mapper that has been built by NASA’s Jet Propulsion Laboratory.
A thermal mapper developed by the University of Oxford, meanwhile, will plot the temperature of the Moon’s surface and help to confirm the presence and location of water.
Lunar Trailblazer was selected in 2019 as part of NASA’s Small Innovative Missions for Planetary Exploration programme.
While the biology of how an entire organism develops from a single cell has long been a source of fascination, recent research has increasingly highlighted the role of mechanical forces. “If we want to have rigorous predictive models of morphogenesis, of tissues and cells forming organs of an animal,” says Konstantin Doubrovinski at the University of Texas Southwestern Medical Center, “it is absolutely critical that we have a clear understanding of material properties of these tissues.”
Now Doubrovinski and his colleagues report a rheological study explaining why the developing fruit fly (Drosophila melanogaster) epithelial tissue stretches as it does over time to allow the embryo to change shape.
Previous studies had shown that under a constant force, tissue extension was proportional to the time the force had been applied to the power of one half. This had puzzled the researchers, since it did not fit a simple model in which epithelial tissues behave like linear springs. In such a model, the extension obeys Hooke’s law and is proportional to the force applied alone, such that the exponent of time in the relation would be zero.
They and other groups had tried to explain this observation of an exponent equal to 0.5 as due to the viscosity of the medium surrounding the cells, which would lead to deformation near the point of pulling that then gradually spreads. However, their subsequent experiments ruled out viscosity as a cause of the non-zero exponent.
Tissue pulling experiments: Schematic showing how a ferrofluid droplet positioned inside one cell is used to stretch the epithelium via an external magnetic field. The lower images are snapshots from an in vivo measurement. (Courtesy: Konstantin Doubrovinski/bioRxiv 10.1101/2023.09.12.557407)
For their measurements, the researchers had exploited a convenient feature of Drosophila epithelial cells – a small hole, through which they could manipulate a droplet of ferrofluid to enter using a permanent magnet. Once inside the cell, a magnet acting on this droplet could exert forces on the cell to stretch the surrounding tissue.
For the current study, the researchers first tested the observed scaling law over longer periods of time. A power law gives a straight line on a log–log plot but as Doubrovinski points out, curves also look like straight lines over short sections. However, even when they increased the time scales probed in their experiments to cover three orders of magnitude – from fractions of a second to several minutes – the observed power law still held.
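The fitting logic described here can be sketched with synthetic data (the amplitude and noise level below are illustrative choices, not values from the study): a power law appears as a straight line on a log–log plot, and its slope is the exponent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "tissue extension" data obeying x(t) = A * t**0.5 over three
# orders of magnitude in time (fractions of a second to minutes), with a
# little multiplicative measurement noise.
t = np.logspace(-1, 2, 60)          # 0.1 s to 100 s
A = 2.0                             # illustrative amplitude
x = A * t**0.5 * np.exp(rng.normal(0, 0.02, t.size))

# A linear least-squares fit in log space recovers the exponent as the slope.
slope, intercept = np.polyfit(np.log(t), np.log(x), 1)
print(f"fitted exponent: {slope:.3f}")   # close to 0.5
```

Fitting over the full three decades is what distinguishes a genuine power law from a short stretch of some other curve that merely looks straight.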
Understanding the results
One of the postdocs on the team – Mohamad Ibrahim Cheikh – stumbled upon the relation behind the exponent of 0.5 while working on a largely unrelated problem. He had been modelling ellipsoids in a hexagonal meshwork on a surface, in what Doubrovinski describes as a “large” and “relatively complex” simulation. He decided to examine what would happen if he allowed the mesh to relax in its stretched position, which would model the process of actin turnover in cells.
Cheikh’s simulation gave the power law observed in the epithelial cells. “We totally didn’t expect it,” says Doubrovinski. “We pursued it and thought, why are we getting it? What’s going on here?”
Although this simulation yielded the power law with an exponent of 0.5, because the simulation was so complex, it was hard to get a handle on why. “There are all these different physical effects that we took into account that we thought were relevant,” he tells Physics World.
To get a more intuitive understanding of the system, the researchers attempted to simplify the model into a lattice of springs in one dimension, keeping only some of the physical effects from the simulations, until they identified the effects required to give the exponent value of 0.5. They could then scale this simplified one-dimensional model back up to three dimensions and test how it behaved.
According to their model, if they changed the magnitude of various parameters, they should be able to rescale the curves so that they essentially collapse onto a single curve. “This makes our prediction falsifiable,” says Doubrovinski, and in fact the experimental curves could be rescaled in this way.
When the researchers used measured values for the relaxation constant based on the actin turnover rate, along with other known parameters such as the size of the force and the size of the extension, they were able to calculate the force constant of the epithelial cell. This value also agreed with their previous estimates.
Doubrovinski explains how the ferrofluid droplet engages with individual “springs” of the lattice as it moves through the mesh. “The further it moves, the more springs it catches on,” he says. “So the rapid increase of one turns into a slow increase with an exponent of 0.5.” With this model, all the pieces fall into place.
“I find it inspiring that the authors, first motivated by in vivo mechanical measurements, could develop a simple theory capturing a new phenomenological law of tissue rheology,” says Pierre-François Lenne, group leader at the Institut de Biologie du Développement de Marseille at Aix-Marseille Université. Lenne specializes in the morphogenesis of multicellular systems but was not involved in the current research.
Next, Doubrovinski and his team are keen to see where else their results might apply, such as other developmental stages or other types of organisms, including mammals.
Quantum-inspired “tensor networks” can simulate the behaviour of turbulent fluids in just a few hours rather than the several days required for a classical algorithm. The new technique, developed by physicists in the UK, Germany and the US, could advance our understanding of turbulence, which has been called one of the greatest unsolved problems of classical physics.
Turbulence is all around us, found in weather patterns, water flowing from a tap or a river and in many astrophysical phenomena. It is also important for many industrial processes. However, the way in which turbulence arises and then sustains itself is still not understood, despite the seemingly simple and deterministic physical laws governing it.
The reason for this is that turbulence is characterized by large numbers of eddies and swirls of differing shapes and sizes that interact in chaotic and unpredictable ways across a wide range of spatial and temporal scales. Such fluctuations are difficult to simulate accurately, even using powerful supercomputers, because doing so requires solving sets of coupled partial differential equations on very fine grids.
An alternative is to treat turbulence in a probabilistic way. In this case, the properties of the flow are defined as random variables that are distributed according to mathematical relationships called joint Fokker-Planck probability density functions. These functions are neither chaotic nor multiscale, so they are straightforward to derive. They are nevertheless challenging to solve because of the high number of dimensions contained in turbulent flows.
For this reason, the probability density function approach was widely considered to be computationally infeasible. In response, researchers turned to indirect Monte Carlo algorithms to perform probabilistic turbulence simulations. However, while this approach has chalked up some notable successes, it can be slow to yield results.
Highly compressed “tensor networks”
To overcome this problem, a team led by Nikita Gourianov of the University of Oxford, UK, decided to encode turbulence probability density functions as highly compressed “tensor networks” rather than simulating the fluctuations themselves. Such networks have already been used to simulate otherwise intractable quantum systems like superconductors, ferromagnets and quantum computers, they say.
These quantum-inspired tensor networks represent the turbulence probability distributions in a hyper-compressed format, which then allows them to be simulated. By simulating the probability distributions directly, the researchers can then extract important parameters, such as lift and drag, that describe turbulent flow.
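As a loose illustration of the underlying idea (not the team’s actual algorithm – the grid size, test density and bond dimension below are arbitrary choices), a truncated singular value decomposition, the simplest two-site tensor-network factorization, shows how a smooth probability density can be stored in a highly compressed form:

```python
import numpy as np

# Discretize a smooth 2D joint probability density (an unnormalized,
# correlated Gaussian) on a 256 x 256 grid.
n = 256
x = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, x, indexing="ij")
pdf = np.exp(-(X**2 - 1.2 * X * Y + Y**2))

# Keep only the r largest singular values: this stores roughly 2*n*r
# numbers instead of n*n, yet smooth densities lose almost nothing.
U, s, Vt = np.linalg.svd(pdf)
r = 12                                   # "bond dimension" of the compressed form
approx = (U[:, :r] * s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(pdf - approx) / np.linalg.norm(pdf)
compression = (2 * n * r) / (n * n)
print(f"relative error: {rel_err:.2e}, storage ratio: {compression:.3f}")
```

Tensor networks chain many such factorizations together, which is what makes high-dimensional probability density functions tractable where storing them on a full grid would not be.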
Importantly, the new technique allows an ordinary single CPU (central processing unit) core to compute a turbulent flow in just a few hours, compared to several days using a classical algorithm on a supercomputer.
This significantly improved way of simulating turbulence could be particularly useful in the area of chemically reactive flows in areas such as combustion, says Gourianov. “Our work also opens up the possibility of probabilistic simulations for all kinds of chaotic systems, including weather or perhaps even the stock markets,” he adds.
The researchers now plan to apply tensor networks to deep learning, a form of machine learning that uses artificial neural networks. “Neural networks are famously over-parameterized and there are several publications showing that they can be compressed by orders of magnitude in size simply by representing their layers as tensor networks,” Gourianov tells Physics World.
Vacuum technology is routinely used in both scientific research and industrial processes. In physics, high-quality vacuum systems make it possible to study materials under extremely clean and stable conditions. In industry, vacuum is used to lift, position and move objects precisely and reliably. Without these technologies, a great deal of research and development would simply not happen. But for all its advantages, working under vacuum does come with certain challenges. For example, once something is inside a vacuum system, how do you manipulate it without opening the system up?
Heavy duty: The new transfer arm. (Courtesy: UHV Design)
The UK-based firm UHV Design has been working on this problem for over a quarter of a century, developing and manufacturing vacuum manipulation solutions for new research disciplines as well as emerging industrial applications. Its products, which are based on magnetically coupled linear and rotary probes, are widely used at laboratories around the world, in areas ranging from nanoscience to synchrotron and beamline applications. According to engineering director Jonty Eyres, the firm’s latest innovation – a new sample transfer arm released at the beginning of this year – extends this well-established range into new territory.
“The new product is a magnetically coupled probe that allows you to move a sample from point A to point B in a vacuum system,” Eyres explains. “It was designed to have an order of magnitude improvement in terms of both linear and rotary motion thanks to the magnets in it being arranged in a particular way. It is thus able to move and position objects that are much heavier than was previously possible.”
The new sample arm, Eyres explains, is made up of a vacuum “envelope” comprising a welded flange and tube assembly. This assembly has an outer magnet array that magnetically couples to an inner magnet array attached to an output shaft. The output shaft extends beyond the mounting flange and incorporates a support bearing assembly. “Depending on the model, the shafts can either be in one or more axes: they move samples around either linearly, linear/rotary or incorporating a dual axis to actuate a gripper or equivalent elevating plate,” Eyres says.
Continual development, review and improvement
While similar devices are already on the market, Eyres says that the new product has a significantly larger magnetic coupling strength in terms of its linear thrust and rotary torque. These features were developed in close collaboration with customers who expressed a need for arms that could carry heavier payloads and move them with more precision. In particular, Eyres notes that in the original product, the maximum weight that could be placed on the end of the shaft – a parameter that depends on the stiffness of the shaft as well as the magnetic coupling strength – was too small for these customers’ applications.
“From our point of view, it was not so much the magnetic coupling that needed to be reviewed, but the stiffness of the device in terms of the size of the shaft that extends out to the vacuum system,” Eyres explains. “The new arm deflects much less from its original position even with a heavier load and when moving objects over longer distances.”
The new product – a scaled-up version of the original – can move an object weighing up to 50 N (a mass of about 5 kg) over an axial stroke of up to 1.5 m. Eyres notes that it also requires minimal maintenance, which is important for moving higher loads. “It is thus targeted to customers who wish to move larger objects around over longer periods of time without having to worry about intervening too often,” he says.
Moving multiple objects
As well as moving larger, single objects, the new arm’s capabilities make it suitable for moving multiple objects at once. “Rather than having one sample go through at a time, we might want to nest three or four samples onto a large plate, which inevitably increases the size of the overall object,” Eyres explains.
Before they created this product, he continues, he and his UHV Design colleagues were not aware of any magnetic coupled solution on the marketplace that enabled users to do this. “As well as being capable of moving heavy samples, our product can also move lighter samples, but with a lot less shaft deflection over the stroke of the product,” he says. “This could be important for researchers, particularly if they are limited in space or if they wish to avoid adding costly supports in their vacuum system.”
Researchers at Microsoft in the US claim to have made the first topological quantum bit (qubit) – a potentially transformative device that could make quantum computing robust against the errors that currently restrict what it can achieve. “If the claim stands, it would be a scientific milestone for the field of topological quantum computing and physics beyond,” says Scott Aaronson, a computer scientist at the University of Texas at Austin.
However, the claim is controversial because the evidence supporting it has not yet been presented in a peer-reviewed paper. It is made in a press release from Microsoft accompanying a paper in Nature (638 651) that has been written by more than 160 researchers from the company’s Azure Quantum team. The paper stops short of claiming a topological qubit but instead reports some of the key device characterization underpinning it.
Writing in a peer-review file accompanying the paper, the Nature editorial team says that it sought additional input from two of the article’s reviewers to “establish its technical correctness”, concluding that “the results in this manuscript do not represent evidence for the presence of Majorana zero modes [MZMs] in the reported devices”. An MZM is a quasiparticle (a particle-like collective electronic state) that can act as a topological qubit.
“That’s a big no-no”
“The peer-reviewed publication is quite clear [that it contains] no proof for topological qubits,” says Winfried Hensinger, a physicist at the University of Sussex who works on quantum computing using trapped ions. “But the press release speaks differently. In academia that’s a big no-no: you shouldn’t make claims that are not supported by a peer-reviewed publication” – or that have at least been presented in a preprint.
Chetan Nayak, leader of Microsoft Azure Quantum, which is based in Redmond, Washington, says that the evidence for a topological qubit was obtained in the period between submission of the paper in March 2024 and its publication. He will present those results at a talk at the Global Physics Summit of the American Physical Society in Anaheim in March.
But Hensinger is concerned that “the press release doesn’t make it clear what the paper does and doesn’t contain”. He worries that some might conclude that the strong claim of having made a topological qubit is now supported by a paper in Nature. “We don’t need to make these claims – that is just unhealthy and will really hurt the field,” he says, because it could lead to unrealistic expectations about what quantum computers can do.
As with the qubits used in current quantum computers, such as superconducting components or trapped ions, MZMs would be able to encode superpositions of the two readout states (representing a 1 or 0). By quantum-entangling such qubits, information could be manipulated in ways not possible for classical computers, greatly speeding up certain kinds of computation. In MZMs the two states are distinguished by “parity”: whether the quasiparticles contain even or odd numbers of electrons.
Built-in error protection
As MZMs are “topological” states, their settings cannot easily be flipped by random fluctuations to introduce errors into the calculation. Rather, the states are like a twist in a buckled belt that cannot be smoothed out unless the buckle is undone. Topological qubits would therefore suffer far less from the errors that afflict current quantum computers, and which limit the scale of the computations they can support. Because quantum error correction is one of the most challenging issues for scaling up quantum computers, “we want some built-in level of error protection”, explains Nayak.
It has long been thought that MZMs might be produced at the ends of nanoscale wires made of a superconducting material. Indeed, Microsoft researchers have been trying for several years to fabricate such structures and look for the characteristic signature of MZMs at their tips. But it can be hard to distinguish this signature from those of other electronic states that can form in these structures.
In 2018 researchers at labs in the US and the Netherlands (including the Delft University of Technology and Microsoft) claimed to have evidence of an MZM in such devices. However, they then had to retract the work after others raised problems with the data. “That history is making some experts cautious about the new claim,” says Aaronson.
Now, though, it seems that Nayak and colleagues have cracked the technical challenges. In the Nature paper, they report measurements in a nanowire heterostructure made of superconducting aluminium and semiconducting indium arsenide that are consistent with, but not definitive proof of, MZMs forming at the two ends. The crucial advance is an ability to accurately measure the parity of the electronic states. “The paper shows that we can do these measurements fast and accurately,” says Nayak.
The device is a remarkable achievement from the materials science and fabrication standpoint
Ivar Martin, Argonne National Laboratory
“The device is a remarkable achievement from the materials science and fabrication standpoint,” says Ivar Martin, a materials scientist at Argonne National Laboratory in the US. “They have been working hard on these problems, and [it] seems like they are nearing getting the complexities under control.” In the press release, the Microsoft team claims now to have put eight MZM topological qubits on a chip called Majorana 1, which is designed to house a million of them.
Even if the Microsoft claim stands up, a lot will still need to be done to get from a single MZM to a quantum computer, says Hensinger. Topological quantum computing is “probably 20–30 years behind the other platforms”, he says. Martin agrees. “Even if everything checks out and what they have realized are MZMs, cleaning them up to take full advantage of topological protection will still require significant effort,” he says.
Regardless of the debate about the results and how they have been announced, researchers are supportive of the efforts at Microsoft to produce a topological quantum computer. “As a scientist who likes to see things tried, I’m grateful that at least one player stuck with the topological approach even when it ended up being a long, painful slog,” says Aaronson.
“Most governments won’t fund such work, because it’s way too risky and expensive,” adds Hensinger. “So it’s very nice to see that Microsoft is stepping in there.”
Solid-state batteries are considered next-generation energy storage technology as they promise higher energy density and safety than lithium-ion batteries with a liquid electrolyte. However, major obstacles for commercialization are the requirement of high stack pressures as well as insufficient power density. Both aspects are closely related to limitations of charge transport within the composite cathode.
This webinar presents an introduction on how to use electrochemical impedance spectroscopy for the investigation of composite cathode microstructures to identify kinetic bottlenecks. Effective conductivities can be obtained using transmission line models and be used to evaluate the main factors limiting electronic and ionic charge transport.
In combination with high-resolution 3D imaging techniques and electrochemical cell cycling, the crucial role of the cathode microstructure can be revealed, the relevant factors influencing cathode performance identified, and optimization strategies for improved performance developed.
Philip Minnmann
Philip Minnmann received his M.Sc. in Materials Science from RWTH Aachen University. He later joined Prof. Jürgen Janek’s group at JLU Giessen as part of the BMBF Cluster of Competence for Solid-State Batteries FestBatt. During his Ph.D., he worked on composite cathode characterization for sulfide-based solid-state batteries, as well as processing scalable, slurry-based solid-state batteries. Since 2023, he has been a project manager for high-throughput battery material research at HTE GmbH.
Johannes Schubert
Johannes Schubert holds an M.Sc. in Material Science from the Justus-Liebig University Giessen, Germany. He is currently a Ph.D. student in the research group of Prof. Jürgen Janek in Giessen, where he is part of the BMBF Competence Cluster for Solid-State Batteries FestBatt. His main research focuses on characterization and optimization of composite cathodes with sulfide-based solid electrolytes.