A new artificial intelligence/machine learning method rapidly and accurately characterizes binary neutron star mergers based on the gravitational wave signature they produce. Though the method has not yet been tested on new mergers happening “live”, it could enable astronomers to make quicker estimates of properties such as the location of mergers and the masses of the neutron stars. This information, in turn, could make it possible for telescopes to target and observe the electromagnetic signals that accompany such mergers.
When massive objects such as black holes and neutron stars collide and merge, they emit ripples in spacetime known as gravitational waves (GWs). In 2015, scientists on Earth began observing these ripples using kilometre-scale interferometers that measure the minuscule expansion and contraction of spacetime that occurs when a gravitational wave passes through our planet. These interferometers are located in the US, Italy and Japan and are known collectively as the LVK observatories after their initials: the Laser Interferometer GW Observatory (LIGO), the Virgo GW Interferometer (Virgo) and the Kamioka GW Detector (KAGRA).
When two neutron stars in a binary pair merge, they emit electromagnetic waves as well as GWs. While both types of wave travel at the speed of light, certain poorly-understood processes that occur within and around the merging pair cause the electromagnetic signal to be slightly delayed. This means that the LVK observatories can detect the GW signal coming from a binary neutron star (BNS) merger seconds, or even minutes, before its electromagnetic counterpart arrives. Being able to identify GWs quickly and accurately therefore increases the chances of detecting other signals from the same event.
This is no easy task, however. GW signals are long and complex, and the main technique currently used to interpret them, Bayesian inference, is slow. While faster alternatives exist, they often make algorithmic approximations that negatively affect their accuracy.
Trained with millions of GW simulations
Physicists led by Maximilian Dax of the Max Planck Institute for Intelligent Systems in Tübingen, Germany have now developed a machine learning (ML) framework that accurately characterizes and localizes BNS mergers within a second of a GW being detected, without resorting to such approximations. To do this, they trained a deep neural network model with millions of GW simulations.
Once trained, the neural network can take fresh GW data as input and predict corresponding properties of the merging BNSs – for example, their masses, locations and spins – based on its training dataset. Crucially, this neural network output includes a sky map. This map, Dax explains, provides a fast and accurate estimate for where the BNS is located.
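The core idea is amortized inference: the heavy computation is paid once, during training on simulations, so that analysing a fresh signal requires only a single, fast pass through the network. The sketch below is a deliberately simplified illustration of that workflow (hypothetical code – the toy simulate_waveform stand-in is not the collaboration's simulator, and the real model predicts full posterior distributions and sky maps rather than point estimates of two masses).

```python
import numpy as np
import torch
from torch import nn

def simulate_waveform(m1, m2, n_samples=256):
    """Toy stand-in for a BNS waveform simulator (illustrative physics only)."""
    t = np.linspace(0, 1, n_samples)
    chirp = np.sin(2 * np.pi * (20 + 30 * (m1 + m2) * t) * t)
    return chirp + 0.1 * np.random.randn(n_samples)   # add detector-like noise

# Offline, slow step: build a training set of (waveform, parameters) pairs
rng = np.random.default_rng(0)
masses = rng.uniform(1.1, 2.0, size=(10_000, 2))        # component masses in solar masses
strains = np.stack([simulate_waveform(m1, m2) for m1, m2 in masses])

x = torch.tensor(strains, dtype=torch.float32)
y = torch.tensor(masses, dtype=torch.float32)

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                      nn.Linear(128, 64), nn.ReLU(),
                      nn.Linear(64, 2))                  # outputs (m1, m2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):                                 # training happens before any detection
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Online, fast step: a new signal is characterized with one forward pass
new_strain = torch.tensor(simulate_waveform(1.4, 1.3), dtype=torch.float32)
print(model(new_strain.unsqueeze(0)))                    # estimated (m1, m2)
```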
The new work built on the group’s previous studies, which used ML systems to analyse GWs from binary black hole (BBH) mergers. “Fast inference is more important for BNS mergers, however,” Dax says, “to allow for quick searches for the aforementioned electromagnetic counterparts, which are not emitted by BBH mergers.”
The researchers, who report their work in Nature, hope their method will help astronomers to observe electromagnetic counterparts for BNS mergers more often and detect them earlier – that is, closer to when the merger occurs. Being able to do this could reveal important information on the underlying processes that occur during these events. “It could also serve as a blueprint for dealing with the increased GW signal duration that we will encounter in the next generation of GW detectors,” Dax says. “This could help address a critical challenge in future GW data analysis.”
So far, the team has focused on data from current GW detectors (LIGO and Virgo) and has only briefly explored next-generation ones. They now plan to apply their method to these new GW detectors in more depth.
Waseem completed his DPhil in physics at the University of Oxford in the UK, where he worked on applied process-relational philosophy and employed string diagrams to study interpretations of quantum theory, constructor theory, wave-based logic, quantum computing and natural language processing. At Oxford, Waseem continues to teach mathematics and physics at Magdalen College, the Mathematical Institute, and the Department of Computer Science.
Waseem has played a key role in organizing the Lahore Science Mela, the largest annual science festival in Pakistan. He also co-founded Spectra, an online magazine dedicated to training popular-science writers in Pakistan. For his work popularizing science he received the 2021 Diana Award, was highly commended at the 2021 SEPnet Public Engagement Awards, and won an impact award in 2024 from Oxford’s Mathematical, Physical and Life Sciences (MPLS) division.
What skills do you use every day in your job?
I’m a theoretical physicist, so if you’re thinking about what I do every day, I use chalk and a blackboard, and maybe a pen and paper. However, for theoretical physics, I believe the most important skill is creativity, and the ability to dream and imagine.
What do you like best and least about your job?
That’s a difficult one because I’ve only been in this job for a few weeks. What I like about my job is the academic freedom and the opportunity to work on both education and research. My role is divided 50/50, so 50% of the time I’m thinking about the structure of natural languages like English and Urdu, and how to use quantum computers for natural language processing. The other half is spent using our diagrammatic formalism called “quantum picturalism” to make quantum physics accessible to everyone in the world. So, I think that’s the best part. On the other hand, when you have a lot of smart people together in the same room or building, there can be interpersonal issues. So, the worst part of my job is dealing with those conflicts.
What do you know today that you wish you knew when you were starting out in your career?
It’s a cynical view, but I think scientists are not always very rational or fair in their dealings with other people and their work. If I could go back and give myself one piece of advice, it would be that sometimes even rational and smart people make naive mistakes. It’s good to recognize that, at the end of the day, we are all human.
Disabled people in science must be recognised and given better support to help stem the numbers dropping out of science. That is the conclusion of a new report released today by the National Association of Disabled Staff Networks (NADSN). It also calls for funders to stop supporting institutions that have toxic research cultures, and for a change in equality law to recognise the impact of discrimination on disabled people, including those who are neurodivergent.
About 22% of working-age adults in the UK are disabled. Yet it is estimated that only 6.4% of people in science have a disability, falling to just 4% for senior academic positions. What’s more, barely 1% of research grant applications to UK Research and Innovation – the umbrella organisation for the UK’s main funding councils – are from researchers who disclose being disabled. Disabled researchers who do win grants receive less than half the amount awarded to non-disabled researchers.
NADSN is an umbrella organisation for disabled staff networks, with a focus on higher education. It includes the STEMM Action Group, which was founded in 2020 and consists of nine people at universities across the UK who work in science and have lived experience of disability, chronic illness or neurodivergence. The group develops recommendations to funding bodies, learned societies and higher-education institutions to address barriers faced by those who are marginalised due to disability.
In 2021, the group published a “problem statement” that identified issues facing disabled people in science. These range from digital problems, such as the need for accessible fonts in reports and presentations, to physical concerns, such as the need for ramps for wheelchair users and automatic openers for heavy fire doors. Other issues include the need for adjustable desks in offices and wheelchair-accessible labs.
“Many of these physical issues tend to be afterthoughts in the planning process,” says Francesca Doddato, a physicist from Lancaster University, who co-wrote the latest report. “But at that point they are much harder, and more costly, to implement.”
We need to have this big paradigm shift in terms of how we see disability inclusion
Francesca Doddato
Workplace attitudes and cultures can also be a big problem for disabled people in science, some 62% of whom report having been bullied and harassed compared to 43% of all scientists. “Unfortunately, in research and academia there is generally a toxic culture in which you are expected to be hyper productive, move all over the world, and have a focus on quantity over quality in terms of research output,” says Doddato. “This, coupled with society-wide attitudes towards disabilities, means that many disabled people struggle to get promoted and drop out of science.”
The action group spent the past four years compiling their latest report – Towards a fully inclusive environment for disabled people in STEMM – to present solutions to these issues. They hope it will raise awareness of the inequity and discrimination experienced by disabled people in science and highlight the benefits of an inclusive environment.
The report identifies three main areas that will have to be reformed to make science fully inclusive for disabled scientists: enabling inclusive cultures and practices; enhancing accessible physical and digital environments; and accessible and proactive funding.
In the short term, it calls on people to recognise the challenges and barriers facing disabled researchers and to improve work-based training for managers. “One of the best things is just being willing to listen and ask ‘what can I do to help?’” notes Doddato. “Being an ally is vitally important.”
Doddato says that sharing meeting agendas and documents ahead of time, ensuring that documents are presented in accessible formats, and acknowledging that tasks such as getting around campus can take longer are all useful steps. “All of these little things can really go a long way in shifting those attitudes and being an ally, and those things don’t need policies – they just need people to be willing to listen and willing to change.”
Medium- and long-term goals in the report involve holding organisations responsible for their working-practice policies and stopping the promotion and funding of toxic research cultures. “We hope that the report encourages funding bodies to put pressure on institutions if they are demonstrating toxicity and being discriminatory,” adds Doddato. The report also calls for a change to equality law to recognise the impact of intersectional discrimination, although it admits that this will be a “large undertaking” and will be the subject of a further NADSN report.
Doddato adds that disabled people’s voices need to be heard “loud and clear” as part of any changes. “What we are trying to address with the report is to push universities, research institutions and societies to stop only talking about doing something and actually implement change,” says Doddato. “We need to have a big paradigm shift in terms of how we see disability inclusion. It’s time for change.”
Neutron-activated gold: A novel activation imaging technique enables real-time visualization of gold nanoparticles in the body without the use of external tracers. (Courtesy: Nanase Koshikawa from Waseda University)
Gold nanoparticles are promising vehicles for targeted delivery of cancer drugs, offering biocompatibility plus a tendency to accumulate in tumours. To fully exploit their potential, it’s essential to be able to track the movement of these nanoparticles in the body. To date, however, methods for directly visualizing their pharmacokinetics have not yet been established. Aiming to address this shortfall, researchers in Japan are using neutron-activated gold radioisotopes to image nanoparticle distribution in vivo.
The team, headed up by Nanase Koshikawa and Jun Kataoka from Waseda University, are investigating the use of radioactive gold nanoparticles based on 198Au, which they create by irradiating stable gold (197Au) with low-energy neutrons. The radioisotope 198Au has a half-life of 2.7 days and emits 412 keV gamma rays, enabling a technique known as activation imaging.
“Our motivation was to visualize gold nanoparticles without labelling them with tracers,” explains Koshikawa. “Radioactivation allows gold nanoparticles themselves to become detectable from outside the body. We used neutron activation because it does not change the atomic number, ensuring the chemical properties of gold nanoparticles remain unchanged.”
In vivo studies
The researchers – also from Osaka University and Kyoto University – synthesized 198Au-based nanoparticles and injected them into tumours in four mice. They used a hybrid Compton camera (HCC) to detect the emitted 412 keV gamma rays and determine the in vivo nanoparticle distribution, on the day of injection and three and five days later.
The HCC, which incorporates two pixelated scintillators – a scatterer with a central pinhole and an absorber – can detect radiation with energies from tens of keV to nearly 1 MeV. For X-rays and low-energy gamma rays, the scatterer enables pinhole-mode imaging. For gamma rays over 200 keV, the device functions as a Compton camera.
The researchers reconstructed the 412 keV gamma signals into images, using an energy window of 412±30 keV. With the HCC located 5 cm from the animals’ abdomens, the spatial resolution was 7.9 mm, roughly comparable to the tumour size on the day of injection (7.7 × 11 mm).
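In practice, applying such an energy window simply means keeping only those detected events whose measured energy falls inside the stated band before images are reconstructed. A minimal sketch (with made-up event energies, not the group's data or reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(42)
# Made-up detected gamma-ray energies in keV: a 412 keV photopeak plus a flat background
energies = np.concatenate([rng.normal(412, 15, size=1000),
                           rng.uniform(50, 600, size=500)])

centre, half_width = 412.0, 30.0                 # the 412 ± 30 keV window
selected = energies[np.abs(energies - centre) <= half_width]

print(f"{selected.size} of {energies.size} events kept for image reconstruction")
```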
In vivo distribution: Images of 198Au nanoparticles in the bodies of two mice obtained with the HCC on the day of administration. (Courtesy: CC BY 4.0/Appl. Phys. Lett. 10.1063/5.0251048)
Overlaying the images onto photographs of the mice revealed that the nanoparticles accumulated in both the tumour and liver. In mice 1 and 2, high pixel values were observed primarily in the tumour, while mice 3 and 4 also had high pixel values in the liver region.
After imaging, the mice were euthanized and the team used a gamma counter to measure the radioactivity of each organ. The measured activity concentrations were consistent with the imaging results: mice 1 and 2 had higher nanoparticle concentrations in the tumour than the liver, and mice 3 and 4 had higher concentrations in the liver.
Tracking drug distribution
Next, Koshikawa and colleagues used the 198Au nanoparticles to label astatine-211 (211At), a promising alpha-emitting drug. They note that although 211At emits 79 keV X-rays, allowing in vivo visualization, its short half-life of just 7.2 h precludes its use for long-term tracking of drug pharmacokinetics.
The researchers injected the 211At-labelled nanoparticles into three tumour-bearing mice and used the HCC to simultaneously image 211At and 198Au, on the day of injection and one or two days later. Comparing energy spectra recorded just after injection with those two days later showed that the 211At peak at 79 keV significantly decreased in height owing to its decay, while the 412 keV 198Au peak maintained its height.
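A quick back-of-the-envelope calculation, using only the half-lives quoted above, shows why the 79 keV signal fades while the 412 keV signal persists:

```python
def remaining_fraction(half_life_h, elapsed_h):
    """Fraction of a radioactive sample left after elapsed_h hours."""
    return 0.5 ** (elapsed_h / half_life_h)

elapsed = 48.0                                                           # two days, in hours
print(f"211At remaining: {remaining_fraction(7.2, elapsed):.3f}")        # ~0.01, about 1%
print(f"198Au remaining: {remaining_fraction(2.7 * 24, elapsed):.2f}")   # ~0.60, about 60%
```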
The team reconstructed images using energy windows of 79±10 and 412±30 keV, for pinhole- and Compton-mode reconstruction, respectively. In these experiments, the HCC was placed 10 cm from the mouse, giving a spatial resolution of 16 mm – larger than the initial tumour size and insufficient to clearly distinguish tumours from small organs. Nevertheless, the researchers point out that the rough distribution of the drug was still observable.
On the day of injection, the drug distribution could be visualized using both the 211At and 198Au signals. Two days later, imaging using 211At was no longer possible. In contrast, the distribution of the drug could still be observed via the 412 keV gamma rays.
With further development, the technique may prove suitable for future clinical use. “We assume that the gamma ray exposure dose would be comparable to that of clinical imaging techniques using X-rays or gamma rays, such as SPECT and PET, and that activation imaging is not harmful to humans,” Koshikawa says.
Activation imaging could also be applied to more than just gold nanoparticles. “We are currently working on radioactivation of platinum-based anticancer drugs to enable their visualization from outside the body,” Koshikawa tells Physics World. “Additionally, we are developing new detectors to image radioactive drugs with higher spatial resolution.”
Researchers at the University of Edinburgh filmed ants and the sequence of movements they make when picking up seeds and other objects. They then used this behaviour to design a robot gripper.
The device consists of two aluminium plates that each contain four rows of “hairs” made from thermoplastic polyurethane.
The hairs are 20 mm long and 1 mm in diameter, and protrude in a V-shape. This allows the hairs to surround circular objects, which can be particularly difficult to grasp and hold onto using parallel plates.
In tests picking up 30 different household items including a jam jar and shampoo bottle (see video), adding hairs to the gripper increased the prototype’s grasp success rate from 64% to 90%.
The researchers think that such a device could be used in environmental clean-up as well as in construction and agriculture.
Barbara Webb from the University of Edinburgh, who led the research, says the work is “just the first step”.
“Now we can see how [ants’] antennae, front legs and jaws combine to sense, manipulate, grasp and move objects – for instance, we’ve discovered how much ants rely on their front legs to get objects in position,” she adds. “This will inform further development of our technology.”
Researchers at the European Molecular Biology Laboratory (EMBL) in Germany have dramatically reduced the time required to create images using Brillouin microscopy, making it possible to study the viscoelastic properties of biological samples far more quickly and with less damage than ever before. Their new technique can image samples with a field of view of roughly 10 000 pixels at a speed of 0.1 Hz – a 1000-fold improvement in speed and throughput compared to standard confocal techniques.
Mechanical properties such as the elasticity and viscosity of biological cells are closely tied to their function. These properties also play critical roles in processes such as embryo and tissue development and can even dictate how diseases such as cancer evolve. Measuring these properties is therefore important, but it is not easy since most existing techniques to do so are invasive and thus inherently disruptive to the systems being imaged.
Non-destructive, label- and contact-free
In recent years, Brillouin microscopy has emerged as a non-destructive, label- and contact-free optical spectroscopy method for probing the viscoelastic properties of biological samples with high resolution in three dimensions. It relies on Brillouin scattering, which occurs when light interacts with the phonons (or collective vibrational modes) that are present in all matter. This interaction produces two additional peaks, known as Stokes and anti-Stokes Brillouin peaks, in the spectrum of the scattered light. The position of these peaks (the Brillouin shift) and their linewidth (the Brillouin width) are related to the elastic and viscous properties, respectively, of the sample.
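For reference, the quantitative link exploited here is a textbook relation (general Brillouin-scattering physics, not a formula quoted from the paper) between the measured frequency shift, the acoustic velocity and the longitudinal elastic modulus:

```latex
\nu_B = \frac{2 n v_a}{\lambda_0}\,\sin\!\left(\frac{\theta}{2}\right),
\qquad
M' = \rho\, v_a^{2} = \rho \left( \frac{\nu_B\, \lambda_0}{2 n \sin(\theta/2)} \right)^{2}
```

Here ν_B is the Brillouin shift, n the refractive index, v_a the acoustic (phonon) velocity, λ_0 the laser wavelength, θ the scattering angle, ρ the density and M′ the elastic (storage) part of the longitudinal modulus; the Brillouin linewidth plays the analogous role for the viscous part.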
The downside is that standard Brillouin microscopy approaches analyse just one point in a sample at a time. Because the scattering signal from a single point is weak, imaging speeds are slow, yielding long light exposure times that can damage photosensitive components within biological cells.
“Light sheet” Brillouin imaging
To overcome this problem, EMBL researchers led by Robert Prevedel began exploring ways to speed up the rate at which Brillouin microscopy can acquire two- and three-dimensional images. In the early days of their project, they were only able to visualize one pixel at a time. With typical measurement times of tens to hundreds of milliseconds for a single data point, it therefore took several minutes, or even hours, to obtain two-dimensional images of 50–250 square pixels.
In 2022, however, they succeeded in expanding the field of view to include an entire spatial line — that is, acquiring image data from more than 100 points in parallel. In their latest work, which they describe in Nature Photonics, they extended the technique further to allow them to view roughly 10 000 pixels in parallel over the full plane of a sample. They then used the new approach to study mechanical changes in live zebrafish larvae.
“This advance enables much faster Brillouin imaging, and in terms of microscopy, allows us to perform ‘light sheet’ Brillouin imaging,” says Prevedel. “In short, we are able to ‘under-sample’ the spectral output, which leads to around 1000 fewer individual measurements than normally needed.”
Towards a more widespread use of Brillouin microscopy
Prevedel and colleagues hope their result will lead to more widespread use of Brillouin microscopy, particularly for photosensitive biological samples. “We wanted to speed up Brillouin imaging to make it a much more useful technique in the life sciences, yet keep overall light dosages low. We succeeded in both aspects,” he tells Physics World.
Looking ahead, the researchers plan to further optimize the design of their approach and merge it with microscopes that enable more robust and straightforward imaging. “We then want to start applying it to various real-world biological structures and so help shed more light on the role mechanical properties play in biological processes,” Prevedel says.
FLASH irradiation, an emerging cancer treatment that delivers radiation at ultrahigh dose rates, has been shown to significantly reduce acute skin toxicity in laboratory mice compared with conventional radiotherapy. Having demonstrated this effect using proton-based FLASH treatments, researchers from Aarhus University in Denmark have now repeated their investigations using electron-based FLASH (eFLASH).
Reporting their findings in Radiotherapy and Oncology, the researchers note a “remarkable similarity” between eFLASH and proton FLASH with respect to acute skin sparing.
Principal investigator Brita Singers Sørensen and colleagues quantified the dose–response modification of eFLASH irradiation for acute skin toxicity and late fibrotic toxicity in mice, using similar experimental designs to those previously employed for their proton FLASH study. This enabled the researchers to make direct quantitative comparisons of acute skin response between electrons and protons. They also compared the effectiveness of the two modalities to determine whether radiobiological differences were observed.
Over four months, the team examined 197 female mice across five irradiation experiments. After being weighed, earmarked and given an ID number, each mouse was randomized to receive either eFLASH irradiation (average dose rate of 233 Gy/s) or conventional electron radiotherapy (average dose rate of 0.162 Gy/s) at various doses.
For the treatment, two unanaesthetized mice (one from each group) were restrained in a jig with their right legs placed in a water bath and irradiated by a horizontal 16 MeV electron beam. The animals were placed on opposite sides of the field centre and irradiated simultaneously, with their legs at a 3.2 cm water-equivalent depth, corresponding to the dose maximum.
The researchers used a diamond detector to measure the absolute dose at the target position in the water bath and assumed that the mouse foot target received the same dose. The resulting foot doses were 19.2–57.6 Gy for eFLASH treatments and 19.4–43.7 Gy for conventional radiotherapy, chosen to cover the entire range of acute skin response.
FLASH confers skin protection
To evaluate the animals’ response to irradiation, the researchers assessed acute skin damage daily from seven to 28 days post-irradiation using an established assay. They weighed the mice weekly, and one of three observers blinded to previous grades and treatment regimens assessed skin toxicity. Photographs were taken whenever possible. Skin damage was also graded using an automated deep-learning model, generating a dose–response curve independent of observer assessments.
The researchers also assessed radiation-induced fibrosis in the leg joint, biweekly from weeks nine to 52 post-irradiation. They defined radiation-induced fibrosis as a permanent reduction of leg extensibility by 75% or more in the irradiated leg compared with the untreated left leg.
To assess the tissue-sparing effect of eFLASH, the researchers used dose–response curves to derive TD50 – the toxic dose eliciting a skin response in 50% of mice. They then determined a dose modification factor (DMF), defined as the ratio of eFLASH TD50 to conventional TD50. A DMF larger than one suggests that eFLASH reduces toxicity.
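As a simple arithmetic illustration (the TD50 values below are hypothetical, not the study's fitted numbers), the DMF is just a ratio of doses at matched toxicity:

```python
# Hypothetical TD50 values in Gy, for illustration only
td50_eflash = 45.0        # dose causing a skin response in 50% of eFLASH-treated mice
td50_conventional = 30.0  # dose causing the same response at conventional dose rates

dmf = td50_eflash / td50_conventional
print(f"DMF = {dmf:.2f}")  # DMF > 1 means the FLASH beam spares normal tissue

# Equivalently, a DMF of 1.50 means a 50% higher FLASH dose is needed
# to produce the same level of toxicity as the conventional treatment.
```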
The eFLASH treatments had a DMF of 1.45–1.54 – in other words, a 45–54% higher dose was needed to cause comparable skin toxicity to that caused by conventional radiotherapy. “The DMF indicated a considerable acute skin sparing effect of eFLASH irradiation,” the team explain. Radiation-induced fibrosis was also reduced using eFLASH, with a DMF of 1.15.
Reducing skin damage: Dose–response curves for acute skin toxicity (left) and fibrotic toxicity (right) for conventional electron radiotherapy and electron FLASH treatments. (Courtesy: CC BY 4.0/adapted from Radiother. Oncol. 10.1016/j.radonc.2025.110796)
For DMF-based equivalent doses, the development of skin toxicity over time was similar for eFLASH and conventional treatments, throughout the dose groups. This supports the hypothesis that eFLASH modifies the dose–response rather than changing the underlying biological mechanism. The team also notes that the difference in DMF between the fibrotic response and acute skin damage suggests that FLASH sparing depends on tissue type and might differ between acute- and late-responding tissues.
Similar skin damage between electrons and protons
Sørensen and colleagues compared their findings to previous studies of normal-tissue damage from proton irradiation, both in the entrance plateau and using the spread-out Bragg peak (SOBP). DMF values for electrons (1.45–1.54) were similar to those of transmission protons (1.44–1.50) and slightly higher than for SOBP protons (1.35–1.40). “Despite dose rate and pulse structure differences, the response to electron irradiation showed substantial similarity to transmission and SOBP damage,” they write.
Although the average eFLASH dose rate (233 Gy/s) was higher than that of the proton studies (80 and 60 Gy/s), it did not appear to influence the biological response. This supports the hypothesis that beyond a certain dose rate threshold, the tissue-sparing effect of FLASH does not increase notably.
The researchers point out that previous studies also found biological similarities in the FLASH effect for electrons and protons, with this latest work adding data on similar comparable and quantifiable effects. They add, however, that “based on the data of this study alone, we cannot say that the biological response is identical, nor that the electron and proton irradiation elicit the same biological mechanisms for DNA damage and repair. This data only suggests a similar biological response in the skin.”
New data from the NOvA experiment at Fermilab in the US contain no evidence for so-called “sterile” neutrinos, in line with results from most – though not all – other neutrino detectors to date. As well as being consistent with previous experiments, the finding aligns with standard theoretical models of neutrino oscillation, in which three active types, or flavours, of neutrino convert into each other. The result also sets more stringent limits on how much an additional sterile type of neutrino could affect the others.
“The global picture on sterile neutrinos is still very murky, with a number of experiments reporting anomalous results that could be attributed to sterile neutrinos on one hand and a number of null results on the other,” says NOvA team member Adam Lister of the University of Wisconsin, Madison, US. “Generally, these anomalous results imply we should see large amounts of sterile-driven neutrino disappearance at NOvA, but this is not consistent with our observations.”
Neutrinos were first proposed in 1930 by Wolfgang Pauli as a way to account for missing energy and spin in the beta decay of nuclei. They were observed in the laboratory in 1956, and we now know that they come in (at least) three flavours: electron, muon and tau. We also know that these three flavours oscillate, changing from one to another as they travel through space, and that this oscillation means they are not massless (as was initially thought).
Significant discrepancies
Over the past few decades, physicists have used underground detectors to probe neutrino oscillation more deeply. A few of these detectors, including the LSND at Los Alamos National Laboratory, BEST in Russia, and Fermilab’s own MiniBooNE, have observed significant discrepancies between the number of neutrinos they detect and the number that mainstream theories predict.
One possible explanation for this excess, which appears in some extensions of the Standard Model of particle physics, is the existence of a fourth flavour of neutrino. Neutrinos of this “sterile” type do not interact with the other flavours via the weak nuclear force. Instead, they interact only via gravity.
Detecting sterile neutrinos would fundamentally change our understanding of particle physics. Indeed, some physicists think sterile neutrinos could be a candidate for dark matter – the mysterious substance that is thought to make up around 85% of the matter in the universe but has so far only made itself known through the gravitational force it exerts.
Near and far detectors
The NOvA experiment uses two liquid scintillator detectors to monitor a stream of neutrinos created by firing protons at a carbon target. The near detector is located at Fermilab, approximately 1 km from the target, while the far detector is 810 km away in northern Minnesota. In the new study, the team measured how many muon-type neutrinos survive the journey through the Earth’s crust from the near detector to the far one. The idea is that if fewer neutrinos survive than the conventional three-flavour oscillations picture predicts, some of them could have oscillated into sterile neutrinos.
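For context, such disappearance searches are usually framed in terms of an oscillation probability. In the simplified two-flavour approximation (a standard textbook formula, not NOvA's full fit including a sterile state), the probability that a muon neutrino of energy E survives a baseline L is

```latex
P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^{2}(2\theta)\,
\sin^{2}\!\left( \frac{1.27\,\Delta m^{2}\,[\mathrm{eV}^{2}]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right)
```

An additional sterile state would introduce extra mixing angles and a new mass splitting, producing a deficit in the far detector beyond what the standard three-flavour parameters predict.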
The experimenters studied two different interactions between neutrinos and normal matter, says team member V Hewes of the University of Cincinnati, US. “We looked for both charged current muon neutrino and neutral current interactions, as a sterile neutrino would manifest differently in each,” Hewes explains. “We then compared our data across those samples in both detectors to simulations of neutrino oscillation models with and without the presence of a sterile neutrino.”
No excess of neutrinos seen
Writing in Physical Review Letters, the researchers state that they found no evidence of neutrinos oscillating into sterile neutrinos. What is more, introducing a fourth, sterile neutrino did not provide better agreement with the data than sticking with the standard model of three active neutrinos.
This result is in line with several previous experiments that looked for sterile neutrinos, including those performed at T2K, Daya Bay, RENO and MINOS+. However, Lister says it places much stricter constraints on active-sterile neutrino mixing than these earlier results. “We are really tightening the net on where sterile neutrinos could live, if they exist,” he tells Physics World.
The NOvA team now hopes to tighten the net further by reducing systematic uncertainties. “To that end, we are developing new data samples that will help us better understand the rate at which neutrinos interact with our detector and the composition of our beam,” says team member Adam Aurisano, also at the University of Cincinnati. “This will help us better distinguish between the potential imprint of sterile neutrinos and more mundane causes of differences between data and prediction.”
NOvA co-spokesperson Patricia Vahle, a physicist at the College of William & Mary in Virginia, US, sums up the results. “Neutrinos are full of surprises, so it is important to check when anomalies show up,” she says. “So far, we don’t see any signs of sterile neutrinos, but we still have some tricks up our sleeve to extend our reach.”
Last week I had the pleasure of attending the Global Physics Summit (GPS) in Anaheim, California, where I rubbed shoulders with 15,000 fellow physicists. The best part of being there was chatting with lots of different people, and in this podcast I share two of those conversations.
First up is Chetan Nayak, who is a senior researcher at Microsoft’s Station Q quantum computing research centre here in California. In February, Nayak and colleagues claimed a breakthrough in the development of topological quantum bits (qubits) based on Majorana zero modes. In principle, such qubits could enable the development of practical quantum computers, but not all physicists were convinced, and the announcement remains controversial – despite further results presented by Nayak in a packed session at the GPS.
I caught up with Nayak after his talk and asked him about the challenges of achieving Microsoft’s goal of a superconductor-based topological qubit. That conversation is the first segment of today’s podcast.
Distinctive jumping technique
Up next, I chat with Atharva Lele about the physics of manu jumping, which is a competitive aquatic sport that originates from the Māori and Pasifika peoples of New Zealand. Jumpers are judged by the height of their splash when they enter the water, and the best competitors use a very distinctive technique.
Lele is an undergraduate student at the Georgia Institute of Technology in the US, and is part of a team that analysed manu techniques in a series of clever experiments that included plunging robots. He explains how to make a winning manu jump while avoiding the pain of a belly flop.
The first direct evidence for auroras on Neptune has been spotted by the James Webb Space Telescope (JWST) and the Hubble Space Telescope.
Auroras happen when energetic particles from the Sun become trapped in a planet’s magnetic field and eventually strike the upper atmosphere, with the energy released creating a signature glow.
Auroral activity has previously been seen on Jupiter, Saturn and Uranus, but not on Neptune – despite hints from a flyby of the planet by NASA’s Voyager 2 in 1989.
“Imaging the auroral activity on Neptune was only possible with [the JWST’s] near-infrared sensitivity,” notes Henrik Melin from Northumbria University. “It was so stunning to not just see the auroras, but the detail and clarity of the signature really shocked me.”
The data was taken by JWST’s Near-Infrared Spectrograph as well as Hubble’s Wide Field Camera 3. The cyan patches in the image above represent auroral activity and are shown together with white clouds on the multi-hued blue orb that is Neptune.
While auroras on Earth occur at the poles, on Neptune they happen elsewhere. This is due to the nature of Neptune’s magnetic field, which is tilted by 47 degrees from the planet’s rotational axis.
As well as the visible imagery, the JWST also detected an emission line from trihydrogen cation (H3+), which can be created in auroras.
Physicists in Germany have found an alternative explanation for an anomaly that had previously been interpreted as potential evidence for a mysterious “dark force”. Originally spotted in ytterbium atoms, the anomaly turns out to have a more mundane cause. However, the investigation, which involved high-precision measurements of shifts in ytterbium’s energy levels and the mass ratios of its isotopes, could help us better understand the structure of heavy atomic nuclei and the physics of neutron stars.
Isotopes are forms of an element that have the same number of protons and electrons, but different numbers of neutrons. These different numbers of neutrons produce shifts in the atom’s electronic energy levels. Measuring these so-called isotope shifts is therefore a way of probing the interactions between electrons and neutrons.
In 2020, a team of physicists at the Massachusetts Institute of Technology (MIT) in the US observed an unexpected deviation in the isotope shift of ytterbium. One possible explanation for this deviation was the existence of a new “dark force” that would interact with both ordinary, visible matter and dark matter via hypothetical new force-carrying particles (bosons).
Although dark matter is thought to make up about 85 percent of the universe’s total matter, and its presence can be inferred from the way light bends as it travels towards us from distant galaxies, it has never been detected directly. Evidence for a new, fifth force (in addition to the known strong, weak, electromagnetic and gravitational forces) that acts between ordinary and dark matter would therefore be very exciting.
Measuring ytterbium isotope shifts and atomic masses
Mehlstäubler, Blaum and colleagues came to this conclusion after measuring shifts in the atomic energy levels of five different ytterbium isotopes: 168,170,172,174,176Yb. They did this by trapping ions of these isotopes in an ion trap at the Physikalisch-Technische Bundesanstalt (PTB) and then using an ultrastable laser to drive certain electronic transitions. This allowed them to pin down the frequencies of specific transitions (2S1/2→2D5/2 and 2S1/2→2F7/2) with a precision of 4 × 10−9, the highest to date.
They also measured the atomic masses of the ytterbium isotopes by trapping individual highly charged Yb42+ ions in the cryogenic PENTATRAP Penning-trap mass spectrometer at the Max Planck Institute for Nuclear Physics (MPIK). In the strong magnetic field of this trap, team member and study lead author Menno Door explains, the ions are bound to follow a circular orbit. “We measure the rotational frequency of this orbit by amplifying the minuscule induced current in surrounding electrodes,” he says. “The measured frequencies allowed us to very precisely determine the related mass ratios of the various isotopes with a precision of 4 × 10−12.”
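The principle behind the mass comparison is standard Penning-trap physics (quoted here for context rather than taken from the paper): an ion's cyclotron frequency depends only on its charge, its mass and the magnetic field, so comparing two ions in the same field yields their mass ratio directly:

```latex
\nu_c = \frac{qB}{2\pi m}
\quad\Longrightarrow\quad
\frac{m_1}{m_2} = \frac{q_1}{q_2}\,\frac{\nu_{c,2}}{\nu_{c,1}}
```

For two Yb42+ ions the charges are identical, so the ratio of measured frequencies gives the mass ratio without needing to know the field strength precisely.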
From these data, the researchers were able to extract new parameters that describe how the ytterbium nucleus deforms. To back up their findings, a group at TU Darmstadt led by Achim Schwenk simulated the ytterbium nuclei on large supercomputers, calculating their structure from first principles based on our current understanding of the strong and electromagnetic interactions. “These calculations confirmed that the leading signal we measured was due to the evolving nuclear structure of ytterbium isotopes, not a new fifth force,” says team member Matthias Heinz.
“Our work complements a growing body of research that aims to place constraints on a possible new interaction between electrons and neutrons,” team member Chih-Han Yeh tells Physics World. “In our work, the unprecedented precision of our experiments refined existing constraints.”
The researchers say they would now like to measure other isotopes of ytterbium, including rare isotopes with high or low neutron numbers. “Doing this would allow us to control for uncertain ‘higher-order’ nuclear structure effects and further improve the constraints on possible new physics,” says team member Fiona Kirk.
Door adds that isotope chains of other elements such as calcium, tin and strontium would also be worth investigating. “These studies would allow us to further test our understanding of nuclear structure and neutron-rich matter, and with this understanding allow us to probe for possible new physics again,” he says.
Located about 40 light years from us, the exoplanet Trappist-1 b, orbiting an ultracool dwarf star, has perplexed astronomers with its atmospheric mysteries. Recent observations made by the James Webb Space Telescope (JWST) at two mid-infrared bands (12.8 and 15 µm), suggest that the exoplanet could either be bare, airless rock like Mercury or shrouded by a hazy carbon dioxide (CO2) atmosphere like Titan.
The research, reported in Nature Astronomy, provides the first thermal emission measurements for Trappist-1 b suggesting two plausible yet contradictory scenarios. This paradox challenges our current understanding of atmospheric models and highlights the need for further investigations – both theoretical and observational.
Scenario one: airless rock
An international team of astronomers, co-led by Elsa Ducrot and Pierre-Olivier Lagage from the Commissariat aux Énergies Atomiques (CEA) in Paris, France, obtained mid-infrared observations of Trappist-1 b during 10 secondary eclipses (recorded as the exoplanet moves behind its star) using the JWST Mid-Infrared Instrument (MIRI). They recorded emission data at 12.8 and 15 µm and compared the findings with various surface and atmospheric models.
The thermal emission at 15 µm was consistent with Trappist-1 b being a bare rock with almost zero albedo; however, the emission at 12.8 µm refuted this model. At this wavelength, the exoplanet’s measured flux was most consistent with the surface model of an igneous, low-silica-content rock called ultramafic rock. The model assumes the surface to be geologically unweathered.
Trappist-1 b, the innermost planet in the Trappist-1 system, experiences strong tidal interaction and induction heating from its host star. This could trigger volcanic activity and continuous resurfacing, which could lead to a young surface like that of Jupiter’s volcanic moon Io. The researchers argue that these scenarios support the idea that Trappist-1 b is an airless rocky planet with a young ultramafic surface.
The team next explored atmospheric models for the exoplanet, which unfolded a different story.
Scenario two: haze-rich CO2 atmosphere
Ducrot and colleagues fitted the measured flux data with hazy atmospheric models centred around 15 µm. The results showed that Trappist-1 b could have a thick CO2-rich atmosphere with photochemical haze, but with a twist. For an atmosphere dominated by greenhouse gases such as CO2, which is strongly absorbing, temperature is expected to increase with increasing pressure (at lower levels). Consequently, they anticipated the brightness temperature should be lower at 15 µm (which measures temperature high in the atmosphere) than at 12.8 µm. But the observations showed otherwise. They proposed that this discrepancy could be explained by a thermal inversion, where the upper atmosphere has higher temperature than the layers below.
In our solar system, Titan’s atmosphere also shows thermal inversion due to heating through haze absorption. Haze is an efficient absorber of stellar radiation. Therefore, it could absorb radiation high up in the atmosphere, leading to heating of the upper atmosphere and cooling of the lower atmosphere. Indeed, this model is consistent with the team’s measurements. However, this leads to another plausible question: what forms this haze?
Trappist-1 b’s close proximity to Trappist-1 and the strong X-ray and ultraviolet radiation from the host star could create haze in the exoplanet’s atmosphere via photodissociation. While Titan’s hydrocarbon haze arises from photodissociation of methane, the same is not possible for Trappist-1 b as methane and CO2 cannot coexist photochemically and thermodynamically.
One plausible scenario is that the photochemical haze forms due to the presence of hydrogen sulphide (H2S). The volcanic activity in an oxidized, CO2-dominated atmosphere could supply H2S, but it is unlikely that it could sustain the levels needed for the haze. Additionally, as the innermost planet around an active star, Trappist-1 b is subjected to constant space weathering, raising the question of the sustainability of its atmosphere.
The researchers note that although the modelled atmospheric scenario appears less plausible than the airless bare-rock model, more theoretical and experimental work is needed to create a conclusive model.
What is the true nature of Trappist-1 b?
The two plausible surface and atmospheric models for Trappist-1 b provide an enigma. How could a planet be simultaneously an airless, young ultramafic rock and have a haze-filled CO2-rich atmosphere? The resolution might come not from theoretical models but from additional measurements.
Currently, the available data only capture the dayside thermal flux within two infrared bands, which proved insufficient to decisively distinguish between an airless surface and a CO2-rich atmosphere. To solve this planetary paradox, astronomers advocate for broader spectral coverage and photometric phase curve measurements to help explain heat redistribution patterns essential for atmospheric confirmation.
JWST’s observations of Trappist-1 b demonstrate its power to precisely detect thermal emissions from exoplanets. However, the contradictory interpretations of the data highlight its limitations too and emphasize the need for higher resolution spectroscopy. With only two thermal flux measurements insufficient to give a precise answer, future JWST observations of Trappist-1 b might uncover its true picture.
Co-author Michaël Gillon, an astrophysicist at the University of Liège, emphasizes the importance of the results. “The agreement between our two measurements of the planet’s dayside fluxes at 12.8 and 15 microns and a haze-rich CO2-dominated atmosphere is an important finding,” he tells Physics World. “It shows that dayside flux measurements in one or a couple of broadband filters is not enough to fully discriminate airless versus atmosphere models. Additional phase curve and transit transmission data are necessary, even if for the latter, the interpretation of the measurements is complicated by the inhomogeneity of the stellar surface.”
For now, Trappist-1 b hides its secrets, either standing as an airless, barren world scorched by its star or hiding beneath a thick, hazy CO2 veil.
Last year the UK government placed a new cap of £9535 on annual tuition fees, a figure that will likely rise in the coming years as universities tackle a funding crisis. Indeed, shortfalls are already affecting institutions, with some saying they will run out of money in the next few years. The past couple of months alone have seen several universities announce plans to shed academic staff and even shut departments.
Whether you agree with tuition fees or not, the fact is that students will continue to pay a significant sum for a university education. Value for money is part of the university proposition and lecturers can play a role by conveying the excitement of their chosen field. But what are the key requirements to help do so? In the late 1990s we carried out a study aimed at improving the long-term performance of students who initially struggled with university-level physics.
With funding from the Higher Education Funding Council for Wales, the study involved structured interviews with 28 students and 17 staff. An internal report – The Rough Guide to Lecturing – was written which, while not published, informed the teaching strategy of Cardiff University’s physics department for the next quarter of a century.
From the findings we concluded that lecture courses can be significantly enhanced by simply focusing on three principles, which we dub the three “E”s. The first “E” is enthusiasm. If a lecturer appears bored with the subject – perhaps they have given the same course for many years – why should their students be interested? This might sound obvious, but a bit of reading, or examining the latest research, can do wonders to freshen up a lecture that has been given many times before.
For both old and new courses it is usually possible to highlight at least one current research paper in a semester’s lectures. Students are not going to understand all of the paper, but that is not the point – it is sharing in contemporary progress that will elicit excitement. Commenting on a nifty experiment in the work, or the elegance of the theory, can help to inspire both teacher and student.
As well as freshening up the lecture course’s content, another tip is to mention the wider context of the subject being taught, perhaps by mentioning its history or possible exciting applications. Be inventive – we have evidence of a lecturer “live” translating parts of Louis de Broglie’s classic 1925 paper “La relation du quantum et la relativité” during a lecture. It may seem unlikely, but the students responded rather well to that.
Supporting students
The second “E” is engagement. The role of the lecturer as a guide is obvious, but it should also be emphasized that the learner’s desire is to share the lecturer’s passion for, and mastery of, a subject. Styles of lecturing and visual aids can vary greatly between people, but the important thing is to keep students thinking.
Don’t succumb to the apocryphal definition of a lecture as only a means of transferring the lecturer’s notes to the student’s pad without first passing through the minds of either person. In our study, when the students were asked “What do you expect from a lecture?”, they responded simply “to learn something new”, but we might extend this to a desire to learn how to do something new.
Simple demonstrations can be effective for engagement. Large foam dice, for example, can illustrate the non-commutation of 3D rotations. Fidget-spinners in the hands of students can help explain the vector nature of angular momentum. Lecturers should also ask rhetorical questions that make students think, but do not expect or demand answers, particularly in large classes.
More importantly, if a student asks a question, never insult them – there is no such thing as a “stupid” question. After all, what may seem a trivial point could eliminate a major conceptual block for them. If you cannot answer a technical query, admit it and say you will find out for next time – but make sure you do. Indeed, seeing that the lecturer has to work at the subject too can be very encouraging for students.
The final “E” is enablement. Make sure that students have access to supporting material. This could be additional notes; a carefully curated reading list of papers and books; or sets of suitable interesting problems with hints for solutions, worked examples they can follow, and previous exam papers. Explain what amount of self-study will be needed if they are going to benefit from the course.
Have clear and accessible statements concerning the course content and learning outcomes – in particular, what students will be expected to be able to do as a result of their learning. In our study, the general feeling was that a limited amount of continuous assessment (10–20% of the total lecture course mark) encourages both participation and overall achievement, provided students are given good feedback to help them improve.
Next time you are planning to teach a new course, or looking through those decades-old notes, remember enthusiasm, engagement and enablement. It’s not rocket science, but it will certainly help the students learn it.
Orthopaedic implants that bear loads while bones heal, then disappear once they’re no longer needed, could become a reality thanks to a new technique for enhancing the mechanical properties of zinc alloys. Developed by researchers at Monash University in Australia, the technique involves controlling the orientation and size of microscopic grains in these strong yet biodegradable materials.
Implants such as plates and screws provide temporary support for fractured bones until they knit together again. Today, these implants are mainly made from sturdy materials such as stainless steel or titanium that remain in the body permanently. Such materials can, however, cause discomfort and bone loss, and subsequent injuries to the same area risk additional damage if the permanent implants warp or twist.
To address these problems, scientists have developed biodegradable alternatives that dissolve once the bone has healed. These alternatives include screws made from magnesium-based materials such as MgYREZr (trade name MAGNEZIX), MgYZnMn (NOVAMag) and MgCaZn (RESOMET). However, these materials have compressive yield strengths of just 50 to 260 MPa, which is too low to support bones that need to bear a patient’s weight. They also produce hydrogen gas as they degrade, possibly affecting how biological tissues regenerate.
Zinc alloys do not suffer from the hydrogen gas problem. They are biocompatible, dissolving slowly and safely in the body. There is even evidence that Zn2+ ions can help the body heal by stimulating bone formation. But again, their mechanical strength is low: at less than 30 MPa, they are even worse than magnesium in this respect.
Making zinc alloys strong enough for load-bearing orthopaedic implants is not easy. Mechanical strategies such as hot-extruding binary alloys have not helped much. And methods that focus on reducing the materials’ grain size (to hamper effects like dislocation slip) have run up against a discouraging problem: at body temperature (37 °C), ultrafine-grained Zn alloys become mechanically weaker as their so-called “creep resistance” decreases.
Grain size goes bigger
In the new work, a team led by materials scientist and engineer Jian-Feng Nie tried a different approach. By increasing grain size in Zn alloys rather than decreasing it, the Monash team was able to balance the alloys’ strength and creep resistance – something they say could offer a route to stronger zinc alloys for biodegradable implants.
In compression tests of extruded Zn–0.2 wt% Mg alloy samples with grain sizes of 11 μm, 29 μm and 47 μm, the team measured stress-strain curves that show a markedly higher yield strength for coarse-grained samples than for fine-grained ones. What is more, the compressive yield strengths of these coarser-grained zinc alloys are notably higher than those of MAGNEZIX, NOVAMag and RESOMET biodegradable magnesium alloys. At the upper end, they even rival those of high-strength medical-grade stainless steels.
The researchers attribute this increased compressive yield to a phenomenon called the inverse Hall–Petch effect. This effect comes about because larger grains favour metallurgical effects such as intra-granular pyramidal slip as well as a variation of a well-known metal phenomenon called twinning, in which a specific kind of defect forms when part of the material’s crystal structure flips its orientation. Larger grains also make the alloys more flexible, allowing them to better adapt to surrounding biological tissues. This is the opposite of what happens with smaller grains, which facilitate inter-granular grain boundary sliding and make alloys more rigid.
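For context, the conventional Hall–Petch relation (standard metallurgy background, not a formula taken from the paper) states that yield strength rises as the average grain size d shrinks; “inverse” Hall–Petch behaviour refers to the regime in which that trend weakens or reverses:

```latex
\sigma_y = \sigma_0 + \frac{k_y}{\sqrt{d}}
```

where σ_y is the yield strength, σ_0 the lattice friction stress and k_y a material-dependent strengthening coefficient.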
The new work, which is detailed in Nature, could aid the development of advanced biodegradable implants for orthopaedics, cardiovascular applications and other devices, says Nie. “With improved biocompatibility, these implants could be safer and do away with the need for removal surgeries, lowering patient risk and healthcare costs,” he tells Physics World. “What is more, new alloys and processing techniques could allow for more personalized treatments by tailoring materials to specific medical needs, ultimately improving patient outcomes.”
The Monash team now aims to improve the composition of the alloys and achieve more control over how they degrade. “Further studies on animals and then clinical trials will test their strength, safety and compatibility with the body,” says Nie. “After that, regulatory approvals will ensure that the biodegradable metals meet medical standards for orthopaedic implants.”
The team is also setting up a start-up company with the goal of developing and commercializing the materials, he adds.
Researchers in China have unveiled a 105-qubit quantum processor that can solve in minutes a quantum computation problem that would take billions of years using the world’s most powerful classical supercomputers. The result sets a new benchmark for claims of so-called “quantum advantage”, though some previous claims have faded after classical algorithms improved.
The fundamental promise of quantum computation is that it will reduce the computational resources required to solve certain problems. More precisely, it promises to reduce the rate at which resource requirements grow as problems become more complex. Evidence that a quantum computer can solve a problem faster than a classical computer – quantum advantage – is therefore a key measure of success.
The first claim of quantum advantage came in 2019, when researchers at Google reported that their 53-qubit Sycamore processor had solved a problem known as random circuit sampling (RCS) in just 200 seconds. Xiaobo Zhu, a physicist at the University of Science and Technology of China (USTC) in Hefei who co-led the latest work, describes RCS as follows: “First, you initialize all the qubits, then you run them in single-qubit and two-qubit gates and finally you read them out,” he says. “Since this process includes every key element of quantum computing, such as initializing the gate operations and readout, unless you have really good fidelity at each step you cannot demonstrate quantum advantage.”
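To make that recipe concrete, here is a deliberately tiny classical simulation of random circuit sampling (purely illustrative: it uses four qubits and generic random gates, whereas the experiments use more than a hundred superconducting qubits and hardware-native gate sets – which is exactly what makes them intractable for classical machines):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                     # toy register; real devices use ~100 qubits
dim = 2 ** n
state = np.zeros(dim, dtype=complex)
state[0] = 1.0                            # step 1: initialize all qubits to |0>

def apply_1q(state, gate, q):
    """Apply a 2x2 single-qubit gate to qubit q of the state vector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, q).reshape(dim)

def apply_cz(state, q1, q2):
    """Apply a controlled-Z gate between qubits q1 and q2."""
    psi = state.copy()
    for idx in range(dim):
        if (idx >> (n - 1 - q1)) & 1 and (idx >> (n - 1 - q2)) & 1:
            psi[idx] *= -1
    return psi

def random_unitary_2x2(rng):
    """Random single-qubit unitary, built from a QR decomposition."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Step 2: layers of random single-qubit gates interleaved with entangling gates
for layer in range(5):
    for q in range(n):
        state = apply_1q(state, random_unitary_2x2(rng), q)
    for q in range(0, n - 1, 2):
        state = apply_cz(state, q, q + 1)

# Step 3: readout - sample bitstrings from the output distribution
probs = np.abs(state) ** 2
samples = rng.choice(dim, size=10, p=probs)
print([format(s, f"0{n}b") for s in samples])
```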
At the time, the Google team claimed that the best supercomputers would take 10,000 years to solve this problem. However, subsequent improvements to classical algorithms reduced this to less than 15 seconds. This pattern has continued ever since, with experimentalists pushing quantum computing forward even as information theorists make quantum advantage harder to achieve by improving techniques used to simulate quantum algorithms on classical computers.
Recent claims of quantum advantage
In October 2024, Google researchers announced that their 67-qubit Sycamore processor had solved an RCS problem that would take an estimated 3600 years for the Frontier supercomputer at the US’s Oak Ridge National Laboratory to complete. In the latest work, published in Physical Review Letters, Jian-Wei Pan, Zhu and colleagues set the bar even higher. They show that their new Zuchongzhi 3.0 processor can complete in minutes an RCS calculation that they estimate would take Frontier billions of years using the best classical algorithms currently available.
To achieve this, they redesigned the readout circuit of their earlier Zuchongzhi processor to improve its efficiency, modified the structures of the qubits to increase their coherence times and increased the total number of superconducting qubits to 105. “We really upgraded every aspect and some parts of it were redesigned,” Zhu says.
Google’s latest processor, Willow, also uses 105 superconducting qubits, and in December 2024 researchers there announced that they had used it to demonstrate quantum error correction. This achievement, together with complementary advances in Rydberg atom qubits from Harvard University’s Mikhail Lukin and colleagues, was named Physics World’s Breakthrough of the Year in 2024. However, Zhu notes that Google has not yet produced any peer-reviewed research on using Willow for RCS, making it hard to compare the two systems directly.
The USTC team now plans to demonstrate quantum error correction on Zuchongzhi 3.0. This will involve using an error correction code such as the surface code to combine multiple physical qubits into a single “logical qubit” that is robust to errors. “The requirements for error-correction readout are much more difficult than for RCS,” Zhu notes. “RCS only needs one readout, whereas error-correction needs readout many times with very short readout times…Nevertheless, RCS can be a benchmark to show we have the tools to run the surface code. I hope that, in my lab, within a few months we can demonstrate a good-quality error correction code.”
“How progress gets made”
Quantum information theorist Bill Fefferman of the University of Chicago in the US praises the USTC team’s work, describing it as “how progress gets made”. However, he offers two caveats. The first is that recent demonstrations of quantum advantage do not have efficient classical verification schemes – meaning, in effect, that classical computers cannot check the quantum computer’s work. While the USTC researchers simulated a smaller problem on both classical and quantum computers and checked that the answers matched, Fefferman doesn’t think this is sufficient. “With the current experiments, at the moment you can’t simulate it efficiently, the verification doesn’t work anymore,” he says.
The second caveat is that the rigorous hardness arguments proving that the classical computational power needed to solve an RCS problem grows exponentially with the problem’s complexity apply only to situations with no noise. This is far from the case in today’s quantum computers, and Fefferman says this loophole has been exploited in many past quantum advantage experiments.
Still, he is upbeat about the field’s prospects. “The fact that the original estimates the experimentalists gave did not match some future algorithm’s performance is not a failure: I see that as progress on all fronts,” he says. “The theorists are learning more and more about how these systems work and improving their simulation algorithms and, based on that, the experimentalists are making their systems better and better.”
Sometimes, you just have to follow your instincts and let serendipity take care of the rest.
North Ronaldsay, a remote island north of mainland Orkney, has a population of about 50 and a lot of sheep. In the early 19th century, it thrived on the kelp ash industry, producing sodium carbonate (soda ash), potassium salts and iodine for soap and glass making.
But when cheaper alternatives became available, the island turned to its unique breed of seaweed-eating sheep. In 1832 islanders built a 12-mile-long dry stone wall around the island to keep the sheep on the shore, preserving inland pasture for crops.
My connection with North Ronaldsay began last summer when my partner, Sue Bowler, and I volunteered for the island’s Sheep Festival, where teams of like-minded people rebuild sections of the crumbling wall. That experience made us all the more excited when we learned that North Ronaldsay also had a science festival.
This year’s event took place on 14–16 March and getting there was no small undertaking. From our base in Leeds, the journey involved a 500-mile drive to a ferry, a crossing to the Orkney mainland and, finally, a flight in a light aircraft. With the island home to just 50 people, we had no idea how many would turn up, but instinct told us it was worth the trip.
Sue, who works for the Royal Astronomical Society (RAS), presented Back to the Moon, while together we ran hands-on maker activities, a geology walk and a trip to the lighthouse, where we explored light beams and Fresnel lenses.
The Yorkshire Branch of the Institute of Physics (IOP) provided laser-cut hoist kits to demonstrate levers and concepts like mechanical advantage, while the RAS shared Connecting the Dots – a modern LED circuit version of a Victorian after-dinner card set illustrating constellations.
Hands-on science Participants get stuck into maker activities at the festival. (Courtesy: @Lazy.Photon on Instagram)
Despite the island’s small size, the festival drew attendees from neighbouring islands, with 56 people participating in person and another 41 joining online. Across multiple events, the total accumulated attendance reached 314.
One thing I’ve always believed in science communication is to listen to your audience and never make assumptions. Orkney has a rich history of radio and maritime communications, shaped in part by the strategic importance of Scapa Flow during the Second World War.
Stars in their eyes Making a constellation board at the North Ronaldsay Science Festival. (Courtesy: @Lazy.Photon on Instagram)
The Orkney Wireless Museum is a testament to this legacy, and one of our festival guests had even reconstructed a working 1930s Baird television receiver for the museum.
Leaving North Ronaldsay was hard. The festival sparked fascinating conversations, and I hope we inspired a few young minds to explore physics and astronomy.
The author would like to thank Alexandra Wright (festival organizer), Lucinda Offer (education, outreach and events officer at the RAS) and Sue Bowler (editor of Astronomy & Geophysics).
I’m standing next to Yang Fugui in front of the High Energy Photon Source (HEPS) in Beijing’s Huairou District, about 50 km north of the centre of the Chinese capital. The HEPS isn’t just another synchrotron light source. When it opens later this year, it will be the world’s most advanced facility of its type. Construction of this giant device started in 2019, and for Yang – a physicist in charge of designing the machine’s beamlines – this is a critical moment.
“This machine has many applications, but now is the time to make sure it does new science,” says Yang, who is a research fellow at the Institute of High Energy Physics (IHEP) of the Chinese Academy of Sciences (CAS), which is building the new machine. With the ring completed, optimizing the beamlines will be vital if the facility is to open up new research areas.
From the air – Google will show you photos – the HEPS looks like a giant magnifying glass lying in a grassy field. But I’ve come by land, and from my perspective it resembles a large and gleaming low-walled silver sports stadium, surrounded by well-kept bushes, flowers and fountains.
I was previously in Beijing in 2019, when ground was broken for the HEPS and the site was literally a green field. Back then, I was told, the HEPS would take six-and-a-half years to build. The project is still on schedule and, if all continues to run as planned, the facility will come online in December 2025.
Lighting up the world
There are more than 50 synchrotron radiation sources around the world, producing intense, coherent beams of electromagnetic radiation used for experiments in everything from condensed-matter physics to biology. Three significant hardware breakthroughs, one after the other, have created natural divisions among synchrotron sources, leading them to be classed by their generation.
Along with Max IV in Sweden, SIRIUS in Brazil and the Extremely Brilliant Source at the European Synchrotron Radiation Facility (ESRF) in France, the HEPS is a fourth-generation source. These days such devices are vital and prestigious pieces of scientific infrastructure, but synchrotron radiation began life as an unexpected nuisance (Phys. Perspect. 10 438).
Classical electrodynamics says that charged particles undergoing acceleration – changing their momentum or velocity – radiate energy tangentially to their trajectories. Early accelerator builders assumed they could ignore the resulting energy losses. But in 1947, scientists building electron synchrotrons at the General Electric (GE) Research Laboratory in Schenectady, New York, were dismayed to find the phenomenon was real, sapping the energies of their devices.
Where it all began Synchrotron light is created whenever charged particles are accelerated. It gets its name because it was first observed in 1947 by scientists at the General Electric Research Laboratory in New York, who saw a bright speck of light through their synchrotron accelerator’s glass vacuum chamber – the visible portion of that energy. (Courtesy: AIP Emilio Segrè Visual Archives, John P. Blewett Collection)
Nuisances of physics, however, have a way of turning into treasured tools. By the early 1950s, scientists were using synchrotron light to study absorption spectra and other phenomena. By the mid-1960s, they were using it to examine the surface structures of materials. But a lot of this work was eclipsed by seemingly much sexier physics.
High-energy particle accelerators, such as CERN’s Proton Synchrotron and Brookhaven’s Alternating Gradient Synchrotron, were regarded as the most exciting, well-funded and biggest instruments in physics. They were the symbols of physics for politicians, press and the public – the machines that studied the fundamental structure of the world.
Researchers who had just discovered the uses of synchrotron light were forced to scrape together parts for their instruments. These “first-generation” synchrotrons, such as “Tantalus” in Wisconsin, the Stanford Synchrotron Radiation Project in California, and the Cambridge Electron Accelerator in Massachusetts, were cobbled together from discarded pieces of high-energy accelerators or grafted onto them. They were known as “parasites”.
Early adopter A drawing of plans for the Stanford Synchrotron Radiation Project in the US, which became one of the “first generation” of dedicated synchrotron-light sources when it opened in 1974. (Courtesy: SLAC – Zawojski)
In the 1970s, accelerator physicists realized that synchrotron sources could become more useful by shrinking the angular divergence of the electron beam, thereby improving the “brightness”. Renate Chasman and Kenneth Green devised a magnet array to maximize this property. Dubbed the “Chasman–Green lattice”, it begat a second generation of dedicated light sources – built, not borrowed.
Hard on the heels of the Synchrotron Radiation Source, which opened in the UK in 1981, the National Synchrotron Light Source (NSLS I) at Brookhaven was the first second-generation source to use such a lattice. China’s oldest light source, the Beijing Synchrotron Radiation Facility, which opened to users early in 1991, had a Chasman–Green lattice but also had to skim photons off an accelerator; it was a first-generation machine with a second-generation lattice. China’s first fully second-generation machine was the Hefei Light Source, which opened later that year.
By then, instruments called “undulators” were already starting to be incorporated into light sources. They increased brightness hundreds-fold by wiggling the electron beam up and down, so that the radiation emitted at each wiggle adds coherently. While undulators had been inserted into second-generation sources, the third generation built them in from the start.
Bright thinking Consisting of a periodic array of dipole magnets (red and green blocks), undulators have a static magnetic field that alternates with a wavelength λu. An electron beam passing through the magnets is forced to oscillate, emitting light hundreds of times brighter than would otherwise be possible (orange). Such undulators were added to second-generation synchrotron sources – but third-generation facilities had them built in from the start. (Courtesy: Creative Commons Attribution-Share Alike 3.0 Unported license)
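To give a sense of the numbers involved, the sketch below applies the standard on-axis undulator equation, λ = (λu/2γ²)(1 + K²/2), at the HEPS’s quoted 6 GeV ring energy. The undulator period and deflection parameter K used here are illustrative assumptions, not HEPS design values.

```python
# Back-of-envelope undulator equation (textbook form; period and K are assumed,
# only the 6 GeV ring energy comes from the article).
E_GeV = 6.0
gamma = E_GeV * 1e3 / 0.511          # electron Lorentz factor (m_e c^2 = 0.511 MeV)
lambda_u = 0.02                      # undulator period: assume 20 mm
K = 2.0                              # deflection parameter: assumed

lam = (lambda_u / (2 * gamma**2)) * (1 + K**2 / 2)   # fundamental wavelength (m)
lam_nm = lam * 1e9
print(f"fundamental: {lam_nm:.2f} nm, i.e. ~{1.2398 / lam_nm:.1f} keV hard X-rays")
```

With these assumed numbers the fundamental lands around 0.2 nm, in the hard X-ray range; shortening the undulator period or raising the beam energy pushes it harder still.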
The first of these light sources was the ESRF, which opened to users in 1994. It was followed by the Advanced Photon Source (APS) at Argonne National Laboratory in 1995 and SPring-8 in Japan in 1997. The first third-generation source on the Chinese mainland was the Shanghai Synchrotron Radiation Facility, which opened in 2009.
In the 2010s, “multi-bend achromat” magnet lattices – arrays of many compact bending magnets – drastically shrank the emittance of the electron beam, further boosting brilliance. Several third-generation machines, including the APS, have been upgraded with such achromats, turning them into fourth-generation sources. SIRIUS, which has an energy of 3 GeV, was the first fourth-generation machine to be built from scratch.
Next in sequence The Advanced Photon Source at the Argonne National Laboratory in the US, which is a third-generation synchrotron-light source. (Courtesy: Argonne National Laboratory)
Set to operate at 6 GeV, the HEPS will be the first high-energy fourth-generation machine built from scratch. It is a step nearer to the “diffraction limit” ultimately imposed by the uncertainty principle, which restricts how precisely certain pairs of properties can be specified at once. Below that limit, shrinking the electron beam further brings no additional gain in brilliance. The limit is still on the horizon, but the HEPS draws it closer.
The HEPS is being built next to a mountain range north of Beijing, where the bedrock provides a stable platform for the extraordinarily sensitive beams. Next door to the HEPS is a smaller stadium-like building for experimental labs and offices, and a yet smaller building for housing behind that.
Staff at the HEPS successfully stored the machine’s first electron beam in August 2024 and are now enhancing and optimizing parameters such as the electron beam current and lifetime. When it opens at the end of the year, the HEPS will have 14 beamlines, but it is designed eventually to have around 90 experimental stations. “Our task right now is to build more beamlines,” Yang told me.
Looking around
After studying physics at the University of Science and Technology of China in Hefei, Yang took his first job as a beamline designer at the HEPS. On my visit, the machine was still more than a year from being operational and the experimental hall surrounding the ring was open. It is spacious, unlike many US light sources I’ve been to, which tend to be cramped after numerous upgrades of the machine and beamlines.
As with any light source, the main feature of the HEPS is its storage ring, which consists of alternating straight sections and bends. At the bends, the electrons shed X-rays like rain off a spinning umbrella. Intense, energetic and finely tunable, the X-rays are carried off down beamlines, where they are put to use in almost everything from materials science to biomedicine.
New science Fourth-generation sources, such as the High Energy Photon Source (HEPS), need to attract academic and business users from home and abroad. But only time will tell what kind of new science might be made possible. (Courtesy: IHEP)
We pass other stations optimized for 2D, 3D and nanoscale structures. Occasionally, a motorized vehicle loaded with equipment whizzes by, or workers pass us on bicycles. Every so often, I see an overhead red banner in Chinese with white lettering. Translating, Yang says the banners promote safety, care and the need for precision in doing high-quality work, signs of the renowned Chinese work ethic.
We then come to what is labelled a “pink” beam. Unlike a “white” beam, which has a broad spread of wavelengths, or a monochromatic beam of a very specific colour such as red, a pink beam has a spread of wavelengths that is neither broad nor narrow. This gives a much higher flux – typically two orders of magnitude more than a monochromatic beam – allowing researchers to collect diffraction patterns far more quickly.
Another beamline, meanwhile, is labelled “tender” because its energy falls between 2 keV (“soft” X-rays) and 10 keV (“hard” X-rays). It’s for materials “somewhere between grilled steak and Jell-O”, one HEPS researcher quips to me, referring to the wobbly American dessert. A tender beam is used for studies that don’t require atomic-scale resolution, such as probing the magnetic behaviour of atoms.
Three beam pipes pass over the experimental hall to end stations that lie outside the building. They will be used, among other things, for applications in nanoscience, with a monochromator throwing out much of the X-ray beam to make it extremely coherent. We also pass a boxy, glass structure that is a clean room for making parts, as well as a straight pipe about 100 m long that will be used to monitor the tiny vibrations in the Earth that might affect the precision of the beam.
Challenging times
I once spoke to a director of the NSLS who would begin each day by walking around the facility, seeing what the experimentalists were up to and asking if they needed help. His trip usually took about 5–10 minutes; my tour with Yang took an hour.
But fourth-generation sources, such as the HEPS, face two daunting challenges. One is to cultivate a community of global users. Near the HEPS is CAS’s new Yanqi Lake campus, which lies on the other side of the mountains from Beijing and from where I can see the Great Wall meandering through the nearby hills. Faculty and students at CAS will form part of the HEPS’s academic user base, but how will the lab bring in researchers from abroad?
The HEPS will also need to attract users from business, convincing companies of the value of the machine. SPring-8 in Japan has industrial beamlines, including one sponsored by car giant Toyota, while China’s Shanghai machine has beamlines built by the China Petroleum and Chemical Corporation (Sinopec).
Yang is certainly open to collaboration with business partners. “We welcome industries, and can make full use of the machine, that would be enough,” he says. “If they contribute to building the beamlines, even better.”
The other big challenge for fourth-generation sources is to discover what new things are made possible by the vastly increased flux and brightness. A new generation of improved machines doesn’t necessarily produce breakthrough science; it’s not as if you can simply switch on a brighter machine and watch a field of new capabilities unfold before you.
Going fourth The BM18 beamline on the Extremely Brilliant Source (EBS) at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. The EBS is a dedicated fourth-generation light source, with the BM18 beamline being ideal for monitoring very slowly changing systems. (Courtesy: ESRF)
Instead, what tends to happen is that techniques that are demonstrations or proof-of-concept research in one generation of synchrotron become applied in niche areas in the next, and routine in the generation after that. A good example is speckle spectrometry, an interference-based technique that needs a sufficiently coherent light source and should become widely used at fourth-generation sources like the HEPS.
For the HEPS, the challenge will be to discover what new research in materials, chemistry, engineering and biomedicine these techniques will make possible. Whenever I ask experimentalists at light sources what kinds of new science the fourth-generation machines will allow, the inevitable answer is something like, “Ask me in 10 years!”
Yang can’t wait that long. “I started my career here,” he says, gesturing excitedly to the machine. “Now is the time – at the beginning – to try to make this machine do new science. If it can, I’ll end my career here!”
Cell separation Illustration of the fabricated optimal acousto-microfluidic chip. (Courtesy: Afshin Kouhkord and Naser Naserifar)
Analysing circulating tumour cells (CTCs) in the blood could help scientists detect cancer in the body. But separating CTCs from blood is a difficult, laborious process and requires large sample volumes.
Researchers at the K N Toosi University of Technology (KNTU) in Tehran, Iran, believe that ultrasonic waves could separate CTCs from red blood cells accurately, in real time and in an energy-efficient way. They describe their study in the journal Physics of Fluids.
“In a broader sense, we asked: ‘How can we design a microfluidic, lab-on-a-chip device powered by SAWs [standing acoustic waves] that remains simple enough for medical experts to use easily, while still delivering precise and efficient cell separation?’,” says senior author Naser Naserifar, an assistant professor in mechanical engineering at KNTU. “We became interested in acoustofluidics because it offers strong, biocompatible forces that effectively handle cells with minimal damage.”
Acoustic waves can deliver enough force to move cells over small distances without damaging them. The researchers used dual pressure acoustic fields at critical positions in a microchannel to separate CTCs from other cells. The CTCs are gathered at an outlet for further analyses, cultures and laboratory procedures.
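The physics behind that nudging can be captured by the standard textbook expression for the primary acoustic radiation force on a small sphere in a standing wave. The sketch below evaluates it for cell-like parameters; all of the numbers are illustrative assumptions and are not taken from the KNTU paper.

```python
# Standard (Yosioka-Kawasima) primary acoustic radiation force on a small sphere
# in a 1D standing wave: the force that steers suspended cells toward pressure
# nodes. All parameter values below are illustrative, not from the paper.
import math

def contrast_factor(rho_p, rho_f, c_p, c_f):
    """Acoustic contrast factor for a particle (p) suspended in a fluid (f)."""
    kappa_p = 1 / (rho_p * c_p**2)             # compressibilities
    kappa_f = 1 / (rho_f * c_f**2)
    rho_term = (5 * rho_p - 2 * rho_f) / (2 * rho_p + rho_f)
    return (rho_term - kappa_p / kappa_f) / 3

def radiation_force(a, freq, E_ac, phi, c_f, x):
    """Force (N) on a sphere of radius a at distance x from a pressure node."""
    k = 2 * math.pi * freq / c_f               # acoustic wavenumber
    return 4 * math.pi * phi * k * a**3 * E_ac * math.sin(2 * k * x)

# Cell-like sphere in water, 10 MHz standing wave, modest acoustic energy density
phi = contrast_factor(rho_p=1050, rho_f=1000, c_p=1700, c_f=1500)
F = radiation_force(a=8e-6, freq=10e6, E_ac=10, phi=phi, c_f=1500, x=10e-6)
print(f"contrast factor {phi:.2f}; force on an 8-micron cell ~{F:.1e} N")
```

Because the force scales with the cube of the particle radius, larger circulating tumour cells feel a much stronger push than smaller red blood cells, which is the basic handle that acoustofluidic separators exploit.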
In the process of designing the chip, the researchers integrated computational modelling, experimental analysis and artificial intelligence (AI) algorithms to analyse acoustofluidic phenomena and generate datasets that predict CTC migration in the body.
“We introduced an acoustofluidic microchannel with two optimized acoustic zones, enabling fast, accurate separation of CTCs from RBCs [red blood cells],” explains Afshin Kouhkord, who performed the work while a master’s student in the Advance Research in Micro And Nano Systems Lab at KNTU. “Despite the added complexity under the hood, the resulting chip is designed for simple operation in a clinical environment.”
So far, the researchers have evaluated the device with numerical simulations and tested it using a physical prototype. Simulations modelled fluid flow, acoustic pressure fields and particle trajectories. The physical prototype was made of lithium niobate, with polystyrene microspheres used as surrogates for red blood cells and CTCs. Results from the prototype agreed with numerical simulations to within 3.5%.
“This innovative approach in laboratory-on-chip technology paves the way for personalized medicine, real-time molecular analysis and point-of-care diagnostics,” Kouhkord and Naserifar write.
The researchers are now refining their design, aiming for a portable device that could be operated with a small battery pack in resource-limited and remote environments.
D-Wave Systems has used quantum annealing to do simulations of quantum magnetic phase transitions. The company claims that some of their calculations would be beyond the capabilities of the most powerful conventional (classical) computers – an achievement referred to as quantum advantage. This would mark the first time quantum computers had achieved such a feat for a practical physics problem.
However, the claim has been challenged by two independent groups of researchers in Switzerland and the US, who have published papers on the arXiv preprint server that report that similar calculations could be done using classical computers. D-Wave’s experts believe these classical results fall well short of the company’s own accomplishments, and some independent experts agree with D-Wave.
While most companies trying to build practical quantum computers are developing “universal” or “gate model” quantum systems, US-based D-Wave has principally focused on quantum annealing devices. While such systems are less programmable than gate model systems, the approach has allowed D-Wave to build machines with many more quantum bits (qubits) than any of its competitors. Whereas researchers at Google Quantum AI and researchers in China have, independently, recently unveiled 105-qubit universal quantum processors, some of D-Wave’s have more than 5000 qubits. Moreover, D-Wave’s systems are already in practical use, with hardware owned by the Japanese mobile phone company NTT Docomo being used to optimize cell tower operations. Systems are also being used for network optimization at motor companies, food producers and elsewhere.
Trevor Lanting, the chief development officer at D-Wave, explains the central principle behind quantum-annealing computation: “You have a network of qubits with programmable couplings and weights between those devices and then you program in a certain configuration – a certain bias on all of the connections in the annealing processor,” he says. The quantum-annealing algorithm starts the system in a superposition of all possible states. As the quantum fluctuations driving that superposition are slowly switched off, the system settles into its most energetically favoured state – which is the desired solution.
Quantum hiking
Lanting compares this to a hiker in the mountains searching for the lowest point on a landscape. “As a classical hiker all you can really do is start going downhill until you get to a minimum,” he explains. “The problem is that, because you’re not doing a global search, you could get stuck in a local valley that isn’t at the minimum elevation.” By starting out in a quantum superposition of all possible states (or locations in the mountains), however, quantum annealing is able to find the global potential minimum.
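For readers curious what “programming in a configuration” amounts to, here is a minimal sketch – not D-Wave’s own software – of the kind of Ising problem an annealer encodes: spins of ±1 with programmable weights and couplings, where the answer is the configuration of lowest energy. This tiny example simply checks every configuration; an annealer is designed to reach the same minimum-energy state without an exhaustive search.

```python
# A minimal sketch (not D-Wave's Ocean software) of an Ising problem: spins
# s_i = +/-1 with programmable per-qubit weights h and pairwise couplings J.
# The "solution" is the spin configuration with the lowest energy.
from itertools import product

h = {0: 0.5, 1: -0.3, 2: 0.2}                  # biases (made-up values)
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.4}   # couplings (made-up values)

def ising_energy(s):
    """Energy of spin configuration s under the programmed h and J."""
    e = sum(h[i] * s[i] for i in h)
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Brute force over all 2^3 configurations; an annealer reaches the same minimum
# by starting in superposition and slowly removing the quantum fluctuations.
best = min(product([-1, 1], repeat=3), key=ising_energy)
print("lowest-energy spins:", best, "energy:", round(ising_energy(best), 3))
```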
In the new work, researchers at D-Wave and elsewhere set out to show that their machines could use quantum annealing to solve practical physics problems beyond the reach of classical computers. The researchers used two different 1200-qubit processors to model magnetic quantum phase transitions. This is a similar problem to one studied in gate-model systems by researchers at Google and Harvard University in independent work announced in February.
“When water freezes into ice, you can sometimes see patterns in the ice crystal, and this is a result of the dynamics of the phase transition,” explains Andrew King, who is senior distinguished scientist at D-Wave and the lead author of a paper describing the work. “The experiments that we’re demonstrating shed light on a quantum analogue of this phenomenon taking place in a magnetic material that has been programmed into our quantum processors and a phase transition driven by a magnetic field.” Understanding such phase transitions is important in the discovery and design of new magnetic materials.
Quantum versus classical
The researchers studied multiple configurations, comprising ever more spins arranged in ever more complex lattice structures. The company says that its system performed the most complex simulation in minutes. The researchers also ascertained how long it would take to do the simulations using several leading classical computation techniques, including neural-network methods, and how the time to achieve a solution grew with the complexity of the problem. Based on this, they extrapolated that the most complex lattices would require almost a million years on Frontier, which is one of the world’s most powerful supercomputers.
However, two independent groups – one at EPFL in Switzerland and one at the Flatiron Institute in the US – have posted papers on the arXiv preprint server claiming to have done some of the less complex calculations using classical computers. They argue that their results should scale simply to larger sizes; the implication being that classical computers could solve the more complicated problems addressed by D-Wave.
King has a simple response: “You don’t just need to do the easy simulations, you need to do the hard ones as well, and nobody has demonstrated that.” Lanting adds that “I see this as a healthy back and forth between quantum and classical methods, but I really think that, with these results, we’re pulling ahead of classical methods on the biggest scales we can calculate”.
Very interesting work
Frank Verstraete of the University of Cambridge is unsurprised by some scientists’ scepticism. “D-Wave have historically been the absolute champions at overselling what they did,” he says. “But now it seems they’re doing something nobody else can reproduce, and in that sense it’s very interesting.” He does note, however, that the specific problem chosen is not, in his view, an interesting one from a physics perspective, and has been chosen purely because it is difficult for a classical computer.
Daniel Lidar of the University of Southern California, who has previously collaborated with D-Wave on similar problems but was not involved in the current work, says “I do think this is quite the breakthrough…The ability to anneal very fast on the timescales of the coherence times of the qubits has now become possible, and that’s really a game changer here.” He concludes that “the arms race is destined to continue between quantum and classical simulations, and because, in all likelihood, these are problems that are extremely hard classically, I think the quantum win is going to become more and more indisputable.”
Artificial intelligence is transforming physics at an unprecedented pace. In the latest episode of Physics World Stories, host Andrew Glester is joined by three expert guests to explore AI’s impact on discovery, research and the future of the field.
Tony Hey, a physicist who worked with Richard Feynman and Murray Gell-Mann at Caltech in the 1970s, shares his perspective on AI’s role in computation and discovery. A former vice-president of Microsoft Research Connections, he also edited the Feynman Lectures on Computation (Anniversary Edition), a key text on physics and computing.
Caterina Doglioni, a particle physicist at the University of Manchester and part of CERN’s ATLAS collaboration, explains how AI is unlocking new physics at the Large Hadron Collider. She sees big potential but warns against relying too much on AI’s “black box” models without truly understanding nature’s behaviour.
Felice Frankel, a science photographer and MIT research scientist, discusses AI’s promise for visualizing science. However, she is concerned about its potential to manipulate scientific data and imagery – distorting reality. Frankel wrote about the need for an ethical code of conduct for AI in science imagery in this recent Nature essay.
The episode also questions the environmental cost of AI’s vast energy demands. As AI becomes central to physics, should researchers worry about its sustainability? What responsibility do physicists have in managing its impact?
While the physics of uncorking a bottle of champagne has been well documented, less is known about the mechanisms at play when opening a swing-top bottle of beer.
Physicist Max Koch from the University of Göttingen in Germany decided to find out more.
Koch, who is a keen homebrewer, and colleagues used a high-speed camera and a microphone to capture what was going on, and combined the measurements with computational fluid-dynamics simulations.
When a pressurized carbonated bottle is opened, the difference between the gas pressure in the bottleneck and the ambient pressure outside drives a rapid escape of gas from the bottle, which can reach the speed of sound.
In a champagne bottle, this results in the creation of a Mach disc as well as the classic “pop” sound as it is uncorked.
To investigate the gas dynamics in swing-top bottles, Koch and colleagues examined transparent 0.33 litre bottles containing home-brewed ginger beer under 2–5 bars of pressure.
The team found that the sound emitted when opening the bottles – which can be described as an “ah” sound – wasn’t due to a single shockwave, but rather to condensation in the bottleneck forming a standing wave.
“The pop’s frequency is much lower than the resonation if you blow on the full bottle like a whistle,” notes Koch. “This is caused by the sudden expansion of the carbon dioxide and air mixture in the bottle, as well as a strong cooling effect to about minus 50 degrees Celsius, which reduces sound speed.”
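The quoted cooling effect is roughly what simple ideal-gas arithmetic predicts. The sketch below estimates the sound speed c = √(γRT/M) for air and for carbon dioxide at room temperature and at –50 °C; the gas properties are textbook values, not figures from Koch’s paper.

```python
# Rough ideal-gas arithmetic behind the quoted cooling effect (textbook gas
# properties, not values from the paper): sound speed c = sqrt(gamma * R * T / M).
import math

R = 8.314                        # gas constant, J/(mol K)
gases = {"air": (1.40, 0.029),   # (heat-capacity ratio gamma, molar mass kg/mol)
         "CO2": (1.30, 0.044)}

for name, (gamma, M) in gases.items():
    c_room = math.sqrt(gamma * R * 293 / M)   # ~20 C
    c_cold = math.sqrt(gamma * R * 223 / M)   # ~-50 C
    print(f"{name}: {c_room:.0f} m/s at 20 C  ->  {c_cold:.0f} m/s at -50 C")
```

The drop from roughly 340 m/s to below 300 m/s (lower still for CO2-rich gas) is consistent with the lower-than-expected pitch Koch describes.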
The team also investigated the sloshing of the beverage as it is opened. First, the dissolved carbon dioxide inside the beer causes the liquid level to rise, while the motion of the bottle as it opens also causes the liquid to slosh.
Another effect during opening is the bottle-top hitting the glass with its sharp edge. This triggers further “gushing” in the liquid due to the enhanced formation of bubbles.
There are still some unanswered questions, however, which will require further work. “One thing we didn’t resolve is that our numerical simulations showed an initial strong peak in the acoustic emission before the short ‘ah’ resonance, but this peak was absent in the experimentation,” adds Koch.
Scientists who have been publicly accused of sexual misconduct see a significant and immediate decrease in the rate at which their work is cited, according to a study by behavioural scientists in the US. However, researchers who are publicly accused of scientific misconduct are found not to suffer the same drop in citations (PLOS One 20 e0317736). Despite their flaws, citation rates are often seen as a marker of impact and quality.
The study was carried out by a team led by Giulia Maimone from the University of California, Los Angeles, who collected data from the Web of Science covering 31,941 scientific publications across 18 disciplines. They then analysed the citation rates for 5888 papers authored by 30 researchers accused of either sexual or scientific misconduct, the latter including data fabrication, falsification and plagiarism.
Maimone told Physics World that they used strict selection criteria to ensure that the two groups of academics were comparable and that the accusations against them were public. This meant her team only used scholars whose misconduct allegations had been reported in the media and had “detailed accounts of the allegations online”.
Maimone’s team concluded that papers by scientists accused of sexual misconduct experienced a significant drop in citations in the three years after the allegations became public, compared with a “control” group of academics of similar professional standing. Those accused of scientific fraud, meanwhile, saw no statistically significant change in the citation rates of their papers.
Further work
To further explore attitudes towards sexual and scientific misconduct, the researchers surveyed 231 non-academics and 240 academics. The non-academics considered sexual misconduct more reprehensible than scientific misconduct and more deserving of punishment, while academics claimed that they would be more likely to continue citing researchers accused of sexual misconduct than those accused of scientific misconduct. “Exactly the opposite of what we observe in the real data,” adds Maimone.
According to the researchers, there are two possible explanations for this discrepancy. One is that academics, according to Maimone, “overestimate their ability to disentangle the scientists from the science”. Another is that scientists are aware that they would not cite sexual harassers, but they are unwilling to admit it because they feel they should take a harsher professional approach towards scientific misconduct.
Maimone says they would now like to explore the longer-term consequences of misconduct as well as the psychological mechanisms behind the citation drop for those accused of sexual misconduct. “Do [academics] simply want to distance themselves from these allegations or are they actively trying to punish these scholars?” she asks.
From the Global Physics Summit in Anaheim, California
Some of the most fascinating people that you meet at American Physical Society meetings are not actually physicists, and Bruce Rosenbaum is no exception. Based in Massachusetts, Rosenbaum is a maker of beautiful steampunk objects and he is in Anaheim with a quantum-related creation (see figure).
At first glance Rosenbaum’s sculpture of a “quantum engine” fits in nicely at a conference exhibition that features gleaming vacuum chambers and other such things. However, this lovely artistic object is meant to be admired, rather than being a functioning machine.
At the centre of the object is a small vacuum chamber that could hold a single trapped ion – which could be operated as a quantum engine. Lasers are pointed at the ion through the chamber windows, and the chamber is surrounded by a spherical structure that represents both the Bloch sphere of quantum physics and an armillary sphere – a device used to demonstrate the motions of celestial objects in the days before computers. But to someone who, many years ago, did some electron spectroscopy, the rings are more reminiscent of the Helmholtz coils that would screen the ion from Earth’s magnetic field.
I should make it clear that neither the vacuum chamber nor the lasers are real – and there is no trapped ion. However, a real quantum engine based on a trapped ion has been created in a real physics lab. So, in principle, the sculpture could be made into a functional device by using “real components”.
Past and future connections
In my mind, the object symbolizes the connection between the state-of-the-art today (the trapped-ion qubit) and the many technologies that have come before (armillary sphere).
While Rosenbaum does not have a background in physics, I think he has a kinship with the thousands of experimental physicists who have built devices that bear a striking resemblance to this object. Indeed, some physicists were involved in the development of this beautiful object, including Nicole Yunger Halpern of the University of Maryland. Yunger Halpern is a theorist who uses the ideas of quantum information to study thermodynamics. She describes the field as “quantum steampunk” because, like the artistic genre of steampunk, it combines 19th-century concepts (thermodynamics) with 21st-century quantum science and technology.
I had a lovely chat with Rosenbaum and he had some very interesting things to say about the intersection of creativity and technology – things that are highly relevant to physicists. I hope to have him and perhaps one of his physicist colleagues on a future episode of the Physics World Weekly podcast.
When physicists got their first insights into the quantum world more than a century ago, they found it puzzling to say the least. But gradually, and through clever theoretical and experimental work, a consistent quantum theory emerged.
Two physicists who played crucial roles in this evolution were Albert Einstein and John Bell. In this episode of the Physics World Weekly podcast, the physicist and cryptographer Artur Ekert explains how a quantum paradox identified by Einstein and colleagues in 1935 inspired a profound theoretical breakthrough by Bell three decades later.
Ekert, who splits his time between the University of Oxford and the National University of Singapore, describes how he used Bell’s theorem to create a pioneering quantum cryptography protocol and he also chats about current research in quantum physics and beyond.
The European Space Agency (ESA) has released the first batch of survey data from its €1.4bn Euclid mission. The release includes a preview of its ‘deep field’ regions, where in just one week of observations Euclid has already spotted 26 million galaxies, as well as many transient phenomena such as supernovae and gamma-ray bursts. The dataset is published along with 27 scientific papers that will be submitted to the journal Astronomy & Astrophysics.
The dataset also features a catalogue of 380 000 galaxies that have been identified by artificial intelligence or “citizen-science” efforts. They include galaxies with spiral arms, central bars and “tidal tails”, which indicate merging galaxies.
The release also includes 500 gravitational-lens candidates discovered with the help of AI and citizen science. Gravitational lensing is when light from more distant galaxies is bent around closer galaxies by gravity; it can help identify where dark matter is located and probe its properties.
“For the past decade, my research has been defined by painstakingly analyzing the same 50 strong gravitational lenses, but with the data release, I was handed 500 new strong lenses in under a week” says astronomer James Nightingale from Newcastle University. “It’s a seismic shift — transforming how I do science practically overnight.”
The data released today still represents only 0.4% of the total number of galaxies that Euclid is expected to image over its lifetime. Euclid will capture images of more than 1.5 billion galaxies over six years, sending back around 100 GB of data every day.
Light curves: a collection of gravitational lenses that Euclid captured in its first observations of the deep field areas. (Courtesy: ESA/Euclid/Euclid Consortium/NASA, image processing by M Walmsley, M Huertas-Company, J-C Cuillandre)
“Euclid shows itself once again to be the ultimate discovery machine. It is surveying galaxies on the grandest scale, enabling us to explore our cosmic history and the invisible forces shaping our universe,” says ESA’s science director, Carole Mundell. “With the release of the first data from Euclid’s survey, we are unlocking a treasure trove of information for scientists to dive into and tackle some of the most intriguing questions in modern science”.
More data to come
Euclid was launched in July 2023 and is currently located in a spot in space called Lagrange Point 2 – a gravitational balance point some 1.5 million kilometres beyond the Earth’s orbit around the Sun. The Euclid Consortium comprises some 2600 members from more than 15 countries.
Euclid has a 1.2 m-diameter telescope, a camera and a spectrometer that it uses to plot a 3D map of the distribution of galaxies. The images it takes are about four times as sharp as current ground-based telescopes.
Researchers have demonstrated that they can remotely detect radioactive material from 10 m away using short-pulse CO2 lasers – a distance over ten times farther than achieved via previous methods.
Conventional radiation detectors, such as Geiger counters, detect particles that are emitted by the radioactive material, typically limiting their operational range to the material’s direct vicinity. The new method, developed by a research team headed up at the University of Maryland, instead leverages the ionization in the surrounding air, enabling detection from much greater distances.
The study may one day lead to remote sensing technologies that could be used in nuclear disaster response and nuclear security.
Using atmospheric ionization
Radioactive materials emit particles – such as alpha, beta or gamma particles – that can ionize air molecules, creating free electrons and negative ions. These charged particles are typically present at very low concentrations, making them difficult to detect.
Senior author Howard Milchberg and colleagues – also from Brookhaven National Laboratory, Los Alamos National Laboratory and Lawrence Livermore National Laboratory – demonstrated that CO2 lasers could accelerate these charged particles, causing them to collide with neutral gas molecules, in turn creating further ionization. These additional free charges would then undergo the same laser-induced accelerations and collisions, leading to a cascade of charged particles.
This effect, known as “electron avalanche breakdown”, can create microplasmas that scatter laser light. By measuring the profile of the backscattered light, researchers can detect the presence of radioactive material.
The team tested their technique using a 3.6-mCi polonium-210 alpha particle source at a standoff distance of 10 m, significantly longer than previous experiments that used different types of lasers and electromagnetic radiation sources.
“The researchers successfully demonstrated 10-m standoff detection of radioactive material, significantly surpassing the previous range of approximately 1 m,” says Choi.
Milchberg and collaborators had previously used a mid-infrared laser in a similar experiment in 2019. Changing to a long-wavelength (9.2 μm) CO2 laser brought significant advantages, he says.
“You can’t use any laser to do this cascading breakdown process,” Milchberg explains. The CO2 laser’s wavelength was able to enhance the avalanche process, while being low energy enough to not create its own ionization sources. “CO2 is sort of the limit for long wavelengths on powerful lasers and it turns out CO2 lasers are very, very efficient as well,” he says. “So this is like a sweet spot.”
Imaging microplasmas
The team also used a CMOS camera to capture visible-light emissions from the microplasmas. Milchberg says that this fluorescence around radioactive sources resembled balls of plasma, indicating the localized regions where electron avalanche breakdowns had occurred.
By counting these “plasma balls” and calibrating them against the backscattered laser signal, the researchers could link fluorescence intensity to the density of ionization in the air, and use that to determine the type of radiation source.
The CMOS imagers, however, had to be placed close to the measured radiation source, reducing their applicability to remote sensing. “Although fluorescence imaging is not practical for field deployment due to the need for close-range cameras, it provides a valuable calibration tool,” Milchberg says.
Scaling to longer distances
The researchers believe their method can be extended to standoff distances exceeding 100 m. The primary limitation is the laser’s focusing geometry, which would affect the regions in which it could trigger an avalanche breakdown. A longer focal length would require a larger laser aperture but could enable kilometre-scale detection.
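That trade-off follows from simple Gaussian-beam optics. In the illustrative sketch below (the beam sizes are assumptions, not the Maryland system’s specifications), the focused spot radius scales as λf/πw0, so keeping the spot – and hence the intensity needed to seed an avalanche – the same at ten times the standoff distance requires roughly ten times the launch aperture.

```python
# Gaussian-beam arithmetic behind the focusing trade-off (illustrative values,
# not the Maryland system's specs): a collimated beam of radius w0 focused at
# distance f reaches a spot radius w_f ~ lambda * f / (pi * w0), and the focal
# region (Rayleigh range) is z_R = pi * w_f^2 / lambda.
import math

lam = 9.2e-6          # CO2 laser wavelength (m)
w0 = 0.05             # beam radius at the focusing optic (m), assumed

for f in (10, 100):   # standoff distances (m)
    w_f = lam * f / (math.pi * w0)
    z_R = math.pi * w_f**2 / lam
    print(f"f = {f:>3} m: spot radius ~{w_f*1e3:.1f} mm, focal region ~{z_R:.2f} m")
```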
Choi points out, however, that deploying a CO2 laser may be difficult in real-world applications. “A CO₂ laser is a bulky system, making it challenging to deploy in a portable manner in the field,” she says, adding that mounting the laser for long-range detection may be a solution.
Milchberg says that the next steps will be to continue developing a technique that can differentiate between different types of radioactive sources completely remotely. Choi agrees, noting that accurately quantifying both the amount and type of radioactive material continues to be a significant hurdle to realising remote sensing technologies in the field.
“There’s also the question of environmental conditions,” says Milchberg, explaining that it is critical to ensure that detection techniques are robust against the noise introduced by aerosols or air turbulence.
The Square Kilometre Array (SKA) Observatory has released the first images from its partially built low-frequency telescope in Australia, known as SKA-Low.
The new SKA-Low image was created using 1024 two-metre-high antennas. It shows an area of the sky that would be obscured by a person’s clenched fist held at arm’s length.
Observed at 150 MHz to 175 MHz, the image contains 85 of the brightest known galaxies in that region, each with a black hole at its centre.
“We are demonstrating that the system as a whole is working,” notes SKA Observatory director-general Phil Diamond. “As the telescopes grow, and more stations and dishes come online, we’ll see the images improve in leaps and bounds and start to realise the full power of the SKAO.”
SKA-Low will ultimately have 131 072 two-metre-high antennas that will be clumped together in arrays to act as a single instrument.
These arrays collect faint signals from space and combine them to produce radio images of the sky, with the aim of answering some of cosmology’s most enigmatic questions, including what dark matter is, how galaxies form and whether there is other life in the universe.
When the full SKA-Low gazes at the same portion of sky as captured in the image released yesterday, it will be able to observe more than 600,000 galaxies.
“The bright galaxies we can see in this image are just the tip of the iceberg,” says George Heald, lead commissioning scientist for SKA-Low. “With the full telescope we will have the sensitivity to reveal the faintest and most distant galaxies, back to the early universe when the first stars and galaxies started to form.”
‘Milestone’ achieved
SKA-Low is one of two telescopes under construction by the observatory. The other, SKA-Mid, which observes the mid-frequency range, will include 197 three-storey dishes and is being built in South Africa.
The telescopes, with a combined price tag of £1bn, are projected to begin making science observations in 2028. They are being funded through a consortium of member states, including China, Germany and the UK.
University of Cambridge astrophysicist Eloy de Lera Acedo, who is principal investigator at his institution for the observatory’s science data processor, says the first image from SKA-Low is an “important milestone” for the project.
“It is worth remembering that these images now require a lot of work, and a lot more data to be captured with the telescope as it builds up, to reach the science quality level we all expect and hope for,” he adds.
Rob Fender, an astrophysicist at the University of Oxford, who is not directly involved in the SKA Observatory, says that the first image “hints at the enormous potential” for the array that will eventually “provide humanity’s deepest ever view of the universe at wavelengths longer than a metre”.
“I could have sworn I put it somewhere safe,” is something we’ve all said when looking for our keys, but the frustration of searching for lost objects is also a common, and very costly, headache for civil engineers. The few metres of earth under our feet are a tangle of pipes and cables that provide water, electricity, broadband and waste disposal. However, once this infrastructure is buried, it’s often difficult to locate it again.
“We damage pipes and cables in the ground roughly 60,000 times a year, which costs the country about 2.4 billion pounds,” explains Nicole Metje, a civil engineer at the University of Birmingham in the UK. “The ground is such a high risk, but also such a significant opportunity.”
The standard procedure for imaging the subsurface is to use electromagnetic waves. This is done either with ground penetrating radar (GPR), where the signal reflects off interfaces between objects in the ground, or with locators that use electromagnetic induction to find objects. Though they are stalwarts of the civil engineering toolbox, the performance of both these techniques is limited by many factors, including the soil type and moisture.
Physics at work Damage to underground infrastructure costs millions of pounds a year in the UK alone. That’s why there is a need to develop new methods to image the subsurface that don’t require holes to be dug or rely on electromagnetic pulses whose penetration depth is highly variable. (Courtesy: iStock/mikeuk)
Metje and her team in Birmingham have participated in several research projects improving subsurface mapping. But her career took an unexpected turn in 2009 when one of her colleagues was contacted out of the blue by Kai Bongs – a researcher in the Birmingham school of physics. Bongs, who became the director of the Institute for Quantum Technologies at the German Aerospace Centre (DLR) in 2023, explained that his group was building quantum devices to sense tiny changes in gravity and thought this might be just what the civil engineers needed. However, there was a problem. The device required a high-stability, low-noise environment – rarely compatible with the location of engineering surveys. But as Bongs spoke to more engineers he became more interested. “I understood why tunnels and sewers are very interesting,” he says, and saw an opportunity to “do something really meaningful and impactful”.
What lies beneath
Although most physicists are happy to treat g, the acceleration due to gravity, as 9.81 m/s2, it actually varies across the surface of Earth. Changes in g indicate the presence of buried objects and varying soil composition and can even signal the movement of tectonic plates and oceans. The engineers in Birmingham were well aware of this; classical devices that measure changes in gravity using the extension of springs are already used in engineering surveys, though they aren’t as widely adopted as electromagnetic signals. These machines – called gravimeters – don’t require holes to be dug and the measurement isn’t limited by soil conditions, but changes in the properties of the spring over time cause drift, requiring frequent recalibration.
More sensitive devices have been developed that use a levitating superconducting sphere. These devices have been used for long-term monitoring of geophysical phenomena such as tides, volcanos and seismic activity, but they are less appropriate for engineering surveys where speed and portability are of the essence.
The perfect test mass would be a single atom – it has no moving mechanical parts, can be swapped out for any of the same isotope, and its mass will never change. “Today or tomorrow or in 100 years’ time, it’ll be exactly the same,” says physicist Michael Holynski, the principal investigator of the UK Quantum Technology Hub for Sensors and Timing led by the University of Birmingham.
Falling atoms
The gravity-sensing project in Birmingham uses a technique called cold-atom interferometry, first demonstrated in 1991 by Steven Chu and Mark Kasevich at Stanford University in the US (Phys. Rev. Lett. 67 181). In the cold-atom interferometer, two atomic test masses fall from different heights, and g is calculated by comparing their displacement in a given time.
Because it’s a quantum object, a single atom can act as both test masses at once. To do this, the interferometer uses three laser pulses that send the atom along two trajectories. First, a laser pulse puts the atom in a superposition of two states, where one state gets a momentum “kick” and recoils away from the other. This means that when the atom is allowed to freefall, the state nearest the centre of the Earth accelerates faster. Halfway through the freefall, a second laser pulse then switches which state has the momentum kick. The two states start to catch up with each other, both still falling under gravity.
Finally, another laser pulse, identical to the first, is applied. If the acceleration due to gravity were constant everywhere in space, the two states would fall exactly the same distance and overlap at the end of the sequence. In this case, the final pulse would effectively reverse the first, and the atom would end up back in the ground state. However, because in the real world the atom’s acceleration changes as it falls through the gravity gradient, the two states don’t quite find each other at the end. Since the atom is wavelike, this spatial separation is equivalent to a phase difference. Now, the outcome of the final laser pulse is less certain; sometimes it will return the atom to the ground state, but sometimes it will collapse the wavefunction to the excited state instead.
If a cloud of millions of atoms is dropped at once, the proportion that finishes in each state (which is measured by making the atoms fluoresce) can be used to calculate the phase difference, which is proportional to the atom’s average gravitational acceleration.
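The underlying relation is simple and shows why the approach is so sensitive. In the standard textbook treatment the accumulated phase is φ = k_eff g T², where k_eff is the effective wavevector of the laser pulses and T the time between them; the sketch below plugs in illustrative numbers, not the Birmingham instrument’s specifications.

```python
# Why a falling-atom interferometer is so sensitive (textbook relation,
# illustrative numbers): phase phi = k_eff * g * T^2, and the fraction of atoms
# read out in the excited state varies as (1 - cos(phi)) / 2.
import math

wavelength = 780e-9                      # Rb-87 laser light, assumed
k_eff = 2 * (2 * math.pi / wavelength)   # two-photon momentum kick
T = 0.1                                  # free-fall time between pulses (s), assumed
g = 9.81                                 # m/s^2

phi = k_eff * g * T**2
shift_per_ppb = k_eff * (1e-9 * g) * T**2    # phase change for a 1 ppb change in g
print(f"total phase: {phi:.3e} rad")
print(f"a 1e-9 change in g shifts the phase by {shift_per_ppb:.1e} rad")
print(f"excited-state fraction at this phase: {(1 - math.cos(phi)) / 2:.3f}")
```

With these assumed numbers the total phase is over a million radians, so even a part-per-billion change in g produces a measurable shift in the fraction of atoms ending up in each state.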
To measure these phase shifts, the thermal noise of the atoms must be minimized. This can be achieved using a magneto-optical trap and laser cooling, a technique pioneered by Chu, in which spatially varying magnetic fields and lasers trap atoms and cool them close to absolute zero. Chu, along with William D Phillips and Claude Cohen-Tannoudji, was awarded the 1997 Nobel Prize in Physics for his work on laser cooling.
Bad vibrations
Unlike the spring or the superconducting gravimeter, the cold-atom device produces an absolute rather than a relative measurement of g. In their first demonstration, Chu and Kasevich measured the acceleration due to gravity to three parts in 100 million. This was about a million times better than previous attempts with single atoms, but it trailed behind the best absolute measurements, which were made using a macroscopic object in free fall.
“It’s always one thing to do the first demonstration of principle, and then it’s a different thing to really get it to a performance level where it actually is useful and competitive,” says Achim Peters, who started a PhD with Chu in 1992 and is now a researcher at the Humboldt University of Berlin.
Whether spring or quantum-based, gravimeters share the same major source of noise – vibrations. Although we don’t feel it, the ground, which is the test mass’s reference frame, is never completely still. According to the Einstein equivalence principle, we can’t differentiate the acceleration due to these vibrations from the acceleration of the test mass due to gravity.
When Peters was at Stanford he built a sophisticated vibration isolation system where the extension of mechanical springs was controlled by electronic feedback. This brought the quantum device in line with other state-of-the-art measurement techniques, but such a complex apparatus would be difficult to operate outside a laboratory.
However, if a cold-atom gravity sensor could operate outside without being hampered by vibrations it would have an instant advantage over spring devices, where vibrations have to be averaged out by taking longer measurements. “If we want to measure several hectares, you’re talking about three weeks or plus [with spring gravimeters],” explains Metje. “That takes a lot of time and therefore also a lot of cost.”
Enter the gravity gradiometer
A few years after Chu and Kasevich published the first cold-atom interferometer result, the US Navy declassified a technology that had been developed for submarines by Bell Aerospace (later acquired by Lockheed Martin) and which transformed the field of geophysics. This device – called a gravity gradiometer – calculated the gravity gradient by measuring the acceleration of several spinning discs. As well as finding buried objects, gravity measurements can pinpoint a geographical location, which gives gravity sensors applications in GPS-free navigation. Compared with a gravimeter, a gradiometer is more sensitive to nearby objects, and when the technology was declassified it was seized upon for oil and gas exploration. The Lockheed Martin device remains the industry standard – it measures the gravity gradient in three dimensions and its sophisticated vibration-isolation system means it can be used in the field, including in airborne surveys – but it is prohibitively costly for most researchers.
In 1998 Kasevich’s group demonstrated a gradiometer built from two cold-atom interferometers stacked one above the other, where the difference between the phases of the two atom clouds was used to calculate the gravity gradient (Phys. Rev. Lett. 81 971). In this configuration, the interferometry pulses illuminating the two clouds come from the same laser beams, which means that the vibrations that had previously required a complex damping system are cancelled out. In the laboratory, cold-atom gravity gradiometers have many applications in fundamental physics – they have been used to test the Einstein equivalence principle to one part in a trillion, and a 100 m tall interferometer currently under construction at Fermilab will be used to hunt for gravitational waves.
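Why the differential measurement rejects vibrations can be seen in a short sketch: the motion of the shared reference enters both phases identically and drops out of the difference. The wavevector, pulse spacing and cloud separation below are assumed illustrative values, not the published parameters.

```python
import numpy as np

k_eff = 2 * (2 * np.pi / 780e-9)     # rad/m, two-photon wavevector (illustrative)
T = 0.1                              # s, pulse spacing (assumed)
dz = 1.0                             # m, vertical separation of the two clouds (assumed)

def phase(g_local, a_vib):
    # Vibration of the shared reference mirror adds to the sensed acceleration
    return k_eff * (g_local + a_vib) * T**2

g_lower = 9.8100000                  # m/s^2 at the lower cloud
g_upper = 9.8099969                  # g falls by roughly 3.1e-6 m/s^2 per metre of height
a_vib = 1e-4 * np.random.randn()     # common vibration, unknown shot to shot

gradient = (phase(g_lower, a_vib) - phase(g_upper, a_vib)) / (k_eff * T**2 * dz)
print(f"recovered gradient: {gradient:.2e} per second squared")   # ~3.1e-6, vibration cancelled
```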
It was around this time, in 2000, when Bongs first encountered cold-atom interferometry, as a postdoc with Kasevich, then at Yale. He explains that the goal was to “get one of the lab-based systems, which were essentially the standard at the time, out into the field”. Even without the problem of vibrational noise, this was a significant challenge. Temperature fluctuations, external magnetic fields and laser stability will all limit the performance of the gradiometer. The portability of the system must also be balanced against the fact that a taller device will allow longer freefall and more sensitive measurements. What’s more, the interferometers will rarely be perfectly directed towards the centre of the Earth, which means the atoms fall slightly sideways relative to the laser beams.
In the summer of 2008, by which time Bongs was in Birmingham, Kasevich’s group, now back at Stanford, mounted a cold-atom gradiometer in a truck and measured the gravity gradient as they drove in and out of a loading bay on the Stanford campus. They measured a peak that coincided with the building’s outer wall, but this demonstration took place with a levelling platform and temperature control inside the truck. The demonstration of the first truly free-standing, outdoor cold-atom gradiometer was still up for grabs.
Ears to the ground
The portable cold-atom gravity sensor project in Birmingham began in earnest in 2011, as a collaboration between the engineers and the physicists. The team knew that building a device that was robust enough to operate outside would be only half the challenge. They also needed to make something cost-effective and easy to operate. “If you can manage to make the laser system small and compact and cheap and robust, then you more or less own quantum technologies,” says Bongs.
When lasers propagate in free space, small knocks and bumps easily misalign the optical components. To make their device portable, the researchers made an early decision to instead use optical fibres, which direct light to the right place even if the device is jolted during transportation or operation.
However, they quickly realized that this was easier said than done. In a standard magneto-optical trap, atoms are trapped and cooled in three dimensions by three orthogonal pairs of laser beams. In the team’s original configuration, this light came from three fibres that were split from a single laser. Bending and temperature fluctuations exert stresses on an optical fibre that alter the polarization of the light as it propagates. The unstable polarizations in the beams meant that the atom clouds were moving around in the optical traps. “It wasn’t very robust,” says Holynski. “We needed a different approach.”
To solve this problem, they adopted a new configuration in which light enters the chamber from the top and bottom, where it bounces off an arrangement of mirrors to create the two atom traps. Because the beams can’t be individually adjusted, this sacrifices some efficiency, but the team decided it was worth a try if it fixed the laser polarization problem.
In the world of quantum technologies, 1550 nm is something of a magic number. It is the most common wavelength for telecoms lasers because light at this wavelength propagates furthest in optical fibres. The telecoms industry has therefore invested significant time and money into developing robust lasers operating close to 1550 nm.
By lucky chance, 1550 nm is also almost twice the wavelength of the main resonant transition of rubidium-87 (780 nm), an alkali metal that is well suited to atom interferometry. Conveniently close to this transition are hyperfine transitions that can be used to cool the atoms, measure their final state and put them into a superposition for interferometry. Frequency doubling using nonlinear crystals is a well-established optical technique, so combining a rubidium interferometer with a telecoms laser was an ideal solution.
Out and about The quantum-based gravity sensor, pictured outside on the University of Birmingham campus. The blue tube houses the two interferometers and the black box houses the lasers and control electronics. (CC BY 4.0 Nature 602 590)
By 2018, as part of the hub and under contract with the UK Ministry of Defence, the team had assembled a freestanding gradiometer – a 2 m tall tube containing the two interferometers, attached to a box housing the electronics and the lasers, both mounted on wheels. The researchers performed outdoor trials in 2018 and 2019, including a trip to an underground cave in the Peak District, but they still weren’t getting the performance they wanted. “People get their hopes up,” says Holynski. “This was quite a big journey.”
The researchers worked out that another gamble they had made, this time to reduce the cost of the magnetic shield, wasn’t performing as well as hoped. External magnetic fields shift the atom’s energy levels, but unlike the phase shift due to gravity, this source of error is the same whether the momentum kick is directed up or down. By taking two successive measurements with a downwards and upwards kick, they thought they could remove magnetic noise, enabling them to reduce the cost of the expensive alloy they were using to shield the interferometers.
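In principle the trick is simple arithmetic on two phases: the gravitational contribution flips sign with the direction of the kick while the magnetic contribution does not, so half the difference isolates gravity. A toy sketch, with all values assumed for illustration rather than taken from the team’s pipeline:

```python
import math

k_eff = 2 * (2 * math.pi / 780e-9)   # rad/m, two-photon wavevector (illustrative)
T = 0.1                              # s, pulse spacing (assumed)
g_true = 9.81                        # m/s^2
phi_magnetic = 0.3                   # rad, unknown field-dependent phase (assumed constant)

phi_up = +k_eff * g_true * T**2 + phi_magnetic    # momentum kick directed upwards
phi_down = -k_eff * g_true * T**2 + phi_magnetic  # momentum kick directed downwards

# The magnetic term cancels in the half-difference, provided it really is the
# same for both shots; that assumption is what broke down in the field.
g_recovered = (phi_up - phi_down) / (2 * k_eff * T**2)
print(g_recovered)   # 9.81
```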
This worked as expected, but because they were operating outside a controlled laboratory environment, the large variation of the magnetic fields in space and time introduced other errors. It was back to the lab, where the team disassembled the sensor and rebuilt it again with full magnetic shielding.
By 2020 the researchers were ready to take the new device outside. However, the COVID-19 pandemic ground work to a halt and they had to wait until the following year.
Quantum tunnelling
“One of the things that changes about you when you work on gravity gradiometers is you start looking around for potential targets everywhere you go,” says Holynski. In March 2021 a team of physicists and engineers that included Bongs, Metje and Holynski took the newly rebuilt gradiometer for its first outside trial, where they trundled it repeatedly over a road on the University of Birmingham campus. They knew that running under the road was a two-by-two-metre hollow tunnel, built to carry utility lines. They also knew approximately where it was, but wanted to see if the gradiometer could find it.
The first time they did this, they noticed a dip in the gravity gradient that seemed to have the right dimensions for the tunnel, and when they repeated the measurements, they saw it again. Because of their previous unsuccessful attempts, Holynski remained trepidatious. “People get quite excited. And then you have to say to them, ‘Sorry, I don’t think that’s quite conclusive enough yet’.”
(a) A schematic of the 2021 test of the gravity gradiometer, with the hollow utility tunnel pictured to scale. (b) The hourglass configuration of the quantum gravity gradiometer. The atom clouds (green dots) are laser-cooled (red arrows) in magneto-optical traps formed using mirrors (blue). To measure the gravity gradient the atoms are subject to interferometry laser pulses (yellow arrows) under freefall (purple dots).
Elsewhere on campus, another team was busy analysing the data. The results, when they came, were consistent with a hollow object about two-by-two metres across and about a metre below the surface. Millions of people will have walked over that road without thinking once about what’s beneath it, but to the researchers this was the culmination of a decade of work, and proof that cold-atom gradiometers can operate outside the lab (Nature 602 590).
The valley of death
“It’s one more step in the direction of making quantum sensors available for real-world everyday use,” says Holger Müller, a physicist at the University of California, Berkeley. In 2019 Müller’s group published the results of a gravity survey it had taken with a cold-atom interferometer during a drive through the California hills (Sci. Adv. 5 10.1126/sciadv.aax0800). He is also involved in a NASA project that aims to perform atom interferometry on the International Space Station (Nature Communications 15 6414). Müller thinks that, for researchers especially, cold-atom gradiometers could make gravity-gradient surveys more accessible than the Lockheed Martin device allows.
By now, the Birmingham gravity gradiometer is well travelled. As well as land-based trials, it has been on two ship voyages, one lasting several weeks, to test its performance in different environments and its potential for use in navigation. The project has also become a flagship of the UK’s national quantum technologies programme, garnering industry partners including Network Rail and RSK and spinning out into the start-up DeltaG (of which Holynski is a co-founder). A project in France led by the company iXblue has also built a prototype gravity gradiometer, which has been demonstrated indoors (Phys. Rev. A 105 022801).
However, if cold-atom gravity gradiometers are to become an alternative to electromagnetic surveys or spring gravimeters, they must escape the “Valley of Death” – the critical phase in a technology journey when it has been demonstrated but not yet been commercialized.
This won’t be easy. The team has estimated that the gravity gradiometer currently performs about 1.5 times better than the industry-leading spring gravimeter. Spring gravimeters are small, easy to operate and significantly cheaper than the quantum alternative. The cost of the lasers in the quantum gradiometer alone is several hundred thousand pounds, compared with about £100,000 for a spring-based instrument.
The quantum device is also large, requires a team of scientists to operate and maintain it, and consumes much more power than a spring gravimeter. Beyond saving survey time, a potential advantage of the quantum gravity gradiometer is that, because it has no machined moving parts, it could be used for passive, long-term environmental monitoring. However, unless the power consumption is reduced it will be tricky to operate in remote conditions.
In the years since the first test, the team has built another prototype that is about half the size, consumes significantly less power, and delivers the cooling, detection and interferometry using a single laser, which will significantly reduce the total cost. Holynski explains that this system is a “work in progress” that is currently being tested in the laboratory.
A large focus of the group’s efforts has been bringing down the cost of the lasers. “We’ve taken available components from the telecom community and found ways to make them work in our system,” says Holynski. “Now we’re starting to work with the telecom community, the academic and industry community, to think ‘how can we twist their technology and make it cheaper to fit what we need?’”
When Chu and Kasevich demonstrated it for the first time, the idea of atom interferometry was already four decades old, having been proposed by David Bohm and later Eugene Wigner (Am. J. Phys. 31 6). Rather than lasers, this theoretical device was based on the Stern–Gerlach effect, in which an atom in a superposition of spin states is deflected in opposite directions by a magnetic field. Atoms have a much smaller characteristic wavelength than photons, so a practical interferometer requires exquisite control over the atomic wavefronts. In the decades after it was proposed, several theorists, including Julian Schwinger, investigated the idea but found that a useful interferometer would require an extraordinarily controlled low-noise environment that then seemed inaccessible (Found. Phys. 18 1045).
Decades in the making, the mobile cold-atom interferometer is a triumph of practical problem-solving and even if the commercial applications have yet to be realized, one thing is clear: when it comes to pushing the boundaries of quantum physics, sometimes it pays to think like an engineer.
From the Global Physics Summit in Anaheim, California
The greatest pleasure of being at a huge physics conference is learning about the science of something that’s familiar, but also a little bit quirky. That’s why I always try to go to sessions given by undergraduate students, because for some reason they seem to do research projects that are the most fun.
I was not disappointed by the talk given this morning by Atharva Lele, who is at the Georgia Institute of Technology here in the US. He spoke about the physics of manu jumping, a competitive sport that originates from the Māori and Pasifika peoples of New Zealand.
The general idea will be familiar to anyone who messed around at swimming pools as a child: the aim is to make the highest splash when you jump into the water.
Cavity creation
According to Lele, the best manu jumpers enter the water back first, creating a V-shape with their legs and upper body. The highest splashes are made when a jumper creates a deep and wide air cavity that quickly closes, driving water upwards in a jet – often to astonishing heights.
Lele and colleagues discovered that a 45° angle between the legs and torso afforded the highest splashes. This is probably because this angle results in a cavity that is both deep and wide. An analysis of videos of manu jumpers revealed that the best ones entered the water at an angle of about 46°, corroborating the team’s findings. This is good news for jumpers, because there is risk of injury at higher angles (think belly flop).
Another important aspect of the study looked at what jumpers do once they are in the water – which is to roll and kick. To study the effect of this motion, the team created a “manu bot”, which unfolded as it entered the water. They found that there is an optimal opening time for making the highest splashes – a mere 0.26 s.
I was immediately taken back to my childhood in Canada and realized that we were doing our own version of manu from the high diving board at the local pool. The most successful technique that we discovered was to keep our bodies straight, but enter the water at an angle. This would consistently produce a narrow jet of water. I realize now that by entering the water at an angle, we must have been creating a relatively deep and wide cavity – although probably not as efficiently as manu jumpers. Maybe Lele and colleagues could do a follow-up study looking at alternative versions of manu around the world.
A new study probing quantum phenomena in neurons as they transmit messages in the brain could provide fresh insight into how our brains function.
In this project, described in the Computational and Structural Biotechnology Journal, theoretical physicist Partha Ghose from the Tagore Centre for Natural Sciences and Philosophy in India, together with theoretical neuroscientist Dimitris Pinotsis from City St George’s, University of London and the MillerLab of MIT, proved that established equations describing the classical physics of brain responses are mathematically equivalent to equations describing quantum mechanics. Ghose and Pinotsis then derived a Schrödinger-like equation specifically for neurons.
Our brains process information via a vast network containing many millions of neurons, which can each send and receive chemical and electrical signals. Information is transmitted by nerve impulses that pass from one neuron to the next, thanks to a flow of ions across the neuron’s cell membrane. This results in an experimentally detectable change in electrical potential difference across the membrane known as the “action potential” or “spike”.
When this potential passes a threshold value, the impulse is passed on. But below the threshold for a spike, a neuron’s action potential randomly fluctuates in a similar way to classical Brownian motion – the continuous random motion of tiny particles suspended in a fluid – due to interactions with its surroundings. This creates the so-called “neuronal noise” that the researchers investigated in this study.
Previously, “both physicists and neuroscientists have largely dismissed the relevance of standard quantum mechanics to neuronal processes, as quantum effects are thought to disappear at the large scale of neurons,” says Pinotsis. But some researchers studying quantum cognition hold an alternative to this prevailing view, explains Ghose.
“They have argued that quantum probability theory better explains certain cognitive effects observed in the social sciences than classical probability theory,” Ghose tells Physics World. “[But] most researchers in this field treat quantum formalism [the mathematical framework describing quantum behaviour] as a purely mathematical tool, without assuming any physical basis in quantum mechanics. I found this perspective rather perplexing and unsatisfactory, prompting me to explore a more rigorous foundation for quantum cognition – one that might be physically grounded.”
As such, Ghose and Pinotsis began their work by taking ideas from American mathematician Edward Nelson, who in 1966 derived the Schrödinger equation – which predicts the position and motion of particles in terms of a probability wave known as a wavefunction – using classical Brownian motion.
First, they proved that the variables in the classical equations for Brownian motion that describe the random neuronal noise seen in brain activity also obey quantum mechanical equations, deriving a Schrödinger-like equation for a single neuron. This equation describes neuronal noise by revealing the probability of a neuron having a particular value of membrane potential at a specific instant. Next, the researchers showed how the FitzHugh-Nagumo equations, which are widely used for modelling neuronal dynamics, could be re-written as a Schrödinger equation. Finally, they introduced a neuronal constant in these Schrödinger-like equations that is analogous to Planck’s constant (which defines the amount of energy in a quantum).
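For readers unfamiliar with it, the FitzHugh-Nagumo model is a two-variable caricature of a spiking neuron: a fast membrane-potential-like variable coupled to a slow recovery variable. The sketch below integrates the standard textbook form with common parameter values; it illustrates the model itself, not the specific version used by Ghose and Pinotsis.

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=5000):
    """Euler integration of dv/dt = v - v**3/3 - w + I, dw/dt = eps*(v + a - b*w)."""
    v, w = -1.0, 1.0
    trace = np.empty(steps)
    for i in range(steps):
        dv = v - v**3 / 3 - w + I      # fast membrane-potential-like variable
        dw = eps * (v + a - b * w)     # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        trace[i] = v
    return trace

v_trace = fitzhugh_nagumo()
print(f"membrane variable oscillates between {v_trace.min():.2f} and {v_trace.max():.2f}")
```

With the drive current I above the model’s threshold, the trace settles into repetitive spiking, the regime whose sub-threshold fluctuations the new work recasts in quantum-like form.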
“I got excited when the mathematical proof showed that the FitzHugh-Nagumo equations are connected to quantum mechanics and the Schrödinger equation,” enthuses Pinotsis. “This suggested that quantum phenomena, including quantum entanglement, might survive at larger scales.”
“Penrose and Hameroff have suggested that quantum entanglement might be related to lack of consciousness, so this study could shed light on how anaesthetics work,” he explains, adding that their work might also connect oscillations seen in recordings of brain activity to quantum phenomena. “This is important because oscillations are considered to be markers of diseases: the brain oscillates differently in patients and controls and by measuring these oscillations we can tell whether a person is sick or not.”
Going forward, Ghose hopes that “neuroscientists will get interested in our work and help us design critical neuroscience experiments to test our theory”. Measuring the energy levels for neurons predicted in this study, and ultimately confirming the existence of a neuronal constant along with quantum effects including entanglement would, he says, “represent a big step forward in our understanding of brain function”.
From the Global Physics Summit in Anaheim, California
I spent most of Saturday travelling between the UK and Anaheim in Southern California, so I was up very early on Sunday with jetlag. So just as the sun was rising over the Santa Ana Mountains on a crisp morning, I went for a run in the suburban neighbourhood just south of the Anaheim Convention Center. As I made my way back to my hotel, the sidewalks were already thronging with physicists on their way to register for the Global Physics Summit (GPS) – which is being held in Anaheim by the American Physical Society (APS).
The GPS combines the APS’s traditional March and April meetings, which focus on condensed-matter and particle and nuclear physics, respectively – and much more. This year, about 14,000 physicists are expected to attend. I popped out at lunchtime and spotted a “physics family” walking along Harbor Boulevard, with parents and kids all wearing vintage APS T-shirts with clever slogans. They certainly stood out from most families, many of which were wearing Mickey Mouse ears (Disneyland is just across the road from the convention centre).
Uniting physicists
The GPS starts in earnest bright and early Monday morning, and I am looking forward to spending a week surrounded by thousands of fellow physicists. While many physicists in the US are facing some pretty dire political and funding issues, I am hoping that the global community can unite in the face of the anti-science forces that have emerged in some countries.
This year is the International Year of Quantum Science and Technology, so it’s not surprising that quantum mechanics will be front and centre here in Anaheim. I am looking forward to the “Quantum Playground”, which will run for much of this week. It promises “themed areas; hands-on interactive experiences; demonstrations and games; art and science installations; mini-performances; and ask the experts”. I’ll report back once I have paid a visit.
Researchers in France have devised a new technique in quantum sensing that uses trapped ultracold atoms to detect acceleration and rotation. They then combined their quantum sensor with a conventional, classical inertial sensor to create a hybrid system that was used to measure acceleration due to Earth’s gravity and the rotational frequency of the Earth. With further development, the hybrid sensor could be deployed in the field for applications such as inertial navigation and geophysical mapping.
Measuring inertial quantities such as acceleration and rotation is at the heart of inertial navigation systems, which operate without information from satellites or other external sources and therefore rely on precise knowledge of the position and orientation of the navigation device. Inertial sensors based on classical physics have been available for some time, but quantum devices are showing great promise. Classical sensors based on quartz micro-electro-mechanical systems (MEMS) have gained widespread use thanks to their robustness and speed, but they suffer from drifts – a gradual loss of accuracy over time caused by factors such as temperature sensitivity and material ageing. Quantum sensors using ultracold atoms, on the other hand, achieve better stability over long operation times. While such sensors are already commercially available, the technology is still being developed to match the robustness and speed of classical sensors.
Now, the Cold Atom Group of the French Aerospace Lab (ONERA) has devised a new method in atom interferometry that uses ultracold atoms to measure inertial quantities. By launching the atoms with a magnetic field gradient, the researchers demonstrated stabilities below 1 µm/s² for acceleration and below 1 µrad/s for rotation measurements over 24 hours. This was done by continuously repeating a 4 s interferometer sequence on the atoms for around 20 min to extract the inertial quantities. That is equivalent to driving a car for 20 min straight and knowing the acceleration and rotation at the µm/s² and µrad/s level.
Cold-atom accelerometer–gyroscope
They built their cold-atom accelerometer–gyroscope using rubidium-87 atoms. Holding the atoms in a magneto-optical trap, the researchers cool them down to 2 µK, giving them good control over the atoms for further manipulation. When the atoms are released from the trap, they fall freely along the direction of gravity, which allows the researchers to measure their free-fall acceleration using atom interferometry. In their protocol, a series of three light pulses coherently splits an atomic cloud into two paths, then redirects and recombines them, allowing the cloud to interfere with itself. From the phase shift of the interference pattern, the inertial quantities can be deduced.
Measuring rotation rates, however, requires that the atoms have an initial velocity in the horizontal direction. This is achieved by applying a horizontal magnetic field gradient, which exerts a horizontal force on atoms with magnetic moments. The rubidium atoms are prepared in one of the magnetic states known as Zeeman sublevels. The researchers then use a pair of coils – which they call the “launching coils” – in the horizontal plane to create the magnetic field gradient that gives the atoms a horizontal velocity. The atoms are then transferred back to the non-magnetic ground state using a microwave pulse before the interferometry is performed, which avoids any additional magnetic forces that could affect the interferometer’s outcome.
By analysing the launch velocity using laser pulses with tuned frequencies, the researchers can distinguish whether the atoms’ velocity comes from the magnetic launching scheme or from other effects. They observe two dominant, symmetric peaks associated with the velocity imparted by the magnetic launch, but also a third, smaller peak in between, caused by an unwanted effect in which the laser beams transfer additional velocity to the atoms. Improving the stability of the laser beams’ polarization – the orientation of the oscillating electric field with respect to the propagation axis – as well as reducing the current noise in the launching coils, will allow more atoms to be launched.
Using their new launch technique, the researchers operated their cold-atom dual accelerometer–gyroscope for two days straight, averaging down their results to reach an acceleration sensitivity of 7 × 10⁻⁷ m/s² and a rotation-rate sensitivity of 4 × 10⁻⁷ rad/s, limited by residual ground-vibration noise.
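In cold-atom accelerometer–gyroscopes of this kind, a common way to disentangle the two signals – not necessarily ONERA’s exact analysis – is to compare clouds launched with opposite horizontal velocities: the acceleration phase is common to both, while the rotation (Sagnac) phase flips sign. A simplified scalar sketch, with every number an assumed illustration:

```python
import math

k_eff = 2 * (2 * math.pi / 780e-9)   # rad/m, two-photon wavevector (illustrative)
T = 0.04                             # s, interrogation time (assumed)
v_launch = 5e-3                      # m/s, horizontal launch speed from the coils (assumed)

a_true = 9.809                       # m/s^2, acceleration to recover
omega_true = 7.3e-5                  # rad/s, rotation rate to recover

def total_phase(v):
    # acceleration term plus a simplified scalar Sagnac term, 2*k*v*Omega*T^2
    return k_eff * a_true * T**2 + 2 * k_eff * v * omega_true * T**2

phi_plus, phi_minus = total_phase(+v_launch), total_phase(-v_launch)
a_est = (phi_plus + phi_minus) / (2 * k_eff * T**2)                  # rotation term cancels
omega_est = (phi_plus - phi_minus) / (4 * k_eff * v_launch * T**2)   # acceleration term cancels
print(a_est, omega_est)
```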
Best of both worlds
While classical sensors suffer from long-term drifts, they operate continuously, whereas a quantum sensor requires preparation of the atomic sample and an interferometry sequence that together take around half a second. A classical–quantum hybrid sensor therefore benefits from the long-term stability of the quantum sensor and the fast repetition rate of the classical one. By attaching a commercial classical accelerometer and gyroscope to the atom interferometer, the researchers implemented a feedback loop on the classical sensors’ outputs. They demonstrated a 100-fold improvement in the acceleration stability and a three-fold improvement in the rotation-rate stability of the classical sensors, compared with operating them alone.
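The hybridization principle can be caricatured in a few lines: a fast classical sensor that drifts is periodically re-zeroed against slower but essentially drift-free cold-atom shots. The gains, noise levels and update rates below are assumptions chosen for illustration, not ONERA’s actual loop.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
a_true = 9.809            # m/s^2, the signal both sensors see

bias = 0.0                # slowly drifting bias of the classical accelerometer
bias_estimate = 0.0       # what the feedback loop thinks the bias is
gain = 0.5                # feedback gain (assumed)

for cycle in range(1000):
    bias += 1e-6 * rng.standard_normal()                  # classical bias random-walks
    classical = a_true + bias + 1e-5 * rng.standard_normal()
    if cycle % 10 == 0:                                   # a cold-atom shot every 10th cycle
        quantum = a_true + 2e-6 * rng.standard_normal()   # slower but drift-free reference
        corrected = classical - bias_estimate
        bias_estimate += gain * (corrected - quantum)     # steer the estimate towards the bias
    output = classical - bias_estimate                    # fast, drift-corrected reading

print(f"true bias {bias:.2e} m/s^2, estimated bias {bias_estimate:.2e} m/s^2")
```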
Operating this hybrid sensor continuously and using their magnetic launch technique, the researchers report a measurement of the local acceleration due to gravity in their laboratory of 980,881.397 mGal (the milligal is a standard unit of gravimetry). They measured Earth’s rotation rate to be 4.82 × 10⁻⁵ rad/s. Cross-checking with another atomic gravimeter, they find that their acceleration value deviates by 2.3 mGal, which they attribute to misalignment of the vertical interferometer beams. Their rotation measurement has a significant error of about 25%, which the team attributes to wave-front distortions of the Raman beams used in their interferometer.
Yannick Bidel, a researcher working on the project, explains that such an inertial quantum sensor still has room for improvement. Large momentum transfer – a technique that increases the arm separation of the interferometer – is one way to go. He adds that once they reach bias stabilities of 10⁻⁹ to 10⁻¹⁰ rad/s in a compact atom interferometer, such a sensor could become transportable and ready for in-field measurement campaigns.
1 When the Event Horizon Telescope imaged a black hole in 2019, what was the total mass of all the hard drives needed to store the data? A 1 kg B 50 kg C 500 kg D 2000 kg
2 In 1956 MANIAC I became the first computer to defeat a human being in chess, but because of its limited memory and power, the pawns and which other pieces had to be removed from the game? A Bishops B Knights C Queens D Rooks
3 The logic behind the Monty Hall problem, which involves a car and two goats behind different doors, is one of the cornerstones of machine learning. On which TV game show is it based? A Deal or No Deal B Family Fortunes C Let’s Make a Deal D Wheel of Fortune
4 In 2023 CERN broke which barrier for the amount of data stored on devices at the lab? A 10 petabytes (10¹⁶ bytes) B 100 petabytes (10¹⁷ bytes) C 1 exabyte (10¹⁸ bytes) D 10 exabytes (10¹⁹ bytes)
5 What was the world’s first electronic computer? A Atanasoff–Berry Computer (ABC) B Electronic Discrete Variable Automatic Computer (EDVAC) C Electronic Numerical Integrator and Computer (ENIAC) D Small-Scale Experimental Machine (SSEM)
6 What was the outcome of the chess match between astronaut Frank Poole and the HAL 9000 computer in the movie 2001: A Space Odyssey? A Draw B HAL wins C Poole wins D Match abandoned
7 Which of the following physics breakthroughs used traditional machine learning methods? A Discovery of the Higgs boson (2012) B Discovery of gravitational waves (2016) C Multimessenger observation of a neutron-star collision (2017) D Imaging of a black hole (2019)
8 The physicist John Hopfield shared the 2024 Nobel Prize for Physics with Geoffrey Hinton for their work underpinning machine learning and artificial neural networks – but what did Hinton originally study? A Biology B Chemistry C Mathematics D Psychology
9 Put the following data-driven discoveries in chronological order. A Johann Balmer’s discovery of a formula computing wavelength from Anders Ångström’s measurements of the hydrogen lines B Johannes Kepler’s laws of planetary motion based on Tycho Brahe’s astronomical observations C Henrietta Swan Leavitt’s discovery of the period-luminosity relationship for Cepheid variables D Ole Rømer’s estimation of the speed of light from observations of the eclipses of Jupiter’s moon Io
10 Inspired by Alan Turing’s “Imitation Game” – in which an interrogator tries to distinguish between a human and machine – when did Joseph Weizenbaum develop ELIZA, the world’s first “chatbot”? A 1964 B 1984 C 2004 D 2024
11 What does the CERN particle-physics lab use to store data from the Large Hadron Collider? A Compact discs B Hard-disk drives C Magnetic tape D Solid-state drives
12 In preparation for the High Luminosity Large Hadron Collider, CERN tested a data link to the Nikhef lab in Amsterdam in 2024 that ran at what speed? A 80 Mbps B 8 Gbps C 80 Gbps D 800 Gbps
13 When complete, the Square Kilometre Array telescope will be the world’s largest radio telescope. How many petabytes of data is it expected to archive per year? A 15 B 50 C 350 D 700
This quiz is for fun and there are no prizes. Answers will be published in April.
How would the climate and the environment on our planet change if an asteroid struck? Researchers at the IBS Center for Climate Physics (ICCP) at Pusan National University in South Korea have now tried to answer this question by running several impact simulations with a state-of-the-art Earth system model on their in-house supercomputer. The results show that the climate, atmospheric chemistry and even global photosynthesis would be dramatically disrupted in the three to four years following the event, due to the huge amounts of dust produced by the impact.
Beyond immediate effects such as scorching heat, earthquakes and tsunamis, an asteroid impact would have long-lasting effects on the climate because of the large quantities of aerosols and gases ejected into the atmosphere. Indeed, previous studies on the 10-km Chicxulub asteroid impact, which happened around 66 million years ago, revealed that dust, soot and sulphur led to a global “impact winter” that was very likely responsible for the extinction of the dinosaurs at the Cretaceous–Paleogene boundary.
“This winter is characterized by reduced sunlight, because of the dust filtering it out, cold temperatures and decreased precipitation at the surface,” says Axel Timmermann, director of the ICCP and leader of this new study. “Severe ozone depletion would occur in the stratosphere too because of strong warming caused by the dust particles absorbing solar radiation there.”
These unfavourable climate conditions would inhibit plant growth via a decline in photosynthesis both on land and in the sea and would thus affect food productivity, Timmermann adds.
Something surprising and potentially positive would also happen though, he says: plankton in the ocean would recover within just six months and its abundance could even increase afterwards. Indeed, diatoms (silicate-rich algae) would be more plentiful than before the collision. This might be because the dust created by the asteroid is rich in iron, which would trigger plankton growth as it sinks into the ocean. These phytoplankton “blooms” could help alleviate emerging food crises triggered by the reduction in terrestrial productivity, at least for several years after the impact, explains Timmermann.
The effect of a “Bennu”-sized asteroid impact
In this latest study, published in Science Advances, the researchers simulated the effect of a “Bennu”-sized asteroid impact. Bennu is a so-called medium-sized asteroid with a diameter of around 500 m. This type of asteroid is more likely to strike Earth than the larger “planet killer” asteroids, but has been studied far less.
There is an estimated 0.037% chance of such an asteroid colliding with Earth in September 2182. While this probability is small, such an impact would be very serious, says Timmermann, and would lead to climate conditions similar to those observed after some of the largest volcanic eruptions in the last 100 000 years. “It is therefore important to assess the risk, which is the product of the probability and the damage that would be caused, rather than just the probability by itself,” he tells Physics World. “Our results can serve as useful benchmarks to estimate the range of environmental effects from future medium-sized asteroid collisions.”
The team ran the simulations on the IBS’ supercomputer Aleph using the Community Earth System Model Version 2 (CESM2) and the Whole Atmosphere Community Climate Model Version 6 (WACCM6). The simulations injected up to 400 million tonnes of dust into the stratosphere.
The climate effects of impact-dust aerosols mainly depend on their abundance in the atmosphere and how they evolve there. The simulations revealed that global mean temperatures would drop by 4 °C, a value comparable with the cooling estimated to have followed the eruption of the Toba volcano around 74 000 years ago (which emitted 2000 Tg (2 × 10¹⁵ g) of sulphur dioxide). Precipitation would also decrease by 15% worldwide, and ozone would drop by a dramatic 32% in the first year following the asteroid impact.
Asteroid impacts may have shaped early human evolution
“On average, medium-sized asteroids collide with Earth about every 100 000 to 200 000 years,” says Timmermann. “This means that our early human ancestors may have experienced some of these medium-sized events. These may have impacted human evolution and even affected our species’ genetic makeup.”
The researchers admit that their model has some inherent limitations. For one, CESM2/WACCM6, like other modern climate models, is not designed and optimized to simulate the effects of massive amounts of aerosol injected into the atmosphere. Second, the researchers only focused on the asteroid colliding with the Earth’s land surface. This is obviously less likely than an impact on the ocean, because roughly 70% of Earth’s surface is covered by water, they say. “An impact in the ocean would inject large amounts of water vapour rather than climate-active aerosols such as dust, soot and sulphur into the atmosphere and this vapour needs to be better modelled – for example, for the effect it has on ozone loss,” they say.
The effect of the impact on specific regions on the planet also needs to be better simulated, the researchers add. Whether the asteroid impacts during winter or summer also needs to be accounted for since this can affect the extent of the climate changes that would occur.
Finally, as well as the dust nanoparticles investigated in this study, future work should also look at soot emissions from wildfires ignited by impact “spherules”, and at the sulphur and CO2 released from target evaporites, say Timmermann and colleagues. “The ‘impact winter’ would be intensified and prolonged if other aerosols such as soot and sulphur were taken into account.”
This episode of the Physics World Weekly podcast features Ileana Silvestre Patallo, a medical physicist at the UK’s National Physical Laboratory, and Ruth McLauchlan, consultant radiotherapy physicist at Imperial College Healthcare NHS Trust.
In a wide-ranging conversation with Physics World’s Tami Freeman, Patallo and McLauchlan explain how ionizing radiation such as X-rays and proton beams interact with our bodies and how radiation is being used to treat diseases including cancer.
This episode was created in collaboration with IPEM, the Institute of Physics and Engineering in Medicine. IPEM owns the journal Physics in Medicine & Biology.
Helium deep within the Earth could bond with iron to form stable compounds – according to experiments done by scientists in Japan and Taiwan. The work was done by Haruki Takezawa and Kei Hirose at the University of Tokyo and colleagues, who suggest that Earth’s core could host a vast reservoir of primordial helium-3 – reshaping our understanding of the planet’s interior.
Noble gases including helium are normally chemically inert. But under extreme pressures, heavier members of the group (including xenon and krypton) can form a variety of compounds with other elements. To date, however, less is known about compounds containing helium – the lightest noble gas.
Beyond the synthesis of disodium helide (Na2He) in 2016, and a handful of molecules in which helium forms weak van der Waals bonds with other atoms, the existence of other helium compounds has remained purely theoretical.
As a result, the conventional view is that any primordial helium-3 present when our planet first formed would have quickly diffused through Earth’s interior, before escaping into the atmosphere and then into space.
Tantalizing clues
However, there are tantalizing clues that helium compounds could exist in some volcanic rocks on Earth’s surface. These rocks contain unusually high isotopic ratios of helium-3 to helium-4. “Unlike helium-4, which is produced through radioactivity, helium-3 is primordial and not produced in planetary interiors,” explains Hirose. “Based on volcanic rock measurements, helium-3 is known to be enriched in hot magma, which originally derives from hot plumes coming from deep within Earth’s mantle.” The mantle is the region between Earth’s core and crust.
The fact that the isotope can still be found in rock and magma suggests that it must have somehow become trapped in the Earth. “This argument suggests that helium-3 was incorporated into the iron-rich core during Earth’s formation, some of which leaked from the core to the mantle,” Hirose explains.
It could be that the extreme pressures present in Earth’s iron-rich core enabled primordial helium-3 to bond with iron to form stable molecular lattices. To date, however, this possibility has never been explored experimentally.
Now, Takezawa, Hirose and colleagues have triggered reactions between iron and helium within a laser-heated diamond-anvil cell. Such cells crush small samples to extreme pressures – in this case as high as 54 GPa. While this is less than the pressure in the core (about 350 GPa), the reactions created molecular lattices of iron and helium. These structures remained stable even when the diamond-anvil’s extreme pressure was released.
To determine the molecular structures of the compounds, the researchers did X-ray diffraction experiments at Japan’s SPring-8 synchrotron. The team also used secondary ion mass spectrometry to determine the concentration of helium within their samples.
Synchrotron and mass spectrometer
“We also performed first-principles calculations to support experimental findings,” Hirose adds. “Our calculations also revealed a dynamically stable crystal structure, supporting our experimental findings.” Altogether, this combination of experiments and calculations showed that the reaction could form two distinct lattices (face-centred cubic and distorted hexagonal close packed), each with differing ratios of iron to helium atoms.
These results suggest that similar reactions between helium and iron may have occurred within Earth’s core shortly after its formation, trapping much of the primordial helium-3 in the material that coalesced to form Earth. This would have created a vast reservoir of helium in the core, which is gradually making its way to the surface.
However, further experiments are needed to confirm this thesis. “For the next step, we need to see the partitioning of helium between iron in the core and silicate in the mantle under high temperatures and pressures,” Hirose explains.
Observing this partitioning would help rule out the lingering possibility that unbonded helium-3 could be more abundant than expected within the mantle – where it could be trapped by some other mechanism. Either way, further studies would improve our understanding of Earth’s interior composition – and could even tell us more about the gases present when the solar system formed.
Two months into Donald Trump’s second presidency and many parts of US science – across government, academia, and industry – continue to be hit hard by the new administration’s policies. Science-related government agencies are seeing budgets and staff cut, especially in programmes linked to climate change and diversity, equity and inclusion (DEI). Elon Musk’s Department of Government Efficiency (DOGE) is also causing havoc as it seeks to slash spending.
In mid-February, DOGE fired more than 300 employees at the National Nuclear Security Administration, which is part of the US Department of Energy, many of whom were responsible for reassembling nuclear warheads at the Pantex plant in Texas. A day later, the agency was forced to rescind all but 28 of the sackings amid concerns that their absence could jeopardise national security.
A judge has also reinstated workers who were laid off at the National Science Foundation (NSF) as well as at the Centers for Disease Control and Prevention. The judge said the government’s Office of Personnel Management, which sacked the staff, did not have the authority to do so. However, the NSF rehiring applies mainly to military veterans and staff with disabilities, with the overall workforce down by about 140 people – or roughly 10%.
The NSF has also announced a reduction, the size of which is unknown, in its Research Experiences for Undergraduates programme. Over the last 38 years, the initiative has given thousands of college students – many with backgrounds that are underrepresented in science – the opportunity to carry out original research at institutions during the summer holidays. NSF staff are also reviewing thousands of grants containing such words as “women” and “diversity”.
NASA, meanwhile, is to shut its office of technology, policy and strategy, along with its chief-scientist office, and the DEI and accessibility branch of its diversity and equal opportunity office. “I know this news is difficult and may affect us all differently,” admitted acting administrator Janet Petro in an all-staff e-mail. Affecting about 20 staff, the move is on top of plans to reduce NASA’s overall workforce. Reports also suggest that NASA’s science budget could be slashed by as much as 50%.
Hundreds of “probationary employees” have also been sacked by the National Oceanic and Atmospheric Administration (NOAA), which provides weather forecasts that are vital for farmers and people in areas threatened by tornadoes and hurricanes. “If there were to be large staffing reductions at NOAA there will be people who die in extreme weather events and weather-related disasters who would not have otherwise,” warns climate scientist Daniel Swain from the University of California, Los Angeles.
Climate concerns
In his first cabinet meeting on 26 February, Trump suggested that officials “use scalpels” when trimming their departments’ spending and personnel – rather than Musk’s figurative chainsaw. But bosses at the Environmental Protection Agency (EPA) still plan to cut its budget by about two-thirds. “[W]e fear that such cuts would render the agency incapable of protecting Americans from grave threats in our air, water, and land,” wrote former EPA administrators William Reilly, Christine Todd Whitman and Gina McCarthy in the New York Times.
The White House’s attack on climate science goes beyond just the EPA. In January, the US Department of Agriculture removed almost all data on climate change from its website. The action resulted in a lawsuit in March from the Northeast Organic Farming Association of New York and two non-profit organizations – the Natural Resources Defense Council and the Environmental Working Group. They say that the removal hinders research and “agricultural decisions”.
The Trump administration has also barred NASA’s now former chief scientist Katherine Calvin and members of the State Department from travelling to China for a planning meeting of the Intergovernmental Panel on Climate Change. Meanwhile, in a speech to African energy ministers in Washington on 7 March, US energy secretary Chris Wright claimed that coal has “transformed our world and made it better”, adding that climate change, while real, is not on his list of the world’s top 10 problems. “We’ve had years of Western countries shamelessly saying ‘don’t develop coal’,” he said. “That’s just nonsense.”
At the National Institutes of Health (NIH), staff are being told to cancel hundreds of research grants that involve DEI and transgender issues. The Trump administration also wants to cut the allowance for indirect costs of NIH’s and other agencies’ research grants to 15% of research contracts, although a district court judge has put that move on hold pending further legal arguments. On 8 March, the Trump administration also threatened to cancel $400m in funding to Columbia purportedly due to its failure to tackle anti-semitism on the campus.
A Trump policy of removing “undocumented aliens” continues to alarm universities that have overseas students. Some institutions have already advised overseas students against travelling abroad during holidays, in case immigration officers do not let them back in when they return. Others warn that their international students should carry their immigration documents with them at all times. Universities have also started to rein in spending with Harvard and the Massachusetts Institute of Technology, for example, implementing a hiring freeze.
Falling behind
Amid the turmoil, the US scientific community is beginning to fight back. Individual scientists have supported court cases that have overturned sackings at government agencies, while a letter to Congress signed by the Union of Concerned Scientists and 48 scientific societies asserts that the administration has “already caused significant harm to American science”. On 7 March, more than 30 US cities also hosted “Stand Up for Science” rallies attended by thousands of demonstrators.
Elsewhere, a group of government, academic and industry leaders – known collectively as Vision for American Science and Technology – has released a report warning that the US could fall behind China and other competitors in science and technology. Entitled Unleashing American Potential, it calls for increased public and private investment in science to maintain US leadership. “The more dollars we put in from the feds, the more investment comes in from industry, and we get job growth, we get economic success, and we get national security out of it,” notes Sudip Parikh, chief executive of the American Association for the Advancement of Science, who was involved in the report.
Marcia McNutt, president of the National Academy of Sciences, meanwhile, has called on the community to continue to highlight the benefit of science. “We need to underscore the fact that stable federal funding of research is the main mode by which radical new discoveries have come to light – discoveries that have enabled the age of quantum computing and AI and new materials science,” she said. “These are areas that I am sure are very important to this administration as well.”
New for 2025, the American Physical Society (APS) is combining its March Meeting and April Meeting into a joint event known as the APS Global Physics Summit. The largest physics research conference in the world, the Global Physics Summit brings together 14,000 attendees across all disciplines of physics. The meeting takes place in Anaheim, California (as well as virtually) from 16 to 21 March.
Uniting all disciplines of physics in one joint event reflects the increasingly interdisciplinary nature of scientific research and enables everybody to participate in any session. The meeting includes cross-disciplinary sessions and collaborative events, where attendees can meet to connect with others, discuss new ideas and discover groundbreaking physics research.
The meeting will take place in three adjacent venues. The Anaheim Convention Center will host March Meeting sessions, while the April Meeting sessions will be held at the Anaheim Marriott. The Hilton Anaheim will host SPLASHY (soft, polymeric, living, active, statistical, heterogeneous and yielding) matter and medical physics sessions. Cross-disciplinary sessions and networking events will take place at all sites and in the connecting outdoor plaza.
With programming aligned with the 2025 International Year of Quantum Science and Technology, the meeting also celebrates all things quantum with a dedicated Quantum Festival. Designed to “inspire and educate”, the festival incorporates events at the intersection of art, science and fun – with multimedia performances, science demonstrations, circus performers, and talks by Nobel laureates and a NASA astronaut.
Finally, there’s the exhibit hall, where more than 200 exhibitors will showcase products and services for the physics community. Here, delegates can also attend poster sessions, a career fair and a graduate school fair. Read on to find out about some of the innovative product offerings on show at the technical exhibition.
Precision motion drives innovative instruments for physics applications
For over 25 years Mad City Labs has provided precision instrumentation for research and industry, including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes and atomic force microscopes (AFMs).
This product portfolio, coupled with the company’s expertise in custom design and manufacturing, enables Mad City Labs to provide solutions for nanoscale motion for diverse applications such as astronomy, biophysics, materials science, photonics and quantum sensing.
Mad City Labs’ piezo nanopositioners feature the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution and motion control down to the single picometre level. The performance of the nanopositioners is central to the company’s instrumentation solutions, as well as the diverse applications that it can serve.
Within the scanning probe microscopy solutions, the nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yields high positioning performance and control. Uniquely, Mad City Labs offers both optical deflection AFMs and resonant probe AFM models.
Product portfolio Mad City Labs provides precision instrumentation for applications ranging from astronomy and biophysics, to materials science, photonics and quantum sensing. (Courtesy: Mad City Labs)
The MadAFM is a sample scanning AFM in a compact, tabletop design. Designed for simple user-led installation, the MadAFM is a multimodal optical deflection AFM and includes software. The resonant probe AFM products include the AFM controllers MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs micro- and nanopositioners. All AFM instruments are ideal for material characterization, but resonant probe AFMs are uniquely well suited for quantum sensing and nano-magnetometry applications.
Stop by the Mad City Labs booth and ask about the new do-it-yourself quantum scanning microscope based on the company’s AFM products.
Mad City Labs also offers standalone micropositioning products such as optical microscope stages, compact positioners and the Mad-Deck XYZ stage platform. These products employ proprietary intelligent control to optimize stability and precision. These micropositioning products are compatible with the high-resolution nanopositioning systems, enabling motion control across micro–picometre length scales.
The new MMP-UHV50 micropositioning system offers 50 mm travel with 190 nm step size and maximum vertical payload of 2 kg, and is constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks. Uniquely, the MMP-UHV50 incorporates a zero power feature when not in motion to minimize heating and drift. Safety features include limit switches and overheat protection, a critical item when operating in vacuum environments.
For advanced biophysics microscopy, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multicolour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques. Finally, new motorized micromirrors enable easier alignment and stored setpoints.
Visit Mad City Labs at the APS Global Summit, at booth #401.
New lasers target quantum, Raman spectroscopy and life sciences
HÜBNER Photonics, manufacturer of high-performance lasers for advanced imaging, detection and analysis, is highlighting a large range of exciting new laser products at this year’s APS event. With these new lasers, the company responds to market trends specifically within the areas of quantum research and Raman spectroscopy, as well as fluorescence imaging and analysis for life sciences.
Dedicated to the quantum research field, a new series of CW ultralow-noise single-frequency fibre amplifier products – the Ampheia Series lasers – offers output powers of up to 50 W at 1064 nm and 5 W at 532 nm, with industry-leading low relative intensity noise. The Ampheia Series lasers ensure unmatched stability and accuracy, empowering researchers and engineers to push the boundaries of what’s possible. The lasers are specifically suited for quantum technology research applications such as atom trapping, semiconductor inspection and laser pumping.
Ultralow-noise operation The Ampheia Series lasers are particularly suitable for quantum technology research applications. (Courtesy: HÜBNER Photonics)
In addition to the Ampheia Series, the new Cobolt Qu-T Series of single-frequency tunable lasers addresses atom cooling. With wavelengths of 707, 780 and 813 nm, coarse tunability of greater than 4 nm, narrow mode-hop-free tuning of below 5 GHz, a linewidth of below 50 kHz and output powers of 500 mW, the Cobolt Qu-T Series is perfect for cooling rubidium, strontium and other atoms used in quantum applications.
For the Raman spectroscopy market, HÜBNER Photonics announces the new Cobolt Disco single-frequency laser, with available power of up to 500 mW at 785 nm in a perfect TEM00 beam. This new wavelength is an extension of the Cobolt 05-01 Series platform, which, with excellent wavelength stability, a linewidth of less than 100 kHz and spectral purity better than 70 dB, provides the performance needed for high-resolution, ultralow-frequency Raman spectroscopy measurements.
For life science applications, a number of new wavelengths and higher power levels are available, including 553 nm with 100 mW and 594 nm with 150 mW. These new wavelengths and power levels are available on the Cobolt 06-01 Series of modulated lasers, which offer versatile and advanced modulation performance with perfect linear optical response, true OFF states and stable illumination from the first pulse – for any duty cycles and power levels across all wavelengths.
The company’s unique multi-line laser, Cobolt Skyra, is now available with laser lines covering the full green–orange spectral range, including 594 nm, with up to 100 mW per line. This makes the multi-line laser highly attractive as a compact and convenient illumination source for most bioimaging applications, and now also specifically suitable for excitation of AF594, mCherry, mKate2 and other red fluorescent proteins.
In addition, with the Cobolt Kizomba laser, the company is introducing a new UV wavelength that specifically addresses the flow cytometry market. The Cobolt Kizomba laser offers 349 nm output at 50 mW with the renowned performance and reliability of the Cobolt 05-01 Series lasers.
Visit HÜBNER Photonics at the APS Global Summit, at booth #359.
Are we at risk of losing ourselves in the midst of technological advancement? Could the tools we build to reflect our intelligence start distorting our very sense of self? Artificial intelligence (AI) is a technology with huge ethical implications, and in The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, Shannon Vallor offers a philosopher’s perspective on these vital questions.
Vallor, who is based at the University of Edinburgh in the UK, argues that artificial intelligence is not just reshaping society but is also subtly rewriting our relationship with knowledge and autonomy. She even goes as far as to say, “Today’s AI mirrors tell us what it is to be human – what we prioritize, find good, beautiful or worth our attention.”
Vallor employs the metaphor of AI as a mirror – a device that reflects human intelligence but lacks independent creativity. According to her, AI systems, which rely on curated sets of training data, cannot truly innovate or solve new challenges. Instead, they mirror our collective past, reflecting entrenched biases and limiting our ability to address unprecedented global problems like climate change. Therefore, unless we carefully consider how we build and use AI, it risks stalling human progress by locking us into patterns of the past.
The book explores how humanity’s evolving relationship with technology – from mechanical automata and steam engines to robotics and cloud computing – has shaped the development of AI. Vallor grounds readers in what AI is and, crucially, what it is not. As she explains, while AI systems appear to “think”, they are fundamentally tools designed to process and mimic human-generated data.
The book’s philosophical underpinnings are enriched by Vallor’s background in the humanities and her ethical expertise. She draws on myths, such as the story of Narcissus, who met a tragic end after being captivated by his reflection, to illustrate the dangers of AI. She gives as an example the effect that AI social-media filters have on the propagation and domination of Western beauty standards.
Vallor also explores the long history of literature grappling with artificial intelligence, self-awareness and what it truly means to be human. These fictional works, which include Do Androids Dream of Electric Sheep? by Philip K Dick, are used not just as examples but as tools to explore the complex relationship between humanity and AI. The emphasis on the ties between AI and popular culture results in writing that is both accessible and profound, deftly weaving complex ideas into a narrative that engages readers from all backgrounds.
One area where I find Vallor’s conclusions contentious is her vision for AI in augmenting science communication and learning. She argues that our current strategies for science communication are inadequate and that improving public and student access to reliable information is critical. In her words: “Training new armies of science communicators is an option, but a less prudent use of scarce public funds than conducting vital research itself. This is one area where AI mirrors will be useful in the future.”
In my opinion, this statement warrants significant scrutiny. Science communication and teaching are about more than simply summarising papers or presenting data; they require human connection to contextualize findings and make them accessible to broad audiences. While public distrust of experts is a legitimate issue, delegating science communication to AI risks exacerbating the problem.
AI’s lack of genuine understanding, combined with its susceptibility to bias and detachment from human nuance, could further erode trust and deepen the disconnect between science and society. Vallor’s optimism in this context feels misplaced. AI, as it currently stands, is ill-suited to bridge the gaps that good science communication seeks to address.
Despite its generally critical tone, The AI Mirror is far from a technophobic manifesto. Vallor’s insights are ultimately hopeful, offering a blueprint for reclaiming technology as a tool for human advancement. She advocates for transparency, accountability, and a profound shift in economic and social priorities. Rather than building AI systems to mimic human behaviour, she argues, we should design them to amplify our best qualities – creativity, empathy and moral reasoning – while acknowledging the risk that this technology will devalue these talents as well as amplify them.
The AI Mirror is essential reading for anyone concerned about the future of artificial intelligence and its impact on humanity. Vallor’s arguments are rigorous yet accessible, drawing from philosophy, history and contemporary AI research. She challenges readers to see AI not as a technological inevitability but as a cultural force that we must actively shape.
Her emphasis on the need for a “new language of virtue” for the AI age warrants consideration, particularly in her call to resist the seductive pull of efficiency and automation at the expense of humanity. Vallor argues that as AI systems increasingly influence decision-making in society, we must cultivate a vocabulary of ethical engagement that goes beyond simplistic notions of utility and optimization. As she puts it: “We face a stark choice in building AI technologies. We can use them to strengthen our humane virtues, sustaining and extending our collective capabilities to live wisely and well. By this path, we can still salvage a shared future for human flourishing.”
Vallor’s final call to action is clear: we must stop passively gazing into the AI mirror and start reshaping it to serve humanity’s highest virtues, rather than its worst instincts. If AI is a mirror, then we must decide what kind of reflection we want to see.
Set to operate for two years in a polar orbit about 650 km from the Earth’s surface, SPHEREx will collect data from 450 million galaxies as well as more than 100 million stars to create a 3D map of the cosmos.
It will use this data to gain an insight into cosmic inflation – the rapid expansion of the universe following the Big Bang.
It will also search the Milky Way for hidden reservoirs of water, carbon dioxide and other ingredients critical for life as well as study the cosmic glow of light from the space between galaxies.
The craft features three concentric shields that surround the telescope to protect it from light and heat. Three mirrors, including a 20 cm primary mirror, collect light before feeding it into filters and detectors. The set-up allows the telescope to resolve 102 different wavelengths of light.
Packing a punch
SPHEREx has been launched together with another NASA mission dubbed Polarimeter to Unify the Corona and Heliosphere (PUNCH). Via a constellation of four satellites in a low-Earth orbit, PUNCH will make 3D observations of the Sun’s corona to learn how its mass and energy become the solar wind. It will also explore the formation and evolution of space weather events such as coronal mass ejections, which can create storms of energetic particle radiation that can damage spacecraft.
PUNCH will now undergo a three-month commissioning period, in which the four satellites will enter their correct orbital formation and their instruments will be calibrated to operate as a single “virtual instrument”, before the mission begins studying the solar wind.
“Everything in NASA science is interconnected, and sending both SPHEREx and PUNCH up on a single rocket doubles the opportunities to do incredible science in space,” noted Nicky Fox, associate administrator for NASA’s science mission directorate. “Congratulations to both mission teams as they explore the cosmos from far-out galaxies to our neighbourhood star. I am excited to see the data returned in the years to come.”