India has unveiled plans to build two new optical-infrared telescopes and a dedicated solar telescope in the Himalayan desert region of Ladakh. The three new facilities, expected to cost INR 35bn (about £284m), were announced by the Indian finance minister Nirmala Sitharaman on 1 February.
First up is a 3.7 m optical-infrared telescope, which is expected to come online by 2030. It will be built near the existing 2 m Himalayan Chandra Telescope (HCT) at Hanle, about 4500 m above sea level. Astronomers use the HCT for a wide range of investigations, including stellar evolution, galaxy spectroscopy, exoplanet atmospheres and time-domain studies of supernovae, variable stars and active galactic nuclei.
“The arid and high-altitude Ladakh desert is firmly established as among the world’s most attractive sites for multiwavelength astronomy,” Annapurni Subramaniam, director of the Indian Institute of Astrophysics (IIA) in Bangalore, told Physics World. “HCT has demonstrated both site quality and opportunities for sustained and competitive science from this difficult location.”
The 3.7 m telescope is a stepping stone towards a proposed 13.7 m National Large Optical-Infrared Telescope (NLOT), which is expected to open in 2038. “NLOT is intended to address contemporary astronomy goals, working in synergy with major domestic and international facilities,” says Maheswar Gopinathan, a scientist at the IIA, which is leading all three projects.
Gopinathan says NLOT’s large collecting area will enable research on young stellar systems, brown dwarfs and exoplanets, while also allowing astronomers to detect faint sources and to rapidly follow up extreme cosmic events and gravitational wave detections.
Along with India’s upgraded Giant Metrewave Radio Telescope, a planned gravitational-wave observatory in the country and the Square Kilometre Array in Australia and South Africa, Gopinathan says that NLOT “will usher in a new era of multimessenger and multiwavelength astronomy.”
The third telescope to be supported is the 2 m National Large Solar Telescope (NLST), which will be built near Pangong Tso lake, 4350 m above sea level. Also expected to come online by 2030, the NLST is an advance on India’s existing 50 cm telescope at the Udaipur Solar Observatory, which provides a spatial resolution of about 100 km. Scientists also plan to combine NLST observations with data from Aditya-L1, India’s space-based solar observatory, which launched in 2023.
“We have two key goals [with NLST],” says Dibyendu Nandi, an astrophysicist at the Indian Institute of Science Education and Research in Kolkata, “to probe small-scale perturbations that cascade into large flares or coronal mass ejections and improve our understanding of space weather drivers and how energy in localised plasma flows is channelled to sustain the ubiquitous magnetic fields.”
While bolstering India’s domestic astronomical capabilities, scientists say the Ladakh telescopes – located between observatories in Europe, the Americas, East Asia and Australia – would significantly improve global coverage of transient and variable phenomena.
A faint flash of infrared light in the Andromeda galaxy was emitted at the birth of a stellar-mass black hole – according to a team of astronomers in the US. Kishalay De at Columbia University and the Flatiron Institute, and colleagues, noticed that the flash was followed by the rapid dimming of a once-bright star. They say that the star collapsed, with almost all of its material falling into a newly forming black hole. Their analysis suggests that there may be many more such black holes in the universe than previously expected.
When a massive star runs out of fuel for nuclear fusion it can no longer avoid gravitational collapse. As it implodes, such a star is believed to emit an intense burst of neutrinos, whose energy can be absorbed by the star’s outer layers.
In some cases, this energy is enough to tear material away from the core, triggering spectacular explosions known as core-collapse supernovae. Sometimes, however, this energy transfer is insufficient to halt the collapse, which continues until a stellar-mass black hole is created. These stellar deaths are far less dramatic than supernovae, and are therefore very difficult to observe.
Observational evidence for these stellar-mass black holes includes their gravitational influence on the motions of stars and the gravitational waves emitted when they merge together. So far, however, their initial formation has proven far more difficult to observe.
Mysterious births
“While there is consensus that these objects must be formed as the end products of the lives of likely very massive stars, there has remained little convincing observational evidence of watching stars turn into black holes,” De explains. “As a result, we don’t even have constraints on questions as fundamental as which stars can turn into black holes.”
The main problem is the low-key nature of the stellar implosions. While core-collapse supernovae shine brightly in the sky, “finding an individual star disappearing in a galaxy is remarkably difficult,” De says. “A typical galaxy has 100 billion stars in it, and being able to spot one that disappears makes it very challenging.”
Fortunately, it is believed that these stars do not vanish without a trace. “Whenever a black hole does form from the near complete inward collapse of a massive star, its very outer envelope must be still ejected because it is too loosely bound to the star,” De explains. As it expands and cools, models predict that this ejected material should emit a flash of infrared radiation – vastly dimmer than a supernova, but still bright enough for infrared surveys to detect.
To search for these flashes, De’s team examined data from NASA’s NEOWISE infrared survey and several other telescopes. They identified a near-infrared flash that was observed in 2014 and closely matched their predictions for a collapsing star. That flash was emitted by a supergiant star in the Andromeda galaxy.
Nowhere to be seen
Between 2017 and 2022, the star dimmed rapidly before disappearing completely across all regions of the electromagnetic spectrum. “This star used to be one of the most luminous stars in the Andromeda Galaxy, and now it was nowhere to be seen,” says De.
“Astronomers can spot supernovae billions of light years away – but even at this remarkable proximity, we didn’t see any evidence of an explosive supernova,” De says. “This suggests that the star underwent a near pure implosion, forming a black hole.”
The team also examined a previously observed dimming in a galaxy 10 times more distant. While several competing theories had emerged to explain that disappearance, the pattern of dimming bore a striking resemblance to their newly validated model, strongly suggesting that this event too signalled the birth of a stellar-mass black hole.
Because these events occurred so recently in ordinary galaxies like Andromeda, De’s team believe that similar implosions must be happening routinely across the universe – and they hope that their work will trigger a new wave of discoveries.
“The estimated mass of the star we observed is about 13 times the mass of the Sun, which is lower than what astronomers have assumed for the mass of stars that turn into black holes,” De says. “This fundamentally changes our understanding of the landscape of black hole formation – there could be many more black holes out there than we estimate.”
A new class of biomolecules called magneto-sensitive fluorescent proteins, or MFPs, could improve imaging of biological processes inside living cells and potentially underpin innovative therapies.
The fluorescent proteins commonly used in biological studies respond solely to light being shone at them. But because that light is scattered by tissues, it is difficult to determine exactly where the resulting fluorescence originates. By contrast, the MFPs created by a team led by Harrison Steel, head of the Engineered Biotechnology Research Group at the University of Oxford in the UK, fluoresce partly in response to highly predictable magnetic fields and radio waves that pass through biological tissues without deflection.
Sensor schematic An MFP excited by blue light emits green fluorescence, the intensity of which can be modulated by applying appropriate magnetic or radiofrequency fields. (Courtesy: Gabriel Abrahams)
To detect where MFPs are located within living cells, the researchers apply both a static magnetic field with a precisely known gradient and a radiofrequency (RF) signal, which modulate the fluorescence triggered via excitation by a light-emitting diode (LED).
The emitted fluorescence is brightest whenever the RF is in resonance with a transition energy of the entangled electron system present within the MFP. Since the resonance frequency depends on the surrounding magnetic field strength, the brightness reveals the protein’s location.
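In effect, the scheme works like a simplified version of magnetic resonance imaging: with a known field gradient, each resonance frequency corresponds to a particular field strength and therefore to a particular position in the sample. The short Python sketch below illustrates only that mapping; the linear frequency–field relation, the free-electron gyromagnetic ratio and the field values are illustrative assumptions, not numbers from the Oxford study.

```python
# Illustrative sketch of gradient-based localization (not the authors' analysis code).
# Assumptions: resonance frequency f = GAMMA * B, with GAMMA the free-electron
# gyromagnetic ratio, and a known linear field profile B(x) = B0 + GRAD * x.

GAMMA = 28.0e9   # Hz per tesla (free-electron value, assumed)
B0 = 1.0e-3      # tesla at x = 0 (hypothetical)
GRAD = 0.1       # tesla per metre, applied gradient (hypothetical)

def position_from_resonance(f_res_hz: float) -> float:
    """Invert f = GAMMA * (B0 + GRAD * x) to recover the position x in metres."""
    b_local = f_res_hz / GAMMA       # field strength at the fluorescing protein
    return (b_local - B0) / GRAD     # position along the gradient axis

# A fluorescence feature at 28.028 MHz implies the emitter sits ~10 µm from x = 0
x = position_from_resonance(28.028e6)
print(f"Emitter at roughly {x * 1e6:.1f} µm along the gradient")
```

In the real proteins the relevant transitions belong to an entangled electron pair rather than a free electron, so the frequency-to-field relation is more involved, but the localization principle is the same.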
As detailed in their recent Nature paper, the researchers engineered the MFPs by “directed evolution”: starting with a DNA sequence, making two to three thousand variants of it, and selecting the variants with the best fluorescence response to magnetic fields before repeating the entire process multiple times. The resulting proteins were tested via ODMR (optically detected magnetic resonance) and MFE (magnetic-field effect) experiments, revealing that they could be detected in single living cells and sense their local microenvironment.
Importantly, these MFPs can be made in research labs using a straightforward biological technique. “This is a totally different way of coming up with new quantum materials compared to other engineering efforts for quantum sensors like nitrogen vacancies [in diamonds] which need to be manufactured in highly specialized facilities,” explains first author Gabriel Abrahams, a doctoral student in Steel’s research group. Abrahams helped develop quantum diamond microscopes during his master’s in physics at the Quantum Nano Sensing Lab in Melbourne, Australia, before moving on to the Oxford Interdisciplinary Bioscience Doctoral Training Programme.
The MFPs were inspired by the work of study co-authors Maria Ingaramo and Andy York, both then working for Calico Life Sciences. They had observed a small change in fluorescence when a magnet interacted with a quantum-enabled protein, explains Abrahams. “That was really cool! I hadn’t seen anything like that, and there were clearly potential applications if it could be made better,” he says.
Steel tells Physics World that “a lot of the past work in quantum biology was with fragile proteins, often at cryogenic temperatures. Surprisingly you could easily measure these MFPs in single living cells every few minutes as they can work for a long time at room temperature”. Furthermore, using MFPs only requires adding a magnet to existing fluorescence microscopy equipment, allowing new data to be cost-effectively obtained.
“For instance, you might use three or four fluorescent proteins to tag natural processes in a mammalian cell in a petri dish to see when they are being used and where they go. We could instead tag with 10 or 15 MFPs, allowing you to measure extra targets by just applying a magnetic field,” Steel explains.
Quantum engineer Peter Maurer from the University of Chicago in the US, who was not involved in the study, is enthusiastic about these new MFPs. “By combining magnetic fields and fluorescence, this work establishes an exciting new imaging modality with broad potential for future evolution. Notably, similar approaches could be directly applicable to qubits [quantum bits], such as the fluorescent protein qubits our team published in Nature last year,” he says.
Next, Steel intends to improve the instrumentation for using MFPs – much of which was adopted from researchers investigating how birds navigate via the Earth’s magnetic field. Future MFP applications could include microbiome studies sensing where bacteria travel in our bodies, and the development of highly controllable actuators for drug delivery. “If you would like to turn on the protein’s ability to bind to a cancer cell, for example, you could simply put a magnet on the outside of a person in the right location,” he concludes.
A proposed industrial-scale green hydrogen and ammonia project in Chile that astronomers warned could cause “irreparable damage” to the clearest skies in the world has been cancelled. The decision by AES Andes, a subsidiary of the US power company AES Corporation, to shelve plans for the INNA complex has been welcomed by the European Southern Observatory (ESO).
AES Andes submitted an Environmental Impact Assessment for the green hydrogen project in December 2024. Expected to cover more than 3000 hectares, it would have been located just a few kilometres from ESO’s Paranal Observatory in Chile’s Atacama Desert, which is one of the world’s most important astronomical research sites due to its stable atmosphere and lack of light pollution.
That same month, ESO conducted its own impact assessment, concluding that INNA would increase light pollution above Paranal’s Very Large Telescope by at least 35% and by more than 50% above the southern site of the Cherenkov Telescope Array Observatory (CTAO).
Once built, the CTAO will be the world’s most powerful ground-based observatory for very-high-energy gamma-ray astronomy.
ESO director general Xavier Barcons had warned that the hydrogen project would have posed a major threat to “the performance of the most advanced astronomical facilities anywhere in the world”.
On 23 January, however, AES Andes announced that it would discontinue plans to develop the INNA complex. The firm stated that, after a review of its project portfolio, it had chosen to focus instead on renewable energy and energy storage. On 6 February AES Andes sent a letter to Chile’s Environmental Assessment Service requesting that INNA not be evaluated, formally confirming the end of the project.
Barcons says that ESO is “relieved” about the decision, adding that the case highlights the urgent need to establish clear protection measures in the areas around astronomical observatories.
Barcons notes that green-energy projects, as well as other industrial projects, can be “fully compatible” with astronomical observatories as long as the facilities are located sufficiently far away.
Romano Corradi, director of the Gran Telescopio Canarias, which is located at the Roque de los Muchachos Observatory, La Palma, Spain, told Physics World that he was “delighted” with the decision.
Corradi adds that while it is unclear if preserving the night-sky darkness of the region was a relevant factor for the decision to cancel the project, he hopes that global pressure to defend the dark skies played a role.
Active link Bromo volcano in East Java, Indonesia – the most volcanically active country in the world, where heavy rainfall has triggered explosive activity and eruptions at active volcanoes. (Courtesy: iStock/Panya_)
A few years ago, Swiss seismologist Verena Simon noticed a striking shift in the pattern of seismic activity and microquakes in the Mont Blanc region. She found that microquakes in the area, which straddles Switzerland, France and Italy, have fallen into an annual pattern since 2015.
Simon and colleagues at the Swiss Seismological Service in fact found that this annual pattern is linked to heat waves driven by climate change. But they are not the only researchers finding such geophysical links to climate change. There is growing evidence that global warming could cause changes in seismicity, volcanic activity and other such hazards.
In the first eight years from 2006, Simon’s team saw no clear pattern. But then from 2015 they found that seismicity always increases in autumn and stays at a higher level until winter. The researchers wondered if the seasonal pattern was linked to a known increase in meltwater infiltration into the Mont Blanc massif in late summer and autumn every year.
Seasonal seismic trends
Scientists have long known that when water percolates underground it increases the pressure in gaps, or pores, in rocks, which alters the balance of forces on faults, leading to slips – and triggering seismic activity.
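This is usually expressed through the Mohr–Coulomb failure criterion with effective stress: slip occurs once the shear stress on a fault exceeds its cohesion plus the friction coefficient multiplied by the normal stress minus the pore pressure. The minimal Python sketch below uses entirely hypothetical stress values, not figures from the Mont Blanc work, simply to show how a modest rise in pore pressure can tip a fault over the edge.

```python
def fault_slips(shear_mpa: float, normal_mpa: float, pore_pressure_mpa: float,
                friction: float = 0.6, cohesion_mpa: float = 0.0) -> bool:
    """Mohr-Coulomb criterion with effective stress:
    slip occurs when tau >= cohesion + mu * (sigma_n - pore pressure)."""
    return shear_mpa >= cohesion_mpa + friction * (normal_mpa - pore_pressure_mpa)

# A fault held just below failure can be pushed over the edge by a modest rise in
# pore pressure, with no change at all in the tectonic loading.
print(fault_slips(55.0, 100.0, pore_pressure_mpa=5.0))    # False: 55 < 0.6 * 95 = 57
print(fault_slips(55.0, 100.0, pore_pressure_mpa=10.0))   # True:  55 >= 0.6 * 90 = 54
```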
In the late 1990s researchers analysed water flow into the 12 km long Mont Blanc tunnel, which links France and Italy (La Houille Blanche 86 78). They also found a yearly pattern, with a rapid increase in water entering the tunnel between August and October. The low mineral content of the water and results from tracer tests, using fluorescent dyes injected into a glacier crevasse on the massif, confirmed that this increased flow was fresh water from snow and glacier melt.
To explore the seasonal trend in the water table, Simon and colleagues created a hydrological model (a simplified mathematical model of a real-world water flow system) using the tunnel inflow data, plus meteorological, hydrological and snow-pack data from elsewhere in the Alps. They also included information on how water diffuses into rocks, alters pore pressure and increases seismic activity (Earth and Planetary Sci. Lett. 666 119372).
Underground menace The Mont Blanc Massif, with Lac Blanc in the foreground. The timing of heatwaves in this region seemingly correlates with increased microquakes. (Courtesy: Shutterstock/Rasto SK)
When the model was combined with their seismicity data, the autumn seismic activity appeared to be triggered by spring surface runoff, which arises from melting glacial ice and snow. The exact timing depends on the depth of the microquakes: shallow quakes are linked to surface runoff from the previous year, while there is a two-year delay between runoff and deeper quakes. Essentially, their work found a link between meltwater and seismic activity in the Mont Blanc massif, but it could not explain why the autumn increase in microquakes only started in 2015.
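The depth-dependent delay is what you would expect if pore-pressure changes diffuse slowly downwards from the surface. As a rough, back-of-the-envelope illustration – the diffusivity and depths below are assumptions, not values from the study – the characteristic time for a pressure pulse to reach a depth z scales as roughly z²/4c, where c is the hydraulic diffusivity of the rock:

```python
SECONDS_PER_YEAR = 3.15e7

def diffusion_delay_years(depth_m: float, diffusivity_m2_s: float = 0.3) -> float:
    """Characteristic pore-pressure diffusion time, t ~ z^2 / (4c), converted to years.
    The diffusivity is an assumed value; real crustal values span orders of magnitude."""
    return depth_m ** 2 / (4.0 * diffusivity_m2_s) / SECONDS_PER_YEAR

for depth_km in (3, 6, 9):   # hypothetical microquake depths
    print(f"{depth_km} km depth -> ~{diffusion_delay_years(depth_km * 1e3):.1f} yr lag")
# prints ~0.2, ~1.0 and ~2.1 years: deeper quakes naturally lag the meltwater pulse for longer
```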
Perhaps the answer lies in historic meteorological data of the area. In 2015 the Alps experienced a prolonged, record-breaking heatwave, which led to numerous high-altitude rockfalls in a number of areas, including the Mont Blanc massif, as rock-wall permafrost warmed. Data also show that since then there has been a big increase in days when the average temperature in the Swiss Alps is above 0 °C. These so-called “positive degree days” are known to lead to increased glacial melt.
All of these findings support the idea that the onset of seasonal seismic activity is linked to climate change-induced increases in meltwater and alterations in flow paths. Simon explains that rock collapses can alter the pathways that water follows as it infiltrates into the ground. Combined with increases in meltwater, this can lead to pore-pressure changes that increase seismicity and trigger it in new places.
These small earthquakes in the Mont Blanc massif are unlikely to trouble local communities. But the researchers did find that at times the seismic hazard – an indicator of how often and intensely the earth could shake in a specific area – rose by nearly four orders of magnitude compared with pre-2015 levels. They warn that similar processes in glaciated areas that experience larger earthquakes than the Alps, such as the Himalayas, might be less gentle.
Extreme rainfall
Climate change is also altering water-flow patterns by increasing the intensity of extreme weather events and heavy rainfall. And there is already evidence that such extreme precipitation can influence seismic activity.
In 2020 Storm Alex brought record-breaking rainfall to the south-east of France, with some areas seeing more than 600 mm in 24 hours. In the following 100 days, 188 earthquakes were recorded in the Tinée valley, in south-eastern France. Although all were below magnitude two, that volume of microquakes would usually be spread over a five-year period in the region. A 2024 analysis carried out by seismologists in France concluded that increased fluid pressure from the extreme rainfall caused a stressed fault system to slip, initiating a seismic swarm – a localized cluster of earthquakes, without a single “mainshock”, that takes place over a relatively short period of days, months or years (see figure 1).
French seismologist Laeticia Jacquemond and colleagues have developed a model showing the sequence of mechanisms that likely trigger such a seismic swarm. The sequence starts with abrupt and extreme rainfall, like 2020’s Storm Alex. Thanks to open fault zones, a lot of rainfall is transmitted deep into a critically stressed crust. The fluid invasion through the fractured medium then induces a poroelastic response of the crust at shallow depths, triggering or accelerating seismic slip on fault planes. As this slip propagates through the fault network, it pressurizes and stresses locked asperities (areas on an active fault where there is increased friction) that are predisposed to rupture, and initiates a seismic swarm.
There have been other examples in Europe of seismic activity linked to extreme rainfall. For instance, one study concluded that a catastrophic storm in western Provence in southern France in September 2002, with rainfall levels similar to those of Storm Alex, triggered a clear and sudden increase in seismic activity, while another analysis found that an unusual series of 47 earthquakes over 12 hours in central Switzerland in August 2005 was likely caused by three days of intense rainfall.
According to Marco Bohnhoff from the GFZ Helmholtz Centre for Geosciences in Potsdam, Germany, the link between fluid infiltration into the ground and seismicity is well understood – from fluid injection for oil and gas production, to geothermal development and heavy rainfall. “The pore pressure is increased if there is a small load on top, enforced by water, and that changes the pressure conditions in the underground, which can release energy that is already stored there,” Bohnhoff explains.
Pressure conditions Scientists have tracked the change in water level in the reservoir behind the four dams that make up the Koyna hydroelectric project in Maharashtra, India, finding that the rise during monsoon season is accompanied by an increase in seismic activity over the same period. (Courtesy: iStock/yogesh_more)
A good example of this is the Koyna Dam, one of India’s largest hydroelectric projects, which consists of four dams. Every year during the monsoon season the water level in the reservoir behind the dams increases by about 20–25 m, and with this comes an increase in seismic activity. “After the rain stops and the water level decreases, the earthquake activity stops,” says Bohnhoff. “So, the earthquake activity distribution nicely follows the water level.”
Rising seas and seismic activity
According to Bohnhoff, anything that increases the pressure underground could trigger earthquakes. But he has also been studying the effect of another consequence of climate change: sea-level rise.
Undisputed and accelerating, sea-level rise is driven by two main effects linked to climate change: the expansion of ocean waters as they warm, and the melting of land ice, mainly the Antarctic and Greenland ice sheets. According to the World Meteorological Organization, sea levels will rise by half a metre by 2100 if emissions follow the Paris Agreement, but increases of up to two metres cannot be ruled out if emissions are even higher.
As ocean waters increase, so does the load on the underground. “This will change the global earthquake activity rate,” says Bohnhoff. In a study published in 2024, Bohnhoff and colleagues found that sea-level rise will advance the seismic clock, leading to more and in some cases stronger earthquakes (Seismological Research Letters 95 2571).
“It doesn’t mean that all of a sudden there will be earthquakes everywhere, but earthquakes that would have occurred sometime in the future will occur sooner,” he says. “We’re changing the regularity of earthquakes.” The risk created by this is greatest in coastal mega-cities, located near critical fault zones, such as San Francisco and Los Angeles in the US; Istanbul in Turkey; and Tokyo and Yokohama in Japan.
The findings cannot be used to predict individual earthquakes – in fact, it is very difficult to predict how much the seismic clock will advance, as it depends on the amount of sea-level rise. But there are faults around the world that are critically stressed and close to the end of their seismic cycle.
“Faults that are very, very close to failure, where basically there would be an earthquake, say in 100 years or 50 years, they might be advanced and that might occur very soon,” he explains.
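To get a feel for the loads involved, the extra vertical stress from a standing column of water is simply its density times gravitational acceleration times its height. The quick calculation below is illustrative only – the published studies rely on far more detailed stress modelling – and compares the seasonal Koyna reservoir rise with half a metre of sea-level rise:

```python
G = 9.81  # gravitational acceleration, m/s^2

def water_load_kpa(height_m: float, density_kg_m3: float) -> float:
    """Vertical stress added by a column of water, rho * g * h, in kilopascals."""
    return density_kg_m3 * G * height_m / 1.0e3

print(water_load_kpa(20.0, 1000.0))   # Koyna monsoon rise, ~20 m of fresh water -> ~196 kPa
print(water_load_kpa(0.5, 1025.0))    # 0.5 m of sea-level rise (seawater)       -> ~5 kPa
```

Even the few kilopascals contributed by sea-level rise can matter on faults that are already critically stressed, which is why the effect shows up as an advance of the seismic clock rather than as earthquakes where none would otherwise have occurred.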
Between a rock and a hard place
Another significant geological hazard linked to climate change and heavy rainfall is volcanic activity. In December 2021 there was a devastating eruption of Mount Semeru, on the Indonesian island of Java. “There was a really heavy rainfall event and that caused the collapse of the lava dome at the summit,” says Jamie Farquharson, a volcanologist at Niigata University in Japan.
This led to a series of eruptions, pyroclastic flows and “lahars” – devastating flows of mud and volcanic debris – that killed at least 69 people and damaged more than 5000 homes. Although it is challenging to attribute this specific event to climate change, Farquharson says that it is a good example of how global warming-induced heavy rainfall could exacerbate volcanic hazards.
Farquharson and colleagues noticed links between ground deformations and rainfall at several volcanoes. “We started seeing some correlations and thought why shouldn’t we? Because from a rock mechanics point of view, these volcanoes would be more prone to fracturing and other kinds of failure when the pore pressure is high,” says Farquharson. “And one of the easiest ways of increasing pore pressure is by dumping a load of rain onto the volcano.”
Such rock fracturing can open new pathways for magma to propagate towards the surface. This can happen deep underground, but also near the surface, for instance by causing a chunk of the flank to slide off a volcano. As with earthquakes, these changes could alter the timing of eruptions. For volcanoes that might be primed for an eruption, where the magma chamber is inflating, extreme rainfall events might hasten an eruption. As Farquharson explains, such rainfall events “could bring something that was going to happen at an unspecified point in the future across a tipping point”.
A few years ago Farquharson, together with atmospheric scientist Falk Amelung of the University of Miami in the US, published a study showing that if global warming continues at current rates, rainfall-linked volcanic activity – such as dome explosions and flank collapses – will increase at more than 700 volcanoes around the globe (R. Soc. Open Sci. 9 220275).
To explore the impact of rainfall, Farquharson and Amelung analysed decades of reports on volcanic activity from the Smithsonian’s Global Volcanism Program. This showed that heavy or extreme rainfall has been linked to eruptions and other hazards, such as lahars, at no fewer than 174 volcanoes (see figure 2).
There are 1234 volcanoes on land that have been active in the Holocene, the current geological epoch, which began around 12,000 years ago. The geologists used nine different models to explore how climate change might alter rainfall at these volcanoes. They found that 716 of these volcanoes will experience more extreme rainfall as global temperatures continue to rise. The models did not agree on whether rainfall will become more or less extreme at 407 of the volcanoes, and the remaining 111 are in regions expected to see a drop in heavy rain.
Jamie Farquharson and colleagues are studying how heavy rainfall drives a range of volcanic hazards. The colours on the map reflect the “forced model response” (FMR) – the percentage change of heavy precipitation for a given unit of global warming. Serving as a proxy for the likelihood of extreme rainfall events, the value of FMR was averaged from nine different “general circulation models” (i.e. global climate models). FMR is shown here as the percentage rise or fall in extreme rainfall projected by the models for every degree of global warming between 2005 and 2100 CE. The darkest reds show areas that will experience a 20% or more decrease in extreme rainfall for each degree of warming, while the darkest blues highlight areas which will experience a 20% or more increase in extreme rainfall per degree of warming. The figures were made with CMIP5 model data, which assumes a “high emissions” scenario. Their results suggest that if global warming continues unchecked, the incidence of primary and secondary rainfall-related volcanic activity – such as dome explosions or flank collapse – will increase at more than 700 volcanoes around the globe.
Volcanic regions where heavy rainfall is expected to increase include the Caribbean islands, parts of the Mediterranean, the East African Rift system, and most of the Pacific Ring of Fire.
In fact, volcanic hazards in many of these regions have already been linked to heavy rainfall. For instance, in 1998 extreme rainfall in Italy led to devastating debris flows on Mount Vesuvius and Campi Flegrei, near Naples, killing 160 people.
Elsewhere, rainfall has sparked explosive activity at Mount St Helens, in the Cascade Mountains of the western US and Canada. Other volcanoes in this range, which is part of the Ring of Fire, put major population centres at significant lahar risk, due to their steep slopes. In both the Caribbean and Indonesia – the world’s most volcanically active country – heavy rainfall has triggered explosive activity and eruptions at active volcanoes.
Farquharson and Amelung warn that if heavy rainfall increases in these regions as predicted, it will heighten an already considerable threat to life, property and infrastructure. As we enter a new era of much higher resolution climate modelling, Farquharson hopes that we will “be able to get a much better handle on exactly which [volcanic] systems could be affected the most”. This may enable scientists to better estimate how hazards will change at specific geographical locations.
Fire and ice
Scientists are also concerned about what will happen to volcanoes currently buried under ice as the climate warms. Through modelling work and studying volcanoes that sat below the Patagonian Ice Sheet during and at the end of the last ice age, Brad Singer, a geoscientist at the University of Wisconsin-Madison in the US, and colleagues have been exploring the impact of deglaciation on volcanic processes.
They found that ice loss can lead to an increase in large explosive eruptions. This occurs because as the ice melts, the weight on the volcano drops, which allows magma to expand and put pressure on the rock within the volcano. Also, as pressure from the ice reduces, dissolved volatile gases like water and carbon dioxide separate from the magma to form gas bubbles. This further increases the pressure in the magma chamber, which can promote an eruption.
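The scale of that unloading is straightforward to estimate: melting away a thickness of ice reduces the pressure on the rock beneath by roughly the ice density times gravitational acceleration times the thickness. The sketch below uses illustrative ice thicknesses rather than values from Singer’s work:

```python
G = 9.81          # gravitational acceleration, m/s^2
RHO_ICE = 917.0   # density of glacial ice, kg/m^3

def unloading_mpa(ice_thickness_m: float) -> float:
    """Pressure drop on the underlying crust when an ice load of this thickness melts: rho * g * h."""
    return RHO_ICE * G * ice_thickness_m / 1.0e6

for thickness_m in (500, 1000, 2000):   # hypothetical ice thicknesses
    print(f"{thickness_m} m of ice removed -> ~{unloading_mpa(thickness_m):.0f} MPa of decompression")
# prints roughly 4, 9 and 18 MPa of decompression
```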
But each volcano responds differently to ice. Singer’s team has been dating and studying the chemical composition of lava flow samples from South America, to track the behaviour of volcanoes over tens of thousands of years, through the build-up of the ice and after deglaciation.
The Patagonian Ice Sheet began to melt very rapidly about 18,000 years ago and by about 16,000 years ago it was gone. “We develop a timeline and put compositions on that timeline and look to see if there were any changes in the composition of the magmas that were erupting as a function of the thickness of the ice sheet,” explains Singer. “We are finding some really interesting things.”
The Puyehue-Cordón Caulle and Mocho-Choshuenco volcanic complexes in southern Chile both erupt rhyolitic magmas. But they were not producing this type of magma before the ice retreated, as Singer and colleagues found (GSA Bulletin 136 5262) (see figure 3).
Geologist Brad Singer and colleagues are studying how glaciers and ice sheets impact the evolution of volcanoes, to develop a “lava-fed delta” model. (a) The researchers studied basaltic andesites in the Río Blanco river in Argentina (Arb). Fine-grained extrusive igneous rocks that form when volcanic magma erupts and crystallizes outside the volcano, these basaltic andesites were impounded by the Patagonian Ice Sheet roughly 26,000 years ago. Here they formed cliffs that were then occupied by the Patagonian Ice Sheet at 1500–1700 m above sea level between 26,000 and 20,000 years ago. Ice on top of the edifice should have been comparatively thinner than in the surrounding valleys. (b) As the ice sheet retreated between 18,000 and 16,000 years ago, dacite – a fine-grained volcanic rock formed by rapid solidification of lava that is high in silica and low in alkali metal oxides – from the Río Truful river in Chile (Drt) flowed onto it. (c) Lava is channelized as it melts the ice to form a lava-fed delta. (d) Dacite flows through the ice and to its base.
“We don’t know for sure that [magma change] is attributable to the glaciation, but it is curious that immediately following the deglaciation we start to see the first appearance of these highly explosive rhyolitic magmas,” says Singer. The volcanologists suspect that the ice sheet reduced eruptions at these volcanoes, leading magma to accumulate over thousands of years. “That accumulated reservoir can evolve into this explosive dangerous magma type called rhyolite,” Singer adds.
But that didn’t always happen. The Calbuco volcano, in southern Chile, has always erupted andesite, an intermediate-composition magma. “It’s never erupted basalt, it’s never erupted rhyolite, it’s erupting andesite, regardless of whether the ice is there or not,” explains Singer.
There are also differences in how quickly volcanoes reacted to the deglaciation. At Mocho-Choshuenco, for example, there was a large rhyolite eruption about 3000 years after the loss of ice. Singer suspects that the delay “reflects the time that it took to exsolve the volatiles from the rhyolite”. But at the nearby, very active Villarrica volcano, there was no such delay. It experienced a huge eruption 16,800 years ago, almost immediately after the ice disappeared.
Melting ice sheets
Volcanic activity from melting ice sheets, due to current climate change, is probably not a direct hazard to people. But below the West Antarctic Ice Sheet sits the West Antarctic Rift – a system that is thought to contain at least 100 active volcanoes.
A major contributor to global sea-level rise, the West Antarctic Ice Sheet is particularly vulnerable to collapse as temperatures rise. If they become more active and explosive, the volcanoes of the West Antarctic Rift System could accelerate ice melting and sea-level rise.
Icy danger Thwaites Glacier (photographed by the Copernicus Sentinel-2 satellite in 2019) is a tongue of the West Antarctic Ice Sheet and has so much ice that it alone could raise global sea levels by around 60 cm. The ice sheet sits on top of a rift system thought to contain 100 active volcanoes. Reduced ice load as the sheet melts could trigger these volcanoes, which would in turn accelerate melting. (CC BY-SA 3.0 IGO/ESA)
“The melting of the West Antarctic Ice Sheet could remove the surface load that’s preventing eruptions from occurring,” says Singer. Such eruptions could bring lava and heat to the base of the ice sheet, which is dangerous because melting at the base can cause the ice to move faster into the ocean. The resulting rising sea levels could go on to advance the seismic clock and trigger earthquakes.
In the long run, increased volcanic activity will impact the global climate, with the cumulative effect of multiple eruptions contributing to global warming thanks to a build-up of greenhouse gases. Essentially, a positive feedback loop is created: melting ice caps, hastened by volcanic activity, drive sea-level rise that could in turn lead to more earthquakes. Managing the Earth’s warming and protecting the world’s remaining glaciers and ice sheets is therefore more crucial than ever.
Nothing stays static in today’s job market. Physicist Gabi Steinbach recalls that about five years ago, fresh physics PhDs without any job experience could snag lucrative data-scientist positions at companies. “It was a really big boom,” says Steinbach, who is based at the University of Maryland in the US. Then, schools started formal data-science programmes that churned out job-ready candidates to compete with physicists. Now, the demand for physicists as data scientists “has already subsided,” she says.
Today, new graduates face an uncertain job market, as companies wrestle with the role of artificial intelligence (AI) and as science research agencies in the US face funding cuts. But those with physics degrees should stay optimistic, according to Matt Thompson, a physicist at Zap Energy, a fusion company based in Seattle, Washington.
“I don’t think the value of a physics education ever changes,” says Thompson, who has mentored many young physicists. “It is not a flash-in-the-pan major where the funding and jobs come from changes. The value of the discipline truly is evergreen.”
Evergreen discipline
In particular, a physics degree prepares you for numerous technical roles in emerging industrial markets. Thompson’s company, for example, offers a number of technical roles that could fit physicists with a bachelor’s, master’s or PhD.
A good way to set yourself up for success is to begin your job hunt two years before you expect to graduate, says Steinbach, who guides young researchers in career development. “Many students underestimate the time it takes,” she says.
The early start should help with the “internal” work of job hunting, as Steinbach calls it, where students figure out their personal ambitions. “I always ask students or postdocs, what’s your ultimate goal?” she says. “What industry do you want to work in? Do you like teamwork? Do you want a highly technical job?”
Then, the external job hunt begins. Students can find formal job listings on Physics World Jobs, APS Physics Jobs and in the Physics World Careers and APS Careers guides, as well as companies’ websites or on LinkedIn. Another way to track opportunities is to read investment news, says Monica Volk, who has spent the last decade hiring for companies, including Silicon Valley start-ups. She follows “Term Sheet,” a Fortune newsletter, to see which companies have raised money. “If they just raised $20 million, they’re going to spend that money on hiring people,” she says.
Expert advice From left to right: Gabi Steinbach, Matt Thompson, Monica Volk, Carly Saxton, and Valentine Zatti. (Courtesy: Gabi Steinbach; Zap Energy; Mike Craig; Crouse Powell Photography; Alice & Bob)
Volk encourages applicants to tailor their résumé for each specific job. “Your résumé should tell a story, where the next chapter in the story is the job that you’re applying for,” she says.
Hiring managers want a CV to show that a candidate from academia can “hit deadlines, communicate clearly, collaborate and give feedback.” Applicants can show this capability by describing their work specifically. “Talk about different equipment you’ve used, or the applications your research has gone into,” says Carly Saxton, the VP of HR at Quantum Computing, Inc. (QCI), based in New Jersey, in the US. Thompson adds that describing your academic research with an emphasis on results – reports written, projects completed and the importance of a particular numerical finding – will give those in industry the confidence that you can get something done.
It’s also important to research the company you’re applying to. Generative AI can help with this, says Valentine Zatti, the HR director for Alice & Bob, a quantum computing start-up in France. For example, she has given ChatGPT a LinkedIn page and asked it to summarize the recent news about a company and list its main competitors. She is careful to verify the veracity of the summaries.
When writing a CV, it’s important to use the keywords from the job description. Many companies use applicant-tracking systems, which automatically filter out CVs without those keywords. This may involve learning the jargon of the industry. For example, when Thompson looked for jobs in the defence sector, he found out they called cameras “EO/IR,” short for electro-optic infrared instruments. Once he started referring to his expertise using those words, “I got a lot better response,” he says.
Generative AI can also assist you in putting together a résumé. For example, it can make résumés, which should be one page long, more concise, or help you better match your language to the job description. But Steinbach cautions that you must stay vigilant. “If it’s writing things that don’t sound like you, or if you can’t remember what’s written on it, you will fail at your interview,” says Steinbach.
Companies fill job openings quickly, especially right now, so Thompson also recommends focusing on networking. “It’s fine to apply for jobs you see online, but that should be maybe 20 percent of your effort,” he says. “Eighty percent should be talking to people.” One effective approach is through company internships before graduation. “We jump at the opportunity to hire former interns,” says Saxton.
Thompson suggests arranging a half-hour call with someone whose job looks interesting to you. You can find people through your alumni networks, LinkedIn or APS’s Industry Mentoring for Physicists (IMPact) program, which connects students and early-career physicists from any country with industrial physicists worldwide for career guidance. You can also attend career fairs at your university and those organized by the APS.
Skills showcase
Once a company is interested in you, you can expect several rounds of interviews. The first will be about the logistics of the job – whether you’d need to relocate, for example. After that, for technical roles you can expect technical interviews. Recently, companies have encountered candidates secretly using AI to cheat during these interviews, and may eliminate a candidate who does so. “If you don’t know how to do something, it’s better to be honest about it than to use AI to get through a test,” says Saxton. “Companies are willing to teach and develop core skills.”
What physics grads use AI tools for in their jobs (Courtesy: American Institute of Physics)
The “AI Use Among Physics Degree Recipients” report by the American Institute of Physics, published in August 2025, shows how recent physics degree recipients are engaging with AI, encompassing both its development and its application in daily professional activities. In February 2025, new bachelor’s graduates working in both STEM and non-STEM roles who received their degrees in 2023 or 2024 were asked whether they routinely used AI tools in their day-to-day work.
Used transparently, however, AI skills could be a boon during job interviews. A 2025 survey from the American Institute of Physics found that around one in four students with a physics bachelor’s degree (see the graph) and two in five with physics PhDs routinely use AI for work. The report also found that one in 12 physics bachelor’s degree-earners and nearly one in five physics doctorate-earners who entered the workforce in 2024 have jobs in AI development.
The emerging quantum industry is also a promising job market for physicists. Globally, investors put nearly $2 billion in quantum technology in 2024, while public investments in quantum in early 2025 reached $10 billion. “You’ll have an opportunity to work for companies in their building stage, and you’re able to earn equity as part of that company,” says Saxton.
Alice & Bob are in the midst of hiring 100 new staff, 25 of whom are quantum physicists, including experimentalists and theorists, based in Paris. Zatti, in particular, wants to boost the number of women working in the field.
Currently, the pool of qualified candidates in quantum is small. Consequently, Alice & Bob can screen CVs manually, says Zatti. Both Alice & Bob and US-based QCI say they are willing to hire internationally. QCI is willing to pay legal fees for candidates to help them continue working in the US, says Saxton.
It’s important to stay flexible in today’s job market. “Don’t ignore current trends, but don’t get married to them either,” suggests Steinbach. Thompson agrees, adding that curiosity is key. “You just have to be creative. If you can open your aperture to all of private industry, there’s a lot of opportunity out there.”
People who teach physics often remove friction from calculations to make life easier for students. While that might speed up someone’s homework, it does mean that this all-important force tends to fade into the background, despite it being crucial to our daily lives. Here to bring friction centre stage is Jennifer Vail, a “tribologist” – someone who studies friction – at US firm TA Instruments.
Friction: a Biography is an engaging and wide-ranging book illustrating the many manifestations of friction in the natural world and showing how this force can be harnessed to solve practical engineering problems. Vail, who wrote the book after giving a hugely popular TED talk on friction, does a great job of connecting abstract physical ideas with familiar human experience.
I like, for example, her description of what happens when two surfaces slide over each other but the friction between them isn’t constant. As she explains, this “stick-slip” motion isn’t great if you’re trying to inject a drug into someone with a syringe. But it can be exploited to beautiful effect by violinists, creating “downright lovely” sounds (though apparently not when she’s practising on her own viola).
One of the book’s strengths is its historical context. Famous figures like Leonardo da Vinci are introduced alongside the development of their ideas, lending a human dimension to the science. The author does a great job of explaining how tribology, which comes from the Greek for “to rub”, has been shaped by careful experimentation and the application of rigorous scientific thinking to industrial problems.
After a trip to Switzerland, for example, the Australian-born physicist Frank Bowden showed we can ski because frictional heating causes a thin layer of snow to melt beneath our skis, providing liquid lubrication. This overturned an earlier explanation associated with Osborne Reynolds (best known for the eponymous number marking the transition from laminar to turbulent flow) who’d thought that snow melts due to pressure.
Then there is the 19th-century researcher Robert Thurston, whose pendulum experiments on friction in bearings, described here in detail, guided the design of more efficient lubricated systems. As Vail explains, understanding friction is vital in the design of engines, where even small modifications – such as texturing surfaces, adding coatings, or putting nanoparticles into lubricants – can make them much more efficient and extend their useful life.
Historical anecdotes are woven throughout the book. The story of why graphite in pencils came to be called “lead” is particularly memorable. It turns out that the Romans used lead to write, so the name stuck – even after graphite became more popular because it allowed darker writing. There are also lots of excursions into the natural world: did you know that beetles have a protein in their leg joints that acts as a solid lubricant?
Smooth operator
Vail’s discussion of lubrication is clear and well-integrated with practical examples. Particularly insightful is the explanation of how hydrodynamic lubrication occurs in biological systems, such as human cartilage, where a thin fluid layer separates cartilage surfaces in joints, reducing friction and wear. As Vail makes clear, tribology is vital in physiology, for example in how contact lenses work when we blink our eyes or how food feels in our mouth when we chew.
The book also examines fluid dynamics and drag, distinguishing between viscosity as a material property and drag as a force. Vail’s discussion of plaque on the walls of our arteries is particularly compelling. If there’s not enough drag to shear off the plaque it can cause blockages and, potentially, a heart attack – showing how friction plays a role in our health.
Environmental considerations are addressed too. The author discusses, for example, the impact of polytetrafluoroethylene (PTFE), which she calls “the most controversial solid lubricant ever”. Also known as Teflon, it is widely used in frying pans, but is synthesized using some pretty nasty carcinogenic “forever” chemicals that don’t break down in the environment. PTFE also has a shady past, being first used in the Manhattan atomic-bomb project to coat valves when separating isotopes of uranium.
On a more positive note, Vail shows how an understanding of friction can improve energy efficiency, reduce greenhouse-gas emissions, and mitigate global warming. The book extends further still, encompassing atmospheric, oceanic and planetary processes, as well as astronomy and cosmology. Friction is a universal physical principle, extending well beyond conventional engineering applications and broadening the scope of the book.
However, Vail’s intended audience is not always clear. Some sections read like a primer for tribologists, while others are highly speculative, such as the idea that life originated on Earth because oxidized molybdenum was delivered from Mars aboard Martian meteorites. There are also occasional errors and ambiguities, such as her discussion of the subtleties of the Earth’s tides.
Statements such as electric vehicles “consuming 106% energy” could have been more clearly explained, while her market estimate for anti-friction coatings of just over $1.5m by 2028 is almost certainly too low by three orders of magnitude. While these issues do not undermine the book’s scientific substance, they may distract careful readers, and the rapid movement between topics occasionally disrupts the narrative flow.
Overall, though, Vail does a good job of balancing technical exposition with anecdote and gentle humour. Friction might seem an unpromising subject for a book, but non-expert readers will find much to surprise and engage them. Despite its flaws, I would recommend it as an illuminating, if imperfect, celebration of friction and its central role in science and engineering.
High-level backing “Quantum Metrology: From Foundations to the Future” was held at NPL as part of the global celebrations for the UNESCO International Year of Quantum Science and Technology. Above: Lord Vallance, UK Minister for Science, Innovation, Research and Nuclear, opens the workshop with the official launch of the NMI-Q collaboration, an international metrology initiative that aims to accelerate the adoption of quantum technologies and applications. (Courtesy: NPL)
The UNESCO International Year of Quantum Science and Technology (IYQ) ends on an exotic flourish this month, with the official closing ceremony – which will be live-streamed from Accra, Ghana – looking back on what’s been a global celebration “observed through activities at all levels aimed at increasing public awareness of the importance of quantum science and applications”.
The timing of IYQ has proved apposite, mirroring as it does a notable inflection point within the quantum technology sector. Advances in fundamental quantum science and applied R&D are accelerating on a global scale, harnessing the exotic properties of quantum mechanics – entanglement, tunnelling, superposition and the like – to underpin practical applications in quantum computing and quantum communications.
Quantum metrology, meanwhile, has progressed from its roots in fundamental physics to become a cornerstone of technology innovation, yielding breakthroughs in fields such as precision timing, navigation, cryptography and advanced imaging – and that’s just for starters.
Collaborate to accelerate
Notwithstanding all this forward motion, IYQ has also highlighted significant challenges when it comes to scaling quantum systems, achieving fault tolerance and ensuring reproducible performance. Enter NMI-Q, an international initiative that leverages the combined expertise of the world’s leading National Metrology Institutes (NMIs) – from the G7 countries and Australia – to accelerate the adoption of foundational hardware and software technologies for quantum computing systems and the quantum internet.
Cyrus Larijani “We want NMI-Q to blossom into something much bigger than the individual NMIs.” (Courtesy: NPL)
The NMI-Q partnership was officially launched in November at the IYQ conference “Quantum Metrology: From Foundations to the Future”, an event hosted by NPL. Together, the respective NMIs will conduct collaborative pre-standardization research; develop a set of “best measurement practices” needed by industry to fast-track quantum innovation; and, ultimately, shape the global standardization effort in quantum technologies.
“NMI-Q has an ambitious and broad-scope brief, but it’s very much a joined-up effort when it comes to the division of labour,” says Cyrus Larijani, NPL’s head of quantum programme. The rationale is that no one country can do it all when it comes to the performance metrics, benchmarks and standards needed to take quantum breakthroughs out of the laboratory and into the commercial mainstream.
Post-launch, NMI-Q has received a collective “uptick” from the quantum community, with the establishment of internationally recognized standards and trusted benchmarks seen as core building blocks for the at-scale uptake and interoperability of quantum technologies. “What’s more,” adds Larijani, “there’s a clear consensus for collaboration over competition [between the NMIs], supported by shared development roadmaps and open-access platforms to avoid fragmentation and geopolitical barriers.”
Follow the money
In terms of technology push, the scale of investment – both public and private sector – in all things quantum means that the nascent supply chain is evolving at pace, linking component manufacturers, subsystem developers and full-stack quantum computing companies. That’s reinforced by plenty of downstream pull: all sorts of industries – from finance to healthcare, telecoms to energy generation – are seeking to understand the commercial upsides of quantum technologies, but don’t yet have the necessary domain knowledge and skill sets to take full advantage of the opportunities.
Given that context, the onus is on NMI-Q to pool its world-leading expertise in quantum metrology to inform evidence-based decision-making among key stakeholders in the “quantum ecosystem”: investors, policy-makers, manufacturers and, ultimately, the end-users of quantum applications. “Our task is to make sure that quantum technologies are built on reliable, scalable and interoperable foundations,” notes Larijani. “That’s the crux of where we’re going with NMI-Q.”
Made to measure NMI-Q leverages the combined expertise of NMIs from the G7 countries and Australia to shape the global standardization effort in quantum science and technology. Above: NMI-Q representatives gathered at NPL in November for the collaboration’s official launch, announced by UK science minister Lord Vallance (front row, third from right). (Courtesy: NPL)
Right now, NPL and its partner NMIs are busy shaping NMI-Q’s work programme and deliverables for 2026 and beyond, with the benchmarking of quantum computers very much front-and-centre. Their challenge lies in the diversity of quantum hardware platforms in the mix; also the emergence of two different approaches to quantum computing – one being a gate-based framework for universal quantum computation, the other an analogue approach tailored to outperforming classical computers on specific tasks.
“In this start-up phase, it’s all about bringing everyone together to define and assign the granular NMI-Q work packages and associated timelines,” says Larijani. Operational and strategic alignment is also mandatory across the member NMIs, so that each laboratory (and its parent government) is fully on board with the collaboration’s desired outcomes. “It’s going very well so far in terms of aligning members’ national interests versus NMI-Q’s direction of travel,” adds Larijani. “This emphasis on ‘science diplomacy’, if you like, will remain crucial to our success.”
Long term, NMI-Q’s development of widely applicable performance metrics, benchmarks and standards will, it is hoped, enable the quantum technology industry to achieve critical mass on the supply side, with those economies of scale driving down prices and increasing demand.
“Ultimately, though, we want NMI-Q to blossom into something much bigger than the individual NMIs, spanning out to engage the supply chains of member countries,” says Larijani. “It’s really important for NPL and the NMI-Q partners to help quantum companies scale their offerings, advance their technology readiness level and, sooner than later, get innovative products and services into the market.”
That systematic support for innovation and technology translation is evident on the domestic front as well. The UK Quantum Standards Network Pilot – which is being led by NPL – brings together representatives from industry (developers and end-users), academia and government to work on all aspects of standards development and ensure that UK quantum technology companies have access to global supply chains and markets.
Quantum impact
So what does success look like for Larijani in 2026? “We’re really motivated to work with as many quantum companies as we can – to help these organizations launch new quantum products and applications,” he explains. Another aspiration is to encourage industry partners to co-locate their R&D and innovation activities within NPL’s Institute for Quantum Standards and Technology.
“There are moves to establish a quantum technology cluster at NPL to enable UK and overseas companies to access our specialist know-how and unique measurement capability,” Larijani concludes. “Equally, as a centre-of-excellence in quantum science, we can help to scale the UK quantum workforce as well as encourage our own spin-out ventures in quantum metrology.”
Quantum futures: inclusive, ethical, sustainable
“Quantum Metrology: From Foundations to the Future” was held at NPL as part of UNESCO’s IYQ global celebrations. Organized by a steering committee of NMI-Q members, the conference explored quantum metrology and standards as enablers of technology innovation; also their role as “a cornerstone for trust, interoperability, and societal benefit in quantum innovation and adoption”.
The commitments below – articulated as formal recommendations for UNESCO – reflect the collective vision of conference delegates for an inclusive, ethical and sustainable quantum future…
Governance and ethics: attendees emphasized the need for robust governance and ethical oversight in quantum technologies. They called for the establishment of neutral international bodies, ideally under UN leadership, to ensure fair and transparent governance. Inclusivity was highlighted as essential, with a strong focus on extending benefits to developing nations and maintaining open dialogue. Concerns were raised about risks linked to scalability, security and potential misuse by non-state actors, underscoring the importance of proactive monitoring.
Standards and infrastructure: participants advocated for sustained funding to develop international standards and benchmarking frameworks. They also stressed the value of shared fabrication facilities and testbeds to democratize access and accelerate innovation globally.
Education and talent: education and talent development emerged as a priority, with recommendations to launch fully funded MSc programmes, practical placements and mentoring networks. Strengthening links between industry and academia, alongside outreach to schools, is seen as vital for early engagement and long-term skills development.
Societal impact: delegates urged that societal impact remain central to quantum initiatives. Applications in healthcare, climate modelling and sustainability should be a priority; also arts and cultural integration efforts to foster public understanding and ethical reflection.
Using artificial intelligence (AI) increases scientists’ productivity and impact but collectively leads to a shrinking of research focus. That is according to an analysis of more than 41 million research papers by scientists in China and the US, which finds that scientists who produce AI-augmented research also progress faster in their careers than their colleagues who do not (Nature 649 1237).
The study was carried out by James Evans, a sociologist at the University of Chicago, and his colleagues who analysed 41.3 million papers listed in the OpenAlex dataset published between 1980 and 2025. They looked at papers in physics and five other disciplines – biology, chemistry, geology, materials science and medicine.
Using an AI language model to identify AI-assisted work, the team picked out almost 310,000 AI-augmented papers from the dataset. They found that AI-supported publications receive more citations than papers produced without AI, while also being more impactful across multiple indicators and appearing more often in high-impact journals.
Individual researchers who adopt AI publish, on average, three times as many papers and get almost five times as many citations as those not using AI. In physics, researchers who use AI tools garner 183 citations every year, on average, while those who do not use AI get only 51 annually.
AI also boosts career trajectories. Based on an analysis of more than two million scientists in the dataset, the study finds that junior researchers who adopt AI are more likely to become established scientists. They also gain project leadership roles almost one-and-a-half years earlier, on average, than those who do not use AI.
Fundamental questions
But when the researchers examined the knowledge spread of a random sample of 10,000 papers, half of which used AI, they found that AI-produced work shrinks the range of topics covered by almost 5%. The finding is consistent across all six disciplines. Furthermore, AI papers are more clustered than non-AI papers, suggesting a tendency to concentrate on specific problems.
AI tools, in other words, appear to funnel research towards areas rich in data and help to automate established fields rather than exploring new topics. Evans and colleagues think this AI-induced convergence could drive science away from foundational questions and towards data-rich operational topics.
AI could, however, help combat this trend. “We need to reimagine AI systems that expand not only cognitive capacity but also sensory and experimental capacity,” they say. “[This could] enable and incentivize scientists to search, select and gather new types of data from previously inaccessible domains rather than merely optimizing analysis of standing data.”
Meanwhile, a new report by the AI company OpenAI has found that messages on advanced topics in science and mathematics on ChatGPT over the last year have grown by nearly 50%, to almost 8.4 million per week. The firm says its generative AI chatbot is being used to advance research across scientific fields from experiment planning and literature synthesis to mathematical reasoning and data analysis.
Astronomers have created the most detailed map to date of the vast structures of dark matter that appear to permeate the universe. Using the James Webb Space Telescope (JWST), the team, led by Diana Scognamiglio at NASA’s Jet Propulsion Laboratory, used gravitational lensing to map the dark matter filaments and clusters with unprecedented resolution. As a result, physicists have new and robust data to test theories of dark matter.
Dark matter is a hypothetical substance that appears to account for about 85% of the mass in the universe – yet it has never been observed directly. Dark matter is invoked by physicists to explain the dynamics and evolution of large-scale structures in the universe. This includes the gravitational formation of galaxy clusters and the cosmic filaments connecting them over distances of more than 100 million light-years.
Light from very distant objects beyond these structures is deflected by the gravitational tug of dark matter within the clusters and filaments. On Earth, this is observed as gravitational lensing, which distorts the images of those distant objects and affects their observed brightness. These effects can be used to determine the dark-matter content of the clusters and filaments.
In 2007, the Cosmic Evolution Survey (COSMOS) used the Hubble Space Telescope to create a map of cosmic filaments in an area of the sky about nine times larger than that occupied by the Moon.
“The COSMOS field was published by Richard Massey and my advisor, Jason Rhodes,” Scognamiglio recounts. “It has a special place in the history of dark-matter mapping, with the first wide-area map of space-based weak lensing mass.”
However, Hubble’s limited resolution meant that many smaller-scale features remained invisible in COSMOS. In a new survey called COSMOS-Web, Scognamiglio’s team harnessed the vastly improved imaging capabilities of the JWST, which offers over twice the resolution of its predecessor.
Sharp and sensitive
“We used JWST’s exceptional sharpness and sensitivity to measure the shapes of many more faint, distant galaxies in the COSMOS-Web field – the central part of the original COSMOS field,” Scognamiglio describes. “This allowed us to push weak gravitational lensing into a new regime, producing a much sharper and more detailed mass map over a contiguous area.”
With these improvements, the team could measure the shapes of 129 galaxies per square arcminute in an area of sky the size of 2.5 full moons. With thorough mathematical analysis, they could then identify which of these galaxies had been distorted by dark-matter lensing.
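To get a feel for the scale of that measurement, a back-of-the-envelope estimate (ours, not a figure quoted by the team, and assuming the full Moon subtends about 0.5° so that “2.5 full moons” means 2.5 times the Moon’s disc area) runs as follows:

```python
# Rough estimate of how many galaxy shapes the quoted density implies.
# Assumptions (ours): full Moon diameter ~0.5 degrees; "2.5 full moons"
# means 2.5 times the area of the Moon's disc.
import math

density_per_arcmin2 = 129                          # galaxy shapes per square arcminute
moon_area_deg2 = math.pi * (0.5 / 2) ** 2          # ~0.196 deg^2
survey_area_deg2 = 2.5 * moon_area_deg2            # ~0.49 deg^2
survey_area_arcmin2 = survey_area_deg2 * 3600      # 1 deg^2 = 3600 arcmin^2

n_shapes = density_per_arcmin2 * survey_area_arcmin2
print(f"~{n_shapes:,.0f} galaxy shapes measured")  # roughly 2 x 10^5
```

That works out at a couple of hundred thousand measured galaxy shapes, which is what makes a statistical reconstruction of the lensing mass map possible.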
“The map revealed fine structure in the cosmic web, including filaments and mass concentrations that were not visible in previous space-based maps,” Scognamiglio says.
Peak star formation
The map allowed the team to identify lensing structures out to distances of roughly 5 billion light-years, corresponding to the universe’s peak era of star formation. Beyond this point, galaxies became too sparse and dim for their shapes to be measured reliably, placing a new limit on the COSMOS-Web map’s resolution.
With this unprecedented resolution, the team could also identify features as small as the dark matter halos encircling small clusters of galaxies, which were invisible in the original COSMOS survey. The astronomers hope their result will set a new, higher-resolution benchmark for future studies using JWST’s observations to probe the elusive nature of dark matter, and its intrinsic connection with the formation and evolution of the universe’s largest structures.
“It also sets the stage for current and future missions like ESA’s Euclid and NASA’s Nancy Grace Roman Space Telescope, which will extend similar dark matter mapping techniques to much larger areas of the sky,” Scognamiglio says.
The facts seem simple enough. In 1957 Chen Ning Yang and Tsung-Dao Lee won the Nobel Prize for Physics “for their penetrating investigation of the so-called parity laws which has led to important discoveries regarding the elementary particles”. The idea that parity is violated shocked physicists, who had previously assumed that every process in nature remains the same if you reverse all three spatial co-ordinates.
Thanks to the work of Lee and Yang, who were Chinese-American theoretical physicists, it now appeared that this fundamental physics concept wasn’t true (see box below). As Yang once told Physics World columnist and historian of science Robert Crease, the discovery of parity violation was like having the lights switched off and being so confused that you weren’t sure you’d be in the same room when they came back on.
But one controversy has always surrounded the prize.
Lee and Yang published their findings in a paper in October 1956 (Phys. Rev. 104 254), meaning that their Nobel prize was one of the rare occasions that satisfied Alfred Nobel’s will, which says the award should go to work done “during the preceding year”. However, the first verification of parity violation was published in February 1957 (Phys. Rev. 105 1413) by a team of experimental physicists led by Chien-Shiung Wu at Columbia University, where Lee was also based. (Yang was at the Institute for Advanced Study in Princeton at the time.)
Surely Wu, an eminent experimentalist (see box below “Chien-Shiung Wu: a brief history”), deserved a share of the prize for contributing to such a fundamental discovery? In her paper, entitled “Experimental Test of Parity Conservation in Beta Decay”, Wu says she had “inspiring discussions” with Lee and Yang. Was gender bias at play, did her paper miss the deadline, or was she simply never nominated?
The Wu experiment
(Courtesy: IOP Publishing)
Parity is a property of elementary particles that says how they behave when reflected in a mirror. If the parity of a particle does not change during reflection, parity is said to be conserved. In 1956 Tsung-Dao Lee and Chen Ning Yang realized that while parity conservation had been confirmed in electromagnetic and strong interactions, there was no compelling evidence that it should also hold in weak interactions, such as radioactive decay. In fact, Lee and Yang thought parity violation could explain the peculiar decay patterns of K mesons, which are governed by the weak interaction.
In 1957 Chien-Shiung Wu suggested an experiment to check this based on unstable cobalt-60 nuclei radioactively decaying into nickel-60 while emitting beta rays (electrons). Working at very low temperatures to ensure almost no random thermal motion – and thereby enabling a strong magnetic field to align the cobalt nuclei with their spins parallel – Wu found that far more electrons were emitted in a downward direction than upward.
In the figure, (a) shows how a mirror image of this experiment should also produce more electrons going down than up. But when the experiment was repeated, with the direction of the magnetic field reversed to change the direction of the spin as it would be in the mirror image, Wu and colleagues found that more electrons were produced going upwards (b). The fact that the real-life experiment with reversed spin direction behaved differently from the mirror image proved that parity is violated in the weak interaction of beta decay.
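In symbols, a standard textbook way to write the measured effect (an illustrative form, not an equation taken from Wu’s paper) is through the angular distribution of the emitted electrons relative to the nuclear spin:

\[
W(\theta) \propto 1 + A\,\frac{v_e}{c}\cos\theta ,
\]

where \(\theta\) is the angle between the nuclear-spin direction and the electron momentum, \(v_e\) is the electron speed and \(A\) is an asymmetry coefficient. A parity transformation reverses the electron momentum but not the spin (an axial vector), so the \(\cos\theta\) term changes sign; any non-zero \(A\) therefore signals parity violation. Wu’s cobalt-60 data corresponded to a negative \(A\), i.e. electrons emitted preferentially opposite to the nuclear spin, as described above.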
Back then, the Nobel statutes stipulated that all details about who had been nominated for a Nobel prize – and why the winners were chosen by the Nobel committee – were to be kept secret forever. Later, in 1974, the rules were changed, allowing the archives to be opened 50 years after an award had been made. So why did the mystery not become clear in 2007, half a century after the 1957 prize?
The reason is that there is a secondary criterion for prizes awarded by the Royal Swedish Academy of Sciences – in physics and chemistry – which is that the archive must stay shut for as long as a laureate is still alive. Lee and Yang were in their early 30s when they were awarded the prize and both went on to live very long lives. Lee died on 24 August 2024 aged 97 and it was not until the death of Yang on 18 October 2025 at 103 that the chance to solve the mystery finally arose.
Chien-Shiung Wu: a brief history
Overlooked for a Nobel Chien-Shiung Wu in 1963 at Columbia University, by which time she had already received the first three of her 23 known nominations for a Nobel prize. (Courtesy: Smithsonian Institution)
Born on 31 May 1912 in Jiangsu province in eastern China, Chien-Shiung Wu graduated with a degree in physics from National Central University in Nanjing. After a few years of research in China, she moved to the US, gaining a PhD at the University of California at Berkeley in 1940. Three years later Wu took up a teaching job at Princeton University in New Jersey – a remarkable feat given that women were not then even allowed to study at Princeton.
During the Second World War, Wu joined the Manhattan atomic-bomb project, working on radiation detectors at Columbia University in New York. After the conflict was over, she started studying beta decay – one of the weak interactions associated with radioactive decay. Wu famously led a crucial experiment studying the beta decay of cobalt-60 nuclei, which confirmed a prediction made in October 1956 by her Columbia colleague Tsung-Dao Lee and Chen Ning Yang in Princeton that parity can be violated in the weak interaction.
Lee and Yang went on to win the 1957 Nobel Prize for Physics but the Nobel Committee was not aware that Lee had in fact consulted Wu in spring 1956 – several months before their paper came out – about potential experiments to prove their prediction. As she was to recall in 1973, studying the decay of cobalt-60 was “a golden opportunity” to test their ideas that she “could not let pass”.
The first woman in the Columbia physics department to get a tenured position and a professorship, Wu remained at Columbia for the rest of her career. Taking an active interest in physics well into retirement, she died on 16 February 1997 at the age of 84. Only now, with the publication of this Physics World article, has it become clear that despite receiving 23 nominations from 18 different physicists in 16 years between 1958 and 1974, she never won a Nobel prize.
Entering the archives
As two physicists based in Stockholm with a keen interest in the history of science, we had already examined the case of Lise Meitner, another female physicist who never won a Nobel prize – in her case for fission. We’d published our findings about Meitner in the December 2023 issue of Fysikaktuellt – the journal of the Swedish Physical Society. So after Yang died, we asked the Center for History of Science at the Royal Swedish Academy of Sciences if we could look at the 1957 archives.
A previous article in Physics World from 2012 by Magdolna Hargittai, who had spoken to Anders Bárány, former secretary of the Nobel Committee for Physics, seemed to suggest that Wu wasn’t awarded the 1957 prize because her Physical Review paper had been published in February of that year. This was after the January cut-off and therefore too late to be considered on that occasion (although the trio could have been awarded a joint prize in a subsequent year).
History in the making Left image: Mats Larsson (centre) and Ramon Wyss (left) at the Center for History of Science at the Royal Swedish Academy of Sciences in Stockholm, Sweden, on 13 November 2025, where they became the first people to view the archive containing information about the nominations for the 1957 Nobel Prize for Physics. They are shown here in the company of centre director Karl Grandin (right). Right image: Larsson and Wyss with their hands on the archives, on which this Physics World article is based. (Courtesy: Anne Miche de Malleray)
After receiving permission to access the archives, we went to the centre on Thursday 13 November 2025, where – with great excitement – we finally got our hands on the thick, black, hard-bound book containing information about the 1957 Nobel prizes in physics and chemistry. About 500 pages long, the book revealed that there were a total of 58 nominations for the 1957 Nobel Prize for Physics – but none at all for Wu that year. As we shall go on to explain, she did, however, receive a total of 23 nominations over the next 16 years.
Lee and Yang, we discovered, received just a single nomination for the 1957 prize, submitted by John Simpson, an experimental physicist at the University of Chicago in the US. His nomination reached the Nobel Committee on 29 January 1957, just before the deadline of 31 January. Simpson clearly had a lot of clout with the committee, which commissioned two reports from its members – both Swedish physicists – based on his recommendation. One was by Oskar Klein on the theoretical aspects of the prize and the other by Erik Hulthén on the experimental side of things.
Report revelations
Klein devotes about half of his four-page report to the Hungarian-born theorist Eugene Wigner, who – we discovered – received seven separate nominations for the 1957 prize. In his opening remarks, Klein notes that Wigner’s work on symmetry principles in physics, first published in 1927, had gained renewed relevance in light of recent experiments by Wu, Leon Lederman and others. According to Klein, these experiments cast a new light on the fundamental symmetry principles of physics.
Klein then discusses three important papers by Wigner and concludes that he, more than any other physicist, established the conceptual background on symmetry principles that enabled Lee and Yang to clarify the possibilities of experimentally testing parity non-conservation. Klein also analyses Lee and Yang’s award-winning Physical Review paper in some detail and briefly mentions subsequent articles of theirs as well as papers by two future Nobel laureates – Lev Landau and Abdus Salam.
Klein does not end his report with an explicit recommendation, but identifies Lee, Yang and Wigner as having made the most important contributions. It is noteworthy that every physicist mentioned in Klein’s report – apart from Wu – eventually went on to receive a Nobel Prize for Physics. Wigner did not have to wait long, winning the 1963 prize together with Maria Goeppert Mayer and Hans Jensen, who had also been nominated in 1957.
As for Hulthén’s experimental report, it acknowledges that Wu’s experiment started after early discussions with Lee and Yang. In fact, Lee had consulted Wu at Columbia on the subject of parity conservation in beta-decay before Lee and Yang’s famous paper was published. According to Wu, she mentioned to Lee that the best way would be to use a polarized cobalt-60 source for testing the assumption of parity violation in beta-decay.
Many physicists were aware of Lee and Yang’s paper, which was certainly seen as highly speculative, whereas Wu realized the opportunity to test the far-reaching consequences of parity violation. Since she was not a specialist in low-temperature nuclear alignment, she contacted Ernest Ambler at the National Bureau of Standards in Washington DC, who was a co-author on her Physical Review paper of 15 February 1957.
Hulthén describes in detail the severe technical challenges that Wu’s team had to overcome to carry out the experiment. These included achieving an exceptionally low temperature of 0.001 K, placing the detector inside the cryostat, and mitigating perturbations from the crystalline field that weakened the magnetic field’s effectiveness.
Despite these difficulties, the experimentalists managed to obtain a first indication of parity violation, which they presented on 4 January 1957 at a regular lunch that took place at Columbia every Friday. The news of these preliminary results spread like wildfire throughout the physics community, prompting other groups to immediately follow suit.
Hulthén mentions, for example, a measurement of the magnetic moment of the mu (μ) meson (now known as the muon) that Richard Garwin, Leon Lederman and Marcel Weinrich performed at Columbia’s cyclotron almost as soon as Lederman had obtained information of Wu’s work. He also cites work at the University of Leiden in the Netherlands led by C J Gorter that apparently had started to look into parity violation independently of Wu’s experiment (Physica 23 259).
Wu’s nominations
It is clear from Hulthén’s report that the Nobel Physics Committee was well informed about the experimental work carried out in the wake of Lee and Yang’s paper of October 1956, in particular the groundbreaking results of Wu. However, the committee’s subsequent report of 20 September 1957 (see box below) does not make clear why Wigner was not awarded a share of the 1957 prize, despite his seven nominations. Nor is there any suggestion of postponing the prize a year in order to include Wu. The report was discussed on 23 October 1957 by members of the “Physics Class” – a group of physicists in the academy who always consider the committee’s recommendations – who unanimously endorsed it.
The Nobel Committee report of 1957
(Courtesy: The Nobel Archive, The Royal Swedish Academy of Sciences, Stockholm)
This image is the final page of the report written on 20 September 1957 by the Nobel Committee for Physics about who should win the 1957 Nobel Prize for Physics. Published here for the first time since it was written, its English translation is as follows. “Although much experimental and theoretical work remains to be done to fully clarify the necessary revision of the parity principle, it can already be said that a discovery with extremely significant consequences has emerged as a result of the above-mentioned study by Lee and Yang. In light of the above, the committee proposes that the 1957 Nobel Prize in Physics be awarded jointly to: Dr T D Lee, New York, and Dr C N Yang, Princeton, for their profound investigation of the so-called parity laws, which has led to the discovery of new properties of elementary particles.” The report was signed by Manne Siegbahn (chair), Gudmund Borelius, Erik Hulthén, Oskar Klein, Erik Rudberg and Ivar Waller.
Most noteworthy with regard to this meeting of the Physics Class was that Meitner – who had also been overlooked for the Nobel prize – took part in the discussions. Meitner, who was Austrian by birth, had been elected a foreign member of the Royal Swedish Academy of Sciences in 1945, becoming a “Swedish member” after taking Swedish citizenship in 1951. In the wake of these discussions, the academy decided on 31 October 1957 to award the 1957 Nobel Prize for Physics to Lee and Yang. We do not know, though, if Meitner argued for Wu to be awarded a share of that year’s prize.
Although Wu did not receive any nominations in 1957, she was nominated the following year by the 1955 Nobel laureates in physics, Willis Lamb and Polykarp Kusch. In fact, after Lee and Yang won the prize, nominations to give a Nobel prize to Wu reached the committee on 10 separate years out of the next 16 (see graphic below). She was nominated by a total of 18 leading physicists, including various Nobel-prize winners and Lee himself. In fact, Lee nominated Wu for a Nobel prize on three separate occasions – in 1964, 1971 and 1972.
However, it appears she was never nominated by Yang (at the time of writing, we only have archive information up to 1974). Lee’s support and Yang’s silence could perhaps be traced to the early discussions that Lee had with Wu, which influenced the famous Lee and Yang paper and of which Yang may not have been aware. It is also not clear why Lee and Yang never acknowledged their discussion with Wu about the cobalt-60 experiment that was proposed in their paper; further research may shed more light on this topic.
Following Wu’s nomination in 1958, the Nobel Committee simply re-examined the investigations already carried out by Klein and Hulthén. The same procedure was repeated in subsequent years, but no new investigations into Wu’s work were carried out until 1971 when she received six nominations – the highest number she got in any one year.
Nominations for Wu from 1958 to 1974
(Courtesy: IOP Publishing)
Our examination of the newly released Nobel archive from 1957 indicates that although Chien-Shiung Wu was not nominated for that year’s prize, which was won by Chen Ning Yang and Tsung-Dao Lee, she did receive a total of 23 nominations over the next 16 years (1974 being the last open archive at the time of writing). Those 23 nominations were made by 18 different physicists, with Lee nominating Wu three times and Herwig Schopper, Emilio Segrè and Ryoya Utiyama each doing so twice. The peak year for nominations for her was 1971 when she received six nominations. The archives also show that in October 1957 Werner Heisenberg submitted a nomination for Lee (but not Yang); it was registered as a nomination for 1958. The nomination is very short and it is not clear why Heisenberg did not nominate Yang.
That year the committee decided to ask Bengt Nagel, a theorist at KTH Royal Institute of Technology, to investigate the theoretical importance of Wu’s experiments. The nominations she received for the Nobel prize concerned three experiments. In addition to her 1957 paper on parity violation there was a 1949 article she’d written with her Columbia colleague R D Albert verifying Enrico Fermi’s theory of beta decay (Phys. Rev. 75 315) and another she wrote in 1963 with Y K Lee and L W Mo on the conserved vector current, which is a fundamental hypothesis of the Standard Model of particle physics (Phys. Rev. Lett. 10 253).
After pointing out that four of the 1971 nominations came from Wu’s colleagues at Columbia, which to us may have hinted at a kind of lobbying campaign for her, Nagel stated that the three experiments had “without doubt been of great importance for our understanding of the weak interaction”. However, he added, “the experiments, at least the last two, have been conducted to certain aspects as commissioned or direct suggestions of theoreticians”.
In Nagel’s view, Wu’s work therefore differed significantly from, for example, James Cronin and Val Fitch’s famous discovery in 1964 of charge-parity (CP) violation in the decay of neutral K mesons. They had made their discovery under their own steam, whereas (Nagel suggested) Wu’s work had been carried out only after being suggested by theorists. “I feel somewhat hesitant whether their theoretical importance is a sufficient motivation to render Wu the Nobel prize,” Nagel concluded.
Missed opportunity
The Nobel archives are currently not open beyond 1974 so we don’t know if Wu received any further nominations over the next 23 years until her death in 1997. Of course, had Wu not carried out her experimental test of parity violation, it is perfectly possible that another physicist or group of physicists would have done something similar in due course.
Nevertheless, to us it was a missed opportunity not to include Wu as the third prize winner alongside Lee and Yang. Sure, she could not have won the prize in 1957 as she was not nominated for it and her key publication did not appear before the January deadline. But it would simply have been a case of waiting a year and giving Wu and her theoretical colleagues the prize jointly in 1958.
Another possible course of action would have been to single out the theoretical aspects of symmetry violation and award the prize to Lee, Wigner and Yang, as Klein had suggested in his report. Unfortunately, full details of the physics committee’s discussions are not contained in the archives, which means we don’t know if this was a genuine possibility being considered at the time.
But what is clear is that the Nobel committee knew full well the huge importance of Wu’s experimental confirmation of parity violation following the bold theoretical insights of Lee and Yang. Together, their work opened a new chapter in the world of physics. Without Wu’s interest in parity violation and her ingenious experimental knowledge, Lee and Yang would never have won the Nobel prize.
Measuring atmospheric plastics Abundance and composition of microplastics (MP) and nanoplastics (NP) in aerosols and estimated fluxes across atmospheric compartments in semiarid (Xi’an) and humid subtropical (Guangzhou) urban environments. (TSP: total suspended particles) (Courtesy: Institute of Earth Environment, CAS)
Plastic has become a global pollutant concern over the last couple of decades: it is widespread in society, not often disposed of effectively, and generates both microplastics (1 µm to 5 mm in size) and nanoplastics (smaller than 1 µm) that have infiltrated many ecosystems – including being found inside humans and animals.
Over time, bulk plastics break down into micro- and nanoplastics through fragmentation mechanisms that create much smaller particles with a range of shapes and sizes. Their small size has become a problem because these particles are increasingly finding their way into waterways, cities and other urban environments – polluting them in the process – and are now even being transported to remote polar and high-altitude regions.
This poses potential health risks around the world. While the behaviour of micro- and nanoplastics in the atmosphere is poorly understood, it’s thought that they are transported by transcontinental and transoceanic winds, drawing plastic into the global carbon cycle.
However, the lack of data on the emission, distribution and deposition of atmospheric micro- and nanoplastic particles makes it difficult to definitively say how they are transported around the world. It is also challenging to quantify their behaviour, because plastic particles can have a range of densities, sizes and shapes that undergo physical changes in clouds, all of which affect how they travel.
A global team of researchers has developed a new semi-automated microanalytical method that can quantify atmospheric plastic particles present in air dustfall, rain, snow and dust resuspension. The research was performed across two Chinese megacities, Guangzhou and Xi’an.
“As atmospheric scientists, we noticed that microplastics in the atmosphere have been the least reported among all environmental compartments in the Earth system due to limitations in detection methods, because atmospheric particles are smaller and more complex to analyse,” explains Yu Huang, from the Institute of Earth Environment of the Chinese Academy of Sciences (IEECAS) and one of the paper’s lead authors. “We therefore set out to develop a reliable detection technique to determine whether microplastics are present in the atmosphere, and if so, in what quantities.”
Quantitative detection
For this new approach, the researchers employed a computer-controlled scanning electron microscopy (CCSEM) system equipped with energy-dispersive X-ray spectroscopy to reduce human bias in the measurements (which is an issue in manual inspections). They located and measured individual micro- and nanoplastic particles – enabling their concentration and physicochemical characteristics to be determined – in aerosols, dry and wet depositions, and resuspended road dust.
“We believe the key contribution of this work lies in the development of a semi‑automated method that identifies the atmosphere as a significant reservoir of microplastics. By avoiding the human bias inherent in visual inspection, our approach provides robust quantitative data,” says Huang. “Importantly, we found that these microplastics often coexist with other atmospheric particles, such as mineral dust and soot – a mixing state that could enhance their potential impacts on climate and the environment.”
The method could detect and quantify plastic particles as small as 200 nm, revealing airborne concentrations of 1.8 × 10⁵ microplastics/m³ and 4.2 × 10⁴ nanoplastics/m³ in Guangzhou, and 1.4 × 10⁵ microplastics/m³ and 3.0 × 10⁴ nanoplastics/m³ in Xi’an. The corresponding microplastic and nanoplastic fluxes are two to six orders of magnitude higher than those reported previously via visual methods.
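Put in more everyday units (our own conversion, not a figure from the study), the Guangzhou concentrations amount to roughly 180 microplastic and 42 nanoplastic particles in every litre of air:

```python
# Simple unit conversion of the reported Guangzhou concentrations (ours, not the paper's):
# particles per cubic metre -> particles per litre (1 m^3 = 1000 litres).
microplastics_per_m3 = 1.8e5
nanoplastics_per_m3 = 4.2e4
litres_per_m3 = 1000

print(microplastics_per_m3 / litres_per_m3)   # ~180 microplastics per litre of air
print(nanoplastics_per_m3 / litres_per_m3)    # ~42 nanoplastics per litre of air
```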
The team also found that the deposition samples were more heterogeneously mixed with other particle types (such as dust and other pollution particles) than aerosols and resuspension samples, which showed that particles tend to aggregate in the atmosphere before being removed during atmospheric transport.
The study revealed transport insights that could be beneficial for investigating the climate, ecosystem and human health impacts of plastic particles at all levels. The researchers are now advancing their method in two key directions.
“First, we are refining sampling and CCSEM‑based analytical strategies to detect mixed states between microplastics and biological or water‑soluble components, which remain invisible with current techniques. Understanding these interactions is essential for accurately assessing microplastics’ climate and health effects,” Huang tells Physics World. “Second, we are integrating CCSEM with Raman analysis to not only quantify abundance but also identify polymer types. This dual approach will generate vital evidence to support environmental policy decisions.”
Last November I visited the CERN particle-physics lab near Geneva to attend the 4th International Symposium on the History of Particle Physics, which focused on advances in particle physics during the 1980s and 1990s. As usual, it was a refreshing, intellectually invigorating visit. I’m always inspired by the great diversity of scientists at CERN – complemented this time by historians, philosophers and other scholars of science.
As noted by historian John Krige in his opening keynote address, “CERN is a European laboratory with a global footprint. Yet for all its success it now faces a turning point.” During the period under examination at the symposium, CERN essentially achieved the “world laboratory” status that various leaders of particle physics had dreamt of for decades.
By building the Large Electron Positron (LEP) collider and then the Large Hadron Collider (LHC), the latter with contributions from Canada, China, India, Japan, Russia, the US and other non-European nations, CERN has attracted researchers from six continents. And as the Cold War ended in 1989–1991, two prescient CERN staff members developed the World Wide Web, helping knit this sprawling international scientific community together and enable extensive global collaboration.
The LHC was funded and built during a unique period of growing globalization and democratization that emerged in the wake of the Cold War’s end. After the US terminated the Superconducting Super Collider in 1993, CERN was the only game in town if one wanted to pursue particle physics at the multi-TeV energy frontier. And many particle physicists wanted to be involved in the search for the Higgs boson, which by the mid-1990s looked as if it should show up at accessible LHC energies.
Having discovered this long-sought particle at the LHC in 2012, CERN is now contemplating an ambitious construction project, the Future Circular Collider (FCC). Over three times larger than the LHC, it would study this all-important, mass-generating boson in greater detail using an electron–positron collider dubbed FCC-ee, estimated to cost $18bn and start operations by 2050.
Later in the century, the FCC-hh, a proton–proton collider, would go in the same tunnel to see what, if anything, may lie at much higher energies. That collider, the cost of which is currently educated guesswork, would not come online until the mid-2070s.
But the steadily worsening geopolitics of a fragmenting world order could make funding and building these colliders dicey affairs. After Russia’s expulsion from CERN, little in the way of its contributions can be expected. Chinese physicists had hoped to build an equivalent collider, but those plans seem to have been put on the backburner for now.
And the “America First” political stance of the current US administration is hardly conducive to the multibillion-dollar contribution likely required from what is today the world’s richest (albeit debt-laden) nation. The ongoing collapse of the rules-based world order was recently put into stark relief by the US invasion of Venezuela and abduction of its president Nicolás Maduro, followed by Donald Trump’s menacing rhetoric over Greenland.
While these shocking events have immediate significance for international relations, they also suggest how difficult it may become to fund gargantuan international scientific projects such as the FCC. Under such circumstances, it is very difficult to imagine non-European nations being able to contribute a hoped-for third of the FCC’s total costs.
But Europe’s rising populist right-wing parties are no great friends of physics either, nor of international scientific endeavours. And Europeans face the not-insignificant costs of military rearmament in the face of Russian aggression and likely US withdrawal from Europe.
So the other two thirds of the FCC’s many billions in costs cannot be taken for granted – especially not during the decades needed to construct its 91 km tunnel, 350 GeV electron–positron collider, the subsequent 100 TeV proton collider, and the massive detectors both machines require.
According to former CERN director-general Chris Llewellyn Smith in his symposium lecture, “The political history of the LHC”, just under 12% of the material project costs of the LHC eventually came from non-member nations. It therefore warps the imagination to believe that a third of the much greater costs of the FCC can come from non-member nations in the current “Wild West” geopolitical climate.
But particle physics desperately needs a Higgs factory. After the 1983 Z boson discovery at the CERN SPS Collider, it took just six years before we had not one but two Z factories – LEP and the Stanford Linear Collider – which proved very productive machines. It’s now been more than 13 years since the Higgs boson discovery. Must we wait another 20 years?
Other options
CERN therefore needs a more modest, realistic, productive new scientific facility – a “Plan B” – to cope with the geopolitical uncertainties of an imperfect, unpredictable world. And I was encouraged to learn that several possible ideas are under consideration, according to outgoing CERN director-general Fabiola Gianotti in her symposium lecture, “CERN today and tomorrow”.
Three of these ideas reflect the European Strategy for Particle Physics, which states that “an electron–positron Higgs factory is the highest-priority next CERN collider”. Two linear electron–positron colliders would require just 11–34 km of tunnelling and could begin construction in the mid-2030s, but would involve a fair amount of technical risk and cost roughly €10bn.
The least costly and risky option, dubbed LEP3, involves installing superconducting radio-frequency cavities in the existing LHC tunnel once the high-luminosity proton run ends. Essentially an upgrade of the 200 GeV LEP2, this approach is based on well-understood technologies and would cost less than €5bn but can reach at most 240 GeV. The linear colliders could attain over twice that energy, enabling research on Higgs-boson decays into top quarks and the triple-Higgs self-interaction.
Other proposed projects involving the LHC tunnel can produce large numbers of Higgs bosons with relatively minor backgrounds, but they can hardly be called “Higgs factories”. One of these, dubbed the LHeC, could only produce a few thousand Higgs bosons annually and would allow other important research on proton structure functions. Another idea is the proposed Gamma Factory, in which laser beams would be backscattered from LHC beams of partially stripped ions. If sufficient photon energies and intensity can be achieved, it will allow research on the γγ → H interaction. These alternatives would cost at most a few billion euros.
As Krige stressed in his keynote address, CERN was meant to be more than a scientific laboratory at which European physicists could compete with their US and Soviet counterparts. As many of its founders intended, he said, it was “a cultural weapon against all forms of bigoted nationalism and anti-science populism that defied Enlightenment values of critical reasoning”. The same logic holds true today.
In planning the next phase in CERN’s estimable history, it is crucial to preserve this cultural vitality, while of course providing unparalleled opportunities to do world-class science – lacking which, the best scientists will turn elsewhere.
I therefore urge CERN planners to be daring but cognizant of financial and political reality in the fracturing world order. Don’t for a nanosecond assume that the future will be a smooth extrapolation from the past. Be fairly certain that whatever new facility you decide to build, there is a solid financial pathway to achieving it in a reasonable time frame.
The future of CERN – and the bracing spirit of CERN – rests in your hands.
Most researchers know the disappointment of submitting an abstract to give a conference lecture, only to find that it has been accepted as a poster presentation instead. If this has been your experience, I’m here to tell you that you need to rethink the value of a good poster.
For years, I pestered my university to erect a notice board outside my office so that I could showcase my group’s recent research posters. Each time, for reasons of cost, my request was unsuccessful. At the same time, I would see similar boards placed outside the offices of more senior and better-funded researchers in my university. I voiced my frustrations to a mentor whose advice was, “It’s better to seek forgiveness than permission.” So, since I couldn’t afford to buy a notice board, I simply used drawing pins to mount some unauthorized posters on the wall beside my office door.
Some weeks later, I rounded the corner to my office corridor to find the head porter standing with a group of visitors gathered around my posters. He was telling them all about my research using solar energy to disinfect contaminated drinking water in disadvantaged communities in Sub-Saharan Africa. Unintentionally, my illegal posters had been subsumed into the head porter’s official tour that he frequently gave to visitors.
The group moved on but one man stayed behind, examining the poster very closely. I asked him if he had any questions. “No, thanks,” he said, “I’m not actually with the tour, I’m just waiting to visit someone further up the corridor and they’re not ready for me yet. Your research in Africa is very interesting.” We chatted for a while about the challenges of working in resource-poor environments. He seemed quite knowledgeable on the topic but soon left for his meeting.
A few days later while clearing my e-mail junk folder I spotted an e-mail from an Asian “philanthropist” offering me €20,000 towards my research. To collect the money, all I had to do was send him my bank account details. I paused for a moment to admire the novelty and elegance of this new e-mail scam before deleting it. Two days later I received a second e-mail from the same source asking why I hadn’t responded to their first generous offer. While admiring their persistence, I resisted the urge to respond by asking them to stop wasting their time and mine, and instead just deleted it.
So, you can imagine my surprise when the following Monday morning I received a phone call from the university deputy vice-chancellor inviting me to pop up for a quick chat. On arrival, he wasted no time before asking why I had been so foolish as to ignore repeated offers of research funding from one of the college’s most generous benefactors. And that is how I learned that those e-mails from the Asian philanthropist weren’t bogus.
The gentleman that I’d chatted with outside my office was indeed a wealthy philanthropic funder who had been visiting our university. Having retrieved the e-mails from my deleted items folder, I re-engaged with him and subsequently received €20,000 to install 10,000-litre harvested-rainwater tanks in as many primary schools in rural Uganda as the money would stretch to.
Secret to success Kevin McGuigan discovered that one research poster can lead to generous funding contributions. (Courtesy: Antonio Jaen Osuna)
About six months later, I presented the benefactor with a full report accounting for the funding expenditure, replete with photos of harvested-rainwater tanks installed in 10 primary schools, with their very happy new owners standing in the foreground. Since you miss 100% of the chances you don’t take, I decided I should push my luck and added a “wish list” of other research items that the philanthropist might consider funding.
The list started small and grew steadily ambitious. I asked for funds for more tanks in other schools, a travel bursary, PhD registration fees, student stipends and so on. All told, the list came to a total of several hundred thousand euros, but I emphasized that they had been very generous so I would be delighted to receive funding for any one of the listed items and, even if nothing was funded, I was still very grateful for everything he had already done. The following week my generous patron deposited a six-figure-euro sum into my university research account with instructions that it be used as I saw fit for my research purposes, “under the supervision of your university finance office”.
In my career I have co-ordinated several large-budget, multi-partner, interdisciplinary, international research projects. In each case, that money was hard-earned, needing at least six months and many sleepless nights to prepare the grant submission. It still amuses me that I garnered such a large sum on the back of one research poster, one 10-minute chat and fewer than six e-mails.
So, if you have learned nothing else from this story, please don’t underestimate the power of a strategically placed and impactful poster describing your research. You never know with whom it may resonate and down which road it might lead you.
Many biological networks – including blood vessels and plant roots – are not organized to minimize total length, as long assumed. Instead, their geometry follows a principle of surface minimization – a rule that is also prevalent in string theory. That is the conclusion of physicists in the US, who have created a unifying framework that explains structural features long seen in real networks but poorly captured by traditional mathematical models.
Biological transport and communication networks have fascinated scientists for decades. Neurons branch to form synapses, blood vessels split to supply tissues, and plant roots spread through soil. Since the mid-20th century, many researchers believed that evolution favours networks that minimize total length or volume.
“There is a longstanding hypothesis, going back to Cecil Murray from the 1940s, that many biological networks are optimized for their length and volume,” Albert-László Barabási of Northeastern University explains. “That is, biological networks, like the brain and the vascular systems, are built to achieve their goals with the minimal material needs.” Until recently, however, it had been difficult to characterize the complicated nature of biological networks.
Now, advances in imaging have given Barabási and colleagues a detailed 3D picture of real physical networks, from individual neurons to entire vascular systems. With these new data in hand, the researchers found that previous theories are unable to describe real networks in quantitative terms.
From graphs to surfaces
To remedy this, the team defined the problem in terms of physical networks, systems whose nodes and links have finite thickness and occupy space. Rather than treating them as abstract graphs made of idealized edges, the team models them as geometrical objects embedded in 3D space.
To do this, the researchers turned to an unexpected mathematical tool. “Our work relies on the framework of covariant closed string field theory, developed by Barton Zwiebach and others in the 1980s,” says team member Xiangyi Meng at Rensselaer Polytechnic Institute. This framework provides a correspondence between network-like graphs and smooth surfaces.
Unlike string theory, their approach is entirely classical. “These surfaces, obtained in the absence of quantum fluctuations, are precisely the minimal surfaces we seek,” Meng says. No quantum mechanics, supersymmetry, or exotic string-theory ingredients are required. “Those aspects were introduced mainly to make string theory quantum and thus do not apply to our current context.”
Using this framework, the team analysed a wide range of biological systems. “We studied human and fruit fly neurons, blood vessels, trees, corals, and plants like Arabidopsis,” says Meng. Across all these cases, a consistent pattern emerged: the geometry of the networks is better predicted by minimizing surface area rather than total length.
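As a rough caricature of how the two principles differ (our simplification, not the string-field-theoretic formalism used in the paper), treat each branch \(i\) as a tube of length \(\ell_i\) and radius \(r_i\):

\[
\text{length rule: } \min \sum_i \ell_i
\qquad\qquad
\text{surface rule: } \min \sum_i 2\pi r_i \ell_i .
\]

The two objectives coincide only when every branch has the same radius; once branch thickness varies, the optimal layout changes, and the full minimal-surface treatment goes further still, permitting the richer junction geometries described below.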
Complex junctions
One of the most striking outcomes of the surface-minimization framework is its ability to explain structural features that previous models cannot. Traditional length-based theories typically predict simple Y-shaped bifurcations, where one branch splits into two. Real networks, however, often display far richer geometries.
“While traditional models are limited to simple bifurcations, our framework predicts the existence of higher-order junctions and ‘orthogonal sprouts’,” explains Meng.
These include three- or four-way splits and perpendicular, dead-end offshoots. Under a surface-based principle, such features arise naturally and allow neurons to form synapses using less membrane material overall and enable plant roots to probe their environment more effectively.
Ginestra Bianconi of the UK’s Queen Mary University of London says that the key result of the new study is the demonstration that “physical networks such as the brain or vascular networks are not wired according to a principle of minimization of edge length, but rather that their geometry follows a principle of surface minimization.”
Bianconi, who was not involved in the study, also highlights the interdisciplinary leap of invoking ideas from string theory: “This is a beautiful demonstration of how basic research works.”
Interdisciplinary leap
The team emphasizes that their work is not immediately technological. “This is fundamental research, but we know that such research may one day lead to practical applications,” Barabási says. In the near term, he expects the strongest impact in neuroscience and vascular biology, where understanding wiring and morphology is essential.
Bianconi agrees that important questions remain. “The next step would be to understand whether this new principle can help us understand brain function or have an impact on our understanding of brain diseases,” she says. Surface optimization could, for example, offer new ways to interpret structural changes observed in neurological disorders.
Looking further ahead, the framework may influence the design of engineered systems. “Physical networks are also relevant for new materials systems, like metamaterials, which are also aiming to achieve functions at minimal cost,” Barabási notes. Meng points to network materials as a particularly promising area, where surface-based optimization could inspire new architectures with tailored mechanical or transport properties.
According to today’s leading experts in artificial intelligence (AI), this new technology is a danger to civilization. A statement on AI risk published in 2023 by the US non-profit Center for AI Safety warned that mitigating the risk of extinction from AI must now be “a global priority”, comparing it to other societal-scale dangers such as pandemics and nuclear war. It was signed by more than 600 people, including the winner of the 2024 Nobel Prize for Physics and so-called “Godfather of AI” Geoffrey Hinton. In a speech at the Nobel banquet after being awarded the prize, Hinton noted that AI may be used “to create terrible new viruses and horrendous lethal weapons that decide by themselves who to kill or maim”.
Despite signing the letter, Sam Altman of OpenAI, the firm behind ChatGPT, has stated that the company’s explicit ambition is to create artificial general intelligence (AGI) within the next few years, to “win the AI-race”. AGI is predicted to surpass human cognitive capabilities for almost all tasks, but the real danger is if or when AGI is used to generate more powerful versions of itself. Sometimes called “superintelligence”, this would be impossible to control. Companies do not want any regulation of AI and their business model is for AGI to replace most employees at all levels. This is how firms are expected to benefit from AI, since wages are most companies’ biggest expense.
AI, to me, is not about saving the world, but about a handful of people wanting to make enormous amounts of money from it. No-one knows what internal mechanism makes even today’s AI work – just as one cannot find out what you think from how the neurons in your brain are firing. If we don’t even understand today’s AI models, how are we going to understand – and control – the more powerful models that already exist or are planned in the near future?
AI has some practical benefits but too often is put to mostly meaningless, sometimes downright harmful, uses such as cheating your way through school or creating disinformation and fake videos online. What’s more, an online search with the help of AI requires at least 10 times as much energy as a search without AI. It already uses 5% of all electricity in the US and by 2028 this figure is expected to be 15%, which will be over a quarter of all US households’ electricity consumption. AI data servers are more than 50% as carbon intensive as the rest of the US’s electricity supply.
Those energy needs are why some tech companies are building AI data centres – often under confidential, opaque agreements – very quickly for fear of losing market share. Indeed, the vast majority of those centres are powered by fossil-fuel energy sources – completely contrary to the Paris Agreement to limit global warming. We must wisely allocate Earth’s strictly limited resources, with what is wasted on AI instead going towards vital things.
To solve the climate crisis, there is definitely no need for AI. All the solutions have already been known for decades: phasing out fossil fuels, reversing deforestation, reducing energy and resource consumption, regulating global trade, reforming the economic system away from its dependence on growth. The problem is that the solutions are not implemented because of short-term selfish profiteering, which AI only exacerbates.
Playing with fire
AI, like all other technologies, is not a magic wand and, as Hinton says, potentially has many negative consequences. It is not, as the enthusiasts seem to think, a magical free resource that provides output without input (and waste). I believe we must rethink our naïve, uncritical, overly fast, total embrace of AI. Universities are known for wise reflection, but worryingly they seem to be hurrying to jump on the AI bandwagon. The problem is that the bandwagon may be going in the wrong direction or crash and burn entirely.
Why then should universities and organizations send their precious money to greedy, reckless and almost totalitarian tech billionaires? If we are going to use AI, shouldn’t we create our own AI tools that we can hopefully control better? Today, ever more money and power are being transferred to a few AI companies that transcend national borders, which is also a threat to democracy. Democracy only works if citizens are well educated, committed and knowledgeable, and have influence.
AI is like using a hammer to crack a nut. Sometimes a hammer may be needed, but most of the time it is not and is instead downright harmful. Happy-go-lucky people at universities, companies and throughout society are playing with fire without knowing the true consequences now, let alone in 10 years’ time. Our mapped-out path towards AGI is like a zebra on the savannah creating an artificial lion that begins to self-replicate, becoming bigger, stronger, more dangerous and more unpredictable with each generation.
Wise reflection today on our relationship with AI is more important than ever.
Encrypted qubits can be cloned and stored in multiple locations without violating the no-cloning theorem of quantum mechanics, researchers in Canada have shown. Their work could enable quantum-secure cloud storage, in which data are stored on multiple servers for redundancy without compromising security. The research also has implications for the foundations of quantum mechanics.
Heisenberg’s uncertainty principle – which states that it is impossible to measure conjugate variables of a quantum object with less than a combined minimum uncertainty – is one of the central tenets of quantum mechanics. The no-cloning theorem – that it is impossible to create identical clones of unknown quantum states – flows directly from this. Achim Kempf of the University of Waterloo explains, “If you had [clones] you could take half your copies and perform one type of measurement, and the other half of your copies and perform an incompatible measurement, and then you could beat the uncertainty principle.”
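For the canonical conjugate pair of position and momentum, that combined minimum uncertainty is the standard textbook bound – quoted here for reference, not a formula from the new paper:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

Perfect clones of an unknown state would let an observer measure position on one set of copies and momentum on another, pushing the product below this limit.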
No-cloning poses a challenge to those trying to create a quantum internet. On today’s internet, storing information on remote servers is common, and multiple copies of this information are usually kept in different locations to preserve data in case of disruption. Users of a quantum cloud server would presumably want the same degree of data security, but the no-cloning theorem would apparently forbid this.
Signal and noise
In the new work, Kempf and his colleague Koji Yamaguchi, now at Japan’s Kyushu University, show that this is not the case. Their encryption protocol begins with the generation of a set of pairs of entangled qubits. When a qubit, called A, is encrypted, it interacts with one qubit (called a signal qubit) from each pair in turn. In the process of interaction, the signal qubits record information about the state of A, which has been altered by previous interactions. As each signal qubit is entangled with a noise qubit, the state of the noise qubits is also changed.
Another central tenet of quantum mechanics, however, is that quantum entanglement does not allow for information exchange. “The noise qubits don’t know anything about the state of A either classically or quantum mechanically,” says Kempf. “The noise qubits’ role is to serve as a record of noise…We use the noise that is in the signal qubit to encrypt the clone of A. You drown the information in noise, but the noise qubit has a record of exactly what noise has been added because [the signal qubits and noise qubits] are maximally entangled.”
A user with all of the noise qubits therefore knows nothing about the signal, but knows all of the noise that was added to it. Possession of just one of the signal qubits then allows them to recover the unencrypted qubit. This does not violate the uncertainty principle, however, because decrypting one copy of A involves making a measurement of the noise qubits: “At the end of [the measurement], the noise qubits are no longer what they were before, and they can no longer be used for the decryption of another encrypted clone,” explains Kempf.
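The logic of drowning the signal in noise that only the key holder can undo can be illustrated with a toy quantum one-time pad, sketched below. This is a classical numpy simulation for illustration only, not Kempf and Yamaguchi’s protocol: a random Pauli operation stands in for the added noise, a stored key index stands in for the noise qubits, and the single-use rule is imposed by hand, whereas in the real scheme it follows from measurement disturbing the noise qubits.

```python
import numpy as np

# Pauli operators used as the "noise" operations
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Z, X @ Z]

rng = np.random.default_rng(seed=1)

def encrypt(psi):
    """Hide |psi> under a uniformly random Pauli; return ciphertext and key index."""
    key = int(rng.integers(len(PAULIS)))
    return PAULIS[key] @ psi, key

def decrypt(cipher, key_store):
    """Undo the random Pauli using the stored key, then destroy the key record."""
    if key_store["key"] is None:
        raise RuntimeError("key already consumed (mimics measuring the noise qubits)")
    P = PAULIS[key_store["key"]]
    key_store["key"] = None  # single use, like the noise qubits after measurement
    return np.linalg.inv(P) @ cipher

# An "unknown" qubit state a|0> + b|1>
psi = np.array([0.6, 0.8], dtype=complex)
cipher, key = encrypt(psi)
key_store = {"key": key}

recovered = decrypt(cipher, key_store)
print(np.allclose(recovered, psi))  # True: the key holder recovers the state exactly
# Calling decrypt again raises an error, because the key record no longer exists.
```

Without the key, the ciphertext averaged over the four Paulis is maximally mixed, so it carries no usable information – the classical shadow of the idea that copies of the encrypted qubit reveal nothing on their own.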
Cloning clones
Kempf says that, working with IBM, the researchers have successfully demonstrated hundreds of steps of iterative quantum cloning (quantum cloning of quantum clones) on a Heron 2 processor, and that they could even clone entangled qubits and recover the entanglement after decryption. “We’ll put that on the arXiv this month,” he says.
The research is described in Physical Review Letters. Barry Sanders at Canada’s University of Calgary is impressed by both the elegance and the generality of the result. He notes it could have significance for topics as distant as information loss from black holes: “It’s not a flash in the pan,” he says. “If I’m doing something that is related to no-cloning, I would look back and say ‘Gee, how do I interpret what I’m doing in this context?’ It’s a paper I won’t forget.”
Seth Lloyd of MIT agrees: “It turns out that there’s still low-hanging fruit out there in the theory of quantum information, which hasn’t been around long,” he says. “It turns out nobody ever thought to look at this before: Achim is a very imaginative guy and it’s no surprise that he did.” Both Lloyd and Sanders agree that quantum cloud storage remains hypothetical, but Lloyd says “I think it’s a very cool and unexpected result and, while it’s unclear what the implications are towards practical uses, I suspect that people will find some very nice applications in the near future.”
Heavy-duty vehicles (HDVs) powered by hydrogen-based proton-exchange membrane (PEM) fuel cells offer a cleaner alternative to diesel-powered internal combustion engines for decarbonizing long-haul transportation sectors. The development path of sub-components for HDV fuel-cell applications is guided by the total cost of ownership (TCO) analysis of the truck.
TCO analysis suggests that, because trucks typically operate over very high mileages (around a million miles), the cost of the hydrogen fuel consumed over the lifetime of the HDV dominates over the fuel-cell stack capital expense (CapEx). Commercial HDV applications consume more hydrogen and demand higher durability, meaning that TCO is largely governed by fuel-cell efficiency and catalyst durability.
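A back-of-the-envelope calculation makes the point. All of the numbers in the sketch below are hypothetical placeholders, not values from this article; they only illustrate why lifetime fuel spend tends to dwarf stack CapEx for a truck covering around a million miles.

```python
# Illustrative TCO comparison with hypothetical placeholder figures.
lifetime_miles = 1_000_000          # ~million-mile service life mentioned above
fuel_economy_mi_per_kg = 8.0        # hypothetical miles per kg of H2 (set by stack efficiency)
h2_price_per_kg = 5.0               # hypothetical delivered hydrogen price, $/kg
stack_capex = 50_000.0              # hypothetical fuel-cell stack capital cost, $

lifetime_fuel_cost = lifetime_miles / fuel_economy_mi_per_kg * h2_price_per_kg

print(f"Fuel cost over lifetime: ${lifetime_fuel_cost:,.0f}")      # $625,000 with these inputs
print(f"Stack CapEx:             ${stack_capex:,.0f}")
print(f"Fuel cost / CapEx:       {lifetime_fuel_cost / stack_capex:.1f}x")

# Raising fuel economy (efficiency) and keeping it high over the stack's life
# (durability) shifts TCO far more than trimming CapEx alone.
```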
This article is written to bridge the gap between the industrial requirements and academic activity for advanced cathode catalysts with an emphasis on durability. From a materials perspective, the underlying nature of the carbon support, Pt-alloy crystal structure, stability of the alloying element, cathode ionomer volume fraction, and catalyst–ionomer interface play a critical role in improving performance and durability.
We provide our perspective on four major approaches currently being pursued – namely mesoporous carbon supports, ordered PtCo intermetallic alloys, thrifting of the ionomer volume fraction, and shell-protection strategies. While each approach has its merits and demerits, we highlight their key developmental needs for the future.
Nagappan Ramaswamy
Nagappan Ramaswamy joined the Department of Chemical Engineering at IIT Bombay as a faculty member in January 2025. He earned his PhD in 2011 from Northeastern University in Boston, specialising in fuel-cell electrocatalysis.
He then spent 13 years working in industrial R&D – two years at Nissan North America in Michigan, US, focusing on lithium-ion batteries, followed by 11 years at General Motors, also in Michigan, focusing on low-temperature fuel cells and electrolyser technologies. While at GM, he led two multi-million-dollar research projects funded by the US Department of Energy on the development of proton-exchange membrane fuel cells for automotive applications.
At IIT Bombay, his primary research interests include low-temperature electrochemical energy-conversion and storage devices such as fuel cells, electrolysers and redox-flow batteries involving materials development, stack design and diagnostics.
India has been involved in nuclear energy and power for decades, but now the country is turning to small modular nuclear reactors (SMRs) as part of a new, long-term push towards nuclear and renewable energy. In December 2025 the country’s parliament passed a bill that allows private companies for the first time to participate in India’s nuclear programme, which could see them involved in generating power, operating plants and making equipment.
Some commentators are unconvinced that the move will be enough to help meet India’s climate pledge of achieving 500 GW of non-fossil-fuel-based generating capacity by 2030. Nevertheless, India has now joined other nations, such as Russia and China, in taking an interest in SMRs. They could help stem the overall decline in nuclear power, which now accounts for just 9% of electricity generated around the world – down from 17.5% in 1996.
Last year India’s finance minister Nirmala Sitharaman announced a nuclear energy mission, funded with 200 billion Indian rupees ($2.2bn), to develop at least five indigenously designed and operational SMRs by 2033. Unlike those of huge, conventional nuclear plants, such as pressurized heavy-water reactors (PHWRs), most or all components of an SMR are manufactured in factories before being assembled at the reactor site.
SMRs typically generate less than 300 MW of electrical power, but – being modular – additional capacity can be brought online quickly and easily given their lower capital costs, shorter construction times, ability to work with lower-capacity grids and lower carbon emissions. Despite their promise, there are only two fully operating SMRs in the world – both in Russia – although two further high-temperature gas-cooled SMRs are currently being built in China. In June 2025 Rolls-Royce SMR was selected as the preferred bidder by Great British Nuclear to build the UK’s first fleet of SMRs, with plans to provide 470 MW of low-carbon electricity.
Cost–benefit analysis
An official at the Department of Atomic Energy told Physics World that part of that mix of five new SMRs in India could be the 200 MW Bharat small modular reactor, which is based on pressurized water reactor technology and uses slightly enriched uranium as fuel. Other options include 55 MW small modular reactors, and the Indian government also plans to partner with the private sector to deploy 220 MW Bharat small reactors.
Despite such moves, some are unconvinced that small nuclear reactors could help India scale its nuclear ambitions. “SMRs are still to demonstrate that they can supply electricity at scale,” says Karthik Ganesan, a fellow and director of partnerships at the Council on Energy, Environment and Water (CEEW), a non-profit policy research think-tank based in New Delhi. “SMRs are a great option for captive consumption, where large investment that will take time to start generating is at a premium.”
Ganesan, however, says it is too early to comment on the commercial viability of SMRs as cost reductions from SMRs depend on how much of the technology is produced in a factory and in what quantities. “We are yet to get to that point and any test reactors deployed would certainly not be the ones to benchmark their long-term competitiveness,” he says. “[But] even at a higher tariff, SMRs will still have a use case for industrial consumers who want certainty in long-term tariffs and reliable continuous supply in a world where carbon dioxide emissions will be much smaller than what we see from the power sector today.”
M V Ramana from the University of British Columbia, Vancouver, who works in international security and energy supply, is concerned over the cost efficiency of SMRs compared to their traditional counterparts. “Larger reactors are cheaper on a per-megawatt basis because their material and work requirements do not scale linearly with power capacity,” says Ramana. This, according to Ramana, means that the electricity SMRs produce will be more expensive than nuclear energy from large reactors, which are already far more expensive than renewables such as solar and wind energy.
Clean or unclean?
Even if SMRs take over from PHWRs, there is still the question of what to do with their nuclear waste. As Ramana points out, all activities linked to the nuclear fuel chain – from mining uranium to dealing with the radioactive wastes produced – have significant health and environmental impacts. “The nuclear fuel chain is polluting, albeit in a different way from that of fossil fuels,” he says, adding that those pollutants remain hazardous for hundreds of thousands of years. “There is no demonstrated solution to managing these radioactive wastes – nor can there be, given the challenge of trying to ensure that these materials do not come into contact with living beings,” says Ramana.
Ganesan, however, thinks that nuclear energy is still clean as it produces electricity with a much lower environmental footprint, especially when it comes to so-called “criteria pollutants”: ozone; particulate matter; carbon monoxide; lead; sulphur dioxide; and nitrogen dioxide. While nuclear waste still needs to be managed, Ganesan says the associated costs are already included in the price of setting up a reactor. “In due course, with technological development, the burn up will [be] significantly higher and waste generated a lot lesser.”
The 2026 SPIE Photonics West meeting takes place in San Francisco, California, from 17 to 22 January. The premier event for photonics research and technology, Photonics West incorporates more than 100 technical conferences covering topics including lasers, biomedical optics, optoelectronics and quantum technologies.
As well as the conferences, Photonics West offers 60 technical courses and a new Career Hub with a co-located job fair. There are also five world-class exhibitions featuring over 1500 companies and incorporating industry-focused presentations, product launches and live demonstrations. The first of these is the BiOS Expo, which begins on 17 January and examines the latest breakthroughs in biomedical optics and biophotonics technologies.
Then, starting on 20 January, the main Photonics West Exhibition will host more than 1200 companies and showcase the latest innovative optics and photonics devices, components, systems and services. Alongside it, the Quantum West Expo features the best in quantum-enabling technology advances, the AR | VR | MR Expo brings together leading companies in XR hardware and systems, and – new for 2026 – the Vision Tech Expo highlights cutting-edge vision, sensing and imaging technologies.
Here are some of the product innovations on show at this year’s event.
Enabling high-performance photonics assembly with SmarAct
As photonics applications increasingly require systems with high complexity and integration density, manufacturers face a common challenge: how to assemble, align and test optical components with nanometre precision – quickly, reliably and at scale. At Photonics West, SmarAct presents a comprehensive technology portfolio addressing exactly these demands, spanning optical assembly, fast photonics alignment, precision motion and advanced metrology.
Rapid and reliable: SmarAct’s technology portfolio enables assembly, alignment and testing of optical components with nanometre precision. (Courtesy: SmarAct)
A central highlight is SmarAct’s Optical Assembly Solution, presented together with a preview of a powerful new software platform planned for release in late-Q1 2026. This software tool is designed to provide exceptional flexibility for implementing automation routines and process workflows into user-specific control applications, laying the foundation for scalable and future-proof photonics solutions.
For high-throughput applications, SmarAct showcases its Fast Photonics Alignment capabilities. By combining high-dynamic motion systems with real-time feedback and controller-based algorithms, SmarAct enables rapid scanning and active alignment of photonic integrated circuits (PICs) and optical components such as fibres, fibre array units, lenses, beam splitters and more. These solutions significantly reduce alignment time while maintaining sub-micrometre accuracy, making them ideal for demanding photonics packaging and assembly tasks.
Both the Optical Assembly Solution and Fast Photonics Alignment are powered by SmarAct’s electromagnetic (EM) positioning axes, which form the dynamic backbone of these systems. The direct-drive EM axes combine high speed, high force and exceptional long-term durability, enabling fast scanning, smooth motion and stable positioning even under demanding duty cycles. Their vibration-free operation and robustness make them ideally suited for high-throughput optical assembly and alignment tasks in both laboratory and industrial environments.
Precision feedback is provided by SmarAct’s advanced METIRIO optical encoder family, designed to deliver high-resolution position feedback for demanding photonics and semiconductor applications. The METIRIO stands out by offering sub-nanometre position feedback in an exceptionally compact and easy-to-integrate form factor. Compatible with linear, rotary and goniometric motion systems – and available in vacuum-compatible designs – the METIRIO is ideally suited for space-constrained photonics setups, semiconductor manufacturing, nanopositioning and scientific instrumentation.
For applications requiring ultimate measurement performance, SmarAct presents the PICOSCALE Interferometer and Vibrometer. These systems provide picometre-level displacement and vibration measurements directly at the point of interest, enabling precise motion tracking, dynamic alignment, and detailed characterization of optical and optoelectronic components. When combined with SmarAct’s precision stages, they form a powerful closed-loop solution for high-yield photonics testing and inspection.
Together, SmarAct’s motion, metrology and automation solutions form a unified platform for next-generation photonics assembly and alignment.
Visit SmarAct at booth #3438 at Photonics West and booth #8438 at BiOS to discover how these technologies can accelerate your photonics workflows.
Avantes previews AvaSoftX software platform and new broadband light source
Photonics West 2026 will see Avantes present the first live demonstration of its completely redesigned software platform, AvaSoftX, together with a sneak peek of its new broadband light source, the AvaLight-DH-BAL. The company will also run a series of application-focused live demonstrations, highlighting recent developments in laser-induced breakdown spectroscopy (LIBS), thin-film characterization and biomedical spectroscopy.
AvaSoftX has been developed to streamline the path from raw spectra to usable results. The new software platform offers preloaded applications tailored to specific measurement techniques and types, such as irradiance, LIBS, chemometry and Raman. Each application presents the controls and visualizations needed for that workflow, reducing setup time and the risk of user error.
Streamlined solution: The new AvaSoftX software platform offers next-generation control and data handling. (Courtesy: Avantes)
Smart wizards guide users step-by-step through the setup of a measurement – from instrument configuration and referencing to data acquisition and evaluation. For more advanced users, AvaSoftX supports customization with scripting and user-defined libraries, enabling the creation of reusable methods and application-specific data handling. Integrated instruction videos and online manuals support users directly within the software.
The software features an accessible dark interface optimized for extended use in laboratory and production environments. Improved LIBS functionality will be highlighted through a live demonstration that combines AvaSoftX with the latest Avantes spectrometers and light sources.
Also making its public debut is the AvaLight-DH-BAL, a new and improved deuterium–halogen broadband light source designed to replace the current DH product line. The system delivers continuous broadband output from 215 to 2500 nm and combines a more powerful halogen lamp with a reworked deuterium section for improved optical performance and stability.
A switchable deuterium and halogen optical path is combined with deuterium peak suppression to improve dynamic range and spectral balance. The source is built into a newly developed, more robust housing to improve mechanical and thermal stability. Updated electronics support adjustable halogen output, a built-in filter holder, and both front-panel and remote-controlled shutter operation.
The AvaLight-DH-BAL is intended for applications requiring stable, high-output broadband illumination, including UV–VIS–NIR absorbance spectroscopy, materials research and thin-film analysis. The official launch date for the light source, as well as the software, will be shared in the near future.
Avantes will also run a series of live application demonstrations. These include a LIBS setup for rapid elemental analysis, a thin-film measurement system for optical coating characterization, and a biomedical spectroscopy demonstration focusing on real-time measurement and analysis. Each demo will be operated using the latest Avantes hardware and controlled through AvaSoftX, allowing visitors to assess overall system performance and workflow integration. Avantes’ engineering team will be available throughout the event.
For product previews, live demonstrations and more, meet Avantes at booth #1157.
HydraHarp 500: high-performance time tagger redefines precision and scalability
One year after its successful market introduction, the HydraHarp 500 continues to be a standout highlight at PicoQuant’s booth at Photonics West. Designed to meet the growing demands of advanced photonics and quantum optics, the HydraHarp 500 sets benchmarks in timing performance, scalability and flexible interfacing.
At its core, the HydraHarp 500 delivers exceptional timing precision combined with ultrashort jitter and dead time, enabling reliable photon timing measurements even at very high count rates. With support for up to 16 fully independent input channels plus a common sync channel, the system allows true simultaneous multichannel data acquisition without cross-channel dead time, making it ideal for complex correlation experiments and high-throughput applications.
At the forefront of photon timing: The high-resolution multichannel time tagger HydraHarp 500 offers picosecond timing precision. It combines versatile trigger methods with multiple interfaces, making it ideally suited for demanding applications that require many input channels and high data throughput. (Courtesy: PicoQuant)
A key strength of the HydraHarp 500 is its high flexibility in detector integration. Multiple trigger methods support a wide range of detector technologies, from single-photon avalanche diodes (SPADs) to superconducting nanowire single-photon detectors (SNSPDs). Versatile interfaces, including USB 3.0 and a dedicated FPGA interface, ensure seamless data transfer and easy integration into existing experimental setups. For distributed and synchronized systems, White Rabbit compatibility enables precise cross-device timing coordination.
Engineered for speed and efficiency, the HydraHarp 500 combines ultrashort per-channel dead time with industry-leading timing performance, ensuring complete datasets and excellent statistical accuracy even under demanding experimental conditions.
Looking ahead, PicoQuant is preparing to expand the HydraHarp family with the upcoming HydraHarp 500 L. This new variant will set new standards for data throughput and scalability. With outstanding timing resolution, excellent timing precision and up to 64 flexible channels, the HydraHarp 500 L is engineered for the highest-throughput applications and is powered – for the first time – by USB 3.2 Gen 2×2, making it ideal for rapid, large-volume data acquisition.
With the HydraHarp 500 and the forthcoming HydraHarp 500 L, PicoQuant continues to redefine what is possible in photon timing, delivering precision, scalability and flexibility for today’s and tomorrow’s photonics research. For more information, visit www.picoquant.com or contact us at info@picoquant.com.
Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.
“It’s hard to say when exactly sending people to Mars became a goal for humanity,” ponders author Scott Solomon in his new book Becoming Martian: How Living in Space Will Change Our Bodies and Minds – and I think we’d all agree. Ten years ago, I’m not sure any of us thought even returning to the Moon was seriously on the cards. Yet here we are, suddenly living in a second space age, where the first people to purchase one-way tickets to the Red Planet have likely already been born.
The technology required to ship humans to Mars, and the infrastructure required to keep them alive, is well constrained, at least in theory. One could write thousands of words discussing the technical details of reusable rocket boosters and underground architectures. However, Becoming Martian is not that book. Instead, it deals with the effect Martian life will have on the human body – both in the short term across a single lifetime; and in the long term, on evolutionary timescales.
This book’s strength lies in its authorship: it is not written by a physicist enthralled by the engineering challenge of Mars, nor by an astronomer predisposed to romanticizing space exploration. Instead, Solomon is a research biologist who teaches ecology, evolutionary biology and scientific communication at Rice University in Houston, Texas.
Becoming Martian starts with a whirlwind, stripped-down tour of Mars across mythology, astronomy, culture and modern exploration. This effectively sets out the core issue: Mars is fundamentally different from Earth, and life there is going to be very difficult. Solomon goes on to describe the effects of space travel and microgravity on humans that we know of so far: anaemia, muscle wastage, bone density loss and increased radiation exposure, to name just a few.
Where the book really excels, though, is when Solomon uses his understanding of evolutionary processes to extend these findings and conclude how Martian life would be different. For example, childbirth becomes a very risky business on a planet with about one-third of Earth’s gravity. The loss of bone density translates into increased pelvic fractures, and the muscle wastage into an inability for the uterus to contract strongly enough. The result? All Martian births will likely need to be C-sections.
Solomon applies his expertise to the whole human body, including our “entourage” of micro-organisms. The indoor life of a Martian is likely to affect the immune system to the degree that contact with an Earthling would be immensely risky. “More than any other factor, the risk of disease transmission may be the wedge that drives the separation between people on the two planets,” he writes. “It will, perhaps inevitably, cause the people on Mars to truly become Martians.” Since many diseases are harboured or spread by animals, there is a compelling argument that Martians would be vegan and – a dealbreaker for some I imagine – unable to have any pets. So no dogs, no cats, no steak and chips on Mars.
Let’s get physical
The most fascinating part of the book for me is how Solomon repeatedly links the biological and psychological research with the more technical aspects of designing a mission to Mars. For example, the first exploratory teams should have an odd number of members, to make decisions easier and us-versus-them rifts less likely. The first colonies will also need to number between 10,000 and 11,000 individuals to ensure enough genetic diversity to guard against effects such as genetic drift and population crashes.
Amusingly, the one part of human activity most important for a sustainable colony – procreation – is the most understudied. When a NASA scientist suggested that a colony would need private spaces with soundproof walls, the backlash was so severe that NASA had to reassure Congress that taxpayer dollars were not being “wasted” encouraging sexual activity among astronauts.
Solomon’s writing is concise yet extraordinarily thorough – there is always just enough for you to feel you understand the importance and nuance of topics ranging from Apollo-era health studies to evolution, and from AI to genetic engineering. The book is impeccably researched, and he presents conflicting ethical viewpoints so deftly, and without apparent judgement, that you are left with plenty of space to form your own opinions. So much so that when Solomon shares his own stance on the colonization of Mars in the epilogue, it comes as a bit of a surprise.
In essence, this book lays out a convincing argument that it might be our biology, not our technology, that limits humanity’s expansion to Mars. And if we are able to overcome those limitations, either with purposeful genetic engineering or passive evolutionary change, this could mean we have shed our humanity.
Becoming Martian is one of the best popular-science books I have read within the field, and it is an uplifting read, despite dealing with some of the heaviest ethical questions in space sciences. Whether you’re planning your future as a Martian or just wondering if humans can have sex in space, this book should be on your wish list.
Using incidental data collected by the BepiColombo mission, an international research team has made the first detailed measurements of how coronal mass ejections (CMEs) reduce cosmic-ray intensity at varying distances from the Sun. Led by Gaku Kinoshita at the University of Tokyo, the team hopes that their approach could help improve the accuracy of space weather forecasts following CMEs.
CMEs are dramatic bursts of plasma originating from the Sun’s outer atmosphere. In particularly violent events, this plasma can travel through interplanetary space, sometimes interacting with Earth’s magnetic field to produce powerful geomagnetic storms. These storms result in vivid aurorae in Earth’s polar regions and can also damage electronics on satellites and spacecraft. Extreme storms can even affect electrical grids on Earth.
To prevent such damage, astronomers aim to predict the path and intensity of CME plasma as accurately as possible – allowing endangered systems to be temporarily shut down with minimal disruption. According to Kinoshita’s team, one source of information has so far been largely unexplored.
Pushing back cosmic rays
Within interplanetary space, CME plasma interacts with cosmic rays, which are energetic charged particles of extrasolar origin that permeate the solar system with a roughly steady flux. When an interplanetary CME (ICME) passes by, it temporarily pushes back these cosmic rays, creating a local decrease in their intensity.
“This phenomenon is known as the Forbush decrease effect,” Kinoshita explains. “It can be detected even with relatively simple particle detectors, and reflects the properties and structure of the passing ICME.”
In principle, cosmic-ray observations can provide detailed insights into the physical profile of a passing ICME. But despite their relative ease of detection, Forbush decreases had not yet been observed simultaneously by detectors at multiple distances from the Sun, leaving astronomers unclear on how propagation distance affects their severity.
Now, Kinoshita’s team have explored this spatial relationship using BepiColombo, a European and Japanese mission that will begin orbiting Mercury in November 2026. While the mission focuses on Mercury’s surface, interior, and magnetosphere, it also carries non-scientific equipment capable of monitoring cosmic rays and solar plasma in its surrounding environment.
“Such radiation monitoring instruments are commonly installed on many spacecraft for engineering purposes,” Kinoshita explains. “We developed a method to observe Forbush decreases using a non-scientific radiation monitor onboard BepiColombo.”
Multiple missions
The team combined these measurements with data from specialized radiation-monitoring missions, including ESA’s Solar Orbiter, which is currently probing the inner heliosphere from inside Mercury’s orbit, as well as a network of near-Earth spacecraft. Together, these instruments allowed the researchers to build a detailed, distance-dependent profile of a week-long ICME that occurred in March 2022.
Just as predicted, the measurements revealed a clear relationship between the Forbush decrease effect and distance from the Sun.
“As the ICME evolved, the depth and gradient of its associated cosmic-ray decrease changed accordingly,” Kinoshita says.
With this method now established, the team hopes it can be applied to non-scientific radiation monitors on other missions throughout the solar system, enabling a more complete picture of the distance dependence of ICME effects.
“An improved understanding of ICME propagation processes could contribute to better forecasting of disturbances such as geomagnetic storms, leading to further advances in space weather prediction,” Kinoshita says. In particular, this approach could help astronomers model the paths and intensities of solar plasma as soon as a CME erupts, improving preparedness for potentially damaging events.
When particle colliders smash particles into each other, the resulting debris cloud sometimes contains a puzzling ingredient: light atomic nuclei. Such nuclei have relatively low binding energies, and they would normally break down at temperatures far below those found in high-energy collisions. Somehow, though, their signature remains. This mystery has stumped physicists for decades, but researchers in the ALICE collaboration at CERN have now figured it out. Their experiments showed that light nuclei form via a process called resonance-decay formation – a result that could pave the way towards searches for physics beyond the Standard Model.
Baryon resonance
The ALICE team studied deuterons (a bound proton and neutron) and antideuterons (a bound antiproton and antineutron) that form in experiments at CERN’s Large Hadron Collider. Both deuterons and antideuterons are fragile, and their binding energies of 2.2 MeV would seemingly make it hard for them to form in collisions with energies that can exceed 100 MeV – 100 000 times hotter than the centre of the Sun.
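That comparison can be checked with a quick back-of-the-envelope conversion using the Boltzmann constant; the solar-core temperature used below (~1.5 × 10⁷ K) is a standard textbook value rather than a figure from the article.

```python
# Converting a 100 MeV collision energy into an effective temperature and
# comparing it to the Sun's core (~1.5e7 K, standard textbook value).
k_B_eV_per_K = 8.617e-5            # Boltzmann constant in eV/K
collision_energy_eV = 100e6        # 100 MeV expressed in eV

T_effective = collision_energy_eV / k_B_eV_per_K   # ~1.2e12 K
T_sun_core = 1.5e7                                  # K

print(f"Effective temperature: {T_effective:.1e} K")
print(f"Ratio to solar core:   {T_effective / T_sun_core:.1e}")  # ~8e4, i.e. of order 100 000
```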
The collaboration found that roughly 90% of the deuterons seen after such collisions form in a three-phase process. In the first phase, an initial collision creates a so-called baryon resonance, which is an excited state of a particle made of three quarks (such as a proton or neutron). This particle is called a Δ baryon and is highly unstable, so it rapidly decays into a pion and a nucleon (a proton or a neutron) during the second phase of the process. Then, in the third (and, crucially, much later) phase, the nucleon cools down to a point where its energy properties allow it to bind with another nucleon to form a deuteron.
Smoking gun
Measuring such a complex process is not easy, especially as everything happens on a length scale of femtometres (10⁻¹⁵ m). To tease out the details, the collaboration performed precision measurements to correlate the momenta of the pions and deuterons. When they analysed the momentum difference between these particle pairs, they observed a peak in the data corresponding to the mass of the Δ baryon. This peak shows that the pion and the deuteron are kinematically linked because they share a common ancestor: the pion came from the same Δ decay that provided one of the deuteron’s nucleons.
Panos Christakoglou, a member of the ALICE collaboration based at Maastricht University in the Netherlands, says the experiment is special because, in contrast to most previous attempts – where results were interpreted in light of models or phenomenological assumptions – this technique is model-independent. He adds that the results of this study could be used to improve models of high-energy proton–proton collisions in which light nuclei (and maybe hadrons more generally) are formed. Other possibilities include improving our interpretations of cosmic-ray studies that measure the fluxes of (anti)nuclei in the galaxy – a useful probe for astrophysical processes.
The hunt is on
Intriguingly, Christakoglou suggests that the team’s technique could also be used to search for indirect signs of dark matter. Many models predict that dark-matter candidates such as Weakly Interacting Massive Particles (WIMPs) will decay or annihilate in processes that also produce Standard Model particles, including (anti)deuterons. “If for example one measures the flux of (anti)nuclei in cosmic rays being above the ‘Standard Model based’ astrophysical background, then this excess could be attributed to new physics which might be connected to dark matter,” Christakoglou tells Physics World.
Michael Kachelriess, a physicist at the Norwegian University of Science and Technology in Trondheim, Norway, who was not involved in this research, says the debate over the correct formation mechanism for light nuclei (and antinuclei) has divided particle physicists for a long time. In his view, the data collected by the ALICE collaboration decisively resolves this debate by showing that light nuclei form in the late stages of a collision via the coalescence of nucleons. Kachelriess calls this a “great achievement” in itself, and adds that similar approaches could make it possible to address other questions, such as whether thermal plasmas form in proton-proton collisions as well as in collisions between heavy ions.
New calculations by physicists in the US provide deeper insights into an exotic material in which superconductivity and magnetism can coexist. Using a specialized effective field theory, Zhengyan Shi and Todadri Senthil at the Massachusetts Institute of Technology show how this coexistence can emerge from the collective states of mobile anyons in certain 2D materials.
An anyon is a quasiparticle with statistical properties that lie somewhere between those of bosons and fermions. First observed in 2D electron gases in strong magnetic fields, anyons are known for their fractional electrical charge and fractional exchange statistics, which alter the quantum state of two identical anyons when they are exchanged.
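In the simplest (abelian) case, this exchange behaviour can be summarized by a single phase factor – the standard textbook statement, included here for context rather than taken from the new study:

$$\psi(\mathbf{r}_2, \mathbf{r}_1) = e^{i\theta}\,\psi(\mathbf{r}_1, \mathbf{r}_2),$$

where $\theta = 0$ for bosons, $\theta = \pi$ for fermions, and $\theta$ takes intermediate, fractional values for anyons.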
Unlike ordinary electrons, anyons produced in these early experiments could not move freely, preventing them from forming complex collective states. Yet in 2023, experiments with a twisted bilayer of molybdenum ditelluride provided the first evidence for mobile anyons through observations of fractional quantum anomalous Hall (FQAH) insulators. This effect appears as fractionally quantized electrical resistance in 2D electron systems at zero applied magnetic field.
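In such an FQAH state, the Hall resistance takes the standard fractionally quantized form (again a textbook relation, not a value quoted in the study):

$$R_{xy} = \frac{h}{\nu e^2},$$

with the filling factor $\nu$ a fraction rather than an integer, even though no external magnetic field is applied.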
Remarkably, these experiments revealed that molybdenum ditelluride can exhibit superconductivity and magnetism at the same time. Since superconductivity usually relies on electron pairing that can be disrupted by magnetism, this coexistence was previously thought impossible.
Anyonic quantum matter
“This then raises a new set of theoretical questions,” explains Shi. “What happens when a large number of mobile anyons are assembled together? What kind of novel ‘anyonic quantum matter’ can emerge?”
In their study, Shi and Senthil explored these questions using a new effective field theory for an FQAH insulator. Effective field theories are widely used in physics to approximate complex phenomena without modelling every microscopic detail. In this case, the duo’s model captured the competition between anyon mobility, interactions, and fractional exchange statistics in a many-body system of mobile anyons.
To test their model, the researchers considered the doping of an FQAH insulator – adding mobile anyons beyond the plateau in Hall resistance, where the existing anyons were effectively locked in place. This allowed the quasiparticles to move freely and form new collective phases.
“Crucially, we recognized that the fate of the doped state depends on the energetic hierarchy of different types of anyons,” Shi explains. “This observation allowed us to develop a powerful heuristic for predicting whether the doped state becomes a superconductor without any detailed calculations.”
In their model, Shi and Senthil focused on a specific FQAH insulator called a Jain state, which hosts two types of anyon excitations: one carries an electrical charge of 1/3 of the electron charge, the other 2/3. In a perfectly clean system, doping the insulator with 2/3-charge anyons produced a chiral topological superconductor, a phase that is robust against disorder and features edge currents flowing in only one direction. In contrast, doping with 1/3-charge anyons produced a metal with broken translation symmetry – still conducting, but with non-uniform patterns in its electron density.
Anomalous vortex glass
“In the presence of impurities, we showed that the chiral superconductor near the superconductor–insulator transition is a novel phase of matter dubbed the ‘anomalous vortex glass’, in which patches of swirling supercurrents are sprinkled randomly across the sample,” Shi describes. “Observing this vortex glass phase would be smoking-gun evidence for the anyonic mechanism for superconductivity.”
The results suggest that even when adding the simplest kind of anyons – like those in the Jain state – the collective behaviour of these quasiparticles can enable the coexistence of magnetism and superconductivity. In future studies, the duo hopes that more advanced methods for introducing mobile anyons could reveal even more exotic phases.
“Remarkably, our theory provides a qualitative account of the phase diagram of a particular 2D material (twisted molybdenum ditelluride), although many more tests are needed to rule out other possible explanations,” Shi says. “Overall, these findings highlight the vast potential of anyonic quantum matter, suggesting a fertile ground for future discoveries.”