Electronic chips are made using photolithography, which involves shining ultraviolet light through a patterned mask and onto a semiconductor wafer. The light activates a photoresist on the surface, which allows the etching of a pattern on the wafer. Through successive iterations of photolithography and the deposition of metals, devices with features as small as a few dozen nanometres are created.
Crucial to this complex manufacturing process is aligning the wafer with successive masks. This must be done in a rapid and repeatable manner, while maintaining nanometre precision throughout the manufacturing process. That’s where Queensgate – part of precision optical and mechanical instrumentation manufacturer Prior Scientific – comes into the picture.
For 45 years, UK-based Queensgate has led the way in the development of nanopositioning technologies. The firm spun out of Imperial College London in 1979 as a supplier of precision instrumentation for astronomy. Its global reputation was sealed when NASA chose Queensgate technology for use on the Space Shuttle and the International Space Station. The company has worked for over two decades with the hard-disk drive-maker Seagate to develop technologies for the rapid inspection of read/write heads during manufacture. Queensgate is also involved in a longstanding collaboration with the UK’s National Physical Laboratory (NPL) to develop nanopositioning technologies that are being used to define international standards of measurement.
Move to larger wafers
The semiconductor industry is in the process of moving from 200 mm to 300 mm wafers – a change that more than doubles the number of chips that can be produced from each wafer. Processing the larger and heavier wafers requires a new generation of equipment that can position wafers with nanometre precision.
Queensgate already works with original equipment manufacturers (OEMs) to make optical wafer-inspection systems that are used to identify defects during the processing of 300 mm wafers. Now the company has set its sights on wafer alignment systems. The move to 300 mm wafers offers the company an opportunity to contribute to the development of next-generation alignment systems, says Queensgate product manager Craig Goodman.
“The wafers are getting bigger, which puts a bigger strain on the positioning requirements and we’re here to help solve problems that that’s causing,” explains Goodman. “We are getting lots of inquiries from OEMs about how our technology can be used in the precision positioning of wafers used to produce next-generation high-performance semiconductor devices”.
The move to 300 mm means that fabs need to align wafers that are both larger in area and much heavier. What is more, a much heavier chuck is required to hold a 300 mm wafer during production. This leads to conflicting requirements for a positioning system. It must be accurate over shorter distances as feature sizes shrink, but also be capable of moving a much larger and much heavier wafer and chuck. Today, Queensgate’s wafer stage can handle loads of up to 14 kg while achieving a spatial resolution of 1.5 nm.
Goodman explains that Queensgate’s technology is not used to make large adjustments in the relative alignment of wafer and mask – which is done by longer travel stages using technologies such as air-bearings. Instead, the firm’s nanopositioning systems are used in the final stage of alignment, moving the wafer by less than 1 mm at nanometre precision.
Eliminating noise
Achieving this precision was a huge challenge that Queensgate has overcome by focusing on the sources of noise in its nanopositioning systems. Goodman says that there are two main types of noise that must be minimized. One is external vibration, which can come from a range of environmental sources – even human voices. The other is noise in the electronics that control the nanopositioning system’s piezoelectric actuators.
Goodman explains that noise reduction is achieved through the clever design of the mechanical and electronic systems used for nanopositioning. The positioning stage, for example, must be stiff to reject vibrational noise, while notch filters are used to reduce the effect of electronic noise to the sub-nanometre level.
Queensgate provides its nanopositioning technology to OEMs, who integrate it within their products – which are then sold to chipmakers. Goodman says that Queensgate works in-house with its OEM customers to ensure that the desired specifications are achieved. “A stage or a positioner for 300 mm wafers is a highly customized application of our technologies,” he explains.
While the resulting nanopositioning systems are state of the art, Goodman points out that they will be used in huge facilities that process tens of thousands of wafers per month. “It is our aim and our customer’s aim that Queensgate nanopositioning technologies will be used in the mass manufacture of chips,” says Goodman. This means that the system must be very fast to achieve high throughput. “That is why we are using piezoelectric actuators for the final micron of positioning – they are very fast and very precise.”
Today most chip manufacturing is done in Asia, but there are ongoing efforts to boost production in the US and Europe to ensure secure supplies in the future. Goodman says this trend to semiconductor independence is an important opportunity for Queensgate. “It’s a highly competitive, growing and interesting market to be a part of,” he says.
The datasheet for Queensgate’s NPS-XYP-250Q 300 mm wafer–mask alignment stage is now available. Goodman describes it as “the physically ‘largest’ piezo nanopositioning stage ever delivered”.
On Christmas Day in 1741, when Swedish scientist Anders Celsius first noted down the temperature in his Uppsala observatory using his own 100-point – or “Centi-grade” – scale, he would have had no idea that this was to be his greatest legacy.
A newly published, engrossing biography – Celsius: A Life and Death by Degrees by Ian Hembrow – tells the life story of the man whose name is so well known. The book reveals the broader scope of Celsius’ scientific contributions beyond the famous centigrade scale, as well as highlighting the collaborative nature of scientific endeavour and drawing parallels with modern scientific challenges such as climate change.
That winter, Celsius, who was at the time in his early 40s, was making repeated measurements of the period of a pendulum – the time it takes for one complete swing back and forth. He could use that to calculate a precise value for the acceleration caused by gravity, and he was expecting to find that value to be very slightly greater in Sweden than at more southern latitudes. That would provide further evidence for the flattening of the Earth at the poles, something that Celsius had already helped establish. But it required great precision in the experimental work, and Celsius was worried that the length (and therefore the period) of the pendulum would vary slightly with temperature. He had started these measurements that summer and now it was winter, which meant he had lit a fire to hopefully match the summer temperatures. But would that suffice?
Throughout his career, Celsius had been a champion of precise measurement, and he knew that temperature readings were often far from precise. He was using a thermometer sent to him by the French astronomer Joseph-Nicolas Delisle, with a design based on the expansion of mercury. That method was promising, but Delisle used a scale that took the boiling point of water and the temperature in the basement of his home in Paris as its two reference points. Celsius was unconvinced by the latter. So he made adaptations (which are still there to be seen in an Uppsala museum), twisting wire around the glass tube at the boiling and freezing points of water, and dividing the length between the two into 100 even steps.
The centigrade scale, later renamed in his honour, was born. In his first recorded readings he found the temperature in the pleasantly heated room to be a little over 80 degrees! Following Delisle’s system – perhaps noting that this would mean he had to do less work with negative numbers – he placed the boiling point at zero on his scale, and the freezing point at 100. It was some years later, after his death, that a scientific consensus flipped the scale on its head to create the version we know so well today.
Hembrow does a great job at placing this moment in the context of the time, and within the context of Celsius’ life. He spends considerable time recounting the scientist’s many other achievements and the milestones of his fascinating life.
The expedition that had established the flattening of the Earth at the poles was the culmination of a four-year grand tour that Celsius had undertaken in his early 30s. Already a professor at Uppsala University, in the town where he had grown up in an academic family, he travelled to Germany, Italy, France and London. There he saw at first hand the great observatories that he had heard of and established links with the people who had built and maintained them.
On his extended travels he became a respected figure in the world of science and so it was no surprise when he was selected to join a French expedition to the Arctic in 1736, led by mathematician Pierre Louis Maupertuis, to measure a degree of latitude. Isaac Newton had died just a few years before and his ideas relating to gravitation were not yet universally accepted. If it could be shown that the distance between two lines of latitude was greater near the poles than at the equator, that would prove Newton right about the shape of the Earth – a key prediction of his theory of gravitation.
After a period in London equipping themselves with precision instruments, the team started the arduous journey to the Far North. Once there, they had to survey the land – a task made challenging by the thick forest and hilly territory. They selected nine mountains to climb with their heavy equipment, felling dozens of trees on each and then creating a sturdy wooden marker on each peak. This allowed them to create a network of triangles stretching north, with each point visible from the two next to it. But they also needed one straight line of known length to complete their calculations. With his local knowledge, Celsius knew that this could only be achieved on the frozen surface of the Torne river – and that it would involve several weeks of living on the ice, working largely in the dark and the intense cold, and sleeping in tents.
After months of hardship, the calculations were complete and showed that the length of one degree of latitude in the Arctic was almost 1.5 km longer than the equivalent value in France. The spheroid shape of the Earth had been established.
Of course, not everybody accepted the result. Politics and personalities got in the way. Hembrow uses this as the starting point for a polemic about aspects of modern science and climate change with which he ends his fine book. He argues that the painstaking work carried out by an international team, willing to share ideas and learn from each other, provides us with a template by which modern problems must be addressed.
Considering how often we use his name, most of us know little about Celsius. This book helps to address that deficit. It is a very enjoyable and accessible read and would appeal, I think, to anybody with an interest in the history of science.
A new transistor made from semiconducting vertical nanowires of gallium antimonide (GaSb) and indium arsenide (InAs) could rival today’s best silicon-based devices. The new transistors are switched on and off by electrons tunnelling through an energy barrier, making them highly energy-efficient. According to their developers at the Massachusetts Institute of Technology (MIT) in the US, they could be ideal for low-energy applications such as the Internet of Things (IoT).
Electronic transistors use an applied voltage to regulate the flow of electricity – that is, electrons – within a semiconductor chip. When this voltage is applied to a conventional silicon transistor, electrons climb over an energy barrier from one side of the device to the other, and it switches from an “off” state to an “on” one. This type of switching is the basis of modern information technology, but there is a fundamental physical limit on the threshold voltage required to get the electrons moving. This limit, which is sometimes termed the “Boltzmann tyranny” because it stems from the Boltzmann-like energy distribution of electrons in a semiconductor, puts a cap on the energy efficiency of this type of transistor.
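The limit can be put in numbers. For a conventional transistor, the subthreshold swing – the change in gate voltage needed to alter the current by a factor of ten – cannot fall below the thermal value set by the Boltzmann distribution:

$$
SS = \frac{\mathrm{d}V_G}{\mathrm{d}\,(\log_{10} I_D)} \;\geq\; \ln(10)\,\frac{k_B T}{q} \;\approx\; 60~\mathrm{mV/decade} \quad (T = 300~\mathrm{K}).
$$

This textbook expression is why sub-60 mV/decade switching, discussed below, is taken as the signature of a device that has escaped the Boltzmann tyranny.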
Highly precise process
In the new work, MIT researchers led by electrical engineer Jesús A del Alamo made their transistor using a top-down fabrication technique they developed. This extremely precise process uses high-quality, epitaxially grown structures and both dry and wet etching to fabricate nanowires just 6 nm in diameter. The researchers then placed a gate stack composed of a very thin gate dielectric and a metal gate on the sidewalls of the nanowires. Finally, they added point contacts to the source, gate and drain of the transistors using multiple planarization and etch-back steps.
The sub-10 nm size of the devices and the extreme thinness of the gate dielectric (just 2.4 nm) means that electrons are confined in a space so small that they can no longer move freely. In this quantum confinement regime, electrons no longer climb over the thin energy barrier at the GaSb/InAs heterojunction. Instead, they tunnel through it. The voltage required for such a device to switch is much lower than it is for traditional silicon-based transistors.
Steep switching slope and high drive current
Researchers have been studying tunnelling-type transistors for more than 20 years, notes Yanjie Shao, a postdoctoral researcher in nanoelectronics and semiconductor physics at MIT and the lead author of a study in Nature Electronics on the new transistor. Such devices are considered attractive because they allow for ultra-low-power electronics. However, they come with a major challenge: it is hard to maintain a sharp transition between “off” and “on” while delivering a high drive current.
When the project began five years ago, Shao says the team “believed in the potential of the GaSb/InAs ‘broken-band’ system to overcome this difficulty”. But it wasn’t all plain sailing. Fabricating such small vertical nanowires was, he says, “one of the biggest problems we faced”. Making a high-quality gate stack with a very low density of electronic trap states (states within dielectric materials that capture and release charge carriers in a semiconductor channel) was another challenge.
After many unsuccessful attempts, the team found a way to make the system work. “We devised a plasma-enhanced deposition method to make the gate dielectric and this was key to obtaining exciting transistor performance,” Shao tells Physics World.
The researchers also needed to understand the behaviour of tunnelling transistors, which Shao calls “not easy”. The task was made possible, he adds, by a combination of experimental work and first-principles modelling by Ju Li’s group at MIT, together with quantum transport simulation by David Esseni’s group at the University of Udine, Italy. These studies revealed that band alignment and near-periphery scaling of the number of conduction modes at the heterojunction interface play key roles in the physics of electrons under extreme confinement.
The reward for all this work is a device with a drive current as high as 300 µA/µm and a switching slope of less than 60 mV/decade (a decade, in this context, is a factor-of-ten change in current), meaning that the supply voltage is just 0.3 V. This is below the fundamental limit achievable with silicon-based devices, and around 20 times better than other tunnelling transistors of its type.
Potential for novel devices
Shao says the most likely applications for the new transistor are in ultra-low-voltage electronics. These will be useful for artificial intelligence and Internet of Things (IoT) applications, which require devices with higher energy efficiencies. Shao also hopes the team’s work will bring about a better understanding of the physics at surfaces and interfaces that feature extreme quantum confinement – something that could lead to novel devices that benefit from such nanoscale physics.
The MIT team is now developing transistors with a slightly different configuration that features vertical “nano-fins”. These could make it possible to build more uniform devices with less structural variation across the surface. “Being so small, even a variation of just 1 nm can adversely affect their operation,” Shao says. “We also hope that we can bring this technology closer to real manufacturing by optimizing the process technology.”
Physical exercise plays an important role in controlling disease, including cancer, due to its effect on the human body’s immune system. A research team from the USA and India has now developed a mathematical model to quantitatively investigate the complex relationship between exercise, immune function and cancer.
Exercise is thought to suppress tumour growth by activating the body’s natural killer (NK) cells. In particular, skeletal muscle contractions drive the release of interleukin-6 (IL-6), which causes NK cells to shift from an inactive to an active state. The activated NK cells can then infiltrate and kill tumour cells. To investigate this process in more depth, the team developed a mathematical model describing the transition of an NK cell from its inactive to active state, at a rate driven by exercise-induced IL-6 levels.
“We developed this model to study how the interplay of exercise intensity and exercise duration can lead to tumour suppression and how the parameters associated with these exercise features can be tuned to get optimal suppression,” explains senior author Niraj Kumar from the University of Massachusetts Boston.
Impact of exercise intensity and duration
The model, reported in Physical Biology, is constructed from three ordinary differential equations that describe the temporal evolution of the number of inactive NK cells, active NK cells and tumour cells, as functions of the growth rates, death rates, switching rates (for NK cells) and the rate of tumour cell kill by activated NK cells.
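To make the structure of such a model concrete, here is a minimal sketch of a three-equation system of this general form, written in Python with SciPy. The functional forms, parameter names and numbers are illustrative assumptions for this article, not the equations or values published by the researchers.

```python
# Illustrative three-compartment sketch: inactive NK cells, active NK cells, tumour cells.
# All functional forms and parameter values are assumptions, not the published model.
import numpy as np
from scipy.integrate import solve_ivp

def il6(t, alpha0, tau):
    """Exercise-induced IL-6 signal: amplitude set by exercise intensity (alpha0),
    decaying over the exercise time scale tau."""
    return alpha0 * np.exp(-t / tau)

def rhs(t, y, p):
    Ni, Na, T = y
    switch = p["k_switch"] * il6(t, p["alpha0"], p["tau"])   # IL-6-driven activation rate
    dNi = p["supply"] - switch * Ni - p["d_i"] * Ni          # replenishment, activation, death
    dNa = switch * Ni - p["d_a"] * Na                        # activation, death
    dT = p["r"] * T * (1 - T / p["K"]) - p["kill"] * Na * T  # logistic growth minus NK kill
    return [dNi, dNa, dT]

params = {"supply": 1.0, "d_i": 0.1, "d_a": 0.2, "k_switch": 0.5,
          "alpha0": 2.0, "tau": 1.0, "r": 0.3, "K": 1e3, "kill": 0.01}

# Simulate 20 days of tumour growth, as in the scenarios described below
sol = solve_ivp(rhs, (0, 20), [10.0, 0.0, 50.0], args=(params,), max_step=0.1)
print(f"Tumour population after 20 days: {sol.y[2, -1]:.1f}")
```

Varying alpha0 (a stand-in for exercise intensity) or tau (duration) in a sketch like this is enough to explore, qualitatively, the behaviour the researchers describe below.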
Kumar and collaborators – Jay Taylor at Northeastern University and T Bagarti at Tata Steel’s Graphene Center – first investigated how exercise intensity impacts tumour suppression. They used their model to determine the evolution over time of tumour cells for different values of α0 – a parameter that correlates with the maximum level of IL-6 and increases with increased exercise intensity.
Simulating tumour growth over 20 days showed that the tumour population increased non-monotonically, exhibiting a minimum population (maximum tumour suppression) at a certain critical time before increasing and then reaching a steady-state value in the long term. At all time points, the largest tumour population was seen for the no-exercise case, confirming the premise that exercise helps suppress tumour growth.
The model revealed that as the intensity of the exercise increased, the level of tumour suppression increased alongside, due to the larger number of active NK cells. In addition, greater exercise intensity sustained tumour suppression for a longer time. The researchers also observed that if the initial tumour population was closer to the steady state, the effect of exercise on tumour suppression was reduced.
Next, the team examined the effect of exercise duration, by calculating tumour evolution over time for varying exercise time scales. Again, the tumour population showed non-monotonic growth with a minimum population at a certain critical time and a maximum population in the no-exercise case. The maximum level of tumour suppression increased with increasing exercise duration.
Finally, the researchers analysed how multiple bouts of exercise impact tumour suppression, modelling a series of alternating exercise and rest periods. The model revealed that the effect of exercise on maximum tumour suppression exhibits a threshold response with exercise frequency. Up to a critical frequency, which varies with exercise intensity, the maximum tumour suppression doesn’t change. However, if the exercise frequency exceeds the critical frequency, it leads to a corresponding increase in maximum tumour suppression.
Clinical potential
Overall, the model demonstrated that increasing the intensity or duration of exercise leads to greater and sustained tumour suppression. It also showed that manipulating exercise frequency and intensity within multiple exercise bouts had a pronounced effect on tumour evolution.
These results highlight the model’s potential to guide the integration of exercise into a patient’s cancer treatment programme. While still at the early development stage, the model offers valuable insight into how exercise can influence immune responses. And as Taylor points out, as more experimental data become available, the model has potential for further extension.
“In the future, the model could be adapted for clinical use by testing its predictions in human trials,” he explains. “For now, it provides a foundation for designing exercise regimens that could optimize immune function and tumour suppression in cancer patients, based on the exercise intensity and duration.”
Next, the researchers plan to extend the model to incorporate both exercise and chemotherapy dosing. They will also explore how heterogeneity in the tumour population can influence tumour suppression.
The US Department of Energy (DOE) has awarded $50m to a consortium of national laboratories and universities to develop sodium-ion batteries as a sustainable, low-cost alternative to lithium-ion technology.
Lithium-ion batteries currently dominate the electric-vehicle market and they are also used in smartphones and to store energy from renewable sources such as wind and solar. Yet relying on a single battery technology such as lithium-ion creates dependencies on critical elements such as lithium, cobalt and nickel.
Sodium, however, is an abundant, inexpensive element and offers a promising way to diversify battery materials. The downside is that sodium-ion batteries currently store less energy per unit weight and volume than lithium-ion batteries.
The money from the DOE over the next five years will be used to create the Low-cost Earth-abundant Na-ion Storage (LENS) consortium. LENS will be led by Argonne National Laboratory and includes five other DOE national laboratories, such as Brookhaven, Lawrence Berkeley and Sandia, as well as eight US universities.
“By leading the LENS consortium, Argonne will push sodium-ion battery technology forward and contribute to a secure energy future for everyone,” notes Argonne director Paul Kearns. “Our scientific expertise and dynamic collaborations in this important field will strengthen US competitiveness.”
The LENS consortium will now develop high-energy electrode materials and electrolytes for sodium-ion batteries as well as design, integrate and benchmark battery cells with the aim of creating high-energy, long-lasting batteries.
“The challenge ahead is improving sodium-ion energy density so that it first matches and then exceeds that of phosphate-based lithium-ion batteries while minimizing and eliminating the use of all critical elements,” says LENS consortium director Venkat Srinivasan.
Venkat Srinivasan, William Mustain and Martin Freer appeared on a Physics World Live panel discussion about battery technology held on 21 November 2024, which you can watch online now
Stained glass is the most “physical” of all art forms. If you’ve ever been inside Chartres Cathedral in France or York Minster in the UK, you’ll know how such glass can transform a building by turning sunlight into gloriously captivating multicoloured patterns. What you might not realize, however, is that centuries of scientific innovation have forged this form of art.
Byzantine glaziers started making leaded glass windows back in the 6th century CE before the technique spread widely across Europe. But our ability to stain glass only emerged in the 14th century when medieval alchemists found that coating glass with silver nitrate and firing it in a kiln gave the material a range of orange and yellow hues.
Later, a range of other techniques were developed to produce various decorative effects, with stained glass becoming both an art and a craft. Particularly important has been the use of hydrofluoric acid – a poisonous and highly corrosive liquid – to strip off the surface of glass, layer by layer, to alter its colour and thickness.
Known as hydrofluoric acid etching, the technique is widely used by modern architectural glass artists. Beautiful patterns can be created by altering the strength and temperature of the acid and varying the time for which the glass is exposed to it. Materials like wax, bitumen and lead foil can also be used as resists to leave parts of the glass untouched.
Like other “glass artists”, I am an experimentalist of sorts. We use an empirical knowledge of glass to make beautiful objects – and sometimes even make new discoveries. In fact, some historians say that hydrofluoric acid was first created in 1670 by a German glassworker named Heinrich Schwanhardt.
While treating a mineral called fluorspar with acid, he saw that the lenses in his spectacles went cloudy – prompting him to start using the same reaction to etch glass. Only much later – in the late 18th century – did chemists carry out “proper” lab experiments to show that fluorspar (calcium fluoride) reacts with the acid to create what we now call hydrofluoric acid.
From the 19th century onwards, acid-etching techniques started to be used by numerous stained-glass artists and studios throughout Britain and Ireland. Dublin-born Harry Clarke (1889–1931) was the leading proponent of the hydrofluoric acid-etching technique, which he mastered in an exceptionally personal and imaginative manner.
Art of glass
I first came across acid etching in 2010 while studying glass and architecture at Central Saint Martins, which is part of the University of the Arts London. The technique intrigued me and I started wondering about its origins and how it works, from a scientific point of view. What chemical processes are involved? What happens if you vary how the acid is applied? And how can that create new decorative effects?
Unable to find full answers to my questions, I started carrying out my own experiments and investigations. I wanted to understand how fluorspar – which can be colourless, deep green or even purple – can be turned into hydrofluoric acid and what goes on at a chemical level when it etches glass.
During my investigations, which I published in 2014 in The Journal of Stained Glass (38 146), I was astonished to find references to glass in the famous lectures on quantum electrodynamics given by Richard Feynman. In those lectures, published in book form as QED: The Strange Theory of Light and Matter, Feynman explained the partial reflection of light by experimenting with blocks of glass.
He showed that the amount of light reflected varies with the thickness of the glass, pointing out that photons interact with electrons throughout the material, not just on the surface. “A piece of glass,” Feynman wrote, “is a terrible monster of complexity – huge numbers of electrons are jiggling about.”
In my own work, I’ve recently been experimenting with glass of different thickness to make a piece of art inspired by the packaging for a quantum chip made by Rigetti Computing. Entitled Per scientiam ad astra (through science to the stars), the artwork was displayed at the 2024 British Glass Biennale at the Ruskin Glass Centre in Stourbridge, UK – a historically significant area for glass-making that pioneered the creation of etched glass in the 19th century.
Rigetti Computing’s quantum chip
The quantum computers developed by US firm Rigetti Computing are based on superconducting qubits made from materials such as aluminium, indium and niobium. The qubits are manufactured using a mix of novel fabrication methods and well-established semiconductor and micro-electromechanical systems (MEMS) processing techniques. The quantum chip – containing the qubits and other components such as readout resonators – is carefully assembled inside a gold-plated copper packaging that connects it to a printed circuit board (PCB).
The PCB in turn routes the signals to microwave connectors, with the whole system cooled to below 10 millikelvin using dilution refrigeration. The environment in which the quantum bits operate is carefully engineered so that they don’t lose their coherence. Rigetti’s design could, in principle, be scaled up to create much larger and more reliable quantum processors with many more qubits.
The packaging for the quantum chip, on which Oksana Kondratyeva’s artwork is based, is a disc 81.5 mm in diameter and 12 mm deep (see image). With the chip at its centre, the packaging is mounted at the bottom of a tower-like structure that, along with the rest of the fridge and wiring, forms the fully assembled quantum computer. Signals are delivered to and from the chip to drive qubit operations and return measured results.
A quantum rose
Creating an artwork based on quantum technology might be an unusual thing to do, but when I saw a photo of the packaging for a quantum chip back in 2020, I was astounded by the elegant geometry of this enigmatic object, which holds the “brain” of the company’s quantum computer (see box above). It reminded me of the beautiful circular rose windows of medieval cathedrals, and I wanted to use glass to create a “quantum rose” for the 21st century. Later, Rigetti got in touch with me after my plans were reported on in Physics World in June 2022.
As you can imagine, hydrofluoric acid etching is an extremely dangerous technique, given how corrosive the liquid is. I acid-etch glass from the German company LambertsGlas in a specially equipped studio with a ventilation cabinet to extract fumes and I wear a protective suit with a respiratory mask. As you can see from the video below, I look more like an astronaut than an artist.
Acid etching can be done in lots of different ways (see Materials Today Proceedings 55 56) – but I prefer to apply the acid freely with a cotton or plastic brush, coining my technique “acid painting”. The resulting artwork, which took me several months to make, is just over a metre in diameter.
This video has no voice over. (Video courtesy: Space Production)
Mostly blue with a red focal point, the artwork constantly changes as you move around it. Visitors to the British Glass Biennale seemed to be attracted to it, with comments such as “empowering” and “ethereal”. Per scientiam ad astra will now move to a private residence that just happens to be not far from the UK’s National Quantum Computing Centre in Oxfordshire, where one of Rigetti’s quantum computers is housed.
Art–science crossover
Stained-glass windows were once “illuminated books” for people who could not read – mysterious transmitters of knowledge that told stories about human life. The same, in a way, is true of quantum computers, which are broadening our understanding of reality. And just as mathematical equations can have an inner beauty, so too do quantum computers through the myriad technological innovations that underpin them.
With 2025 being the International Year of Quantum Science and Technology, I hope my artwork raises interesting questions at the intersection between art and science, continuing the “two-cultures” dialogue introduced by C P Snow in 1959. Is it a metaphorical window into the secret architecture of the universe? Or is it a visualization of our reality, where Newtonian and quantum-mechanical worlds merge?
Working with stained glass requires an understanding of how materials behave but that knowledge will only get you so far. To reveal new regions of reality and its beauty, unexpectedness plays a role too. Stained-glass art is the convergence of certainty and uncertainty, where science and art come together. Art can unite people; and through the beauty in art, we can create a better reality.
Xenon nuclei change shape as they collide, transforming from soft, oval-shaped particles to rigid, spherical ones. This finding, which is based on simulations of experiments at CERN’s Large Hadron Collider (LHC), provides a first look at how the shapes of atomic nuclei respond to extreme conditions. While the technique is still at the theoretical stage, physicists at the Niels Bohr Institute (NBI) in Denmark and Peking University in China say that ultra-relativistic nuclear collisions at the LHC could allow for the first experimental observations of these so-called nuclear shape phase transitions.
The nucleus of an atom is made up of protons and neutrons, which are collectively known as nucleons. Like electrons, nucleons exist in different energy levels, or shells. To minimize the energy of the system, these shells take different shapes, with possibilities including pear, spherical, oval or peanut-shell-like formations. These shapes affect many properties of the atomic nucleus as well as nuclear processes such as the strong interactions between protons and neutrons. Being able to identify them is thus very useful for predicting how nuclei will behave.
Colliding pairs of 129Xe atoms at the LHC
In the new work, a team led by You Zhou at the NBI and Huichao Song at Peking University studied xenon-129 (129Xe). This isotope has 54 protons and 75 neutrons and is considered a relatively large atom, making its nuclear shape easier, in principle, to study than that of smaller atoms.
Usually, the nucleus of xenon-129 is oval-shaped (technically, it is a 𝛾-soft rotor). However, low-energy nuclear theory predicts that it can transition to a spherical, prolate or oblate shape under certain conditions. “We propose that to probe this change (called a shape phase transition), we could collide pairs of 129Xe atoms at the LHC and use the information we obtain to extract the geometry and shape of the initial colliding nuclei,” Zhou explains. “Probing these initial conditions would then reveal the shape of the 129Xe atoms after they had collided.”
A quark-gluon plasma
To test the viability of such experiments, the researchers simulated accelerating atoms to ultra-relativistic speeds – very close to the speed of light – equivalent to the energies involved in a typical particle-physics experiment at the LHC. At these energies, when nuclei collide with each other, their constituent protons and neutrons break down into smaller particles. These smaller particles are mainly quarks and gluons, and together they form a quark-gluon plasma, which is a liquid with virtually no viscosity.
Zhou, Song and colleagues modelled the properties of this “almost perfect” liquid using an advanced hydrodynamic model they developed called IBBE-VISHNU. According to these analyses, the Xe nuclei go from being soft and oval-shaped to rigid and spherical as they collide.
Studying shape transitions was not initially part of the researchers’ plan. The original aim of their work was to study conditions that prevailed in the first 10⁻⁶ seconds after the Big Bang, when the very early universe is thought to have been filled with a quark-gluon plasma of the type produced at the LHC. But after they realized that their simulations could shed light on a different topic, they shifted course.
“Our new study was initiated to address the open question of how nuclear shape transitions manifest in high-energy collisions,” Zhou explains, “and we also wanted to provide experimental insights into existing theoretical nuclear structure predictions.”
One of the team’s greatest difficulties lay in developing the complex models required to account for nuclear deformation and probe the structure of xenon and its fluctuations, Zhou tells Physics World. “There was also a need for compelling new observables that allow for a direct probe of the shape of the colliding nuclei,” he says.
Applications in both high- and low-energy nuclear physics
The work could advance our understanding of fundamental nuclear properties and the operation of the theory of quantum chromodynamics (QCD) under extreme conditions, Zhou adds. “The insights gleaned from this work could guide future nuclear collision experiments and influence our understanding of nuclear phase transitions, with applications extending to both high-energy nuclear physics and low-energy nuclear structure physics,” he says.
The NBI/Peking University researchers say that future experiments could validate the nuclear shape phase transitions they observed in their simulations. Expanding the study to other nuclei that could be collided at the LHC is also on the cards, says Zhou. “This could deepen our understanding of nuclear structure at ultra-short timescales of 10⁻²⁴ seconds.”
I have mentioned many times in this column the value of the business awards given by the Institute of Physics (IOP), which can be a real “stamp of approval” for firms developing new technology. Having helped to select the 2024 winners, I was delighted to see eight companies winning a main IOP Business Innovation Award this time round, bringing the total number of firms honoured over the last 13 years to 86. Some have won awards on more than one occasion, with FeTu being one of the latest to join this elite group.
Set up by Jonathan Fenton in 2016, FeTu originally won an IOP Business Start-up Award in 2020 for its innovative Fenton Turbine. According to Fenton, who is chief executive, it is the closest we have ever got to the ideal, closed-cycle reversible heat engine first imagined by thermodynamics pioneer Nicolas Carnot in 1824. The turbine, the firm claims, could replace compressors, air conditioners, fridges, vacuum pumps and heat pumps with efficiency savings across the board.
Back in 2020, it might have sounded like a “too-good-to-be-true” technology, but Fenton has sensibly set out to prove that’s not the case, with some remarkable results. The turbine is complex to describe, but the first version promised to cut the energy cost of compressing gases like air by 25%. That claim has already been proven in independent tests carried out by researchers at the University of Bath.
One challenge of any technology with many different applications is picking which to focus on first. Having decided to focus on a couple of unique selling points in large markets, FeTu has now won a 2024 Business Innovation Award for developing a revolutionary heat engine that can generate electrical power from waste heat and geothermal sources at temperatures as low as 40 °C. The market potential is huge because it is currently not possible to do this economically.
Innovative ideas
Another winner of an IOP Business Innovation Award is Oxford Ionics, a quantum-computing firm set up in 2019 by Chris Ballance and Tom Harty after they completed PhDs at the University of Oxford. The firm’s qubits are based on trapped ions, which traditionally have been controlled with lasers. It’s an approach that works well for small processors, but becomes untenable and error-prone as the processor scales up and the number of qubits increases.
Instead of lasers, Oxford Ionics’ trapped-ion processors use a proprietary, patented electronic system to control the qubits. It was for this system that the company was recognized by the IOP, along with its ability to scale the architecture so that the chips can be made in large quantities on standard semiconductor production lines. That’s essential if we are to build practical quantum computers.
Whilst it’s still early days in the commercialisation of quantum computing, Oxford Ionics is an exciting company to watch. It has already won contracts to supply the UK’s National Quantum Computing Centre at Harwell and has bagged a large contract with its partner Infineon Technologies AG in Munich to build a state-of-the-art portable quantum computer for Germany’s cybersecurity innovation agency. The two firms are one of three independent contractors selected by the agency, which is investing a total of €35m in the project.
I should also mention Dublin-based Equal1, which won the IOP’s £10,000 quantum Business Innovation and Growth (qBIG) Prize in 2024. Equal1 is developing rack-mountable quantum computers powered by a system that integrates quantum and classical components onto a single silicon chip using commercial fabrication processes. The company, which aims to develop compact quantum computers, also won 10 months of mentoring from the award’s sponsors Quantum Exponential.
Meanwhile, Covesion – a photonics and quantum components supplier founded in 2009 – has won an IOP Business Innovation Award for its magnesium-doped, periodically poled, lithium niobate (MgO:PPLN) crystals and waveguides. They allow light to be easily converted from one frequency to another, providing access to wavelengths that are not available from commercial laser sources.
With a manufacturing base in Southampton, Covesion works with customers and industry partners to help them design and make high-quality MgO:PPLN products used in a wide range of applications. These include quantum computing, communication, sensing and timing; frequency doubling of femtosecond lasers; mid-infrared generation; atom cooling; terahertz generation and biomedical imaging. The sheer breadth and global nature of the customer base is impressive.
Sounds promising
Among the companies to win an IOP Business Start-up Award is Metasonixx, based in Brighton. Spun off from the universities of Bristol and Sussex in 2019, the firm makes mass-produced acoustic metamaterial panels, which can dramatically attenuate sound (10 dB in its Sonoblind) and yet still allow air to flow freely (3 dB or 50% attenuation). That might seem counter-intuitive, but that’s where the innovation comes in and the panels can help with noise management and ventilation, allowing industrial ventilators and heat pumps to be more widely used.
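As a quick translation of those decibel figures: attenuation in decibels is related to the fraction of sound power transmitted by

$$
\text{attenuation (dB)} = 10\,\log_{10}\!\left(\frac{P_{\mathrm{in}}}{P_{\mathrm{out}}}\right),
$$

so a 3 dB drop means roughly half the power gets through, while 10 dB corresponds to a tenfold reduction.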
The company really got going in 2020, when it won a grant from UK Research and Innovation to see if its metamaterials could cut noise in hospitals to help patients recovering from COVID-19 and improve the well-being of staff. After Metasonixx won the Armourers and Brasiers Venture Prize in 2021 for its successes on COVID wards, the firm decided to mass-produce panels that could perform as well as traditional noise-reduction solutions but are modular and greener, with one-third of the mass and occupying one-twelfth of the space.
From a physics point of view, panels that can let air and light through in this way are interferential filters, but working over four doublings of frequency (or octaves). With manufacturing and first sales in 2023, the firm’s desk separators are now being tested in noisy offices worldwide. Metasonixx believes its products, which allow air to flow through them, could help to boost the use of industrial ventilators and heat pumps, thereby helping in the quest to meet net-zero targets.
Winning awards is not a new experience for Metasonixx, which also picked up a “Seal of Excellence” from the European Commission in 2023 and was honoured at Bristol’s Tech-Xpo in 2024. Its new IOP award will sit very nicely in this innovative company’s trophy cabinet.
In his next article, James McKenzie will look at the rest of the 2024 IOP Business Award winners in imaging and medical technology.
Physicists in the US have found an explanation for why electrons in a material called pentalayer moiré graphene carry fractional charges even in the absence of a magnetic field. This phenomenon is known as the fractional quantum anomalous Hall effect, and teams at the Massachusetts Institute of Technology (MIT), Johns Hopkins University and Harvard University/University of California, Berkeley have independently suggested that an interaction-induced topological “flat” band in the material’s electronic structure may be responsible.
Scientists already knew that electrons in graphene could, in effect, split into fractions of themselves in the presence of a very strong magnetic field. This is an example of the fractional quantum Hall effect, which occurs when a material’s Hall conductance is quantized at fractional multiples of e2/h.
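In symbols, that quantization condition reads

$$
\sigma_{xy} = \nu\,\frac{e^{2}}{h},
$$

where the Hall conductance σxy is fixed by the filling factor ν, which takes simple fractional values such as 1/3 or 2/5 in the fractional quantum Hall effect rather than the integers seen in the ordinary quantum Hall effect.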
Then, in February this year, an MIT team led by physicist Long Ju spotted the same effect in pentalayer moiré graphene. This material consists of a layer of two-dimensional hexagonal boron nitride (hBN) with five layers of graphene (carbon sheets just one atom thick) stacked on top of it. The graphene and hBN layers are twisted at a small angle with respect to each other, resulting in a moiré pattern that can induce contrasting properties, such as superconductivity and insulating behaviour, within the structure.
Answering questions
Although Ju and colleagues were the first to observe the fractional quantum anomalous Hall effect in graphene, their paper did not explain why it occurred. In the latest group of studies, other scientists have put forward a possible solution to the mystery.
According to MIT’s Senthil Todadri, the effect could stem from the fact that electrons in two-dimensional materials like graphene are confined in such small spaces that they start interacting strongly. This means that they can no longer be considered as independent charges that naturally repel each other. The Johns Hopkins team led by Ya-Hui Zhang and the Harvard/Berkeley team led by Ashvin Vishwanath and Daniel E Parker came to similar conclusions, and published their work in Physical Review Letters alongside that of the MIT team.
Crystal-like periodic patterns form an electronic “flat” band
Todadri and colleagues started their analyses with a reasonably realistic model of the pentalayer graphene. This model treats the inter-electron Coulomb repulsion in an approximate way, replacing the “push” of all the other electrons on any given electron with a single potential, Todadri explains. “Such a strategy is routinely employed in quantum mechanical calculations of, say, the structure of atoms, molecules or solids,” he notes.
The MIT physicists found that the moiré arrangement of pentalayer graphene induces a weak electron potential that forces electrons passing through it to arrange themselves in crystal-like periodic patterns that form a “flat” electronic band. This band is absent in calculations that do not account for electron–electron interactions, they say.
Such flat bands are especially interesting because electrons in them become “dispersionless” – that is, their kinetic energy is suppressed. As the electrons slow almost to a halt, their effective mass approaches infinity, leading to exotic topological phenomena as well as strongly correlated states of matter associated with high-temperature superconductivity and magnetism. Other quantum properties of solids such as fractional splitting of electrons can also occur.
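The connection between a flat band and a diverging effective mass follows from the standard band-theory relation

$$
m^{*} = \hbar^{2}\left(\frac{\mathrm{d}^{2}E}{\mathrm{d}k^{2}}\right)^{-1},
$$

so when the band energy E barely changes with crystal momentum k, the curvature in the denominator tends to zero and m* blows up – another way of saying the electrons grind to a halt.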
“Mountain and valley” landscapes
So what causes the topological flat band in pentalayer graphene to form? The answer lies in the “mountain and valley” landscapes that naturally appear in the electronic crystal. Electrons in this material experience these landscapes as pseudo-magnetic fields, which affect their motion and, in effect, do away with the need to apply a real magnetic field to induce the fractional Hall quantization.
“This interaction-induced topological (‘valley-polarized Chern-1’) band is also predicted by our theory to occur in the four- and six-layer versions of multilayer graphene,” Todadri says. “These structures may then be expected to host phases where electron fractions appear.”
In this study, the MIT team presented only a crude treatment of the fractional states. Future work, Todadri says, may focus on understanding the precise role of the moiré potential produced by aligning the graphene with a substrate. One possibility, he suggests, is that it simply pins the topological electron crystal in place. However, it could also stabilize the crystal by tipping its energy to be lower than a competing liquid state. Another open question is whether these fractional electron phenomena at zero magnetic field require a periodic potential in the first place. “The important next question is to develop a better theoretical understanding of these states,” Todadri tells Physics World.
At 3 a.m. one morning in June 1925, an exhausted, allergy-ridden 23-year-old climbed a rock at the edge of a small island off the coast of Germany in the North Sea. Werner Heisenberg, who was an unknown physics postdoc at the time, had just cobbled together, in crude and unfamiliar mathematics, a framework that would shortly become what we know as “matrix mechanics”. If we insist on pegging the birth of quantum mechanics to a particular place and time, Helgoland in June 1925 it is.
Heisenberg’s work a century ago is the reason why the United Nations has proclaimed 2025 to be the International Year of Quantum Science and Technology. It’s a global initiative to raise the public’s awareness of quantum science and its applications, with numerous activities in the works throughout the year. One of the most significant events for physicists will be a workshop running from 9–14 June on Helgoland, exactly 100 years on from the very place where quantum mechanics supposedly began.
Entitled “Helgoland 2025”, the event is designed to honour Heisenberg’s development of matrix mechanics, which organizers have dubbed “the first formulation of quantum theory”. The workshop, they say, will explore “the increasingly fruitful intersection between the foundations of quantum mechanics and the application of these foundations in real-world settings”. But why was Heisenberg’s work so vital to the development of quantum mechanics? Was it really as definitive as we like to think? And is the oft-repeated Helgoland story really true?
How it all began
The events leading up to Heisenberg’s trip can be traced back to the work of Max Planck in 1900. Planck was trying to produce a formula for how certain kinds of materials absorb and emit light at different frequencies. In what he later referred to as an “act of sheer desperation”, Planck found himself having to use the idea of the “quantum”, which implied that electromagnetic radiation is not continuous but can be absorbed and emitted only in discrete chunks.
Standing out as a smudge on the beautiful design of classical physics, the idea of quantization appeared of limited use. Some physicists called it “ugly”, “grotesque” and “distasteful”; it was surely a theoretical sticking plaster that could soon be peeled off. But the quantum proved indispensable, cropping up in more and more branches of physics, including the structure of the hydrogen atom, thermodynamics and solid-state physics. It was like an obnoxious visitor whom you try to expel from your house but can’t. Worse, its presence seemed to grow. The quantum, remarked one scientist at the time, was a “lusty infant”.
Attempts to domesticate that infant in the first quarter of the 20th century were made not only by Planck but other physicists too, such as Wolfgang Pauli, Max Born, Niels Bohr and Ralph Kronig. They succeeded only in producing rules for calculating certain phenomena that started with classical theory and imposed conditions. “Quantum theory” was like having instructions for how to get from place A to place B. What you really wanted was a “quantum mechanics” – a map that, working with one set of rules, showed you how to go from any place to any other.
Heisenberg was a young crusader in this effort. Born on 5 December 1901 – the year after Planck’s revolutionary discovery – Heisenberg had the character often associated with artists: dashing looks, good musicianship and a physical frailty that included a severe vulnerability to allergies. By the summer of 1923 he had finished his PhD under Arnold Sommerfeld at the Ludwig Maximilian University in Munich and was starting a postdoc with Born at the University of Göttingen.
Like others, Heisenberg was stymied in his attempts to develop a mathematical framework for the frequencies, amplitudes, orbitals, positions and momenta of quantum phenomena. Maybe, he wondered, the trouble was trying to cast these phenomena in a Newtonian-like visualizable form. Instead of treating them as classical properties with specific values, he decided to look at them in purely mathematical terms as operators acting on functions. It was then that an “unfortunate personal setback” occurred.
Destination Helgoland
Referring to a bout of hay fever that had wiped him out, Heisenberg asked Born for a two-week leave of absence from Göttingen and took a boat to Helgoland. The island, which lies some 50 km off Germany’s mainland, is barely 1 km2 in size. However, its strategic military location had given it an outsized history that saw it swapped several times between different European powers. Part of Denmark from 1714, the island was occupied by Britain in 1807 before coming under Germany’s control in 1890.
During the First World War, Germany turned the island into a military base and evacuated all its residents. By the time Heisenberg arrived, the soldiers had long gone and Helgoland was starting to recover its reputation as a centre for commercial fishing and a bracing tourist destination. Most importantly for Heisenberg, it had fresh winds and was remote from allergen producers.
Heisenberg arrived at Helgoland on Saturday 6 June 1925 coughing and sneezing, and with such a swollen face that his landlady decided he had been in a fight. She installed him in a quiet room on the second floor of her Gasthaus that overlooked the beach and the North Sea. But he didn’t stop working. “What exactly happened on that barren, grassless island during the next ten days has been the subject of much speculation and no little romanticism,” wrote historian David Cassidy in his definitive 1992 book Uncertainty: The Life and Science of Werner Heisenberg.
In Heisenberg’s telling, decades later, he kept turning over all he knew and began to construct equations of observables – of frequencies and amplitudes – in what he called “quantum-mechanical series”. He outlined a rough mathematical scheme, but one so awkward and clumsy that he wasn’t even sure it obeyed the conservation of energy, as it surely must. One night Heisenberg turned to that issue.
“When the first terms seemed to accord with the energy principle, I became rather excited,” he wrote much later in his 1971 book Physics and Beyond. But he was still so tired that he began to stumble over the maths. “As a result, it was almost three o’clock in the morning before the final result of my computations lay before me.” The work still seemed finished yet incomplete – it succeeded in giving him a glimpse of a new world though not one worked out in detail – but his emotions were weighted with fear and longing.
“I was deeply alarmed,” Heisenberg continued. “I had the feeling that, through the surface of atomic phenomena, I was looking at a strangely beautiful interior, and felt almost giddy at the thought that I now had to probe this wealth of mathematical structure nature had so generously spread out before me. I was far too excited to sleep and so, as a new day dawned, I made for the southern tip of the island, where I had been longing to climb a rock jutting out into the sea. I now did so without too much trouble, and waited for the sun to rise.”
What happened on Helgoland?
Historians are suspicious of Heisenberg’s account. In their 2023 book Constructing Quantum Mechanics Volume 2: The Arch 1923–1927, Anthony Duncan and Michel Janssen suggest that Heisenberg made “somewhat less progress in his visit to Helgoland in June 1925 than later hagiographical accounts of this episode claim”. They believe that Heisenberg, in Physics and Beyond, may “have misremembered exactly how much he accomplished in Helgoland four decades earlier”.
What’s more – as Cassidy wondered in Uncertainty – how could Heisenberg have been so sure that the result agreed with the conservation of energy without having carted all his reference books along to the island, which he surely had not? Could it really be, Cassidy speculated sceptically, that Heisenberg had memorized the relevant data?
Alexei Kojevnikov – another historian – even doubts that Heisenberg was entirely candid about the reasons behind his inspiration. In his 2020 book The Copenhagen Network: The Birth of Quantum Mechanics from a Postdoctoral Perspective, Kojevnikov notes that fleeing from strong-willed mentors such as Bohr, Born, Kronig, Pauli and Sommerfeld was key to Heisenberg’s creativity. “In order to accomplish his most daring intellectual breakthrough,” Kojevnikov writes, “Heisenberg had to escape from the authority of his academic supervisors into the temporary loneliness and freedom on a small island in the North Sea.”
Whatever did occur on the island, one thing is clear. “Heisenberg had his breakthrough,” decides Cassidy in his book. He left Helgoland 10 days after he arrived, returned to Göttingen, and dashed off a paper that was published in Zeitschrift für Physik in September 1925 (33 879). In the article, Heisenberg wrote that “it is not possible to assign a point in space that is a function of time to an electron by means of observable quantities.” He then suggested that “it seems more advisable to give up completely on any hope of an observation of the hitherto-unobservable quantities (such as the position and orbital period of the electron).”
To modern ears, Heisenberg’s comments may seem unremarkable. But his proposition would have been nearly unthinkable to anyone steeped in Newtonian mechanics. Strictly speaking, the idea of completely abandoning the observability of those quantities wasn’t quite right – under certain conditions it can make sense to speak of observing them – but his words certainly captured the direction he was taking.
The only trouble was that his scheme, with its “quantum-mechanical relations”, produced formulae that were “noncommutative” – a distressing asymmetry that was surely an incorrect feature in a physical theory. Heisenberg all but shoved this feature under the rug in his Zeitschrift für Physik article, where he relegated the point to a single sentence.
The more mathematically trained Born, on the other hand, sensed something familiar about the maths and soon recognized that Heisenberg’s bizarre “quantum-mechanical relations” with their strange tables were what mathematicians called matrices. Heisenberg was unhappy with that particular name for his work, and considered returning to what he had called “quantum-mechanical series”.
Fortunately, he didn’t, for it would have made the rationale for the Helgoland 2025 conference clunkier to describe. Born was delighted with the connection to traditional mathematics. In particular he found that when the matrix p associated with momentum and the matrix q associated with position are multiplied in different orders, the difference between them is proportional to Planck’s constant, h.
As Born wrote in his 1956 book Physics in My Generation: “I shall never forget the thrill I experienced when I succeeded in condensing Heisenberg’s ideas on quantum conditions in the mysterious equation pq – qp = h/2πi, which is the centre of the new mechanics and was later found to imply the uncertainty relations”. In February 1926, Born, Heisenberg and Jordan published a landmark paper that worked out the implications of this equation (Zeit. Phys. 35 557). At last, physicists had a map of the quantum domain.
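To see the non-commutativity Born captured in that equation made concrete, here is a minimal numerical sketch – not Born’s calculation, just an illustration using truncated harmonic-oscillator matrices, with the truncation size N an arbitrary choice – that checks pq – qp comes out as –iħ times the identity in units where ħ = 1:

```python
import numpy as np

# A minimal numerical sketch, not Born's original derivation: build truncated
# harmonic-oscillator matrices for position q and momentum p (units with
# hbar = 1) and check Born's relation pq - qp = h/(2*pi*i), i.e. -i*hbar.
N = 8                                          # truncation size, chosen arbitrarily
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # lowering operator
q = (a + a.T) / np.sqrt(2)                     # position matrix
p = 1j * (a.T - a) / np.sqrt(2)                # momentum matrix

comm = p @ q - q @ p                           # Born's pq - qp
print(np.round(np.diag(comm), 10))
# Every diagonal entry is -1j (that is, -i*hbar) except the last one, which is
# an artefact of chopping the infinite matrices off at N rows and columns.
```

The stray final entry is exactly the price of truncation: the full infinite matrices that Heisenberg, Born and Jordan worked with satisfy the relation everywhere.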
Almost four decades later in an interview with the historian Thomas Kuhn, Heisenberg recalled Pauli’s “extremely enthusiastic” reaction to the developments. “[Pauli] said something like ‘Morgenröte einer Neuzeit’,” Heisenberg told Kuhn. “The dawn of a new era.” But it wasn’t entirely smooth sailing after that dawn. Some physicists were unenthusiastic about Heisenberg’s new mechanics, while others were outright sceptical.
Yet successful applications kept coming. Pauli applied the equation to light emitted by the hydrogen atom and derived the Balmer formula, a rule that had been known empirically since the mid-1880s. Then, in one of the most startling coincidences in the history of science, the Austrian physicist Erwin Schrödinger produced a complete map of the quantum domain starting from a much more familiar mathematical basis called “wave mechanics”. Crucially, Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics turned out to be equivalent.
Even more fundamental implications followed. In an article published in Naturwissenschaften (14 899) in September 1926, Heisenberg wrote that our “ordinary intuition” does not work in the subatomic realm. “Because the electron and the atom possess not any degree of physical reality as the objects of our daily experience,” he said, “investigation of the type of physical reality which is proper to electrons and atoms is precisely the subject of quantum mechanics.”
Quantum mechanics, alarmingly, was upending reality itself, for the uncertainty it introduced was not only mathematical but “ontological” – meaning it had to do with the fundamental features of the universe. Early the next year, Heisenberg, in correspondence with Pauli, derived the equation Δp Δq ≥ h/4π, the “uncertainty principle”, which became the touchstone of quantum mechanics. The birth complications, however, persisted. Some even got worse.
Catalytic conference
A century on from Heisenberg’s visit to Helgoland, quantum mechanics still has physicists scratching their heads. “I think most people agree that we are still trying to make sense of even basic non-relativistic quantum mechanics,” admits Jack Harris, a quantum physicist at Yale University who is co-organizing Helgoland 2025 with Časlav Brukner, Steven Girvin and Florian Marquardt.
We really don’t fully understand the quantum world yet. We apply it, we generalize it, we develop quantum field theories and so on, but still a lot of it is uncharted territory.
Igor Pikovsky, Stevens Institute, New Jersey
“We really don’t fully understand the quantum world yet,” adds Igor Pikovsky from the Stevens Institute in New Jersey, who works in gravitational phenomena and quantum optics. “We apply it, we generalize it, we develop quantum field theories and so on, but still a lot of it is uncharted territory.” Philosophers and quantum physicists with strong opinions have debated interpretations and foundational issues for a long time, he points out, but the results of those discussions have been unclear.
Helgoland 2025 hopes to change all that. Advances in experimental techniques let us ask new kinds of fundamental questions about quantum mechanics. “You have new opportunities for studying quantum physics at completely different scales,” says Pikovsky. “You can make macroscopic, Schrödinger-cat-like systems, or very massive quantum systems to test. You don’t need to debate philosophically about whether there’s a measurement problem or a classical-quantum barrier – you can start studying these questions experimentally.”
One phenomenon fundamental to the puzzle of quantum mechanics is entanglement, which prevents the quantum state of a system from being described independently of the state of others. Thanks to the Einstein–Podolsky–Rosen (EPR) paper of 1935 (Phys. Rev. 47 777), Chien-Shiung Wu and Irving Shaknov’s experimental demonstration of entanglement in extended systems in 1949, and John Bell’s theorem of 1964 (Physics 1 195), physicists know that entanglement in extended systems is a large part of what’s so weird about quantum mechanics.
Understanding all that entanglement entails, in turn, has led physicists to realize that information is a fundamental physical concept in quantum mechanics. “Even a basic physical quantum system behaves differently depending on how information about it is stored in other systems,” Harris says. “That’s a starting point both for deep insights into what quantum mechanics tells us about the world, and also for applying it.”
Helgoland 2025: have you packed your tent?
Running from 9–14 June 2025 on the island where Werner Heisenberg did his pioneering work on quantum mechanics, the Helgoland 2025 workshop is a who’s who of quantum physics. Five Nobel laureates in the field of quantum foundations are coming. David Wineland and Serge Haroche, who won in 2012 for measuring and manipulating individual quantum systems, will be there. So too will Alain Aspect, John Clauser and Anton Zeilinger, who were honoured in 2022 for their work on quantum-information science.
There’ll be Charles Bennett and Gilles Brassard, who pioneered quantum cryptography, quantum teleportation and other applications, as well as quantum-sensing guru Carlton Caves. Researchers from industry are also intending to be present, including Krysta Svore, vice-president of Microsoft Quantum.
Other attendees work at the intersection of foundations and applications. Some study gravitation, mostly in quantum-gravity phenomenology, where the aim is to seek experimental signatures of quantum effects in gravity. Others work on quantum clocks, quantum cryptography, and innovative ways of controlling light, such as the squeezed light used at LIGO to detect gravitational waves.
The programme starts in Hamburg on 9 June with a banquet and a few talks. Attendees will then take a ferry to Helgoland the following morning for a week of lectures, panel discussions and poster sessions. All talks are plenary, but in the evenings panels of a half-dozen or so people will address bigger questions familiar to every quantum physicist but rarely discussed in research papers. What is it about quantum mechanics, for instance, that makes it so compatible with so many interpretations?
If you’re thinking of going, you’re almost certainly out of luck. Registration closed in April 2024, and hotel, Airbnb and Booking.com options are nearly exhausted. Participants are having to share double rooms or are being invited to camp on the beaches – with their own gear.
Helgoland 2025 will therefore focus on the two-way street between foundations and applications in what promises to be a unique event. “The conference is intended to be a bit catalytic,” Harris adds. “[There will be] people who didn’t realize that others were working on similar issues in different fields, and a lot of people who will never have met each other”. The disciplinary diversity will be augmented by the presence of students as well as poster sessions, which tend to bring in an even broader variety of research topics.
There will be people [at Helgoland] who work on black holes whose work is familiar to me but who I haven’t met yet.
Ana Maria Rey, University of Colorado, Boulder
One of those looking forward to such encounters is Ana Maria Rey – a theoretical physicist at the University of Colorado, Boulder, and a JILA fellow who studies quantum phenomena in ways that have improved atomic clocks and quantum computing. “There will be people who work on black holes whose work is familiar to me but who I haven’t met yet,” she says. Finding people should be easy: Helgoland is tiny and only a hand-picked group of people have been invited to attend (see box above).
What’s also unusual about Helgoland 2025 is that it has as many practically minded as theoretically minded participants. But that doesn’t faze Magdalena Zych, a physicist from Stockholm University in Sweden. “I’m biased because academically I grew up in Vienna, where Anton Zeilinger’s group always had people working on theory and applications,” she says.
Zych’s group has, for example, recently discovered a way to use the uncertainty principle to get a better understanding of the semi-classical space–time trajectories of composite particles. She plans to talk about this research at Helgoland, finding it appropriate given that it relies on Heisenberg’s principle, is a product of specific theoretical work and is valid more generally. “It relates to the arc of the conference, looking both backwards and forwards, and from theory to applications.”
Nathalie de Leon: heading for Helgoland
In June 2022, Nathalie de Leon, a physicist at Princeton University working on quantum computing and quantum metrology, was startled to receive an invitation to the Helgoland conference. “It’s not often you get [one] a full three years in advance,” says de Leon, who also found it unusual that participants had to attend for the entire six days. But she was not surprised at the composition of the conference with its mix of theorists, experimentalists and people applying what she calls the “weirder” aspects of quantum theory.
“When I was a graduate student [in the late 2000s], it was still the case that quantum theorists and researchers who built things like quantum computers were well aware of each other but they didn’t talk to each other much,” she recalls. “In their grant proposals, the physicists had to show they knew what the computer scientists were doing, and the computer scientists had to justify their work with appeals to physics. But they didn’t often collaborate.” De Leon points out that over the last five or 10 years, however, more and more opportunities for these groups to collaborate have emerged. “Companies like IBM, Google, QuEra and Quantinuum now have theorists and academics trying to develop the hardware to make quantum tech a practical reality,” she says.
Some quantum applications have even cropped up in highly sophisticated technical devices, such as the huge Laser Interferometer Gravitational Wave Observatory (LIGO). “A crazy amount of classical engineering was used to build this giant interferometer,” says de Leon, “which got all the way down to a minuscule sensitivity. Then as a last step the scientists injected something called squeezed light, which is a direct consequence of quantum mechanics and quantum measurement.” According to de Leon, that squeezing let us see something like eight times more of the universe. “It’s one of the few places where we get a real tangible advantage out of the strangeness of quantum mechanics,” she adds.
Other, more practical benefits are also bound to emerge from quantum information theory and quantum measurement. “We don’t yet have quantum technologies on the open consumer market in the same way we have lasers you can buy on Amazon for $15,” de Leon says. But groups gathering in Helgoland will give us a better sense of where everything is heading. “Things,” she adds, “are moving so fast.”
Sadly, participants will not be able to visit Heisenberg’s Gasthaus, nor any other building where he might have been. During the Second World War, Germany again relocated Helgoland’s inhabitants and turned the island into a military base. After the war, the Allies piled up unexploded ordnance on the island and set it off, in what is said to be one of the biggest conventional explosions in history. The razed island was then given back to its inhabitants.
We will not be 300 Heisenbergs going for hikes. [Attendees] certainly won’t be trying to get away from each other.
Jack Harris, Yale University
Helgoland still has rocky outcroppings at its southern end, one of which may or may not be the site of Heisenberg’s early morning climb and vision. But despite the powerful mythology of his story, participants at Helgoland 2025 are not being asked to herald another dawn. “We will not,” says Harris, “be 300 Heisenbergs going for hikes. They certainly won’t be trying to get away from each other.”
The historian of science Mario Biagioli once wrote an article entitled “The scientific revolution is undead”, underlining how arbitrary it is to pin key developments in science – no matter how influential or long-lasting – to specific beginnings and endings, for each new generation of scientists finds ever more to mine in the radical discoveries of predecessors. With so many people working on so many foundational issues set to be at Helgoland 2025, new light is bound to emerge. A century on, the quantum revolution is alive and well.
New constraints on a theory that says dark matter was created just after the Big Bang – rather than at the Big Bang – have been determined by Richard Casey and Cosmin Ilie at Colgate University in the US. The duo calculated the full range of parameters in which a “Dark Big Bang” could fit into the observed history of the universe. They say that evidence of this delayed creation could be found in gravitational waves.
Dark matter is a hypothetical substance that is believed to play an important role in the structure and dynamics of the universe. It appears to account for about 27% of the mass–energy in the cosmos and is part of the Standard Model of cosmology. However, dark matter particles have never been observed directly.
The Standard Model also says that the entire contents of the universe emerged nearly 14 billion years ago in the Big Bang. Yet in 2023, Katherine Freese and Martin Winkler at the University of Texas at Austin introduced a captivating new theory, which suggests that the universe’s dark matter may have been created after the Big Bang.
Evidence comes later on
Freese and Winkler pointed out that the presence of photons and normal matter (mostly protons and neutrons) can be inferred from almost immediately after the Big Bang. However, the earliest evidence for dark matter comes from later on, when it began to exert its gravitational influence on normal matter. As a result, the duo proposed that dark matter may have appeared in a second event called the Dark Big Bang.
“In Freese and Winkler’s model, dark matter particles can be produced as late as one month after the birth of our universe,” Ilie explains. “Moreover, dark matter particles produced via a Dark Big Bang do not interact with regular matter except via gravity. Thus, this model could explain why all attempts at detecting dark matter – either directly, indirectly, or via particle production – have failed.”
According to this theory, dark matter particles are generated by a certain type of scalar field. This is an energy field that has a single value at every point in space and time (a familiar example is the field describing gravitational potential energy). Initially, each point of this scalar field would have occupied a local minimum in its energy potential. However, these points could have then transitioned to lower-energy minima via quantum tunnelling. During this transition, the energy difference between the two minima would be released, producing particles of dark matter.
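As a purely illustrative toy model – the potential and parameters below are arbitrary choices, not those of Freese and Winkler or Casey and Ilie – the following sketch locates the two minima of an asymmetric double-well scalar potential and the energy density that tunnelling between them would release:

```python
from scipy.optimize import minimize_scalar

# Toy illustration only: an asymmetric double-well potential for a scalar
# field, with a higher-energy ("false") minimum and a lower-energy ("true")
# one. The form and parameter values are arbitrary, chosen for illustration.
lam, v, eps = 1.0, 1.0, 0.1

def V(phi):
    """Quartic double well tilted by a small linear term."""
    return 0.25 * lam * (phi**2 - v**2)**2 + eps * phi

false_vac = minimize_scalar(V, bounds=(0, 2 * v), method="bounded")    # higher well
true_vac = minimize_scalar(V, bounds=(-2 * v, 0), method="bounded")    # lower well

delta_V = false_vac.fun - true_vac.fun
print(f"Energy density released in the transition: {delta_V:.3f} (arbitrary units)")
# In the Dark Big Bang picture, this released energy is what gets converted
# into dark matter particles when the field tunnels to the true minimum.
```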
Consistent with observations
Building on this idea, Casey and Ilie looked at how predictions of the Dark Big Bang model could be consistent with astronomers’ observations of the early universe.
“By focusing on the tunnelling potentials that lead to the Dark Big Bang, we were able to exhaust the parameter space of possible cases while still allowing for many different types of dark matter candidates to be produced from this transition,” Casey explains. “Aside from some very generous mass limits, the only major constraint on dark matter in the Dark Big Bang model is that it interacts with everyday particles through gravity alone.” This is encouraging because this limited interaction is what physicists expect of dark matter.
For now, the duo’s results suggest that the Dark Big Bang is far less constrained by past observations than Freese and Winkler originally anticipated. As Ilie explains, their constraints could soon be put to the test.
“We examined two Dark Big Bang scenarios in this newly found parameter space that produce gravitational wave signals in the sensitivity ranges of existing and upcoming surveys,” he says. “In combination with those considered in Freese and Winkler’s paper, these cases could form a benchmark for gravitational wave researchers as they search for evidence of a Dark Big Bang in the early universe.”
Subtle imprint on space–time
If a Dark Big Bang happened, then the gravitational waves it produced would have left a subtle imprint on the fabric of space–time. With this clearer outline of the Dark Big Bang’s parameter space, several soon-to-be active observational programmes will be well equipped to search for these characteristic imprints.
“For certain benchmark scenarios, we show that those gravitational waves could be detected by ongoing or upcoming experiments such as the International Pulsar Timing Array (IPTA) or the Square Kilometre Array Observatory (SKAO). In fact, the evidence of background gravitational waves reported in 2023 by the NANOGrav experiment – part of the IPTA – could be attributed to a Dark Big Bang realization,” Casey says.
If these studies find conclusive evidence for Freese and Winkler’s original theory, Casey and Ilie’s analysis could ultimately bring us a step closer to a breakthrough in our understanding of the ever-elusive origins of dark matter.
The plant kingdom is full of intriguing ways to distribute seeds, from the dandelion pappus effortlessly drifting on air currents to the ballistic ejection mechanism of fern sporangia.
Not to be outdone, the squirting cucumber (Ecballium elaterium), which is native to the Mediterranean and is often regarded as a weed, has its own unique way of ejecting seeds.
When ripe, the ovoid-shaped fruits detach from the stem and, as they do so, explosively eject their seeds in a high-pressure jet of mucilage.
The process, which lasts just 30 milliseconds, launches the seeds at more than 20 metres per second with some landing 10 metres away.
“The first time we inspected this plant in the Botanic Garden, the seed launch was so fast that we weren’t sure it had happened,” recalls Oxford University mathematical biologist Derek Moulton. “It was very exciting to dig in and uncover the mechanism of this unique plant.”
The researchers found that in the weeks leading up to the ejection, fluid builds up inside the fruits so they become pressurised. Then just before seed dispersal, some of this fluid moves from the fruit to the stem, making it longer and stiffer.
This process crucially causes the fruit to rotate from being roughly vertical to an angle of about 45 degrees, improving the launch angle for the seeds.
During the first milliseconds of ejection, the tip of the stem holding the fruit then recoils away causing the fruit to counter-rotate and detach. As it does so, the pressure inside the fruit causes the seeds to eject at high speed.
Changing certain parameters in the researchers’ mathematical model, such as the stiffness of the stem, reveals that the mechanism has been fine-tuned to ensure optimal seed dispersal. For example, a thicker or stiffer stem would result in the seeds being launched horizontally and distributed over a narrower area.
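To get a rough feel for the numbers, here is a back-of-the-envelope ballistic sketch. The seed mass, size and drag coefficient are order-of-magnitude guesses rather than values from the study; the point is simply to show why a 20 m/s launch at about 45 degrees lands of order 10 m away rather than the roughly 40 m a drag-free projectile would manage:

```python
import numpy as np

# Rough sketch of seed flight with quadratic air drag. The seed mass, size,
# drag coefficient and launch height below are guesses for illustration, not
# measured values from the Oxford/Manchester study.
m, diameter, Cd, rho_air, g = 3e-5, 4e-3, 0.5, 1.2, 9.81   # kg, m, -, kg/m^3, m/s^2
A = np.pi * (diameter / 2)**2

v0, angle = 20.0, np.radians(45)
pos = np.array([0.0, 1.0])                    # launch from about 1 m above the ground
vel = v0 * np.array([np.cos(angle), np.sin(angle)])

dt = 1e-4
while pos[1] > 0:
    speed = np.linalg.norm(vel)
    drag = -0.5 * rho_air * Cd * A * speed * vel / m   # drag acceleration
    vel += (drag + np.array([0.0, -g])) * dt
    pos += vel * dt

print(f"Estimated range with drag: {pos[0]:.1f} m (vs ~{v0**2 / g:.0f} m in vacuum)")
```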
According to Manchester University physicist Finn Box, the findings could be used for more effective drug delivery systems “where directional release is crucial”.
What does an idea need to change the world? Physics drives scientific advancements in healthcare, green energy, sustainable materials and many other applications. However, to bridge the gap between research and real-world applications, physicists need to be equipped with entrepreneurship skills.
Many students dream of using their knowledge and passion for physics to change the world, but when it comes to developing your own product, it can be hard to know where to start. That’s where my job comes in – I have been teaching scientists and engineers entrepreneurship for more than 20 years.
Several of the world’s most successful companies, including Sony, Texas Instruments, Intel and Tesla Motors, were founded by physicists, and there are many contemporary examples too. For example, Unitary, an AI company that identifies misinformation and deepfakes, was founded by Sasha Haco, who has a PhD in theoretical physics. In materials science, Aruna Zhuma is the co-founder of Global Graphene Group, which manufactures single layers of graphene oxide for use in electronics. Zhuma has nearly 500 patents, the second largest number of any inventor in the field.
In the last decade quantum technology, which encompasses computing, sensing and communications, has spawned hundreds of start-ups, often spun out from university research. These include the cybersecurity firm ID Quantique, super-sensitive detectors from Single Quantum, and quantum computing from D-Wave. Overall, about 8–9% of students in the UK start businesses straight after they graduate, with just over half (58%) of these graduate entrepreneurs founding firms in their subject area.
However, even if you aren’t planning to set up your own business, entrepreneurship skills will be important no matter what you do with your degree. If you work in industry you will need to spot trends, understand customers’ needs and contribute to products and services. In universities, promotion often requires candidates to demonstrate “knowledge transfer”, which means working with partners outside academia.
Taking your ideas to the next level
The first step in kick-starting your entrepreneurship journey is to evaluate your existing experience and goals. Do you already have an idea that you want to take forward, or do you just want to develop skills that will broaden your career options?
If you’re exploring the possibilities of entrepreneurship you should look for curricular modules at your university. These are normally tailored to those with no previous experience and cover topics such as opportunity spotting, market research, basic finance, team building and intellectual property. In addition, in the UK at least, many postgraduate centres for doctoral training (CDTs) now offer modules in business and entrepreneurship as part of their training programmes. These courses sometimes give students the opportunity to take part in live company projects, which are a great way to gain skills.
You should also look out for extracurricular opportunities, from speaker events and workshops to more intensive bootcamps, competitions and start-up weekends. There is no mark or grade for these events, so they allow students to take risks and experiment.
Like any kind of research, commercializing physics requires resources such as equipment and laboratory space. For early-stage founders, access to business incubators – organizations that provide shared facilities – is invaluable. You would use an incubator at a relatively early stage to finalize your product, and they can be found in many universities.
Accelerator programmes, which aim to fast-track your idea once you have a product ready and usually run for a defined length of time, can also be beneficial. For example, the University of Southampton has the Future Worlds Programme based in the physical sciences faculty. Outside academia, the European Space Agency has incubators for space technology ideas at locations throughout Europe, and the Institute of Physics also has workspace and an accelerator programme for recently graduated physicists and especially welcomes quantum technology businesses. The Science and Technology Facilities Council (STFC) CERN Business Incubation Centre focuses on high-energy physics ideas and grants access to equipment that would be otherwise unaffordable for a new start-up.
More accelerator programmes supporting physics ideas include Duality, which is a Chicago-based 12-month accelerator programme for quantum ideas; Quantum Delta NL, based in the Netherlands, which provides programmes and shared facilities for quantum research; and Techstars Industries of the Future, which has locations worldwide.
Securing your future
It’s the multimillion-pound deals that make headlines, but to get to that stage you will need to gain investors’ confidence, securing smaller funds to take your idea forward step by step. This early funding could be used to protect your intellectual property with a patent, make a prototype or road test your technology.
Since early-stage businesses are high risk, this money is likely to come from grants and awards, with commercial investors such as venture capital or banks holding back until they see the idea can succeed. Funding can come from government agencies like the STFC in the UK, or US government scheme America’s Seed Fund. These grants are for encouraging innovation, applied research and for finding disruptive new technology, and no return is expected. Early-stage commercial funding might come from organizations such as Seedcamp, and some accelerator programmes offer funding, or at least organize a “demo day” on completion where you can showcase your venture to potential investors.
While you’re a student, you can take advantage of the venture competitions that run at many universities, where students pitch an idea to a panel of judges. The prizes can be significant, ranging from £10k to £100k, and often come with extra support such as lab space, mentoring and help filing patents. Some of these programmes are physics-specific: for example, the Eli and Britt Harari Enterprise Award at the University of Manchester, sponsored by physics graduate Eli Harari (founder of SanDisk), awards funding for graphene-related ideas.
Finally, remember that physics innovations don’t always happen in the lab. Theoretical physicist Stephen Wolfram founded Wolfram Research in 1988, which makes computational technology including the answer engine Wolfram Alpha.
Making the grade
There are many examples of students and recent graduates making a success from entrepreneurship. Wai Lau is a Manchester physics graduate who also has a master’s of enterprise degree. He started a business focused on digital energy management, identifying energy waste, while learning about entrepreneurship. His business Cloud Enterprise has now branched out to a wider range of digital products and services.
Computational physics graduate Gregory Mead at Imperial College London started Musicmetric, which uses complex data analytics to track and rank music artists and is used by music labels and the artists themselves. He was able to get funding from Imperial Innovations after making a prototype, and Musicmetric was eventually bought by Apple.
AssetCool’s thermal metaphotonics technology uses novel coatings to cool overhead power lines, reducing power losses. The company entered the Venture Further competition at the University of Manchester and has since secured a £2.25m investment from Gritstone Capital.
Entrepreneurship skills are being increasingly recognized as necessary for physics graduates. In the UK, the IOP Degree Accreditation Framework, the standard for physics degrees, expects students to have “business awareness, intellectual property, digital media and entrepreneurship skills”.
Thinking about taking the leap into business can be daunting, but university is the ideal time to think about entrepreneurship. You have nothing to lose and plenty of support available.
Climate science and astronomy have much in common, and this has inspired the astrophysicist Travis Rector to call on astronomers to educate themselves, their students and the wider public about climate change. In this episode of the Physics World Weekly podcast, Rector explains why astronomers should listen to the concerns of the public when engaging about the science of global warming. And, he says the positive outlook of some of his students at the University of Alaska Anchorage makes him believe that a climate solution is possible.
Rector says that some astronomers are reluctant to talk to the public about climate change because they have not mastered the intricacies of the science. Indeed, one aspect of atmospheric physics that has challenged scientists is the role that clouds play in global warming. My second guest this week is the science journalist Michael Allen, who has written a feature article for Physics World called “Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change”. He talks about climate feedback mechanisms that involve clouds and how aerosols affect clouds and the climate.
A new algorithmic technique could enhance the output of fusion reactors by smoothing out the laser pulses used to compress hydrogen to fusion densities. Developed by physicists at the University of Bordeaux, France, a simulated version of the new technique has already been applied to conditions at the US National Ignition Facility (NIF) and could also prove useful at other laser fusion experiments.
A major challenge in fusion energy is keeping the fuel – a mixture of the hydrogen isotopes deuterium and tritium – hot and dense enough for fusion reactions to occur. The two main approaches to doing this confine the fuel with strong magnetic fields or intense laser light and are known respectively as magnetic confinement fusion and inertial confinement fusion (ICF). In either case, when the pressure and temperature become high enough, the hydrogen nuclei fuse into helium. Since the energy released in this fusion reaction is, in principle, greater than the energy needed to get it going, fusion has long been viewed as a promising future energy source.
In 2022, scientists at NIF became the first to demonstrate “energy gain” from fusion, meaning that the fusion reactions produced more energy than was delivered to the fuel target via the facility’s system of super-intense lasers. The method they used was somewhat indirect. Instead of compressing the fuel itself, NIF’s lasers heated a gold container known as a hohlraum with the fuel capsule inside. The appeal of this so-called indirect-drive ICF is that it is less sensitive to inhomogeneities in the laser’s illumination. These inhomogeneities arise from interactions between the laser beams and the highly compressed plasma produced during fusion, and they are hard to get rid of.
In principle, though, direct-drive ICF is a stronger candidate for a fusion reactor, explains Duncan Barlow, a postdoctoral researcher at Bordeaux who led the latest research effort. This is because it couples more energy into the target, meaning it can deliver more fusion energy per unit of laser energy.
Reducing computing cost and saving time
To work out which laser configurations are the most homogeneous, researchers typically use iterative radiation-hydrodynamic simulations. These are time-consuming and computationally expensive (requiring around 1 million CPU hours per evaluation). “This expense means that only a few evaluations were run, and each step was best performed by an expert who could use her or his experience and the data obtained to pick the next configurations of beams to test the illumination uniformity,” Barlow says.
The new approach, he explains, relies on approximating some of the laser beam–plasma interactions by considering isotropic plasma profiles. This means that each iteration uses less than 1000 CPU hours, so thousands of configurations can be evaluated for the cost of a single simulation using the old method. Barlow and his colleagues also created an automated method to quantify improvements and select the most promising step forward for the process.
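Cheap evaluations are what make this kind of automated search feasible. The toy loop below scores randomly perturbed beam configurations with a stand-in Gaussian-spot illumination model and a simple rms-uniformity metric; it is not the Bordeaux team’s code or their isotropic-plasma calculation, only a sketch of a greedy configuration search enabled by fast evaluations:

```python
import numpy as np

# Schematic greedy search over beam pointings. The illumination model and
# uniformity metric are toy stand-ins, not radiation-hydrodynamic physics.
rng = np.random.default_rng(0)

# Sample points on the surface of the spherical target.
theta = np.arccos(rng.uniform(-1, 1, 2000))
phi = rng.uniform(0, 2 * np.pi, 2000)
points = np.stack([np.sin(theta) * np.cos(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(theta)], axis=1)

def illumination(beam_dirs, width=0.6):
    """Toy illumination: sum of Gaussian spots centred on each beam direction."""
    dots = points @ beam_dirs.T                     # cos(angle) to each beam
    return np.exp((dots - 1) / width**2).sum(axis=1)

def nonuniformity(beam_dirs):
    intensity = illumination(beam_dirs)
    return intensity.std() / intensity.mean()       # rms deviation, lower is better

# Start from random beam directions and keep perturbations that improve uniformity.
n_beams = 24
beams = rng.normal(size=(n_beams, 3))
beams /= np.linalg.norm(beams, axis=1, keepdims=True)
best = nonuniformity(beams)

for _ in range(2000):
    trial = beams + 0.05 * rng.normal(size=beams.shape)
    trial /= np.linalg.norm(trial, axis=1, keepdims=True)
    score = nonuniformity(trial)
    if score < best:
        beams, best = trial, score

print(f"Final rms non-uniformity of the toy illumination: {best:.4f}")
```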
The researchers demonstrated their technique using simulations of a spherical target at NIF. These simulations showed that the optimized configuration should produce convergent shocks in the fuel target, resulting in pressures three times higher (and densities almost two times higher) than in the original experiment. Although their simulations focused on NIF, they say it could also apply to other pellet geometries and other facilities.
Developing tools
The study builds on work by Barlow’s supervisor, Arnaud Colaïtis, who developed a tool for simulating laser–plasma interactions that incorporates a phenomenon known as cross-beam energy transfer (CBET), which contributes to inhomogeneities. Even with this and other such tools, however, Barlow explains that fusion scientists have long struggled to define optimal illumination configurations when the system deviates from a simple mathematical description. “My supervisor recognized the need for a new solution, but it took us a year of further development to identify such a methodology,” he says. “Initially, we were hoping to apply neural networks – similar to image recognition – to speed up the technique, but we realized that this required prohibitively large training data.”
As well as working on this project, Barlow is also involved in a French project called Taranis that aims to use ICF to produce energy – an approach known as inertial fusion energy (IFE). “I am applying the methodology from my ICF work in a new way to ensure the robust, uniform drive of targets with the aim of creating a new IFE facility and eventually a power plant,” he tells Physics World.
A broader physics application, he adds, would be to incorporate more laser–plasma instabilities beyond CBET that are non-linear and normally too expensive to model accurately with radiation-hydrodynamic simulations. Some examples include stimulated Brillouin scattering, stimulated Raman scattering and two-plasmon decay. “The method presented in our work, which is detailed in Physical Review Letters, is a great accelerated scheme for better evaluating these laser-plasma instabilities, their impact for illumination configurations and post-shot analysis,” he says.
All eyes were on the election of Donald Trump as US president earlier this month – a win that overshadowed two big appointments in physics. First, the particle physicist Jun Cao took over as director of China’s Institute of High Energy Physics (IHEP) in October, succeeding Yifang Wang, who had held the job since 2011.
Over the last decade, IHEP has emerged as an important force in particle physics, with plans to build a huge 100 km-circumference machine called the Circular Electron Positron Collider (CEPC). Acting as a “Higgs factory”, such a machine would be hundreds of times bigger and pricier than any project IHEP has ever attempted.
But China is serious about its intentions, aiming to present a full CEPC proposal to the Chinese government next year, with construction starting two years later and the facility opening in 2035. If the CEPC does open on that schedule, China could leapfrog the rest of the particle-physics community.
China’s intentions will be one pressing issue facing the British particle physicist Mark Thomson, 58, who was named as the 17th director-general at CERN earlier this month. He will take over in January 2026 from current CERN boss Fabiola Gianotti, who will finish her second term next year. Thomson will have a decisive hand in the question of what – and where – the next particle-physics facility should be.
CERN is currently backing the 91 km-circumference Future Circular Collider (FCC), several times bigger than the Large Hadron Collider (LHC). An electron–positron collider designed to study the Higgs boson in unprecedented detail, it could later be upgraded to a hadron collider, dubbed FCC-hh. But with Germany already objecting to the FCC’s steep £12bn price tag, Thomson will have a tough job eking out extra cash for it from CERN member states. He’ll also be busy ensuring the upgraded LHC, known as the High-Luminosity LHC, is ready as planned by 2030.
I wouldn’t dare tell Thomson how to do his job, but Physics World did once ask previous CERN directors-general what skills are needed as lab boss. Crucial, they said, were people management, delegation, communication and the ability to speak multiple languages. Physical stamina was deemed a vital attribute too, with extensive international travel and late-night working required.
One former CERN director-general even cited the need to “eat two lunches the same day to satisfy important visitors”. Squeezing double dinners in will probably be the least of Thomson’s worries.
Fortunately, I bumped into Thomson at an Institute of Physics meeting in London earlier this week, where he agreed to do an interview with Physics World. So you can be sure we’ll get Thomson to put his aims and priorities as the next CERN boss on record. Stay tuned…
A new imaging technique that takes standard two-dimensional (2D) radio images and reconstructs them as three-dimensional (3D) ones could tell us more about structures such as the jet-like features streaming out of galactic black holes. According to the technique’s developers, it could even call into question physical models of how radio galaxies formed in the first place.
“We will now be able to obtain information about the 3D structures in polarized radio sources whereas currently we only see their 2D structures as they appear in the plane of the sky,” explains Lawrence Rudnick, an observational astrophysicist at the University of Minnesota, US, who led the study. “The analysis technique we have developed can be performed not only on the many new maps to be made with powerful telescopes such as the Square Kilometre Array and its precursors, but also from decades of polarized maps in the literature.”
Analysis of data from the MeerKAT radio telescope array
In their new work, Rudnick and colleagues in Australia, Mexico, the UK and the US studied polarized light data from the MeerKAT radio telescope array at the South African Radio Astronomy Observatory. They exploited an effect called Faraday rotation, which rotates the angle of polarized radiation as it travels through a magnetized ionized region. By measuring the amount of rotation for each pixel in an image, they can determine how much material that radiation passed through.
In the simplest case of a uniform medium, says Rudnick, this information tells us the relative distance between us and the emitting region for that pixel. “This allows us to reconstruct the 3D structure of the radiating plasma,” he explains.
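In that uniform-medium case the arithmetic is simple: the polarization angle rotates as χ = χ₀ + RM λ², and the rotation measure RM is proportional to the electron density, the line-of-sight magnetic field and the path length. The sketch below is illustrative only – the assumed density and field values are arbitrary, and the real analysis must handle non-uniform foregrounds – but it shows how a per-pixel change in angle between two wavelengths becomes a relative depth:

```python
# Simplified per-pixel depth estimate under the uniform-medium assumption.
# The electron density and field strength are arbitrary illustrative values.
K = 0.81                  # rad m^-2 per (cm^-3 * microgauss * pc), standard constant
n_e, B_par = 0.01, 1.0    # assumed uniform electron density (cm^-3) and field (uG)

def rotation_measure(chi1, chi2, lam1, lam2):
    """Rotation measure (rad/m^2) from polarization angles (rad) at two wavelengths (m)."""
    return (chi2 - chi1) / (lam2**2 - lam1**2)

def relative_depth(chi1, chi2, lam1=0.20, lam2=0.21):
    """Line-of-sight path length (pc) for one pixel, assuming a uniform medium."""
    return rotation_measure(chi1, chi2, lam1, lam2) / (K * n_e * B_par)

# Example: a pixel whose angle rotates by 0.05 rad between 20 cm and 21 cm.
print(f"{relative_depth(0.0, 0.05):.0f} pc of magnetized plasma along the line of sight")
```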
An indication of the position of the emitting region
The new study builds on a previous effort that focused on a specific cluster of galaxies for which the researchers already had cubes of data representing its 2D appearance in the sky, plus a third axis given by the amount of Faraday rotation. In the latest work, they decided to look at this data in a new way, viewing the cubes from different angles.
“We realized that the third axis was actually giving us an indication of the position of the emitting region,” Rudnick says. “We therefore extended the technique to situations where we didn’t have cubes to start with, but could re-create them from a pair of 2D images.”
There is a problem, however, in that the polarization angle can also rotate as the radiation travels through regions of space that are anything but uniform, including our own Milky Way galaxy and other intervening media. “In that case, the amount of rotation doesn’t tell us anything about the actual 3D structure of the emitting source,” Rudnick adds. “Separating out this information from the rest of the data is perhaps the most difficult aspect of our work.”
Shapes of structures are very different in 3D
Using this technique, Rudnick and colleagues were able to determine the line-of-sight orientation of active galactic nuclei (AGN) jets as they are expelled from a massive black hole at the centre of the Fornax A galaxy. They were also able to observe how the materials in these jets interact with “cosmic winds” (essentially larger-scale versions of the magnetic solar wind streaming from our own Sun) and other space weather, and to analyse the structures of magnetic fields inside the jets from the M87 galaxy’s black hole.
The team found that the shapes of structures as inferred from 2D radio images were sometimes very different from those that appear in the 3D reconstructions. Rudnick notes that some of the mental “pictures” we have in our heads of the 3D structure of radio sources will likely turn out to be wrong after they are re-analysed using the new method. One good example in this study was a radio source that, in 2D, looks like a tangled string of filaments filling a large volume. When viewed in 3D, it turns out that these filamentary structures are in fact confined to a band on the surface of the source. “This could change the physical models of how radio galaxies are formed, basically how the jets from the black holes in their centres interact with the surrounding medium,” Rudnick tells Physics World.
A plan to use millions of smartphones to map out real-time variations in Earth’s ionosphere has been tested by researchers in the US. Developed by Brian Williams and colleagues at Google Research in California, the system could improve the accuracy of global navigation satellite systems (GNSSs) such as GPS and provide new insights into the ionosphere.
A GNSS uses a network of satellites to broadcast radio signals to ground-based receivers. Each receiver calculates its position based on the arrival times of signals from several satellites. These signals first pass through Earth’s ionosphere, which is a layer of weakly-ionized plasma about 50–1500 km above Earth’s surface. As a GNSS signal travels through the ionosphere, it interacts with free electrons and this slows down the signals slightly – an effect that depends on the frequency of the signal.
The problem is that the free electron density is not constant in either time or space. It can spike dramatically during solar storms and it can also be affected by geographical factors such as distance from the equator. The upshot is that variations in free electron density can lead to significant location errors if not accounted for properly.
To deal with this problem, navigation satellites send out two separate signals at different frequencies. These are received by dedicated monitoring stations on Earth’s surface, and the difference between the arrival times of the two frequencies is used to create real-time maps of the free electron density of the ionosphere. Such maps can then be used to correct location errors. However, these monitoring stations are expensive to install and tend to be concentrated in wealthier regions of the world. This results in large gaps in ionosphere maps.
Dual-frequency sensors
In their study, Williams’ team took advantage of the fact that many modern mobile phones have sensors that detect GNSS signals at two different frequencies. “Instead of thinking of the ionosphere as interfering with GPS positioning, we can flip this on its head and think of the GPS receiver as an instrument to measure the ionosphere,” Williams explains. “By combining the sensor measurements from millions of phones, we create a detailed view of the ionosphere that wouldn’t otherwise be possible.”
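The underlying measurement is straightforward in principle: the ionospheric group delay scales as 40.3·TEC/f², so differencing the pseudoranges a receiver records at two GNSS frequencies gives the slant total electron content (TEC) along the signal path. Here is a minimal sketch using the GPS L1 and L5 frequencies; the example pseudorange values are invented, and real handset data would still need the calibration and aggregation steps described below:

```python
# Minimal dual-frequency TEC sketch. The pseudorange numbers in the example
# are made up; only the frequencies and the 40.3 constant are standard.
F_L1 = 1575.42e6   # Hz
F_L5 = 1176.45e6   # Hz

def slant_tec(p_l1, p_l5):
    """Slant TEC (electrons/m^2) from pseudoranges (m) measured at L1 and L5."""
    return (p_l5 - p_l1) * F_L1**2 * F_L5**2 / (40.3 * (F_L1**2 - F_L5**2))

# Example: the L5 pseudorange reads 3.2 m longer than L1 for the same satellite.
tec = slant_tec(20_000_000.0, 20_000_003.2)
print(f"Slant TEC ≈ {tec / 1e16:.1f} TEC units")   # 1 TECU = 1e16 electrons/m^2
```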
This is not a simple task, however, because individual smartphones are not designed for mapping the ionosphere. Their antennas are much less efficient than those of dedicated monitoring stations and the signals that smartphones receive are often distorted by surrounding buildings – and even users’ bodies. Also, these measurements are affected by the design of the phone and its GNSS hardware.
The big benefit of using smartphones is that their ownership is ubiquitous across the globe – including in developing regions such as India, Africa, and Southeast Asia. “In these parts of the world, there are still very few dedicated scientific monitoring stations that are being used by scientists to generate ionosphere maps,” says Williams. “Phone measurements provide a view of parts of the ionosphere that isn’t otherwise possible.”
The team’s proposal involves creating a worldwide network comprising millions of smartphones that will each carry out error correction measurements using the dual-frequency signals from GNSS satellites. Although each individual measurement will be relatively poor, the large number of measurements can be used to improve the overall accuracy of the map.
Simultaneous calibration
“By combining measurements from many phones, we can simultaneously calibrate the individual sensors and produce a map of ionosphere conditions, leading to improved location accuracy, and a better understanding of this important part of the Earth’s atmosphere,” Williams explains.
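A crude picture of that aggregation step, with synthetic numbers standing in for real phone data (the real pipeline also calibrates per-device biases and converts slant to vertical TEC, which this sketch ignores), might look like this:

```python
import numpy as np

# Toy aggregation: bin very noisy slant-TEC readings from many phones by map
# cell and take a robust median per cell. All data here are synthetic.
rng = np.random.default_rng(1)

n_obs, n_cells = 100_000, 500
cells = rng.integers(0, n_cells, n_obs)                     # map cell of each reading
true_tec = 10 + 20 * np.sin(cells / n_cells * np.pi)        # underlying ionosphere (TECU)
observed = true_tec + rng.normal(0, 15, n_obs)              # individual readings are noisy

tec_map = np.array([np.median(observed[cells == c]) for c in range(n_cells)])
truth = 10 + 20 * np.sin(np.arange(n_cells) / n_cells * np.pi)
print(f"Mean absolute per-cell error after aggregation: "
      f"{np.abs(tec_map - truth).mean():.2f} TECU (single-phone noise was 15 TECU)")
```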
In their initial tests of the system, the researchers aggregated ionosphere measurements from millions of Android devices around the world. Crucially, there was no need to identify individual devices contributing to the study – ensuring the privacy and security of users.
Williams’ team was able to map a diverse array of variations in Earth’s ionosphere. These included plasma bubbles over India and South America; the effects of a small solar storm over North America; and a depletion in free electron density over Europe. These observations doubled the coverage area of existing maps and boosted resolution when compared to maps made using data from monitoring stations.
If such a smartphone-based network is rolled out, ionosphere-related location errors could be reduced by several metres – which would be a significant advantage to smartphone users.
“For example, devices could differentiate between a highway and a parallel rugged frontage road,” Williams predicts. “This could ensure that dispatchers send the appropriate first responders to the correct place and provide help more quickly.”
Waveguide-based structures can solve partial differential equations by mimicking elements in standard electronic circuits. This novel approach, developed by researchers at Newcastle University in the UK, could boost efforts to use analogue computers to investigate complex mathematical problems.
Many physical phenomena – including heat transfer, fluid flow and electromagnetic wave propagation, to name just three – can be described using partial differential equations (PDEs). Apart from a few simple cases, these equations are hard to solve analytically, and sometimes even impossible. Mathematicians have developed numerical techniques such as finite difference or finite-element methods to solve more complex PDEs. However, these numerical techniques require a lot of conventional computing power, even after using methods such as mesh refinement and parallelization to reduce calculation time.
Alternatives to numerical computing
To address this, researchers have been investigating alternatives to numerical computing. One possibility is electromagnetic (EM)-based analogue computing, where calculations are performed by controlling the propagation of EM signals through a materials-based processor. These processors are typically made up of optical elements such as Bragg gratings, diffractive networks and interferometers as well as optical metamaterials, and the systems that use them are termed “metatronic” by analogy with more familiar electronic circuit elements.
The advantage of such systems is that because they use EM waves, computing can take place literally at light speeds within the processors. Systems of this type have previously been used to solve ordinary differential equations, and to perform operations such as integration, differentiation and matrix multiplication.
Some mathematical operations can also be computed with electronic systems – for example, with grid-like arrays of “lumped” circuit elements (that is, components such as resistors, inductors and capacitors that produce a predictable output from a given input). Importantly, these grids can emulate the mesh elements that feature in the finite-element method of solving various types of PDEs numerically.
Recently, researchers demonstrated that this emulation principle also applies to photonic computing systems. They did this using the splitting and superposition of EM signals within an engineered network of dielectric waveguide junctions known as photonic Kirchhoff nodes. At these nodes, a combination of photonics structures, such as ring resonators and X-junctions, can similarly imitate lumped circuit elements.
Interconnected metatronic elements
In the latest work, Victor Pacheco-Peña of Newcastle’s School of Mathematics, Statistics and Physics and colleagues showed that such waveguide-based structures can be used to calculate solutions to PDEs that take the form of the Helmholtz equation ∇²f(x,y) + k²f(x,y) = 0. This equation is used to model many physical processes, including the propagation, scattering and diffraction of light and sound, as well as the interactions of light and sound with resonators.
Unlike in previous setups, however, Pacheco-Peña’s team exploited a grid-like network of parallel plate waveguides filled with dielectric materials. This structure behaves like a network of interconnected T-circuits, or metatronic elements, with the waveguide junctions acting as sampling points for the PDE solution, Pacheco-Peña explains. “By carefully manipulating the impedances of the metatronic circuits connecting these points, we can fully control the parameters of the PDE to be solved,” he says.
The researchers used this structure to solve various boundary value problems by inputting signals to the network edges. Such problems frequently crop up in situations where information from the edges of a structure is used to infer details of physical processes in other regions in it. For example, by measuring the electric potential at the edge of a semiconductor, one can calculate the distribution of electric potential near its centre.
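A numerical counterpart of such a boundary-value problem helps show what the waveguide network computes in hardware: fix the field on the edges of a grid and solve the discrete Helmholtz equation for the interior nodes. The grid size, wavenumber and boundary data below are arbitrary illustrative choices, and the finite-difference scheme is a standard one rather than the authors’ metatronic formulation:

```python
import numpy as np

# Finite-difference analogue of a closed Helmholtz boundary-value problem:
# solve (laplacian + k^2) f = 0 on a square grid given boundary values of f.
# Grid size, k and boundary data are arbitrary illustrative choices.
N, h, k = 21, 1.0 / 20, 5.0
f = np.zeros((N, N))
f[0, :] = 1.0                       # drive one edge, keep the other edges at zero

interior = [(i, j) for i in range(1, N - 1) for j in range(1, N - 1)]
index = {ij: n for n, ij in enumerate(interior)}
A = np.zeros((len(interior), len(interior)))
b = np.zeros(len(interior))

for (i, j), n in index.items():
    A[n, n] = -4.0 + (k * h)**2     # discrete (laplacian + k^2) at the node itself
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = (i + di, j + dj)
        if nb in index:
            A[n, index[nb]] = 1.0   # interior neighbour: unknown
        else:
            b[n] -= f[nb]           # boundary neighbour: known value, move to RHS

sol = np.linalg.solve(A, b)
for (i, j), n in index.items():
    f[i, j] = sol[n]
print(f"Field at the grid centre: {f[N // 2, N // 2]:.4f}")
```

In the metatronic version, the waveguide junctions play the role of these grid nodes and the impedances of the connecting circuits encode the equation’s parameters, so the “solve” step happens physically as the signals propagate.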
Pacheco-Peña says the new technique can be applied to “open” boundary problems, such as calculating how light focuses and scatters, as well as “closed” ones, like sound waves reflecting within a room. However, he acknowledges that the method is not yet perfect because some undesired reflections at the boundary of the waveguide network distort the calculated PDE solution. “We have identified the origin of these reflections and proposed a method to reduce them,” he says.
In this work, which is detailed in Advanced Photonics Nexus, the researchers numerically simulated the PDE solving scheme at microwave frequencies. In the next stages of their work, they aim to extend their technique to higher frequency ranges. “Previous works have demonstrated metatronic elements working in these frequency ranges, so we believe this should be possible,” Pacheco-Peña tells Physics World. “This might also allow the waveguide-based structure to be integrated with silicon photonics or plasmonic devices.”
UK physics “deep tech” could be missing out on almost £1bn of investment each year. That is according to a new report by the Institute of Physics (IOP), which publishes Physics World. It finds that venture capital investors often struggle to invest in high-innovation physics industries given the lack of the “one-size-fits-all” commercialisation pathway that is seen in other areas such as biotech.
According to the report, physics-based businesses add about £230bn to the UK economy each year and employ more than 2.7 million full-time employees. The UK also has one of the largest venture-capital markets in Europe and the highest rates of spin-out activity, especially in biotech.
Despite this, however, venture capital investment in “deep tech” physics – start-ups whose business model is based on high-tech innovation or significant scientific advances – remains low, attracting £7.4bn or 30% of UK science venture-capital investment.
To find out the reasons for this discrepancy, the IOP interviewed science-led businesses as well as 32 leading venture capital investors. These discussions revealed that many investors are confused about certain aspects of physics-based start-ups, which often do not follow the familiar development lifecycle seen in other areas like biotech.
Physics businesses are not, for example, always able to transition from being tech-focused to being product-led in the early stages of development, which prevents venture capitalists from committing large amounts of money. Another issue is that venture capitalists are less familiar with the technologies, timescales and “returns profile” of physics deep tech.
The IOP report estimates that if the full investment potential of physics deep tech is unlocked then it could result in an extra £4.5bn of additional funding over the next five years. In a foreword to the report, Hermann Hauser, the tech entrepreneur and founder of Acorn Computers, highlights “uncovered issues within the system that are holding back UK venture capital investment” into physics-based tech. “Physics deep-tech businesses generate huge value and have unique characteristics – so our national approach to finance for these businesses must be articulated in ways that recognise their needs,” writes Hauser.
Physics deep tech is central to the UK’s future prosperity
Tom Grinyer
At the same time, investors see a lot of opportunity in subjects such as quantum and semiconductor physics as well as artificial intelligence and nuclear fusion. Jo Slota-Newson, a managing partner at Almanac Ventures who co-wrote the report, says there is “huge potential” for physics deep-tech businesses but “venture capital funds are being held back from raising and deploying capital to support this crucial sector”.
The IOP is now calling for a coordinated effort from government, investors as well as the business and science communities to develop “investment pathways” to address the issues raised in the report. For example, the UK government should ensure grant and debt-financing options are available to support physics tech at “all stages of development”.
Slota-Newson, who has a background in science including a PhD in chemistry from the University of Cambridge, says that such moves should be “at the heart” of the UK’s government’s plans for growth. “Investors, innovators and government need to work together to deliver an environment where at every stage in their development there are opportunities for our deep tech entrepreneurs to access funding and support,” adds Slota-Newson. “If we achieve that we can build the science-driven, innovative economy, which will provide a sustainable future of growth, security and prosperity.”
The report also says that the IOP should play a role by continuing to highlight successful physics deep-tech businesses and to help them attract investment from both the UK and international venture-capital firms. Indeed, Tom Grinyer, group chief executive officer of the IOP, says that getting the model right could “supercharge the UK economy as a global leader in the technologies that will define the next industrial revolution”.
“Physics deep tech is central to the UK’s future prosperity — the growth industries of the future lean very heavily on physics and will help both generate economic growth and help move us to a lower carbon, more sustainable economy,” says Grinyer. “By leveraging government support, sharing information better and designing our financial support of this key sector in a more intelligent way we can unlock billions in extra investment.”
That view is backed by Hauser. “Increased investment, economic growth, and solutions to some of our biggest societal challenges [will move] us towards a better world for future generations,” he writes. “The prize is too big to miss”.
Noise pollution is an increasingly common feature of modern life, affecting both humans and wildlife. An occasional loud noise may be merely an inconvenience, but regular exposure can have adverse effects on human health that go well beyond mild irritation.
As noise pollution worsens, researchers are working to mitigate its impact with new sound-absorbing materials. A team headed by the Agency for Science, Technology and Research (A*STAR) in Singapore has now developed a new approach to the problem: absorbing sound waves using the triboelectric effect.
The World Health Organization defines noise pollution as noise levels above 65 dB, and one in five Europeans is regularly exposed to levels considered harmful to their health. “The adverse impacts of airborne noise on human health are a growing concern, including disturbing sleep, elevating stress hormone levels, inciting inflammation and even increasing the risk of cardiovascular diseases,” says Kui Yao, senior author on the study.
Passive provides the best route
Mitigating noise requires converting the mechanical energy in acoustic waves into another form. For this, passive sound absorbers are a better option than active versions because they require less maintenance and consume no power, so they need far fewer supporting components.
Previous work from Yao’s research group showed that the piezoelectric effect – whereby a material generates an electric charge when subjected to mechanical stress – can convert mechanical energy into electricity and could be used for passive sound absorption. However, the researchers postulated that the triboelectric effect – the transfer of electrical charge when two surfaces come into contact – could be more effective for absorbing low-frequency noise.
The triboelectric effect is more commonly applied for harvesting mechanical energy, including acoustic energy. But unlike when used for energy harvesting, the use of the triboelectric effect in noise mitigation applications is not limited by the electronics around the material, which can cause impedance mismatching and electrical leakage. For sound absorbers, therefore, there’s potential to create a device with close to 100% efficient triboelectric conversion of energy.
Exploiting the triboelectric effect
Yao and colleagues developed a fibrous polypropylene/polyethylene terephthalate (PP/PET) composite foam that uses the triboelectric effect and in situ electrical energy dissipation to absorb low-frequency sound waves. In this foam, sound is converted into electricity through embedded electrically conductive elements, and this electricity is then dissipated into heat and removed from the material.
The energy dissipation mechanism requires triboelectric pairing materials with a large difference in charge affinity (the tendency to gain or lose charge from/to the other material). The larger the difference between the two fibre materials in the foam, the better the acoustic absorption performance due to the larger triboelectric effect.
To understand the effectiveness of different foam compositions for absorbing and converting sound waves, the researchers designed an acoustic impedance model to analyse the underlying sound absorption mechanisms. “Our theoretical analysis and experimental results show superior sound absorption performance of triboelectric energy dissipator-enabled composite foams over common acoustic absorbing products,” explains Yao.
The researchers first tested the fibrous PP/PET composite foam theoretically and experimentally and found that it had a high noise reduction coefficient (NRC) of 0.66 (over a broad low-frequency range). This translates to a 24.5% improvement in sound absorption performance compared with sound absorption foams that don’t utilize the triboelectric effect.
On the back of this result, the researchers validated their process further by testing other material combinations. This included: a PP/polyvinylidene fluoride (PVDF) foam with an NRC of 0.67 and 22.6% improvement in sound absorption performance; a glass wool/PVDF foam with an NRC of 0.71 and 50.6% improvement in sound absorption performance; and a polyurethane/PVDF foam with an NRC of 0.79 and 43.6% improvement in sound absorption performance.
All the improvements are relative to non-triboelectric counterparts, whose sound absorption performance varies from composition to composition – hence the non-linear relationship between the percentage improvements and the NRC values. The foams also showed a sound absorption performance of 0.8 NRC at 800 Hz and around 1.0 NRC for sound waves above 1.4 kHz when benchmarked against commercially available absorber materials.
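For readers unfamiliar with the metric, the short Python sketch below shows how an NRC is conventionally obtained: a normal-incidence absorption coefficient is calculated from a surface’s specific acoustic impedance, and the coefficients at 250, 500, 1000 and 2000 Hz are averaged and rounded to the nearest 0.05. This is a generic textbook calculation with made-up impedance values – it is not the A*STAR team’s acoustic impedance model or their measured data.

```python
import numpy as np

RHO_C_AIR = 415.0  # characteristic impedance of air (Pa s/m), roughly at 20 °C


def absorption_coefficient(surface_impedance):
    """Normal-incidence absorption from a (complex) specific surface impedance."""
    reflection = (surface_impedance - RHO_C_AIR) / (surface_impedance + RHO_C_AIR)
    return 1.0 - abs(reflection) ** 2


def noise_reduction_coefficient(alpha_by_freq):
    """NRC: mean absorption at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05."""
    avg = np.mean([alpha_by_freq[f] for f in (250, 500, 1000, 2000)])
    return round(avg * 20) / 20


# Made-up impedance values (Pa s/m) for a hypothetical porous absorber
impedances = {250: 2000 + 1500j, 500: 1400 + 900j, 1000: 1000 + 500j, 2000: 700 + 250j}
alphas = {f: absorption_coefficient(z) for f, z in impedances.items()}
print({f: round(a, 2) for f, a in alphas.items()})
print("NRC =", noise_reduction_coefficient(alphas))  # about 0.65 for these values
```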
When asked about the future of the sound absorbers, Yao tells Physics World: “We are continuing to improve the performance properties and seeking collaborations for adoption in practical applications”.
For all of us concerned about climate change, 2023 was a grim year. According to the World Meteorological Organisation (WMO), it was the warmest year documented so far, with records broken – and in some cases smashed – for ocean heat, sea-level rise, Antarctic sea-ice loss and glacier retreat.
Capping off the warmest 10-year period on record, global average near-surface temperature hit 1.45 °C above pre-industrial levels. “Never have we been so close – albeit on a temporary basis at the moment – to the 1.5 °C lower limit of the Paris Agreement on climate change,” said WMO secretary-general Celeste Saulo in a statement earlier this year.
The heatwaves, floods, droughts and wildfires of 2023 are clear signs of the increasing dangers of the climate crisis. As we look to the future and wonder how much the world will warm, accurate climate models are vital.
For the physicists who build and run these models, one major challenge is figuring out how clouds are changing as the world warms, and how those changes will affect the climate system. According to the Intergovernmental Panel on Climate Change (IPCC), these cloud feedbacks are the biggest source of uncertainty in predictions of future climate change.
Cloud cover, high and low
Clouds play a key role in the climate system because they have a profound impact on the Earth’s radiation budget – the balance between the energy coming in as solar radiation and the energy going back out to space, which comprises both reflected (shortwave) solar radiation and thermal (longwave) radiation emitted by the Earth.
1 Earth’s energy budget
How energy flows into and away from the Earth, based on data from multiple sources including NASA’s CERES satellite instrument, which measures reflected solar and emitted infrared radiation fluxes. All values are fluxes in watts per square metre, averaged over 10 years of data. First published in 2014.
“Even a subtle change in global cloud properties could be enough to have a noticeable effect on the global energy budget and therefore the amount of warming,” explains climate scientist Paulo Ceppi of Imperial College London, who is an expert on the impact of clouds on global climate.
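Ceppi’s point can be illustrated with a back-of-the-envelope calculation. The Python sketch below uses the standard zero-dimensional energy-balance relations – absorbed solar flux S(1 − α)/4 balanced against emitted flux σT⁴ – to show that raising the planetary albedo by just 0.01, roughly the sort of subtle shift a change in cloud cover could produce, alters the absorbed solar flux by about 3.4 W/m², comparable in magnitude to the radiative forcing from a doubling of CO2. The numbers are illustrative assumptions, not values from Ceppi’s work.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0       # total solar irradiance (W m^-2)


def energy_balance(albedo):
    """Global-mean absorbed solar flux and the effective temperature that balances it."""
    absorbed = S0 * (1.0 - albedo) / 4.0          # W m^-2, averaged over the sphere
    t_eff = (absorbed / SIGMA) ** 0.25            # K, from sigma * T^4 = absorbed
    return absorbed, t_eff


for albedo in (0.30, 0.31):   # a 0.01 increase stands in for a small rise in cloud cover
    absorbed, t_eff = energy_balance(albedo)
    print(f"albedo = {albedo:.2f}: absorbed = {absorbed:6.1f} W/m^2, T_eff = {t_eff:5.1f} K")
# The ~3.4 W/m^2 difference is of the same order as the forcing from doubling CO2.
```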
A key factor in this dynamic is “cloud fraction” – a measure that climate scientists use to quantify the proportion of the Earth’s surface covered by cloud at a given time. Cloud fraction is determined from satellite imagery: for each pixel of a cloud mask (typically at 1 km resolution), the fraction of the pixel covered by cloud is recorded, and these values are averaged over the region of interest (figure 2).
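As a concrete illustration of that definition, the minimal Python sketch below computes the cloud fraction of a toy scene from a binary cloud mask (1 = cloudy pixel, 0 = clear). The tiny array is invented for the example and is not real satellite data.

```python
import numpy as np

# A toy 4 x 6 binary cloud mask: 1 = pixel flagged cloudy, 0 = clear.
# Real products (such as the MODIS cloud mask) work on ~1 km pixels over the whole globe.
cloud_mask = np.array([
    [1, 1, 0, 0, 0, 1],
    [1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0],
])

cloud_fraction = cloud_mask.mean()   # cloudy pixels divided by total pixels
print(f"cloud fraction = {cloud_fraction:.2f}")   # 0.46 for this toy scene
```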
Apart from the amount of cover, the altitude of clouds and their optical thickness also matter. Higher, cooler clouds absorb more of the thermal energy radiating from the Earth’s surface, and therefore have a greater greenhouse warming effect than low clouds. They also tend to be thinner, so they let more sunlight through, and overall they have a net warming effect. Low clouds, on the other hand, have a weak greenhouse effect but tend to be thicker and reflect more solar radiation, so they generally have a net cooling effect.
2 Cloud fraction
These maps show what fraction of an area was cloudy on average each month, according to measurements collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. MODIS collects information in gridded boxes, or pixels. Cloud fraction is the portion of each pixel that is covered by clouds. Colours range from blue (no clouds) to white (totally cloudy).
The band of persistent clouds around the equator is the Intertropical Convergence Zone – where the easterly trade winds in the Northern and Southern Hemispheres meet, pushing warm, moist air high into the atmosphere. The air expands and cools, and the water vapour condenses into clouds and rain. The cloud band shifts slightly north and south of the equator with the seasons. In tropical countries, this shifting of the zone is what causes rainy and dry seasons.
Video and data courtesy: NASA Earth Observations
As the climate warms, cloud properties are changing, altering the radiation budget and influencing the amount of warming. Indeed, there are two key changes: rising cloud tops and a reduction in low cloud amount.
The best understood effect, Ceppi explains, is that as global temperatures increase, clouds rise higher into the troposphere, the lowermost layer of the atmosphere. This is because the troposphere expands as it warms, extending to greater altitudes. Over the last 40 years the top of the troposphere, known as the tropopause, has risen by about 50 metres per decade (Sci. Adv. 10.1126/sciadv.abi8065).
“You are left with clouds that rise higher up on average, so have a greater greenhouse warming effect,” Ceppi says. He adds that modelling data and satellite observations support the idea that cloud tops are rising.
Conversely, coverage of low clouds, which reflect sunlight and cool the Earth’s surface, is decreasing with warming. This reduction is mainly in marine low clouds over tropical and subtropical regions. “We are talking a few per cent, so not something that you would necessarily notice with your bare eyes, but it’s enough to have an effect of amplifying global warming,” he adds.
These changes in low clouds are partly responsible for some of the extreme ocean heatwaves seen in recent years (figure 3). While the mechanisms behind these events are complex, one known driver is this reduction in low cloud cover, which allows more solar radiation to hit the ocean (Science 325 460).
“It’s cloud feedback on a more local scale,” Ceppi says. “So, the ocean surface warms locally and that prompts low cloud dissipation, which leads to more solar radiation being absorbed at the surface, which prompts further warming and therefore amplifies and sustains those events.”
3 Ocean heat
Sea surface temperature anomaly (°C) for the month of June 2023, relative to the 1991–2020 reference period. The global ocean experienced an average daily marine heatwave coverage of 32%, well above the previous record of 23% in 2016. At the end of 2023, most of the global ocean between 20° S and 20° N had been in heatwave conditions since early November.
Despite these insights, several questions remain unanswered. For example, Ceppi explains that while we know that low cloud changes will amplify warming, the strength of these effects needs further investigation, to reduce the uncertainty range.
Also, as high clouds move higher, there may be other important changes, such as shifts in optical thickness, which is a measure of how much light is scattered or absorbed by cloud droplets, instead of passing through the atmosphere. “We are a little less certain about what else happens to [high clouds],” says Ceppi.
Diurnal changes
It’s not just the spatial distribution of clouds that impacts climate. Recent research has found an increasing asymmetry in cloud-cover changes between day and night. Simply put, daytime clouds tend to cool Earth’s surface by reflecting solar radiation, while at night clouds trap thermal radiation and have a warming effect. This shift in diurnal distribution could create a feedback loop that amplifies global warming.
By analysing satellite observations and data from the sixth phase of the Coupled Model Intercomparison Project (CMIP6) – which incorporates historical data collected between 1970 and 2014 as well as projections up to the year 2100 – the researchers concluded that this diurnal asymmetry is largely due to rising concentrations of greenhouse gases that make the lower troposphere more stable, which in turn increases the overall heating.
Fewer clouds form during the day, thereby reducing the amount of shortwave radiation that is reflected away, while night-time clouds are more stable, which increases the longwave greenhouse effect. “Our study shows that this asymmetry causes a positive feedback loop that amplifies global warming,” says Johannes Quaas, a climate scientist at Leipzig University in Germany and one of the study’s authors. The growing asymmetry is mainly driven by a daytime increase in turbulence in the lower troposphere as the climate warms, meaning that clouds are less likely to form and remain stable during the day.
Mixed-phase clouds
Climate models are affected by more than just the distribution of clouds in space. What also matters is the distribution of liquid water and ice within clouds. In fact, researchers have found that the way in which models simulate this effect influences their predictions of warming in response to greenhouse gas emissions.
So-called “mixed-phase” clouds – three-phase colloidal systems containing water vapour, ice particles and supercooled liquid droplets – are ubiquitous in the troposphere. They are found at all latitudes, from the polar regions to the tropics, and play an important role in the climate system.
As the atmosphere warms, mixed-phase clouds tend to shift from ice to liquid water. This transition makes these clouds more reflective, enhancing their cooling effect on the Earth’s surface – a negative feedback that dampens global warming.
In 2016 Trude Storelvmo, an atmospheric scientist at the University of Oslo in Norway, and her colleagues made an important discovery: many climate models overestimate this negative feedback (Geophys. Res. Lett. 10.1029/2023GL105053). Indeed, the models often simulate clouds with too much ice and not enough liquid water. This error exaggerates the cooling effect from the phase transition. Essentially, the clouds in these simulations have too much ice to lose, causing the models to overestimate the increase in their reflectiveness as they warm.
One problem is that these models oversimplify cloud structure, failing to capture the true heterogeneity of mixed-phase clouds. Satellite, balloon and aircraft observations reveal that these clouds are not uniformly mixed, either vertically or horizontally. Instead, they contain pockets of ice and liquid water, leading to complex interactions that are inadequately represented in the simulations. As a result, they overestimate ice formation and underestimate liquid cloud development.
Storelvmo’s work also found that, initially, increased cloud reflectivity has a strong effect that helps mitigate global warming. But as the atmosphere continues to warm, the increase in reflectiveness slows. This shift is intuitive: as the clouds become more liquid, they have less ice to lose. At some point they become predominantly liquid, eliminating the phase transition. The clouds cannot become any more liquid – and thus any more reflective – and warming accelerates.
Liquid cloud tops
Earlier this year, Storelvmo and colleagues carried out a new study, using satellite data to examine the vertical composition of mixed-phase clouds. The team discovered that, globally, these clouds are more liquid at the top (Commun. Earth Environ. 5 390).
Storelvmo explains that this top cloud layer is important as “it is the first part of the cloud that radiation interacts with”. When the researchers adjusted climate models to correctly capture this vertical composition, it had a significant impact, triggering an additional degree of warming in a “high-carbon emissions” scenario by the end of this century, compared with current climate projections.
“It is not inconceivable that we will reach temperatures where most of [the negative feedback from clouds] is lost, with current CO2 emissions,” says Storelvmo. The point at which this happens is unclear, but is something that scientists are actively working on.
The study also revealed that while changes to mixed-phase clouds in the northern mid-to-high latitudes mainly influence the climate in the northern hemisphere, changes to clouds at the same latitudes in the south have global implications.
“When we modify clouds in the southern extratropics, that’s communicated all the way to the Arctic – it’s actually influencing warming in the Arctic,” says Storelvmo. The reasons for this are not fully understood, but Storelvmo says other studies have seen this effect too.
“It’s an open and active area of research, but it seems that the atmospheric circulation helps pass on perturbations from the Southern Ocean much more efficiently than northern perturbations,” she explains.
The aerosol problem
As well as generating the greenhouse gases that drive the climate crisis, fossil fuel burning also produces aerosols. The resulting aerosol pollution is a huge public health issue. The recent “State of Global Air Report 2024” from the Health Effects Institute found that globally eight million people died because of air pollution in 2021. Dirty air is also now the second-leading cause of death in children under five, after malnutrition.
To tackle these health implications, many countries and organizations have introduced air-quality clean-up policies. But cleaning up air pollution has an unfortunate side-effect: it exacerbates the climate crisis. Indeed, a recent study has even warned that aggressive aerosol mitigation policies will hinder our chances of keeping global warming below 2 °C (Earth’s Future 10.1029/2023EF004233).
When you add small pollution particles to clouds, explains Jim Haywood, an atmospheric scientist at the University of Exeter in the UK, it creates “clouds that are made up of a larger number of small cloud droplets and those clouds are more reflective”. The shrinking of the cloud droplets can also suppress precipitation, keeping more liquid water in the clouds. The clouds therefore last longer, cover a greater area and become more reflective.
But if atmospheric aerosol concentrations are reduced, so too are these reflective, planet-cooling effects. “This masking effect by the aerosols is taken out and we unveil more and more of the full greenhouse warming,” says Quaas.
A good example of this is recent policy aimed at cleaning up shipping fuels by lowering sulphur concentrations. At the start of 2020 the International Maritime Organisation introduced regulations that slashed the limit on sulphur content in fuels from 3.5% to 0.5%.
Haywood explains that this has reduced the additional reflectivity that this pollution created in clouds and caused a sharp increase in global warming rates. “We’ve done some simulations with climate models, and they seem to be suggestive of at least three to four years acceleration of global warming,” he adds.
Overall, models suggest that if we remove all the world’s polluting aerosols, we can expect around 0.4 °C of additional warming, says Quaas. He acknowledges that we must improve air quality “because we cannot just accept people dying and ecosystems deteriorating” – but in doing so, we must also be prepared for this additional warming. More work is needed, however, “because the current uncertainty is too large”, he continues. That uncertainty is around 50%, according to Quaas, which means that slashing aerosol pollution could cause anywhere from 0.2 to 0.6 °C of additional warming.
Haywood says that while current models do a relatively good job of representing how aerosols reduce cloud droplet size and increase cloud brightness, they do a poor job of showing how aerosols affect cloud fraction.
Cloud manipulation
The fact that aerosols cool the planet by brightening clouds opens an obvious question: could we use aerosols to deliberately manipulate cloud properties to mitigate climate change?
“There are more recent proposals to combat the impacts, or the worst of the impacts of global warming, through either stratospheric aerosol injection or marine cloud brightening, but they are really in their infancy and need to be understood an awful lot better before any kind of deployment can even be considered,” says Haywood. “You need to know not just how the aerosols might interact with clouds, but also how the cloud then interacts with the climate system and the [atmospheric] teleconnections that changing cloud properties can induce.”
Haywood recently co-authored a position paper, together with a group of atmospheric scientists in the US and Europe, arguing that a programme of physical science research is needed to evaluate the viability and risks of marine cloud brightening (Sci. Adv. 10 eadi8594).
A proposed form of solar radiation management, known as marine cloud brightening, would involve injecting aerosol particles into low-level, liquid marine clouds – mainly those covering large areas of subtropical oceans – to increase their reflectiveness (figure 4).
Most marine cloud-brightening proposals suggest using saltwater spray as the aerosol. In theory, when sprayed into the air the saltwater would evaporate to produce fine haze particles, which would then be transported by air currents into cloud. Once in the clouds, these particles would increase the number of cloud droplets, and so increase cloud brightness.
4 Marine cloud brightening
In this proposal, ship-based generators would ingest seawater and produce fine aerosol haze droplets with an equivalent dry diameter of approximately 50 nm. In optimal conditions, many of these haze droplets would be lofted into the cloud by updrafts, where they would modify cloud microphysics processes, such as increasing droplet number concentrations, suppressing rain formation, and extending the coverage and lifetime of the clouds. At the cloud scale, the degree of cloud brightening and surface cooling would depend on how effectively the droplet number concentrations can be increased, droplet sizes reduced, and cloud amount and lifetime increased.
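The physics behind the expected brightening can be sketched with the classic Twomey relations: for a fixed amount of liquid water, cloud optical depth scales roughly as the cube root of the droplet number concentration, and cloud albedo can be estimated with a simple two-stream approximation. The Python below is a minimal illustration under those textbook assumptions – the baseline optical depth and droplet concentrations are placeholder values, and it is not a simulation of any proposed deployment.

```python
def cloud_albedo(optical_depth):
    """Two-stream estimate of albedo for a non-absorbing cloud layer."""
    return optical_depth / (optical_depth + 7.7)


def brightened_albedo(tau_base, n_base, n_seeded):
    """Twomey scaling: at fixed liquid water path, optical depth ~ N^(1/3)."""
    tau_seeded = tau_base * (n_seeded / n_base) ** (1.0 / 3.0)
    return cloud_albedo(tau_base), cloud_albedo(tau_seeded)


# Placeholder values: a marine stratocumulus deck with optical depth 10 and
# 50 droplets per cubic centimetre, seeded up to 100 and 200 droplets/cm^3.
for n_seeded in (100, 200):
    before, after = brightened_albedo(tau_base=10.0, n_base=50.0, n_seeded=n_seeded)
    print(f"N: 50 -> {n_seeded} /cm^3, albedo: {before:.3f} -> {after:.3f} (+{after - before:.3f})")
```

Even this crude estimate shows why the idea is attractive – doubling the droplet concentration brightens the toy cloud deck by several per cent – although, as discussed below, real clouds respond far less predictably.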
Graham Feingold, a cloud physicist at the US National Oceanic and Atmospheric Administration (NOAA) and an author on the position paper, says that a key challenge lies in predicting how additional particles will affect cloud properties. For instance, while more haze droplets might in theory brighten clouds, they could also lead to unintended effects, such as increased evaporation or rain, which could even reduce cloud coverage.
Another difficult challenge is the variability of the cloud response to aerosols. “Ship traffic is really regular,” explains Feingold, “but if you look at satellite imagery on a daily basis in a certain area, sometimes you see really clear, beautiful ship tracks and other times you don’t – and the ship traffic hasn’t changed but the meteorology has.” This variability depends on how susceptible clouds are to aerosols, which is in turn influenced by meteorological conditions.
And even if cloud systems that respond well to marine cloud brightening are identified, it would not be sensible to repeatedly target them. “Seeding the same area persistently could have some really serious knock-on effects on regional temperature and rainfall,” says Feingold.
Essentially, aerosol injections into the same area day after day would create localized radiative cooling, which would impact regional climate patterns. This highlights the ethical concerns with cloud brightening, as such effects could benefit some regions while negatively impacting others.
Addressing many of these questions requires significant advances in current climate models, so that the entire process – from the effects of aerosols on cloud microphysics through to the larger impact on clouds and then global climate circulations – can be accurately simulated. Bridging these knowledge gaps will require controlled field experiments, such as aerosol releases from point sources in areas of interest, while taking observational data using tools like drones, airplanes and satellites. Such experiments would help scientists get a “handle on this connection between emitted particles and brightening”, says Feingold.
But physicists can only do so much. “We are not trying to push marine cloud brightening, we are trying to understand it,” says Feingold. He argues that a parallel effort to discuss the governance of marine cloud brightening is also needed.
In recent years much progress has been made in understanding how clouds regulate our planet’s climate and in representing them in climate models. “While major advances in the understanding of cloud processes have increased the level of confidence and decreased the uncertainty range for the cloud feedback by about 50% compared to AR5 [the previous IPCC assessment report], clouds remain the largest contribution to overall uncertainty in climate feedbacks (high confidence),” states the IPCC’s latest Assessment Report (AR6), published in 2021. Physicists and atmospheric scientists will continue to study how cloud systems respond to our ever-changing climate, but ultimately it is wider society that must decide the way forward.