Physicists and others with STEM backgrounds are sought after in industry for their analytical skills. However, traditional training in STEM subjects is often lacking when it comes to nurturing the soft skills that are needed to succeed in managerial and leadership positions.
Our guest in this podcast is Peter Hirst, who is Senior Associate Dean, Executive Education at the MIT Sloan School of Management. He explains how MIT Sloan works with executives to ensure that they efficiently acquire the skills and knowledge needed to become effective leaders.
This podcast is sponsored by the MIT Sloan School of Management.
New field experiments carried out by physicists in California’s Sierra Nevada mountains suggest that intermittent bursts of embers play an unexpectedly large role in the spread of wildfires, calling into question some aspects of previous fire models. While this is not the first study to highlight the importance of embers, it does indicate that standard modelling tools used to predict wildfire spread may need to be modified to account for these rare but high-impact events.
Embers form during a wildfire due to a combination of heat, wind and flames. Once lofted into the air, they can travel long distances and may trigger new “spot fires” when they land. Understanding ember behaviour is therefore important for predicting how a wildfire will spread and helping emergency services limit infrastructure damage and prevent loss of life.
Watching it burn
In their field experiments, Tirtha Banerjee and colleagues at the University of California Irvine built a “pile fire” – essentially a bonfire fuelled by a representative mixture of needles, branches, pinecones and pieces of wood from ponderosa pine and Douglas fir trees – in the foothills of the Sierra Nevada mountains. A high-speed camera (120 frames per second) recorded the fire’s behaviour for 20 minutes, and the researchers placed aluminium baking trays around it to collect the embers it ejected.
After they extinguished the pile fire, the researchers brought the ember samples back to the laboratory and measured their size, shape and density. Footage from the camera enabled them to estimate the fire’s intensity based on its height. They also used a technique called particle tracking velocimetry to follow firebrands and calculate their trajectories, velocities and accelerations.
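Conceptually, extracting trajectories, velocities and accelerations from tracked firebrands comes down to differentiating the position data frame by frame. The sketch below illustrates that step with finite differences on a made-up track sampled at 120 frames per second; it is a minimal illustration of the general idea, not the team’s analysis code.

```python
import numpy as np

def kinematics_from_track(positions, fps=120.0):
    """Estimate velocity and acceleration along one tracked firebrand path.

    positions : (N, 2) array of (x, y) coordinates in metres, one row per video frame.
    fps       : camera frame rate in frames per second.
    """
    dt = 1.0 / fps
    velocity = np.gradient(positions, dt, axis=0)      # m/s, central differences
    acceleration = np.gradient(velocity, dt, axis=0)   # m/s^2
    return velocity, acceleration

# Toy track: a firebrand drifting sideways while decelerating under gravity
t = np.arange(0, 0.5, 1 / 120.0)
track = np.column_stack([0.8 * t, 2.0 * t - 4.9 * t**2])
v, a = kinematics_from_track(track)
print(v[0], a[0])   # roughly (0.8, 2.0) m/s and (0, -9.8) m/s^2
```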
Highly intermittent ember generation
Based on the footage, the team concluded that ember generation is highly intermittent, with occasional bursts containing orders of magnitude more embers than were ejected at baseline. Existing models do not capture such behaviour well, says Alec Petersen, an experimental fluid dynamicist at UC Irvine and lead author of a Physics of Fluids paper on the experiment. In particular, he explains that models with a low computational cost often make simplifications in characterizing embers, especially with regard to fire plumes and ember shapes. This means that while they can predict how far an average firebrand with a certain size and shape will travel, the accuracy of those predictions is poor.
“Although we care about the average behaviour, we also want to know more about outliers,” he says. “It only takes a single ember to ignite a spot fire.”
As an example of such an outlier, Petersen notes that sometimes a strong updraft from a fire plume coincides with the fire emitting a large number of embers. Similar phenomena occur in many types of turbulent flows, including atmospheric winds as well as buoyant fire plumes, and they are characterized by statistically infrequent but extreme fluctuations in velocity. While these fluctuations are rare, they could partially explain why the team observed large (>1 mm) firebrands travelling further than models predict, he tells Physics World.
This is important, Petersen adds, because large embers are precisely the ones with enough thermal energy to start spot fires. “Given enough chances, even statistically unlikely events can become probable, and we need to take such events into account,” he says.
New models, fresh measurements
The researchers now hope to reformulate operational models to do just this, but they acknowledge that this will be challenging. “Predicting spot fire risk is difficult and we’re only just scratching the surface of what needs to be included for accurate and useful predictions that can help first responders,” Petersen says.
They also plan to do more experiments in conjunction with a consortium of fire researchers that Banerjee set up. Beginning in November, when temperatures in California are cooler and the wildfire risk is lower, members of the new iFirenet consortium plan to collaborate on a large-scale field campaign at the UC Berkeley Research Forests. “We’ll have tonnes of research groups out there, measuring all sorts of parameters for our various projects,” Petersen says. “We’ll be trying to refine our firebrand tracking experiments too, using multiple cameras to track them in 3D, hopefully supplemented with a thermal camera to measure their temperatures.
“My background is in measuring and describing the complex dynamics of particles carried by turbulent flows,” Petersen continues. “I don’t have the same deep expertise studying fires that I do in experimental fluid dynamics, so it’s always a challenge to learn the best practices of a new field and to familiarize yourself with the great research folks have done in the past and are doing now. But that’s what makes studying fluid dynamics so satisfying – it touches so many corners of our society and world, there’s always something new to learn.”
The particle physics community is in the vanguard of a global effort to realize the potential of quantum computing hardware and software for all manner of hitherto intractable research problems across the natural sciences. The end-game? A paradigm shift – dubbed “quantum advantage” – where calculations that are unattainable or extremely expensive on classical machines become possible, and practical, with quantum computers.
A case study in this regard is the Institute of High Energy Physics (IHEP), the largest basic science laboratory in China and part of the Chinese Academy of Sciences. Headquartered in Beijing, IHEP hosts a multidisciplinary scientific programme spanning elementary particle physics and astrophysics, as well as the planning, design and construction of large-scale accelerator projects – among them the China Spallation Neutron Source, which launched in 2018, and the High Energy Photon Source, due to come online in 2025.
Quantum opportunity
Notwithstanding its ongoing investment in experimental infrastructure, IHEP is increasingly turning its attention to the application of quantum computing and quantum machine-learning technologies to accelerate research discovery. In short, it is exploring use-cases in theoretical and experimental particle physics where quantum approaches promise game-changing scientific breakthroughs. A core partner in this endeavour is the Institute of Frontier and Interdisciplinary Science at Shandong University (SDU), home to another of China’s top-tier research programmes in high-energy physics (HEP).
With senior backing from Weidong Li and Xingtao Huang – physics professors at IHEP and SDU, respectively – the two laboratories began collaborating on the applications of quantum science and technology in summer 2022. This was followed by the establishment of a joint working group 12 months later. Operationally, the Quantum Computing for Simulation and Reconstruction (QC4SimRec) initiative comprises eight faculty members (drawn from both institutes) and is supported by a multidisciplinary team of two postdoctoral scientists and five PhD students.
“QC4SimRec is part of IHEP’s at-scale quantum computing effort, tapping into cutting-edge resource and capability from a network of academic and industry partners across China,” explains Hideki Okawa, a professor who heads up quantum applications research at IHEP (as well as co-chairing QC4SimRec alongside Teng Li, an associate professor in SDU’s Institute of Frontier and Interdisciplinary Science). “The partnership with SDU is a logical progression,” he adds, “building on a track-record of successful collaboration between the two centres in areas like high-performance computing, offline software and machine-learning applications for a variety of HEP experiments.”
Right now, Okawa, Teng Li and the QC4SimRec team are set on expanding the scope of their joint research activity. One principal line of enquiry focuses on detector simulation – i.e. simulating the particle shower development in the calorimeter, which is one of the most demanding tasks for the central processing unit (CPU) in collider experiments. Other early-stage applications include particle tracking, particle identification, and analysis of the fundamental physics of particle dynamics and collision.
“Working together in QC4SimRec,” explains Okawa, “IHEP and SDU are intent on creating a global player in the application of quantum computing and quantum machine-learning to HEP problems.”
Sustained scientific impact, of course, is contingent on recruiting the brightest and best talent in quantum hardware and software, with IHEP’s near-term focus directed towards engaging early-career scientists, whether from domestic or international institutions. “IHEP is very supportive in this regard,” adds Okawa, “and provides free Chinese language courses to fast-track the integration of international scientists. It also helps that our bi-weekly QC4SimRec working group meetings are held in English.”
A high-energy partnership
Around 700 km south-east of Beijing, the QC4SimRec research effort at SDU is overseen by Xingtao Huang, dean of the university’s Institute of Frontier and Interdisciplinary Science and an internationally recognized expert in machine-learning technologies and offline software for data processing and analysis in particle physics.
“There’s huge potential upside for quantum technologies in HEP,” he explains. In the next few years, for example, QC4SimRec will apply innovative quantum approaches to build on SDU’s pre-existing interdisciplinary collaborations with IHEP across a range of HEP initiatives – including the Beijing Spectrometer III (BESIII), the Jiangmen Underground Neutrino Observatory (JUNO) and the Circular Electron-Positron Collider (CEPC).
One early-stage QC4SimRec project evaluated quantum machine-learning techniques for the identification and discrimination of muon and pion particles within the BESIII detector. Comparison with traditional machine-learning approaches showed equivalent performance on the same datasets, demonstrating by extension the feasibility of applying quantum machine-learning to data analysis in next-generation collider experiments.
“This is a significant result,” explains Huang, “not least because particle identification – the identification of charged-particle species in the detector – is one of the biggest challenges in HEP experiments.”
Huang is currently seeking to recruit senior-level scientists with quantum and HEP expertise from Europe and North America, building on a well-established faculty team of 48 staff members (32 of them full professors) working on HEP. “We have several open faculty positions at SDU in quantum computing and quantum machine-learning,” he notes. “We’re also interested in recruiting talented postdoctoral researchers with quantum know-how.”
As a signal of intent, and to raise awareness of SDU’s global ambitions in quantum science and technology, Huang and colleagues hosted a three-day workshop (co-chaired by IHEP) last summer to promote the applications of quantum computing and classical/quantum machine-learning in particle physics. The inaugural event attracted more than 100 attendees and speakers, including several prominent international participants; a successful follow-on workshop was held in Changchun earlier this year, and planning is well under way for the next instalment in 2025.
Along a related coordinate, SDU has launched a series of online tutorials to support aspiring Masters and PhD students keen to further their studies in the applications of quantum computing and quantum machine-learning within HEP.
“Quantum computing is a hot topic, but there’s still a relatively small community of scientists and engineers working on HEP applications,” concludes Huang. “Working together, IHEP and SDU are building the interdisciplinary capacity in quantum science and technology to accelerate frontier research in particle physics. Our long-term goal is to establish a joint national laboratory with dedicated quantum computing facilities across both campuses.”
One thing is clear: the QC4SimRec collaboration offers ambitious quantum scientists a unique opportunity to progress alongside China’s burgeoning quantum ecosystem – an industry, moreover, that’s being heavily backed by sustained public and private investment. “For researchers who want to be at the cutting edge in quantum science and HEP, China is as good a place as any,” Okawa concludes.
For further information about QC4SimRec opportunities, please contact Hideki Okawa at IHEP or Xingtao Huang at SDU.
Quantum machine-learning for accelerated discovery
To understand the potential for quantum advantage in specific HEP contexts, QC4SimRec scientists are currently working on “rediscovering” the exotic particle Zc(3900) using quantum machine-learning techniques.
In terms of the back-story: Zc(3900) is an exotic subatomic particle made up of quarks (the building blocks of protons and neutrons) and believed to be the first tetraquark state observed experimentally – an observation that, in the process, deepened our understanding of quantum chromodynamics (QCD). The particle was discovered in 2013 using the BESIII detector at the Beijing Electron-Positron Collider (BEPCII), with independent observation by the Belle experiment at Japan’s KEK particle physics laboratory.
As part of their study, the IHEP–SDU team deployed the so-called Quantum Support Vector Machine algorithm (a quantum variant of a classical algorithm), training it on simulated Zc(3900) signals and randomly selected events from real BESIII data as background.
The quantum machine-learning approach delivers performance competitive with classical machine-learning systems – though, crucially, using a smaller training dataset and fewer data features. Investigations are ongoing to demonstrate enhanced signal sensitivity with quantum computing – work that could ultimately point the way to the discovery of new exotic particles in future experiments.
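As a rough picture of what such a signal-versus-background study involves, the sketch below trains a quantum-kernel support vector machine on toy feature vectors standing in for simulated Zc(3900) signal events and BESIII background events. It assumes the open-source qiskit and qiskit-machine-learning packages (class names and arguments vary between versions) and is only an illustration of the general workflow, not the IHEP–SDU analysis.

```python
# Minimal quantum-kernel SVM sketch; the toy data stand in for simulated Zc(3900)
# signal and BESIII background events, each reduced to a handful of features.
import numpy as np
from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

rng = np.random.default_rng(seed=1)
n_features = 3                                       # few features, as noted in the text
signal = rng.normal(loc=+0.5, scale=0.3, size=(50, n_features))
background = rng.normal(loc=-0.5, scale=0.3, size=(50, n_features))
X = np.vstack([signal, background])
y = np.array([1] * 50 + [0] * 50)                    # 1 = signal, 0 = background

feature_map = ZZFeatureMap(feature_dimension=n_features, reps=2)
kernel = FidelityQuantumKernel(feature_map=feature_map)
clf = QSVC(quantum_kernel=kernel)                    # quantum analogue of a classical SVM
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```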
Optical traps and tweezers can be used to capture and manipulate particles using non-contact forces. A focused beam of light allows precise control over the position of and force applied to an object, at the micron scale or below, enabling particles to be pulled and captured by the beam.
Optical manipulation techniques are garnering increased interest for biological applications. Researchers from Massachusetts Institute of Technology (MIT) have now developed a miniature, chip-based optical trap that acts as a “tractor beam” for studying DNA, classifying cells and investigating disease mechanisms. The device – which is small enough to fit in your hand – is made from a silicon-photonics chip and can manipulate particles up to 5 mm away from the chip surface, while maintaining a sterile environment for cells.
The promise of integrated optical tweezers
Integrated optical trapping provides a compact route to accessible optical manipulation compared with bulk optical tweezers, and has already been demonstrated using planar waveguides, optical resonators and plasmonic devices. However, many such tweezers can only trap particles directly on (or within several microns of) the chip’s surface and only offer passive trapping.
To make optical traps sterile for cell research, 150-µm-thick glass coverslips are required. However, the short focal heights of many integrated optical tweezers mean that the light beams can’t penetrate into standard sample chambers. Because such devices can only trap particles a few microns above the chip, they are incompatible with biological research that requires particles and cells to be trapped at much larger distances from the chip’s surface.
With current approaches, the only way to overcome this is to remove the cells and place them on the surface of the chip itself. This process contaminates the chip, however, meaning that each chip must be discarded after use and a new chip used for every experiment.
Trapping device for biological particles
Lead author Tal Sneh and colleagues developed an integrated optical phased array (OPA) that can focus emitted light at a specific point in the radiative near field of the chip. To date, many OPA devices have been designed with LiDAR and optical communications applications in mind, so their capabilities have been limited to steering light beams in the far field using linear phase gradients. However, this approach does not generate the tightly focused beam required for optical trapping.
In their new approach, the MIT researchers used semiconductor manufacturing processes to fabricate a series of micro-antennas onto the chip. By creating specific phase patterns for each antenna, the researchers found that they could generate a tightly focused beam of light.
Each antenna’s optical signal was also tightly controlled by varying the input laser wavelength to provide an active spatial tuning for tweezing particles. The focused light beam emitted by the chip could therefore be shaped and steered to capture particles located millimetres above the surface of the chip, making it suitable for biological studies.
The researchers used the OPA tweezers to optically steer and non-mechanically trap polystyrene microparticles at up to 5 mm above the chip’s surface. They also demonstrated stretching of mouse lymphoblast cells, in the first known cell experiment to use single-beam integrated optical tweezers.
The researchers point out that this is the first demonstration of trapping particles over millimetre ranges, with the operating distance of the new device orders of magnitude greater than other integrated optical tweezers. Plasmonic, waveguide and resonator tweezers, for example, can only operate at 1 µm above the surface, while microlens-based tweezers have been able to operate at 20 µm distances.
Importantly, the device is completely reusable and biocompatible, because the biological samples can be trapped and undergo manipulation while remaining within a sterile coverslip. This ensures that both the biological media and the chip stay free from contamination without needing complex microfluidics packaging.
The work in this study provides a new type of modality for integrated optical tweezers, expanding their use into the biological domain to perform experiments on proteins and DNA, for example, as well as to sort and manipulate cells.
The researchers say that they hope to build on this research by creating a device with an adjustable focal height for the light beam, as well as by introducing multiple trap sites to manipulate biological particles in more complex ways and employing the device to examine more biological systems.
Machine learning these days has a huge influence in physics, where it’s used in everything from the very practical (designing new circuits for quantum optics experiments) to the esoteric (finding new symmetries in data from the Large Hadron Collider). But it would be wrong to think that machine learning itself isn’t physics or that the Nobel committee – in honouring John Hopfield and Geoffrey Hinton – has been misguidedly seduced by some kind of “AI hype”.
Hopfield, 91, is a fully fledged condensed-matter physicist, who in the 1970s began to study the dynamics of biochemical reactions and their applications in neuroscience. In particular, he showed that the physics of spin glasses can be used to build networks of neurons to store and retrieve information. Hopfield applied his work to the problem of “associative memories” – how hearing a fragment of a song, say, can unlock a memory of the occasion we first heard it.
His work on the statistical physics and training of these “Hopfield networks” – and Hinton’s later on “Boltzmann machines” – paved the way for modern-day AI. Indeed, Hinton, a computer scientist, is often dubbed “the godfather of AI”. On the Physics World Weekly podcast, Anil Ananthaswamy – author of Why Machines Learn: the Elegant Maths Behind Modern AI – said Hinton’s contributions to AI were “immense”.
Of course, machine learning and AI are multidisciplinary endeavours, drawing on not just physics and mathematics, but neuroscience, computer science and cognitive science too. Imagine though, if Hinton and Hopfield had been given, say, a medicine Nobel prize. We’d have physicists moaning they’d been overlooked. Some might even say that this year’s Nobel Prize for Chemistry, which went to the application of AI to protein-folding, is really physics at heart.
We’re still in the early days for AI, which has its dangers. Indeed, Hinton quit Google last year so he could more freely express his concerns. But as this year’s Nobel prize makes clear, physics isn’t just drawing on machine learning and AI – it paved the way for these fields too.
In a new study, an international team of physicists has unified two distinct descriptions of atomic nuclei, taking a major step forward in our understanding of nuclear structure and strong interactions. For the first time, the particle physics perspective – where nuclei are seen as made up of quarks and gluons – has been combined with the traditional nuclear physics view that treats nuclei as collections of interacting nucleons (protons and neutrons). This innovative hybrid approach provides fresh insights into short-range correlated (SRC) nucleon pairs – which are fleeting interactions where two nucleons come exceptionally close and engage in strong interactions for mere femtoseconds. Although these interactions play a crucial role in the structure of nuclei, they have been notoriously difficult to describe theoretically.
“Nuclei (such as gold and lead) are not just a ‘bag of non-interacting protons and neutrons’,” explains Fredrick Olness at Southern Methodist University in the US, who is part of the international team. “When we put 208 protons and neutrons together to make a lead nucleus, they interact via the strong interaction force with their nearest neighbours; specifically, those neighbours within a ‘short range.’ These short-range interactions/correlations modify the composition of the nucleus and are a manifestation of the strong interaction force. An improved understanding of these correlations can provide new insights into both the properties of nuclei and the strong interaction force.”
To investigate the inner structure of atomic nuclei, physicists use parton distribution functions (PDFs). These functions describe how the momentum and energy of quarks and gluons are distributed within protons, neutrons, or entire nuclei. PDFs are typically obtained from high-energy experiments, such as those performed at particle accelerators, where nucleons or nuclei collide at close to the speed of light. By analysing the behaviour of the particles produced in these collisions, physicists can gain essential insights into their properties, revealing the complex dynamics of the strong interaction.
Traditional focus
However, traditional nuclear physics often focuses on the interactions between protons and neutrons within the nucleus, without delving into the quark and gluon structure of nucleons. Until now, these two approaches – one based on fundamental particles and the other on nuclear dynamics – remained separate. Now researchers in the US, Germany, Poland, Finland, Australia, Israel and France have bridged this gap.
The team developed a unified framework that integrates both the partonic structure of nucleons and the interactions between nucleons in atomic nuclei. This approach is particularly useful for studying SRC nucleon pairs, whose interactions have long been recognized as crucial to understanding the structure of nuclei but which have been notoriously difficult to describe using conventional theoretical models.
By combining particle and nuclear physics descriptions, the researchers were able to derive PDFs for SRC pairs, providing a detailed understanding of how quarks and gluons behave within these pairs.
“This framework allows us to make direct relations between the quark–gluon and the proton–neutron description of nuclei,” said Olness. “Thus, for the first time, we can begin to relate the general properties of nuclei (such as ‘magic number’ nuclei – those with a specific number of protons or neutrons that make them particularly stable – or ‘mirror nuclei’, pairs of nuclei in which the numbers of protons and neutrons are interchanged) to the characteristics of the quarks and gluons inside the nuclei.”
Experimental data
The researchers applied their model to experimental data from scattering experiments involving 19 different nuclei, ranging from helium-3 (with two protons and one neutron) to lead-208 (with 82 protons and 126 neutrons). By comparing their predictions with the experimental data, they were able to refine their model and confirm its accuracy.
The results showed a remarkable agreement between the theoretical predictions and the data, particularly when it came to estimating the fraction of nucleons that form SRC pairs. In light nuclei, such as helium, nucleons rarely form SRC pairs. However, in heavier nuclei like lead, nearly half of the nucleons participate in SRC pairs, highlighting the significant role these interactions play in shaping the structure of larger nuclei.
These findings not only validate the team’s approach but also open up new avenues for research.
“We can study what other nuclear characteristics might yield modifications of the short-ranged correlated pairs ratios,” explains Olness. “This connects us to the shell model of the nucleus and other theoretical nuclear models. With the new relations provided by our framework, we can directly relate elemental quantities described by nuclear physics to the fundamental quarks and gluons as governed by the strong interaction force.”
The new model can be further tested using data from future experiments, such as those planned at the Jefferson Lab and at the Electron–Ion Collider at Brookhaven National Laboratory. These facilities will allow scientists to probe quark–gluon dynamics within nuclei with even greater precision, providing an opportunity to validate the predictions made in this study.
Tie-dye, geopolitical tension and a digitized Abba back on stage. Our appetite for revisiting the 1970s shows no signs of waning. Science writer Ferris Jabr has now reanimated another idea that captured the era’s zeitgeist: the concept of a “living Earth”. In Becoming Earth: How Our Planet Came to Life Jabr makes the case that our planet is far more than a lump of rock that passively hosts complex life. Instead, he argues that the Earth and life have co-evolved over geological time and that appreciating these synchronies can help us to steer away from environmental breakdown.
“We, and all living things, are more than inhabitants of Earth – we are Earth, an outgrowth of its structure and an engine of its evolution.” If that sounds like something you might hear in the early hours at a stone circle gathering, don’t worry. Jabr fleshes out his case with the latest science and journalistic flair in what is an impressive debut from the Oregon-based writer.
Becoming Earth is a reappraisal of the Gaia hypothesis, proposed in 1972 by British scientist James Lovelock and co-developed over several decades by US microbiologist Lynn Margulis. This idea of the Earth functioning as a self-regulating living organism has faced scepticism over the years, with many feeling it is untestable and strays into the realm of pseudoscience. In a 1988 essay, the biologist and science historian Stephen Jay Gould called Gaia “a metaphor, not a mechanism”.
Though undoubtedly a prodigious intellect, Lovelock was not your typical academic. He worked independently across fields including medical research, inventing the electron capture detector and consulting for petrochemical giant Shell. Add that to Gaia’s hippyish name – evoking the Greek goddess of Earth – and it’s easy to see why the theory faced a branding issue within mainstream science. Lovelock himself acknowledged errors in the theory’s original wording, which implied the biosphere acted with intention.
Though he makes due reference to the Gaia hypothesis, Jabr’s book is a standalone work, and in revisiting the concept in 2024, he has one significant advantage: we now have a tonne of scientific evidence for tight coupling between life and the environment. For instance, microbiologists increasingly speak of soil as a living organism because of the interconnections between micro-organisms and soil’s structure and function. Physicists meanwhile happily speak of “complex systems” where collective behaviour emerges from interactions of numerous components – climate being the obvious example.
To simplify this sprawling topic, Becoming Earth is structured into three parts: Rock, Water and Air. Accessible scientific discussions are interspersed with reportage, based on Jabr’s visits to various research sites. We kick off at the Sanford Underground Research Facility in South Dakota (also home to neutrino experiments) as Jabr descends 1500 m in search of iron-loving microbes. We learn that perhaps 90% of all microbes live deep underground and they transform Earth wherever they appear, carving vast caverns and regulating the global cycling of carbon and nutrients. Crucially, microbes also created the conditions for complex life by oxygenating the atmosphere.
In the Air section, Jabr scales the 1500 narrow steps of the Amazon Tall Tower Observatory to observe the forest making its own rain. Plants are constantly releasing water into the air through their leaves, and this drives more than half of the 20 billion tonnes of rain that fall on its canopy daily – more than the volume discharged by the Amazon river. “It’s not that Earth is a single living organism in exactly the same way as a bird or bacterium, or even a superorganism akin to an ant colony,” explains Jabr. “Rather that the planet is the largest known living system – the confluence of all other ecosystems – with structures, rhythms, and self-regulating processes that resemble those of its smaller constituent life forms. Life rhymes at every scale.”
When it comes to life’s capacity to alter its environment, not all creatures are born equal. Humans are having a supersized influence on these planetary rhythms despite appearing in recent geological history. Jabr suggests the Anthropocene – a proposed epoch defined by humanity’s influence on the planet – may have started between 50,000 and 10,000 years ago. At that time, our ancestors hunted mammoths and other megafauna into extinction, altering grassland habitats that had preserved a relatively cool climate.
Some of the most powerful passages in Becoming Earth concern our relationship with hydrocarbons. “Fossil fuel is essentially an ecosystem in an urn,” writes Jabr to illustrate why coal and oil store such vast amounts of energy. Elsewhere, on a beach in Hawaii an earth scientist and artist scoop up “plastiglomerates” – rocks formed from the eroded remains of plastic pollution fused with natural sediments. Humans have “forged a material that had never existed before”.
A criticism of the original Gaia hypothesis is that its association with a self-regulating planet may have fuelled a type of climate denialism. Science historian Leah Aronowsky argued that Gaia created the conditions for people to deny humans’ unique capacity to tip the system.
Jabr doesn’t see it that way and is deeply concerned that we are hastening the end of a stable period for life on Earth. But he also suggests we have the tools to mitigate the worst impacts, though this will likely require far more than just cutting emissions. He visits the Orca project in Iceland, the world’s first and largest plant for removing carbon from the atmosphere and storing it over long periods – in this case injecting it into basalt deep below the surface.
In an epilogue, we finally meet a 100-year-old James Lovelock at his Dorset home three years before his death in 2022. Still cheerful and articulate, Lovelock thrived on humour and tackling the big questions. As pointed out by Jabr, Lovelock was also prone to contradiction and the occasional alarmist statement. For instance, in his 2006 book The Revenge of Gaia he claimed that the few breeding humans left by the end of the century would be confined to the Arctic. Fingers crossed he’s wrong on that one!
Perhaps Lovelock was prone to the same phenomenon we see in quantum physics where even the sharpest scientific minds can end up shrouding the research in hype and woo. Once you strip away the new-ageyness, we may find that the idea of Gaia was never as “out there” as the cultural noise that surrounded it. Thanks to Jabr’s earnest approach, the living Earth concept is alive and kicking in 2024.
The US condensed-matter physicist Leon Cooper, who shared the 1972 Nobel Prize for Physics, has died at the age of 94. In the late 1950s, Cooper, together with his colleagues Robert Schrieffer and John Bardeen, developed a theory of superconductivity that could explain why certain materials lose all electrical resistance at low temperatures.
Born on 28 February 1930 in New York City, US, Cooper graduated from the Bronx High School of Science in 1947 before earning a degree from Columbia University in 1951 and a PhD there in 1954.
Cooper then spent time at the Institute for Advanced Study in Princeton, the University of Illinois and Ohio State University before heading to Brown University in 1958 where he remained for the rest of his career.
It was in Illinois that Cooper began to work on a theoretical explanation of superconductivity – a phenomenon that was first seen by the Dutch physicist Heike Kamerlingh Onnes when he discovered in 1911 that the electrical resistance of mercury suddenly disappeared below a temperature of 4.2 K.
However, there was no microscopic theory of superconductivity until 1957, when Bardeen, Cooper and Schrieffer – all based at Illinois – came up with their “BCS” theory. This described how an electron can deform the atomic lattice through which it moves, thereby pairing with a neighbouring electron to form what became known as a Cooper pair. Being paired allows all the electrons in a superconductor to move as a single cohort, known as a condensate, prevailing over thermal fluctuations that could cause the pairs to break.
Bardeen, Cooper and Schrieffer published their BCS theory in April 1957 (Phys. Rev. 106 162), which was then followed in December by a full-length paper (Phys. Rev. 108 1175). Cooper was in his late 20s when he made the breakthrough.
Not only did the BCS theory of superconductivity successfully account for the behaviour of “conventional” low-temperature superconductors such as mercury and tin but it also had application in particle physics by contributing to the notion of spontaneous symmetry breaking.
For their work the trio won the 1972 Nobel Prize for Physics “for their jointly developed theory of superconductivity, usually called the BCS-theory”.
From BCS to BCM
While Cooper continued to work in superconductivity, later in his career he turned to neuroscience. In 1973 he founded and directed Brown’s Institute for Brain and Neural Systems, which studied animal nervous systems and the human brain. In the 1980s he came up with a physical theory of learning in the visual cortex dubbed the “BCM” theory, named after Cooper and his colleagues Elie Bienenstock and Paul Munro.
He also founded the technology firm Nestor along with Charles Elbaum, which aimed to find commercial and military applications for artificial neural networks.
As well as the Nobel prize, Cooper was awarded the Comstock Prize from the US National Academy of Sciences in 1968 and the Descartes Medal from the Academie de Paris in 1977.
He also wrote numerous books including An Introduction to the Meaning and Structure of Physics in 1968 and Physics: Structure and Meaning in 1992. More recently, he published Science and Human Experience in 2014.
“Leon’s intellectual curiosity knew no boundaries,” notes Peter Bilderback, who worked with Cooper at Brown. “He was comfortable conversing on any subject, including art, which he loved greatly. He often compared the construction of physics to the building of a great cathedral, both beautiful human achievements accomplished by many hands over many years and perhaps never to be fully finished.”
When British physicist James Chadwick discovered the neutron in 1932, he supposedly said, “I am afraid neutrons will not be of any use to anyone.” The UK’s neutron user facility – the ISIS Neutron and Muon Source, now operated by the Science and Technology Facilities Council (STFC) – was opened 40 years ago. In that time, the facility has welcomed more than 60,000 scientists from around the world. ISIS supports a global community of neutron-scattering researchers, and the work that has been done there shows that Chadwick couldn’t have been more wrong.
By the time of Chadwick’s discovery, scientists knew that the atom was mostly empty space, and that it contained electrons and protons. However, there were some observations they couldn’t explain, such as the disparity between the mass and charge numbers of the helium nucleus.
The neutron was the missing piece of this puzzle. Chadwick’s work was fundamental to our understanding of the atom, but it also set the stage for a powerful new field of condensed-matter physics. Like other subatomic particles, neutrons have wave-like properties, and their wavelengths are comparable to the spacings between atoms. This means that when neutrons scatter off materials, they create characteristic interference patterns. In addition, because they are electrically neutral, neutrons can probe deeper into materials than X-rays or electrons.
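The scale match comes straight from the de Broglie relation λ = h/p: a thermal neutron with a kinetic energy of about 25 meV has a wavelength of roughly 1.8 Å, right in the range of interatomic distances in crystals. A quick back-of-the-envelope check:

```python
import math

h = 6.626e-34            # Planck constant, J s
m_n = 1.675e-27          # neutron mass, kg
E = 25e-3 * 1.602e-19    # kinetic energy of a thermal neutron (~25 meV), J

p = math.sqrt(2 * m_n * E)                  # non-relativistic momentum
wavelength = h / p                          # de Broglie wavelength
print(f"{wavelength * 1e10:.2f} angstrom")  # ~1.81 angstrom, comparable to atomic spacings
```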
Today, facilities like ISIS use neutron scattering to probe everything from spacecraft components and solar cells to the way cosmic-ray neutrons interact with electronics, helping to ensure the resilience of technology for driverless cars and aircraft.
The origins of neutron scattering
On 2 December 1942 a group of scientists at the University of Chicago in the US, led by Enrico Fermi, watched the world’s first self-sustaining nuclear chain reaction, an event that would reshape world history and usher in a new era of atomic science.
One of those in attendance was Ernest O Wollan, a physicist with a background in X-ray scattering. The neutron’s wave-like properties had been established in 1936 and Wollan recognized that he could use neutrons produced by a nuclear reactor like the one in Chicago to determine the positions of atoms in a crystal. Wollan later moved to Oak Ridge National Laboratory (ORNL) in Tennessee, where a second reactor was being built, and at the end of 1944 his team was able to observe Bragg diffraction of neutrons in sodium chloride and gypsum salts.
A few years later Wollan was joined by Clifford Shull, with whom he refined the technique and constructed the world’s first purpose-built neutron-scattering instrument. Shull won the Nobel Prize for Physics in 1994 for his work (with Bertram Brockhouse, who had pioneered the use of neutron scattering to measure excitations), but Wollan was ineligible because he had died 10 years previously.
The early reactors used for neutron scattering were multipurpose; the first to be designed specifically to produce neutron beams was the High Flux Beam Reactor (HFBR) at Brookhaven National Laboratory in the US in 1965. This was closely followed in 1972 by the Institut Laue–Langevin (ILL) in France, a facility that is still running today.
Rather than using a reactor, ISIS is based on an alternative technology called “spallation” that first emerged in the 1970s. In spallation, neutrons are produced by accelerating protons at a heavy metal target. The protons collide like bullets with nuclei in the target, which absorb them and then discharge high-energy particles, including neutrons.
The first such sources specifically designed for neutron scattering were the KENS source at the Institute of Materials Structure Science (IMSS) in Japan, which started operation in 1980, and the Intense Pulsed Neutron Source at the Argonne National Laboratory in the US, which started operation in 1981.
The pioneering development work on these sources and in other institutions was of great benefit during the design and development of what was to become ISIS. The facility was approved in 1977 and the first beam was produced on 16 December 1984. In October 1985 the source was formally named ISIS and opened by then UK prime minister Margaret Thatcher. Today around 20 reactor and spallation neutron sources are operational around the world and one – the European Spallation Source (ESS) – is under construction in Sweden.
The name ISIS was inspired by both the river that flows through Oxford and the Egyptian goddess of reincarnation. The relevance of the latter relates to the fact that ISIS was built on the site of the NIMROD proton synchrotron that operated between 1964 and 1978, reusing much of its infrastructure and components.
Producing neutrons and muons
At the heart of ISIS is an 800 MeV accelerator that produces intense pulses of protons 50 times a second. These pulses are then fired at two tungsten targets. Spallation of the tungsten by the proton beam produces neutrons that fly off in all directions.
Before the neutrons can be used, they must be slowed down, which is achieved by passing them through a material called a “moderator”. ISIS uses various moderators which operate at different temperatures, producing neutrons with varying wavelengths. This enables scientists to probe materials on length scales from fractions of an angstrom to hundreds of nanometres.
Arrayed around the two neutron sources and the moderators are more than 25 beamlines that direct neutrons to one of ISIS’s specialized experiments. Many of these perform neutron diffraction, which is used to study the structure of crystalline and amorphous solids, as well as liquids.
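In a diffraction measurement the peak positions encode the spacings between planes of atoms through Bragg’s law, nλ = 2d sin θ. The snippet below simply rearranges that relation to turn a measured scattering angle into a lattice spacing; the wavelength and angle are illustrative values, not data from an ISIS beamline.

```python
import math

def d_spacing(wavelength_angstrom, two_theta_deg, order=1):
    """Bragg's law, n * lambda = 2 * d * sin(theta), solved for the plane spacing d."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

# Illustrative numbers: a 1.8-angstrom neutron beam and a diffraction peak at 2-theta = 37 degrees
print(f"d = {d_spacing(1.8, 37.0):.2f} angstrom")   # ~2.8 angstrom
```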
When neutrons scatter, they also transfer a small amount of energy to the material and can excite vibrational modes in atoms and molecules. ISIS has seven beamlines dedicated to measuring this energy transfer, a technique called neutron spectroscopy. This can tell us about atomic and molecular bonds and is also used to study properties like specific heat and resistivity, as well as magnetic interactions.
Neutrons have spin so they are also sensitive to the magnetic properties of materials. Neutron diffraction is used to investigate magnetic ordering such as ferrimagnetism whereas spectroscopy is suited to the study of collective magnetic excitations.
Neutrons can sense short- and long-range magnetic ordering, but to understand localized effects with small magnetic moments, an alternative probe is needed. Since 1987, ISIS has also produced muon beams, which are used for this purpose, as well as other applications. In front of one of the neutron targets is a carbon foil; when the proton beam passes through it, it produces pions, which rapidly decay into muons. Rather than scattering, muons become implanted in the material, where they decay into positrons. By analysing the decay positrons, scientists can study very weak and fluctuating magnetic fields in materials that may be inaccessible with neutrons. For this reason, muon and neutron techniques are often used together.
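The muon’s usefulness as a magnetometer comes from the fact that its spin precesses in whatever local field it finds at its stopping site, at a rate set by the muon gyromagnetic ratio (about 135.5 MHz per tesla). The short sketch below converts an example internal field into the precession frequency that would show up in the positron signal; the field value is purely illustrative.

```python
GAMMA_MU_OVER_2PI = 135.5e6   # muon gyromagnetic ratio divided by 2*pi, in Hz per tesla

def precession_frequency_hz(field_tesla):
    """Larmor precession frequency of an implanted muon in a local magnetic field."""
    return GAMMA_MU_OVER_2PI * field_tesla

# An internal field of 10 mT (example value) gives a ~1.4 MHz oscillation in the positron signal
print(f"{precession_frequency_hz(10e-3) / 1e6:.2f} MHz")
```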
“The ISIS instrument suite now provides capability across a broad range of neutron and muon science,” says Roger Eccleston, ISIS director. “We’re constantly engaging our user community, providing feedback and consulting them on plans to develop ISIS. This continues as we begin our ‘Endeavour’ programme: the construction of four new instruments and five significant upgrades to deliver even more performance enhancements.
“ISIS has been a part of my career since I arrived as a placement student shortly before the inauguration. Although I have worked elsewhere, ISIS has always been part of my working life. I have seen many important scientific and technical developments and innovations that kept me inspired to keep coming back.”
Over the last 40 years, the samples studied at ISIS have become smaller and more complex, and measurements have become quicker. The kinetics of chemical reactions can be imaged in real time, and extreme temperatures and pressures can be achieved. Early work from ISIS focused on physics and chemistry questions such as the properties of high-temperature superconductors, the structure of chemicals and the phase behaviour of water. More recent work includes “seeing” catalysis in real time, studying biological systems such as bacterial membranes, and enhancing the reliability of circuits for driverless cars.
Understanding the building blocks of life
Unlike X-rays and electrons, neutrons scatter strongly from light nuclei including hydrogen, which means they can be used to study water and organic materials.
Water is the most ubiquitous liquid on the planet, but its molecular structure gives it complex chemical and physical properties. Significant work on the phase behaviour of water was performed at ISIS in the early 2000s by scientists from the UK and Italy, who showed that liquid water under pressure transitions between two distinct structures, one low density and one high density (Phys. Rev. Lett. 84 2881).
Water is the molecule of life, and as the technical capabilities of ISIS have advanced, it has become possible to study it inside cells, where it underpins vital functions from protein folding to chemical reactions. In 2023 a team from Portugal used the facilities at ISIS to investigate whether the water inside cells can be used as a biomarker for cancer.
Because it’s confined at the nanoscale, water in a cell will behave quite differently to bulk water. At these scales, water’s properties are highly sensitive to its environment, which changes when a cell becomes cancerous. The team showed that this can be measured with neutron spectroscopy, manifesting as an increased flexibility in the cancerous cells (Scientific Reports 13 21079).
If light is incident on an interface between two materials with different refractive indices it may, if the angle is just right, be perfectly reflected. A similar effect is exhibited by neutrons that are directed at the surface of a material, and neutron reflectometry instruments at ISIS use this to measure the thickness, surface roughness, and chemical composition of thin films.
One recent application of this technique at ISIS was a 2018 project where a team from the UK studied the effect of a powerful “last resort” antibiotic on the outer membrane of a bacterium. This antibiotic is only effective at body temperature, and the researchers showed that this is because the thermal motion of molecules in the outer membrane makes it easier for the antibiotic to slip in and disrupt the bacterium’s structure (PNAS 115 E7587).
Exploring the quantum world
A year after ISIS became operational, physicists Georg Bednorz and Karl Alexander Müller, working at the IBM research laboratory in Switzerland, discovered superconductivity in a material at 35 K, 12 K higher than any other known superconductor at the time. This discovery would later win them the 1987 Nobel Prize for Physics.
High-temperature superconductivity was one of the most significant discoveries of the 1980s, and it was a focus of early work at ISIS. Another landmark came in 1987, when yttrium barium copper oxide (YBCO) was found to exhibit superconductivity above 77 K, meaning that instead of liquid helium, it can be cooled to a superconducting state with the much cheaper liquid nitrogen. The structure of this material was first fully characterized at ISIS by a team from the US and UK (Nature 327 310).
Another example of the quantum systems studied at ISIS is quantum spin liquids (QSLs). Most magnetic materials form an ordered phase like a ferromagnet when cooled, but a QSL is an interacting system of electron spins that is, in theory, disordered even when cooled to absolute zero.
QSLs are of great interest today because they are theorized to exhibit long-range entanglement, which could be applied to quantum computing and communications. QSLs have proven challenging to identify experimentally, but evidence from neutron scattering and muon spectroscopy at ISIS has characterized spin-liquid states in a number of materials (Nature 471 612).
Developing sustainable solutions and new materials
Over the years, experimental set-ups at ISIS have evolved to handle increasingly extreme and complex conditions. Almost 20 years ago, high-pressure neutron experiments performed by a UK team at ISIS showed that surfactants could be designed to enhance the solubility of liquid carbon dioxide, potentially unlocking a vast array of applications in the food and pharmaceutical industries as an environmentally friendly alternative to traditional petrochemical solvents (Langmuir 22 9832).
Today, further developments in sample environment, detector technology and data analysis software enable us to observe chemical processes in real time, with materials kept under conditions that closely mimic their actual use. Recently, neutron imaging was used by a team from the UK and Germany to monitor a catalyst used widely in the chemical industry to improve the efficiency of reactions (Chem. Commun. 59 12767). Few methods can observe what is happening during a reaction, but neutron imaging was able to visualize it in real time.
Another discovery made just after ISIS became operational was the molecule buckminsterfullerene, or “buckyball”. Buckyballs are a molecular form of carbon that consists of 60 carbon atoms arranged in a spherical structure, resembling a football. The scientists who first synthesized this molecule were awarded the Nobel Prize for Chemistry in 1996, and in the years following this discovery, researchers have studied this form of carbon using a range of techniques, including neutron scattering.
Ensembles of buckyballs can form a crystalline solid, and in the early 1990s studies of crystalline buckminsterfullerene at ISIS revealed that, while adjacent molecules are oriented randomly at room temperature, they transition to an ordered structure below 249 K to minimize their energy (Nature 353 147).
Four decades on, fullerenes (the family of materials that includes buckyballs) continue to present many research opportunities. Through a process known as “molecular surgery”, synthetic chemists can create an opening in the fullerene cage, enabling them to insert an atom, ion or molecular cluster. Neutron-scattering studies at ISIS were recently used to characterize helium atoms trapped inside buckyballs (Phys. Chem. Chem. Phys. 25 20295). These endofullerenes are helping to improve our understanding of the quantum mechanics associated with confined particles and have potential applications ranging from photovoltaics to drug delivery.
Just as they shed light on materials of the future, neutrons and muons also offer a unique glimpse into the materials, methods and cultures of the past. At ISIS, the penetrative and non-destructive nature of neutrons and muons has been used to study many invaluable cultural heritage objects from ancient Egyptian lizard coffins (Sci. Rep. 13 4582) to Samurai helmets (Archaeol. Anthropol. Sci. 13 96), deepening our understanding of the past without damaging any of these precious artefacts.
Looking within, and to the future
If you want to understand how things structurally fail, you must get right inside and look, and the neutron’s ability to penetrate deep into materials allows engineers to do just that. ISIS’s Engin-X beamline measures the strain within a crystalline material by measuring the spacing between atomic lattice planes. This has been used by sectors including aerospace, oil and gas exploration, automotive, and renewable power.
Recently, ISIS has also been attracting electronics companies looking to use the facility to irradiate their chips with neutrons. This can mimic the high-energy neutrons generated in the atmosphere by cosmic rays, which can cause reliability problems in electronics. So, when you next fly, drive or surf the web, ISIS may just have had a hand in it.
With its many discoveries and developments, ISIS has succeeded in proving Chadwick wrong over the past 40 years, and the facility is now setting its sights on the upcoming decades of neutron-scattering research. “While predicting the future of scientific research is challenging, we can anchor our activities around a couple of trends,” explains ISIS associate director Sean Langridge. “Our community will continue to pursue fundamental research for its intrinsic societal value by discovering, synthesizing and processing new materials. Furthermore, we will use the capabilities of neutrons to engineer and optimize a material’s functionality, for example, to increase operational lifetime and minimize environmental impact.”
The capability requirements will continue to become more complex and, as they do so, the amount of data produced will also increase. The extensive datasets produced at ISIS are well suited for machine-learning techniques. These can identify new phenomena that conventional methods might overlook, leading to the discovery of novel materials.
As ISIS celebrates its 40th anniversary of neutron production, the use of neutrons continues to provide huge value to the physics community. A feasibility and design study for a next-generation neutron and muon source is now under way. Despite four decades of neutrons proving their worth, there is still much to discover over the coming decades of UK neutron and muon science.
Physicists in Germany have used visible light to measure intramolecular distances smaller than 10 nm thanks to an advanced version of an optical fluorescence microscopy technique called MINFLUX. The technique, which has a precision of just 1 angstrom (0.1 nm), could be used to study biological processes such as interactions between proteins and other biomolecules inside cells.
In conventional microscopy, when two features of an object are separated by less than half the wavelength of the light used to image them, they will appear blurry and indistinguishable due to diffraction. Super-resolution microscopy techniques can, however, overcome this so-called Rayleigh limit by exciting individual fluorescent groups (fluorophores) on molecules while leaving neighbouring fluorophores alone, meaning they remain dark.
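For a sense of scale, the classical limit for visible light can be estimated from the Abbe criterion, d ≈ λ/(2NA). Even with a high-numerical-aperture oil-immersion objective the limit sits near 200 nm, far above the sub-10 nm distances at stake here; the values below are illustrative.

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit d = lambda / (2 * NA) for a conventional lens."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light with a high-end oil-immersion objective (illustrative values)
print(f"{abbe_limit_nm(550, 1.4):.0f} nm")   # ~196 nm, hence the need for super-resolution tricks
```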
One such technique, known as nanoscopy with minimal photon fluxes, or MINFLUX, was invented by the physicist Stefan Hell. First reported in 2016 by Hell’s team at the Max Planck Institute (MPI) for Multidisciplinary Sciences in Göttingen, MINFLUX first “switches on” individual molecules, then determines their position by scanning a beam of light with a doughnut-shaped intensity profile across them.
The problem is that at distances of less than 5 to 10 nm, most fluorescent molecules start interacting with each other. This means they cannot emit fluorescence independently – a prerequisite for reliable distance measurements, explains Steffen Sahl, who works with Hell at the MPI.
Non-interacting fluorescent dye molecules
To overcome this problem, the team turned to a new type of fluorescent dye molecule developed in Hell’s research group. These molecules can be switched on in succession using UV light, but they do not interact with each other. This allows the researchers to mark the positions they want to measure with single fluorescent molecules and record their locations independently, to within as little as 0.1 nm, even when the dye molecules are close together.
“The localization process boils down to relating the unknown position of the fluorophore to the known position of the centre of the doughnut beam, where there is minimal or ideally zero excitation light intensity,” explains Hell. “The distance between the two can be inferred from the excitation (and hence the fluorescence) rate of the fluorophore.”
The advantage of MINFLUX, Hell tells Physics World, is that the closer the beam’s intensity minimum gets to the fluorescent molecule, the fewer fluorescence photons are needed to pinpoint the molecule’s location. This takes the burden of producing localizing photons – in effect, tiny lighthouses signalling “Here I am!” – away from the relatively weakly emitting molecule and shifts it onto the laser beam, which has photons to spare. The overall effect is to reduce the required number of detected photons “typically by a factor of 100”, Hell says, adding that this translates into a 10-fold increase in localization precision compared to traditional camera-based techniques.
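The localization principle can be caricatured in one dimension: close to its zero, the doughnut’s intensity grows roughly quadratically with the offset from the minimum, so photon counts recorded with the minimum parked at a few known positions pin down where the emitter sits. The toy model below uses made-up numbers and a brute-force least-squares fit purely to illustrate that idea; it is not the estimator used in a real MINFLUX instrument.

```python
import numpy as np

rng = np.random.default_rng(7)

x_true = 3.2                                     # unknown fluorophore position, nm (toy value)
beam_positions = np.array([-20.0, 0.0, 20.0])    # positions of the intensity minimum, nm
brightness = 5.0                                 # photons per nm^2 of offset (arbitrary toy scale)

# Near the doughnut zero the excitation (and hence fluorescence) rate grows roughly
# quadratically with the offset, so expected counts ~ brightness * (x_true - x_beam)^2.
expected = brightness * (x_true - beam_positions) ** 2
counts = rng.poisson(expected)                   # add photon shot noise

# Brute-force least-squares fit of the quadratic model over candidate emitter positions
candidates = np.linspace(-10.0, 10.0, 2001)
residuals = [np.sum((counts - brightness * (x - beam_positions) ** 2) ** 2) for x in candidates]
x_hat = candidates[int(np.argmin(residuals))]
print(f"true position: {x_true} nm, estimate: {x_hat:.2f} nm")
```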
“A real alternative” to existing measurement methods
The researchers demonstrated their technique by precisely determining distances of 1–10 nanometres in polypeptides and proteins. To prove that they were indeed measuring distances smaller than the size of these molecules, they used molecules of a different substance, polyproline, as “rulers” of various lengths.
Polyproline is relatively stiff and was used for a similar purpose in early demonstrations of a method called Förster resonance energy transfer (FRET) that is now widely used in biophysics and molecular biology. However, FRET suffers from fundamental limitations on its accuracy, and Sahl thinks the “arguably surprising” 0.1 nm precision of MINFLUX makes it “a real alternative” for monitoring sub-10-nm distances.
While it had long been clear that MINFLUX should, in principle, be able to resolve distances at the < 5 nm scale and measure them to sub-nm precision, Hell notes that it had not been demonstrated at this scale until now. “Showing that the technique can do this is a milestone in its development and demonstration,” he says. “It is exciting to see that we can resolve fluorescent molecules that are so close together that they literally touch.” Being able to measure these distances with angstrom precision is, Hell adds, “astounding if you bear in mind that all this is done with freely propagating visible light focused by a conventional lens”.
“I find it particularly fascinating that we have now gone to the very size scale of biological molecules and can quantify distances even within them, gaining access to details of their conformation,” Sahl adds.
The researchers say that one of the key prerequisites for this work (and indeed all super-resolution microscopy developed to date) was the sequential ON/OFF switching of the fluorophores emitting fluorescence. Because any cross-talk between the two molecules would have been problematic, one of the main challenges was to identify fluorescent molecules with truly independent behaviour – that is, ones in which the silent (OFF-state) molecule did not affect its emitting (ON-state) neighbour and vice versa.
Looking forward, Hell says he and his colleagues are now looking to develop and establish MINFLUX as a standard tool for unravelling and quantifying the mechanics of proteins.
Adaptive radiotherapy – in which a patient’s treatment is regularly replanned throughout their course of therapy – can compensate for uncertainties and anatomical changes and improve the accuracy of radiation delivery. Now, a team at the Paul Scherrer Institute’s Center for Proton Therapy has performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow.
Proton therapy benefits from the well-defined range of the proton Bragg peak, which enables highly targeted dose delivery to a tumour while minimizing dose to nearby healthy tissues. This precision, however, also makes proton delivery extremely sensitive to anatomical changes along the beam path – arising from variations in mucus, air, muscle or fat in the body – or changes in the tumour’s position and shape.
“For cancer patients who are irradiated with protons, even small changes can have significant effects on the optimal radiation dose,” says first author Francesca Albertini in a press statement.
Online plan adaptation, where the patient remains on the couch during the replanning process, could help address the uncertainties arising from anatomical changes. But while this technique is being introduced into photon-based radiotherapy, daily online adaptation has not yet been applied to proton treatments, where it could prove even more valuable.
To address this shortfall, Albertini and colleagues developed a three-phase DAPT workflow, describing the procedure in Physics in Medicine & Biology. In the pre-treatment phase, two independent plans are created from the patient’s planning CT: a “template plan” that acts as a reference for the online optimized plan, and a “fallback plan” that can be selected on any day as a back-up if necessary.
Next, the online phase involves acquiring a daily CT before each irradiation, while the patient is on the treatment couch. For this, the researchers use an in-room CT-on-rails with a low-dose protocol. They then perform a fully automated re-optimization of the treatment plan based on the daily CT image. If the adapted plan meets the required clinical goals and passes an automated quality assurance (QA) procedure, it is used to treat the patient. If not, the fallback plan is delivered instead.
Finally, in the offline phase, the delivered dose in each fraction is recalculated retrospectively from the log files using a Monte Carlo algorithm. This step enables the team to accurately assess the dose delivered to the patient each day.
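The daily decision logic described above can be summarized in a minimal sketch (illustrative only: the class, the function names and the single organ-at-risk criterion are hypothetical simplifications, not the PSI team’s software):

```python
# Minimal, self-contained sketch of the daily adapt-or-fallback decision.
# All names and the 5% threshold are illustrative assumptions, not the PSI implementation.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    max_oar_dose_increase: float   # % increase to the most-exposed organ at risk (illustrative metric)
    passes_qa: bool                # outcome of the automated QA checks

def choose_plan(adapted: Plan, fallback: Plan, oar_threshold: float = 5.0) -> Plan:
    """Deliver the daily adapted plan only if it meets the clinical goal and passes QA."""
    if adapted.passes_qa and adapted.max_oar_dose_increase <= oar_threshold:
        return adapted
    return fallback

# Example: an adapted plan that marginally exceeds the OAR criterion is rejected.
adapted = Plan("daily adapted", max_oar_dose_increase=6.2, passes_qa=True)
fallback = Plan("fallback", max_oar_dose_increase=0.0, passes_qa=True)
print(choose_plan(adapted, fallback).name)   # -> "fallback"
```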
First clinical implementation
The researchers employed their DAPT protocol in five adults with tumours in rigid body regions, such as the brain or skull base. As this study was designed to demonstrate proof-of-principle and ensure clinical safety, they specified some additional constraints: only the last few consecutive fractions of each patient’s treatment course were delivered using DAPT; the plans used standard field arrangements and safety margins; and the template and fallback plans were kept the same.
“It’s important to note that these criteria are not optimized to fully exploit the potential clinical benefits of our approach,” the researchers write. “As our implementation progresses and matures, we anticipate refining these criteria to maximize the clinical advantages offered by DAPT.”
Across the five patients, the team performed DAPT for 26 treatment fractions. In 22 of these, the online adapted plans were chosen for delivery. In three fractions, the fallback plan was chosen due to a marginal dose increase to a critical structure, while for one fraction, the fallback plan was utilized due to a miscommunication. The team emphasize that all of the adapted plans passed the online QA steps and all agreed well with the log file-based dose calculations.
The daily adapted plans provided target coverage to within 1.1% of the planned dose and, in 92% of fractions, exhibited improved dose metrics to the targets and/or organs-at-risk (OARs). The researchers observed that a non-DAPT delivery (using the fallback plan) could have significantly increased the maximum dose to both the target and OARs. For one patient, this would have increased the dose to their brainstem by up to 10%. In contrast, the DAPT approach ensured that the OAR doses remained within the 5% threshold for all fractions.
Albertini emphasizes, however, that the main aim of this feasibility study was not to demonstrate superior plan quality with DAPT, but rather to establish that it could be implemented safely and efficiently. “The observed decrease in maximum dose to some OARs was a bonus and reinforces the potential benefits of adaptive strategies,” she tells Physics World.
Importantly, the DAPT process took just a few minutes longer than a non-adaptive session, averaging just above 23 min per fraction (including plan adaptation and assessment of clinical goals). Keeping the adaptive treatment within the typical 30-min time slot allocated for a proton therapy fraction is essential to maintain the patient workflow.
To reduce the time requirement, the team automated key workflow components, including the independent dose calculations. “Once registration between the daily and reference images is completed, all subsequent steps are automatically processed in the background, while the users are evaluating the daily structure and plan,” Albertini explains. “Once the plan is approved, all the QA has already been performed and the plan is ready to be delivered.”
Following on from this first-in-patient demonstration, the researchers now plan to use DAPT to deliver full treatments (all fractions), as well as to enable margin reduction and potentially employ more conformal beam angles. “We are currently focused on transitioning our workflow to a commercial treatment planning system and enhancing it to incorporate deformable anatomy considerations,” says Albertini.
Researchers at Tel Aviv University in Israel have developed a method to detect early signs of Parkinson’s disease at the cellular level using skin biopsies. They say that this capability could enable treatment up to 20 years before the appearance of motor symptoms characteristic of advanced Parkinson’s. Such early treatment could reduce neurotoxic protein aggregates in the brain and help prevent the irreversible loss of dopamine-producing neurons.
Parkinson’s disease is the second most common neurodegenerative disease in the world. The World Health Organization reports that its prevalence has doubled in the past 25 years, with more than 8.5 million people affected in 2019. Diagnosis is currently based on the onset of clinical motor symptoms. By the time of diagnosis, however, up to 80% of dopaminergic neurons in the brain may already be dead.
The new method combines a super-resolution microscopy technique, known as direct stochastic optical reconstruction microscopy (dSTORM), with advanced computational analysis to identify and map the aggregation of alpha-synuclein (αSyn), a synaptic protein that regulates transmission in nerve terminals. When it aggregates in brain neurons, αSyn causes neurotoxicity and impacts the central nervous system. In Parkinson’s disease, αSyn begins to aggregate about 15 years before motor symptoms appear.
Importantly, αSyn aggregates also accumulate in the skin. With this in mind, principal investigator Uri Ashery and colleagues developed a method for quantitative assessment of Parkinson’s pathology using skin biopsies from the upper back. The technique, which enables detailed characterization of nano-sized αSyn aggregates, will hopefully facilitate the development of a new molecular biomarker for Parkinson’s disease.
“We hypothesized that these αSyn aggregates are essential for understanding αSyn pathology in Parkinson’s disease,” the researchers write. “We created a novel platform that revealed a unique fingerprint of αSyn aggregates. The analysis detected a larger number of clusters, clusters with larger radii, and sparser clusters containing a smaller number of localizations in Parkinson’s disease patients relative to what was seen with healthy control subjects.”
The researchers used dSTORM to analyse skin biopsies from seven patients with Parkinson’s disease and seven healthy controls, characterizing nanoscale αSyn based on quantitative parameters such as aggregate size, shape, distribution, density and composition.
Their analysis revealed a significant decrease in the ratio of neuronal marker molecules to phosphorylated αSyn molecules (the pathological form of αSyn) in biopsies from Parkinson’s disease patients, suggesting the existence of damaged nerve cells in fibres enriched with phosphorylated αSyn.
The researchers determined that phosphorylated αSyn is organized into dense aggregates of approximately 75 nm in size. They also found that patients with Parkinson’s disease had a higher number of αSyn aggregates than the healthy controls, with larger αSyn clusters (75 nm compared with 69 nm).
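For illustration, the kind of cluster statistics reported here – the number of clusters, the localizations per cluster and the cluster radius – can be extracted from dSTORM localization coordinates with a density-based clustering step. The sketch below uses synthetic data and illustrative parameters, and is not the authors’ actual analysis pipeline (it assumes NumPy and scikit-learn are available):

```python
# Generic sketch: quantifying clusters in (synthetic) dSTORM localization data.
# Parameter values are illustrative, not those used in the study.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic 2D localizations (nm): two dense aggregates plus sparse background
aggregates = [rng.normal(loc=c, scale=30, size=(200, 2)) for c in ([0, 0], [400, 250])]
background = rng.uniform(-500, 900, size=(100, 2))
points = np.vstack(aggregates + [background])

labels = DBSCAN(eps=50, min_samples=10).fit(points).labels_   # label -1 = unclustered

for k in set(labels) - {-1}:
    members = points[labels == k]
    centre = members.mean(axis=0)
    radius = np.sqrt(((members - centre) ** 2).sum(axis=1)).mean()
    print(f"cluster {k}: {len(members)} localizations, mean radius ~{radius:.0f} nm")
```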
“Parkinson’s disease diagnosis based on quantitative parameters represents an unmet need that offers a route to revolutionize the way Parkinson’s disease and potentially other neurodegenerative diseases are diagnosed and treated,” Ashery and colleagues conclude.
In the next phase of this work, supported by the Michael J. Fox Foundation for Parkinson’s Research, the researchers will increase the number of subjects to 90 to identify differences between patients with Parkinson’s disease and healthy subjects.
“We intend to pinpoint the exact juncture at which a normal quantity of proteins turns into a pathological aggregate,” says lead author Ofir Sade in a press statement. “In addition, we will collaborate with computer science researchers to develop a machine learning algorithm that will identify correlations between results of motor and cognitive tests and our findings under the microscope. Using this algorithm, we will be able to predict future development and severity of various pathologies.”
“The machine learning algorithm is intended to spot young individuals at risk for Parkinson’s,” Ashery adds. “Our main target population are relatives of Parkinson’s patients who carry mutations that increase the risk for the disease.”
One of my favourite parts of being an atomic physicist is the variety. I get to work with lasers, vacuums, experimental control software, simulations, data analysis and physics theory.
As I’m transitioning to a more senior position, the skills I use have changed. Rather than doing most of the lab-based work myself, I now have a more supervisory role on some projects. I go to the lab when I can but it’s certainly different. I’m also teaching a second-year quantum mechanics course, which requires its own skillset. I try to use my experience to impart more of an experimental flavour. The field is now in an exciting place where we can not only think about experiments with single quantum systems, but actually do them.
I also work part-time at a trapped-ion quantum computing company, Oxford Ionics, which has grown from about 20 to over 60 people since I started in 2021. Being involved in a team with so many people has taught me a lot about the importance of project management. It’s important to have the right structures in place to deliver complex projects with many moving parts. In addition, most of my company colleagues are not physicists, so it’s important to be able to communicate with people across a range of disciplines.
What do you like best and least about your job?
Experimental physics is never boring, as experiments always find new and wonderful ways to break: 90–99% of the time something needs fixing, but when it works it’s just magical.
I’ve been incredibly lucky to work with a fantastic group of people wherever I’ve been. Experimental physics cannot be done alone and I feel very privileged to work with colleagues who are passionate about what they do and have a wide variety of skills.
I also love the opportunities for outreach activities that my position affords me. Since I started at Oxford, I have led work placements as part of In2scienceUK and more recently helped start a week-long summer school for school students with the National Quantum Computing Centre. In many ways, I think promoting the idea that a career in quantum physics is accessible to anyone as long as they are willing to work hard is the most impactful work I can do.
I do dislike that as you spend longer in a field, more and more non-lab-based tasks creep into your calendar. I also find it difficult to switch between different tasks but that’s the price to pay for being involved in multiple projects.
What do you know today, that you wish you knew when you were starting out in your career?
It’s a difficult feeling for me to shake off even now, but when I started my career, I used to feel afraid to ask questions when I didn’t know something. I think it’s easy to fall into the trap of thinking it’s your fault, or that others will think less of you. However, I believe it’s better to see these instances as opportunities to learn rather than being embarrassed.
Scientifically, I think it’s also really important to be able to take a step back from the weeds of technical work and have an idea of the big-picture physics you’re trying to solve. I would have encouraged my past self to spend more time thinking deeply about physics, even beyond the field I was in. Just a couple of hours a week adds up over time without really taking away from other work.
One last thing I’d tell my past self is to think about boundaries and find a healthy work-life balance. It’s easy to pour yourself completely into a project, but it’s important to do this sustainably and avoid burnout. Other aspects of life are important too.
This episode of the Physics World Weekly podcast features the physicist and engineer Julia Sutcliffe, who is chief scientific adviser to the UK government’s Department for Business and Trade.
In a wide-ranging conversation with Physics World’s Matin Durrani, Sutcliffe explains how she began her career as a PhD physicist before working in systems engineering at British Aerospace – where she worked on cutting-edge technologies including robotics, artificial intelligence, and autonomous systems. They also chat about Sutcliffe’s current role advising the UK government to ensure that policymaking is underpinned by the best evidence.
A new type of composite material is 10 times more efficient at extracting gold from electronic waste than previous adsorbents. Developed by researchers in Singapore, the UK and China, the environmentally-friendly composite is made from graphene oxide and a natural biopolymer called chitosan, and it filters the gold without an external power source, making it an attractive alternative to older, more energy-intensive techniques.
Getting better at extracting gold from electronic waste, or e-waste, is desirable for two reasons. As well as reducing the volume of e-waste, it would lessen our reliance on mining and refining new gold, which involves environmentally hazardous materials such as activated carbon and cyanides. Electronic waste management is a relatively new field, however, and existing techniques like electrolysis are time-consuming and require a lot of energy.
A more efficient and suitable recovery process
Led by Kostya Novoselov and Daria Andreeva of the Institute for Functional Intelligent Materials at the National University of Singapore, the researchers chose graphene and chitosan because both have desirable characteristics for gold extraction. Graphene boasts a high surface area, making it ideal for adsorbing ions, they explain, while chitosan acts as a natural reducing agent, catalytically converting ionic gold into its solid metallic form.
While neither material is efficient enough to compete with conventional methods such as activated carbon on its own, Andreeva says they work well together. “By combining both of them, we enhance both the adsorption capacity of graphene and the catalytic reduction ability of chitosan,” she explains. “The result is a more efficient and suitable gold recovery process.”
High extraction efficiency
The researchers made the composite by getting one-dimensional chitosan macromolecules to self-assemble on two-dimensional flakes of graphene oxide. This assembly process triggers the formation of sites that bind gold ions. The enhanced extracting ability of the composite comes from the fact that the ion binding is cooperative, meaning that an ion binding at one site allows other ions to bind, too. The team had previously used similar methods in studies that focused on structures such as novel membranes with artificial ionic channels, anticorrosion coatings, sensors and actuators, switchable water valves and bioelectrochemical systems.
Once the gold ions are adsorbed onto the graphene surface, the chitosan catalyses the reduction of these ions, converting them from their ionic state into solid metallic gold, Andreeva explains. “This combined action of adsorption and reduction makes the process both highly efficient and environmentally friendly, as it avoids the use of harsh chemicals typically employed in gold recovery from electronic waste,” she says.
The researchers tested the material on a real waste mixture provided by SG Recycle Group SG3R, Pte, Ltd. Using this mixture, which contained gold at a residual concentration of just 3 ppm, they showed that the composite can extract nearly 17 g/g of Au3+ ions and just over 6 g/g of Au+ from solution – values 10 times larger than those of existing gold adsorbents. The material also has an extraction efficiency of more than 99.5 percent by weight (wt%), breaking the current limit of 75 wt%. To top it off, the ion extraction process is ultrafast, taking only around 10 minutes compared with days for other graphene-based adsorbents.
No applied voltage required
The researchers, who report their work in PNAS, say that the multidimensional architecture of the composite’s structure means that no applied voltage is required to adsorb and reduce gold ions. Instead, the technique relies solely on the chemisorption kinetics of gold ions on the heterogeneous graphene oxide/chitosan nanoconfinement channels and the chemical reduction at multiple binding sites. The new process therefore offers a cleaner, more efficient and environmentally-friendly method for recovering gold from electronic waste, they add.
While the present work focused on gold, the team say the technique could be adapted to recover other valuable metals such as silver, platinum or palladium from electronic waste or even mining residues. And that is not all: as well as e-waste, the technology might be applied to a wider range of environmental cleaning efforts, such as filtering out heavy metals from polluted water sources or industrial effluents. “It thus provides a solution for reducing metal contamination in ecosystems,” Andreeva says.
The Singapore researchers are now studying how to regenerate and reuse the composite material itself, to further reduce waste and improve the process’s sustainability. “Our ongoing research is focusing on optimizing the material’s properties, bringing us closer to a scalable, eco-friendly solution for e-waste management and beyond,” Andreeva says.
Weakly interacting massive particles (WIMPs) are prime candidates for dark matter – but the hypothetical particles have never been observed directly. Now, an international group of physicists has proposed a connection between WIMPs and the higher-than-expected flux of antimatter cosmic rays detected by NASA’s Alpha Magnetic Spectrometer (AMS-02) on the International Space Station.
Cosmic rays are high-energy charged particles that are created by a wide range of astrophysical processes including supernovae and the violent regions surrounding supermassive black holes. The origins of cosmic rays are not fully understood so they offer physicists opportunities to look for phenomena not described by the Standard Model of particle physics. This includes dark matter, a hypothetical substance that could account for about 85% of the mass in the universe.
If WIMPs exist, physicists believe that they would occasionally annihilate when they encounter one another to create matter and antimatter particles. Because WIMPs are very heavy, it is possible that these annihilations create antinuclei – the antimatter version of nuclei, comprising antiprotons and antineutrons. Some of these antinuclei could make their way to Earth and be detected as cosmic rays.
Now, a trio of researchers in Spain, Sweden, and the US has done new calculations that suggest that unexpected antinuclei detections made by AMS-02 could shed light on the nature of dark matter. The trio is led by Pedro De La Torre Luque at the Autonomous University of Madrid.
Heavy antiparticles
According to the Standard Model of particle physics, antinuclei should be an extremely small component of the cosmic rays measured by AMS-02. However, excesses of antideuterons (antihydrogen-2), antihelium-3 and antihelium-4 have been glimpsed in data gathered by AMS-02.
In previous work, De La Torre Luque and colleagues explored the possibility that these antinuclei emerged through the annihilation of WIMPs. Using AMS-02 data, the team put new constraints on the hypothetical properties of WIMPs.
Now, the trio has built on this work. “With this information, we calculated the fluxes of antideuterons and antihelium that AMS-02 could detect: both from dark matter, and from cosmic ray interactions with gas in the interstellar medium,” De La Torre Luque says. “In addition, we estimated the maximum possible flux of antinuclei from WIMP dark matter.”
This allowed the researchers to test whether AMS-02’s cosmic ray measurements are really compatible with standard WIMP models. According to De La Torre Luque, their analysis had mixed implications for WIMPs.
“We found that while the antideuteron events measured by AMS-02 are well compatible with WIMP dark matter annihilating in the galaxy, only in optimistic cases can WIMPs explain the detected events of antihelium-3,” he explains. “No standard WIMP scenario can explain the detection of antihelium-4.”
Altogether, the team’s results are promising for proponents of the idea that WIMPs are a component of dark matter. However, the research also suggests that the WIMP model in its current form is incomplete. To be consistent with the AMS-02 data, the researchers believe that a new WIMP model must further push the bounds of the Standard Model.
“If these measurements are robust, we may be opening the window for something very exotic going on in the galaxy that could be related to dark matter,” says De La Torre Luque. “But it could also reveal some unexpected new phenomenon in the universe.” Ultimately, the researchers hope that the precision of their antinuclei measurements could bring us a small step closer to solving one of the deepest, most enduring mysteries in physics.
NASA has released the first images of a full-scale prototype for the six telescopes that will be included in the €1.5bn Laser Interferometer Space Antenna (LISA) mission.
Expected to launch in 2035 and operate for at least four years, LISA is a space-based gravitational-wave mission led by the European Space Agency.
It will comprise three identical satellites that will be placed in an equilateral triangle in space, with each side of the triangle measuring 2.5 million kilometres – more than six times the distance between the Earth and the Moon.
The three craft will send infrared laser beams to each other via twin telescopes in the satellites. The beams will be sent to free-floating golden cubes – each slightly smaller than a Rubik’s cube – that are placed inside the craft.
The system will be able to measure the separation between the cubes down to picometres, or trillionths of a metre. Such subtle changes in the distances between the cubes, as measured by the laser beams, will indicate the presence of a gravitational wave.
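For a sense of scale (an order-of-magnitude estimate, not an official mission figure), a picometre-level change over a 2.5-million-kilometre arm corresponds to a fractional length change, or strain, of roughly

\[
h \;\sim\; \frac{\Delta L}{L} \;=\; \frac{10^{-12}\,\mathrm{m}}{2.5\times 10^{9}\,\mathrm{m}} \;=\; 4\times 10^{-22}.
\]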
The prototype telescope, dubbed the Engineering Development Unit Telescope, was manufactured and assembled by L3Harris Technologies in Rochester, New York.
It is made entirely from an amber-coloured glass-ceramic called Zerodur, which has been manufactured by Schott in Mainz, Germany. The primary mirror of the telescopes is coated in gold to better reflect the infrared lasers and reduce heat loss.
On 25 January ESA’s Science Programme Committee formally approved the start of construction of LISA.
Magnets generally have two poles, north and south, so observing something that behaves like it has only one is extremely unusual. Physicists in Germany and Switzerland have become the latest to claim this rare accolade by making the first direct detection of structures known as orbital angular momentum monopoles. The monopoles, which the team identified in materials known as chiral crystals, had previously only been predicted in theory. The discovery could aid the development of more energy-efficient memory devices.
Traditional electronic devices use the charge of electrons to transfer energy and information. This transfer process is energy-intensive, however, so scientists are looking for alternatives. One possibility is spintronics, which uses the electron’s spin rather than its charge, but more recently another alternative has emerged that could be even more promising. Known as orbitronics, it exploits the orbital angular momentum (OAM) of electrons as they revolve around an atomic nucleus. By manipulating this OAM, it is in principle possible to generate large magnetizations with very small electric currents – a property that could be used to make energy-efficient memory devices.
Chiral topological semi-metals with “built-in” OAM textures
The problem is that materials that support such orbital magnetizations are hard to come by. However, Niels Schröter, a physicist at the Max Planck Institute of Microstructure Physics in Halle, Germany, who co-led the new research, explains that theoretical work carried out in the 1980s suggested that certain crystalline materials with a chiral structure could generate an orbital magnetization that is isotropic, or uniform in all directions. “This means that the materials’ magnetoelectric response is also isotropic – it depends solely on the direction of the injected current and not on the crystals’ orientation,” Schröter says. “This property could be useful for device applications since it allows for a uniform performance regardless of how the crystal grains are oriented in a material.”
In 2019, three experimental groups (including the one involved in the latest work) independently discovered a type of material called a chiral topological semimetal that seemed to fit the bill. Atoms in these semimetals are arranged in a helical pattern, which produces something that behaves like a solenoid on the nanoscale, creating a magnetic field whenever an electric current passes through it.
The advantage of these materials, Schröter explains, is that they have “built-in” OAM textures. What is more, he says the specific texture discovered in the most recent work – an OAM monopole – is “special because the magnetic field response can be very large – and isotropic, too”.
Visualizing monopoles
Schröter and colleagues studied chiral topological semimetals made from either palladium and gallium or platinum and gallium (PdGa or PtGa). To understand the structure of these semimetals, they directed circularly polarized X-rays from the Swiss Light Source (SLS) onto samples of PdGa and PtGa prepared by Claudia Felser’s group at the Max Planck Institute in Dresden. In this technique, known as circular dichroism in angle-resolved photoemission spectroscopy (CD-ARPES), the synchrotron light ejects electrons from the sample, and the angles and energies of these electrons provide information about the material’s electronic structure.
“This technique essentially allows us to ‘visualize’ the orbital texture, almost like capturing an image of the OAM monopoles,” Schröter explains. “Instead of looking at the reflected light, however, we observe the emission pattern of electrons.” The new monopoles, he notes, reside in momentum (or reciprocal) space, which is the Fourier transform of our everyday three-dimensional space.
Complex data
One of the researchers’ main challenges was figuring out how to interpret the CD-ARPES data. This turned out to be anything but straightforward. Working closely with Michael Schüler’s theoretical modelling group at the Paul Scherrer Institute in Switzerland, they managed to identify the OAM textures hidden within the complexity of the measurement figures.
Contrary to what was previously thought, they found that the CD-ARPES signal was not directly proportional to the OAMs. Instead, it rotated around the monopoles as the energy of the photons in the synchrotron light source was varied. This observation, they say, proves that monopoles are indeed present.
The findings, which are detailed in Nature Physics, could have important implications for future magnetic memory devices. “Being able to switch small magnetic domains with currents passed through such chiral crystals opens the door to creating more energy-efficient data storage technologies, and possibly also logic devices,” Schröter says. “This study will likely inspire further research into how these materials can be used in practical applications, especially in the field of low-power computing.”
The researchers’ next task is to design and build prototype devices that exploit the unique properties of chiral topological semimetals. “Finding these monopoles has been a focus for us ever since I started my independent research group at the Max Planck Institute for Microstructure Physics in 2021,” Schröter tells Physics World. The team’s new goal, he adds, is to “demonstrate functionalities and create devices that can drive advancements in information technologies”.
To achieve this, he and his colleagues are collaborating with partners at the universities of Regensburg and Berlin. They aim to establish a new centre for chiral electronics that will, he says, “serve as a hub for exploring the transformative potential of chiral materials in developing next-generation technologies”.
Recent battery papers commonly employ interpretation models in which diffusion impedances appear in series with the interfacial impedance. Such models are fundamentally flawed because the diffusion impedance should be part of the interfacial impedance. A general approach is presented that shows how the charge-transfer resistance and diffusion resistance are functions of the concentration of reacting species at the electrode surface. The resulting impedance model incorporates diffusion impedances as part of the interfacial impedance.
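Schematically (a generic, textbook-style contrast rather than the specific model developed in the talk), the criticized approach places a diffusion impedance \(Z_D\) in series with the interfacial impedance,

\[
Z(\omega) \;=\; R_e + \left[\, j\omega C_{\mathrm{dl}} + \frac{1}{R_{\mathrm{ct}}} \right]^{-1} + Z_D(\omega),
\]

whereas nesting the diffusion impedance inside the faradaic branch makes it part of the interfacial impedance,

\[
Z(\omega) \;=\; R_e + \left[\, j\omega C_{\mathrm{dl}} + \frac{1}{R_{\mathrm{ct}} + Z_D(\omega)} \right]^{-1},
\]

with both \(R_{\mathrm{ct}}\) and \(Z_D\) depending on the surface concentration of the reacting species.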
An interactive Q&A session follows the presentation.
Mark Orazem obtained his BS and MS degrees from Kansas State University and his PhD in 1983 from the University of California, Berkeley. In 1983, he began his career as assistant professor at the University of Virginia, and in 1988 joined the faculty of the University of Florida, where he is Distinguished Professor of Chemical Engineering and Associate Chair for Graduate Studies. Mark is a fellow of The Electrochemical Society, International Society of Electrochemistry, and American Association for the Advancement of Science. He served as President of the International Society of Electrochemistry and co-authored, with Bernard Tribollet of the Centre national de la recherche scientifique (CNRS), the textbook entitled Electrochemical Impedance Spectroscopy, now in its second edition. Mark received the ECS Henry B. Linford Award, ECS Corrosion Division H. H. Uhlig Award, and with co-author Bernard Tribollet, the 2019 Claude Gabrielli Award for contributions to electrochemical impedance spectroscopy. In addition to writing books, he has taught short courses on impedance spectroscopy for The Electrochemical Society since 2000.
Frequency measurements using multi-qubit entangled states have been performed by two independent groups in the US. These entangled states have correlated errors, resulting in measurement precisions better than the standard quantum limit. One team is based in Colorado and it measured the frequency of an atomic clock with greater precision than possible using conventional methods. The other group is in California and it showed how entangled states could be used in quantum sensing.
Atomic clocks are the most accurate timekeeping devices we have. They work by locking an ultraprecise laser to a narrow-linewidth transition in an atom, with a frequency comb used to count the optical oscillations. The higher the transition’s frequency, the faster the clock ticks and the more precisely it can keep time. The clock with the best precision today is operated by Jun Ye’s group at JILA in Boulder, Colorado, together with colleagues. After running for the age of the universe, this clock would only be wrong by 0.01 s.
The conventional way of improving precision is to use higher-energy, narrower transitions such as those found in highly charged ions and nuclei. These pose formidable challenges, however, both in locating the transitions and in producing stable high-frequency lasers to excite them.
Standard quantum limit
An alternative is to operate existing clocks in more sophisticated ways. “In an optical atomic clock, you’re comparing the oscillations of an atomic superposition with the frequency of a laser,” explains JILA’s Adam Kaufman. “At the end of the experiment, that atom can only be in the excited state or in the ground state, so to get an estimate of the relative frequencies you need to sample that atom many times, and the precision goes like one over the square root of the number of samples.” This is the standard quantum limit, and it arises because each atom collapses randomly on measurement, producing random noise in the frequency estimate.
If, however, multiple atoms are placed into a Greenberger–Horne–Zeilinger (GHZ) entangled state and measured simultaneously, information can be acquired at a higher frequency without increasing the fundamental frequency of the transition. JILA’s Alec Cao explains, “Two atoms in a GHZ state are not just two independent atoms. Both the atoms are in the zero state, so the state has an energy of zero, or both the atoms are in the upper state so it has an energy of two. And as you scale the size of the system the energy difference increases.”
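In the standard notation (a textbook scaling argument rather than a result from either paper), interrogating N uncorrelated atoms gives a phase uncertainty at the standard quantum limit, while an N-atom GHZ state accumulates phase N times faster and can in principle reach the Heisenberg limit:

\[
\Delta\phi_{\mathrm{SQL}} \;=\; \frac{1}{\sqrt{N}}, \qquad \Delta\phi_{\mathrm{GHZ}} \;=\; \frac{1}{N}.
\]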
Unfortunately the lifetime of a GHZ state is inversely proportional to its size. Therefore, though precision can be acquired in a shorter time, the time window for measurement also drops, cancelling out the benefit. Mark Saffman of the University of Wisconsin-Madison explains, “This idea was suggested about 20 years ago that you could get around this by creating GHZ states of different sizes, and using the smallest GHZ state to measure the least significant bit of your measurement, and as you go to larger and larger GHZ states you’re adding more significant bits to your measurement result.”
In the Colorado experiment, Kaufman, Cao and colleagues used a novel, multi-qubit entangling technique to create GHZ states of Rydberg atoms in a programmable optical tweezer lattice. A Rydberg atom is an atom with one or more electrons in a highly-excited state. They showed that, when interrogated for short times, four-atom GHZ states achieved higher precisions than could be achieved with the same number of uncorrelated atoms. They also constructed gates of up to eight qubits. However, owing to their short lifetimes, they were unable to beat the standard quantum limit with these.
Cascade of GHZ qubits
The Colorado team therefore constructed a cascade of GHZ qubits of increasing sizes, with the largest containing eight atoms. They showed that the fidelity achieved by the cascade was superior to the fidelity achieved by a single large GHZ qubit. Cao compares this to using the large GHZ state on a clock as the second hand while progressively smaller states act as the minute and hour hands. The team did not demonstrate higher phase sensitivity than could theoretically be achieved with the same number of unentangled atoms, but Cao says this is simply a technical challenge.
Meanwhile in California, Manuel Endres and colleagues at Caltech also used GHZ states to do precision spectroscopy on the frequency of an atomic clock using Rydberg atoms in an optical tweezer array. They used a slightly different technique for preparing the GHZ states. This did not allow them to prepare GHZ states as large as those of their Colorado counterparts, although Endres argues that their technique should be more scalable. The Caltech work, however, focused on mapping the output data onto “ancilla” qubits and demonstrating a universal set of quantum logic operations.
“The question is, ‘How can a quantum computer help you for a sensor?’” says Endres. “If you had a universal quantum computer that somehow produced a GHZ state on your sensor you could improve the sensing capabilities. The other thing is to take the signal from a quantum computer and do quantum post-processing on that signal. The vision in our [work] is to have a quantum computer integrated with a sensor.”
Saffman, who was not involved with either group, praises the work of both teams. He congratulates the Coloradans for setting out to build a better clock and succeeding – and praises the Californians for going in “another direction” with their GHZ states. Saffman says he would like to see the researchers produce larger GHZ states and show that such states can not only improve a clock subject to the same limitations as one read out with uncorrelated atoms, but can produce the world’s best clock overall.
Since 1988 Physics World has boasted among its authors some of the most eminent physicists of the 20th and 21st centuries, as well as some of the best popular-science authors. But while I am, in principle, aware of this, it can still be genuinely exciting to discover who wrote for Physics World before I joined the team in 2011. And for me – a self-avowed book nerd – the most exciting discovery was an article written by Isaac Asimov in 1990.
Asimov is best remembered for his hard science fiction. His Foundation trilogy (1951–1953) and decades of robot stories first collected in I, Robot (1950) are so seminal they have contributed words and concepts to the popular imagination, far beyond actual readers of his work. If you’ve ever heard of the Laws of Robotics (the first of which is that “a robot may not injure a human being or, through inaction, allow a human being to come to harm”), that was Asimov’s work.
I was introduced to Asimov through what remains the most “hard physics”-heavy sci-fi I have ever tackled: The Gods Themselves (1972). In this short novel, humans make contact with a parallel universe and manage to transfer energy from a parallel world to Earth. When a human linguist attempts to communicate with the “para-men”, he discovers this transfer may be dangerous. The narrative then switches to the parallel world, which is populated by the most “alien” aliens I can remember encountering in fiction.
Underlying this whole premise, though, is the fact that in the parallel world, the strong nuclear force, which binds protons and neutrons together, is even stronger than it is in our own. And Asimov was a good enough scientist that he worked into his novel everything that would be different – subtly or significantly – were this the case. It’s a physics thought experiment; a highly entertaining one that also encompasses ethics, astrobiology, cryptanalysis and engineering.
Of course, Asimov wrote non-fiction, too. His 500+ books include such titles as Understanding Physics (1966), Atom: Journey Across the Subatomic Cosmos (1991) and the extensive Library of the Universe series (1988–1990). The last two of these even came out while Physics World was being published.
So what did this giant of sci-fi and science communication write about for Physics World?
It was, of all things, a review of a book by someone else: specifically, Think of a Number by Malcolm E Lines, a British mathematician. Lines isn’t nearly so famous as his reviewer, but he was still writing popular-science books about mathematics as recently as 2020. Was Asimov impressed? You’ll have to read his review to find out.
The webinar is directly linked with a special issue of Plasma Physics and Controlled Fusion on Advances in the Physics Basis of Negative Triangularity Tokamaks, featuring contributions from all of the speakers as well as many more papers from the leading groups researching this fascinating topic.
In recent years the fusion community has begun to focus on the practical engineering of tokamak power plants. From this, it has become clear that the power exhaust problem – extracting the energy produced by fusion without melting the plasma-facing components – is just as important and challenging as plasma confinement. To this end, negative triangularity plasma shaping holds unique promise.
Conceptually, negative triangularity is simple. Take the standard positive triangularity plasma shape, ubiquitous among tokamaks, and flip it so that the triangle points inwards. By virtue of this change in shape, negative triangularity plasmas have been experimentally observed to dramatically improve energy confinement, sometimes by more than a factor of two. Simultaneously, the plasma shape is also found to robustly prevent the transition to the improved confinement regime H-mode. While this may initially seem a drawback, the confinement improvement can enable negative triangularity to still achieve similar confinement to a positive triangularity H-mode. In this way, it robustly avoids the typical difficulties of H-mode: damaging edge localized modes (ELMs) and the narrow scrape-off layer (SOL) width. This is the promise of negative triangularity, an elegant and simple path to alleviating power exhaust while preserving plasma confinement.
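For reference (a standard definition, not one specific to the special issue), the triangularity compares the major radius of the plasma’s highest point, \(R_{\mathrm{up}}\), with that of its geometric centre, \(R_{\mathrm{geo}}\), normalized to the minor radius \(a\):

\[
\delta \;=\; \frac{R_{\mathrm{geo}} - R_{\mathrm{up}}}{a},
\]

so a conventional D-shaped plasma has \(\delta > 0\), while flipping the shape so that it points towards the central column gives \(\delta < 0\).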
The biggest obstacle at present is uncertainty. No tokamak in the world is designed to create negative triangularity plasmas, and the concept has received only a fraction of the theory community’s attention. In this webinar, through both theory and experiment, we will explore the knowns and unknowns of negative triangularity and evaluate its future as a power plant solution.
Justin Ball (chair) is a research scientist at the Swiss Plasma Center at EPFL in Lausanne, Switzerland. He earned his master’s degree from MIT in 2013 and his PhD from Oxford University in 2016, studying the effects of plasma shaping in tokamaks, for which he was awarded the European Plasma Physics PhD Award. In 2019, he and Jason Parisi published the popular science book, The Future of Fusion Energy. Currently, Justin is the principal investigator of the EUROfusion TSVV 2 project, a ten-person team evaluating the reactor prospects of negative triangularity using theory and simulation.
Alessandro Balestri is a PhD student at the Swiss Plasma Center (SPC) located within the École Polytechnique Fédérale de Lausanne (EPFL). His research focuses on using experiments and gyrokinetic simulations to achieve a deep understanding of how negative triangularity reduces turbulent transport in tokamak plasmas and how this beneficial effect can be optimized in view of a fusion power plant. He received his bachelor’s and master’s degrees in physics at the University of Milano-Bicocca, where he carried out a thesis on the first gyrokinetic simulations for the negative triangularity option of the novel Divertor Tokamak Test facility.
Andrew “Oak” Nelson is an associate research scientist with Columbia University where he specializes in negative triangularity (NT) experiments and reactor design. Oak received his PhD in plasma physics from Princeton University in 2021 for work on the H-mode pedestal in DIII-D and has since dedicated his career to uncovering mechanisms to mitigate the power-handling needs faced by tokamak fusion pilot plants. Oak is an expert in the edge regions of NT plasmas and one of the co-leaders of the EU-US Joint Task Force on Negative Triangularity Plasmas. In addition to NT work, Oak consults regularly on various physics topics for Commonwealth Fusion Systems and heads several fusion-outreach efforts.
Tim Happel is the head of the Plasma Dynamics Division at the Max Planck Institute for Plasma Physics in Garching near Munich. His research centres around turbulence and tokamak operational modes with enhanced energy confinement. He is particularly interested in the physics of the Improved Energy Confinement Mode (I-Mode) and plasmas with negative triangularity. During his PhD, which he received in 2010 from the University Carlos III in Madrid, he developed a Doppler backscattering system for the investigation of plasma flows and their interaction with turbulent structures. For this work, he was awarded the Itoh Prize for Plasma Turbulence.
Haley Wilson is a PhD candidate studying plasma physics at Columbia University. Her main research interest is the integrated modelling of reactor-class tokamak core scenarios, with a focus on highly radiative, negative triangularity scenarios. The core modelling of MANTA is her first published work in this area, but her most recent manuscript submission expands the MANTA study to a broader operational space. She was recently selected for an Office of Science Graduate Student Research award, to work with Oak Ridge National Laboratory on whole device modelling of negative triangularity tokamaks using the FUSE framework.
Olivier Sauter obtained his PhD at CRPP-EPFL in Lausanne, Switzerland, in 1992, followed by postdoc positions at General Atomics (1992–93) and ITER San Diego (1995–96), work that led to the bootstrap current coefficients and to experimental studies of neoclassical tearing modes. He has been a JET Task Force Leader and a Eurofusion Research Topic Coordinator, received the 2013 John Dawson Award for excellence in plasma physics research, and has been an ITER Scientist Fellow in the area of integrated modelling since 2016. He is a senior scientist at SPC-EPFL, supervising several PhD theses, and is active with AUG, DIII-D, JET, TCV and WEST, focusing on real-time simulations and negative triangularity plasmas.
About this journal
Plasma Physics and Controlled Fusion is a monthly publication dedicated to the dissemination of original results on all aspects of plasma physics and associated science and technology.
Editor-in-chief: Jonathan Graves, University of York, UK, and EPFL, Switzerland.