“What makes a good astronaut?” asks director Hannah Berryman in the opening scene of Spacewoman. It’s a question few can answer better than Eileen Collins. As the first woman to pilot and command a NASA Space Shuttle, her career was marked by historic milestones, extraordinary challenges and personal sacrifices. Collins looks down the lens of the camera and, as she pauses for thought, we cut to footage of her being suited up in astronaut gear for the third time. “I would say…a person who is not prone to panicking.”
In Spacewoman, Berryman crafts a thoughtful, emotionally resonant documentary that traces Collins’s life from a determined young girl in Elmira, New York, to a spaceflight pioneer.
The film’s strength lies in its compelling balance of personal narrative and technical achievement. Through intimate interviews with Collins, her family and former colleagues, alongside a wealth of archival footage, Spacewoman paints a vivid portrait of a woman whose journey was anything but straightforward. From growing up in a working-class family affected by her parents’ divorce and Hurricane Agnes’s destruction, to excelling in the male-dominated world of aviation and space exploration, Collins’s resilience shines through.
Berryman wisely centres the film on the four key missions that defined Collins’s time at NASA. While this approach necessitates a brisk overview of her early military career, it allows for an in-depth exploration of the stakes, risks and triumphs of spaceflight. Collins’s pioneering 1995 mission, STS-63, saw her pilot the Space Shuttle Discovery in the first rendezvous with the Russian space station Mir, a mission fraught with political and technical challenges. The archival footage from this and subsequent missions provides gripping, edge-of-your-seat moments that demonstrate both the precision and unpredictability of space travel.
Perhaps Spacewoman’s most affecting thread is its examination of how Collins’s career intersected with her family life. Her daughter, Bridget, born shortly after her first mission, offers a poignant perspective on growing up with a mother whose job carried life-threatening risks. In one of the film’s most emotionally charged scenes, Collins recounts explaining the Challenger disaster to a young Bridget. Despite her mother’s assurances that NASA had learned from the tragedy, the subsequent Columbia disaster two weeks later underscores the constant shadow of danger inherent in space exploration.
These deeply personal reflections elevate Spacewoman beyond a straightforward biographical documentary. Collins’s son Luke, though younger and less directly affected by his mother’s missions, also shares touching memories, offering a fuller picture of a family shaped by space exploration’s highs and lows. Berryman’s thoughtful editing intertwines these recollections with historic footage, making the stakes feel immediate and profoundly human.
The film’s tension peaks during Collins’s final mission, STS-114, the first “return to flight” after Columbia. As the mission teeters on the brink of disaster due to familiar technical issues, Berryman builds a heart-pounding narrative, even for viewers unfamiliar with the complexities of spaceflight. Without getting bogged down in technical jargon, she captures the intense pressure of a mission fraught with tension – for those on Earth, at least.
Berryman’s previous films include Miss World 1970: Beauty Queens and Bedlam and Banned, the Mary Whitehouse Story. In a recent episode of the Physics World Stories podcast, she told me that she was inspired to make the film after reading Collins’s autobiography Through the Glass Ceiling to the Stars. “It was so personal,” she said, “it took me into space and I thought maybe we could do that with the viewer.” Collins herself joined us for that podcast episode and I found her to be that same calm, centred, thoughtful person we see in the film and who NASA clearly very carefully chose to command such an important mission.
Spacewoman isn’t just about near-misses and peril. It also celebrates moments of wonder: Collins describing her first sunrise from space or recalling the chocolate shuttles she brought as gifts for the Mir cosmonauts. These light-hearted anecdotes reveal her deep appreciation for the unique experience of being an astronaut. On the podcast, I asked Collins what one lesson she would bring from space to life on Earth. After her customary moment’s pause for thought, she replied, “Reading books about science fiction is very important.” She was a fan of science fiction in her younger years, which enabled her to dream of the future that she realized at NASA and in space. But, she told me, these days she also reads about real science of the future (she was deep into a book on artificial intelligence when we spoke) and history too. Looking back at Collins’s history in space certainly holds lessons for us all.
Berryman’s directorial focus ultimately circles back to a profound question: how much risk is acceptable in the pursuit of human progress? Spacewoman suggests that those committed to something greater than themselves are willing to risk everything. Collins’s career embodies this ethos, defined by an unshakeable resolve, even in the face of overwhelming odds.
In the film’s closing moments, we see Collins speaking to a wide-eyed girl at a book signing. The voiceover from interviews talks of the women slated to be instrumental in humanity’s return to the Moon and future missions to Mars. If there’s one thing I would change about the film, it’s that the final word is given to someone other than Collins. The message is a fitting summation of her life and legacy, but I would like to have seen it delivered with her understated confidence of someone who has lived it. It’s a quibble though in a compelling film that I would recommend to anyone with an interest in space travel or the human experience here on Earth.
When someone as accomplished as Collins says that you need to work hard and practise, practise, practise it has a gravitas few others can muster. After all, she spent 10 years practising to fly the Space Shuttle – and got to do it for real twice. We see Collins speak directly to the wide-eyed girl in a flight suit as she signs her book and, as she does so, you can feel the words really hit home precisely because of who says them: “Reach for the stars. Don’t give up. Keep trying because you can do it.”
Spacewoman is more than a tribute to a trailblazer; it’s a testament to human perseverance, curiosity and courage. In Collins’s story, Berryman finds a gripping, deeply personal narrative that will resonate with audiences across the planet.
Spacewoman premiered at DOC NYC in November 2024 and is scheduled for theatrical release in 2025. A Haviland Digital Film in association with Tigerlily Productions.
Watch this short video filmed at the APS March Meeting in 2024, where Mark Elo, chief marketing officer of Tabor Quantum Solutions, introduces the Echo-5Q, which he explains is an industry collaboration between FormFactor and Tabor Quantum Systems, using the QuantWare quantum processing unit (QPU).
Elo points out that it is an out-of-the-box solution, allowing customers to order a full-stack system, including the software, refrigeration, control electronics and the actual QPU. With the Echo-5Q, the system gets delivered and installed, so that the customer can start doing quantum measurements immediately. He explains that the Echo-5Q is designed at a price and feature point that increases the accessibility of on-site quantum computing.
Brandon Boiko, senior applications engineer with FormFactor, describes how FormFactor developed the dilution refrigeration technology that the qubits get installed into. Boiko explains that the product has been designed to reduce the cost of entry into the quantum field – made accessible through FormFactor’s test-and-measurement programme, which allows people to bring their samples on site to take measurements.
Alessandro Bruno is founder and CEO of QuantWare, which provides the quantum processor for the Echo-5Q – the part that sits at the millikelvin stage of the dilution refrigerator and hosts five qubits. Bruno hopes that the Echo-5Q will democratize access to quantum devices – for education, academic research and start-ups.
As our world becomes ever more dependent on technology, an important question emerges: how much can we truly rely on that technology? To help researchers explore this question, IOP Publishing (which publishes Physics World) is launching a new peer-reviewed, open-access publication called Journal of Reliability Science and Engineering (JRSE). The journal will operate in partnership with the Institute of Systems Engineering (part of the China Academy of Engineering Physics) and will benefit from the editorial and commissioning support of the University of Electronic Science and Technology of China, Hunan University and the Beijing Institute of Structure and Environment Engineering.
“Today’s society relies much on sophisticated engineering systems to manufacture products and deliver services,” says JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. “Such systems include power plants, vehicles, transportation and manufacturing. The safe, reliable and economical operation of all these requires the continuing advancement of reliability science and engineering.”
Defining reliability
The reliability of an object is commonly defined as the probability that it will perform its intended function adequately for a specified period of time. “The object in question may be a human being, product, system, or process,” Zuo explains. “Depending on its nature, corresponding sub-disciplines are human-, material-, structural-, equipment-, software- and system reliability.”
Key concepts in reliability science include failure modes, failure rates and reliability function and coherency, as well as measurements such as mean time-to-failure, mean time between failures, availability and maintainability. “Failure modes can be caused by effects like corrosion, cracking, creep, fracture, fatigue, delamination and oxidation,” Zuo explains.
To analyse such effects, researchers may use approaches such as fault tree analysis (FTA); failure modes, effects and criticality analysis (FMECA); and binary decomposition, he adds. These and many other techniques lie within the scope of JRSE, which aims to publish high-quality research on all aspects of reliability. This could, for example, include studies of failure modes and damage propagation as well as techniques for managing them and related risks through optimal design and reliability-centred maintenance.
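As a simple illustration of how these quantities fit together (a generic textbook example, not one drawn from the journal), the constant-failure-rate model links the failure rate, the reliability function and the mean time to failure:

```python
import math

# Generic constant-failure-rate (exponential) reliability model.
# Illustrative numbers only, not data from JRSE or any cited study.
failure_rate = 1e-4    # lambda, failures per hour
mission_time = 5000.0  # hours of operation

reliability = math.exp(-failure_rate * mission_time)  # R(t) = exp(-lambda * t)
mttf = 1.0 / failure_rate                             # mean time to failure, hours
mttr = 24.0                                           # assumed mean time to repair, hours
availability = mttf / (mttf + mttr)                   # steady-state availability

print(f"R({mission_time:.0f} h) = {reliability:.3f}")             # about 0.61
print(f"MTTF = {mttf:.0f} h, availability = {availability:.4f}")  # 10000 h, ~0.998
```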
A focus on extreme environments
To give the journal structure, Zuo and his colleagues identified six major topics: reliability theories and methods; physics of failure and degradation; reliability testing and simulation; prognostics and health management; reliability engineering applications; and emerging topics in reliability-related fields.
As well as regular issues published four times a year, JRSE will also produce special issues. A special issue on system reliability and safety in varying and extreme environments, for example, focuses on reliability and safety methods, physical/mathematical and data-driven models, reliability testing, system lifetime prediction and performance evaluation. Intelligent operation and maintenance of complex systems in varying and extreme environments are also covered.
Interest in extreme environments was one of the factors driving the journal’s development, Zuo says, due to the increasing need for modern engineering systems to operate reliably in highly demanding conditions. As examples, he cites wind farms being built further offshore; faster trains; and autonomous systems such as drones, driverless vehicles and social robots that must respond quickly and safely to ever-changing surroundings in close proximity to humans.
“As a society, we are setting ever higher requirements on critical systems such as the power grid and Internet, water distribution and transport networks,” he says. “All of these demand further advances in reliability science and engineering to develop tools for the design, manufacture and operation as well as the maintenance of today’s sophisticated engineering systems.”
The go-to platform for researchers and industrialists alike
Another factor behind the journal’s launch is that previously, there were no international journals focusing on reliability research by Chinese organizations. Since the discipline’s leaders include several such organizations, Zuo says the lack of international visibility has seriously limited scientific exchange and promotion of reliability research between China and the global community. He hopes the new journal will remedy this. “Notable features of the journal include gold open access (thanks to our partnership with IOP Publishing, a learned-society publisher that does not have shareholders) and a fast review process,” he says.
In general, the number of academic journals focusing on reliability science and engineering is limited, he adds. “JRSE will play a significant role in promoting the advances in reliability research by disseminating cutting-edge scientific discoveries and creative reliability assurance applications in a timely way.
“We are aiming that the journal will become the go-to platform for reliability researchers and industrialists alike.”
The first issue of JRSE will be published in March 2025, and its editors welcome submissions of original research reports as well as review papers co-authored by experts. “There will also be space for perspectives, comments, replies, and news insightful to the reliability community,” says Zuo. In the future, the journal plans to sponsor reliability-related academic forums and international conferences.
With over 100 experts from around the world on its editorial board, Zuo describes JRSE as scientist-led, internationally-focused and highly interdisciplinary. “Reliability is a critical measure of performance of all engineering systems used in every corner of our society,” he says. “This journal will therefore be of interest to disciplines such as mechanical-, electrical-, chemical-, mining- and aerospace engineering as well as the mathematical and life sciences.”
The anomalous and ultra-low thermal expansion of cordierite results from the interplay between lattice vibrations and the elastic properties of the material. That is the conclusion of Martin Dove at China’s Sichuan University and Queen Mary University of London in the UK and Li Li at the Civil Aviation Flight University of China. They showed that the material’s unusual behaviour stems from direction-varying elastic forces in its lattice, which act to vary cordierite’s thermal expansion along different directions.
Cordierite is a naturally-occurring mineral that can also be synthesized. Thanks to its remarkable thermal properties, it is used in products ranging from pizza stones to catalytic converters. When heated to high temperatures, it undergoes ultra-low thermal expansion along two directions, and it shrinks a tiny amount along the third direction. This makes it incredibly useful as a material that can be heated and cooled without changing size or suffering damage.
Despite its widespread use, scientists lack a fundamental understanding of how cordierite’s anomalous thermal expansion arises from the properties of its crystal lattice. Normally, thermal expansion (positive or negative) is understood in terms of Grüneisen parameters. These describe how vibrational modes (phonons) in the lattice cause it to expand or contract along each axis as the temperature changes.
Negative Grüneisen parameters describe a lattice that shrinks when heated, and are seen as key to understanding thermal contraction of cordierite. However, the material’s thermal response is not isotropic (it contracts along only one axis when heated to high temperatures), so understanding cordierite in terms of its Grüneisen parameters alone is difficult.
Advanced molecular dynamics
In their study, Dove and Li used advanced molecular dynamics simulations to accurately model the behaviour of atoms in the cordierite lattice. Their simulations closely matched experimental observations of the material’s thermal expansion, providing them with key insights into why the material has a negative thermal expansion in just one direction.
“Our research demonstrates that the anomalous thermal expansion of cordierite originates from a surprising interplay between atomic vibrations and elasticity,” Dove explains. The elasticity is described in the form of an elastic compliance tensor, which predicts how a material will distort in response to a force applied along a specific direction.
At lower temperatures, lattice vibrations occur at lower frequencies. In this case, the simulations predicted negative thermal expansion in all directions – which is in line with observations of the material.
At higher temperatures, the lattice becomes dominated by high-frequency vibrations. In principle, this should result in positive thermal expansion in all three directions. Crucially, however, Dove and Li discovered that this expansion is cancelled out by the material’s elastic properties, as described by its elastic compliance tensor.
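In the standard Grüneisen description of an anisotropic crystal – a textbook relation, not one quoted by Dove and Li – the linear expansion coefficient along axis i combines the vibrational contributions with the elastic compliances, schematically α_i ≈ (1/V) Σ_j s_ij Σ_k c_k γ_k^(j), where s_ij are components of the compliance tensor, c_k are mode heat capacities and γ_k^(j) are anisotropic Grüneisen parameters. Because the compliances weight each direction differently, an overall positive set of Grüneisen parameters can still leave one axis with near-zero or negative expansion.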
What is more, the unique arrangement of the crystal lattice meant that this tensor varied depending on the direction of the applied force, creating an imbalance that amplifies differences between the material’s expansion along each axis.
Cancellation mechanism
“This cancellation mechanism explains why cordierite exhibits small positive expansion in two directions and small negative expansion in the third,” Dove explains. “Initially, I was sceptical of the results. The initial data suggested uniform expansion behaviour at both high and low temperatures, but the final results revealed a delicate balance of forces. It was a moment of scientific serendipity.”
Altogether, Dove and Li’s result clearly shows that cordierite’s anomalous behaviour cannot be understood by focusing solely on the Grüneisen parameters of its three axes. It is crucial to take its elastic compliance tensor into account.
In solving this long-standing mystery, the duo now hope their results could help researchers to better predict how cordierite’s thermal expansion will vary at different temperatures. In turn, they could help to extend the useful applications of the material even further.
“Anisotropic materials like cordierite hold immense potential for developing high-performance materials with unique thermal behaviours,” Dove says. “Our approach can rapidly predict these properties, significantly reducing the reliance on expensive and time-consuming experimental procedures.”
A new way to measure the temperatures of objects by studying the effect of their black-body radiation on Rydberg atoms has been demonstrated by researchers at the US National Institute of Standards and Technology (NIST). The system, which provides a direct, calibration-free measure of temperature based on the fact that all atoms of a given species are identical, has a systematic temperature uncertainty of around 1 part in 2000.
The black-body temperature of an object is defined by the spectrum of the photons it emits. In the laboratory and in everyday life, however, temperature is usually measured by comparison to a reference. “Radiation is inherently quantum mechanical,” says NIST’s Noah Schlossberger, “but if you go to the store and buy a temperature sensor that measures the radiation via some sort of photodiode, the rate of photons converted into some value of temperature that you see has to be calibrated. Usually that’s done using some reference surface that’s held at a constant temperature via some sort of contact thermometer, and that contact thermometer has been calibrated to another contact thermometer – which in some indirect way has been tied into some primary standard at NIST or some other facility that offers calibration services.” However, each step introduces potential error.
This latest work offers a much more direct way of determining temperature. It involves measuring the black-body radiation emitted by an object directly, using atoms as a reference standard. Such a sensor does not need calibration because quantum mechanics dictates that every atom of the same type is identical. In Rydberg atoms the electrons are promoted to highly excited states. This makes the atoms much larger, less tightly bound and more sensitive to external perturbations. As part of an ongoing project studying their potential to detect electromagnetic fields, the researchers turned their attention to atom-based thermometry. “These atoms are exquisitely sensitive to black-body radiation,” explains NIST’s Christopher Holloway, who headed the work.
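To get a rough feel for this sensitivity, consider the photon occupation of the thermal field at the microwave frequencies typical of transitions between neighbouring Rydberg states. The sketch below assumes an illustrative transition frequency of 100 GHz, an order-of-magnitude estimate rather than a value from the NIST paper:

```python
import math

# Mean black-body photon occupation per mode, n = 1/(exp(h*nu/(kB*T)) - 1).
# The 100 GHz frequency is an assumed, order-of-magnitude figure for
# transitions between neighbouring Rydberg states.
h = 6.626e-34    # Planck constant, J s
kB = 1.381e-23   # Boltzmann constant, J/K
T = 300.0        # room temperature, K
nu = 100e9       # assumed transition frequency, Hz

n_occ = 1.0 / math.expm1(h * nu / (kB * T))
print(f"Photon occupation at {nu/1e9:.0f} GHz, {T:.0f} K: {n_occ:.0f}")  # roughly 60
```

With tens of thermal photons per mode at these frequencies, even modest couplings produce measurable transfer between Rydberg states.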
Packet of rubidium atoms
Central to the new apparatus is a magneto-optical trap inside a vacuum chamber containing a pure rubidium vapour. Every 300 ms, the researchers load a new packet of rubidium atoms into the trap, cool them to around 1 mK and excite them from the 5S energy level to the 32S Rydberg state using lasers. They then allow them to absorb black-body radiation from the surroundings for around 100 μs, causing some of the 32S atoms to change state. Finally, they apply a strong, ramped electric field, ionizing the atoms. “The higher energy states get ripped off easier than the lower energy states, so the electrons that were in each state arrive at the detector at a different time. That’s how we get this readout that tells us the population in each of the states,” explains Schlossberger, the work’s first author. The researchers can use this ratio to infer the spectrum of the black-body radiation absorbed by the atoms and, therefore, the temperature of the black body itself.
The researchers calculated the fractional systematic uncertainty of their measurement as 0.006, which corresponds to around 2 K at room temperature. Schlossberger concedes that this sounds relatively unimpressive compared to many commercial thermometers, but he notes that their thermometer measures absolute temperature, not relative temperature. “If I had two skyscrapers next to each other, touching, and they were an inch different in height, you could probably measure that difference to less than a millimetre,” he says, “If I asked you to tell me the total height of the skyscraper, you probably couldn’t.”
One application of their system, the researchers say, could lie in optical clocks, where frequency shifts due to thermal background noise are a key source of uncertainty. At present, researchers have to perform a lot of in situ thermometry to try to infer the black-body radiation experienced by the clock without disturbing the clock itself. Schlossberger says that, in future, one additional laser could potentially allow the creation of Rydberg states in the clock atoms. “It’s sort of designed so that all the hardware is the same as atomic clocks, so without modifying the clock significantly it would tell you the radiation experienced by the same atoms that are used in the clock in the location they’re used.”
The work is described in a paper in Physical Review Research. Atomic physicist Kevin Weatherill of Durham University in the UK says “it’s an interesting paper and I enjoyed reading it”. “The direction of travel is to look for a quantum measurement for temperature – there are a lot of projects going on at NIST and some here in the UK,” he says. He notes, however, that this experiment is highly complex and says “I think at the moment just measuring the width of an atomic transition in a vapour cell [which is broadened by the Doppler effect as atoms move faster] gives you a better bound on temperature than what’s been demonstrated in this paper.”
I am one of two co-chairs, along with my colleague Hendrik Ohldag, of the Quantum Materials Research and Discovery Thrust Area at ALS. Among other things, our remit is to advise ALS management on long-term strategy regarding quantum science. We launch and manage beamline development projects to enhance the quantum research capability at ALS and, more broadly, establish collaborations with quantum scientists and engineers in academia and industry.
In terms of specifics, the thrust area addresses problems of condensed-matter physics related to spin and quantum properties – for example, in atomically engineered multilayers, 2D materials and topological insulators with unusual electronic structures. As a beamline scientist, active listening is the key to establishing productive research collaborations with our scientific end-users – helping them to figure out the core questions they’re seeking to answer and, by extension, the appropriate experimental techniques to generate the data they need.
The task, always, is to translate external users’ scientific goals into practical experiments that will run reliably on the ALS beamlines. High-level organizational skills, persistence and exhaustive preparation go a long way: it takes a lot of planning and dialogue to ensure scientific users get high-quality experimental results.
What do you like best and least about your job?
A core part of my remit is to foster the collective conversation between ALS staff scientists and the quantum community, demystifying synchrotron science and the capabilities of the ALS with prospective end-users. The outreach activity is exciting and challenging in equal measure – whether that’s initiating dialogue with quantum experts at scientific conferences or making first contact using Teams or Zoom.
Internally, we also track the latest advances in fundamental quantum science and applied R&D. In-house colloquia are mandatory, with guest speakers from the quantum community engaging directly with ALS staff teams to figure out how our portfolio of synchrotron-based techniques – whether spectroscopy, scattering or imaging – can be put to work by users from research or industry. This learning and development programme, in turn, underpins continuous improvement of the beamline support services we offer to all our quantum end-users.
As for downsides: it’s never ideal when a piece of instrumentation suddenly “breaks” on a Friday afternoon. This sort of troubleshooting is probably the part of the job I like least, though it doesn’t happen often and, in any case, is a hit I’m happy to take given the flexibility inherent to my role.
What do you know today that you wish you knew when you were starting out in your career?
It’s still early days, but I guess the biggest lesson so far is to trust in my own specialist domain knowledge and expertise when it comes to engaging with the diverse research community working on quantum materials. My know-how in photon science – from coherent X-ray scattering and X-ray detector technology to in situ magnetic- and electric-field studies and automated measurement protocols – enables visiting researchers to get the most out of their beamtime at ALS.
Radiation therapy is a targeted cancer treatment that’s typically delivered over several weeks, using a plan that’s optimized on a CT scan taken before treatment begins. But during this time, the geometry of the tumour and the surrounding anatomy can vary, with different patients responding in different ways to the delivered radiation. To optimize treatment quality, such changes must be taken into consideration. And this is where adaptive radiotherapy comes into play.
Adaptive radiotherapy uses patient images taken throughout the course of treatment to update the initial plan and compensate for any anatomical variations. By adjusting the daily plan to match the patient’s daily anatomy, adaptive treatments ensure more precise, personalized and efficient radiotherapy, improving tumour control while reducing toxicity to healthy tissues.
The implementation of adaptive radiotherapy is continuing to expand, as technology developments enable adaptive treatments in additional tumour sites. And as more cancer centres worldwide choose this approach, there’s a need for flexible, innovative software to streamline this increasing clinical uptake.
Designed to meet these needs, RayStation – the treatment planning system from oncology software specialist RaySearch Laboratories – makes adaptive radiotherapy faster and easier to implement in clinical practice. The versatile and holistic RayStation software provides all of the tools required to support adaptive planning, today and into the future.
“We need to be fast, we need to be predictable and we need to be user friendly,” says Anna Lundin, technical product manager at RaySearch Laboratories.
Meeting the need for speed
Typically, adaptive radiotherapy uses the cone-beam CT (CBCT) images acquired for daily patient positioning to perform plan adaptation. For seamless implementation into the clinical workflow to fully reflect the daily anatomical changes, this procedure should be performed “online” with the patient on the treatment table, as opposed to an “offline” approach where plan adaptation occurs after the patient has left the treatment session. Such online adaptation, however, requires the ability to analyse patient scans and perform adaptive re-planning as rapidly as possible.
To streamline all types of adaptive workflow – online or offline – RayStation incorporates a package of advanced algorithms that perform key tasks, including segmentation, deformable registration, CBCT image enhancement and recontouring, all while taking the previously delivered dose into consideration. By automating all of these steps, RayStation accelerates the replanning process to the speed needed for online adaptation, with the ability to create an adaptive plan in less than a minute.
Central to this process is RayStation’s dose tracking, which uses the daily images to calculate the actual dose delivered to the patient in each fraction. This ability to evaluate treatment progress, both on a daily basis and considering the estimated total dose, enables informed decisions as to whether to replan or not. The software’s flexible workflow allows users to perform daily dose tracking, compare plans with daily anatomical information against the original plans and adapt when needed.
“You can document trigger points for when adaptation is needed,” Lundin explains. “So you can evaluate whether the original plan is still good to go or whether you want to update or adapt the treatment plan to changes that have occurred.”
User friendly
Another challenge when implementing online adaptation is that its time constraints necessitate access to intuitive tools that enable quick decision making. “One of the big challenges with adaptive radiotherapy has been that a lot of the decision making and processes have been done on an ad hoc basis,” says Lundin. “We need to utilize the same protocol-based planning for adaptive as we do for standard treatment planning.”
As such, RaySearch Laboratories has focused on developing software that’s easy to use, efficient and accessible to a large proportion of clinical personnel. RayStation enables clinics to define and validate clinical procedures for a specific patient category in advance, eliminating the need to repeat this each time.
“By doing this, we let the clinicians focus on what they do best – taking responsibility for the clinical decisions – while RayStation focuses on providing all the data that they need to make that possible,” Lundin adds.
Versatile design
Lundin emphasizes that this accelerated adaptive replanning solution is built upon RayStation’s pre-existing comprehensive framework. “It’s not a parallel solution, it’s a progression,” she explains. “That means that all the tools that we have for robust optimization and evaluation, tools to assess biological effects, support for multiple treatment modalities – all that is also available when performing adaptive assessments and adaptive planning.”
This flexibility allows RayStation to support both photon- and ion-based treatments, as well as multiple imaging modalities. “We have built a framework that can be configured for each site and each clinical indication,” says Lundin. “We believe in giving users the freedom to select which techniques and which strategies to employ.”
In particular, adaptive radiotherapy is gaining interest among the proton therapy community. For such highly conformal treatments, it’s even more important to regularly assess the actual delivered dose and ensure that the plan is updated to deliver the correct dose each day. “We have the first clinics using RayStation to perform adaptive proton treatments in an online fashion,” Lundin says.
It’s likely that we will also soon see the emergence of biologically adapted radiotherapy, in which treatments are adapted not just to the patient’s anatomy, but to the tumour’s biological characteristics and biological response. Here again, RayStation’s flexible and holistic architecture can support the replanning needs of this advanced treatment approach.
Predictable performance
Lundin points out that the progression towards online adaptation has been valuable for radiotherapy as a whole. “A lot of the improvements required to handle the time-critical procedures of online adaptive are of large benefit to all adaptive assessments,” she explains. “Fast and predictable replanning is crucial to allow us to treat more patients with greater specificity using less clinical resources. I see it as strictly necessary for online adaptive, but good for all.”
Artificial intelligence (AI) is not only a key component in enhancing the speed and consistency of treatment planning (with tools such as deep learning segmentation and planning), but also enables the handling of massive data sets, which in turn allows users to improve the treatment “intents” that they prescribe.
Learning more about how the delivered dose correlates with clinical outcome provides important feedback on the performance and effectiveness of current adaptive processes. This will help optimize and personalize future treatments and, ultimately, make the adaptive treatments more predictable and effective as a whole.
Lundin explains that full automation is the only way to generate the large amount of data in the predictable and consistent manner required for such treatment advancements, noting that it is not possible to achieve this manually.
RayStation’s ability to preconfigure and automate all of the steps needed for daily dose assessment enables these larger-scale dose follow-up clinical studies. The treatment data can be combined with patient outcomes, with AI employed to gain insight into how to best design treatments or predict how a tumour will respond to therapy.
“I look forward to seeing more outcome-related studies of adaptive radiotherapy, so we can learn from each other and have more general recommendations, as has been done in the field of standard radiotherapy planning,” says Lundin. “We need to learn and we need to improve. I think that is what adaptive is all about – to adapt each person’s treatment, but also adapt the processes that we use.”
Future evolution
Looking to the future, adaptive radiotherapy is expected to evolve rapidly, bolstered by ongoing advances in imaging techniques and increasing data processing speeds. RayStation’s machine learning-based segmentation and plan optimization algorithms will continue to play a central role in supporting this evolution, with AI making treatment adaptations more precise, personalized and efficient, enhancing the overall effectiveness of cancer treatment.
“RaySearch, with the foundation that we have in optimization and advancing treatment planning and workflows, is very well equipped to take on the challenges of these future developments,” Lundin adds. “We are looking forward to the improvements to come and determined to meet the expectations with our holistic software.”
This webinar will present the overall experience of a radiotherapy department that utilizes RTsafe QA solutions, including the RTsafe Prime and SBRT anthropomorphic phantoms for intracranial stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) applications, respectively, as well as the remote dosimetry services offered by RTsafe. The session will explore how these phantoms can be employed for end-to-end QA measurements and dosimetry audits in both conventional linacs and a Unity MR-Linac system. Key features of RTsafe phantoms, such as their compatibility with RTsafe’s remote dosimetry services for point (OSLD, ionization chamber), 2D (films), and 3D (gel) dosimetry, will be discussed. These capabilities enable a comprehensive SRS/SBRT accuracy evaluation across the entire treatment workflow – from imaging and treatment planning to dose delivery.
Christopher Schneider is the adaptive radiotherapy technical director at Mary Bird Perkins Cancer Center and serves as an adjunct assistant professor in the Department of Physics and Astronomy at Louisiana State University in Baton Rouge. Under his supervision, Mary Bird’s MR-guided adaptive radiotherapy program has provided treatment to more than 150 patients in its first year alone. Schneider’s research group focuses on radiation dosimetry, late effects of radiation, and the development of radiotherapy workflow and quality-assurance enhancements.
Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]
Boltzmann constant, k_B = 1.38 × 10⁻²³ J K⁻¹
Speed of light, c = 3 × 10⁸ m s⁻¹
Solution
Black body radiation, spectral density: ε(ν) dν = hν ρ(ν) n(ν) dν
The photon energy, E = hν where h is Planck’s constant and ν is the photon frequency.
The density of states, ρ(ν) = Aν^(n−1), where A is a constant independent of the frequency and the frequency term is the scaling of the surface area of an n-dimensional sphere.
The Bose–Einstein distribution, n(ν) = 1/(e^(hν/k_BT) − 1), where k_B is the Boltzmann constant and T is the temperature.
We let x = hν/(k_BT) and get ε(x) ∝ x^n/(e^x − 1).
We do not need the constant of proportionality (which is not simple to calculate in 4D) to find the maximum of ε (x). Working out the constant just tells us how tall the peak is, but we are interested in where the peak is, not the total radiation.
We differentiate ε(x) with respect to x and set the result equal to zero to find the maximum of the distribution. This yields x = n(1 − e^(−x)), where x = hν_max/(k_BT). We can relate this to a wavelength through λ_max = c/ν_max, with c being the speed of light.
This equation has the solution x = n + W(−ne^(−n)) where W is the Lambert W function z = W(y) that solves ze^z = y (although there is a subtlety about which branch of the function). This is kind of useless to do anything with, though. One can numerically solve this equation using bisection/Newton–Raphson/iteration. Alternatively, one could notice that as the number of dimensions increases, e^(−x) is small, so to leading approximation x ≈ n. One can do a little better iterating this, x ≈ n − ne^(−n), which is what we will use. Note the second iteration yields the values in the approximation column of the table below.
Number of dimensions, n | Numerical solution | Approximation
2 | 1.594 | 1.729
3 | 2.821 | 2.851
4 (the one we want) | 3.921 | 3.927
5 | 4.965 | 4.966
6 | 5.985 | 5.985
Using the result above, the peak frequency is ν_max = x k_BT/h and the corresponding wavelength is λ_max = c/ν_max ≈ 616 nm. This is in the middle of the visible spectrum, so the Sun will look white with a green-blue tint. Note, we have used T = 6000 K for the temperature here, as given in the question.
It would also be valid to look at ε (λ) dλ instead of ε (ν) dν.
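For readers who want to reproduce the table, a minimal numerical sketch (using fixed-point iteration, one of the approaches suggested in the solution above) is:

```python
import math

# Solve x = n*(1 - exp(-x)) by fixed-point iteration and compare with the
# one-step approximation x ~ n - n*exp(-n) used above.
def peak_x(n, iterations=100):
    x = float(n)                      # start from the leading approximation x ~ n
    for _ in range(iterations):
        x = n * (1.0 - math.exp(-x))  # fixed-point update
    return x

for n in (2, 3, 4, 5, 6):
    approx = n - n * math.exp(-n)
    print(f"n = {n}: numerical {peak_x(n):.3f}, approximation {approx:.3f}")
```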
Question 2: Heavy stuff
In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]
Solution
First we will set up the equations of motion for our system. We will set one mass to be at position −x and the other to be at x, so the masses are at a distance of 2x from each other. Starting from Newton’s law of gravity, the force on the mass at +x is F = −Gm²/(2x)². We can then use Newton’s second law to rewrite the LHS, m(d²x/dt²) = −Gm²/(2x)², which we can simplify to d²x/dt² = −Gm/(4x²).
It is important that you get the right factor here depending on your choice for the particle coordinates at the start. Note there are other methods of getting this point, e.g. reduced mass.
We can now solve the second-order ODE above. We will not show the whole process here but present the starting point and key results. We can write the acceleration in terms of the velocity, d²x/dt² = v(dv/dx). The initial velocity is zero and the initial position is x₀ = 0.5 m. So, v(dv/dx) = −Gm/(4x²), and once the integrals are solved we can rearrange for the velocity, v = dx/dt = −√[(Gm/2)(1/x − 1/x₀)]. Now we can form an expression for the total time taken for the masses to meet in the middle, t = ∫₀^x₀ dx/√[(Gm/2)(1/x − 1/x₀)].
There are quite a few steps involved in solving this integral; for these solutions we shall make use of the standard result ∫₀^x₀ √[x/(x₀ − x)] dx = πx₀/2 (but do attempt to solve it for yourselves in full). Hence, t = (π/2)√[2x₀³/(Gm)]. We can now rearrange for G and substitute in the values given in the question (don’t forget to convert the time into seconds): G = π²x₀³/(2mt²) ≈ 6.67 × 10⁻¹¹ m³ kg⁻¹ s⁻².
This is the generally accepted value for the gravitational constant of our universe as well.
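As a quick numerical check of the final expression (a sketch using the values quoted in the question):

```python
import math

# G = pi^2 * x0^3 / (2 * m * t^2), with x0 = 0.5 m (half the initial separation),
# m = 1 kg and t = 26 h 42 min converted to seconds.
x0 = 0.5
m = 1.0
t = 26 * 3600 + 42 * 60   # 96120 s

G = math.pi**2 * x0**3 / (2 * m * t**2)
print(f"G = {G:.3e} m^3 kg^-1 s^-2")  # about 6.7e-11
```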
Question 3: Just like clockwork
Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.
A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by T = 2π√(l/g),
where g is the acceleration due to gravity.
Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.
Radius of the Earth, R = 6.3781 × 10⁶ m
Period of one Earth day, τ₀ = 8.64 × 10⁴ s
How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]
Solution
We will write the period of oscillation of the pendulum at the surface of the Earth to be T₀ = 2π√(l/g₀).
At a height h above the surface of the Earth the period of oscillation will be Tₕ = 2π√(l/gₕ), where g₀ and gₕ are the acceleration due to gravity at the surface of the Earth and at a height h above it, respectively.
We can define τ₀ to be the total duration of the day, which is 8.64 × 10⁴ seconds and equal to N complete oscillations of the pendulum at the surface. The lag is then τₕ, which will equal N times the difference in one period of the two clocks, τₕ = NΔT, where ΔT = (Tₕ − T₀). We can now take a ratio of the lag over the day to the total duration of the day: τₕ/τ₀ = NΔT/(NT₀) = (Tₕ − T₀)/T₀.
Then by substituting in the expressions we have for the period of a pendulum at the surface and at height h, we can write this in terms of the acceleration due to gravity: τₕ/τ₀ = √(g₀/gₕ) − 1.
[Award 1 mark for finding the ratio of the lag over the day and the total period of the day.]
The acceleration due to gravity at the Earth’s surface is g₀ = GM/R², where G is the universal gravitational constant, M is the mass of the Earth and R is the radius of the Earth. At an altitude h, it will be gₕ = GM/(R + h)².
[Award 1 mark for finding the expression for the acceleration due to gravity at height h.]
Substituting into our expression for the lag, we get τₕ/τ₀ = (R + h)/R − 1 = h/R. This simplifies to an expression for the lag over a day, τₕ = τ₀h/R. We can then substitute in the given values to find τₕ = 8.64 × 10⁴ × 300/(6.3781 × 10⁶) s ≈ 4 s.
[Award 2 marks for completing the simplification of the ratio and finding the lag to be ≈ 4 s.]
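A short numerical check of this result (a sketch using the constants given in the question):

```python
# Clock lag over one day from the h/R expression derived above.
R = 6.3781e6     # radius of the Earth, m
h = 100 * 3.0    # hundredth floor at 3 m per storey, m
tau0 = 8.64e4    # duration of one day, s

lag = tau0 * h / R
print(f"Lag over one day: {lag:.2f} s")  # about 4 s
```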
Question 4: Quantum stick
Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvins, so the stick is in its ground state.
Moment of inertia for a rod about its end, I = ml²/3, where m is the mass and l is the length.
Uncertainty principle, ΔxΔp ≥ ħ/2
There are several possible approximations and simplifications you could make in solving this problem, including:
sinθ ≈ θ for small θ
and the logarithmic forms cosh⁻¹x = ln(x + √(x² − 1)) and sinh⁻¹x = ln(x + √(x² + 1)).
Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]
Hint: Consider the two possible initial conditions that arise from the uncertainty principle.
Solution
We can imagine this as an inverted pendulum, with gravity acting from the centre of mass and at an angle θ from the unstable equilibrium point.
[Award 1 mark for a suitable diagram of the system.]
We must now find the equations of motion of the system. For this we can use Newton’s second law in its rotational form τ = Iα (torque = moment of inertia × angular acceleration). We have another equation for torque we can use as well, τ = rF sinθ, where r is the distance from the pivot to the centre of mass (r = l/2) and F is the force, which in this case is gravity mg. We can then equate these, giving Iα = (l/2)mg sinθ.
Substituting in the given moment of inertia of the stick and that the angular acceleration α = d²θ/dt², we have (ml²/3)(d²θ/dt²) = (l/2)mg sinθ. We can cancel a few things and rearrange to get a differential equation of the form d²θ/dt² = (3g/2l) sinθ. We then can take the small angle approximation sinθ ≈ θ, resulting in d²θ/dt² = (3g/2l)θ.
[Award 2 marks for finding the equation of motion for the system and using the small angle approximation.]
Solve with ansatz of θ = Ae^(ωt) + Be^(−ωt), where we have chosen ω = √(3g/2l).
We can clearly see that this will satisfy the differential equation
Now we can apply initial conditions to find A and B, by looking at the two cases from the uncertainty principle
Case 1: The stick is at an angle but not moving
At t = 0, θ = Δθ
θ = Δθ = A + B
At t = 0, dθ/dt = 0, so ω(A − B) = 0, giving A = B. This implies Δθ = 2A, and we can then find A = B = Δθ/2.
So we can now write θ(t) = (Δθ/2)(e^(ωt) + e^(−ωt)), or θ(t) = Δθ cosh(ωt).
Case 2: The stick is upright but moving
At t = 0, θ = 0
This condition gives us A = −B.
At t = 0, dθ/dt = 2Δv/l. This initial condition comes from the relationship between the tangential velocity Δv of the centre of mass and the angular velocity dθ/dt: Δv = (l/2)(dθ/dt), where l/2 is the distance from the pivot point to the centre of mass. Using the above initial condition gives us 2ωA = 2Δv/l, so A = Δv/(lω) and B = −Δv/(lω). We can now write θ(t) = (Δv/(lω))(e^(ωt) − e^(−ωt)) = (2Δv/(lω)) sinh(ωt).
[Award 4 marks for finding the two expressions for θ by using the two cases of the uncertainty principle.]
Now there are a few ways we can finish off this problem; we shall look at three different ways. In each case, the stick has fallen on the ground when θ = π/2.
Method 1
Take θ(t) = Δθ cosh(ωt) and θ(t) = (2Δv/(lω)) sinh(ωt), use θ(t_f) = π/2, then rearrange for t_f in both cases. We have t_f = (1/ω)cosh⁻¹(π/(2Δθ)) and t_f = (1/ω)sinh⁻¹(πlω/(4Δv)). Look at the expressions for cosh⁻¹x and sinh⁻¹x given in the question. They are almost identical, so we can approximate the two arguments to each other and find Δv ≈ lωΔθ/2. We can then substitute in the uncertainty principle as ΔxΔp = (l/2)Δθ·mΔv = ħ/2 and write an expression for Δθ, which we can put back into our arccosh expression (or do it for Δv and put into arcsinh). This gives t_f = (1/ω)cosh⁻¹(π/(2Δθ)), where Δθ = √(2ħ/(ml²ω)) and ω = √(3g/2l).
Method 2
In this next method, when you get to the inverse hyperbolic functions, you can take an expansion of their natural-log forms in the large-argument limit. To first order both functions give ln(2x), so we can equate the arguments, find Δx or Δv in terms of the other and use the uncertainty principle. This would give the time taken as t_f = (1/ω)ln(π/Δθ), where Δθ = √(2ħ/(ml²ω)) and ω = √(3g/2l).
Method 3
Rather than using hyperbolic functions, you could do something like the above and expand the exponentials in the two expressions for t_f, or we could make life even easier and do the following. Disregard the e^(−ωt) terms as they will be much smaller than the e^(ωt) terms. Equate the two expressions for θ(t_f) = π/2 and then take natural logs, once again arriving at an expression of the form t_f = (1/ω)ln(π/Δθ), where Δθ = √(2ħ/(ml²ω)) and ω = √(3g/2l).
This method efficiently sets B = 0 when applying the initial conditions.
[Award 2 marks for reaching an expression for t using one of the methods above or a suitable alternative that gives the correct units for time.]
Then, by using one of the expressions above for time, substitute in the values and find that t = 10.58 seconds.
[Award 1 mark for finding the correct time value of t = 10.58 seconds.]
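Putting the numbers in explicitly (a sketch following the ln(π/Δθ) form above; g = 9.81 m s⁻² is assumed here, as it is not listed among the given constants):

```python
import math

# t = (1/omega) * ln(pi/dtheta), with dtheta = sqrt(2*hbar/(m*l^2*omega))
# and omega = sqrt(3*g/(2*l)). g is assumed to be 9.81 m/s^2.
hbar = 1.055e-34  # reduced Planck constant, J s
m = 1.0           # mass of the stick, kg
l = 1.0           # length of the stick, m
g = 9.81          # acceleration due to gravity, m/s^2

omega = math.sqrt(3 * g / (2 * l))
dtheta = math.sqrt(2 * hbar / (m * l**2 * omega))
t = math.log(math.pi / dtheta) / omega
print(f"Maximum fall time: {t:.2f} s")  # about 10.6 s
```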
If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk
Oil spills can pollute large volumes of surrounding water – thousands of times greater than the spill itself – causing long-term economic, environmental, social and ecological damage. Effective methods for in situ capture of spilled oil are thus essential to minimize contamination from such disasters.
Many oil spill cleanup technologies, however, exhibit poor hydrodynamic stability under complex flow conditions, which leads to poor oil-capture efficiency. To address this shortfall, researchers from Harbin Institute of Technology in China have come up with a new approach to oil cleanup using a vortex-anchored filter (VAF).
“Since the 1979 Atlantic Empress disaster, interception and adsorption have been the primary methods for oil spill recovery, but these are sensitive to water-flow fluctuation,” explains lead author Shijie You. Oil-in-water emulsions from leaking pipelines and offshore industrial discharge are particularly challenging, says You, adding that “these problems inspire us to consider how we can address hydrodynamic stability of oil-capture devices under turbulent conditions”.
Inspired by the natural world
You and colleagues believe that the answers to oil spill challenges could come from nature – arguably the world’s greatest scientist. They found that the deep-sea glass sponge E. aspergillum, which lives at depths of up to 1000 m in the Pacific Ocean, has an excellent ability to filter feed with a high effectiveness, selectivity and robustness, and that its food particles share similarities with oil droplets.
The anatomical structure of E. aspergillum – also known as Venus’ flower basket – provided inspiration for the researchers to design their VAF. By mimicking the skeletal architecture and filter feeding patterns of the sponge, they created a filter that exhibited a high mass transfer and hydrodynamic stability in cleaning up oil spills under turbulent flow.
“The E. aspergillum has a multilayered skeleton–flagellum architecture, which creates 3D streamlines with frequent collision, deflection, convergence and separation,” explains You. “This can dissipate macro-scale turbulent flows into small-scale swirling flow patterns called low-speed vortical flows within the body cavity, which reduces hydrodynamic load and enhances interfacial mass transfer.”
For the sponges, this allows them to maintain a high mechanical stability while absorbing nutrients from the water. The same principles can be applied to synthetic materials for cleaning up oil spills.
The VAF is a synthetic form of the sponge’s architecture and, according to You, “is capable of transferring kinematic energy from an external water flow into multiple small-scale low-speed vortical flows within the body cavity to enhance hydrodynamic stability and oil capture efficiency”.
The tubular outer skeleton of the VAF comprises a helical ridge and chequerboard lattice. It is this skeleton that creates a slow vortex field inside the cavity and enables mass transfer of oil during the filtering process. Once the oil has been forced into the filter, the internal area – composed of flagellum-shaped adsorbent materials – provides a large interfacial area for oil adsorption.
Using the VAF to clean up oil spills
The researchers used their nature-inspired VAF to clean up oil spills under complex hydrodynamic conditions. You states that “the VAF can retain the external turbulent-flow kinetic energy in the low-speed vortical flows – with a small Kolmogorov microscale (85 µm) [the size of the smallest eddy in a turbulent flow] – inside the cavity of the skeleton, leading to enhanced interfacial mass transfer and residence time”.
“This led to an improvement in the hydrodynamic stability of the filter compared to other approaches by reducing the Reynolds stresses in nearly quiescent wake flows,” You explains. The filter was also highly resistant to bending stresses caused at the boundary of the filter when trying to separate viscous fluids. When put into practice, the VAF was able to capture more than 97% of floating, underwater and emulsified oils, even under strong turbulent flow.
When asked how the researchers plan to improve the filter further, You tells Physics World that they “will integrate the VAF with photothermal, electrothermal and electrochemical modules for environmental remediation and resource recovery”.
“We look forward to applying VAF-based technologies to solve sea pollution problems with a filter that has an outstanding flexibility and adaptability, easy-to-handle operability and scalability, environmental compatibility and life-cycle sustainability,” says You.
A topological electronic crystal (TEC) in which the quantum Hall effect emerges without the need for an external magnetic field has been unveiled by an international team of physicists. Led by Josh Folk at the University of British Columbia, the group observed the effect in a stack of bilayer and trilayer graphene that is twisted at a specific angle.
In a classical electrical conductor, the Hall voltage and its associated resistance appear perpendicular both to the direction of an applied electrical current and an applied magnetic field. A similar effect is also seen in 2D electron systems that have been cooled to ultra-low temperatures. But in this case, the Hall resistance becomes quantized in discrete steps.
This quantum Hall effect can emerge in electronic crystals, also known as Wigner crystals. These are arrays of electrons that are held in place by their mutual repulsion. Some researchers have considered the possibility of a similar effect occurring in structures called TECs, but without an applied magnetic field. This is called the “quantum anomalous Hall effect”.
Anomalous Hall crystal
“Several theory groups have speculated that analogues of these structures could emerge in quantized anomalous Hall systems, giving rise to a type of TEC termed an ‘anomalous Hall crystal’,” Folk explains. “This structure would be insulating, due to a frozen-in electronic ordering in its interior, with dissipation-free currents along the boundary.”
For Folk’s team, the possibility of anomalous Hall crystals emerging in real systems was not the original focus of their research. Initially, a team at the University of Washington had aimed to investigate the diverse phenomena that emerge when two or more flakes of graphene are stacked on top of each other, and twisted relative to each other at different angles.
While many interesting behaviours emerged from these structures, one particular stack caught the attention of Washington’s Dacen Waters, which inspired his team to get in touch with Folk and his colleagues in British Columbia.
In the vast majority of cases, the twisted structures studied by the team had moiré patterns that were very disordered. Moiré patterns occur when two lattices are overlaid and rotated relative to each other. Yet out of tens of thousands of permutations of twisted graphene stacks, one structure appeared to be different.
Exceptionally low levels of disorder
“One of the stacks seemed to have exceptionally low levels of disorder,” Folk describes. “Waters shared that one with our group to explore in our dilution refrigerator, where we have lots of experience measuring subtle magnetic effects that appear at a small fraction of a degree above absolute zero.”
As they studied this highly ordered structure, the team found that its moiré pattern helped to modulate the system’s electronic properties, allowing a TEC to emerge.
“We observed the first clear example of a TEC, in a device made up of bilayer graphene stacked atop trilayer graphene with a small, 1.5° twist,” Folk explains. “The underlying topology of the electronic system, combined with strong electron-electron interactions, provide the essential ingredients for the crystal formation.”
After decades of theoretical speculation, Folk, Waters and colleagues have identified an anomalous Hall crystal, where the quantum Hall effect emerges from an in-built electronic structure, rather than an applied magnetic field.
Beyond confirming the theoretical possibility of TECs, the researchers are hopeful that their results could lay the groundwork for a variety of novel lines of research.
“One of the most exciting long-term directions this work may lead is that the TEC by itself – or perhaps a TEC coupled to a nearby superconductor – may host new kinds of particles,” Folk says. “These would be built out of the ‘normal’ electrons in the TEC, but totally unlike them in many ways: such as their fractional charge, and properties that would make them promising as topological qubits.”
If you have worked in a university, research institute or business during the past two decades you will be familiar with the term equality, diversity and inclusion (EDI). There is likely to be an EDI strategy that includes measures and targets to nurture a workforce that looks more like the wider population and a culture in which everyone can thrive. You may find a reasoned business case for EDI, which extends beyond the organization’s legal obligations, to reflect and understand the people that you work with.
Look more closely and it is possible that the “E” in EDI is not actually equality, but rather equity. Equity is increasingly being used as a more active commitment, not least by the Institute of Physics, which publishes Physics World. How, though, is equity different to equality? What is causing this change of language and will it make any difference in practice?
These questions have become more pressing as discussions around equality and equity have become entwined in the culture wars. This is a particularly live issue in the US, where Donald Trump, at the start of his second term as president, has begun to withdraw funding from EDI activities. But it has also influenced science policy in the UK.
The distinction between equality and equity is often illustrated by a cartoon published in 2016 by the UK artist Angus Maguire (above). It shows a fence and people of variable height gaining an equal view of a baseball match thanks to different numbers of crates that they stand on. This has itself, however, resulted in arguments about other factors such as the conditions necessary to watch the game in the stadium, or indeed even join in. That requires consideration about how the teams and the stadium could adapt to the needs of all potential participants, but also how these changes might affect the experience of others involved.
In terms of education, the Organization for Economic Co-operation and Development (OECD) states that equity “does not mean that all students obtain equal education outcomes, but rather that differences in students’ outcomes are unrelated to their background or to economic and social circumstances over which the students have no control”. This is an admirable goal, but there are questions about how to achieve it.
In OECD member countries, freedom of choice and competition yield social inequalities that flow through to education and careers. This means that governments are continually balancing the benefits of inspiring and rewarding individuals alongside concerns about group injustice.
In 2024, we hosted a multidisciplinary workshop about equity in science, and especially physics. Held at the University of Birmingham, it brought together physicists at different career stages with social scientists and people who had worked on science and education in government, charities and learned societies. At the event, social scientists told us that equality is commonly conceived as a basic right to be treated equally and not discriminated against, regardless of personal characteristics. This right provides a platform for “equality of opportunity” whereby barriers are removed so talent and effort can be rewarded.
Actions like these have helped to improve participation and progression across physics education and careers, but there is still significant underrepresentation and marginalization due to gender, ethnicity and social background. This is not unusual in open and competitive societies, where the effects of promoting equal opportunities are often outweighed by the resources and connections of people from groups that are already well represented. Talent and effort are crucial in “high-performance” sectors such as academia and industry, but they are not the only factors influencing success.
Physicists at the meeting told us that they are motivated by intellectual curiosity, fascination with the natural world and love for their subject. Yet there is also, in physics, a culture of “genius” and competition, in which confidence is crucial. Facilities and working conditions, which often involve short-term contracts and international mobility, are difficult to balance alongside other life commitments. Although inequalities and exclusions are recognized, they are often ascribed to broader social factors or the inherent requirements of research. As a result, physicists tend not to accept responsibility for inequities within the discipline.
Physics has a culture of “hyper-meritocracy” where being correct counts more than respecting others
Many physicists want merit to be a reflection of talent and effort. But we identified that physics has a culture of “hyper-meritocracy” where being correct counts more than respecting others. Across the community, some believe in positive action beyond the removal of discrimination, but others can be actively hostile to any measure associated with EDI. This is a challenging environment for any young researcher and we heard distressing stories of isolation from women and colleagues who had hidden disabilities or those who were the first in their family to go to university.
The experience, positive or not, when joining a research group as a postgraduate or postdoctoral researcher is often linked with the personality of leaders. Peer groups and networks have helped many physicists through this period of their career, but it is also where the culture in a research group or department can drive some to the margins and ultimately out of the profession. In environments like this, equal opportunities have proved insufficient to advance diversity, let alone inclusion.
Culture change
Organizations that have replaced equality with equity want to signal a commitment not just to equal treatment, but also more equitable outcomes. However, those who have worked in government told us that some people become disengaged, thinking such efforts can only be achieved by reducing standards and threatening cultures they value. Given that physics needs technical proficiency and associated resources and infrastructure, it is not a discipline where equity can mean an equal distribution of positions and resources.
Physics can, though, counter the influence of wider inequalities by helping colleagues who are under-represented to gain the attributes, experiences and connections that are needed to compete successfully for doctoral studentships, research contracts and academic positions. It can also face up to its cultural problems, so colleagues who are minoritized feel less marginalized and they are ultimately recognized for their efforts and contributions.
This will require physicists to give more prominence to marginalized voices, as well as critically and honestly examining their culture and tackling unacceptable behaviour. We believe we can achieve this by collaborating with our social science colleagues. That includes gathering and interpreting qualitative data, so there is a shared understanding of problems, as well as designing strategies with the people who are most affected, so that everyone has a stake in success.
If this happens, we can look forward to a physics community that genuinely practises equity, rather than merely espousing equality of opportunity.
One hundred and one years ago, Danish physicist Niels Bohr proposed a radical theory together with two young colleagues – Hendrik Kramers and John Slater – in an attempt to resolve some of the most perplexing issues in fundamental physics at the time. Entitled “The Quantum Theory of Radiation”, and published in the Philosophical Magazine, their hypothesis was quickly proved wrong, and has since become a mere footnote in the history of quantum mechanics.
Despite its swift demise, their theory perfectly illustrates the sense of crisis felt by physicists at that moment, and the radical ideas they were prepared to contemplate to resolve it. For in their 1924 paper Bohr and his colleagues argued that the discovery of the “quantum of action” might require the abandonment of nothing less than the first law of thermodynamics: the conservation of energy.
As we celebrate the centenary of Werner Heisenberg’s 1925 quantum breakthrough with the International Year of Quantum Science and Technology (IYQ) 2025, Bohr’s 1924 paper offers a lens through which to look at how the quantum revolution unfolded. Most physicists at that time felt that if anyone was going to rescue the field from the crisis, it would be Bohr. Indeed, this attempt clearly shows signs of the early rift between Bohr and Albert Einstein about the quantum realm that would turn into a lifelong argument. Remarkably, the paper also drew on an idea that later featured in one of today’s most prominent alternatives to Bohr’s “Copenhagen” interpretation of quantum mechanics.
Genesis of a crisis
The quantum crisis began when German physicist Max Planck proposed the quantization of energy in 1900, as a mathematical trick for calculating the spectrum of radiation from a warm, perfectly absorbing “black body”. Later, in 1905, Einstein suggested taking this idea literally to account for the photoelectric effect, arguing that light consisted of packets or quanta of electromagnetic energy, which we now call photons.
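In modern notation – a textbook summary rather than anything quoted from the 1924 paper – a light quantum of frequency ν carries energy

E = h\nu

and the photoelectric effect follows because an electron absorbing one quantum leaves the metal with a maximum kinetic energy K_{\max} = h\nu - \phi, where \phi is the work function of the metal.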
Bohr entered the story in 1912 when, working in the laboratory of Ernest Rutherford in Manchester, he devised a quantum theory of the atom. In Bohr’s picture, the electrons encircling the atomic nucleus (which Rutherford had discovered in 1911) are constrained to specific orbits with quantized energies. The electrons can hop between orbits in “quantum jumps” by emitting or absorbing photons with the corresponding energy.
Bohr had no theoretical justification for this ad hoc assumption, but he showed that, by accepting it, he could predict (more or less) the spectrum of the hydrogen atom. For this work Bohr was awarded the 1922 Nobel Prize for Physics, the same year that Einstein collected the prize for his work on light quanta and the photoelectric effect (he had been awarded it in 1921 but was unable to attend the ceremony).
After establishing an institute of theoretical physics (now the Niels Bohr Institute) in Copenhagen in 1921, Bohr’s mission was to find a true theory of the quantum: a mechanics to replace, at the atomic scale, the classical physics of Isaac Newton that worked at larger scales. It was clear that classical physics did not work at the scale of the atom, although Bohr’s correspondence principle asserted that quantum theory should give the same results as classical physics at a large enough scale.
Quantum theory was at the forefront of physics at the time, and so was the most exciting topic for any aspiring young physicist. Three groups stood out as the most desirable places to work for anyone seeking a fundamental mathematical theory to replace the makeshift and sometimes contradictory “old” quantum theory that Bohr had cobbled together: that of Arnold Sommerfeld in Munich, of Max Born in Göttingen, and of Bohr in Copenhagen.
Dutch physicist Hendrik Kramers had hoped to work on his doctorate with Born – but in 1916 the First World War ruled that out, and so he opted instead for Copenhagen, in politically neutral Denmark. There he became Bohr’s assistant for ten years: as was the case with several of Bohr’s students, Kramers did the maths (it was never Bohr’s forte) while Bohr supplied the ideas, philosophy and kudos. Kramers ended up working on an impressive range of problems, from chemical physics to pure mathematics.
Reckless and radical
One of the most vexing questions for Bohr and his Copenhagen circle in the early 1920s was how to think about electron orbits in atoms. Try as they might, they couldn’t find a way to make the orbits “fit” with experimental observations of atomic spectra.
Perhaps, in quantum systems like atoms, we have to abandon any attempt to construct a physical picture at all
Bohr and others, including Heisenberg, began to voice a possibility that seemed almost reckless: perhaps, in quantum systems like atoms, we have to abandon any attempt to construct a physical picture at all. Maybe we just can’t think of quantum particles as objects moving along trajectories in space and time.
This struck others, such as Einstein, as desperate, if not crazy. Surely the goal of science had always been to offer a picture of the world in terms of “things happening to objects in space”. What else could there be? How could we just give it all up?
But it was worse than that. For one thing, Bohr’s quantum jumps were supposed to happen instantaneously: an electron, say, jumping from one orbit to another in no time at all. In classical physics, everything happens continuously: a particle gets from here to there by moving smoothly across the intervening space, in some finite time. The discontinuities of quantum jumps seemed to some – like Austrian physicist Erwin Schrödinger in Vienna – bordering on the obscene.
Worse still was the fact that while the old quantum theory stipulated the energy of quantum jumps, there was nothing to dictate when they would happen – they simply did. In other words, there was no causal kick that instigated a quantum jump: the electron just seemed to make up its own mind about when to jump. As Heisenberg would later proclaim in his 1927 paper on the uncertainty principle (Zeitschrift für Physik 43 172), quantum theory “establishes the final failure of causality”.
Such notions were not the only source of friction between the Copenhagen team and Einstein. Bohr didn’t like light quanta. While they seemed to explain the photoelectric effect, Bohr was convinced that light had to be fundamentally wave-like, so that photons (to use the anachronistic term) were only a way of speaking, not real entities.
To add to the turmoil in 1924, the French physicist Louis de Broglie had, in his doctoral thesis for the Sorbonne, turned the quantum idea on its head by proposing that particles such as electrons might show wave-like behaviour. Einstein had at first considered this too wild, but soon came round to the idea.
Go where the waves take you
In 1924 these virtually heretical ideas were only beginning to surface, but they were creating such a sense of crisis that it seemed anything was possible. In the early 1970s the science historian Paul Forman suggested that the feverish atmosphere in physics was part of an even wider cultural current. By rejecting causality and materialism, the German quantum physicists, Forman said, were attempting to align their ideas with a rejection of mechanistic thinking while embracing the irrational – as was the fashion in the philosophical and intellectual circles of the beleaguered Weimar republic. The idea has been hotly debated by historians and philosophers of science – but it was surely in Copenhagen, not Munich or Göttingen, that the most radical attitudes to quantum theory were developing.
Then, just before Christmas in 1923, a new student arrived at Copenhagen. John Clarke Slater, who had a PhD in physics from Harvard, turned up at Bohr’s institute with a bold idea. “You know those difficulties about not knowing whether light is old-fashioned waves or Mr Einstein’s light particles”, he wrote to his family during a spell in Cambridge that November. “I had a really hopeful idea… I have both the waves and the particles, and the particles are sort of carried along by the waves, so that the particles go where the waves take them.” The waves were manifested in a “virtual field” of some kind that spread throughout the system, and they acted to “pilot” the particles.
Bohr was mostly not a fan of Slater’s idea, not least because it retained the light particles that he wished to dispose of. But he liked Slater’s notion of a virtual field that could put one part of a quantum system in touch with others. Together with Slater and Kramers, Bohr prepared a paper in a remarkably short time (especially for him) outlining what became known as the Bohr-Kramers-Slater (BKS) theory. They sent it off to the Philosophical Magazine (where Bohr had published his seminal papers on the quantum atom) at the end of January 1924, and it was published in May (47 785). As was increasingly characteristic of Bohr’s style, it was free of any mathematics (beyond Einstein’s quantum relationship E = hν).
In the BKS picture, an excited atom about to emit light can “communicate continually” with the other atoms around it via the virtual field. The transition, with emission of a light quantum, is then not spontaneous but induced by the virtual field. This mechanism could solve the long-standing question of how an atom “knows” which frequency of light to emit in order to reach another energy level: the virtual field effectively puts the atom “in touch” with all the possible energy states of the system.
The problem was that this meant the emitting atom was in instant communication with its environment all around – which violated the law of causality. Well then, so much the worse for causality: BKS abandoned it. The trio’s theory also violated the conservation of energy and momentum – so they had to go too.
Causality and conservation, abandoned
But wait: hadn’t these conservation laws been proved? In 1923 the American physicist Arthur Compton had shown that when light is scattered by electrons, they exchange energy, and the frequency of the light decreases as it gives up energy to the electrons. The results of Compton’s experiments agreed perfectly with predictions made on the assumptions that light is a stream of quanta (photons) and that their collisions with electrons conserve energy and momentum.
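The tell-tale signature in Compton’s data is captured by the standard scattering formula – again a textbook relation, not one appearing in the BKS paper – in which light scattered through an angle θ by an electron of mass m_e has its wavelength shifted by

\Delta\lambda = \frac{h}{m_{e} c}\,(1 - \cos\theta)

exactly as expected if each photon–electron collision conserves energy and momentum.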
Ah, said BKS, but that’s only true statistically. The quantities are conserved on average, but not in individual collisions. After all, such statistical outcomes were familiar to physicists: that was the basis of the second law of thermodynamics, which presented the inexorable increase in entropy as a statistical phenomenon that need not constrain processes involving single particles.
The radicalism of the BKS paper got a mixed reception. Einstein, perhaps predictably, was dismissive. “Abandonment of causality as a matter of principle should be permitted only in the most extreme emergency”, he wrote. Wolfgang Pauli, who had worked in Copenhagen in 1922–23, confessed to being “completely negative” about the idea. Born and Schrödinger were more favourable.
The theory did at least make a testable prediction. In Berlin, Walther Bothe proposed to his colleague Hans Geiger that they check whether energy really is conserved only on average in Compton scattering. Geiger agreed, and the duo devised a scheme for detecting both the scattered electron and the scattered photon in separate detectors. If causality and energy conservation were preserved, the detections should be simultaneous; any delay between them could indicate a violation. As Bothe would later recall: “The ‘question to Nature’ which the experiment was designed to answer could therefore be formulated as follows: is it exactly a scatter quantum and a recoil electron that are simultaneously emitted in the elementary process, or is there merely a statistical relationship between the two?” It was incredibly painstaking work to seek such coincident detections using the resources then available. But in April 1925 Geiger and Bothe reported simultaneity within a millisecond – close enough to make a strong case that Compton’s treatment, which assumed energy conservation, was correct. Compton himself, working with Alfred Simon using a cloud chamber, confirmed that energy and momentum were conserved for individual events (Phys. Rev. 26 289).
Revolutionary defeat… singularly important
Bothe was awarded the 1954 Nobel Prize for Physics for the work. He shared it with Born for his work on quantum theory, and Geiger would surely have been a third recipient, if he had not died in 1945. In his Nobel speech, Bothe definitively stated that “the strict validity of the law of the conservation of energy even in the elementary process had been demonstrated, and the ingenious way out of the wave-particle problem discussed by Bohr, Kramers, and Slater was shown to be a blind alley.”
Bohr was gracious in his defeat, writing to a colleague in April 1925 that “It seems… there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible.” Yet he was soon to have no need of that particular revolution, for just a few months later Heisenberg, who had returned to Göttingen after working with Bohr in Copenhagen for six months, came up with the first proper theory of quantum mechanics, later called matrix mechanics.
“In spite of its short lifetime, the BKS theory was singularly important,” says historian of science Helge Kragh, now emeritus professor at the Niels Bohr Institute. “Its radically new approach paved the way for a greater understanding, that methods and concepts of classical physics could not be carried over in a future quantum mechanics.”
The Bothe-Geiger experiment that [the paper] inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument [about] the probabilistic character of his matrix mechanics
The BKS paper was thus in a sense merely a mistaken curtain-raiser for the main event. But the Bothe-Geiger experiment that it inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument that the probabilistic character of his matrix mechanics (and also of Schrödinger’s 1926 version of quantum mechanics, called wave mechanics) couldn’t be explained away as a statistical expression of our ignorance about the details, as it is in classical statistical mechanics.
Rather, the probabilities that emerged from Heisenberg’s and Schrödinger’s theories applied to individual events: they were, Heisenberg said, fundamental to the way single particles behave. Schrödinger was never happy with that idea, but today it seems inescapable.
Over the next few years, Bohr and Heisenberg argued that the new quantum mechanics indeed smashed causality and shattered the conventional picture of reality as an objective world of objects moving in space–time with fixed properties. Assisted by Born, Wolfgang Pauli and others, they articulated the “Copenhagen interpretation”, which became the predominant vision of the quantum world for the rest of the century.
Failed connections
Slater wasn’t at all pleased with what became of the idea he took to Copenhagen. Bohr and Kramers had pressured him into accepting their take on it, “without the little lump carried along on the waves”, as he put it in mid-January. “I am willing to let them have their way”, he wrote at the time, but in retrospect he felt very unhappy about his time in Denmark. After the BKS theory was disproved, Bohr wrote to Slater saying “I have a bad conscience in persuading you to our views”.
Slater replied that there was no need for that. But in later life – after he had made a name for himself in solid-state physics – Slater admitted to a great deal of resentment. “I completely failed to make any connection with Bohr”, he said in a 1963 interview with the historian of science Thomas Kuhn. “I fought with them [Bohr and Kramers] so seriously that I’ve never had any respect for those people since. I had a horrible time in Copenhagen.” While most of Bohr’s colleagues and students expressed adulation, Slater’s was a rare dissenting voice.
But Slater might have reasonably felt more aggrieved at what became of his “pilot-wave” idea. Today, that interpretation of quantum theory is generally attributed to de Broglie – who intimated a similar notion in his 1924 thesis, before presenting the theory in more detail at the famous 1927 Solvay Conference – and to American physicist David Bohm, who revitalized the idea in the 1950s. Initially dismissed on both occasions, the de Broglie-Bohm theory has gained advocates in recent years, not least because it can be applied to a classical hydrodynamic analogue, in which oil droplets are steered by waves on an oil surface.
Whether or not it is the right way to think about quantum mechanics, the pilot-wave theory touches on the deep philosophical problems of the field. Can we rescue an objective reality of concrete particles with properties described by hidden variables, as Einstein had advocated, from the fuzzy veil that Bohr and Heisenberg seemed to draw over the quantum world? Perhaps Slater would at least be gratified to know that Bohr has not yet had the last word.
In a ground-breaking theoretical study, two physicists have identified a new class of quasiparticle called the paraparticle. Their calculations suggest that paraparticles exhibit quantum properties that are fundamentally different from those of familiar bosons and fermions, such as photons and electrons respectively.
Using advanced mathematical techniques, Kaden Hazzard at Rice University in the US and his former graduate student Zhiyuan Wang, now at the Max Planck Institute of Quantum Optics in Germany, have meticulously analysed the mathematical properties of paraparticles and proposed a real physical system that could exhibit paraparticle behaviour.
“Our main finding is that it is possible for particles to have exchange statistics different from those of fermions or bosons, while still satisfying the important physical principles of locality and causality,” Hazzard explains.
Particle exchange
In quantum mechanics, the behaviour of particles (and quasiparticles) is probabilistic in nature and is described by mathematical entities known as wavefunctions. These govern the likelihood of finding a particle in a particular state, as defined by properties like position, velocity, and spin. The exchange statistics of a specific type of particle dictates how its wavefunction behaves when two identical particles swap places.
For bosons such as photons, the wavefunction remains unchanged when particles are exchanged. This means that many bosons can occupy the same quantum state, enabling phenomena like lasers and superfluidity. In contrast, when fermions such as electrons are exchanged, the sign of the wavefunction flips from positive to negative or vice versa. This antisymmetric property prevents fermions from occupying the same quantum state. This underpins the Pauli exclusion principle and results in the electronic structure of atoms and the nature of the periodic table.
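Spelled out explicitly – standard quantum mechanics rather than a result of the new work – swapping two identical particles at positions x_1 and x_2 acts on the wavefunction as

\psi(x_2, x_1) = +\,\psi(x_1, x_2) \ \text{(bosons)}, \qquad \psi(x_2, x_1) = -\,\psi(x_1, x_2) \ \text{(fermions)}

and it is this simple ± sign that parastatistics generalizes.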
Until now, physicists believed that these two types of particle statistics – bosonic and fermionic – were the only possibilities in 3D space. This is the result of fundamental principles like locality, which states that events occurring at one point in space cannot instantaneously influence events at a distant location.
Breaking boundaries
Hazzard and Wang’s research overturns the notion that 3D systems are limited to bosons and fermions and shows that new types of particle statistics, called parastatistics, can exist without violating locality.
The key insight in their theory lies in the concept of hidden internal characteristics. Beyond the familiar properties like position and spin, paraparticles require additional internal parameters that enable more complex wavefunction behaviour. This hidden information allows paraparticles to exhibit exchange statistics that go beyond the binary distinction of bosons and fermions.
Paraparticles exhibit phenomena that resemble – but are distinct from – fermionic and bosonic behaviours. For example, while fermions cannot occupy the same quantum state, up to two paraparticles could be allowed to coexist at the same point in space. This behaviour strikes a balance between the exclusivity of fermions and the clustering tendency of bosons.
Bringing paraparticles to life
While no elementary particles are known to exhibit paraparticle behaviour, the researchers believe that paraparticles might manifest as quasiparticles in engineered quantum systems or certain materials. A quasiparticle is a particle-like collective excitation of a system. A familiar example is the hole, which is created in a semiconductor when a valence-band electron is excited to the conduction band. The vacancy (or hole) left in the valence band behaves as a positively-charged particle that can travel through the semiconductor lattice.
Experimental systems of ultracold atoms created by collaborators of the duo could be one place to look for the exotic particles. “We are working with them to see if we can detect paraparticles there,” explains Wang.
In ultracold atom experiments, lasers and magnetic fields are used to trap and manipulate atoms at temperatures near absolute zero. Under these conditions, atoms can mimic the behaviour of more exotic particles. The team hopes that similar setups could be used to observe paraparticle-like behaviour in higher-dimensional systems, such as 3D space. However, further theoretical advances are needed before such experiments can be designed.
Far-reaching implications
The discovery of paraparticles could have far-reaching implications for physics and technology. Fermionic and bosonic statistics have already shaped our understanding of phenomena ranging from the stability of neutron stars to the behaviour of superconductors. Paraparticles could similarly unlock new insights into the quantum world.
“Fermionic statistics underlie why some systems are metals and others are insulators, as well as the structure of the periodic table,” Hazzard explains. “Bose-Einstein condensation [of bosons] is responsible for phenomena such as superfluidity. We can expect a similar variety of phenomena from paraparticles, and it will be exciting to see what these are.”
As research into paraparticles continues, it could open the door to new quantum technologies, novel materials, and deeper insights into the fundamental workings of the universe. This theoretical breakthrough marks a bold step forward, pushing the boundaries of what we thought possible in quantum mechanics.
If you’re a postdoc who wants to nail down that permanent faculty position, it’s wise to publish a highly cited paper after your PhD. That’s the conclusion of a study by an international team of researchers, which finds that publication rates and performance during the postdoc period are key to academic retention and early-career success. Their analysis also reveals that more than four in 10 postdocs drop out of academia.
A postdoc is usually a temporary appointment that is seen as preparation for an academic career. Many researchers, however, end up doing several postdocs in a row as they hunt for a permanent faculty job. “There are many more postdocs than there are faculty positions, so it is a kind of systemic bottleneck,” says Petter Holme, a computer scientist at Aalto University in Finland, who led the study.
Previous research into academic career success has tended to overlook the role of the postdoc, focusing instead on, say, the impact of where researchers did their PhD. To tease out the effect of a postdoc, Holme and colleagues combined information on academics’ career stages from LinkedIn with their publication histories obtained from Microsoft Academic Graph. The resulting global dataset covered 45,572 careers spanning 25 years across all academic disciplines.
Overall, they found, 41% of postdocs left academia. But researchers who publish a highly cited paper as a postdoc are much more likely to pursue a faculty career – whether they published a highly cited paper during their PhD degree, or not. Publication rate is also vital, with researchers who publish less as postdocs compared to their PhD days being more likely to drop out of academia. Conversely, as productivity increased, so did the likelihood of a postdoc gaining a faculty position.
Expanding horizons
Holme says their results suggest that a researcher only has a few years “to get on the positive feedback loop, where one success leads to another”. In fact, the team found that a “moderate” change in research topic when moving from PhD to postdoc could improve future success. “It is a good thing to change your research focus, but not too much,” says Holme, because it widens a researcher’s perspective without them having to learn an entirely new research topic from scratch.
Likewise, shifting perspective by moving abroad can also benefit postdocs. The analysis shows that a researcher moving abroad for a postdoc boosts their citations, but a move to a different institution in the same country has a negligible impact.
Two independent teams in the US have demonstrated the potential of using the optical properties of nanocrystals to create remote sensors that measure tiny forces on tiny length scales. One team is based at Stanford University and used nanocrystals to measure the micronewton-scale forces exerted by a worm as it chewed bacteria. The other team is based at several institutes and used the photon avalanche effect in nanocrystals to measure sub-nanonewton to micronewton forces. The latter technique could potentially be used to study forces involved in processes such as stem cell differentiation.
Remote sensing of forces at small scales is challenging, especially inside living organisms. Optical tweezers cannot make remote measurements inside the body, while fluorophores – molecules that absorb and re-emit light – can measure forces in organisms, but have limited range, problematic stability or, in the case of quantum dots, toxicity. Nanocrystals with optical properties that change when subjected to external forces offer a way forward.
At Stanford, materials scientist Jennifer Dionne led a team that used nanocrystals doped with ytterbium and erbium. When two ytterbium atoms absorb near-infrared photons, they can then transfer energy to a nearby erbium atom. In this excited state, the erbium can either decay directly to its lowest energy state by emitting red light, or become excited to an even higher-energy state that decays by emitting green light. These processes are called upconversion.
Colour change
The ratio of green to red emission depends on the separation between the ytterbium and erbium atoms, and on the separation between the erbium atoms themselves, explains Dionne’s PhD student Jason Casar, lead author of the paper describing the Stanford research. Forces on the nanocrystal can change these separations and therefore affect that ratio.
The researchers encased their nanocrystals in polystyrene vessels approximately the size of an E. coli bacterium. They then mixed the encased nanoparticles with E. coli bacteria, which were fed to tiny nematode worms. To extract the nutrients, the worm’s pharynx needs to break open the bacterial cell wall. “The biological question we set out to answer is how much force is the bacterium generating to achieve that breakage?” explains Stanford’s Miriam Goodman.
The researchers shone near-infrared light on the worms, allowing them to monitor the flow of the nanocrystals. By measuring the colour of the emitted light when the particles reached the pharynx, they determined the force it exerted with micronewton-scale precision.
Meanwhile, a collaboration of scientists at Columbia University, Lawrence Berkeley National Laboratory and elsewhere has shown that a process called photon avalanche can be used to measure even smaller forces on nanocrystals. The team’s avalanching nanoparticles (ANPs) are sodium yttrium fluoride nanocrystals doped with thulium – and were discovered by the team in 2021.
The fun starts here
The sensing process uses a laser tuned off-resonance from any transition from the ground state of the ANP. “We’re bathing our particles in 1064 nm light,” explains James Schuck of Columbia University, whose group led the research. “If the intensity is low, that all just blows by. But if, for some reason, you do eventually get some absorption – maybe a non-resonant absorption in which you give up a few phonons…then the fun starts. Our laser is resonant with an excited state transition, so you can absorb another photon.”
This creates a doubly excited state that can decay radiatively straight to the ground state, producing an upconverted photon. Alternatively, its energy can be transferred to a nearby thulium atom, which then becomes resonant with the excited-state transition and can excite more thulium atoms into resonance with the laser. “That’s the avalanche,” says Schuck. “We find on average you get 30 or 40 of these events – it’s analogous to a chain reaction in nuclear fission.”
Now, Schuck and colleagues have shown that the exact number of photons produced in each avalanche decreases when the nanoparticle experiences a compressive force. One reason is that the phonon frequencies are raised as the lattice is compressed, making non-radiative decay energetically more favourable.
The thulium-doped nanoparticles decay by emitting either red or near infrared photons. As the force increases, the red dims more quickly, causing a change in the colour of the emitted light. These effects allowed the researchers to measure forces from the sub-nanonewton to the micronewton range – at which point the light output from the nanoparticles became too low to detect.
Not just for forces
Schuck and colleagues are now seeking practical applications of their discovery, and not just for measuring forces.
“We’re discovering that this avalanching process is sensitive to a lot of things,” says Schuck. “If we put these particles in a cell and we’re trying to measure a cellular force gradient, but the cell also happened to change its temperature, that would also affect the brightness of our particles, and we would like to be able to differentiate between those things. We think we know how to do that.”
If the technique could be made to work in a living cell, it could be used to measure tiny forces such as those in the extracellular matrix that dictate stem cell differentiation.
Andries Meijerink of Utrecht University in the Netherlands believes both teams have done important work that is impressive in different ways: Schuck and colleagues for unveiling a fundamentally new force-sensing technique, and Dionne’s team for demonstrating a remarkable practical application.
However, Meijerink is sceptical that photon avalanching will be useful for sensing in the short term. “It’s a very intricate process,” he says, adding, “There’s a really tricky balance between this first absorption step, which has to be slow and weak, and this resonant absorption”. Nevertheless, he says that researchers are discovering other systems that can avalanche. “I’m convinced that many more systems will be found,” he says.
Both studies are described in Nature, in separate papers from Dionne’s and Schuck’s groups.
Last year was the year of elections and 2025 is going to be the year of decisions.
After many countries, including the UK, Ireland and the US, went to the polls in 2024, the start of 2025 will see governments at the beginning of new terms, forced to respond swiftly to mounting economic, social, security, environmental and technological challenges.
These issues would be difficult to address at any given time, but today they come amid a turbulent geopolitical context. Governments are often judged against short milestones – the first 100 days or a first budget – but urgency should not come at the cost of thinking long-term, because the decisions over the next few months will shape outcomes for years, perhaps decades, to come. This is no less true for science than it is for health and social care, education or international relations.
In the UK, the first half of the year will be dominated by the government’s spending review. Due in late spring, it could be one of the toughest political tests for UK science, as the implications of the tight spending plans announced in the October budget become clear. Decisions about departmental spending will have important implications for physics funding, from research to infrastructure, facilities and teaching.
One of the UK government’s commitments is to establish 10-year funding cycles for key R&D activities – a policy that could be a positive step. Physics discoveries often take time to realise their full impact, but their transformational nature is indisputable. From fibre-optic communications to magnetic resonance imaging, physics has been indispensable to many of the world’s most impactful and successful innovations.
Emerging technologies, enabled by physicists’ breakthroughs in fields such as materials science and quantum physics, promise to transform the way we live and work, and create new business opportunities and open up new markets. A clear, comprehensive and long-term vision for R&D would instil confidence among researchers and innovators, and long-term and sustainable R&D funding would enable people and disruptive ideas to flourish and drive tomorrow’s breakthroughs.
Alongside the spending review, we are also expecting the publication of the government’s industrial strategy. The focus of the green paper published last year was an indication of how the strategy will place significance on science and technology in positioning the UK for economic growth.
If we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead
Physics-based industries are a foundation stone for the UK economy and are highly productive, as highlighted by research commissioned by the Institute of Physics, which publishes Physics World. Across the UK, the physics sector generates £229bn gross value added, or 11% of total UK gross domestic product. It creates a collective turnover of £643bn, or £1380bn when indirect and induced turnover is included.
Labour productivity in physics-based businesses is also strong at £84 300 per worker, per year. So, if physics is not at the heart of this effort, then the government’s mission of economic revival is in danger of failing to get off the launch pad.
A pivotal year
Another of the new government’s policy priorities is the strategic defence review, which is expected to be published later this year. It could have huge implications for physics given its core role in many of the technologies that contribute to the UK’s defence capabilities. The changing geopolitical landscape, and potential for strained relations between global powers, may well bring research security to the front of the national mind.
Intellectual property, and scientific innovation, are some of the UK’s greatest strengths and it is right to secure them. But physics discoveries in particular can be hampered by overzealous security measures. So much of the important work in our discipline comes from years of collaboration between researchers across the globe. Decisions about research security need to protect, not hamper, the future of UK physics research.
This year could also be pivotal for UK universities, as securing their financial stability and future will be one of the major challenges. Last year, the pressures faced by higher education institutions became apparent, with announcements of course closures, redundancies and restructures as a way of saving money. The rise in tuition fees has far from solved the problem, so we need to be prepared for more turbulence coming for the higher education sector.
These things matter enormously. We have heard that universities are facing a tough situation, and it’s getting harder for physics departments to exist. But if we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead.
As we celebrate the International Year of Quantum Science and Technology that marks the centenary of the initial development of quantum mechanics by Werner Heisenberg, 2025 is a reminder of how the benefits of physics span over decades.
We need to enhance all the vital and exciting developments that are happening in physics departments. The country wants and needs a stronger scientific workforce – just think about all those individuals who studied physics and now work in industries that are defending the country – and that workforce will be strongly dependent on physics skills. So our priority is to make sure that physics departments keep doing world-leading research and preparing the next generation of physicists that they do so well.
Permanent distortions in space–time caused by the passage of gravitational waves could be detectable from Earth. Known as “gravitational memory”, such distortions are predicted to occur most prominently when the core of a supernova collapses. Observing them could therefore provide a window into the death of massive stars and the creation of black holes, but there’s a catch: the supernova might have to happen in our own galaxy.
Physicists have been detecting gravitational waves from colliding stellar-mass black holes and neutron stars for almost a decade now, and theory predicts that core-collapse supernovae should also produce them. The difference is that unlike collisions, supernovae tend to be lopsided – they don’t explode outwards equally in all directions. It is this asymmetry – in both the emission of neutrinos from the collapsing core and the motion of the blast wave itself – that produces the gravitational-wave memory effect.
“The memory is the result of the lowest frequency aspects of these motions,” explains Colter Richardson, a PhD student at the University of Tennessee in Knoxville, US and co-lead author (with Haakon Andresen of Sweden’s Oskar Klein Centre) of a Physical Review Letters paper describing how gravitational-wave memory detection might work on Earth.
Filtering out seismic noise
Previously, many physicists assumed it wouldn’t be possible to detect the memory effect from Earth. This is because it manifests at frequencies below 10 Hz, where noise from seismic events tends to swamp detectors. Indeed, Harvard astrophysicist Kiranjyot Gill argues that detecting gravitational memory “would require exceptional sensitivity in the millihertz range to separate it from background noise and other astrophysical signals” – a sensitivity that she says Earth-based detectors simply don’t have.
Anthony Mezzacappa, Richardson’s supervisor at Tennessee, counters this by saying that while the memory signal itself cannot be detected, the ramp-up to it can. “The signal ramp-up corresponds to a frequency of 20–30 Hz, which is well above 10 Hz, below which the detector response needs to be better characterized for what we can detect on Earth, before dropping down to virtually 0 Hz where the final memory amplitude is achieved,” he tells Physics World.
The key, Mezzacappa explains, is a “matched filter” technique in which templates of what the ramp-up should look like are matched to the signal to pick it out from low-frequency background noise. Using this technique, the team’s simulations show that it should be possible for Earth-based gravitational-wave detectors such as LIGO to detect the ramp-up even though the actual deformation effect would be tiny – around 10⁻¹⁶ cm “scaled to the size of a LIGO detector arm”, Richardson says.
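To make the matched-filter idea concrete, here is a minimal sketch in Python. It is not the team’s actual analysis pipeline: the sample rate, the tanh-shaped “ramp-up” template and the noise level are all invented for illustration, and real searches also whiten the data against the detector’s noise spectrum before correlating.

import numpy as np

# Minimal sketch of matched filtering: correlate noisy data against a template.
# All numbers below are placeholders, not values from the study.
rng = np.random.default_rng(0)
fs = 4096                                   # assumed sample rate in Hz
t = np.arange(0, 1, 1 / fs)                 # one second of data

# Hypothetical "ramp-up" template: a smooth rise towards a constant offset
template = 0.5 * (1 + np.tanh((t - 0.5) * 40))

# Synthetic detector output: a weak copy of the template buried in Gaussian noise
data = 0.05 * template + rng.normal(0.0, 1.0, t.size)

# Matched filter: correlate mean-subtracted data with the normalised template
template = template - template.mean()
template = template / np.linalg.norm(template)
output = np.correlate(data - data.mean(), template, mode="same")

# The peak of the filter output marks the best-fit arrival time of the ramp-up
print("peak filter output at t =", t[np.argmax(np.abs(output))], "s")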
The snag is that for the ramp-up to be detectable, the simulations suggest the supernova would need to be close – probably within 10 kiloparsecs (32,615 light-years) of Earth. That would place it within our own galaxy, and galactic supernovae are not exactly common. The last to be observed in real time was spotted by Johannes Kepler in 1604; though there have been others since, we’ve only identified their remnants after the fact.
Going to the Moon
Mezzacappa and colleagues are optimistic that multimessenger astronomy techniques such as gravitational-wave and neutrino detectors will help astronomers identify future Milky Way supernovae as they happen, even if cosmic dust (for example) hides their light for optical observers.
Gill, however, prefers to look towards the future. In a paper under revision at Astrophysical Journal Letters, and currently available as a preprint, she cites two proposals for detectors on the Moon that could transform gravitational-wave physics and extend the range at which gravitational memory signals can be detected.
The first, called the Lunar Gravitational Wave Antenna, would use inertial sensors to detect the Moon shaking as gravitational waves ripple through it. The other, known as the Laser Interferometer Lunar Antenna, would be like a giant, triangular version of LIGO with arms spanning tens of kilometres open to space. Both are distinct from the European Space Agency’s Laser Interferometer Space Antenna, which is due for launch in the 2030s, but is optimized to detect gravitational waves from supermassive black holes rather than supernovae.
“Lunar-based detectors or future space-based observatories beyond LISA would overcome the terrestrial limitations,” Gill argues. Such detectors, she adds, could register a memory effect from supernovae tens or even hundreds of millions of light-years away. This huge volume of space would encompass many galaxies, making the detection of gravitational waves from core-collapse supernovae almost routine.
The memory of something far away
In response, Richardson points out that his team’s filtering method could also work at longer ranges – up to approximately 10 million light-years, encompassing our own Local Group of galaxies and several others – in certain circumstances. If a massive star is spinning very quickly, or it has an exceptionally strong magnetic field, its eventual supernova explosion will be highly collimated and almost jet-like, boosting the amplitude of the memory effect. “If the amplitude is significantly larger, then the detection distance is also significantly larger,” he says.
Whatever technologies are involved, both groups agree that detecting gravitational-wave memory is important. It might, for example, tell us whether a supernova has left behind a neutron star or a black hole, which would be valuable because the reasons one forms and not the other remain a source of debate among astrophysicists.
“By complementing other multimessenger observations in the electromagnetic spectrum and neutrinos, gravitational-wave memory detection would provide unparalleled insights into the complex interplay of forces in core-collapse supernovae,” Gill says.
Richardson agrees that a detection would be huge and hopes that his work and that of others “motivates new investigations into the low-frequency region of gravitational-wave astronomy”.
Several years ago I was sitting at the back of a classroom supporting a newly qualified science teacher. The lesson was going well, a pretty standard class on Hooke’s law, when a student leaned over to me and asked “Why are we doing this? What’s the point?”.
Having taught myself, this was a question I had been asked many times before. I suspect that when I was a teacher, I went for the knee-jerk “it’s useful if you want to be an engineer” response, or something similar. This isn’t a very satisfying answer, but I never really had the time to formulate a real justification for studying Hooke’s law, or physics in general for that matter.
Who is the physics curriculum designed for? Should it be designed for the small number of students who will pursue the subject, or subjects allied to it, at the post-16 and post-18 level? Or should we be reflecting on the needs of the overwhelming majority who will never use most of the curriculum content again? Only about 10% of students pursue physics or physics-rich subjects post-16 in England, and at degree level, only around 4000 students graduate with physics degrees in the UK each year.
One argument often levelled at me is that learning this is “useful”, to which I retort – in a similar vein to the student from the first paragraph – “In what way?” In the 40 years or so since first learning Hooke’s law, I can’t remember ever explicitly using it in my everyday life, despite being a physicist. Whenever I give a talk on this subject, someone often pipes up with a tenuous example, but I suspect they are in the minority. An audience member once said they consider the elastic behaviour of wire when hanging pictures, but I suspect that many thousands of pictures have been successfully hung with no recourse to F = –kx.
Hooke’s law is incredibly important in engineering but, again, most students will not become engineers or rely on a knowledge of the properties of springs, unless they get themselves a job in a mattress factory.
From a personal perspective, Hooke’s law fascinates me. I find it remarkable that we can see the macroscopic properties of materials being governed by microscopic interactions and that this can be expressed in a simple linear form. There is no utilitarianism in this, simply awe, wonder and aesthetics. I would always share this “joy of physics” with my students, and it was incredibly rewarding when this was reciprocated. But for many, if not most, my personal perspective was largely irrelevant, and they knew that the curriculum content would not directly support them in their future careers.
At this point, I should declare my position – I don’t think we should take Hooke’s law, or physics, off the curriculum, but my reason is not the one often given to students.
A series of lessons on Hooke’s law is likely to include: experimental design; setting up and using equipment; collecting numerical data using a range of devices; recording and presenting data, including graphs; interpreting data; modelling data and testing theories; devising evidence-based explanations; communicating ideas; evaluating procedures; critically appraising data; collaborating with others; and working safely.
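As a flavour of the data-modelling step in that list, here is a minimal sketch in Python; the force–extension readings are invented for illustration, not real classroom data.

import numpy as np

# Fit Hooke's law, F = kx, to (invented) force-extension measurements
extension_m = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])   # metres
force_N = np.array([0.00, 0.41, 0.79, 1.22, 1.58, 2.05])       # newtons

# Least-squares straight line through the data: the gradient is the spring constant k
k, intercept = np.polyfit(extension_m, force_N, 1)
print(f"spring constant k = {k:.1f} N/m, intercept = {intercept:.3f} N")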
Science education must be about preparing young people to be active and critical members of a democracy, equipped with the skills and confidence to engage with complex arguments that will shape their lives. For most students, this is the most valuable lesson they will take away from Hooke’s law. We should encourage students to find our subject fascinating and relevant, and in doing so make them receptive to the acquisition of scientific knowledge throughout their lives.
At a time when pressures on the education system are greater than ever, we must be able to articulate and justify our position within a crowded curriculum. I don’t believe that students should simply accept that they should learn something because it is on a specification. But they do deserve a coherent reason that relates to their lives and their careers. As science educators, we owe it to our students to have an authentic justification for what we are asking them to do. As physicists, even those who don’t have to field tricky questions from bored teenagers, I think it’s worthwhile for all of us to ask ourselves how we would answer the question “What is the point of this?”.
The New Journal of Physics (NJP) has long been a flagship journal for IOP Publishing. The journal published its first volume in 1998 and was an early pioneer of open-access publishing. Co-owned by the Institute of Physics, which publishes Physics World, and the Deutsche Physikalische Gesellschaft (DPG), after some 25 years the journal is now seeking to establish itself further as a journal that represents the entire range of physics disciplines.
NJP publishes articles in pure, applied, theoretical and experimental research, as well as interdisciplinary topics. Research areas include optics, condensed-matter physics, quantum science and statistical physics, and the journal publishes a range of article types such as papers, topical reviews, fast-track communications, perspectives and special issues.
While NJP has been seen as a leading journal for quantum information, optics and condensed-matter physics, the journal is currently undergoing a significant transformation to broaden its scope to attract a wider array of physics disciplines. This shift aims to enhance the journal’s relevance, foster a broader audience and maintain NJP’s position as a leading publication in the global scientific community.
While quantum physics in general, and quantum optics and quantum information in particular, will remain crucial areas for the journal, researchers in other fields such as gravitational-wave research, condensed- and soft-matter physics, polymer physics, theoretical chemistry, statistical and mathematical physics are being encouraged to submit their articles to the journal. “It’s a reminder to the community that NJP is a journal for all kinds of physics and not just a select few,” says quantum physicist Andreas Buchleitner from the Albert-Ludwigs-Universität Freiburg who is NJP’s editor-in-chief.
Historically, NJP has had a strong focus on theoretical physics, particularly in quantum information. Another significant aspect of NJP’s new strategy is therefore the inclusion of more experimental research. Attracting high-quality experimental papers will balance the journal’s content, enhance its reputation as a comprehensive physics journal and allow it to compete with other leading titles. Part of this shift will also involve building a reliable and loyal group of authors who regularly publish their best work in NJP.
A broader scope
To aid this move, NJP has recently grown its editorial board to add expertise in subjects such as gravitational-wave physics. This diversity of capabilities is crucial to evaluate submissions from different areas of physics and maintain high standards of quality during the peer-review process. That point is particularly relevant for Buchleitner, who sees the expansion of the editorial board as helping to improve the journal’s handling of submissions to ensure that authors feel their work is being evaluated fairly and by knowledgeable and engaged individuals. “Increasing the editorial board was quite an important concept in terms of helping the journal expand,” adds Buchleitner. “What is important to me is that scientists who contact the journal feel that they are talking to people and not to artificial intelligence substitutes.”
While citation metrics such as impact factors are often debated in terms of their scientific value, they remain essential for a journal’s visibility and reputation. In the competitive landscape of scientific publishing, they can set a journal apart from its competitors. With that in mind, NJP, which has an impact factor of 2.8, is also focusing on improving its citation indices to compete with top-tier journals.
Yet that involves not just the impact factor but also metrics that reflect the efficient and constructive handling of submissions – the kind of experience that encourages researchers to publish with the journal again. To set NJP apart from competitors, the time taken to first decision before peer review, for example, is only six days, while the journal has a median of 50 days to first decision after peer review.
Society benefits
While NJP pioneered the open-access model of scientific publishing, that position is no longer unique given the huge increase in open-access journals over the past decade. Yet the publishing model continues to be an important aspect of the journal’s identity to ensure that the research it publishes is freely available to all. Another crucial factor to attract authors and set it apart from commercial entities is that NJP is published by learned societies – the IOP and DPG.
NJP has often been thought of as a “European journal”. Indeed, NJP’s role is significant in the context of the UK leaving the European Union, in that it serves as a bridge between the UK and mainland European research communities. “That’s one of the reasons why I like the journal,” says Buchleitner, who adds that with a wider scope NJP will not only publish the best research from around the world but also strengthen its identity as a leading European journal.
The darkest, clearest skies anywhere in the world could suffer “irreparable damage” from a proposed industrial megaproject. That is the warning from the European Southern Observatory (ESO) in response to plans by AES Andes, a subsidiary of the US power company AES Corporation, to develop a green hydrogen project just a few kilometres from ESO’s flagship Paranal Observatory in Chile’s Atacama Desert.
The Atacama Desert is considered one of the most important astronomical research sites in the world due to its stable atmosphere and lack of light pollution. Sitting 2635 m above sea level, on Cerro Paranal, the Paranal Observatory is home to key astronomical instruments including the Very Large Telescope. The Extremely Large Telescope (ELT) – the largest visible and infrared light telescope in the world – is also being constructed at the observatory on Cerro Armazones with first light expected in 2028.
AES Chile submitted an Environmental Impact Assessment in Chile for an industrial-scale green hydrogen project at the end of December. The complex is expected to cover more than 3000 hectares – similar in size to 1200 football pitches. According to AES, the project is in the early stages of development, but could include green hydrogen and ammonia production plants, solar and wind farms as well as battery storage facilities.
ESO is calling for the development to be relocated to preserve “one of Earth’s last truly pristine dark skies” and “safeguard the future” of astronomy. “The proximity of the AES Andes industrial megaproject to Paranal poses a critical risk to the most pristine night skies on the planet,” says ESO director general Xavier Barcons. “Dust emissions during construction, increased atmospheric turbulence, and especially light pollution will irreparably impact the capabilities for astronomical observation.”
In a statement sent to Physics World, an AES spokesperson says they “understand there are concerns raised by ESO regarding the development of renewable energy projects in the area”. The spokesperson adds that the project would be in an area “designated for renewable energy development”. They also claim that the company is “dedicated to complying with all regulatory guidelines and rules” and “supporting local economic development while maintaining the highest environmental and safety standards”.
According to the statement, the proposal “incorporates the highest standards in lighting” to comply with Chilean regulatory requirements designed “to prevent light pollution, and protect the astronomical quality of the night skies”.
Yet Romano Corradi, director of the Gran Telescopio Canarias at the Roque de los Muchachos Observatory in La Palma, Spain, says it is “obvious” that light pollution from such a large complex will negatively affect observations. “There are not many places left in the world with the dark and other climatic conditions necessary to do cutting-edge science in the field of observational astrophysics,” adds Corradi. “Light pollution is a global effect and it is therefore essential to protect sites as important as Paranal.”
Biomedical microrobots could revolutionize future cancer treatments, reliably delivering targeted doses of toxic cancer-fighting drugs to destroy malignant tumours while sparing healthy bodily tissues. Development of such drug-delivering microrobots is at the forefront of biomedical engineering research. However, there are many challenges to overcome before this minimally invasive technology moves from research lab to clinical use.
Microrobots must be capable of rapid, steady and reliable propulsion through various biological materials, while generating enhanced image contrast to enable visualization through thick body tissue. They require an accurate guidance system to precisely target diseased tissue. They also need to support sizable payloads of drugs, maintain their structure long enough to release this cargo, and then efficiently biodegrade – all without causing any harm to the body.
Aiming to meet this tall order, researchers at the California Institute of Technology (Caltech) and the University of Southern California have designed a hydrogel-based, image-guided, bioresorbable acoustic microrobot (BAM) with these characteristics and capabilities. Reporting their findings in Science Robotics, they demonstrated that the BAMs could successfully deliver drugs that decreased the size of bladder tumours in mice.
Microrobot design
The team, led by Caltech’s Wei Gao, fabricated the hydrogel-based BAMs using high-resolution two-photon polymerization. The microrobots are hollow spheres with an outer diameter of 30 µm and an 18 µm-diameter internal cavity to trap a tiny air bubble inside.
The BAMs have a hydrophobic inner surface to prolong microbubble retention within biofluids and a hydrophilic outer layer that prevents microrobot clustering and promotes degradation. Magnetic nanoparticles and therapeutic agents integrated into the hydrogel matrix enable wireless magnetic steering and drug delivery, respectively.
The entrapped microbubbles are key as they provide propulsion for the BAMs. When stimulated by focused ultrasound (FUS), the bubbles oscillate at their resonant frequencies. This vibration creates microstreaming vortices around the BAM, generating a propulsive force in the opposite direction of the flow. The microbubbles inside the BAMs also act as ultrasound contrast agents, enabling real-time, deep-tissue visualization.
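For readers who want a feel for the numbers, the textbook Minnaert formula for a free gas bubble in a liquid shows why micrometre-sized bubbles resonate at hundreds of kilohertz. The short Python sketch below is only a back-of-the-envelope illustration – it ignores the hydrogel shell and the confinement of the cavity, so the true resonance of a BAM will differ – but it lands in the same ballpark as the 480 kHz excitation used in the study.

```python
import math

# Back-of-the-envelope Minnaert resonance for a free air bubble in water.
# The bubble inside a BAM sits in a hydrogel cavity, so its true resonance
# will differ; this only shows why ~10 um bubbles resonate at hundreds of kHz.

R = 9e-6        # bubble radius in metres (18 um-diameter internal cavity)
gamma = 1.4     # adiabatic index of air
P0 = 1.013e5    # ambient pressure in pascals
rho = 1000.0    # density of water in kg/m^3

f = math.sqrt(3 * gamma * P0 / rho) / (2 * math.pi * R)
print(f"Estimated resonance: {f / 1e3:.0f} kHz")   # about 365 kHz
```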
The researchers designed the microrobots with two cylinder-like openings, a configuration they found achieves faster propulsion speeds than single- or triple-opening spheres. They attribute this to propulsive forces that run parallel to the sphere’s boundary, improving both the speed and stability of movement when the BAMs are activated by FUS.
They also discovered that placing the microbubble off-centre within the sphere generated propulsion speeds more than twice those achieved by BAMs with a symmetric design.
To perform simultaneous imaging of BAM location and acoustic propulsion within soft tissue, the team employed a dual-probe design. An ultrasound imaging probe enabled real-time imaging of the bubbles, while the acoustic field generated by a FUS probe (at an excitation frequency of 480 kHz and an applied acoustic pressure of 626 kPa peak-to-peak) provided effective propulsion.
In vitro and in vivo testing
The team performed real-time imaging of the propulsion of BAMs in vitro, using an agarose chamber to simulate an artificial bladder. When exposed to an ultrasound field generated by the FUS probe, the BAMs demonstrated highly efficient motion, as observed in the ultrasound imaging scans. The propulsion direction of BAMs could be precisely controlled by an external magnetic field.
The researchers also conducted in vivo testing, using laboratory mice with bladder cancer and the anti-cancer drug 5-fluorouracil (5-FU). They treated groups of mice with either phosphate-buffered saline, free drug, passive BAMs or active (acoustically actuated and magnetically guided) BAMs, at three-day intervals over four sessions. They then monitored the tumour progression for 21 days, using bioluminescence signals emitted by cancer cells.
The active BAM group exhibited a 93% decrease in bioluminescence by the 14th day, indicating large tumour shrinkage. Histological examination of excised bladders revealed that mice receiving this treatment had considerably reduced tumour sizes compared with the other groups.
“Embedding the anticancer drug 5-FU into the hydrogel matrix of BAMs substantially improved the therapeutic efficiency compared with 5-FU alone,” the authors write. “These BAMs used a controlled-release mechanism that prolonged the bioavailability of the loaded drug, leading to sustained therapeutic activity and better outcomes.”
Mice treated with active BAMs experienced no weight changes, and no adverse effects to the heart, liver, spleen, lung or kidney compared with the control group. The researchers also evaluated in vivo degradability by measuring BAM bioreabsorption rates following subcutaneous implantation into both flanks of a mouse. Within six weeks, they observed complete breakdown of the microrobots.
Gao tells Physics World that the team has subsequently expanded the scope of its work to optimize the design and performance of the microbubble robots for broader biomedical applications.
“We are also investigating the use of advanced surface engineering techniques to further enhance targeting efficiency and drug loading capacity,” he says. “Planned follow-up studies include preclinical trials to evaluate the therapeutic potential of these robots in other tumour models, as well as exploring their application in non-cancerous diseases requiring precise drug delivery and tissue penetration.”
So-called “forever chemicals”, or per- and polyfluoroalkyl substances (PFAS), are widely used in consumer, commercial and industrial products, and have subsequently made their way into humans, animals, water, air and soil. Despite this ubiquity, there are still many unknowns regarding the potential human health and environmental risks that PFAS pose.
Join us for an in-depth exploration of PFAS with four leading experts who will shed light on the scientific advances and future challenges in this rapidly evolving research area.
Our panel will guide you through a discussion of PFAS classification and sources, the journey of PFAS through ecosystems, strategies for PFAS risk mitigation and remediation, and advances in the latest biotechnological innovations to address their effects.
Sponsored by Sustainability Science and Technology, a new journal from IOP Publishing that provides a platform for researchers, policymakers, and industry professionals to publish their research on current and emerging sustainability challenges and solutions.
Jonas Baltrusaitis, inaugural editor-in-chief of Sustainability Science and Technology, has co-authored more than 300 research publications on innovative materials. His work includes nutrient recovery from waste, nutrient formulation and delivery, and renewable energy-assisted catalysis for energy carrier and commodity chemical synthesis and transformations.
Linda S Lee is a distinguished professor at Purdue University with joint appointments in the Colleges of Agriculture (COA) and Engineering, program head of the Ecological Sciences & Engineering Interdisciplinary Graduate Program and COA assistant dean of graduate education and research. She joined Purdue in 1993 with degrees in chemistry (BS), environmental engineering (MS) and soil chemistry/contaminant hydrology (PhD) from the University of Florida. Her research includes chemical fate, analytical tools, waste reuse, bioaccumulation, and contaminant remediation and management strategies with PFAS challenges driving much of her research for the last two decades. Her research is supported by a diverse funding portfolio. She has published more than 150 papers with most in top-tier environmental journals.
Clinton Williams is the research leader of Plant and Irrigation and Water Quality Research units at US Arid Land Agricultural Research Center. He has been actively engaged in environmental research focusing on water quality and quantity for more than 20 years. Clinton looks for ways to increase water supplies through the safe use of reclaimed waters. His current research is related to the environmental and human health impacts of biologically active contaminants (e.g. PFAS, pharmaceuticals, hormones and trace organics) found in reclaimed municipal wastewater and the associated impacts on soil, biota, and natural waters in contact with wastewater. His research is also looking for ways to characterize the environmental loading patterns of these compounds while finding low-cost treatment alternatives to reduce their environmental concentration using byproducts capable of removing the compounds from water supplies.
Sara Lupton has been a research chemist with the Food Animal Metabolism Research Unit at the Edward T Schafer Agricultural Research Center in Fargo, ND within the USDA-Agricultural Research Service since 2010. Sara’s background is in environmental analytical chemistry. She is the ARS lead scientist for the USDA’s Dioxin Survey and other research includes the fate of animal drugs and environmental contaminants in food animals and investigation of environmental contaminant sources (feed, water, housing, etc.) that contribute to chemical residue levels in food animals. Sara has conducted research on bioavailability, accumulation, distribution, excretion, and remediation of PFAS compounds in food animals for more than 10 years.
Jude Maul received a master’s degree in plant biochemistry from the University of Kentucky and a PhD in horticulture and biogeochemistry from Cornell University in 2008. Since then he has been with the USDA-ARS as a research ecologist in the Sustainable Agriculture System Laboratory. Jude’s research focuses on molecular ecology at the plant/soil/water interface in the context of plant health, nutrient acquisition and productivity. Taking a systems approach to agroecosystem research, Jude leads the USDA-ARS-LTAR Soils Working Group, which is creating a national soils data repository; this work ties in with his research contributions to national soil-health management recommendations.
About this journal
Sustainability Science and Technology is an interdisciplinary, open access journal dedicated to advances in science, technology, and engineering that can contribute to a more sustainable planet. It focuses on breakthroughs in all science and engineering disciplines that address one or more of the three sustainability pillars: environmental, social and/or economic. Editor-in-chief: Jonas Baltrusaitis, Lehigh University, USA
Striking evidence that string theory could be the sole viable “theory of everything” has emerged in a new theoretical study of particle scattering that was done by a trio of physicists in the US. By unifying all fundamental forces of nature, including gravity, string theory could provide the long-sought quantum description of gravity that has eluded scientists for decades.
The research was done by Caltech’s Clifford Cheung and Aaron Hillman along with Grant Remmen at New York University. They have delved into the intricate mathematics of scattering amplitudes, which are quantities that encapsulate the probabilities of particles interacting when they collide.
Through a novel application of the bootstrap approach, the trio demonstrated that imposing general principles of quantum mechanics uniquely determines the scattering amplitudes of particles at the smallest scales. Remarkably, the results match the string scattering amplitudes derived in earlier works. This suggests that string theory may indeed be an inevitable description of the universe, even as direct experimental verification remains out of reach.
“A bootstrap is a mathematical construction in which insight into the physical properties of a system can be obtained without having to know its underlying fundamental dynamics,” explains Remmen. “Instead, the bootstrap uses properties like symmetries or other mathematical criteria to construct the physics from the bottom up, ‘effectively pulling itself up by its bootstraps’. In our study, we bootstrapped scattering amplitudes, which describe the quantum probabilities for the interactions of particles or strings.”
Why strings?
String theory posits that the elementary building blocks of the universe are not point-like particles but instead tiny, vibrating strings. The different vibrational modes of these strings give rise to the various particles observed in nature, such as electrons and quarks. This elegant framework resolves many of the mathematical inconsistencies that plague attempts to formulate a quantum description of gravity. Moreover, it unifies gravity with the other fundamental forces: electromagnetic, weak, and strong interactions.
However, a major hurdle remains. The characteristic size of these strings is estimated to be around 10⁻³⁵ m, which is roughly 15 orders of magnitude smaller than the resolution of today’s particle accelerators, including the Large Hadron Collider. This makes experimental verification of string theory extraordinarily challenging, if not impossible, for the foreseeable future.
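A rough estimate shows where the 15 orders of magnitude come from: a collider operating at energy E can resolve length scales of roughly ħc/E, which for the LHC’s 13 TeV collisions gives (as a back-of-the-envelope figure, not one quoted by the researchers)

$$\lambda \sim \frac{\hbar c}{E} \approx \frac{1.97\times 10^{-16}\ \mathrm{GeV\,m}}{1.3\times 10^{4}\ \mathrm{GeV}} \approx 1.5\times 10^{-20}\ \mathrm{m},$$

some 15 powers of ten larger than the estimated string length of 10⁻³⁵ m.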
Faced with the experimental inaccessibility of strings, physicists have turned to theoretical methods like the bootstrap to test whether string theory aligns with fundamental principles. By focusing on the mathematical consistency of scattering amplitudes, the researchers imposed constraints on the amplitudes based on basic quantum mechanical requirements such as locality and unitarity.
“Locality means that forces take time to propagate: particles and fields in one place don’t instantaneously affect another location, since that would violate the rules of cause-and-effect,” says Remmen. “Unitarity is conservation of probability in quantum mechanics: the probability for all possible outcomes must always add up to 100%, and all probabilities are positive. This basic requirement also constrains scattering amplitudes in important ways.”
In addition to these principles, the team introduced further general conditions, such as the existence of an infinite spectrum of fundamental particles and specific high-energy behaviour of the amplitudes. These criteria have long been considered essential for any theory that incorporates quantum gravity.
Unique solution
Their result is a unique solution to the bootstrap equations, which turned out to be the Veneziano amplitude – a formula, first written down in 1968, that describes the scattering of strings. This discovery strongly indicates that string theory meets the most essential criteria for a quantum theory of gravity. However, the definitive answer to whether string theory is truly the “theory of everything” must ultimately come from experimental evidence.
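For the mathematically inclined, the four-point Veneziano amplitude has a compact closed form, quoted here in standard textbook notation rather than taken from the new paper:

$$A(s,t) = \frac{\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))}{\Gamma(-\alpha(s)-\alpha(t))}, \qquad \alpha(x) = \alpha(0) + \alpha' x,$$

where s and t are the Mandelstam variables encoding the collision energy and scattering angle, Γ is the Euler gamma function and α′ sets the string length scale. The infinite sequence of poles of the gamma functions corresponds to the infinite spectrum of states required by the bootstrap conditions described above.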
Cheung explains, “Our work asks: what is the precise math problem whose solution is the scattering amplitude of strings? And is it the unique solution?”. He adds, “This work can’t verify the validity of string theory, which like all questions about nature is a question for experiment to resolve. But it can help illuminate whether the hypothesis that the world is described by vibrating strings is actually logically equivalent to a smaller, perhaps more conservative set of bottom up assumptions that define this math problem.”
The trio’s study opens up several avenues for further exploration. One immediate goal for the researchers is to generalize their analysis to more complex scenarios. For instance, the current work focuses on the scattering of two particles into two others. Future studies will aim to extend the bootstrap approach to processes involving multiple incoming and outgoing particles.
Another direction involves incorporating closed strings, which are loops that are distinct from the open strings analysed in this study. Closed strings are particularly important in string theory because they naturally describe gravitons, the hypothetical particles responsible for mediating gravity. While closed string amplitudes are more mathematically intricate, demonstrating that they too arise uniquely from the bootstrap equations would further bolster the case for string theory.
SPIE Photonics West, the world’s largest photonics technologies event, takes place in San Francisco, California, from 25 to 30 January. Showcasing cutting-edge research in lasers, biomedical optics, biophotonics, quantum technologies, optoelectronics and more, Photonics West features leaders in the field discussing the industry’s challenges and breakthroughs, and sharing their research and visions of the future.
As well as 100 technical conferences with over 5000 presentations, the event brings together several world-class exhibitions, kicking off on 25 January with the BiOS Expo, the world’s largest biomedical optics and biophotonics exhibition.
The main Photonics West Exhibition starts on 28 January. Hosting more than 1200 companies, the event highlights the latest developments in laser technologies, optoelectronics, photonic components, materials and devices, and system support. The newest and fastest growing expo, Quantum West, showcases photonics as an enabling technology for a quantum future. Finally, the co-located AR | VR | MR exhibition features the latest extended reality hardware and systems. Here are some of the innovative products on show at this year’s event.
HydraHarp 500: a new era in time-correlated single-photon counting
Photonics West sees PicoQuant introduce its newest generation of event timer and time-correlated single-photon counting (TCSPC) unit – the HydraHarp 500. Setting a new standard in speed, precision and flexibility, the TCSPC unit is freely scalable with up to 16 independent channels and a common sync channel, which can also serve as an additional detection channel if no sync is required.
At the core of the HydraHarp 500 is its outstanding timing precision and accuracy, enabling precise photon timing measurements at exceptionally high data rates, even in demanding applications.
In addition to the scalable channel configuration, the HydraHarp 500 offers flexible trigger options to support a wide range of detectors, from single-photon avalanche diodes to superconducting nanowire single-photon detectors. Seamless integration is ensured through versatile interfaces such as USB 3.0 or an external FPGA interface for data transfer, while White Rabbit synchronization allows precise cross-device coordination for distributed setups.
The HydraHarp 500 is engineered for high-throughput applications, making it ideal for rapid, large-volume data acquisition. It offers 16+1 fully independent channels for true simultaneous multi-channel data recording and efficient data transfer via USB or the dedicated FPGA interface. Additionally, the HydraHarp 500 boasts industry-leading, extremely low dead-time per channel and no dead-time across channels, ensuring comprehensive datasets for precise statistical analysis.
Step into the future of photonics and quantum research with the HydraHarp 500. Whether it’s achieving precise photon correlation measurements, ensuring reproducible results or integrating advanced setups, the HydraHarp 500 redefines what’s possible – offering precision, flexibility and efficiency combined with reliability and seamless integration to achieve breakthrough results.
Meet PicoQuant at BiOS booth #8511 and Photonics West booth #3511.
SmarAct: shaping the future of precision
SmarAct is set to make waves at the upcoming SPIE Photonics West, the world’s leading exhibition for photonics, biomedical optics and laser technologies, and the parallel BiOS trade fair. SmarAct will showcase a portfolio of cutting-edge solutions designed to redefine precision and performance across a wide range of applications.
At Photonics West, SmarAct will unveil its latest innovations, as well as its well-established and appreciated iris diaphragms and optomechanical systems. All of the highlighted technologies exemplify SmarAct’s commitment to enabling superior control in optical setups, a critical requirement for research and industrial environments.
Attendees can also experience the unparalleled capabilities of electromagnetic positioners and SmarPod systems. With their hexapod-like design, these systems offer nanometre-scale precision and flexibility, making them indispensable tools for complex alignment tasks in photonics and beyond.
One major highlight is SmarAct’s debut of a 3D pick-and-place system designed for handling optical fibres. This state-of-the-art solution integrates precision and flexibility, offering a glimpse into the future of fibre alignment and assembly. Complementing this is a sophisticated gantry system for microassembly of optical components. Designed to handle large travel ranges with remarkable accuracy, this system meets the growing demand for precision in the assembly of intricate optical technologies. It combines the best of SmarAct’s drive technologies, such as fast (up to 1 m/s) and durable electromagnetic positioners and scanner stages based on piezo-driven mechanical flexures with maximum scanning speed and minimum scanning error.
Simultaneously, at the BiOS trade fair SmarAct will spotlight its new electromagnetic microscopy stage, a breakthrough specifically tailored for life sciences applications. This advanced stage delivers exceptional stability and adaptability, enabling researchers to push the boundaries of imaging and experimental precision. This innovation underscores SmarAct’s dedication to addressing the unique challenges faced by the biomedical and life sciences sectors, as well as bioprinting and tissue engineering companies.
Throughout the event, SmarAct’s experts will demonstrate these solutions in action, offering visitors an interactive and hands-on understanding of how these technologies can meet their specific needs. Visit SmarAct’s booths to engage with experts and discover how SmarAct solutions can empower your projects.
Whether you’re advancing research in semiconductors, developing next-generation photonic devices or pioneering breakthroughs in life sciences, SmarAct’s solutions are tailored to help you achieve your goals with unmatched precision and reliability.
Precision positioning systems enable diverse applications
For 25 years Mad City Labs has provided precision instrumentation for research and industry – including nanopositioning systems, micropositioners, microscope stages and platforms, single-molecule microscopes, atomic-force microscopes (AFMs) and customized solutions.
The company’s newest micropositioning system – the MMP-UHV50 – is a modular, linear micropositioner designed for ultrahigh-vacuum (UHV) environments. Constructed entirely from UHV-compatible materials and carefully designed to eliminate sources of virtual leaks, the MMP-UHV50 offers 50 mm travel range with 190 nm step size and a maximum vertical payload of 2 kg.
Uniquely, the MMP-UHV50 incorporates a zero-power feature when not in motion, to minimize heating and drift. Safety features include limit switches and overheat protection – critical features when operating in vacuum environments. The system includes the Micro-Drive-UHV digital electronic controller, supplied with LabVIEW-based software and compatible with user-written software via the supplied DLL file (for example, Python, Matlab or C++).
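To illustrate what “user-written software via the supplied DLL file” typically looks like in practice, here is a minimal Python sketch using ctypes. Every file name and function name below is a hypothetical placeholder rather than the actual Mad City Labs API – the real call names are documented with the controller.

```python
import ctypes

# Hypothetical sketch of driving a micropositioner from Python via a vendor DLL.
# The DLL name and every function name below are placeholders, NOT the real
# Mad City Labs API; consult the documentation supplied with the controller.

lib = ctypes.CDLL("MicroDriveUHV.dll")          # assumed library name

handle = ctypes.c_int()
lib.MDUHV_Connect(ctypes.byref(handle))         # hypothetical: open a connection

# hypothetical: move axis 1 by 1.0 mm relative to the current position
lib.MDUHV_MoveRelative(handle, ctypes.c_int(1), ctypes.c_double(1.0))

lib.MDUHV_Disconnect(handle)                    # hypothetical: close the connection
```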
Other products from Mad City Labs include piezo nanopositioners featuring the company’s proprietary PicoQ sensors, which provide ultralow noise and excellent stability to yield sub-nanometre resolution. These high-performance sensors enable motion control down to the single picometre level.
For scanning probe microscopy, Mad City Labs’ nanopositioning systems provide true decoupled motion with virtually undetectable out-of-plane movement, while their precision and stability yield high positioning performance and control. The company offers both an optical deflection AFM – the MadAFM, a multimodal sample-scanning AFM in a compact, tabletop design intended for simple installation – and resonant probe AFM models.
The resonant probe products include the company’s AFM controllers, MadPLL and QS-PLL, which enable users to build their own flexibly configured AFMs using Mad City Labs’ micro- and nanopositioners. All AFM instruments are ideal for material characterization, but the resonant probe AFMs are uniquely suitable for quantum sensing and nano-magnetometry applications.
Mad City Labs also offers standalone micropositioning products, including optical microscope stages, compact positioners for photonics and the Mad-Deck XYZ stage platform, all of which employ proprietary intelligent control to optimize stability and precision. They are also compatible with the high-resolution nanopositioning systems, enabling motion control across micro-to-picometre length scales.
Finally, for high-end microscopy applications, the RM21 single-molecule microscope, featuring the unique MicroMirror TIRF system, offers multi-colour total internal-reflection fluorescence microscopy with an excellent signal-to-noise ratio and efficient data collection, along with an array of options to support multiple single-molecule techniques.
This product portfolio, coupled with the company’s expertise in custom design and manufacturing, ensures that Mad City Labs can provide solutions for nanoscale motion in diverse applications such as astronomy, photonics, metrology and quantum sensing.
Learn more at BiOS booth #8525 and Photonics West booth #3525.
Incoming US President Donald Trump has selected Silicon Valley executive Michael Kratsios as director of the Office of Science and Technology Policy (OSTP). Kratsios will also serve as Trump’s science advisor, a position that, unlike the OSTP directorship, does not require approval by the US Senate. Meanwhile, computer scientist Lynne Parker from the University of Tennessee, Knoxville, has been appointed to a new position – executive director of the President’s Council of Advisors on Science and Technology. Parker, who is a former member of OSTP, will also act as counsellor to the OSTP director.
Kratsios, with a BA in politics from Princeton University, was previously chief of staff to Silicon Valley venture capitalist Peter Thiel before becoming the White House’s chief technology officer in 2017 at the start of Trump’s first stint as US president. In addition to his technology remit, Kratsios was effectively Trump’s science advisor until meteorologist Kelvin Droegemeier took that position in January 2019. Kratsios then became the Department of Defense’s acting undersecretary of research and engineering. After the 2020 presidential election, Kratsios left government to run the San Francisco-based company Scale AI.
Parker has an MS from the University of Tennessee and a PhD from the Massachusetts Institute of Technology, both in computer science. She was founding director of the University of Tennessee’s AI Tennessee Initiative before spending four years as a member of OSTP, bridging the first Trump and Biden administrations. There, she served as deputy chief technology officer and was the inaugural director of OSTP’s National Artificial Intelligence Initiative Office.
Unlike some other Trump nominations, the appointments have been positively received by the science community. “APLU is enthusiastic that President-elect Trump has selected two individuals who recognize the importance of science to national competitiveness, health, and economic growth,” noted the Association of Public and Land-grant Universities (APLU) – a membership organisation of public research universities – in a statement. Analysts expect the nominations to reflect the returning president’s interest in pursuing AI, which could indicate a move towards technology over scientific research in the coming four years.
Bill Nelson – NASA’s departing administrator – has handed over a decision about when to retrieve samples from Mars to potential successor Jared Isaacman. In the wake of huge cost increases and long delays in the schedule for bringing back samples collected by the rover Perseverance, NASA had said last year that it would develop a fresh plan for the “Mars Sample Return” mission. Nelson now says the agency had two lower-cost plans in mind – but that a choice will not be made until mid-2026. One plan would use a sky crane system resembling that which delivered Perseverance to the Martian surface, while the other would require a commercially produced “heavy lift lander” to pick up samples. Each option could cost up to $7.5 bn – much less than the rejected plan’s $11 bn.
Lia Merminga has resigned as director of Fermilab – the US’s premier particle-physics lab. She stepped down yesterday after a turbulent year that saw staff layoffs, a change in the lab’s management contractor and accusations of a toxic atmosphere. Merminga is being replaced by Young-Kee Kim from the University of Chicago, who will serve as interim director until a permanent successor is found. Kim was previously Fermilab’s deputy director between 2006 and 2013.
Tracy Marc, a spokesperson for Fermilab, says that the search for Merminga’s successor has already begun, although without a specific schedule. “Input from Fermilab employees is highly valued and we expect to have Fermilab employee representatives as advisory members on the search committee, just as has been done in the past,” Marc told Physics World. “The search committee will keep the Fermilab community informed about the progress of this search.”
The departure of Merminga, who became Fermilab director in August 2022, was announced by Paul Alivisatos, president of the University of Chicago. The university jointly manages the lab with Universities Research Association (URA), a consortium of research universities, as well as the industrial firms Amentum Environment & Energy, Inc. and Longenecker & Associates.
“Her dedication and passion for high-energy physics and Fermilab’s mission have been deeply appreciated,” Alivisatos said in a statement. “This leadership change will bring fresh perspectives and expertise to the Fermilab leadership team.”
Turbulent times
The reasons for Merminga’s resignation are unclear but Fermilab has experienced a difficult last two years with questions raised about its internal management and external oversight. Last August, a group of anonymous self-styled whistleblowers published a 113-page “white paper” on the arXiv preprint server, asserting that the lab was “doomed without a management overhaul”.
The document highlighted issues such as management cover ups of dangerous behaviour including guns being brought onto Fermilab’s campus and a male employee’s attack on a female colleague. In addition, key experiments such as the Deep Underground Neutrino Experiment suffered notable delays. Cost overruns also led to a “limited operations period” with most staff on leave in late August.
In October, the US Department of Energy, which oversees Fermilab, announced a new organization – Fermi Forward Discovery Group – to manage the lab. Yet that decision came under scrutiny given that the group is dominated by the University of Chicago and URA, which had already been part of the lab’s management since 2007. Then, a month later, almost 2.5% of Fermilab’s employees were laid off, adding to the picture of an institution in crisis.
The whistleblowers, who told Physics World that they still stand by their analysis of the lab’s issues, say that the layoffs “undermined Fermilab’s scientific mission” and claim that they sidelined “some of its most accomplished” researchers. “Meanwhile, executive managers, insulated by high salaries and direct oversight responsibilities, remained unaffected,” they allege.
Born in Greece, Merminga, 65, earned a BSc in physics from the University of Athens before moving to the University of Michigan where she completed an MS and PhD in physics. Before taking on Fermilab’s directorship, she held leadership posts in governmental physics-related institutions in the US and Canada.
CERN’s ALICE Collaboration has found the first evidence for antihyperhelium-4, which is an antimatter hypernucleus that is a heavier version of antihelium-4. It contains two antiprotons, an antineutron and an antilambda baryon. The latter contains three antiquarks (up, down and strange – making it an antihyperon), and is electrically neutral like a neutron. The antihyperhelium-4 was created by smashing lead nuclei together at the Large Hadron Collider (LHC) in Switzerland and the observation has a statistical significance of 3.5σ. While this is below the 5σ level that is generally accepted as a discovery in particle physics, the observation is in line with the Standard Model of particle physics. The detection therefore helps constrain theories beyond the Standard Model that try to explain why the universe contains much more matter than antimatter.
Hypernuclei are rare, short-lived atomic nuclei made up of protons, neutrons, and at least one hyperon. Hypernuclei and their antimatter counterparts can be formed within a quark–gluon plasma (QGP), which is created when heavy ions such as lead collide at high energies. A QGP is an extreme state of matter that also existed in the first millionth of a second following the Big Bang.
Exotic antinuclei
Just a few hundred picoseconds after being formed in collisions, antihypernuclei will decay via the weak force – creating two or more distinctive decay products that can be detected. The first antihypernucleus to be observed was a form of antihyperhydrogen called antihypertriton, which contains an antiproton, an antineutron and an antilambda hyperon. It was discovered in 2010 by the STAR Collaboration, which smashed together gold nuclei at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC).
Then in 2024, the STAR Collaboration reported the first observations of the decay products of antihyperhydrogen-4, which contains one more antineutron than antihypertriton.
Now, ALICE physicists have delved deeper into the world of antihypernuclei by doing a fresh analysis of data taken at the LHC in 2018, when lead ions were collided at 5 TeV.
Using a machine learning technique to analyse the decay products of the nuclei produced in these collisions, the ALICE team identified the same signature of antihyperhydrogen-4 detected by the STAR Collaboration. This is the first time an antimatter hypernucleus has been detected at the LHC.
Rapid decay
But that is not all. The team also found evidence for another, slightly lighter antihypernucleus, called antihyperhelium-4. This contains two antiprotons, an antineutron, and an antihyperon. It decays almost instantly into an antihelium-3 nucleus, an antiproton, and a charged pion. The latter is a meson comprising a quark–antiquark pair.
Physicists describe production of hypernuclei in a QGP using the statistical hadronization model (SHM). For both antihyperhydrogen-4 and antihyperhelium-4, the masses and production yields measured by the ALICE team closely matched the predictions of the SHM – assuming that the particles were produced in a certain mixture of their excited and ground states.
The team’s result further confirms that the SHM can accurately describe the production of hypernuclei and antihypernuclei from a QGP. The researchers also found that equal numbers of hypernuclei and antihypernuclei are produced in the collisions, within experimental uncertainty. While this provides no explanation as to why there is much more matter than antimatter in the observable universe, the research allows physicists to put further constraints on theories that reach beyond the Standard Model of particle physics to try to explain this asymmetry.
The research could also pave the way for further studies into how hyperons within hypernuclei interact with their neighbouring protons and neutrons. With a deeper knowledge of these interactions, astronomers could gain new insights into the mysterious interior properties of neutron stars.
The observation is described in a paper that has been submitted to Physical Review Letters.
The Electrochemical Society (ECS) is an international non-profit scholarly organization that promotes research, education and technological innovation in electrochemistry, solid-state science and related fields.
Founded in 1902, the ECS brings together scientists and engineers to share knowledge and advance electrochemical technologies.
As part of that mission, the society publishes several journals including the flagship Journal of the Electrochemical Society (JES), which is over 120 years old and covers a wide range of topics in electrochemical science and engineering.
Someone who has seen their involvement with the ECS and ECS journals increase over their career is chemist Trisha Andrew from the University of Massachusetts Amherst. She directs the wearable electronics lab, a multi-disciplinary research team that produces garment-integrated technologies using reactive vapor deposition.
Her involvement with the ECS began when she was invited by the editor-in-chief of ECS Sensors Plus to act as a referee for the journal. Andrew found the depth and practical application of the papers she reviewed interesting and of high quality. This resulted in her submitting her own work to ECS journals and she later became an associate editor for both ECS Sensors Plus and JES.
Professional opportunities
Physical chemist Weiran Zheng from the Guangdong Technion-Israel Institute of Technology in China, meanwhile, says that due to the reputation of ECS journals, they have been his “go-to” place to publish since graduate school.
One of his papers, entitled “Python for electrochemistry: a free and all-in-one toolset” (ECS Adv. 2 040502), has been downloaded over 8000 times and is currently the most-read ECS Advances article. This led to an invitation to deliver an ECS webinar – Introducing Python for Electrochemistry Research. “I never expected such an impact when the paper was accepted, and none of this would be possible without the platform offered by ECS journals,” adds Zheng.
Publishing in ECS journals has helped Zheng’s career advance through new connections and greater involvement with ECS activities. This has boosted not only his research but also his professional network, and given these benefits Zheng plans to continue publishing his latest findings in ECS journals.
Highly cited papers
Battery researcher Thierry Brousse from Nantes University in France came to electrochemistry later in his career, having first carried out a PhD in high-temperature superconducting thin films at the University of Caen Normandy.
When he began working in the field, he collaborated with the chemist Donald Schleich from Polytech Nantes, who was an ECS member. It was then that he began to read JES, finding it a prestigious platform for his research on supercapacitors and microdevices for energy storage. “Most of the inspiring scientific papers I was reading at that time were from JES,” notes Brousse. “Naturally, my first papers were then submitted to this journal.”
Brousse says that publishing in ECS journals has provided him with new collaborations as well as invitations to speak at major conferences. He emphasizes the importance of innovative work and the positive impact of publishing in ECS journals where some of his most cited work has been published.
Brousse, who is an associate editor for JES, adds that he particularly values how publishing with ECS journals fosters a quick integration into specific research communities. This, he says, has been instrumental in advancing his career.
Long-standing relationships
Robert Savinell’s relationship with the ECS and its journals began during his PhD research in electrochemistry, which he carried out at the University of Pittsburgh. Now at Case Western Reserve University in Cleveland, Ohio, Savinell focuses on developing a flow battery for low-cost, long-duration energy storage, primarily using iron and water. It is designed to improve the efficiency of the power grid and accelerate the addition of solar and wind power supplies.
Savinell also leads a Department of Energy-funded Energy Frontier Research Center on Breakthrough Electrolytes for Energy Storage. The center focuses on fundamental research on nano- to mesoscale structured electrolytes for energy storage.
ECS journals have been a cornerstone of his professional career, providing a platform for his research and fostering valuable professional connections. “Some of my research published in JES many years ago are still cited today,” says Savinell.
Savinell’s contributions to the ECS community have been recognized through various roles: he has been elected a fellow of the ECS and has previously served as chair of the society’s electrolytic and electrochemical engineering division. He was editor-in-chief of JES for the past decade and was most recently elected third vice president of the ECS.
Savinell says that the connections he has made through ECS have been significant, ranging from funding programme managers to personal friends. “My whole professional career has been focused around ECS,” he says, adding that he aims to continue to publish in ECS journals and hopes that his work will inspire solutions to some of society’s biggest problems.
Personal touch
For many researchers in the field, publishing in ECS journals has brought with it several benefits, including the high level of engagement and personal touch within the ECS community, as well as the promotional support that ECS provides for published work.
The broad portfolio of ECS journals also ensures that researchers’ work reaches the right audience, and such visibility and engagement are significant factors when it comes to advancing scientists’ careers. “The difference between ECS journals is the amount of engagement, views and reception that you receive,” says Andrew. “That’s what I found to be the most unique.”
An international team of researchers has developed new analytical techniques that consider interactions between three or more regions of the brain – providing a more in-depth understanding of human brain activity than conventional analysis. Led by Andrea Santoro at the Neuro-X Institute in Geneva and Enrico Amico at the UK’s University of Birmingham, the team hopes its results could help neurologists identify a vast array of new patterns in human brain data.
To study the structure and function of the brain, researchers often rely on network models. In these, nodes represent specific groups of neurons in the brain and edges represent the connections between them, often inferred from statistical correlations in their activity.
Within these models, brain activity has often been represented as pairwise interactions between two specific regions. Yet as the latest advances in neurology have clearly shown, the real picture is far more complex.
“To better analyse how our brains work, we need to look at how several areas interact at the same time,” Santoro explains. “Just as multiple weather factors – like temperature, humidity, and atmospheric pressure – combine to create complex patterns, looking at how groups of brain regions work together can reveal a richer picture of brain function.”
Higher-order interactions
Yet with the mathematical techniques applied in previous studies, researchers have not confirmed whether network models incorporating these higher-order interactions between three or more brain regions could really be more accurate than simpler models, which only account for pairwise interactions.
To shed new light on this question, Santoro’s team built upon their previous analysis of functional MRI (fMRI) data, which identify brain activity by measuring changes in blood flow.
Their approach combined two powerful tools. One is topological data analysis. This identifies patterns within complex datasets like fMRI, where each data point depends on a large number of interconnected variables. The other is time series analysis, which is used to identify patterns in brain activity which emerge over time. Together, these tools allowed the researchers to identify complex patterns of activity occurring across three or more brain regions simultaneously.
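As a toy illustration of what “higher-order” means in this context – and emphatically not the authors’ actual topological pipeline – the Python sketch below computes a simple three-region co-fluctuation signal from z-scored time series: the product of three z-scores is large only at moments when all three regions fluctuate together.

```python
import numpy as np
from itertools import combinations

# Toy illustration of a three-region "co-fluctuation" signal. This is NOT the
# topological-data-analysis pipeline used in the study; it only shows what a
# higher-order (beyond pairwise) interaction measure can look like.

rng = np.random.default_rng(0)
n_regions, n_timepoints = 5, 200
signals = rng.standard_normal((n_regions, n_timepoints))   # stand-in for fMRI data

# z-score each region's time series
z = (signals - signals.mean(axis=1, keepdims=True)) / signals.std(axis=1, keepdims=True)

# product of z-scores for every triplet of regions, at every time point:
# large values mean all three regions fluctuate together at that moment
triplets = {
    (i, j, k): z[i] * z[j] * z[k]
    for i, j, k in combinations(range(n_regions), 3)
}

# average co-fluctuation strength per triplet
for regions, ts in triplets.items():
    print(regions, round(float(ts.mean()), 3))
```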
To test their approach, the team applied it to fMRI data taken from 100 healthy participants in the Human Connectome Project. “By applying these tools to brain scan data, we were able to detect when multiple regions of the brain were interacting at the same time, rather than only looking at pairs of brain regions,” Santoro explains. “This approach let us uncover patterns that might otherwise stay hidden, giving us a clearer view of how the brain’s complex network operates as a whole.”
Just as they hoped, this analysis of higher-order interactions provided far deeper insights into the participants’ brain activity compared with traditional pairwise methods. “Specifically, we were better able to figure out what type of task a person was performing, and even uniquely identify them based on the patterns of their brain activity,” Santoro continues.
Distinguishing between tasks
With its combination of topological and time series analysis, the team’s method could distinguish between a wide variety of tasks performed by the participants, including their expression of emotion, use of language and social interactions.
By building further on their approach, Santoro and colleagues are hopeful it could eventually be used to uncover a vast space of as-yet unexplored patterns within human brain data.
By tailoring the approach to the brains of individual patients, this could ultimately enable researchers to draw direct links between brain activity and physical actions.
“Down the road, the same approach might help us detect subtle brain changes that occur in conditions like Alzheimer’s disease – possibly before symptoms become obvious – and could guide better therapies and earlier interventions,” Santoro predicts.
This webinar will detail recent efforts in proton exchange membrane-based low-temperature electrolysis degradation, focused on losses due to simulated start-stop operation and anode catalyst layer redox transitions. Ex situ testing indicated that repeated redox cycling accelerates catalyst dissolution, due to near-surface reduction and the higher dissolution kinetics of metals when cycling to high potentials. Similar results occurred in situ, where a large decrease in cell kinetics was found, along with iridium migrating from the anode catalyst layer into the membrane. Additional processes were observed, however, including changes in catalyst oxidation, the formation of thinner and denser catalyst layers, and platinum migration from the transport layer coating. Complicating factors, including the loss of water flow and temperature control, were also evaluated; these produced a higher rate of interfacial tearing and delamination. Current efforts are focused on bridging these studies into a more relevant field test and include evaluating possible differences in catalyst reduction through an electrochemical process versus hydrogen exposure, either direct or through crossover. These studies seek to identify degradation mechanisms and voltage-loss acceleration, and to demonstrate the impact of operational stops on electrolyzer lifetime.
An interactive Q&A session follows the presentation.
Shaun Alia has worked in several areas related to electrochemical energy conversion and storage, including proton and anion exchange membrane-based electrolyzers and fuel cells, direct methanol fuel cells, capacitors, and batteries. His current research involves understanding electrochemical and degradation processes, component development, and materials integration and optimization. Within HydroGEN, part of the US Department of Energy’s Energy Materials Network, Alia has been involved in low-temperature electrolysis through NREL capabilities in materials development and ex situ and in situ characterization. He is also active in in situ durability, diagnostics and accelerated stress-test development for H2@Scale and H2NEW.
Novel landmine detectors based on nuclear magnetic resonance (NMR) have passed their first field-trial tests. Built by the Sydney-based company mRead, the devices could speed up the removal of explosives in former war zones. The company tested its prototype detectors in Angola late last year, finding that they could reliably sense explosives buried up to 15 cm underground — the typical depth of a deployed landmine.
Landmines are a problem in many countries recovering from armed conflict. According to NATO, some 110 million landmines are located in 70 countries worldwide, including Cambodia and Bosnia, despite conflicts in both nations having ended decades ago. Ukraine is currently the world’s most mine-infested country, with vast swathes of its agricultural land potentially unusable for decades.
Such landmines also continue to kill innocent civilians. According to the Landmine and Cluster Munition Monitor, nearly 2000 people died from landmine incidents in 2023 – double the number compared to 2022 – and a further 3660 were injured. Over 80% of the casualties were civilians, with children accounting for 37% of deaths.
Humanitarian “deminers”, who are trying to remove these explosives, currently inspect suspected minefields with hand-held metal detectors. These devices use magnetic induction coils that respond to the metal components present in landmines. Unfortunately, they react to every random piece of metal and shrapnel in the soil, leading to high rates of false positives.
“It’s not unreasonable with a metal detector to see 100 false alarms for every mine that you clear,” says Matthew Abercrombie, research and development officer at the HALO Trust, a de-mining charity. “Each of these false alarms, you still have to investigate as if it were a mine.” But for every mine excavated, about 50 hours is wasted on excavating false positives, meaning that clearing a single minefield could take months or years.
“Landmines make time stand still,” adds HALO Trust research officer Ronan Shenhav. “They can lie silent and invisible in the ground for decades. Once disturbed they kill and maim civilians, as well as valuable livestock, preventing access to schools, roads, and prime agricultural land.”
Hope for the future
One alternative landmine-detection technology is NMR, which is already widely used to look for underground mineral resources and scan for drugs at airports. In NMR, nuclei inside atoms emit a weak electromagnetic signal in the presence of a strong constant magnetic field and a weak oscillating field. As the frequency of the signal depends on the molecule’s structure, every chemical compound has a specific electromagnetic fingerprint.
The problem with using it to sniff out landmines is pervasive environmental radio noise: the electromagnetic signal emitted by the excited molecules is 16 orders of magnitude weaker than the one used to trigger the effect. Digital radio transmissions, electricity generators and industrial infrastructure all produce noise at the same frequencies the detectors are listening for. Even thunderstorms create a radio hum that can spread across vast distances.
“It’s easier to listen to the Big Bang at the edge of the Universe,” says Nick Cutmore, chief technology officer at mRead. “Because the signal is so small, every interference stops you. That stopped a lot of practical applications of this technique in the past.” Cutmore is part of a team that has been trying to cut the effects of noise since the early 2000s, eventually finding a way to filter out this persistent crackle through a proprietary sensor design.
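mRead’s noise-rejection scheme is proprietary, so the sketch below is only a generic illustration of the underlying battle: pulling a weak, repeatable signal out of random noise by averaging many repeated records. All the numbers (sample rate, resonance frequency, pulse count) are illustrative assumptions, not mRead’s specifications. Averaging N records suppresses uncorrelated noise by roughly √N, which is also why structured interference at the very frequency being listened for is so much harder to remove.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative numbers only: a weak sinusoidal "resonance" buried in noise.
fs = 50_000                      # sample rate in Hz (assumed)
f_signal = 3_400                 # resonance frequency in Hz (assumed)
t = np.arange(0, 0.005, 1 / fs)  # 5 ms record

signal = 1e-3 * np.sin(2 * np.pi * f_signal * t)  # amplitude far below the noise
noise_std = 1.0                                    # single-shot noise level

# Repeat the excitation many times and average the records: random noise
# shrinks roughly as 1/sqrt(N), while the coherent signal survives.
n_repeats = 10_000
records = signal + rng.normal(0.0, noise_std, size=(n_repeats, t.size))
averaged = records.mean(axis=0)

residual_noise = (averaged - signal).std()
print(f"noise per shot: {noise_std:.2f}, after averaging: {residual_noise:.3f}")
# roughly: noise per shot: 1.00, after averaging: 0.010
```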
mRead’s handheld detectors emit radio pulses at frequencies between 0.5 and 5 MHz, much higher than the kilohertz-range frequencies used by conventional metal detectors. The signal elicits a magnetic resonance response in atoms of sodium, potassium and chlorine, which are commonly found in explosives. A sensor inside the detector “listens out” for the particular fingerprint signal, locating a forgotten mine more precisely than is possible with conventional metal detectors.
Given that the detected signal is so small, it has to be amplified, but amplification adds noise of its own. The company says it has found a way to make sure the electronics in the detector do not exacerbate the problem. “Our current handheld system only consumes 40 to 50 W when operating,” says Cutmore. “Previous systems have sometimes operated at a few kilowatts, making them power-hungry and bulky.”
Having tested the prototype detectors in a simulated minefield in Australia in August 2024, mRead engineers have now deployed them in minefields in Angola in cooperation with the HALO Trust. As the detectors respond directly to the explosive substance, they almost completely eliminated false positives, allowing deminers to double-check locations flagged by metal detectors before time-consuming digging took place.
During the three-week trial, the researchers also detected mines with a low metal content, which are difficult to spot with metal detectors. “Instead of doing 1000 metal detections and finding one mine, we can isolate those detections very quickly before people start digging,” says Cutmore.
Researchers at mRead plan to return to Angola later this year for further tests. They also want to finetune their prototypes and begin working on devices that could be produced commercially. “I am tremendously excited by the results of these trials,” says James Cowan, chief executive officer of the HALO Trust. “With over two million landmines laid in Ukraine since 2022, landmine clearance needs to be faster, safer, and smarter.”
Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.
Today, the world’s most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye’s group at JILA in Boulder, Colorado. These are accurate to within a second in the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.
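To put that figure in context, one second over the roughly 13.8-billion-year age of the universe (an assumed round number, not a figure from the research) corresponds to a fractional uncertainty of about two parts in 10¹⁸:

```python
# Back-of-the-envelope: fractional accuracy implied by "a second in the age of the universe".
seconds_per_year = 365.25 * 24 * 3600
age_of_universe = 13.8e9 * seconds_per_year   # ~4.4e17 s
print(f"{1 / age_of_universe:.1e}")           # ~2.3e-18
```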
Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately-trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.
Sensitive to new physics
Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.
Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
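As a quick sanity check (a back-of-the-envelope estimate, not a figure from the research itself), the wavelength of a photon carrying 8.4 eV follows from λ = hc/E and does indeed land in the vacuum ultraviolet:

```python
# Convert the 8.4 eV thorium-229 clock transition into a photon wavelength.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

wavelength = h * c / (8.4 * eV)      # lambda = hc/E
print(f"{wavelength * 1e9:.0f} nm")  # ~148 nm, in the vacuum ultraviolet
```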
That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.
In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”
Layers of thorium fluoride
In the new work, the researchers collaborated with colleagues in Eric Hudson’s group at the University of California, Los Angeles, and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, a well-established technique in which a material is evaporated from a hot crucible and condensed onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but a comparable number of thorium atoms per unit area.
The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than observed in the crystal. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.
More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.
Flambaum, who was not involved in the research, says that it marks “a very significant technical advance” in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine-structure constant. “As a standard of frequency a solid-state clock is not very good because it’s affected by the environment,” he says. “As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet.”
As I write this [and don’t tell the Physics World editors, please] I’m half-watching out of the corner of my eye the quirky French-made, video-game spin-off series Rabbids Invasion. The mad and moronic bunnies (or, in a nod to the original French, Les Lapins Crétins) are currently making another attempt to reach the Moon – a recurring yet never-explained motif in the cartoon – by stacking up a vast pile of junk; charming chaos ensues.
As explained in LUNAR: a History of the Moon in Myths, Maps + Matter – the exquisite new Thames & Hudson book that presents the stunning Apollo-era Lunar Atlas alongside a collection of charming essays – madness has long been associated with the Moon. One suspects there was a good kind of mania behind the drawing up of the Lunar Atlas, a series of geological maps plotting the rock formations on the Moon’s surface that are as much art as they are a visualization of data. And having drooled over LUNAR, truly the crème de la crème of coffee-table books, one cannot help but become a little mad for the Moon too.
Many faces of the Moon
As well as an exploration of the Moon’s connections (both etymological and philosophical) to lunacy by science writer Kate Golembiewski, the varied and captivating essays of 20 authors collected in LUNAR run the gamut from the Moon’s role in ancient times (did you know that the Greeks believed that the souls of the dead gather around the Moon?) through to natural philosophy, eclipses, the space race and the Artemis Programme. My favourite essays were the more off-beat ones: the Moon in silent cinema, for example, or its fascinating influence on “cartes de visite”, the short-lived 19th-century miniature images whose popularity was boosted by Queen Victoria and Prince Albert. (I, for one, am now quite resolved to have my portrait taken with a giant, stylized, crescent moon prop.)
The pulse of LUNAR, however, is the breathtaking reproductions of all 44 of the exquisitely hand-drawn 1:1,000,000 scale maps – or “quadrangles” – that make up the US Geological Survey (USGS)/NASA Lunar Atlas (see header image).
Drawn up between 1962 and 1974 by a team of 24 cartographers, illustrators, geographers and geologists, the astonishing Lunar Atlas captures the entirety of the Moon’s near side, every crater and lava-filled mare (“sea”), every terra (highland) and volcanic dome. The work began as a way to guide the robotic and human exploration of the Moon’s surface and was soon augmented with images and rock samples from the missions themselves.
One would be hard-pushed to sum it up better than the American science writer Dava Sobel, who pens the book’s foreword: “I’ve been to the Moon, of course. Everyone has, at least vicariously, visited its stark landscapes, driven over its unmarked roads. Even so, I’ve never seen the Moon quite the way it appears here – a black-and-white world rendered in a riot of gorgeous colours.”
Many moons ago
Having been trained in geology, the sections of the book covering the history of the Lunar Atlas piqued my particular interest. The Lunar Atlas was not the first attempt to map the surface of the Moon; one of the reproductions in the book shows an earlier effort from 1961 drawn up by USGS geologists Robert Hackman and Eugene Shoemaker.
Hackman and Shoemaker’s map shows the Moon’s Copernicus region, named after its central crater, which in turn honours the Renaissance-era Polish polymath Nicolaus Copernicus. It served as the first demonstration that the geological principles of stratigraphy (the study of rock layers) as developed on the Earth could also be applied to other bodies. The duo started with the law of superposition; this is the principle that when one finds multiple layers of rock, unless they have been substantially deformed, the older layer will be at the bottom and the youngest at the top.
“The chronology of the Moon’s geologic history is one of violent alteration,” explains science historian Matthew Shindell in LUNAR’s second essay. “What [Hackman and Shoemaker] saw around Copernicus were multiple overlapping layers, including the lava plains of the maria […], craters displaying varying degrees of degradations, and materials and features related to the explosive impacts that had created the craters.”
From these the pair developed a basic geological timeline, unpicking the recent history of the Moon one overlapping feature at a time. They identified five eras, with the Copernican, named after the crater and beginning 1.1 billion years ago, being the most recent.
Considering it was based on observations of just one small region of the Moon, their timescale was remarkably accurate, Shindell explains, although subsequent observations have redefined its stratigraphic units – for example by adding the Pre-Nectarian as the earliest era (predating the formation of Nectaris, the oldest basin), whose rocks can still be found broken up and mixed into the lunar highlands.
Accordingly, the different quadrangles of the atlas very much represent an evolving work, developing as lunar exploration progressed. Later maps tended to be more detailed, reflecting a more nuanced understanding of the Moon’s geological history.
New moon
Parts of the Lunar Atlas have recently found new life in the development of the first-ever complete map of the lunar surface, the “Unified Geologic Map of the Moon”. The new digital map combines the Apollo-era data with that from more recent satellite missions, including the Japan Aerospace Exploration Agency (JAXA)’s SELENE orbiter.
As former USGS Director and NASA astronaut Jim Reilly said when the unified map was first published back in 2020: “People have always been fascinated by the Moon and when we might return. So, it’s wonderful to see USGS create a resource that can help NASA with their planning for future missions.”
I might not be planning a Moon mission (whether by rocket or teetering tower of clutter), but I am planning to give the stunning LUNAR pride of place on my coffee table next time I have guests over – that’s how much it’s left me, ahem, “over the Moon”.
An international team of physicists has used the principle of entanglement entropy to examine how particles are produced in high-energy electron–proton collisions. Led by Kong Tu at Brookhaven National Laboratory in the US, the researchers showed that quarks and gluons in protons are deeply entangled and approach a state of maximum entanglement when they take part in high-energy collisions.
While particle physicists have made significant progress in understanding the inner structures of protons, neutrons, and other hadrons, there is still much to learn. Quantum chromodynamics (QCD) says that the proton and other hadrons comprise quarks, which are tightly bound together via exchanges of gluons – mediators of the strong force. However, using QCD to calculate the properties of hadrons is notoriously difficult except under certain special circumstances.
Calculations can be simplified by describing the quarks and gluons as partons in a model that was developed in the late 1960s by James Bjorken, Richard Feynman, Vladimir Gribov and others. “Here, all the partons within a proton appear ‘frozen’ when the proton is moving very fast relative to an observer, such as in high-energy particle colliders,” explains Tu.
Dynamic and deeply complex interactions
While the parton model is useful for interpreting the results of particle collisions, it cannot fully capture the dynamic and deeply complex interactions between quarks and gluons within protons and other hadrons. These interactions are quantum in nature and therefore involve entanglement. This is a purely quantum phenomenon whereby a group of particles can be more highly correlated than is possible in classical physics.
“To analyse this concept of entanglement, we utilize a tool from quantum information science named entanglement entropy, which quantifies the degree of entanglement within a system,” Tu explains.
In physics, entropy is used to quantify the degree of randomness and disorder in a system. However, it can also be used in information theory to measure the degree of uncertainty within a set of possible outcomes.
“In terms of information theory, entropy measures the minimum amount of information required to describe a system,” Tu says. “The higher the entropy, the more information is needed to describe the system, meaning there is more uncertainty in the system. This provides a dynamic picture of a complex proton structure at high energy.”
Deeply entangled
In this context, particles in a system with high entanglement entropy will be deeply entangled – whereas those in a system with low entanglement entropy will be mostly uncorrelated.
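The quantity itself can be illustrated with a toy example that has nothing to do with protons (all names below are illustrative): for a pure two-qubit state, the entanglement entropy of one qubit is zero for a product state and reaches its maximum value, ln 2, for a maximally entangled Bell state. The sketch computes it from the Schmidt coefficients obtained via a singular-value decomposition.

```python
import numpy as np

def entanglement_entropy(psi):
    """Von Neumann entropy of one qubit of a two-qubit pure state."""
    # Reshape the 4-component state into a 2x2 matrix and take the SVD;
    # the squared singular values are the Schmidt coefficients.
    coeffs = np.linalg.svd(psi.reshape(2, 2), compute_uv=False) ** 2
    coeffs = coeffs[coeffs > 1e-12]              # drop numerically zero terms
    return -np.sum(coeffs * np.log(coeffs))

product = np.array([1, 0, 0, 0], dtype=float)             # |00>, no entanglement
bell = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

print(entanglement_entropy(product))  # 0.0   -> uncorrelated
print(entanglement_entropy(bell))     # ~0.693 = ln 2 -> maximally entangled
```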
In recent studies, entanglement entropy has been used to describe how hadrons are produced through deep inelastic scattering interactions – such as when an electron or neutrino collides with a hadron at high energy. However, the evolution with energy of entanglement entropy within protons had gone largely unexplored. “Before we did this work, no one had looked at entanglement inside of a proton in experimental high-energy collision data,” says Tu.
Now, Tu’s team has investigated how entanglement entropy varies with the speed of the proton – and how this relationship relates to the hadrons created during inelastic collisions.
Matching experimental data
Their study revealed that the equations of QCD can accurately predict the evolution of entanglement entropy, with their results closely matching experimental collision data. Perhaps most strikingly, they found that as this entanglement entropy increases at high energies, it may approach a state of maximum entanglement under certain conditions. This high degree of entropy is evident in the large numbers of particles produced in electron–proton collisions.
The researchers are now confident that their approach could lead to further insights about QCD. “This method serves as a powerful tool for studying not only the structure of the proton, but also those of the nucleons within atomic nuclei,” Tu explains. “It is particularly useful for investigating the underlying mechanisms by which nucleons are modified in the nuclear environment.”
In the future, Tu and colleagues hope that their model could boost our understanding of processes such as the formation and fragmentation of hadrons within the high-energy jets created in particle collisions, and the resulting shift in parton distributions within atomic nuclei. Ultimately, this could lead to a fresh perspective on the inner workings of QCD.
Each year, the International Association of Physics Students organizes a physics competition for bachelor’s and master’s students from across the world. Known as the Physics League Across Numerous Countries for Kick-ass Students (PLANCKS), it’s a three-day event where teams of three to four students compete to answer challenging physics questions.
In the UK and Ireland, teams compete in a preliminary competition to be sent to the final. Here are some fiendish questions from past PLANCKS UK and Ireland preliminaries and the 2024 final in Dublin, written by Anthony Quinlan and Sam Carr, for you to try this holiday season.
Question 1: 4D Sun
Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]
Boltzmann constant, kB = 1.38 × 10⁻²³ J K⁻¹
Speed of light, c = 3 × 10⁸ m s⁻¹
Question 2: Heavy stuff
In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]
Question 3: Just like clockwork
Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.
A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by T = 2π√(l/g), where g is the acceleration due to gravity.
Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.
Radius of the Earth, R = 6.3781 × 10⁶ m
Period of one Earth day, τ₀ = 8.64 × 10⁴ s
How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]
Question 4: Quantum stick
Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvins, so the stick is in its ground state.
Moment of inertia for a rod about one end, I = ml²/3, where m is the mass and l is the length.
Uncertainty principle, ΔxΔp ≥ ℏ/2
There are several possible approximations and simplifications you could make in solving this problem, including:
sinθ ≈ θ for small θ
Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]
Hint: Consider the two possible initial conditions that arise from the uncertainty principle.
Answers will be posted here on the Physics World website next month. There are no prizes.
If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk
Lithium iron phosphate (LFP) battery cells are ubiquitous in electric vehicles and stationary energy storage because they are cheap and have a long lifetime. This webinar will present our studies comparing 240 mAh LFP/graphite pouch cells undergoing charge–discharge cycles over five state-of-charge (SOC) windows (0–25%, 0–60%, 0–80%, 0–100% and 75–100%). To accelerate the degradation, elevated temperatures of 40 °C and 55 °C were used; at more realistic operating temperatures, LFP cells are expected to perform better, with longer lifetimes. We found that cycling LFP cells across a lower average SOC results in less capacity fade than cycling across a higher average SOC, regardless of the depth of discharge. The primary capacity-fade mechanism is loss of lithium inventory, driven by the reactivity of lithiated graphite with the electrolyte, which increases incrementally with SOC, and by lithium alkoxide species that cause iron dissolution and deposition on the negative electrode at high SOC, further accelerating the loss of lithium inventory. Our results show that even low-voltage LFP systems (3.65 V) face a trade-off between average SOC and lifetime, and operating LFP cells at a lower average SOC could extend their lifetime substantially in both EV and grid-storage applications.
Eniko Zsoldos is a 5th year PhD candidate in chemistry at Dalhousie University in the Jeff Dahn research group. Her current research focuses on understanding degradation mechanisms in a variety of lithium-ion cell chemistries (NMC, LFP, LMO) using techniques such as isothermal microcalorimetry and electrolyte analysis. Eniko received her undergraduate degree in nanotechnology engineering from the University of Waterloo. During her undergrad, she was a member of the Waterloo Formula Electric team, building an electric race car for FSAE student competitions. She has completed internships at Sila Nanotechnologies working on silicon-based anodes for batteries, and at Tesla working on dry electrode processing in Fremont, CA.
A building may be little more than bricks and mortar, but behind the façade it can bring people together and catalyse change. That was the vision for the main facility of the UK’s National Quantum Computing Centre (NQCC), located on the Harwell Campus in Oxfordshire, which is designed to foster collaboration and accelerate innovation across all parts of the UK’s quantum ecosystem.
At the official opening of the building, held at the end of October 2024, the NQCC team showed how that original vision had been turned into reality. In the new experimental labs on the ground floor, NQCC scientists who were previously working as individual teams in borrowed facilities around the Harwell site are now working in an environment where they can swap notes with colleagues working on other hardware platforms.
“It is always useful to have other scientists around to share ideas and solve specific problems,” said Klara Theophilo, an atomic physicist who is setting up trapped-ion systems based on chips originally developed at the University of Oxford and the National Physical Laboratory (NPL). “Trapped-ion systems share some of the same challenges as hardware platforms based on neutral atoms, while the cryogenic engineering we need is also being used for systems based on superconducting qubits.”
Theophilo and her scientific colleagues are benefiting from state-of-the-art experimental facilities purpose-designed for building and testing quantum computers. “This lab has the best environmental control I have ever worked in,” she said. “To achieve high gate fidelities we need careful control of both the temperature and the humidity to ensure that our lasers can manipulate the qubits with high precision, and in our previous lab space there was a constant need to realign and recalibrate the lasers.”
Joining the NQCC technical teams will be scientists and engineers from commercial companies who are building their own systems for quantum computing. In the coming months, several firms are due to install prototype hardware platforms commissioned by the NQCC as part of its programme to establish seven experimental testbeds based on different qubit modalities.
Others will be hosted at the Innovation Hub, the NQCC’s other facility on the Harwell Campus, while quantum networking company Nu Quantum is also preparing to establish a team within the main building for a three-year co-development project with the NQCC. The aim of this programme, called Project IDRA, will be to build a distributed quantum computing system that connects multiple hardware nodes by entangling the qubits in different quantum processors.
For the NQCC and its backers, the longer term hope is that bringing these hardware companies into the national lab will catalyse the formation of a quantum cluster in and around the Harwell Campus.
“We have a unique ability on this site to connect academia and national infrastructure with start-up businesses and large enterprise,” said Mark Thomson, currently the executive chair of the Science and Technology Facilities Council (STFC) and soon to be the new director general of CERN. “A facility like the NQCC can act like an anchor for businesses to build around, creating a cluster of companies that form a supply chain for each other. We have already seen that in the space sector, and I genuinely believe that we will now see the same clustering effect for quantum technologies.”
Indeed, many of the hardware providers who are installing their prototype systems within the NQCC are eager to find new ways to work with the national lab and its growing network of academic and commercial partners. “Establishing a presence in the NQCC is a great way for us to become more connected with the UK’s wider quantum ecosystem,” said Alice Voaden, project manager for Rigetti, one of the testbed providers. “It puts us in a better position to identify future opportunities for collaboration, which could help us to explore how emerging applications and software strategies can work with our technology.”
Beyond the technical work, the new facility brings together the NQCC’s growing team of technical and innovation specialists under the same roof for the first time. Previously distributed among temporary office spaces across the Harwell Campus, around 80 people working across a diverse range of activities now have the chance to make new connections and forge a collective identity that will help to establish the NQCC as a focal point for quantum computing in the UK and beyond.
Indeed, since the NQCC was established in 2020 it has put an increasing emphasis on building a community of hardware providers, software developers and end users who can work together to explore the value of quantum computing for the benefit of society and the economy.
“The early vision for the NQCC was to address the issue of scaling in quantum computing, and originally we were primarily focused on technology development,” commented NQCC director Michael Cuthbert. “But increasingly we’ve been turning our attention to scaling the user community for quantum computing, and today is an opportunity for us to highlight our activities across the breadth of our programme.”
Those efforts include providing easy access to quantum computing resources, offering learning opportunities to boost the ranks of scientists and engineers with an understanding of quantum computers, and working directly with organizations in the public and private sectors to develop use cases where quantum computing can make a meaningful impact.
In one example highlighted at the inauguration, applications engineers from the NQCC are working with software company Unisys and the University of Newcastle to explore how today’s quantum computers could be used to optimize the loading of cargo onto aircraft, which can cut fuel costs and reduce carbon emissions.
“What happens here will create jobs and businesses, and it will benefit people across the UK and beyond,” said Science Minister Lord Patrick Vallance, who officially opened the building. “You have created something that will bring academics and people from industry together to harness the power of quantum computing to solve problems that really matter.”
Another element of the NQCC’s remit is to provide clear, trusted and impartial guidance to government, businesses and the public. It is already working with NPL and other government and industry bodies on standards development, with the NQCC spearheading the global debate around responsible and ethical quantum computing. “Gaining public trust is vital to drive user adoption,” said Cuthbert. “The NQCC is in a unique position to provide thought leadership on ethical considerations, which will ultimately benefit the whole community.”
While the inauguration of the UK’s newest national lab was focused on the prospects for quantum computing, there were also reminders that the NQCC is a direct result of the country’s established strength in quantum science and technology. Following decades of basic research across many contributing disciplines, the National Quantum Technologies Programme (NQTP), which has seen more than £1bn of investment since 2014, has created a collaborative culture in which academics work in tandem with start-up companies to translate scientific insights into innovative technologies.
“We know that quantum computing will be a long-haul journey that requires some patience, but the NQCC is already showing what can be achieved through collaboration and co-location,” said Peter Knight, the architect of the NQTP and the instigator behind the NQCC. “Bringing companies and academics into the facility will enable dialogue, drive future collaboration, and accelerate progress towards our mission of delivering quantum computing at scale.”
Microscopic robots with features small enough to control light at the microscale offer the potential to probe the microscopic world in more detail, with light scattered from such microbots able to induce diffractive optical effects.
To date, this combination of diffractive optics and tuneable mechanics has primarily exploited microelectromechanical systems (MEMS) devices, but creating actuatable microbots with features on the scale of the wavelength of light has been challenging.
To address this challenge, researchers at Cornell University turned to magnetically controlled microbots. While such robots have been developed at millimetre scales, magnetic actuation at the micron scale has only recently become possible, thanks to protocols that encode magnetic information into microscale robots and the use of atomic layer deposition (ALD) to create nanoscale hinges, which make flexible micromachines capable of advanced navigation.
The team has now created magnetically controlled microbots that operate at the visible-light diffraction limit, so-called diffractive robots.
“A walking robot that’s small enough to interact with and shape light effectively takes a microscope’s lens and puts it directly into the microworld,” says team leader Paul McEuen in a press statement. “It can perform up-close imaging in ways that a regular microscope never could.”
New magnetic microbots
Using nanometre-scale mechanical membranes, rigid panels, programmable nanomagnets and diffractive optical elements, McEuen and colleagues created untethered microbots that are small enough to diffract visible light. They used the ALD hinges to connect the microbot’s rigid panels with magnetically actuatable joints, enabling them to reconfigure and move in millitesla-scale magnetic fields.
The core elements of the diffractive microbots comprise the light-diffracting panels with integrated nanomagnet arrays and the flexible hinges; the platform can also embed optical elements such as an optical diffraction grating. To achieve the required mechanical, diffractive and magnetic performance, these integrated elements span several orders of magnitude in scale. The light-diffracting grating panels were tens of microns in size, with each panel 1 µm wide, whereas the diffractive grating lines were on the scale of light wavelengths; the hinges had a thickness of 5 nm and the magnetic domains were in the nanoscale realm.
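The reason wavelength-scale features matter can be seen from the standard grating equation, d sinθ = mλ: structures with a period close to the wavelength of visible light diffract it through large angles. The numbers below are illustrative assumptions, not the team’s specifications.

```python
import numpy as np

wavelength = 500e-9  # green light, m (illustrative)
pitch = 1e-6         # grating period comparable to the micron-scale features (assumed)

# First-order (m = 1) diffraction angle from d*sin(theta) = m*lambda.
theta = np.degrees(np.arcsin(wavelength / pitch))
print(f"first-order diffraction angle ~ {theta:.0f} degrees")  # ~30 degrees
```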
The hinges played a crucial role, the researchers note, by providing a high degree of flexibility to an otherwise rigid robot. This flexibility allowed the microbots to rotate and reorientate themselves to dynamically change how light is diffracted, focused and redirected.
When manipulated with a magnetic field, the microrobots were able to simultaneously change shape, locomote along a surface and control diffracted light. This locomotion capability was due to the array of nanomagnets integrated into the light-diffracting grating panels.
By selectively controlling the aspect ratio of the nanomagnet domains and programming them using the strength of the external magnetic field, the researchers could control the movement of the microbots – including crawling forward on a solid surface and “swimming” through fluids while simultaneously steering and diffracting light.
“These robots are 5 microns to 2 microns,” says co-author Itai Cohen. “They’re tiny. And we can get them to do whatever we want by controlling the magnetic fields driving their motions.”
The researchers note that the tuneability of the optical elements could be further improved by adding more magnetic material to the microbots and/or increasing the size of the magnetic fields used to control them. And while this study centred around individual microbots, it should also be possible to use multiple microbots in magnetically actuated robot swarms to introduce collective optical effects.
Potential applications
As a generalized robotics platform, the microbots could easily be modified and produced with differing sizes, geometries and optical elements according to the intended application. Some key optical elements that could be integrated include meta-atoms, subwavelength apertures and plasmonic resonant probes.
The researchers have already demonstrated that the microbots have capabilities including force sensing with piconewton sensitivity, subdiffractive imaging using a type of structured illumination microscopy, and light beam steering and focusing using tunable diffractive optical elements. Other potential applications include endoscopic imaging and tissue ablation, high-resolution fluorescence microscopy of cells, and the high-resolution sensing of magnetic fields and current in integrated circuits.
Physics takes us from the far reaches of the universe to the subatomic scale. A passion for physics also takes us further than we imagined possible, building skills that set us up for life, no matter what path we follow in our careers.
If you’re a physicist or physics professional, your drive for the subject is invaluable. By sharing your passion, you show others how far physics could take them. It can be intimidating, but outreach is vital for nurturing the next generation of physicists, promoting public understanding of science and building a skilled physics community.
Outreach is also an important part of the mission of The Ogden Trust – a UK-based charitable organization that promotes the teaching and learning of physics. The trust has been supporting university physics outreach since 2005, with nearly all universities in England that offer physics undergraduate degrees – and several in Scotland and Wales too – having worked with the trust.
As well as providing funding for public engagement and outreach initiatives, the trust also supports universities through the Outreach Officer Network and annual Outreach Awards. So as a physicist, how can you get involved in outreach? Here are some tips and case studies to inspire you along your journey.
Starting out strong
Just as collaboration and shared tools are vital for physics research, there is also a wealth of support that physicists interested in outreach can draw on. No matter how ambitious your idea is, remember that others have been in your position before. Accessing shared resources and training will make starting out much easier (see box on the Physics Mentoring Project).
You could begin by signing up for The Interact Symposium, a biennial event for physical scientists seeking to gain new skills and share their experiences of public engagement. The symposium is run by the Science and Technology Facilities Council (STFC), the Institute of Physics (IOP), The Ogden Trust, the Royal Astronomical Society and the South East Physics Network (SEPnet), and a bank of resources from the 2024 event is available online, including lots of examples of successful projects.
Meanwhile, many departments in universities, schools and workplaces have a specialist outreach co-ordinator whose experience you could tap into. If there isn’t, you might have a more experienced colleague who can advise you and share community or school links. You could also contact your local IOP branch committee or join the IOP’s Physics Communicators Group.
As with any scientific endeavour, it’s important to do your research. Attending local science festivals and community events will give you great ideas and inspiration. One day, they may even provide an opportunity to deliver your own outreach.
The Physics Mentoring Project
Set up in 2019, the Physics Mentoring Project is a collaboration across Wales – led by Cardiff University – that mentors school students, encouraging them to continue studying physics. It has so far delivered more than 7000 hours of mentoring in 36% of all secondary schools in the country.
Students at any of the eight participating universities who have a post-16 qualification in a physical science can sign up as a mentor. All receive a weekend of intense interactive training that covers mentoring theory, relationship building, and session planning, as well as safeguarding and health and safety.
Now in its seventh year, the project has developed into an active network. Mentors have access to an online community with peers and the project team. There are also “lead mentors” who give extra support to a small group of mentors (both new and experienced).
“[My] confidence in public speaking and the confidence in articulating points has come on leaps and bounds,” reported one mentor on the project. “Mentoring helped me understand a bit more about what teaching will be like,” added another.
Originally aimed at 15- and 16-year-olds, the project also mentors 17–18-year-olds doing A-levels and focuses on alternative routes into physics. Optionally, mentors can even take a Level 4 Unit in Increasing Engagement with Physics Through Mentoring, accredited by Agored Cymru as part of the Credit and Qualifications Framework for Wales.
The Physics Mentoring Project won an Ogden Outreach Award in 2022 for “supporting undergraduate ambassadors”.
Strategic thinking
So, you’ve tried outreach for the first time and are eager to do more. It’s tempting to jump straight in. But before making any big commitments, it is worth making a long-term strategic plan.
Your department might have an engagement-specific strategy or other priorities that could be linked to your activities. If there is a dedicated outreach or public engagement professional in your organization, they can advise on this. If your workplace doesn’t have a strategy for outreach and engagement, you could advocate for one to be written (see box on the Institute of Cosmology and Gravitation, University of Portsmouth, UK).
In the UK, the quality of research in higher-education institutions is assessed by the Research Excellence Framework (REF), the results of which inform research funding allocations. Part of the exercise considers the impact of research on people, culture and environment. In REF 2021 around half the impact case studies submitted featured outreach and engagement activities.
In 2021 The Ogden Trust released the Taking a Strategic Approach to Outreach guide. In partnership with the STFC, the trust also funds an annual leadership training course for outreach and public engagement, which equips academics and teaching staff with the skills to plan and deliver effective outreach.
The Institute of Cosmology and Gravitation
In 2017 the Institute of Cosmology and Gravitation (ICG) at the University of Portsmouth, UK, introduced an outreach and public engagement strategy, which has since guided significant changes in Portsmouth. The strategy was a short, easy-to-use resource, intended as a working document that could be updated if needed. It outlined outreach and engagement goals over a five-year period, with budget and staffing allocated accordingly.
A crucial part of the process involved consulting people across the department, particularly the ICG directors and those doing innovation and impact work, as well as external supporters of the department’s outreach and public engagement.
Since the strategy was introduced, the department has created a new school outreach programme focusing on a small number of schools where the need for outreach is greatest. The ICG has also invested significantly in Tactile Universe, a project that engages visually impaired school pupils with astronomy research (see pictures).
Thanks to this new approach, outreach and public engagement have become firmly embedded in the ICG. An updated outreach and public engagement strategy was introduced in 2022.
At this point, you should also consider whether you have all the resources you need. It is often possible to deliver activities with equipment from your institution but, as you do more, the cost of travel, time and equipment can add up. You may be able to fund activities from your existing budgets, particularly if they are closely related to your work. However, you may also need to consider external funding opportunities.
Engagement funding is available through a number of organizations. For example, the STFC has created the Spark awards (£1000–15,000), Nucleus awards (£15,000–125,000) and other grants to engage the public with STFC science. The IOP public-engagement grant scheme awards £500–4000 to improve young people’s relationship with physics. The Royal Academy of Engineering, meanwhile, has its Ingenious grant scheme, which offers funding of £3000–30,000 for projects that engage under-represented audiences.
Remember that while one-off outreach activities can spark your audience’s interest, building long-term partnerships is often more effective. Outreach work with schools is ideally suited for this kind of approach – in fact, regular interactions with a school can tackle systemic inequalities in UK STEM education (see box on Orbyts).
Orbyts
Orbyts links university researchers with pupils in some of the most deprived areas of the UK, empowering them to do original research. Projects last a minimum of five months and involve regular meetings between pupils and researchers. Orbyts projects currently run in three universities across England and received funding from The Ogden Trust to scale their approach.
So far, Orbyts has created over 100 partnerships between researchers and schools, enabling more than 1500 school students to undertake research projects. Topics have included life in the universe, black holes, quantum computing and cancer. Here are some comments from those involved.
“In a tough year with significant professional challenges to overcome, this has been a real “get me out of bed in the morning” kind of project.” Orbyts partner teacher
“The high-level provision offered by the Orbyts researchers raised enthusiasm and interest in STEM disciplines among our students. The researchers introduced our students to Python programming, as well as analysis and interpretation techniques of large data sets, skills that are of fundamental importance at research level in all areas of physics and STEM. Several of the female students taking part in Orbyts decided to apply to physics at university. They were inspired by the content and the overall experience, as well as by the high-calibre female researchers from Orbyts who visited our school every week for several months and acted as role models for them. Most of the students who took part in 2021/22 are now studying physics, engineering or material science at universities. Their participation in Orbyts was pivotal in making informed decisions about their academic future.”
Physics and maths teacher, Newham Collegiate Sixth Form, UK
“I’ve been fortunate enough to have been a part of Orbyts for the last two years. It has helped me gain invaluable skills and develop as a researcher in more ways than I ever expected. Orbyts has enabled me to gain confidence and ownership in my research, as well as providing opportunities to project manage and improve my public speaking and teaching skills in a proactive yet fun way. Working with students on an Orbyts project has been one of the most rewarding experiences of my research career. It has been incredible to see the students become more confident in their work and become enthusiastic researchers themselves across the short 14-week programme.” Shannon Killey, space physics PhD student, Northumbria University
You should also think about your target audience. A lot of physics engagement takes place in schools but partnerships with community organizations can reach those who may not attend science festivals or talks. There may be an increased willingness to engage in physics outside of the classroom, where it can capture the imagination of young people who find a school environment challenging (see box on My Place, My Science).
My Place, My Science
My Place, My Science is an initiative to support young people of African and Black Caribbean heritage in the UK to enjoy science and build cultural connections. It is a partnership between the physics, rheumatology and biochemistry departments at the University of Oxford, the History of Science Museum and the community organization African Families in the UK (AFiUK).
Launched in 2023, My Place, My Science has delivered a programme of activities where participants learn about topics including stargazing, magnets and sickle cell disease. It was also the winner of the Ogden Outreach Award for Engaging Communities in 2024.
“AFiUK has a deep understanding of local needs, priorities, and challenges,” says Sian Tedaldi, outreach programmes manager in Oxford’s physics department. “This understanding continues to shape and inform the development of the project. They have provided a familiar and trusted organization for participants, leading to greater participation and impact.”
“I have developed a toolkit of interactive activities to engage audiences with planetary research. I have been able to reach thousands of young people, families and adults through my work and have engaged with traditionally under-represented groups within physics, such as girls and children from disadvantaged backgrounds. I love talking to young people about space and the opportunity to speak with the enthusiastic and curious AFiUK community has been incredibly rewarding.” Katherine Shirley, planetary-physics postdoc at the University of Oxford
Steps to success
As with any activity in which you are investing your time and energy, it is important to know whether you are achieving your outreach goals. Having a clear strategy will give you a clear idea of what success looks like, but effective evaluation should also be built into your project from the start.
This will also be valuable if you have to justify the time and money spent on a project or make funding applications. The STFC has a useful public engagement evaluation framework that you can follow. The Ogden Trust has also published an evaluation toolkit for working with young people that uses the science capital framework.
Bear in mind that evaluation doesn’t always mean surveys and quantitative data. You might instead get verbal feedback from participants or ask someone else to observe you. In a university, you could consult colleagues in education or social-science departments who are familiar with such methodologies. For larger projects or those for REF or business cases, you could turn to an external evaluator to provide an independent perspective.
Physicists know that their subject impacts everything from space exploration to sustainable technology, but unfortunately many people don’t think physics is for them. Young people from disadvantaged backgrounds, in particular, struggle to see themselves as future physicists. Outreach can make a real difference by showing that you don’t need to belong to a specific group or demographic to be a physicist – all you need is a passion for the subject.
For more information about The Ogden Trust or to sign up for its Physics Outreach Network newsletter, visit its website or e-mail outreach@ogdentrust.com.