International quantum year launches in style at UNESCO headquarters in Paris

More than 800 researchers, policy makers and government officials from around the world gathered in Paris this week to attend the official launch of the International Year of Quantum Science and Technology (IYQ). Held at the headquarters of the United Nations Educational, Scientific and Cultural Organisation (UNESCO), the two-day event included contributions from four Nobel prize-winning physicists – Alain Aspect, Serge Haroche, Anne l’Huillier and William Phillips.

Opening remarks came from Cephas Adjej Mensah, a research director in the Ghanaian government, which last year submitted the draft resolution to the United Nations for 2025 to be proclaimed as the IYQ. “Let us commit to making quantum science accessible to all,” Mensah declared, reminding delegates that the IYQ is intended to be a global initiative, spreading the benefits of quantum equitably around the world. “We can unleash the power of quantum science and technology to make an equitable and prosperous future for all.”

The keynote address was given by l’Huillier, a quantum physicist at Lund University in Sweden, who shared the 2023 Nobel Prize for Physics with Pierre Agostini and Ferenc Krausz for their work on attosecond pulses. “Quantum mechanics has been extremely successful,” she said, explaining how it was invented 100 years ago by Werner Heisenberg on the island of Helgoland. “It has led to new science and new technology – and it’s just the beginning.”

An on-stage panel in a large auditorium
Let’s go Stephanie Simmons, chief quantum officer at Photonic and co-chair of Canada’s National Quantum Strategy advisory council, speaking at the IYQ launch in Paris. (Courtesy: Matin Durrani)

Some of that promise was outlined by Phillips in his plenary lecture. The first quantum revolution led to lasers, semiconductors and transistors, he reminded participants, but said that the second quantum revolution promises more by exploiting effects such as quantum entanglement and superposition – even if its potential can be hard to grasp. “It’s not that there’s something deeply wrong with quantum mechanics – it’s that there’s something deeply wrong with our ability to understand it,” Phillips explained.

The benefits of quantum technology to society were echoed by leading Chinese quantum physicist Jian-Wei Pan of the University of Science and Technology of China in Hefei. “The second quantum revolution will likely provide another human leap in human civilization,” said Pan, who was not at the meeting, in a pre-recorded video statement. “Sustainable funding from government and private sector is essential. Intensive and proactive international co-operation and exchange will undoubtedly accelerate the benefit of quantum information to all of humanity.”

Leaders of the burgeoning quantum tech sector were in Paris too. Addressing the challenges and opportunities of scaling quantum technologies to practical use was a panel made up of Quantinuum chief executive Rajeeb Hazra, QuEra president Takuya Kitagawa, IBM’s quantum-algorithms vice president Katie Pizzolato, ID Quantique boss Grégoire Ribordy and Microsoft technical fellow Krysta Svore. Also present was Alexander Ling from the National University of Singapore, co-founder of two hi-tech start-ups.

“We cannot imagine what weird and wonderful things quantum mechanics will lead to, but you can be sure it’ll be marvellous,” said Celia Merzbacher, executive director of the Quantum Economic Development Consortium (QED-C), who chaired the session. All panellists stressed the importance of having a supply of talented quantum scientists and engineers if the industry is to succeed. Hazra also underlined that new products based on “quantum 2.0” technology had to be developed with – and to serve the needs of – users if they are to turn a profit.

The ethical challenges of quantum advancements were also examined in a special panel, as was the need for responsible quantum innovation to avoid a “digital divide” where quantum technology benefits some parts of society but not others. “Quantum science should elevate human dignity and human potential,” said Diederick Croese, a lawyer and director of the Centre for Quantum and Society at Quantum Delta NL in the Netherlands.

A man stood beside a large panel of coloured lights creating an abstract picture
Science in action German artist Robin Baumgarten explains the physics behind his Quantum Jungle art installation. (Courtesy: Matin Durrani)

The cultural impact of quantum science and technology was not forgotten in Paris either. Delegates flocked to an art installation created by Berlin-based artist and game developer Robin Baumgarten. Dubbed Quantum Jungle, it attempts to “visualize quantum physics in a playful yet scientifically accurate manner” by using an array of lights controlled by flickable, bendy metal door stops. Baumgarten claims it is a “mathematically accurate model of a quantum object”, with the brightness of each ring being proportional to the chance of an object being there.

Spacewoman: trailblazing astronaut Eileen Collins makes for a compelling and thoughtful documentary subject

“What makes a good astronaut?” asks director Hannah Berryman in the opening scene of Spacewoman. It’s a question few can answer better than Eileen Collins. As the first woman to pilot and command a NASA Space Shuttle, her career was marked by historic milestones, extraordinary challenges and personal sacrifices. Collins looks down the lens of the camera and, as she pauses for thought, we cut to footage of her being suited up in astronaut gear for the third time. “I would say…a person who is not prone to panicking.”

In Spacewoman, Berryman crafts a thoughtful, emotionally resonant documentary that traces Collins’s life from a determined young girl in Elmira, New York, to a spaceflight pioneer.

The film’s strength lies in its compelling balance of personal narrative and technical achievement. Through intimate interviews with Collins, her family and former colleagues, alongside a wealth of archival footage, Spacewoman paints a vivid portrait of a woman whose journey was anything but straightforward. From growing up in a working-class family affected by her parents’ divorce and Hurricane Agnes’s destruction, to excelling in the male-dominated world of aviation and space exploration, Collins’s resilience shines through.

Berryman wisely centres the film on the four key missions that defined Collins’s time at NASA. While this approach necessitates a brisk overview of her early military career, it allows for an in-depth exploration of the stakes, risks and triumphs of spaceflight. Collins’s pioneering 1995 mission, STS-63, saw her pilot the Space Shuttle Discovery in the first rendezvous with the Russian space station Mir, a mission fraught with political and technical challenges. The archival footage from this and subsequent missions provides gripping, edge-of-your-seat moments that demonstrate both the precision and unpredictability of space travel.

Perhaps Spacewoman’s most affecting thread is its examination of how Collins’s career intersected with her family life. Her daughter, Bridget, born shortly after her first mission, offers a poignant perspective on growing up with a mother whose job carried life-threatening risks. In one of the film’s most emotionally charged scenes, Collins recounts explaining the Challenger disaster to a young Bridget. Despite her mother’s assurances that NASA had learned from the tragedy, the subsequent Columbia disaster two weeks later underscores the constant shadow of danger inherent in space exploration.

These deeply personal reflections elevate Spacewoman beyond a straightforward biographical documentary. Collins’s son Luke, though younger and less directly affected by his mother’s missions, also shares touching memories, offering a fuller picture of a family shaped by space exploration’s highs and lows. Berryman’s thoughtful editing intertwines these recollections with historic footage, making the stakes feel immediate and profoundly human.

The film’s tension peaks during Collins’s final mission, STS-114, the first “return to flight” after Columbia. As the mission teeters on the brink of disaster due to familiar technical issues, Berryman builds a heart-pounding narrative, even for viewers unfamiliar with the complexities of spaceflight. Without getting bogged down in technical jargon, she captures the intense pressure of a mission fraught with tension – for those on Earth, at least.

Berryman’s previous films include Miss World 1970: Beauty Queens and Bedlam and Banned! The Mary Whitehouse Story. In a recent episode of the Physics World Stories podcast, she told me that she was inspired to make the film after reading Collins’s autobiography Through the Glass Ceiling to the Stars. “It was so personal,” she said, “it took me into space and I thought maybe we could do that with the viewer.” Collins herself joined us for that podcast episode and I found her to be that same calm, centred, thoughtful person we see in the film, and whom NASA clearly chose very carefully to command such an important mission.

Spacewoman isn’t just about near-misses and peril. It also celebrates moments of wonder: Collins describing her first sunrise from space or recalling the chocolate shuttles she brought as gifts for the Mir cosmonauts. These light-hearted anecdotes reveal her deep appreciation for the unique experience of being an astronaut. On the podcast, I asked Collins what one lesson she would bring from space to life on Earth. After her customary moment’s pause for thought, she replied “Reading books about science fiction is very important.” She was a fan of science fiction in her younger years, which enabled her to dream of the future that she realized at NASA and in space. But, she told me, these days she also reads about real science of the future (she was deep into a book on artificial intelligence when we spoke) and history too.

Berryman’s directorial focus ultimately circles back to a profound question: how much risk is acceptable in the pursuit of human progress? Spacewoman suggests that those committed to something greater than themselves are willing to risk everything. Collins’s career embodies this ethos, defined by an unshakeable resolve, even in the face of overwhelming odds.

In the film’s closing moments, we see Collins speaking to a wide-eyed girl at a book signing. The voiceover from interviews talks of the women slated to be instrumental in humanity’s return to the Moon and future missions to Mars. If there’s one thing I would change about the film, it’s that the final word is given to someone other than Collins. The message is a fitting summation of her life and legacy, but I would like to have seen it delivered with the understated confidence of someone who has lived it. It’s a quibble, though, in a compelling film that I would recommend to anyone with an interest in space travel or the human experience here on Earth.

When someone as accomplished as Collins says that you need to work hard and practise, practise, practise it has a gravitas few others can muster. After all, she spent 10 years practising to fly the Space Shuttle – and got to do it for real twice. We see Collins speak directly to the wide-eyed girl in a flight suit as she signs her book and, as she does so, you can feel the words really hit home precisely because of who says them: “Reach for the stars. Don’t give up. Keep trying because you can do it.”

Spacewoman is more than a tribute to a trailblazer; it’s a testament to human perseverance, curiosity and courage. In Collins’s story, Berryman finds a gripping, deeply personal narrative that will resonate with audiences across the planet.

  • Spacewoman premiered at DOC NYC in November 2024 and is scheduled for theatrical release in 2025. A Haviland Digital Film in association with Tigerlily Productions.

Introducing the Echo-5Q: a collaboration between FormFactor, Tabor Quantum Systems and QuantWare

Watch this short video filmed at the APS March Meeting in 2024, where Mark Elo, chief marketing officer of Tabor Quantum Solutions, introduces the Echo-5Q, which he explains is an industry collaboration between FormFactor and Tabor Quantum Systems, using the QuantWare quantum processing unit (QPU).

Elo points out that it is an out-of-the-box solution, allowing customers to order a full-stack system including the software, refrigeration, control electronics and the actual QPU. The Echo-5Q is delivered and installed so that the customer can start making quantum measurements immediately. He explains that the system is designed at a price and feature point that makes on-site quantum computing more accessible.

Brandon Boiko, senior applications engineer with FormFactor, describes how FormFactor developed the dilution refrigeration technology into which the qubits are installed. Boiko explains that the product has been designed to reduce the cost of entry into the quantum field – made accessible through FormFactor’s test-and-measurement programme, which allows people to bring their samples on site to take measurements.

Alessandro Bruno is founder and CEO of QuantWare, which provides the quantum processor for the Echo-5Q, the part that sits at the millikelvin stage of the dilution refrigerator and hosts five qubits. Bruno hopes that the Echo-5Q will democratize access to quantum devices – for education, academic research and start-ups.

Tissue-like hydrogel semiconductors show promise for next-generation bioelectronics

Researchers at the University of Chicago’s Pritzker School of Molecular Engineering have created a groundbreaking hydrogel that doubles as a semiconductor. The material combines the soft, flexible properties of biological tissues with the electronic capabilities of semiconductors, making it ideal for advanced medical devices.

In a study published in Science, the research team, led by Sihong Wang, developed a stretchy, jelly-like material that provides the robust semiconducting properties necessary for use in devices such as pacemakers, biosensors and drug delivery systems.

Rethinking hydrogel design

Hydrogels are ideal for many biomedical applications because they are soft, flexible and water-absorbent – just like human tissues. Materials scientists, long recognizing the vast potential of hydrogels, have pushed the boundaries of this class of material. One way is to create hydrogels with semiconducting abilities that can be useful for transmitting information between living tissues and bioelectronic device interfaces – in other words, a hydrogel semiconductor.

Imparting semiconducting properties to hydrogels is no easy task, however. Semiconductors, while known for their remarkable electronic properties, are typically rigid, brittle and water-repellent, making them inherently incompatible with hydrogels. By overcoming this fundamental mismatch, Wang and his team have created a material that could revolutionize the way medical devices interface with the human body.

Traditional hydrogels are made by dissolving hydrogel precursors (monomers or polymers) in water and adding chemicals to crosslink the polymers and form a water-swelled state. Since most polymers are inherently insulating, creating a hydrogel with semiconducting properties requires a special class of semiconducting polymers. The challenges do not stop there, however. These polymers typically only dissolve in organic solvents, not in water.

“The question becomes how to achieve a well-dispersed distribution of these semiconducting materials within a hydrogel matrix,” says first author Yahao Dai, a PhD student in the Wang lab. “This isn’t just about randomly dispersing particles into the matrix. To achieve strong electrical performance, a 3D interconnected network is essential for effective charge transport. So, the fundamental question is: how do you build a hydrophobic, 3D interconnected network within the hydrogel matrix?”

Sihong Wang and Yahao Dai
Innovative material Sihong Wang (left), Yahao Dai (right) and colleagues have developed a novel hydrogel with semiconducting properties. (Courtesy: UChicago Pritzker School of Molecular Engineering/John Zich)

To address this challenge, the researchers first dissolved the polymer in an organic solvent that is miscible with water, forming an organogel – a gel-like material composed of an organic liquid phase in a 3D gel network. They then immersed the organogel in water and allowed the water to gradually replace the organic solvent, transforming it into a hydrogel.

The researchers point out that this versatile solvent exchange process can be adapted to a variety of semiconducting polymers, opening up new possibilities for hydrogel semiconductors with diverse applications.

A two-in-one material

The result is a hydrogel semiconductor material that’s soft enough to match the feel of human tissue. With a Young’s modulus as low as 81 kPa – comparable to that of jelly – and the ability to stretch up to 150% of its original length, this material mimics the flexibility and softness of living tissue. These tissue-like characteristics allow the material to seamlessly interface with the human body, reducing the inflammation and immune responses that are often triggered by rigid medical implants.

The material also has a high charge carrier mobility, a measure of its ability to efficiently transmit electrical signals, of up to 1.4 cm²/(V s). This makes it suitable for biomedical devices that require effective semiconducting performance.

The potential applications extend beyond implanted devices. The material’s high hydration and porosity enable efficient volumetric biosensing and mass transport throughout the entire thickness of the semiconducting layer, which is useful for biosensing, tissue engineering and drug delivery applications. The hydrogel also responds to light effectively, opening up possibilities for light-controlled therapies, such as light-activated wireless pacemakers or wound dressings that use heat to accelerate healing.

A vision for transforming healthcare

The research team’s hydrogel material is now patented and being commercialized through UChicago’s Polsky Center for Entrepreneurship and Innovation. “Our goal is to further develop this material system and enhance its performance and application space,” says Dai. While the immediate focus is on enhancing the electrical and light modulation properties of the hydrogel, the team envisions future work in biochemical sensing.

“An important consideration is how to functionalize various bioreceptors within the hydrogel semiconductor,” explains Dai. “As each biomarker requires a specific bioreceptor, the goal is to target as many biomarkers as possible.”

The team is already exploring new methods to incorporate bioreceptors, such as antibodies and aptamers, within the hydrogels. With these advances, this class of semiconductor hydrogels could act as next-generation interfaces between human tissues and bioelectronic devices, from sensors to tailored drug-delivery systems. This breakthrough material may soon bridge the gap between living systems and electronics in ways once thought impossible.

Reliability science takes centre stage with new interdisciplinary journal

Journal of Reliability Science and Engineering (Courtesy: IOP Publishing)

As our world becomes ever more dependent on technology, an important question emerges: how much can we truly rely on that technology? To help researchers explore this question, IOP Publishing (which publishes Physics World) is launching a new peer-reviewed, open-access publication called Journal of Reliability Science and Engineering (JRSE). The journal will operate in partnership with the Institute of Systems Engineering (part of the China Academy of Engineering Physics) and will benefit from the editorial and commissioning support of the University of Electronic Science and Technology of China, Hunan University and the Beijing Institute of Structure and Environment Engineering.

“Today’s society relies much on sophisticated engineering systems to manufacture products and deliver services,” says JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. “Such systems include power plants, vehicles, transportation and manufacturing. The safe, reliable and economical operation of all these requires the continuing advancement of reliability science and engineering.”

Defining reliability

The reliability of an object is commonly defined as the probability that it will perform its intended function adequately for a specified period of time. “The object in question may be a human being, product, system, or process,” Zuo explains. “Depending on its nature, corresponding sub-disciplines are human-, material-, structural-, equipment-, software- and system reliability.”

Key concepts in reliability science include failure modes, failure rates, the reliability function and coherency, as well as measurements such as mean time-to-failure, mean time between failures, availability and maintainability. “Failure modes can be caused by effects like corrosion, cracking, creep, fracture, fatigue, delamination and oxidation,” Zuo explains.
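For reference, the quantities Zuo mentions have standard textbook definitions; written out (our summary and notation, not taken from the journal), they are:

```latex
% Textbook reliability definitions, summarized for reference (not from the journal itself).
\[
  R(t) = P(T > t) = 1 - F(t), \qquad
  \lambda(t) = \frac{f(t)}{R(t)}, \qquad
  \mathrm{MTTF} = \int_0^{\infty} R(t)\,\mathrm{d}t,
\]
\[
  A_{\text{steady state}} = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}.
\]
```

Here T is the random time to failure with probability density f(t) and cumulative distribution F(t), λ(t) is the failure (hazard) rate, and MTTR is the mean time to repair.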

To analyse such effects, researchers may use approaches such as fault tree analysis (FTA); failure modes, effects and criticality analysis (FMECA); and binary decomposition, he adds. These and many other techniques lie within the scope of JRSE, which aims to publish high-quality research on all aspects of reliability. This could, for example, include studies of failure modes and damage propagation as well as techniques for managing them and related risks through optimal design and reliability-centred maintenance.
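To make the fault-tree idea concrete, here is a minimal Python sketch (our illustration with invented failure probabilities, not an example from the journal) that evaluates the top-event probability of a toy fault tree with independent basic events, using the usual AND/OR gate rules:

```python
from math import prod

def and_gate(probs):
    """All inputs must fail: P = product of the basic-event probabilities."""
    return prod(probs)

def or_gate(probs):
    """Any input failing is enough: P = 1 - product of the survival probabilities."""
    return 1.0 - prod(1.0 - p for p in probs)

# Toy system: the top event occurs if the pump fails AND (power OR controller) fails.
p_pump, p_power, p_controller = 0.02, 0.01, 0.005

p_support = or_gate([p_power, p_controller])  # either support system failing
p_top = and_gate([p_pump, p_support])         # combined with the pump failure

print(f"P(support failure) = {p_support:.5f}")
print(f"P(top event)       = {p_top:.7f}")
```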

A focus on extreme environments

To give the journal structure, Zuo and his colleagues identified six major topics: reliability theories and methods; physics of failure and degradation; reliability testing and simulation; prognostics and health management; reliability engineering applications; and emerging topics in reliability-related fields.

Mingjian Zuo
JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. (Courtesy: IOP Publishing)

As well as regular issues published four times a year, JRSE will also produce special issues. A special issue on system reliability and safety in varying and extreme environments, for example, focuses on reliability and safety methods, physical/mathematical and data-driven models, reliability testing, system lifetime prediction and performance evaluation. Intelligent operation and maintenance of complex systems in varying and extreme environments are also covered.

Interest in extreme environments was one of the factors driving the journal’s development, Zuo says, due to the increasing need for modern engineering systems to operate reliably in highly demanding conditions. As examples, he cites wind farms being built further offshore; faster trains; and autonomous systems such as drones, driverless vehicles and social robots that must respond quickly and safely to ever-changing surroundings in close proximity to humans.

“As a society, we are setting ever higher requirements on critical systems such as the power grid and Internet, water distribution and transport networks,” he says. “All of these demand further advances in reliability science and engineering to develop tools for the design, manufacture and operation as well as the maintenance of today’s sophisticated engineering systems.”

The go-to platform for researchers and industrialists alike

Another factor behind the journal’s launch is that previously, there were no international journals focusing on reliability research by Chinese organizations. Since the discipline’s leaders include several such organizations, Zuo says the lack of international visibility has seriously limited scientific exchange and promotion of reliability research between China and the global community. He hopes the new journal will remedy this. “Notable features of the journal include gold open access (thanks to our partnership with IOP Publishing, a learned-society publisher that does not have shareholders) and a fast review process,” he says.

In general, the number of academic journals focusing on reliability science and engineering is limited, he adds. “JRSE will play a significant role in promoting the advances in reliability research by disseminating cutting-edge scientific discoveries and creative reliability assurance applications in a timely way.

“We are aiming that the journal will become the go-to platform for reliability researchers and industrialists alike.”

The first issue of JRSE will be published in March 2025, and its editors welcome submissions of original research reports as well as review papers co-authored by experts. “There will also be space for perspectives, comments, replies, and news insightful to the reliability community,” says Zuo. In the future, the journal plans to sponsor reliability-related academic forums and international conferences.

With over 100 experts from around the world on its editorial board, Zuo describes JRSE as scientist-led, internationally-focused and highly interdisciplinary. “Reliability is a critical measure of performance of all engineering systems used in every corner of our society,” he says. “This journal will therefore be of interest to disciplines such as mechanical-, electrical-, chemical-, mining- and aerospace engineering as well as the mathematical and life sciences.”

Elastic response explains why cordierite has ultra-low thermal expansion

Hot material The crystal structure of cordierite gives the material its unique thermal properties. (Courtesy: M Dove and L Li/Matter)

The anomalous and ultra-low thermal expansion of cordierite results from the interplay between lattice vibrations and the elastic properties of the material. That is the conclusion of Martin Dove at China’s Sichuan University and Queen Mary University of London in the UK and Li Li at the Civil Aviation Flight University of China. They showed that the material’s unusual behaviour stems from direction-varying elastic forces in its lattice, which act to vary cordierite’s thermal expansion along different directions.

Cordierite is a naturally-occurring mineral that can also be synthesized. Thanks to its remarkable thermal properties, it is used in products ranging from pizza stones to catalytic converters. When heated to high temperatures, it undergoes ultra-low thermal expansion along two directions, and it shrinks a tiny amount along the third direction. This makes it incredibly useful as a material that can be heated and cooled without changing size or suffering damage.

Despite its widespread use, scientists lack a fundamental understanding of how cordierite’s anomalous thermal expansion arises from the properties of its crystal lattice. Normally, thermal expansion (positive or negative) is understood in terms of Grüneisen parameters. These describe how vibrational modes (phonons) in the lattice cause it to expand or contract along each axis as the temperature changes.

Negative Grüneisen parameters describe a lattice that shrinks when heated, and are seen as key to understanding the thermal contraction of cordierite. However, the material’s thermal response is not isotropic (it contracts along only one axis when heated to high temperatures), so understanding cordierite in terms of its Grüneisen parameters alone is difficult.

Advanced molecular dynamics

In their study, Dove and Li used advanced molecular dynamics simulations to accurately model the behaviour of atoms in the cordierite lattice. Their simulations closely matched experimental observations of the material’s thermal expansion, providing them with key insights into why the material has a negative thermal expansion in just one direction.

“Our research demonstrates that the anomalous thermal expansion of cordierite originates from a surprising interplay between atomic vibrations and elasticity,” Dove explains. The elasticity is described in the form of an elastic compliance tensor, which predicts how a material will distort in response to a force applied along a specific direction.

At lower temperatures, lattice vibrations occur at lower frequencies. In this case, the simulations predicted negative thermal expansion in all directions – which is in line with observations of the material.

At higher temperatures, the lattice becomes dominated by high-frequency vibrations. In principle, this should result in positive thermal expansion in all three directions. Crucially, however, Dove and Li discovered that this expansion is cancelled out by the material’s elastic properties, as described by its elastic compliance tensor.

What is more, the unique arrangement of the crystal lattice means that this tensor varies depending on the direction of the applied force, creating an imbalance that amplifies the differences between the material’s expansion along each axis.
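For readers who want the formalism behind this picture, the standard anisotropic Grüneisen relation (a textbook sketch in our notation, not necessarily the exact expression used by Dove and Li) couples the vibrational driving terms to the expansion coefficients through the elastic compliance tensor:

```latex
% Anisotropic Grüneisen relation in Voigt notation (textbook form, our notation).
% alpha_lambda: linear thermal expansion coefficient along direction lambda
% s_{lambda mu}: elastic compliance tensor; c_m: heat capacity of phonon mode m
% gamma_{mu,m}: generalized mode Grüneisen parameter
\[
  \alpha_\lambda \;=\; \frac{1}{V} \sum_{\mu} s_{\lambda\mu}
  \sum_{m} c_m\, \gamma_{\mu,m},
  \qquad
  \gamma_{\mu,m} \;=\; -\,\frac{\partial \ln \omega_m}{\partial \varepsilon_\mu}.
\]
```

Because the compliance tensor mixes directions, a positive vibrational contribution along one axis can be partly or wholly cancelled along another – which is exactly the cancellation mechanism described below.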

Cancellation mechanism

“This cancellation mechanism explains why cordierite exhibits small positive expansion in two directions and small negative expansion in the third,” Dove explains. “Initially, I was sceptical of the results. The initial data suggested uniform expansion behaviour at both high and low temperatures, but the final results revealed a delicate balance of forces. It was a moment of scientific serendipity.”

Altogether, Dove and Li’s result clearly shows that cordierite’s anomalous behaviour cannot be understood by focusing solely on the Grüneisen parameters of its three axes. It is crucial to take its elastic compliance tensor into account.

In solving this long-standing mystery, the duo now hope their results could help researchers to better predict how cordierite’s thermal expansion will vary at different temperatures. In turn, they could help to extend the useful applications of the material even further.

“Anisotropic materials like cordierite hold immense potential for developing high-performance materials with unique thermal behaviours,” Dove says. “Our approach can rapidly predict these properties, significantly reducing the reliance on expensive and time-consuming experimental procedures.”

The research is described in Matter.

Researchers in China propose novel gravitational-wave observatory

Researchers in China have proposed a novel gravitational-wave observatory to search for cracks in Einstein’s general theory of relativity. The Tetrahedron Constellation Gravitational Wave Observatory (TEGO) would detect gravitational waves via four satellites that form a tetrahedral structure in space. Backers of the conceptual plan say TEGO offers significant advantages over designs consisting of a triangular configuration of three satellites.

Gravitational waves are distortions of space–time that occur when massive bodies, such as black holes, are accelerated. They were first detected in 2015 by researchers working on the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO), located in Hanford, Washington and Livingston, Louisiana.

The current leading design for a space-based gravitational-wave detector is the Laser Interferometer Space Antenna (LISA). Led by the European Space Agency, it is expected to launch in 2035 and operate for at least four years, at an estimated cost of €1.5bn.

LISA comprises three identical satellites in an equilateral triangle in space, with each side of the triangle being 2.5 million kilometres – more than six times the distance between the Earth and the Moon.

While ground-based instruments detect gravitational waves with frequencies from a few hertz to a few kilohertz, a space-based mission could pick up gravitational waves with frequencies between 10⁻⁴ and 10⁻¹ Hz.
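A back-of-the-envelope check (our illustration, using the standard transfer-frequency rule of thumb rather than any figure from these proposals) shows why arms of 2.5 million km naturally target the millihertz band:

```python
import math

C = 299_792_458.0   # speed of light (m/s)
arm_length = 2.5e9  # LISA-style arm length: 2.5 million km, in metres

# Above roughly the transfer frequency f* = c / (2*pi*L), the arm becomes longer
# than ~1/(2*pi) of the gravitational wavelength and the interferometer response
# starts to roll off, so f* marks the upper end of the band the arms are suited to.
f_star = C / (2 * math.pi * arm_length)
print(f"f* ~ {f_star * 1e3:.1f} mHz")  # ~19 mHz, well inside the 1e-4 to 1e-1 Hz band
```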

China has two proposals for a space-based gravitational-wave mission. Dubbed TAIJI and TianQin, they would be launched in the 2030s and, like LISA, consist of three spacecraft in a triangular formation, each separated by 2.5 million km.

According to Hong-Bo Jin from the National Astronomical Observatories, Chinese Academy of Sciences, in Beijing, one disadvantage of a triangular array is that when the propagation direction of a gravitational wave (a transverse wave) is parallel to the plane of the triangle, it is more difficult to locate the source of the wave.

A tetrahedral configuration could get around this problem while Jin says that an additional advantage is the extra combinations of optical paths possible with six arms. This means it could be sensitive to six polarization modes of gravitational waves. Einstein’s general theory of relativity predicts that gravitational waves have only two tensor polarization modes, so any detection of so-called vector or scalar polarization modes could signal new physics.

“Detecting gravitational waves based on the TEGO configuration will possibly reveal more polarization modes of gravitational waves, which is conducive to deepening our understanding of general relativity and revealing the essence of gravity and spacetime,” says Jin.

Yet such a design will come at a cost. Given that the equipment needed for TEGO, including the telescopes and optical benches, is twice that of a triangular configuration, the cost estimate for a tetrahedral set-up could also be double.

While TEGO follows a separate technical route from TAIJI, Jin says it can “refer” to some of TAIJI’s mature technologies. Given that many technologies still need to be demonstrated and developed, however, TEGO has no specific timeline for when it could be launched.

Italian gravitational-wave physicist Stefano Vitale, a former principal investigator of the LISA Pathfinder mission, told Physics World that “polyhedric” configurations of gravitational-wave detectors are “not new” and are much more difficult to implement than LISA. He adds that even aligning a three-satellite configuration such as LISA is “extremely challenging” and is something the aerospace community has never tried before.

“Going off-plane, like the TEGO colleagues want to do, with telescope add-ons, opens a completely new chapter [and] cannot be considered as incremental relative to LISA,” adds Vitale.

From banking to quantum optics: Michelle Lollie’s unique journey

Michelle Lollie
Quantum attraction Michelle Lollie. (Courtesy: Michelle Lollie)

Michelle Lollie is an advanced laser scientist at Quantinuum, supporting the design, development and construction of complex optical systems that will serve as the foundations of world-class quantum computers. Lollie also participates in various diversity, equity, inclusion and accessibility initiatives, advocating for those who are marginalized in STEM fields, particularly in physics. Outside of wrangling photons, you can often find her at home practising the violin.

Your initial bachelor’s degree was in finance, and you went on to work in the field through your 20s before pivoting to physics – what made you take the leap, and what inspired you to pick physics for your second bachelor’s degree?

I had dreams of working in finance since high school – indeed, at the time I was on my way to being the most dedicated, most fashionable, and most successful investment banker on Wall Street. I would like to think that, in some other quantum universe, there’s still a Michelle Lollie – investment banker extraordinaire.

So my interest in physics wasn’t sparked until much later in life, when I was 28 years old – I was no longer excited by a career in finance, and was looking for a professional pivot. I came across a groundbreaking theory paper about the quantum teleportation of states. I honestly thought that it referred to “Beam me up, Scotty” from Star Trek, and I was amazed.

But all jokes aside, quantum physics holds many a mystery that we’re still exploring. As a field, it’s quite new – there are approximately 100 years of dedicated quantum study and discovery, compared to millennia of classical physics. Perusing the paper and understanding about 2% of it, I just decided that this is what I would study. I wanted to learn about this “entanglement” business – a key concept of quantum physics. The rest is history.

Can you tell me a bit about your PhD pathway? You were a part of the APS Bridge Program at Indiana University – how did the programme help you?

After deciding to pursue a physics degree, I had to pick an academic institution to get said degree. What was news to me was that, for second baccalaureate degrees, funding at a public university was hard to come by. I was looking for universities with a strong optics programme, having decided that quantum optics was for me.

I learned about the Rose-Hulman Institute of Technology, in Terre Haute, Indiana by searching for optical engineering programmes. What I didn’t know was that, in terms of producing top engineers, you’d be hard pressed to find a finer institution. The same can be said for their pure science disciplines, although those disciplines aren’t usually ranked. I reached out to inquire about enrolment, was invited to visit and fell in love with the campus. I was funded and my physics journey began.

Prior to graduation, I was struggling with most of my grad-school applications being denied. I wasn’t the most solid student at Rose (it’s a rigorous place), but I wasn’t a poorly performing student, either. Enter the APS Bridge Program, which focuses on students who, for whatever reason, were having challenges applying to grad school. The programme funded two years of education, wherein the student could have more exposure to coursework (which was just what I needed) or have more opportunity for research, after which they could achieve a master’s degree and continue to a PhD.

I was accepted at a bridge programme site at Indiana University Bloomington. The additional two years allowed for a repeat of key undergraduate courses in the first year, with the second year filled with grad courses. I continued on and obtained my master’s degree. I decided to leave IU to collaborate with a professor at Louisiana State University (LSU) who I had always wanted to work with and had done prior research with. So I transferred to LSU and obtained my PhD, focusing on high-dimensional orbital angular momentum states of light for fibre-based quantum cryptography and communication protocols. Without the Bridge Program, it’s likely that you might not be reading this article.

You then went on to Louisiana State University where, in 2022, you were the first African American woman to complete a PhD in physics – what was that like?

It’s funny, but at the time, no-one was really talking about this. I think, for the individual who has to face various challenges due to race, sexual orientation and preference, gender, immigration status and the like, you just try to take your classes and do your research. But, just by your existence and certain aspects that may come along with that, you are often faced with a decision to advocate for yourself in a space that historically was not curated with you or your value in mind.

Michelle Lollie playing violin on stage for an audience
Beyond beamlines Alongside her days spent in the lab, Michelle Lollie is a keen violinist. (Courtesy: Samuel Cooper/@photoscoops)

So while no-one was going up and down the halls saying “Hey, look at us, we have five Black students in our department!”, most departments would bend over backwards for those diversity numbers. Note that five Black students in a department of well over 100 is nothing to write home about. It should be an order of magnitude higher, with 20–30 Black students at least. This is the sad state of affairs across physics and other sciences: people get excited about one Black student and think that they’re doing something great. But, once I brought this fact to the attention of those in the front office and my adviser, a bit of talk started. Consequently, and fortuitously, the president of the university happened to visit our lab the fall before my graduation. Someone at that event noticed me, a Black woman in the physics department, and reached out to have me participate in several high-profile opportunities within the LSU community. This sparked more interest in my identity as a Black woman in the field; and it turned out that I was the first Black woman who would be getting a PhD from the department, in 2022. I am happy to report that three more Black women have earned degrees (one master’s in medical physics, and two PhDs in physics) since then.

My family and I were featured on LSU socials for the historic milestone, especially thanks to Mimi LaValle, who is the media relations guru for the LSU Physics and Astronomy department. They even shared my grandmother’s experience as a  Black woman growing up in the US during the 1930s, and the juxtaposition of her opportunities versus mine were highlighted. It was a great moment and I’m glad that LSU not only acknowledged this story, but they emphasized and amplified it. I will always be grateful that I was able to hand my doctoral degree to my grandmother at graduation. She passed away in August 2024, but was always proud of my achievements. I was just as proud of her, for her determination to survive. Different times indeed. 

What are some barriers and challenges you have faced through your education and career, if any?

The barriers have mostly been structural, embedded within the culture and fabric of physics. But this has made my dedication to be successful in the field a more unique and customized experience that only those who can relate to my identity will understand. There is a concerted effort to say that science doesn’t see colour, gender, etc., and so these societal aspects shouldn’t affect change within the field. I’d argue that human beings do science, so it is a decidedly “social” science, which is impacted significantly by culture – past and present. In fact, if we had more actual social scientists doing research on effecting change in the field for us physical scientists, the negative aspects of working in the field – as told by those who have lived experience – would be mitigated and true scientific broadening could be achieved.

What were the pitfalls, or stresses, of following this career random walk?

Other than the internal work of recognizing that, on a daily basis, I have to make space for myself in a field that’s not used to me, there hasn’t been anything of the sort. I have definitely had to advocate for myself and my presence within the field. But I love what I do and that I get to explore the mysteries of quantum physics. So, I’m not going anywhere anytime soon. The more space that I create, others can come in and feel just fine.

I want things to be as comfortable as possible for future generations of Black scientists. I am a Black woman, so I will always advocate for Black people within the space. This is unique to the history of the African Diaspora. I often advocate for those with cross-marginalized identities not within my culture, but no-one else has as much incentive to root for Black people but Black people. I urge everyone to do the same in highlighting those in their respective cultures and identities. If not you, then who?

What were the next steps for you after your PhD – how did you decide between staying in academia or pursuing a role in industry?

I always knew I was going to industry. I was actually surprised to learn that many physics graduates plan to go into academia. I started interviewing shortly before graduation; I knew which companies I had on my radar. I applied to them, received several offers, and decided on Quantinuum.

A quantum optics lab bench
Tools of the trade At Quantinuum, Michelle Lollie works on the lasers and optics of quantum computers. (Courtesy: Quantinuum)

You are now an advanced laser scientist with Quantinuum – what does that involve, and what’s a “day in the life” like for you now?

Nowadays, I can be found either doing CAD models of beamlines, or in the lab building said beamlines. This involves a lot of lasers, alignment, testing and validation. It’s so cool to see an optical system that you’ve designed come to life on an optical table. It’s even more satisfying when it is integrated within a full ion-trap system and it works. I love practical work in the lab – when I have been designing a system for too long, I often say “Okay, I’ve been in front of this screen long enough. Time to go get the goggles and get the hands dirty.”

What do you know today, that you wish you knew when you were starting your career?

Had I known what I would have had to go through, I might not have ever done it. So, the ignorance of my path was actually a plus. I had no idea what this road entailed so, although the journey was a course in who-is-Michelle-going-to-be-101, I would wish for the “ignorance is bliss” state – on any new endeavour, even now. It’s in the unknowing that we learn who we are.

Be direct and succinct, and leave no room for speculation about what you are saying

What’s your advice for today’s students hoping to pursue a career in the quantum sector?

I always highlight what I’ve learned from Garfield Warren, a physics professor at Indiana University, and one of my mentors. He always emphasized learning skills beyond science that you’ll need to be successful. Those who work in physics often lack direct communication skills, and there can be a lot of miscommunication. Be direct and succinct, and leave no room for speculation about what you are saying. This skill is key.

Also, learn the specific tools of your trade. If you’re in optics, for example, learn the ins and outs of how lasers work. If you have opportunities to build laser set-ups, do so. Learn what the knobs do. Determine what it takes for you to be confident that the readout data is what you want. You should understand each and every component that relates to work that you are doing. Learn all that you can for each project that you work on. Employers know that they will need to train you on company-specific tasks, but technical acumen is assumed to a point. Whatever the skills are for your area, the more that you understand the minutiae, the better.

Thermometer uses Rydberg atoms to make calibration-free measurements

A new way to measure the temperatures of objects by studying the effect of their black-body radiation on Rydberg atoms has been demonstrated by researchers at the US National Institute of Standards and Technology (NIST). The system, which provides a direct, calibration-free measure of temperature based on the fact that all atoms of a given species are identical, has a systematic temperature uncertainty of around 1 part in 2000.

The black-body temperature of an object is defined by the spectrum of the photons it emits. In the laboratory and in everyday life, however, temperature is usually measured by comparison to a reference. “Radiation is inherently quantum mechanical,” says NIST’s Noah Schlossberger, “but if you go to the store and buy a temperature sensor that measures the radiation via some sort of photodiode, the rate of photons converted into some value of temperature that you see has to be calibrated. Usually that’s done using some reference surface that’s held at a constant temperature via some sort of contact thermometer, and that contact thermometer has been calibrated to another contact thermometer – which in some indirect way has been tied into some primary standard at NIST or some other facility that offers calibration services.” However, each step introduces potential error.

This latest work offers a much more direct way of determining temperature. It involves measuring the black-body radiation emitted by an object directly, using atoms as a reference standard. Such a sensor does not need calibration because quantum mechanics dictates that every atom of the same type is identical. In Rydberg atoms the electrons are promoted to highly excited states. This makes the atoms much larger, less tightly bound and more sensitive to external perturbations. As part of an ongoing project studying their potential to detect electromagnetic fields, the researchers turned their attention to atom-based thermometry. “These atoms are exquisitely sensitive to black-body radiation,” explains NIST’s Christopher Holloway, who headed the work.

Packet of rubidium atoms

Central to the new apparatus is a magneto-optical trap inside a vacuum chamber containing a pure rubidium vapour. Every 300 ms, the researchers load a new packet of rubidium atoms into the trap, cool them to around 1 mK and excite them from the 5S energy level to the 32S Rydberg state using lasers. They then allow them to absorb black-body radiation from the surroundings for around 100 μs, causing some of the 32S atoms to change state. Finally, they apply a strong, ramped electric field, ionizing the atoms. “The higher energy states get ripped off easier than the lower energy states, so the electrons that were in each state arrive at the detector at a different time. That’s how we get this readout that tells us the population in each of the states,” explains Schlossberger, the work’s first author. The researchers can use this ratio to infer the spectrum of the black-body radiation absorbed by the atoms and, therefore, the temperature of the black body itself.
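To see why such Rydberg populations make a sensitive thermometer, note that black-body-driven transition rates scale with the mean photon occupation of the thermal field at the transition frequency. The short Python sketch below is an illustration only – the 130 GHz transition frequency and the linear-scaling reasoning are our assumptions, not figures from the NIST paper:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def mean_photon_number(freq_hz: float, temp_k: float) -> float:
    """Bose-Einstein occupation of a thermal (black-body) field mode."""
    return 1.0 / np.expm1(H * freq_hz / (KB * temp_k))

# Hypothetical microwave transition between neighbouring Rydberg states
# (tens to hundreds of GHz is typical for n ~ 30; the exact value is assumed here).
freq = 130e9  # Hz

for temp in (250.0, 296.0, 300.0, 350.0):
    n_bar = mean_photon_number(freq, temp)
    print(f"T = {temp:5.1f} K  ->  mean photon number ~ {n_bar:6.1f}")

# Black-body-stimulated transfer rates are proportional to n_bar, and in this
# regime n_bar ~ kT/(h*nu), so the population transferred out of the initial
# Rydberg state grows roughly linearly with temperature - which is what lets
# the atoms act as a thermometer.
```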

The researchers calculated the fractional systematic uncertainty of their measurement as 0.006, which corresponds to around 2 K at room temperature. Schlossberger concedes that this sounds relatively unimpressive compared to many commercial thermometers, but he notes that their thermometer measures absolute temperature, not relative temperature. “If I had two skyscrapers next to each other, touching, and they were an inch different in height, you could probably measure that difference to less than a millimetre,” he says, “If I asked you to tell me the total height of the skyscraper, you probably couldn’t.”

One application of their system, the researchers say, could lie in optical clocks, where frequency shifts due to thermal background noise are a key source of uncertainty. At present, researchers have to perform a lot of in situ thermometry to try to infer the black-body radiation experienced by the clock without disturbing the clock itself. Schlossberger says that, in future, one additional laser could potentially allow the creation of Rydberg states in the clock atoms. “It’s sort of designed so that all the hardware is the same as atomic clocks, so without modifying the clock significantly it would tell you the radiation experienced by the same atoms that are used in the clock in the location they’re used.”

The work is described in a paper in Physical Review Research. Atomic physicist Kevin Weatherill of Durham University in the UK says “it’s an interesting paper and I enjoyed reading it”. “The direction of travel is to look for a quantum measurement for temperature – there are a lot of projects going on at NIST and some here in the UK,” he says. He notes, however, that this experiment is highly complex and says “I think at the moment just measuring the width of an atomic transition in a vapour cell [which is broadened by the Doppler effect as atoms move faster] gives you a better bound on temperature than what’s been demonstrated in this paper.”

Ask me anything: Sophie Morley – ‘Active listening is the key to establishing productive research collaborations with our scientific end-users’

What skills do you use every day in your job?

I am one of two co-chairs, along with my colleague Hendrik Ohldag, of the Quantum Materials Research and Discovery Thrust Area at ALS. Among other things, our remit is to advise ALS management on long-term strategy regarding quantum science; to launch and manage beamline development projects that enhance the quantum research capability at ALS; and, more broadly, to establish collaborations with quantum scientists and engineers in academia and industry.

In terms of specifics, the thrust area addresses problems of condensed-matter physics related to spin and quantum properties – for example, in atomically engineered multilayers, 2D materials and topological insulators with unusual electronic structures. As a beamline scientist, active listening is the key to establishing productive research collaborations with our scientific end-users – helping them to figure out the core questions they’re seeking to answer and, by extension, the appropriate experimental techniques to generate the data they need.

The task, always, is to translate external users’ scientific goals into practical experiments that will run reliably on the ALS beamlines. High-level organizational skills, persistence and exhaustive preparation go a long way: it takes a lot of planning and dialogue to ensure scientific users get high-quality experimental results.

What do you like best and least about your job?

A core part of my remit is to foster the collective conversation between ALS staff scientists and the quantum community, demystifying synchrotron science and the capabilities of the ALS with prospective end-users. The outreach activity is exciting and challenging in equal measure – whether that’s initiating dialogue with quantum experts at scientific conferences or making first contact using Teams or Zoom.

Internally, we also track the latest advances in fundamental quantum science and applied R&D. In-house colloquia are mandatory, with guest speakers from the quantum community engaging directly with ALS staff teams to figure out how our portfolio of synchrotron-based techniques – whether spectroscopy, scattering or imaging – can be put to work by users from research or industry. This learning and development programme, in turn, underpins continuous improvement of the beamline support services we offer to all our quantum end-users.

As for downsides: it’s never ideal when a piece of instrumentation suddenly “breaks” on a Friday afternoon. This sort of troubleshooting is probably the part of the job I like least, though it doesn’t happen often and, in any case, is a hit I’m happy to take given the flexibility inherent to my role.

What do you know today that you wish you knew when you were starting out in your career?

It’s still early days, but I guess the biggest lesson so far is to trust in my own specialist domain knowledge and expertise when it comes to engaging with the diverse research community working on quantum materials. My know-how in photon science – from coherent X-ray scattering and X-ray detector technology to in situ magnetic- and electric-field studies and automated measurement protocols – enables visiting researchers to get the most out of their beamtime at ALS.

Fast and predictable: RayStation meets the needs of online adaptive radiotherapy

Radiation therapy is a targeted cancer treatment that’s typically delivered over several weeks, using a plan that’s optimized on a CT scan taken before treatment begins. But during this time, the geometry of the tumour and the surrounding anatomy can vary, with different patients responding in different ways to the delivered radiation. To optimize treatment quality, such changes must be taken into consideration. And this is where adaptive radiotherapy comes into play.

Adaptive radiotherapy uses patient images taken throughout the course of treatment to update the initial plan and compensate for any anatomical variations. By adjusting the daily plan to match the patient’s daily anatomy, adaptive treatments ensure more precise, personalized and efficient radiotherapy, improving tumour control while reducing toxicity to healthy tissues.

The implementation of adaptive radiotherapy is continuing to expand, as technology developments enable adaptive treatments in additional tumour sites. And as more cancer centres worldwide choose this approach, there’s a need for flexible, innovative software to streamline this increasing clinical uptake.

Designed to meet these needs, RayStation – the treatment planning system from oncology software specialist RaySearch Laboratories – makes adaptive radiotherapy faster and easier to implement in clinical practice. The versatile and holistic RayStation software provides all of the tools required to support adaptive planning, today and into the future.

“We need to be fast, we need to be predictable and we need to be user friendly,” says Anna Lundin, technical product manager at RaySearch Laboratories.

Meeting the need for speed

Typically, adaptive radiotherapy uses the cone-beam CT (CBCT) images acquired for daily patient positioning to perform plan adaptation. To fully reflect daily anatomical changes and fit seamlessly into the clinical workflow, this procedure should be performed “online”, with the patient on the treatment table, as opposed to an “offline” approach in which plan adaptation occurs after the patient has left the treatment session. Such online adaptation, however, requires the ability to analyse patient scans and perform adaptive re-planning as rapidly as possible.

To streamline all types of adaptive requirements, online or offline, RayStation incorporates a package of advanced algorithms that perform key tasks – including segmentation, deformable registration, CBCT image enhancement and recontouring – all while taking the previously delivered dose into consideration. By automating all of these steps, RayStation accelerates the replanning process to the speed needed for online adaptation, with the ability to create an adaptive plan in less than a minute.

Anna Lundin
Anna Lundin: “Fast and predictable replanning is crucial to allow us to treat more patients with greater specificity using less clinical resources.” (Courtesy: RaySearch Laboratories)

Central to this process is RayStation’s dose tracking, which uses the daily images to calculate the actual dose delivered to the patient in each fraction. This ability to evaluate treatment progress, both on a daily basis and considering the estimated total dose, enables informed decisions as to whether to replan or not. The software’s flexible workflow allows users to perform daily dose tracking, compare plans with daily anatomical information against the original plans and adapt when needed.

“You can document trigger points for when adaptation is needed,” Lundin explains. “So you can evaluate whether the original plan is still good to go or whether you want to update or adapt the treatment plan to changes that have occurred.”

User friendly

Another challenge when implementing online adaptation is that its time constraints necessitate access to intuitive tools that enable quick decision making. “One of the big challenges with adaptive radiotherapy has been that a lot of the decision making and processes have been done on an ad hoc basis,” says Lundin. “We need to utilize the same protocol-based planning for adaptive as we do for standard treatment planning.”

As such, RaySearch Laboratories has focused on developing software that’s easy to use, efficient and accessible to a large proportion of clinical personnel. RayStation enables clinics to define and validate clinical procedures for a specific patient category in advance, eliminating the need to repeat this each time.

“By doing this, we let the clinicians focus on what they do best – taking responsibility for the clinical decisions – while RayStation focuses on providing all the data that they need to make that possible,” Lundin adds.

Versatile design

Lundin emphasizes that this accelerated adaptive replanning solution is built upon RayStation’s pre-existing comprehensive framework. “It’s not a parallel solution, it’s a progression,” she explains. “That means that all the tools that we have for robust optimization and evaluation, tools to assess biological effects, support for multiple treatment modalities – all that is also available when performing adaptive assessments and adaptive planning.”

This flexibility allows RayStation to support both photon- and ion-based treatments, as well as multiple imaging modalities. “We have built a framework that can be configured for each site and each clinical indication,” says Lundin. “We believe in giving users the freedom to select which techniques and which strategies to employ.”

In particular, adaptive radiotherapy is gaining interest among the proton therapy community. For such highly conformal treatments, it’s even more important to regularly assess the actual delivered dose and ensure that the plan is updated to deliver the correct dose each day. “We have the first clinics using RayStation to perform adaptive proton treatments in an online fashion,” Lundin says.

It’s likely that we will also soon see the emergence of biologically adapted radiotherapy, in which treatments are adapted not just to the patient’s anatomy, but to the tumour’s biological characteristics and biological response. Here again, RayStation’s flexible and holistic architecture can support the replanning needs of this advanced treatment approach.

Predictable performance

Lundin points out that the progression towards online adaptation has been valuable for radiotherapy as a whole. “A lot of the improvements required to handle the time-critical procedures of online adaptive are of large benefit to all adaptive assessments,” she explains. “Fast and predictable replanning is crucial to allow us to treat more patients with greater specificity using less clinical resources. I see it as strictly necessary for online adaptive, but good for all.”

Artificial intelligence (AI) is not only a key component in enhancing the speed and consistency of treatment planning (with tools such as deep learning segmentation and planning), but also enables the handling of massive data sets, which in turn allows users to improve the treatment “intents” that they prescribe.

AI plays a central role in RayStation
Key component AI plays a central role in enabling RayStation to deliver predictable and consistent treatment planning, with deep learning segmentation (shown in the image) being an integral part. (Courtesy: RaySearch Laboratories)

Learning more about how the delivered dose correlates with clinical outcome provides important feedback on the performance and effectiveness of current adaptive processes. This will help optimize and personalize future treatments and, ultimately, make the adaptive treatments more predictable and effective as a whole.

Lundin explains that full automation is the only way to generate the large amount of data in the predictable and consistent manner required for such treatment advancements, noting that it is not possible to achieve this manually.

RayStation’s ability to preconfigure and automate all of the steps needed for daily dose assessment enables these larger-scale dose follow-up clinical studies. The treatment data can be combined with patient outcomes, with AI employed to gain insight into how to best design treatments or predict how a tumour will respond to therapy.

“I look forward to seeing more outcome-related studies of adaptive radiotherapy, so we can learn from each other and have more general recommendations, as has been done in the field of standard radiotherapy planning,” says Lundin. “We need to learn and we need to improve. I think that is what adaptive is all about – to adapt each person’s treatment, but also adapt the processes that we use.”

Future evolution

Looking to the future, adaptive radiotherapy is expected to evolve rapidly, bolstered by ongoing advances in imaging techniques and increasing data processing speeds. RayStation’s machine learning-based segmentation and plan optimization algorithms will continue to play a central role in supporting this evolution, with AI making treatment adaptations more precise, personalized and efficient, enhancing the overall effectiveness of cancer treatment.

“RaySearch, with the foundation that we have in optimization and advancing treatment planning and workflows, is very well equipped to take on the challenges of these future developments,” Lundin adds. “We are looking forward to the improvements to come and determined to meet the expectations with our holistic software.”

The post Fast and predictable: RayStation meets the needs of online adaptive radiotherapy appeared first on Physics World.

Enhancing SRS/SBRT accuracy with RTsafe QA solutions: An overall experience

PRIME SBRT

This webinar will present the overall experience of a radiotherapy department that utilizes RTsafe QA solutions, including the RTsafe Prime and SBRT anthropomorphic phantoms for intracranial stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) applications, respectively, as well as the remote dosimetry services offered by RTsafe. The session will explore how these phantoms can be employed for end-to-end QA measurements and dosimetry audits in both conventional linacs and a Unity MR-Linac system. Key features of RTsafe phantoms, such as their compatibility with RTsafe’s remote dosimetry services for point (OSLD, ionization chamber), 2D (films), and 3D (gel) dosimetry, will be discussed. These capabilities enable a comprehensive SRS/SBRT accuracy evaluation across the entire treatment workflow – from imaging and treatment planning to dose delivery.

Christopher W Schneider
Christopher W Schneider

Christopher Schneider is the adaptive radiotherapy technical director at Mary Bird Perkins Cancer Center and serves as an adjunct assistant professor in the Department of Physics and Astronomy at Louisiana State University in Baton Rouge. Under his supervision, Mary Bird’s MR-guided adaptive radiotherapy program has provided treatment to more than 150 patients in its first year alone. Schneider’s research group focuses on radiation dosimetry, late effects of radiation, and the development of radiotherapy workflow and quality-assurance enhancements.

The post Enhancing SRS/SBRT accuracy with RTsafe QA solutions: An overall experience appeared first on Physics World.

PLANCKS physics quiz – the solutions

Question 1: 4D Sun

Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]

Boltzmann constant, kB = 1.38 × 10⁻²³ J K⁻¹

Speed of light, c = 3 × 10⁸ m s⁻¹

Solution

Black-body radiation, spectral energy density: ε(ν) dν = E ρ(ν) n(ν) dν

The photon energy is E = hν, where h is Planck’s constant and ν is the photon frequency.

The density of states is ρ(ν) = Aν^(n−1), where A is a constant independent of the frequency and the ν^(n−1) term comes from the scaling of the surface area of an n-dimensional sphere.

The Bose–Einstein distribution,

n(\nu) = \frac{1}{e^{h\nu/kT} - 1}

where k is the Boltzmann constant and T is the temperature.

We let

x = \frac{h\nu}{kT}

and get

\varepsilon(x) \propto \frac{x^n}{e^x - 1}

We do not need the constant of proportionality (which is not simple to calculate in 4D) to find the maximum of ε (x). Working out the constant just tells us how tall the peak is, but we are interested in where the peak is, not the total radiation.

\frac{d\varepsilon}{dx} \propto \frac{n x^{n-1}\left(e^x - 1\right) - x^n e^x}{\left(e^x - 1\right)^2}

We set this equal to zero for the maximum of the distribution,

\frac{x^{n-1} e^x}{\left(e^x - 1\right)^2}\left[n\left(1 - e^{-x}\right) - x\right] = 0

This yields x = n(1 − e^(−x)), where

x = \frac{h\nu_\mathrm{max}}{kT}

and we can relate this to the wavelength via

\lambda_\mathrm{max} = \frac{c}{\nu_\mathrm{max}}

where c is the speed of light.

This equation has the solution x = n + W(−ne^(−n)), where W is the Lambert W function, i.e. z = W(y) solves ze^z = y (although there is a subtlety about which branch of the function to take). This closed form is not especially useful on its own, though. One can instead solve the equation numerically using bisection, Newton–Raphson or fixed-point iteration. Alternatively, one could notice that as the number of dimensions increases, e^(−x) becomes small, so to leading approximation x ≈ n. One can do a little better by iterating this once, giving x ≈ n − ne^(−n), which is what we will use. Note the second iteration yields

x \approx n - n\,e^{-\left(n - n e^{-n}\right)}

Number of dimensions, n | Numerical solution | Approximation
2                       | 1.594              | 1.729
3                       | 2.821              | 2.851
4 (the one we want)     | 3.921              | 3.927
5                       | 4.965              | 4.966
6                       | 5.985              | 5.985
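
As a quick check, here is a minimal Python sketch (not part of the original solution) that reproduces the table above: it solves x = n(1 − e^(−x)) by fixed-point iteration and compares the result with the first-iteration approximation x ≈ n − ne^(−n).

```python
import math

def x_max(n, tol=1e-12, max_iter=200):
    """Solve x = n*(1 - exp(-x)) by fixed-point iteration."""
    x = n  # leading-order guess x ~ n
    for _ in range(max_iter):
        x_new = n * (1.0 - math.exp(-x))
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

for n in range(2, 7):
    print(f"n = {n}: numerical = {x_max(n):.3f}, approximation = {n - n*math.exp(-n):.3f}")
```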

Using the result above,

\lambda_\mathrm{max} = \frac{hc}{kT\,x_\mathrm{max}} = \frac{6.63\times 10^{-34}\,\cdot\, 3\times 10^{8}}{1.38\times 10^{-23}\,\cdot\, 6\times 10^{3}\,\cdot\, 3.9} \approx 616\ \mathrm{nm}

A wavelength of 616 nm is in the middle of the visible spectrum, so the Sun will look white with a green-blue tint. Note that we have used T = 6000 K for the temperature here, as given in the question.

It would also be valid to look at ε(λ) dλ instead of ε(ν) dν.

Question 2: Heavy stuff

In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]

Solution

First we will set up the equations of motion for our system. We will set one mass to be at position −x and the other to be at x, so the masses are at a distance of 2x from each other. Starting from Newton’s law of gravity:

F = -\frac{G m^2}{(2x)^2}

we can then use Newton’s second law to rewrite the LHS,

m\ddot{x} = -\frac{G m^2}{4x^2}

which we can simplify to

\ddot{x} = -\frac{G m}{4x^2}

It is important that you get the right factor here depending on your choice for the particle coordinates at the start. Note there are other methods of getting this point, e.g. reduced mass.

We can now solve the second order ODE above. We will not show the whole process here but present the starting point and key results. We can write the acceleration in terms of the velocity. The initial velocity is zero and the initial position

xi=d2

So,

v\frac{dv}{dx} = -\frac{Gm}{4x^2} \quad\Rightarrow\quad \int_0^{v} v'\,\mathrm{d}v' = -\frac{Gm}{4}\int_{x_i}^{x} \frac{\mathrm{d}x'}{x'^2}

and once the integrals are solved we can rearrange for the velocity,

v = \frac{dx}{dt} = -\sqrt{\frac{Gm}{2}\left(\frac{1}{x} - \frac{1}{x_i}\right)}

Now we can form an expression for the total time taken for the masses to meet in the middle,

T = \sqrt{\frac{2}{Gm}}\int_0^{x_i} \frac{\mathrm{d}x}{\sqrt{\frac{1}{x} - \frac{1}{x_i}}}

There are quite a few steps involved in solving this integral; for these solutions we shall make use of the following result (but do attempt to solve it for yourselves in full).

\int_0^1 \sqrt{\frac{y}{1-y}}\,\mathrm{d}y = \sin^{-1}(1) = \frac{\pi}{2}
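
One way to obtain this result (a sketch, not spelled out in the original solution): substitute y = sin²φ, so that dy = 2 sin φ cos φ dφ and √(y/(1−y)) = tan φ, giving

\int_0^{\pi/2} \tan\varphi \cdot 2\sin\varphi\cos\varphi \,\mathrm{d}\varphi = \int_0^{\pi/2} \left(1 - \cos 2\varphi\right) \mathrm{d}\varphi = \frac{\pi}{2}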

Hence,

T = \frac{\pi}{2}\sqrt{\frac{2 x_i^3}{Gm}} = \frac{\pi}{2}\sqrt{\frac{d^3}{4Gm}}

We can now rearrange for G and substitute in the values given in the question (don’t forget to convert the time into seconds).

G = \frac{d^3}{4m}\left(\frac{\pi}{2T}\right)^2 = 6.67\times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}
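
As a quick numerical check, here is a minimal Python sketch (not part of the original solution) that evaluates this expression for the values given in the question.

```python
import math

d = 1.0                  # initial separation of the two masses, m
m = 1.0                  # mass of each particle, kg
T = (26 * 60 + 42) * 60  # 26 hours 42 minutes, converted to seconds

G = (d**3 / (4 * m)) * (math.pi / (2 * T))**2
print(f"G = {G:.2e} m^3 kg^-1 s^-2")  # ~6.7e-11
```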

This is the generally accepted value for the gravitational constant of our universe as well.

Question 3: Just like clockwork

Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.

Simplified schematic of a pendulum clock mechanism
1 Tick tock Simplified schematic of a pendulum clock mechanism. When the pendulum swings one way (a), the escapement releases the gear attached to the hanging mass and allows it to fall. When the pendulum swings the other way (b) the escapement stops the gear attached to the mass moving so the mass stays in place. (Courtesy: Katherine Skipper/IOP Publishing)

A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by

T = 2\pi\sqrt{\frac{l}{g}}

where g is the acceleration due to gravity.

Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.

Radius of the Earth, R = 6.3781 × 10⁶ m

Period of one Earth day, τ_0 = 8.64 × 10⁴ s

How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]

Solution

We will write the period of oscillation of the pendulum at the surface of the Earth to be

T_0 = 2\pi\sqrt{\frac{l}{g_0}}.

At a height h above the surface of the Earth the period of oscillation will be

T_h = 2\pi\sqrt{\frac{l}{g_h}},

where g_0 and g_h are the acceleration due to gravity at the surface of the Earth and at a height h above it, respectively.

We can define τ_0 to be the total duration of the day, which is 8.64 × 10⁴ seconds and equal to N complete oscillations of the pendulum at the surface. The lag is then τ_h, which will equal N times the difference in one period between the two clocks, τ_h = NΔT, where ΔT = (T_h − T_0). We can now take the ratio of the lag over the day to the total duration of the day:

\frac{\tau_h}{\tau_0} = \frac{N\left(T_h - T_0\right)}{N T_0} \quad\Rightarrow\quad \tau_h = \tau_0\,\frac{T_h - T_0}{T_0} = \tau_0\left(\frac{T_h}{T_0} - 1\right)

Then, by substituting in the expressions we have for the period of a pendulum at the surface and at height h, we can write this in terms of the acceleration due to gravity,

\tau_h = \tau_0\left(\sqrt{\frac{g_0}{g_h}} - 1\right)

[Award 1 mark for finding the ratio of the lag over the day and the total period of the day.]

The acceleration due to gravity at the Earth’s surface is

g_0 = \frac{GM}{R^2}

where G is the universal gravitational constant, M is the mass of the Earth and R is the radius of the Earth. At an altitude h, it will be

g_h = \frac{GM}{(R+h)^2}

[Award 1 mark for finding the expression for the acceleration due to gravity at height h.]

Substituting into our expression for the lag, we get:

\tau_h = \tau_0\left(\sqrt{\frac{(R+h)^2}{R^2}} - 1\right) = \tau_0\left(\sqrt{1 + \frac{2h}{R} + \frac{h^2}{R^2}} - 1\right) = \tau_0\left(\frac{\sqrt{R^2 + 2hR + h^2}}{R} - 1\right) = \tau_0\left(\frac{R+h}{R} - 1\right)

This simplifies to an expression for the lag over a day. We can then substitute in the given values to find,

\tau_h = \tau_0\frac{h}{R} = \frac{8.64\times 10^{4}\ \mathrm{s}\,\cdot\, 300\ \mathrm{m}}{6.3781\times 10^{6}\ \mathrm{m}} = 4.064\ \mathrm{s} \approx 4\ \mathrm{s}

[Award 2 marks for completing the simplification of the ratio and finding the lag to be ≈ 4 s.]
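
As a quick numerical check, here is a minimal Python sketch (not part of the original solution) of the final step, using the values given in the question.

```python
tau_0 = 8.64e4   # duration of one day, s
R = 6.3781e6     # radius of the Earth, m
h = 100 * 3.0    # hundredth floor at 3 m per storey, m

tau_h = tau_0 * h / R
print(f"lag over one day = {tau_h:.3f} s")  # ~4.06 s
```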

Question 4: Quantum stick

Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvin, so the stick is in its ground state.

Moment of inertia for a rod,

I = \frac{1}{3} m l^2

where m is the mass and l is the length.

Uncertainty principle,

\Delta x\,\Delta p \geq \frac{\hbar}{2}

There are several possible approximations and simplifications you could make in solving this problem, including:

sinθ ≈ θ for small θ

\cosh^{-1} x = \ln\left(x + \sqrt{x^2 - 1}\right)

and

\sinh^{-1} x = \ln\left(x + \sqrt{x^2 + 1}\right)

Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]

Hint: Consider the two possible initial conditions that arise from the uncertainty principle.

Solution

We can imagine this as an inverted pendulum, with gravity acting at the centre of mass, a distance l/2 from the pivot, with the stick at an angle θ from the unstable equilibrium point.

[Award 1 mark for a suitable diagram of the system.]

We must now find the equations of motion of the system. For this we can use Newton’s second law F=ma in its rotational form τ = Iα (torque = moment of inertia × angular acceleration). We have another equation for torque we can use as well

\boldsymbol{\tau} = \mathbf{r}\times\mathbf{F} = rF\sin\theta\,\hat{\mathbf{n}}

where r is the distance from the pivot to the centre of mass, l/2, and F is the force, which in this case is gravity, mg. We can then equate these, giving

rF\sin\theta = I\alpha

Substituting in the given moment of inertia of the stick and that the angular acceleration

\alpha = \frac{\mathrm{d}^2\theta}{\mathrm{d}t^2} = \ddot{\theta}

We can cancel a few things and rearrange to get a differential equation of the form:

\ddot{\theta} - \frac{3g}{2l}\sin\theta = 0

We can then take the small-angle approximation sin θ ≈ θ, resulting in

\ddot{\theta} - \frac{3g}{2l}\theta = 0

[Award 2 marks for finding the equation of motion for the system and using the small angle approximation.]

We solve this with the ansatz θ = Ae^(ωt) + Be^(−ωt), where we have chosen

\omega^2 = \frac{3g}{2l}

We can clearly see that this will satisfy the differential equation

\dot{\theta} = \omega A e^{\omega t} - \omega B e^{-\omega t} \quad\text{and}\quad \ddot{\theta} = \omega^2 A e^{\omega t} + \omega^2 B e^{-\omega t}

Now we can apply initial conditions to find A and B, by looking at the two cases from the uncertainty principle

\Delta x\,\Delta p = \Delta x\, m\,\Delta v \geq \frac{\hbar}{2}

Case 1: The stick is at an angle but not moving

At t = 0, θ = Δθ

θ = Δθ = A + B

At t = 0, θ̇ = 0

\dot{\theta} = 0 = \omega A e^{\omega\cdot 0} - \omega B e^{-\omega\cdot 0} \;\Rightarrow\; A = B

This implies Δθ = 2A and we can then find

A = \frac{\Delta\theta}{2} = \frac{2\Delta x}{2l} = \frac{\Delta x}{l}

So we can now write

\theta = A\left(e^{\omega t} + e^{-\omega t}\right) = \frac{\Delta x}{l}\left(e^{\omega t} + e^{-\omega t}\right) \quad\text{or}\quad \theta = \frac{2\Delta x}{l}\cosh\omega t

Case 2: The stick is upright but moving

At t = 0, θ = 0

This condition gives us A = −B.

At t = 0, θ̇ = 2Δv/l

This initial condition comes from the relationship between the tangential velocity Δv, the distance from the pivot to the centre of mass, l/2, and the angular velocity θ̇, i.e. Δv = (l/2)θ̇. Using the above initial condition gives θ̇(0) = 2ωA, so A = Δv/(ωl).

We can now write

\theta = A\left(e^{\omega t} - e^{-\omega t}\right) = \frac{\Delta v}{\omega l}\left(e^{\omega t} - e^{-\omega t}\right) \quad\text{or}\quad \theta = \frac{2\Delta v}{\omega l}\sinh\omega t

[Award 4 marks for finding the two expressions for θ by using the two cases of the uncertainty principle.]

Now there are a few ways we can finish off this problem; we shall look at three different ways. In each case, when the stick has fallen to the ground, θ(t_f) = π/2.

Method 1

Take θ = (2Δx/l) cosh ωt and θ = (2Δv/(ωl)) sinh ωt, use θ(t_f) = π/2, then rearrange for t_f in both cases. We have

t_f = \frac{1}{\omega}\cosh^{-1}\left(\frac{\pi l}{4\Delta x}\right) \quad\text{and}\quad t_f = \frac{1}{\omega}\sinh^{-1}\left(\frac{\pi \omega l}{4\Delta v}\right)

Look at the expressions for cosh⁻¹x and sinh⁻¹x given in the question. They are almost identical, so we can approximate the two arguments to be equal, and we find

\Delta x = \frac{\Delta v}{\omega}

We can then substitute in the uncertainty principle ΔxΔp = ℏ/2, i.e. Δv = ℏ/(2mΔx), and write Δx = √(ℏ/(2mω)), which we can put back into our arccosh expression (or do the same for Δv and put it into the arcsinh).

t_f = \frac{1}{\omega}\cosh^{-1}\left(\frac{\pi l}{4\Delta x}\right)

where \Delta x = \sqrt{\frac{\hbar}{2m\omega}} and \omega = \sqrt{\frac{3g}{2l}}.

Method 2

In this next method, when you get to the inverse hyperbolic functions, you can expand their natural-log forms in the limit of large argument. To first order both functions give ln 2x; we can then equate the arguments, find Δx or Δv in terms of the other, and use the uncertainty principle. This gives the time taken as

t_f = \frac{1}{\omega}\ln\left(\frac{\pi l}{2\Delta x}\right)

where \Delta x = \sqrt{\frac{\hbar}{2m\omega}} and \omega = \sqrt{\frac{3g}{2l}}.

Method 3

Rather than using hyperbolic functions, you could do something like the above and expand the exponentials in the two expressions for t_f, or we could make life even easier and do the following.

Disregard the e^(−ωt) terms, as they will be much smaller than the e^(ωt) terms. Equate the two expressions at θ(t_f) = π/2 and then take natural logs, once again arriving at an expression of

t_f = \frac{1}{\omega}\ln\left(\frac{\pi l}{2\Delta x}\right)

where \Delta x = \sqrt{\frac{\hbar}{2m\omega}} and \omega = \sqrt{\frac{3g}{2l}}.

This method effectively sets B = 0 when applying the initial conditions.

[Award 2 marks for reaching an expression for t using one of the methods above or a suitable alternative that gives the correct units for time.]

Then, by using one of the expressions above for time, substitute in the values and find that t = 10.58 seconds.

[Award 1 mark for finding the correct time value of t = 10.58 seconds.]
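
As a final numerical check, here is a minimal Python sketch (not part of the original solution) that evaluates the Method 1 expression; it assumes g = 9.81 m s⁻², which is not stated explicitly in the question.

```python
import math

hbar = 1.055e-34  # reduced Planck constant, J s
m = 1.0           # mass of the stick, kg
l = 1.0           # length of the stick, m
g = 9.81          # acceleration due to gravity, m/s^2 (assumed)

omega = math.sqrt(3 * g / (2 * l))
dx = math.sqrt(hbar / (2 * m * omega))            # minimum-uncertainty displacement
t_f = math.acosh(math.pi * l / (4 * dx)) / omega  # Method 1 expression for the fall time
print(f"t_f = {t_f:.2f} s")                       # ~10.6 s
```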

  • If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk

The post PLANCKS physics quiz – the solutions appeared first on Physics World.

Mark Thomson looks to the future of CERN and particle physics

This episode of the Physics World Weekly podcast features Mark Thomson, who will become the next director-general of CERN in January 2026. In a conversation with Physics World’s Michael Banks, Thomson shares his vision of the future of the world’s preeminent particle physics lab, which is home to the Large Hadron Collider (LHC).

They chat about the upcoming high-luminosity upgrade to the LHC (HL-LHC), which will be completed in 2030. The interview explores long-term strategies for particle physics research and the challenges of managing large international scientific organizations. Thomson also looks back on his career in particle physics and his involvement with some of the field’s biggest experiments.

 

 

This podcast is supported by Atlas Technologies, specialists in custom aluminium and titanium vacuum chambers as well as bonded bimetal flanges and fittings used everywhere from physics labs to semiconductor fabs.

The post Mark Thomson looks to the future of CERN and particle physics appeared first on Physics World.

Filter inspired by deep-sea sponge cleans up oil spills

Oil spills can pollute large volumes of surrounding water – thousands of times greater than the spill itself – causing long-term economic, environmental, social and ecological damage. Effective methods for in situ capture of spilled oil are thus essential to minimize contamination from such disasters.

Many oil spill cleanup technologies, however, exhibit poor hydrodynamic stability under complex flow conditions, which leads to poor oil-capture efficiency. To address this shortfall, researchers from Harbin Institute of Technology in China have come up with a new approach to oil cleanup using a vortex-anchored filter (VAF).

“Since the 1979 Atlantic Empress disaster, interception and adsorption have been the primary methods for oil spill recovery, but these are sensitive to water-flow fluctuation,” explains lead author Shijie You. Oil-in-water emulsions from leaking pipelines and offshore industrial discharge are particularly challenging, says You, adding that “these problems inspire us to consider how we can address hydrodynamic stability of oil-capture devices under turbulent conditions”.

Inspired by the natural world

You and colleagues believe that the answers to oil spill challenges could come from nature – arguably the world’s greatest scientist. They found that the deep-sea glass sponge E. aspergillum, which lives at depths of up to 1000 m in the Pacific Ocean, has an excellent ability to filter feed with a high effectiveness, selectivity and robustness, and that its food particles share similarities with oil droplets.

The anatomical structure of E. aspergillum – also known as Venus’ flower basket – provided inspiration for the researchers to design their VAF. By mimicking the skeletal architecture and filter feeding patterns of the sponge, they created a filter that exhibited a high mass transfer and hydrodynamic stability in cleaning up oil spills under turbulent flow.

“The E. aspergillum has a multilayered skeleton–flagellum architecture, which creates 3D streamlines with frequent collision, deflection, convergence and separation,” explains You. “This can dissipate macro-scale turbulent flows into small-scale swirling flow patterns called low-speed vortical flows within the body cavity, which reduces hydrodynamic load and enhances interfacial mass transfer.”

For the sponges, this allows them to maintain a high mechanical stability while absorbing nutrients from the water. The same principles can be applied to synthetic materials for cleaning up oil spills.

Design of the vortex-anchored filter
VAF design Skeletal motif of E. aspergillum and (right column) front and top views of the VAF with a bio-inspired hollow cylinder skeleton and flagellum adsorbent. (Courtesy: Y Yu et al. Nat. Commun. 10.1038/s41467-024-55587-y)

The VAF is a synthetic form of the sponge’s architecture and, according to You, “is capable of transferring kinematic energy from an external water flow into multiple small-scale low-speed vortical flows within the body cavity to enhance hydrodynamic stability and oil capture efficiency”.

The tubular outer skeleton of the VAF comprises a helical ridge and chequerboard lattice. It is this skeleton that creates a slow vortex field inside the cavity and enables mass transfer of oil during the filtering process. Once the oil has been forced into the filter, the internal area – composed of flagellum-shaped adsorbent materials – provides a large interfacial area for oil adsorption.

Using the VAF to clean up oil spills

The researchers used their nature-inspired VAF to clean up oil spills under complex hydrodynamic conditions. You states that “the VAF can retain the external turbulent-flow kinetic energy in the low-speed vortical flows – with a small Kolmogorov microscale (85 µm) [the size of the smallest eddy in a turbulent flow] – inside the cavity of the skeleton, leading to enhanced interfacial mass transfer and residence time”.

“This led to an improvement in the hydrodynamic stability of the filter compared to other approaches by reducing the Reynolds stresses in nearly quiescent wake flows,” You explains. The filter was also highly resistant to bending stresses caused at the boundary of the filter when trying to separate viscous fluids. When put into practice, the VAF was able to capture more than 97% of floating, underwater and emulsified oils, even under strong turbulent flow.

When asked how the researchers plan to improve the filter further, You tells Physics World that they “will integrate the VAF with photothermal, electrothermal and electrochemical modules for environmental remediation and resource recovery”.

“We look forward to applying VAF-based technologies to solve sea pollution problems with a filter that has an outstanding flexibility and adaptability, easy-to-handle operability and scalability, environmental compatibility and life-cycle sustainability,” says You.

The research is published in Nature Communications.

The post Filter inspired by deep-sea sponge cleans up oil spills appeared first on Physics World.

Anomalous Hall crystal is made from twisted graphene

A topological electronic crystal (TEC) in which the quantum Hall effect emerges without the need for an external magnetic field has been unveiled by an international team of physicists. Led by Josh Folk at the University of British Columbia, the group observed the effect in a stack of bilayer and trilayer graphene that is twisted at a specific angle.

In a classical electrical conductor, the Hall voltage and its associated resistance appear perpendicular both to the direction of an applied electrical current and an applied magnetic field. A similar effect is also seen in 2D electron systems that have been cooled to ultra-low temperatures. But in this case, the Hall resistance becomes quantized in discrete steps.

This quantum Hall effect can emerge in electronic crystals, also known as Wigner crystals. These are arrays of electrons that are held in place by their mutual repulsion. Some researchers have considered the possibility of a similar effect occurring in structures called TECs, but without an applied magnetic field. This is called the “quantum anomalous Hall effect”.

Anomalous Hall crystal

“Several theory groups have speculated that analogues of these structures could emerge in quantized anomalous Hall systems, giving rise to a type of TEC termed an ‘anomalous Hall crystal’,” Folk explains. “This structure would be insulating, due to a frozen-in electronic ordering in its interior, with dissipation-free currents along the boundary.”

For Folk’s team, the possibility of anomalous Hall crystals emerging in real systems was not the original focus of their research. Initially, a team at the University of Washington had aimed to investigate the diverse phenomena that emerge when two or more flakes of graphene are stacked on top of each other, and twisted relative to each other at different angles.

While many interesting behaviours emerged from these structures, one particular stack caught the attention of Washington’s Dacen Waters, which inspired his team to get in touch with Folk and his colleagues in British Columbia.

In a vast majority of cases, the twisted structures studied by the team had moiré patterns that were very disordered. Moiré patterns occur when two lattices are overlaid and rotated relative to each other. Yet out of tens of thousands of permutations of twisted graphene stacks, one structure appeared to be different.

Exceptionally low levels of disorder

“One of the stacks seemed to have exceptionally low levels of disorder,” Folk describes. “Waters shared that one with our group to explore in our dilution refrigerator, where we have lots of experience measuring subtle magnetic effects that appear at a small fraction of a degree above absolute zero.”

As they studied this highly ordered structure, the team found that its moiré pattern helped to modulate the system’s electronic properties, allowing a TEC to emerge.

“We observed the first clear example of a TEC, in a device made up of bilayer graphene stacked atop trilayer graphene with a small, 1.5° twist,” Folk explains. “The underlying topology of the electronic system, combined with strong electron-electron interactions, provide the essential ingredients for the crystal formation.”

After decades of theoretical speculation, Folk, Waters and colleagues have identified an anomalous Hall crystal, where the quantum Hall effect emerges from an in-built electronic structure, rather than an applied magnetic field.

Beyond confirming the theoretical possibility of TECs, the researchers are hopeful that their results could lay the groundwork for a variety of novel lines of research.

“One of the most exciting long-term directions this work may lead is that the TEC by itself – or perhaps a TEC coupled to a nearby superconductor – may host new kinds of particles,” Folk says. “These would be built out of the ‘normal’ electrons in the TEC, but totally unlike them in many ways: such as their fractional charge, and properties that would make them promising as topological qubits.”

The research is described in Nature.

The post Anomalous Hall crystal is made from twisted graphene appeared first on Physics World.

Imaging reveals how microplastics may harm the brain

Pollution from microplastics – small plastic particles less than 5 mm in size – poses an ongoing threat to human health. Independent studies have found microplastics in human tissues and within the bloodstream. And as blood circulates throughout the body and through vital organs, these microplastics can reach critical regions and lead to tissue dysfunction and disease. Microplastics can also cause functional irregularities in the brain, but exactly how they exert neurotoxic effects remains unclear.

A research collaboration headed up at the Chinese Research Academy of Environmental Sciences and Peking University has shed light on this conundrum. In a series of cerebral imaging studies reported in Science Advances, the researchers tracked the progression of fluorescent microplastics through the brains of mice. They found that microplastics entering the bloodstream become engulfed by immune cells, which then obstruct blood vessels in the brain and cause neurobehavioral abnormalities.

“Understanding the presence and the state of microplastics in the blood is crucial. Therefore, it is essential to develop methods for detecting microplastics within the bloodstream,” explains principal investigator Haipeng Huang from Peking University. “We focused on the brain due to its critical importance: if microplastics induce lesions in this region, it could have a profound impact on the entire body. Our experimental technology enables us to observe the blood vessels within the brain and detect microplastics present in these vessels.”

In vivo imaging

Huang and colleagues developed a microplastics imaging system by integrating a two-photon microscopy system with fluorescent plastic particles and demonstrated that it could image brain blood vessels in awake mice. They then fed five mice with water containing 5-µm diameter fluorescent microplastics. After a couple of hours, fluorescence images revealed microplastics within the animals’ cerebral vessels.

The microplastic flash
Lightning bolt The “MP-flash” observed as two plastic particles rapidly fly through the cerebral blood vessels. (Courtesy: Haipeng Huang)

As they move through rapidly flowing blood, the microplastics generate a fluorescence signal resembling a lightning bolt, which the researchers call a “microplastic flash” (MP-flash). This MP-flash was observed in four of the mice, with the entire MP-flash trajectory captured in a single imaging frame of less than 208 ms.

Three hours after administering the microplastics, the researchers observed fluorescent cells in the bloodstream. The signals from these cells were of comparable intensity to the MP-flash signal, suggesting that the cells had engulfed microplastics in the blood to create microplastic-labelled cells (MPL-cells). The team note that the microplastics did not directly attach to the vessel wall or cross into brain tissue.

To test this idea further, the researchers injected microplastics directly into the bloodstream of the mice. Within minutes, they saw the MP-flash signal in the brain’s blood vessels, and roughly 6 min later MPL-cells appeared. No fluorescent cells were seen in non-treated mice. Flow cytometry of mouse blood after microplastics injection revealed that the MPL-cells, which were around 21 µm in diameter, were immune cells, mostly neutrophils and macrophages.

Tracking these MPL-cells revealed that they sometimes became trapped within a blood vessel. Some cells exited the imaging field following a period of obstruction while others remained in cerebral vessels for extended durations, in some instances for nearly 2.5 h of imaging. The team also found that one week after injection, the MPL-cells had still not cleared, although the density of blockages was much reduced.

“[While] most MPL-cells flow rapidly with the bloodstream, a small fraction become trapped within the blood vessels,” Huang tells Physics World. “We provide an example where an MPL-cell is trapped at a microvascular turn and, after some time, is fortunate enough to escape. Many obstructed cells are less fortunate, as the blockage may persist for several weeks. Obstructed cells can also trigger a crash-like chain reaction, resulting in several MPL-cells colliding in a single location and posing significant risks.”

The MPL-cell blockages also impeded blood flow in the mouse brain. Using laser speckle contrast imaging to monitor blood flow, the researchers saw reduced perfusion in the cerebral cortical vessels, notably at 30 min after microplastics injection and particularly affecting smaller vessels.

Laser speckle contrast images showing blood flow in the mouse brain
Reduced blood flow These laser speckle contrast images show blood flow in the mouse brain at various times after microplastics injection. The images indicate that blockages of microplastic-labelled cells inhibit perfusion in the cerebral cortical vessels. (Courtesy: Huang et al. Sci. Adv. 11 eadr8243 (2025))

Changing behaviour

Lastly, Huang and colleagues investigated whether the reduced blood supply to the brain caused by cell blockages caused behavioural changes in the mice. In an open-field experiment (used to assess rodents’ exploratory behaviour) mice injected with microplastics travelled shorter distances at lower speeds than mice in the control group.

The Y-maze test for assessing memory also showed that microplastics-treated mice travelled smaller total distances than control animals, with a significant reduction in spatial memory. Tests to evaluate motor coordination and endurance revealed that microplastics additionally inhibited motor abilities. By day 28 after injection, these behavioural impairments were restored, corresponding with the observed recovery of MPL-cell obstruction in the cerebral vasculature at 28 days.

The researchers conclude that their study demonstrates that microplastics harm the brain indirectly – via cell obstruction and disruption of blood circulation – rather than by directly penetrating tissue. They emphasize, however, that this mechanism may not necessarily apply to humans, who have a roughly 1200 times greater circulating blood volume than mice and significantly different vascular diameters.

“In the future, we plan to collaborate with clinicians,” says Huang. “We will enhance our imaging techniques for the detection of microplastics in human blood vessels, and investigate whether ‘MPL-cell-car-crash’ happens in human. We anticipate that this research will lead to exciting new discoveries.”

Huang emphasizes how the use of fluorescent microplastic imaging technology has fundamentally transformed research in this field over the past five years. “In the future, advancements in real-time imaging of depth and the enhanced tracking ability of microplastic particles in vivo may further drive innovation in this area of study,” he says.

The post Imaging reveals how microplastics may harm the brain appeared first on Physics World.

What ‘equity’ really means for physics

If you have worked in a university, research institute or business during the past two decades you will be familiar with the term equality, diversity and inclusion (EDI). There is likely to be an EDI strategy that includes measures and targets to nurture a workforce that looks more like the wider population and a culture in which everyone can thrive. You may find a reasoned business case for EDI, which extends beyond the organization’s legal obligations, to reflect and understand the people that you work with.

Look more closely and it is possible that the “E” in EDI is not actually equality, but rather equity. Equity is increasingly being used as a more active commitment, not least by the Institute of Physics, which publishes Physics World.  How, though, is equity different to equality? What is causing this change of language and will it make any difference in practice?

These questions have become more pressing as discussions around equality and equity have become entwined in the culture wars. This is a particularly live issue in the US, where Donald Trump, at the start of his second term as president, has begun to withdraw funding from EDI activities. But it has also influenced science policy in the UK.

The distinction between equality and equity is often illustrated by a cartoon published in 2016 by the UK artist Angus Maguire (above). It shows a fence and people of variable height gaining an equal view of a baseball match thanks to different numbers of crates that they stand on. This has itself, however, resulted in arguments about other factors such as the conditions necessary to watch the game in the stadium, or indeed even join in. That requires consideration about how the teams and the stadium could adapt to the needs of all potential participants, but also how these changes might affect the experience of others involved.

In terms of education, the Organization for Economic Co-operation and Development (OECD) states that equity “does not mean that all students obtain equal education outcomes, but rather that differences in students’ outcomes are unrelated to their background or to economic and social circumstances over which the students have no control”. This is an admirable goal, but there are questions about how to achieve it.

In OECD member countries, freedom of choice and competition yield social inequalities that flow through to education and careers. This means that governments are continually balancing the benefits of inspiring and rewarding individuals alongside concerns about group injustice.

In 2024, we hosted a multidisciplinary workshop about equity in science, and especially physics. Held at the University of Birmingham, it brought together physicists at different career stages with social scientists and people who had worked on science and education in government, charities and learned societies. At the event, social scientists told us that equality is commonly conceived as a basic right to be treated equally and not discriminated against, regardless of personal characteristics. This right provides a platform for “equality of opportunity” whereby barriers are removed so talent and effort can be rewarded.

In the UK, the promotion of equality of opportunity is enshrined within the country’s Equality Act 2010 and underpins current EDI work in physics. This includes measures to promote physics to young people in deprived areas, and to women and ethnic minorities, as well as mentoring and additional academic and financial support through all stages of education and careers.  It extends to re-shaping the content and promotion of physics courses in universities so they are more appealing and responsive to a wider constituency. In many organizations, there is also training for managers to combat discrimination and bias, whether conscious or not.

Actions like these have helped to improve participation and progression across physics education and careers, but there is still significant underrepresentation and marginalization due to gender, ethnicity and social background. This is not unusual in open and competitive societies where the effects of promoting equal opportunities are often outweighed by the resources and connections of people with characteristics that are highly represented. Talent and effort are crucial in “high-performance” sectors such as academia and industry, but they are not the only factors influencing success.

Physicists at the meeting told us that they are motivated by intellectual curiosity, fascination with the natural world and love for their subject. Yet there is also, in physics, a culture of “genius” and competition, in which confidence is crucial. Facilities and working conditions, which often involve short-term contracts and international mobility, are difficult to balance alongside other life commitments. Although inequalities and exclusions are recognized, they are often ascribed to broader social factors or the inherent requirements of research. As a result, physicists tend not to accept responsibility for inequities within the discipline.

Many physicists want merit to be a reflection of talent and effort. But we identified that physics has a culture of “hyper-meritocracy” where being correct counts more than respecting others. Across the community, some believe in positive action beyond the removal of discrimination, but others can be actively hostile to any measure associated with EDI. This is a challenging environment for any young researcher and we heard distressing stories of isolation from women and colleagues who had hidden disabilities or those who were the first in their family to go to university.

The experience, positive or not, when joining a research group as a postgraduate or postdoctoral researcher is often linked with the personality of leaders. Peer groups and networks have helped many physicists through this period of their career, but it is also where the culture in a research group or department can drive some to the margins and ultimately out of the profession. In environments like this, equal opportunities have proved insufficient to advance diversity, let alone inclusion.

Culture change

Organizations that have replaced equality with equity want to signal a commitment not just to equal treatment, but also more equitable outcomes. However, those who have worked in government told us that some people become disengaged, thinking such efforts can only be achieved by reducing standards and threatening cultures they value. Given that physics needs technical proficiency and associated resources and infrastructure, it is not a discipline where equity can mean an equal distribution of positions and resources.

Physics can, though, counter the influence of wider inequalities by helping colleagues who are under-represented to gain the attributes, experiences and connections that are needed to compete successfully for doctoral studentships, research contracts and academic positions. It can also face up to its cultural problems, so colleagues who are minoritized feel less marginalized and they are ultimately recognized for their efforts and contributions.

This will require physicists giving more prominence to marginalized voices as well as critically and honestly examining their culture and tackling unacceptable behaviour. We believe we can achieve this by collaborating with our social science colleagues. That includes gathering and interpreting qualitative data, so there is shared understanding of problems, as well as designing strategies with people who are most affected, so that everyone has a stake in success.

If this happens, we can look forward to a physics community that genuinely practices equity, rather than espousing equality of opportunity.

The post What ‘equity’ really means for physics appeared first on Physics World.

Watch this amazing quantum-inspired stained-glass artwork in all its glory

This video has no voice over. (Video courtesy: Space Production)

The aim of the International Year of Quantum Science & Technology (IYQ) in 2025 is to help raise the public’s awareness of the importance and impact of quantum science and applications on all aspects of life.

Ukraine-born artist Oksana Kondratyeva has certainly taken that message to heart. A London-based designer and producer of architectural glass art, she has recently created an intriguing piece of stained glass inspired by the casing for a quantum computer.

In this video specially made by Kondratyeva for Physics World, you can see her artwork, which was displayed at the 2024 British Glass Biennale, and glimpse the artist in the protective gear she wears while working with the chemicals to make her piece.

To discover more on this topic, take a look at the recent Physics World article: A ‘quantum rose’ for the 21st century: Oksana Kondratyeva on her stained-glass art inspired by a quantum computer

In the feature, Kondratyeva describes how her work fuses science and art – and reveals how the collaboration with Rigetti came about. As it happens, it was an article in Physics World during another international year – devoted to glass – that inspired the project.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Watch this amazing quantum-inspired stained-glass artwork in all its glory appeared first on Physics World.

When Bohr got it wrong: the impact of a little-known paper on the development of quantum theory

Niels Bohr, illustration
Brilliant mind Illustration of the Danish physicist and Nobel laureate Niels Bohr (1885-1962). Bohr made numerous contributions to physics during his career, but it was his work on atomic structure and quantum theory that won him the 1922 Nobel Prize for Physics. (Courtesy: Sam Falconer, Debut Art/Science Photo Library)

One hundred and one years ago, Danish physicist Niels Bohr proposed a radical theory together with two young colleagues – Hendrik Kramers and John Slater – in an attempt to resolve some of the most perplexing issues in fundamental physics at the time. Entitled “The Quantum Theory of Radiation”, and published in the Philosophical Magazine, their hypothesis was quickly proved wrong, and has since become a mere footnote in the history of quantum mechanics.

Despite its swift demise, their theory perfectly illustrates the sense of crisis felt by physicists at that moment, and the radical ideas they were prepared to contemplate to resolve it. For in their 1924 paper Bohr and his colleagues argued that the discovery of the “quantum of action” might require the abandonment of nothing less than the first law of thermodynamics: the conservation of energy.

As we celebrate the centenary of Werner Heisenberg’s 1925 quantum breakthrough with the International Year of Quantum Science and Technology (IYQ) 2025, Bohr’s 1924 paper offers a lens through which to look at how the quantum revolution unfolded. Most physicists at that time felt that if anyone was going to rescue the field from the crisis, it would be Bohr. Indeed, this attempt clearly shows signs of the early rift between Bohr and Albert Einstein about the quantum realm, that would turn into a lifelong argument. Remarkably, the paper also drew on an idea that later featured in one of today’s most prominent alternatives to Bohr’s “Copenhagen” interpretation of quantum mechanics.

Genesis of a crisis

The quantum crisis began when German physicist Max Planck proposed the quantization of energy in 1900, as a mathematical trick for calculating the spectrum of radiation from a warm, perfectly absorbing “black body”. Later, in 1905, Einstein suggested taking this idea literally to account for the photoelectric effect, arguing that light consisted of packets or quanta of electromagnetic energy, which we now call photons.

Bohr entered the story in 1912 when, working in the laboratory of Ernest Rutherford in Manchester, he devised a quantum theory of the atom. In Bohr’s picture, the electrons encircling the atomic nucleus (which Rutherford had discovered in 1911) are constrained to specific orbits with quantized energies. The electrons can hop in “quantum jumps” by emitting or absorbing photons with the corresponding energy.

Albert Einstein and Niels Bohr
Conflicting views Stalwart physicists Albert Einstein and Niels Bohr had opposing views on quantum fundamentals from early on, which turned into a lifelong scientific argument between the two. (Paul Ehrenfest/Wikimedia Commons)

Bohr had no theoretical justification for this ad hoc assumption, but he showed that, by accepting it, he could predict (more or less) the spectrum of the hydrogen atom. For this work Bohr was awarded the 1922 Nobel Prize for Physics, the same year that Einstein collected the prize for his work on light quanta and the photoelectric effect (he had been awarded it in 1921 but was unable to attend the ceremony).

After establishing an institute of theoretical physics (now the Niels Bohr Institute) in Copenhagen in 1917, Bohr’s mission was to find a true theory of the quantum: a mechanics to replace, at the atomic scale, the classical physics of Isaac Newton that worked at larger scales. It was clear that classical physics did not work at the scale of the atom, although Bohr’s correspondence principle asserted that quantum theory should give the same results as classical physics at a large enough scale.

Hendrik Kramers
Mathematical mind Dutch physicist Hendrik Kramers spent 10 years as Niels Bohr’s assistant in Copenhagen. (Wikimedia Commons)

Quantum theory was at the forefront of physics at the time, and so was the most exciting topic for any aspiring young physicist. Three groups stood out as the most desirable places to work for anyone seeking a fundamental mathematical theory to replace the makeshift and sometimes contradictory “old” quantum theory that Bohr had cobbled together: that of Arnold Sommerfeld in Münich, of Max Born in Göttingen, and of Bohr in Copenhagen.

Dutch physicist Hendrik Kramers had hoped to work on his doctorate with Born – but in 1916 the First World War ruled that out, and so he opted instead for Copenhagen, in politically neutral Denmark. There he became Bohr’s assistant for ten years: as was the case with several of Bohr’s students, Kramers did the maths (it was never Bohr’s forte) while Bohr supplied the ideas, philosophy and kudos. Kramers ended up working on an impressive range of problems, from chemical physics to pure mathematics.

Reckless and radical

One of the most vexing question for Bohr and his Copenhagen circle in the early 1920s was how to think about electron orbits in atoms. Try as they might, they couldn’t find a way to make the orbits “fit” with experimental observations of atomic spectra.

Bohr and others, including Heisenberg, began to voice a possibility that seemed almost reckless: perhaps, in quantum systems like atoms, we have to abandon any attempt to construct a physical picture at all. Maybe we just can’t think of quantum particles as objects moving along trajectories in space and time.

This struck others, such as Einstein, as desperate, if not crazy. Surely the goal of science had always been to offer a picture of the world in terms of “things happening to objects in space”. What else could there be than that? How could we just give it all up?

But it was worse than that. For one thing, Bohr’s quantum jumps were supposed to happen instantaneously: an electron, say, jumping from one orbit to another in no time at all. In classical physics, everything happens continuously: a particle gets from here to there by moving smoothly across the intervening space, in some finite time. The discontinuities of quantum jumps seemed to some – like Austrian physicist Erwin Schrödinger in Vienna – to border on the obscene.

Worse still was the fact that while the old quantum theory stipulated the energy of quantum jumps, there was nothing to dictate when they would happen – they simply did. In other words, there was no causal kick that instigated a quantum jump: the electron just seemed to make up its own mind about when to jump. As Heisenberg would later proclaim in his 1927 paper on the uncertainty principle (Zeitschrift für Physik 43 172),  quantum theory “establishes the final failure of causality”.

Such notions were not the only source of friction between the Copenhagen team and Einstein. Bohr didn’t like light quanta. While they seemed to explain the photoelectric effect, Bohr was convinced that light had to be fundamentally wave-like, so that photons (to use the anachronistic term) were only a way of speaking, not real entities.

To add to the turmoil in 1924, the French physicist Louis de Broglie had, in his doctoral thesis for the Sorbonne, turned the quantum idea on its head by proposing that particles such as electrons might show wave-like behaviour. Einstein had at first considered this too wild, but soon came round to the idea.

Go where the waves take you

In 1924 these virtually heretical ideas were only beginning to surface, but they were creating such a sense of crisis that it seemed anything was possible. In the 1960s, science historian Paul Forman suggested that the feverish atmosphere in physics was part of an even wider cultural current. By rejecting causality and materialism, the German quantum physicists, Forman said, were attempting to align their ideas with a rejection of mechanistic thinking while embracing the irrational – as was the fashion in the philosophical and intellectual circles of the beleaguered Weimar republic. The idea has been hotly debated by historians and philosophers of science – but it was surely in Copenhagen, not Munich or Göttingen, that the most radical attitudes to quantum theory were developing.

John Clarke Slater
Particle pilot In 1923, US physicist John Clarke Slater moved to Copenhagen and suggested the concept of a “virtual field” that spread throughout a quantum system. (Emilio Segrè Visual Archives General Collection/MIT News Office)

Then, just before Christmas in 1923, a new student arrived in Copenhagen. John Clarke Slater, who had a PhD in physics from Harvard, turned up at Bohr’s institute with a bold idea. “You know those difficulties about not knowing whether light is old-fashioned waves or Mr Einstein’s light particles”, he wrote to his family during a spell in Cambridge that November. “I had a really hopeful idea… I have both the waves and the particles, and the particles are sort of carried along by the waves, so that the particles go where the waves take them.” The waves were manifested in a “virtual field” of some kind that spread throughout the system, and they acted to “pilot” the particles.

Bohr was largely unenthusiastic about Slater’s idea, not least because it retained the light particles that he wished to dispose of. But he liked Slater’s notion of a virtual field that could put one part of a quantum system in touch with others. Together with Slater and Kramers, Bohr prepared a paper in a remarkably short time (especially for him) outlining what became known as the Bohr-Kramers-Slater (BKS) theory. They sent it off to the Philosophical Magazine (where Bohr had published his seminal papers on the quantum atom) at the end of January 1924, and it was published in May (47(281) 785). As was increasingly characteristic of Bohr’s style, it was free of any mathematics (beyond Einstein’s quantum relationship E=hν).

In the BKS picture, an excited atom about to emit light can “communicate continually” with the other atoms around it via the virtual field. The transition, with emission of a light quantum, is then not spontaneous but induced by the virtual field. This mechanism could solve the long-standing question of how an atom “knows” which frequency of light to emit in order to reach another energy level: the virtual field effectively puts the atom “in touch” with all the possible energy states of the system.

The problem was that this meant the emitting atom was in instant communication with its environment all around – which violated the law of causality. Well then, so much the worse for causality: BKS abandoned it. The trio’s theory also violated the conservation of energy and momentum – so they had to go too.

Causality and conservation, abandoned

But wait: hadn’t these conservation laws been proved? In 1923 the American physicist Arthur Compton, working at Washington University in St Louis, had shown that when light is scattered by electrons, they exchange energy, and the frequency of the light decreases as it gives up energy to the electrons. The results of Compton’s experiments agreed perfectly with predictions made on the assumptions that light is a stream of quanta (photons) and that their collisions with electrons conserve energy and momentum.

Ah, said BKS, but that’s only true statistically. The quantities are conserved on average, but not in individual collisions. After all, such statistical outcomes were familiar to physicists: that was the basis of the second law of thermodynamics, which presented the inexorable increase in entropy as a statistical phenomenon that need not constrain processes involving single particles.

The radicalism of the BKS paper got a mixed reception. Einstein, perhaps predictably, was dismissive. “Abandonment of causality as a matter of principle should be permitted only in the most extreme emergency”, he wrote. Wolfgang Pauli, who had worked in Copenhagen in 1922–23, confessed to being “completely negative” about the idea. Born and Schrödinger were more favourable.

But the ultimate arbiter is experiment. Was energy conservation really violated in single-particle interactions? The BKS paper motivated others to find out. In early 1925, German physicists Walther Bothe and Hans Geiger in Berlin looked more closely at Compton’s X-ray scattering by electrons. Having read the BKS paper, Bothe felt that “it was immediately obvious that this question would have to be decided experimentally, before definite progress could be made”.

Walther Bothe and Hans Geiger
Experimental arbitrators German physicists Walther Bothe and Hans Geiger (right) conducted an experiment, prompted by the BKS paper, that looked at X-ray scattering from electrons to test the conservation of energy at microscopic scales. (IPP/© Archives of the Max Planck Society)

Geiger agreed, and the duo devised a scheme for detecting both the scattered electron and the scattered photon in separate detectors. If causality and energy conservation were preserved, the detections should be simultaneous, while any delay between them could indicate a violation. As Bothe would later recall: “The ‘question to Nature’ which the experiment was designed to answer could therefore be formulated as follows: is it exactly a scatter quantum and a recoil electron that are simultaneously emitted in the elementary process, or is there merely a statistical relationship between the two?” It was incredibly painstaking work to seek such coincident detections using the resources then available. But in April 1925 Geiger and Bothe reported simultaneity within a millisecond – close enough to make a strong case that Compton’s treatment, which assumed energy conservation, was correct. Compton himself, working with Alfred Simon using a cloud chamber, confirmed that energy and momentum were conserved for individual events (Phys. Rev. 26 289).

Revolutionary defeat… singularly important

Bothe was awarded the 1954 Nobel Prize for Physics for the work. He shared it with Born for his work on quantum theory, and Geiger would surely have been a third recipient, if he had not died in 1945. In his Nobel speech, Bothe definitively stated that “the strict validity of the law of the conservation of energy even in the elementary process had been demonstrated, and the ingenious way out of the wave-particle problem discussed by Bohr, Kramers, and Slater was shown to be a blind alley.”

Bohr was gracious in his defeat, writing to a colleague in April 1925 that “It seems… there is nothing else to do than to give our revolutionary efforts as honourable a funeral as possible.” Yet he was soon to have no need of that particular revolution, for just a few months later Heisenberg, who had returned to Göttingen after working with Bohr in Copenhagen for six months, came up with the first proper theory of quantum mechanics, later called matrix mechanics.

“In spite of its short lifetime, the BKS theory was singularly important,” says historian of science Helge Kragh, now emeritus professor at the Niels Bohr Institute. “Its radically new approach paved the way for a greater understanding that the methods and concepts of classical physics could not be carried over into a future quantum mechanics.”

The Bothe-Geiger experiment that [the paper] inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument [about] the probabilistic character of his matrix mechanics

The BKS paper was thus in a sense merely a mistaken curtain-raiser for the main event. But the Bothe-Geiger experiment that it inspired was not just an important milestone in early particle physics. It was also a crucial factor in Heisenberg’s argument that the probabilistic character of his matrix mechanics (and also of Schrödinger’s 1926 version of quantum mechanics, called wave mechanics) couldn’t be explained away as a statistical expression of our ignorance about the details, as it is in classical statistical mechanics.

Quantum concept
Radical approach Despite its swift defeat, the BKS proposal showed how classical concepts could not apply to a quantum reality. (Courtesy: Shutterstock/Vink Fan)

Rather, the probabilities that emerged from Heisenberg’s and Schrödinger’s theories applied to individual events: they were, Heisenberg said, fundamental to the way single particles behave. Schrödinger was never happy with that idea, but today it seems inescapable.

Over the next few years, Bohr and Heisenberg argued that the new quantum mechanics indeed smashed causality and shattered the conventional picture of reality as an objective world of objects moving in space–time with fixed properties. Assisted by Born, Wolfgang Pauli and others, they articulated the “Copenhagen interpretation”, which became the predominant vision of the quantum world for the rest of the century.

Failed connections

Slater wasn’t at all pleased with what became of the idea he took to Copenhagen. Bohr and Kramers had pressured him into accepting their take on it, “without the little lump carried along on the waves”, as he put it in mid-January. “I am willing to let them have their way”, he wrote at the time, but in retrospect he felt very unhappy about his time in Denmark. After the BKS theory was disproved, Bohr wrote to Slater saying “I have a bad conscience in persuading you to our views”.

Slater replied that there was no need for that. But in later life – after he had made a name for himself in solid-state physics – Slater admitted to a great deal of resentment. “I completely failed to make any connection with Bohr”, he said in a 1963 interview with the historian of science Thomas Kuhn. “I fought with them [Bohr and Kramers] so seriously that I’ve never had any respect for those people since. I had a horrible time in Copenhagen.” While most of Bohr’s colleagues and students expressed adulation, Slater’s was a rare dissenting voice.

But Slater might have reasonably felt more aggrieved at what became of his “pilot-wave” idea. Today, that interpretation of quantum theory is generally attributed to de Broglie – who intimated a similar notion in his 1924 thesis, before presenting the theory in more detail at the famous 1927 Solvay Conference – and to American physicist David Bohm, who revitalized the idea in the 1950s. Initially dismissed on both occasions, the de Broglie-Bohm theory has gained advocates in recent years, not least because it can be applied to a classical hydrodynamic analogue, in which oil droplets are steered by waves on an oil surface.

Whether or not it is the right way to think about quantum mechanics, the pilot-wave theory touches on the deep philosophical problems of the field. Can we rescue an objective reality of concrete particles with properties described by hidden variables, as Einstein had advocated, from the fuzzy veil that Bohr and Heisenberg seemed to draw over the quantum world? Perhaps Slater would at least be gratified to know that Bohr has not yet had the last word.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post When Bohr got it wrong: the impact of a little-known paper on the development of quantum theory appeared first on Physics World.

Theorists propose a completely new class of quantum particles

In a ground-breaking theoretical study, two physicists have identified a new class of quasiparticle called the paraparticle. Their calculations suggest that paraparticles exhibit quantum properties that are fundamentally different from those of familiar bosons and fermions, such as photons and electrons respectively.

Using advanced mathematical techniques, Kaden Hazzard at Rice University in the US and his former graduate student Zhiyuan Wang, now at the Max Planck Institute of Quantum Optics in Germany, have meticulously analysed the mathematical properties of paraparticles and proposed a real physical system that could exhibit paraparticle behaviour.

“Our main finding is that it is possible for particles to have exchange statistics different from those of fermions or bosons, while still satisfying the important physical principles of locality and causality,” Hazzard explains.

Particle exchange

In quantum mechanics, the behaviour of particles (and quasiparticles) is probabilistic in nature and is described by mathematical entities known as wavefunctions. These govern the likelihood of finding a particle in a particular state, as defined by properties like position, velocity, and spin. The exchange statistics of a specific type of particle dictates how its wavefunction behaves when two identical particles swap places.

For bosons such as photons, the wavefunction remains unchanged when particles are exchanged. This means that many bosons can occupy the same quantum state, enabling phenomena like lasers and superfluidity. In contrast, when fermions such as electrons are exchanged, the sign of the wavefunction flips from positive to negative or vice versa. This antisymmetric property prevents fermions from occupying the same quantum state. This underpins the Pauli exclusion principle and results in the electronic structure of atoms and the nature of the periodic table.
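
To see the exchange rule in action, here is a minimal Python sketch of the textbook picture (an illustration only, not the formalism used in the new study). It builds symmetric and antisymmetric two-particle states from two single-particle states: putting both particles in the same state leaves the bosonic combination intact but makes the fermionic one vanish, which is the Pauli exclusion principle in miniature.

```python
import numpy as np

# Two orthonormal single-particle states of a toy two-level system
phi_a = np.array([1.0, 0.0])
phi_b = np.array([0.0, 1.0])

def two_particle_state(psi1, psi2, sign):
    """(Anti)symmetrized two-particle amplitude array.
    sign = +1 for bosons (symmetric), -1 for fermions (antisymmetric)."""
    state = np.outer(psi1, psi2) + sign * np.outer(psi2, psi1)
    norm = np.linalg.norm(state)
    return state / norm if norm > 1e-12 else state   # a zero state stays zero

# Distinct single-particle states: both symmetrizations give valid pair states
boson_pair = two_particle_state(phi_a, phi_b, +1)
fermion_pair = two_particle_state(phi_a, phi_b, -1)
print(np.allclose(boson_pair, two_particle_state(phi_b, phi_a, +1)))    # True: unchanged on swap
print(np.allclose(fermion_pair, -two_particle_state(phi_b, phi_a, -1))) # True: sign flips on swap

# Identical single-particle states: the antisymmetric combination vanishes,
# so two fermions cannot share a state (Pauli exclusion); bosons can pile up
print(np.allclose(two_particle_state(phi_a, phi_a, -1), 0))  # True
print(np.allclose(two_particle_state(phi_a, phi_a, +1), 0))  # False
```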

Until now, physicists believed that these two types of particle statistics – bosonic and fermionic – were the only possibilities in 3D space. This is the result of fundamental principles like locality, which states that events occurring at one point in space cannot instantaneously influence events at a distant location.

Breaking boundaries

Hazzard and Wang’s research overturns the notion that 3D systems are limited to bosons and fermions and shows that new types of particle statistics, called parastatistics, can exist without violating locality.

The key insight in their theory lies in the concept of hidden internal characteristics. Beyond the familiar properties like position and spin, paraparticles require additional internal parameters that enable more complex wavefunction behaviour. This hidden information allows paraparticles to exhibit exchange statistics that go beyond the binary distinction of bosons and fermions.

Paraparticles exhibit phenomena that resemble – but are distinct from – fermionic and bosonic behaviours. For example, while fermions cannot occupy the same quantum state, up to two paraparticles could be allowed to coexist at the same point in space. This behaviour strikes a balance between the exclusivity of fermions and the clustering tendency of bosons.

Bringing paraparticles to life

While no elementary particles are known to exhibit paraparticle behaviour, the researchers believe that paraparticles might manifest as quasiparticles in engineered quantum systems or certain materials. A quasiparticle is a particle-like collective excitation of a system. A familiar example is the hole, which is created in a semiconductor when a valence-band electron is excited to the conduction band. The vacancy (or hole) left in the valence band behaves as a positively-charged particle that can travel through the semiconductor lattice.

Experimental systems of ultracold atoms created by collaborators of the duo could be one place to look for the exotic particles. “We are working with them to see if we can detect paraparticles there,” explains Wang.

In ultracold atom experiments, lasers and magnetic fields are used to trap and manipulate atoms at temperatures near absolute zero. Under these conditions, atoms can mimic the behaviour of more exotic particles. The team hopes that similar setups could be used to observe paraparticle-like behaviour in higher-dimensional systems, such as 3D space. However, further theoretical advances are needed before such experiments can be designed.

Far-reaching implications

The discovery of paraparticles could have far-reaching implications for physics and technology. Fermionic and bosonic statistics have already shaped our understanding of phenomena ranging from the stability of neutron stars to the behaviour of superconductors. Paraparticles could similarly unlock new insights into the quantum world.

“Fermionic statistics underlie why some systems are metals and others are insulators, as well as the structure of the periodic table,” Hazzard explains. “Bose-Einstein condensation [of bosons] is responsible for phenomena such as superfluidity. We can expect a similar variety of phenomena from paraparticles, and it will be exciting to see what these are.”

As research into paraparticles continues, it could open the door to new quantum technologies, novel materials, and deeper insights into the fundamental workings of the universe. This theoretical breakthrough marks a bold step forward, pushing the boundaries of what we thought possible in quantum mechanics.

The paraparticles are described in Nature.

The post Theorists propose a completely new class of quantum particles appeared first on Physics World.

The secret to academic success? Publish a top paper as a postdoc, study finds

If you’re a postdoc who wants to nail down that permanent faculty position, it’s wise to publish a highly cited paper after your PhD. That’s the conclusion of a study by an international team of researchers, which finds that publication rates and performance during the postdoc period are key to academic retention and early-career success. Their analysis also reveals that more than four in 10 postdocs drop out of academia.

A postdoc is usually a temporary appointment that is seen as preparation for an academic career. Many researchers, however, end up doing several postdocs in a row as they hunt for a permanent faculty job. “There are many more postdocs than there are faculty positions, so it is a kind of systemic bottleneck,” says Petter Holme, a computer scientist at Aalto University in Finland, who led the study.

Previous research into academic career success has tended to overlook the role of a postdoc, focusing instead on, say, the impact of where researchers did their PhD. To tease out the effect of a postdoc, Holme and colleagues combined information about academics’ career stages from LinkedIn with their publication history obtained from Microsoft Academic Graph. The resulting global dataset covered 45,572 careers spanning 25 years across all academic disciplines.

Overall, they found, 41% of postdocs left academia. But researchers who publish a highly cited paper as a postdoc are much more likely to pursue a faculty career – whether or not they published a highly cited paper during their PhD. Publication rate is also vital, with researchers who publish less as postdocs than they did during their PhD being more likely to drop out of academia. Conversely, as productivity increased, so did the likelihood of a postdoc gaining a faculty position.

Expanding horizons

Holme says their results suggest that a researcher only has a few years “to get on the positive feedback loop, where one success leads to another”. In fact, the team found that a “moderate” change in research topic when moving from PhD to postdoc could improve future success. “It is a good thing to change your research focus, but not too much,” says Holme, because it widens perspective without having to learn an entirely new research topic from scratch.

Likewise, shifting perspective by moving abroad can also benefit postdocs. The analysis shows that a researcher moving abroad for a postdoc boosts their citations, but a move to a different institution in the same country has a negligible impact.

The post The secret to academic success? Publish a top paper as a postdoc, study finds appeared first on Physics World.

Alternative building materials could store massive amounts of carbon dioxide

Replacing conventional building materials with alternatives that sequester carbon dioxide could allow the world to lock away up to half the CO2 generated by humans each year – about 16 billion tonnes. This is the finding of researchers at the University of California Davis and Stanford University, both in the US, who studied the sequestration potential of materials such as carbonate-based aggregates and biomass fibre in brick.

Despite efforts to reduce greenhouse gas emissions by decarbonizing industry and switching to renewable sources of energy, it is likely that humans will continue to produce significant amounts of CO2 beyond the target “net zero” date of 2050. Carbon storage and sequestration – either at source or directly from the atmosphere – are therefore worth exploring as an additional route towards this goal. Researchers have proposed several possible ways of doing this, including injecting carbon underground or deep under the ocean. However, all these scenarios are challenging to implement practically and pose their own environmental risks.

Modifying common building materials

In the present work, a team of civil engineers and earth systems scientists led by Elisabeth van Roijen (then a PhD student at UC Davis) calculated how much carbon could be stored in modified versions of several common building materials. These include concrete (cement) and asphalt containing carbonate-based aggregates; bio-based plastics; wood; biomass-fibre bricks (from waste biomass); and biochar filler in cement.

The researchers obtained the “16 billion tonnes of CO2” figure by assuming that all aggregates currently employed in concrete would be replaced with carbonate-based versions. They also supplemented 15% of cement with biochar and the remainder with carbonatable cements; increased the amount of wood used in all new construction by 20%; and supplemented 15% of bricks with biomass and the remainder with carbonatable calcium hydroxide. A final element in their calculation was to replace all plastics used in construction today with bio-based plastics and all bitumen with bio-oil in asphalt.

“We calculated the carbon storage potential of each material based on the mass ratio of carbon in each material,” explains van Roijen. “These values were then scaled up based on 2016 consumption values for each material.”
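
In outline, that scaling step is simple arithmetic: the annual consumption of each material multiplied by its carbon mass fraction, with the stored carbon converted to CO2 via the molar-mass ratio 44/12. The Python sketch below illustrates the bookkeeping; the consumption figures and carbon fractions are placeholder values chosen for illustration only, not the numbers used in the study.

```python
# Back-of-the-envelope sketch of the scaling step described above.
# All numbers here are illustrative placeholders, NOT the study's values.

C_TO_CO2 = 44.0 / 12.0   # molar-mass ratio converting stored carbon to CO2

# material: (assumed global consumption in Gt/yr, assumed carbon mass fraction)
materials = {
    "carbonate aggregate (in concrete)": (20.0, 0.05),
    "biomass-fibre brick":               (1.5,  0.20),
    "wood in construction":              (1.0,  0.50),
    "bio-based plastics":                (0.3,  0.60),
}

total_co2 = 0.0
for name, (mass_gt, carbon_fraction) in materials.items():
    co2_gt = mass_gt * carbon_fraction * C_TO_CO2   # Gt of CO2 locked away per year
    total_co2 += co2_gt
    print(f"{name:35s}: {co2_gt:5.2f} Gt CO2/yr")

print(f"{'total':35s}: {total_co2:5.2f} Gt CO2/yr")
```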

“The sheer magnitude of carbon storage is pretty impressive”

While the production of some replacement materials would need to increase to meet the resulting demand, van Roijen and colleagues found that resources readily available today – for example, mineral-rich waste streams – would already let us replace 10% of conventional aggregates with carbonate-based ones. “These alone could store 1 billion tonnes of CO2,” she says. “The sheer magnitude of carbon storage is pretty impressive, especially when you put it in context of the level of carbon dioxide removal needed to stay below the 1.5 and 2 °C targets set by The Intergovernmental Panel on Climate Change (IPCC).”

Indeed, even if the world doesn’t implement these technologies until 2075, we could still store enough carbon between 2075 and 2100 to stay below these targets, she tells Physics World. “This is assuming, of course, that all other decarbonization efforts outlined in the IPCC reports are also implemented to achieve net-zero emissions,” she says.

Building materials are a good option for carbon storage

The motivation for the study, she explains, came from the urgent need – as expressed by the IPCC – to not only reduce new carbon emissions through rapid and significant decarbonization, but to also remove large amounts of CO2 already present in the atmosphere. “Rather than burying it in geological, terrestrial or ocean reservoirs, we wanted to look into the possibility of leveraging existing technology – namely conventional building materials – as a way to store CO2. Building materials are a good option for carbon storage given the massive quantity (30 billion tonnes) produced each year, not to mention their durability.”

Van Roijen, who is now a postdoctoral researcher at the US Department of Energy’s National Renewable Energy Laboratory, hopes that this work, which is detailed in Science, will go beyond the reach of the research lab and attract the attention of policymakers and industrialists. While some of the technologies outlined in this study are new and require further research, others, such as bio-based plastics, are well established and simply need some economic and political support, she says. “That said, conventional building materials such as concrete and plastics are pretty cheap, so there will need to be some incentive for industries to make the switch over to these low-carbon materials.”

The post Alternative building materials could store massive amounts of carbon dioxide appeared first on Physics World.

Flexible tactile sensor reads braille in real time

Braille is a tactile writing system that helps people who are blind or partially sighted acquire information by touching patterns of tiny raised dots. Braille uses combinations of six dots (two columns of three) to represent letters, numbers and punctuation. But learning to read braille can be challenging, particularly for those who lose their sight later in life, prompting researchers to create automated braille recognition technologies.

One approach involves simply imaging the dots and using algorithms to extract the required information. This visual method, however, struggles with the small size of braille characters and can be impacted by differing light levels. Another option is tactile sensing; but existing tactile sensors aren’t particularly sensitive, with small pressure variations leading to incorrect readings.

To tackle these limitations, researchers from Beijing Normal University and Shenyang Aerospace University in China have employed an optical fibre ring resonator (FRR) to create a tactile braille recognition system that accurately reads braille in real time.

“Current braille readers often struggle with accuracy and speed, especially when it comes to dynamic reading, where you move your finger across braille dots in real time,” says team leader Zhuo Wang. “I wanted to create something that could read braille more reliably, handle slight variations in pressure and do it quickly. Plus, I saw an opportunity to apply cutting-edge technology – like flexible optical fibres and machine learning – to solve this challenge in a novel way.”

Flexible fibre sensor

At the core of the braille sensor is the optical FRR – a resonant cavity made from a loop of fibre containing circulating laser light. Wang and colleagues created the sensing region by embedding an optical fibre in flexible polymer and connecting it into the FRR ring. Three small polymer protrusions on top of the sensor act as probes to transfer the applied pressure to the optical fibre. Spaced 2.5 mm apart to align with the dot spacing, each protrusion responds to the pressure from one of the three braille dots (or absence of a dot) in a vertical column.

Fabricating the fibre ring resonator sensor
Sensor fabrication The optical FRR is made by connecting ports of a 2×2 fibre coupler to form a loop. The sensing region is then connected into the loop. (Courtesy: Optics Express 10.1364/OE.546873)

As the sensor is scanned over the braille surface, the pressure exerted by the raised dots slightly changes the length and refractive index of the fibre, causing tiny shifts in the frequency of the light travelling through the FRR. The device employs a technique called Pound-Drever-Hall (PDH) demodulation to “lock” onto these shifts, amplify them and convert them into readable data.

“The PDH demodulation curve has an extremely steep linear slope, which means that even a very tiny frequency shift translates into a significant, measurable voltage change,” Wang explains. “As a result, the system can detect even the smallest variations in pressure with remarkable precision. The steep slope significantly enhances the system’s sensitivity and resolution, allowing it to pick up subtle differences in braille dots that might be too small for other sensors to detect.”

The eight possible configurations of three dots generate eight distinct pressure signals, with each braille character defined by two pressure outputs (one per column). Each protrusion has a slightly different hardness level, enabling the sensor to differentiate pressures from each dot. Rather than measuring each dot individually, the sensor reads the overall pressure signal and instantly determines the combination of dots and the character they correspond to.

The researchers note that, in practice, the contact force may vary slightly during the scanning process, resulting in the same dot patterns exhibiting slightly different pressure signals. To combat this, they used neural networks trained on large amounts of experimental data to correctly classify braille patterns, even with small pressure variations.

“This design makes the sensor incredibly efficient,” Wang explains. “It doesn’t just feel the braille, it understands it in real time. As the sensor slides over a braille board, it quickly decodes the patterns and translates them into readable information. This allows the system to identify letters, numbers, punctuation, and even words or poems with remarkable accuracy.”

Stable and accurate

Measurements on the braille sensor revealed that it responds to pressures of up to 3 N, as typically exerted by a finger when touching braille, with an average response time of below 0.1 s, suitable for fast dynamic braille reading. The sensor also exhibited excellent stability under temperature or power fluctuations.

To assess its ability to read braille dots, the team used the sensor to read eight different arrangements of three dots. Using a multilayer perceptron (MLP) neural network, the system effectively distinguished the eight different tactile pressures with a classification accuracy of 98.57%.
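
That classification stage can be pictured with a small sketch like the one below, which trains a generic multilayer perceptron on synthetic pressure readings for eight classes. The feature layout, network size and noise level are invented for illustration and are not the architecture or data reported for the actual system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the sensor: each of the 8 dot patterns in a column is
# assumed to give a characteristic mean pressure signal (arbitrary units),
# blurred by small variations in contact force.
n_classes, n_samples, n_features = 8, 2000, 4
class_means = rng.uniform(0.5, 3.0, size=(n_classes, n_features))   # invented

labels = rng.integers(0, n_classes, size=n_samples)
signals = class_means[labels] + rng.normal(scale=0.15, size=(n_samples, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    signals, labels, test_size=0.25, random_state=0)

# A small multilayer perceptron, loosely analogous to the MLP stage in the paper
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

print(f"classification accuracy on held-out signals: {clf.score(X_test, y_test):.3f}")
```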

Next, the researchers trained a long short-term memory (LSTM) neural network to classify signals generated by five English words. Here, the system demonstrated a classification accuracy of 100%, implying that slight errors in classifying signals in each column will not affect the overall understanding of the braille.

Finally, they used the MLP-LSTM model to read short sentences, either sliding the sensor manually or scanning it electronically to maintain a consistent contact force. In both cases, the sensor accurately recognised the phrases.

The team concludes that the sensor can advance intelligent braille recognition, with further potential in smart medical care and intelligent robotics. The next phase of development will focus on making the sensor more durable, improving the machine learning models and making it scalable.

“Right now, the sensor works well in controlled environments; the next step is to test its use by different people with varying reading styles, or under complex application conditions,” Wang tells Physics World. “We’re also working on making the sensor more affordable so it can be integrated into devices like mobile braille readers or wearables.”

The sensor is described in Optics Express.

The post Flexible tactile sensor reads braille in real time appeared first on Physics World.

The physics of George R R Martin’s Wild Card virus revealed

It’s not every day that a well-known author writes a physics paper. But George R R Martin, who is best known for his Song of Ice and Fire series of fantasy novels, has co-authored a paper in the American Journal of Physics with the title “Ergodic Lagrangian dynamics in a superhero universe”.

Written with Los Alamos National Laboratory theoretical physicist Ian Tregillis, who is also a science-fiction author of several books, the paper derives a mathematical model of the so-called Wild Card virus.

The Wild Cards universe is a series of novels created by a consortium of writers including Martin and Tregillis.

Set largely during an alternate history of the US following the Second World War, the series follows events after an extraterrestrial virus, known as the Wild Card virus, has spread worldwide. It mutates human DNA causing profound changes in human physiology and society at large.

The virus follows a fixed statistical distribution of outcomes in that 90% of those infected die, 9% become physically mutated (referred to as “jokers”) and 1% gain superhuman abilities (known as “aces”). Such capabilities include the ability to fly as well as being able to move between dimensions. The stories in the series then follow the individuals that have been impacted by the virus.
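
Those odds are easy to play with numerically. The snippet below is a purely illustrative Monte Carlo draw – not the model derived in the paper – that samples outcomes for a large infected population and checks that the sample fractions land close to the canonical 90/9/1 split.

```python
import random

random.seed(2025)

OUTCOMES = ["dies", "joker", "ace"]
WEIGHTS = [0.90, 0.09, 0.01]   # the canonical 90/9/1 split quoted above

n_infected = 100_000
draws = random.choices(OUTCOMES, weights=WEIGHTS, k=n_infected)

for outcome in OUTCOMES:
    fraction = draws.count(outcome) / n_infected
    print(f"{outcome:6s}: {fraction:.3%}")
# The sample fractions should come out close to 90%, 9% and 1%
```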

Tregillis and Martin have now derived a formula for the viral behaviour of the Wild Card virus. “Like any physicist, I started with back-of-the-envelope estimates, but then I went off the deep end,” notes Tregillis. “Being a theoretician, I couldn’t help but wonder if a simple underlying model might tidy up the canon.”

The model takes into consideration the severity of the changes (for the 10% that don’t instantly die) and the mix of joker/ace traits. After all, those infected can also become crypto-jokers or crypto-aces – undetected cases where individuals have subtle changes or powers – as well as joker-aces, in which a human develops both mutations and superhuman abilities.

The result is a dynamical system in which a carrier’s state vector constantly evolves through the model space — until their “card” turns. At that point the state vector becomes fixed and its permanent location determines the fate of the carrier. “The time-averaged behavior of this system generates the statistical distribution of outcomes,” adds Tregillis.

The purpose of the paper, and the model, is also to provide an exercise in demonstrating how “whimsical” scenarios can be used to explore concepts in physics and mathematics.

“The fictional virus is really just an excuse to justify the world of Wild Cards, the characters who inhabit it, and the plot lines that spin out from their actions,” says Tregillis.

The post The physics of George R R Martin’s Wild Card virus revealed appeared first on Physics World.

Fast radio burst came from a neutron star’s magnetosphere, say astronomers

The exact origins of cosmic phenomena known as fast radio bursts (FRBs) are not fully understood, but scientists at the Massachusetts Institute of Technology (MIT) in the US have identified a fresh clue: at least one of these puzzling cosmic discharges got its start very close to the object that emitted it. This result, which is based on measurements of a fast radio burst called FRB 20221022A, puts to rest a long-standing debate about whether FRBs can escape their emitters’ immediate surroundings. The conclusion: they can.

“Competing theories argued that FRBs might instead be generated much farther away in shock waves that propagate far from the central emitting object,” explains astronomer Kenzie Nimmo of MIT’s Kavli Institute for Astrophysics and Space Research. “Our findings show that, at least for this FRB, the emission can escape the intense plasma near a compact object and still be detected on Earth.”

As their name implies, FRBs are brief, intense bursts of radio waves. The first was detected in 2007, and since then astronomers have spotted thousands of others, including some within our own galaxy. They are believed to originate from cataclysmic processes involving compact celestial objects such as neutron stars, and they typically last a few milliseconds. However, astronomers have recently found evidence for bursts a thousand times shorter, further complicating the question of where they come from.

Nimmo and colleagues say they have now conclusively demonstrated that FRB 20221022A, which was detected by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) in 2022, comes from a region only 10 000 km in size. This, they claim, means it must have originated in the highly magnetized region that surrounds a star: the magnetosphere.

“Fairly intuitive” concept

The researchers obtained their result by measuring the FRB’s scintillation, which Nimmo explains is conceptually similar to the twinkling of stars in the night sky. The reason stars twinkle is that, because they are so far away, they appear to us as point sources. This means that their apparent brightness is more affected by the Earth’s atmosphere than is the case for planets and other objects that are closer to us and appear larger.

“We applied this same principle to FRBs using plasma in their host galaxy as the ‘scintillation screen’, analogous to Earth’s atmosphere,” Nimmo tells Physics World. “If the plasma causing the scintillation is close to the FRB source, we can use this to infer the apparent size of the FRB emission region.”

According to Nimmo, different models of FRB origins predict very different sizes for this region. “Emissions originating within the magnetized environments of compact objects (for example, magnetospheres) would produce a much smaller apparent size compared to emission generated in distant shocks propagating far from the central object,” she explains. “By constraining the emission region size through scintillation, we can determine which physical model is more likely to explain the observed FRB.”

Challenge to existing models

The idea for the new study, Nimmo says, stemmed from a conversation with another astronomer, Pawan Kumar of the University of Texas at Austin, early last year. “He shared a theoretical result showing how scintillation could be used as a ‘probe’ to constrain the size of the FRB emission region, and, by extension, the FRB emission mechanism,” Nimmo says. “This sparked our interest and we began exploring the FRBs discovered by CHIME to search for observational evidence for this phenomenon.”

The researchers say that their study, which is detailed in Nature, shows that at least some FRBs originate from magnetospheric processes near compact objects such as neutron stars. This finding is a challenge for models of conditions in these extreme environments, they say, because if FRB signals can escape the dense plasma expected to exist near such objects, the plasma may be less opaque than previously assumed. Alternatively, unknown factors may be influencing FRB propagation through these regions.

A diagnostic tool

One advantage of studying FRB 20221022A is that it is relatively conventional in terms of its brightness and the duration of its signal (around 2 milliseconds). It does have one special property, however, as discovered by Nimmo’s colleagues at McGill University in Canada: its light is highly polarized. What is more, the pattern of its polarization implies that its emitter must be rotating in a way that is reminiscent of pulsars, which are highly magnetized, rotating neutron stars. This result is reported in a separate paper in Nature.

In Nimmo’s view, the MIT team’s study of this (mostly) conventional FRB establishes scintillation as a “powerful diagnostic tool” for probing FRB emission mechanisms. “By applying this method to a larger sample of FRBs, which we now plan to investigate, future studies could refine our understanding of their underlying physical processes and the diverse environments they occupy.”

The post Fast radio burst came from a neutron star’s magnetosphere, say astronomers appeared first on Physics World.

Explore the quantum frontier: all about the International Year of Quantum Science and Technology 2025

In June 1925 a relatively unknown physics postdoc by the name of Werner Heisenberg developed the basic mathematical framework that would be the basis for the first quantum revolution. Heisenberg, who would later win the Nobel Prize for Physics, famously came up with quantum mechanics on a two-week vacation on the tiny island of Helgoland off the coast of Germany, where he had gone to cure a bad bout of hay fever.

Now, a century later, we are on the cusp of a second quantum revolution, with quantum science and technologies growing rapidly across the globe. According to the State of Quantum 2024 report, a total of 33 countries around the world currently have government initiatives in quantum technology, of which more than 20 have national strategies with large-scale funding. The report estimates that up to $50bn in public cash has already been committed.

It’s a fitting tribute, then, that the United Nations (UN) has chosen 2025 to be the International Year of Quantum Science and Technology (IYQ). They hope that the year will raise global awareness of the impact that quantum physics and its applications have already had on our world. The UN also aims to highlight to the global public the myriad potential future applications of quantum technologies and how they could help tackle universal issues – from climate and clean energy to health and infrastructure – while also addressing the UN’s sustainable development goals.

The Institute of Physics (IOP), which publishes Physics World, is one of the IYQ’s six “founding partners” alongside the German (DPG) and American physical societies (APS), SPIE, Optica and the Chinese Optical Society. “The UNESCO International Year of Quantum is a wonderful opportunity to spread the word about quantum research and technology and the transformational opportunities it is opening up,” says Tom Grinyer, chief executive of the IOP. “The Institute of Physics is co-ordinating the UK and Irish elements of the year, which mark the 100th anniversary of the first formulation of quantum mechanics, and we are keen to celebrate the milestone, making sure that as many people as possible get the opportunity to find out more about this fascinating area of science and technology,” he adds.

“IYQ provides the opportunity for societies and organizations around the world to come together in marking both the 100-year history of the field, as well as the longer-term real-world impact that quantum science is certain to have for decades to come,” says Tim Smith, head of portfolio development at IOP Publishing. “Quantum science and technology represents one of the most exciting and rapidly developing areas of science today, encompassing the global physical-sciences community in a way that connects scientific wonder with fundamental research, technological innovation, industry, and funding programmes worldwide.”

Taking shape

The official opening ceremony for IYQ takes place on 4–5 February at the UNESCO headquarters in Paris, France, although several countries, including Germany and India, held their own launches in advance of the main event. Working together, the IOP and IOP Publishing have planned a wide array of quantum resources, talks, conferences, festivals and public-themed events as part of the UK’s celebrations for IYQ.

In late February, meanwhile, the Royal Society – the world’s oldest continuously active learned society – will host a two-day quantum conference. Dubbed “Quantum Information”, it will bring together scientists, industry leaders and public-sector stakeholders to discuss the current challenges involved in quantum computing, networks and sensing systems.

In Scotland, the annual Edinburgh Science Festival, which takes place in April, will likely include a special “quantum explorers” exhibit and workshop by the UK’s newly launched National Quantum Computing Centre. Elsewhere, the Quantum Software Lab at the School of Informatics at the University of Edinburgh is hosting a month-long “Quantum Fringe 2025” event across Scotland. It will include a quantum machine-learning school on the Isle of Skye as well as the annual UK Quantum Hackathon, which brings together teams of aspiring coders with industry mentors to tackle practical challenges and develop solutions using quantum computing.

In June, the Institution of Engineering and Technology is hosting a Quantum Engineering and Technologies conference, as part of its newly launched Quantum technologies and 6G and Future Networks events. The event’s themes include everything from information processing and memories to photon sources and cryptography.

The IOP will use the focus this year gives us to continue to make the case for the investment in research and development, and support for physics skills, which will be crucial if we are to fully unlock the economic and social potential of the quantum sector

Further IYQ-themed events will take place at  QuAMP, the IOP’s biennial international conference on quantum, atomic and molecular physics in September. Activities culminate in a three-part celebration in November, with a quantum community event led by the IOP’s History of Physics and quantum Business and Innovation Growth (qBIG) special interest groups, a schools event at the Royal Institution, and a public celebration with a keynote speech from University of Surrey quantum physicist and broadcaster Jim Al-Khalili. “The UK and Ireland already have a globally important position in many areas of quantum research, with the UK, for instance, having established one of the world’s first National Quantum Technology Programmes,” explains Grinyer. “We will also be using the focus this year gives us to continue to make the case for the investment in research and development, and support for physics skills, which will be crucial if we are to fully unlock the economic and social potential of what is both a fascinating area of research, and a fast growing physics-powered business sector,” he adds.

Quantum careers

With the booming quantum marketplace, it’s no surprise that employers are on the hunt for skilled physicists to join the workforce. Indeed, there is a significant scarcity of skilled quantum professionals for the many roles across industry and academia. And with quantum research advancing everything from software and machine learning to materials science and drug discovery, your skills will be transferable across the board.

If you plan to join the quantum workforce, then choosing the right PhD programme, having the right skills for a specific role and managing risk and reward in the emerging quantum industry are all crucial. There are a number of careers events on the IYQ calendar where you can learn more about the many career prospects for physicists in the sector. In April, for example, the University of Bristol’s Quantum Engineering Centre for Doctoral Training is hosting a Careers in Quantum event, while the Economist magazine is hosting its annual Commercialising Quantum conference in May.

There will also be a special quantum careers panel discussion, including top speakers from the UK and the US, as part of our newly launched Physics World Live panel discussions in April. This year’s Physics World Careers 2025 guide has a special quantum focus, and there’ll also be a bumper, quantum-themed issue of the Physics World Briefing in June. The Physics World quantum channel will be regularly updated throughout the year so you don’t miss a thing.

Read all about it

IOP Publishing’s journals will include specially curated content, including a series of Perspectives articles – personal viewpoints from leading quantum scientists – in Quantum Science and Technology. The journal will also be publishing roadmaps in quantum computing, sensing and communication, as well as focus issues on topics such as quantum machine learning and technologies for quantum gravity and thermodynamics in quantum coherent platforms.

“Going right to the core of IOP Publishing’s own historic coverage we’re excited to be celebrating the IYQ through a year-long programme of articles in Physics World and across our journals, that will hopefully show a wide audience just why everyone should care about quantum science and the people behind it,” says Smith.

Of course, we at Physics World have a Schrödinger’s box full of fascinating quantum articles for the coming year – from historical features to the latest cutting-edge developments in quantum tech. So keep your eyes peeled.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Explore the quantum frontier: all about the International Year of Quantum Science and Technology 2025 appeared first on Physics World.

Helgoland: leading physicists to gather on the tiny island where quantum mechanics was born

In this episode of Physics World Stories, we celebrate the 100th anniversary of Werner Heisenberg’s trip to the North Sea island of Helgoland, where he developed the first formulation of quantum theory. Listen to the podcast as we delve into the latest advances in quantum science and technology with three researchers who will be attending a 6-day workshop on Helgoland in June 2025.

Featuring in the episode are: Nathalie De Leon of Princeton University, Ana Maria Rey from the University of Colorado Boulder, and Jack Harris from Yale University, a member of the programme committee. These experts share their insights on the current state of quantum science and technology: discussing the latest developments in quantum sensing, quantum information and quantum computing.

They also reflect on the significance of attending a conference at a location that is so deeply ingrained in the story of quantum mechanics. Talks at the event will span the science and the history of quantum theory, as well as the nature of scientific revolutions.

This episode is part of Physics World’s quantum coverage throughout 2025, designated by the UN as the International Year of Quantum Science and Technology (IYQ). Check out this article, for all you need to know about IYQ.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Helgoland: leading physicists to gather on the tiny island where quantum mechanics was born appeared first on Physics World.


Terahertz light produces a metastable magnetic state in an antiferromagnet

Physicists in the US, Europe and Korea have produced a long-lasting light-driven magnetic state in an antiferromagnetic material for the first time. While their project started out as a fundamental study, they say the work could have applications for faster and more compact memory and processing devices.

Antiferromagnetic materials are promising candidates for future high-density memory devices. This is because in antiferromagnets, the spins used as the bits or data units flip quickly, at frequencies in the terahertz range. Such rapid spin flips are possible because, by definition, the spins in antiferromagnets align antiparallel to each other, leading to strong interactions among the spins. This is different from ferromagnets, which have parallel electron spins and are used in today’s memory devices such as computer hard drives.

Another advantage is that antiferromagnets display almost no macroscopic magnetization. This means that bits can be packed more densely onto a chip than is the case for the ferromagnets employed in conventional magnetic memory, which do have a net magnetization.

A further attraction is that the values of bits in antiferromagnetic memory devices are generally unaffected by the presence of stray magnetic fields. However, Nuh Gedik of the Massachusetts Institute of Technology (MIT), who led the latest research effort, notes that this robustness can be a double-edged sword: the fact that antiferromagnet spins are insensitive to weak magnetic fields also makes them difficult to control.

Antiferromagnetic state lasts for more than 2.5 milliseconds

In the new work, Gedik and colleagues studied FePS3, which becomes an antiferromagnet below a critical temperature of around 118 K. By applying intense pulses of terahertz-frequency light to this material, they were able to control this transition, placing the material in a metastable magnetic state that lasts for more than 2.5 milliseconds even after the light source is switched off. While such light-induced transitions have been observed before, Gedik notes that they typically only last for picoseconds.

The technique works because the terahertz source stimulates the atoms in the FePS3 at the same frequency at which the atoms collectively vibrate (the resonance frequency). When this happens, Gedik explains that the atomic lattice undergoes a unique form of stretching. This stretching cannot be achieved with external mechanical forces, and it pushes the spins of the atoms out of their magnetically alternating alignment.

The result is a state in which the spins pointing in one direction outweigh those pointing in the other, transforming the originally antiferromagnetic material into one with a net magnetization. This metastable state becomes increasingly robust as the temperature of the material approaches the antiferromagnetic transition point. That is a sign that critical fluctuations near the phase transition point are a key factor in enhancing both the magnitude and lifetime of the new magnetic state, Gedik says.

A new experimental setup

The team, which includes researchers from the Max Planck Institute for the Structure and Dynamics of Matter in Germany, the University of the Basque Country in Spain, Seoul National University and the Flatiron Institute in New York, wasn’t originally aiming to produce long-lived magnetic states. Instead, its members were investigating nonlinear interactions among low-energy collective modes, such as phonons (vibrations of the atomic lattice) and spin excitations called magnons, in layered magnetic materials like FePS3. It was for this purpose that they developed a new experimental setup capable of generating strong terahertz pulses with a wide spectral bandwidth.

“Since nonlinear interactions are generally weak, we chose a family of materials known for their strong coupling between magnetic spins and phonons,” Gedik says. “We also suspected that, under such intense resonant excitation in these particular materials, something intriguing might occur – and indeed, we discovered a new magnetic state with an exceptionally long lifetime.”

While the researchers’ focus remains on fundamental questions, they say the new findings may enable a “significant step” toward practical applications for ultrafast science. “The antiferromagnetic nature of the material holds great potential for enabling faster and more compact memory and processing devices,” says Gedik’s MIT colleague Batyr Ilyas. He adds that the observed long lifetime of the induced state means that it can be explored further using conventional experimental probes used in spintronic technologies.

The team’s next step will be to study the nonlinear interactions between phonons and magnons more closely using two-dimensional spectroscopy experiments. “Second, we plan to demonstrate the feasibility of probing this metastable state through electrical transport experiments,” Ilyas tells Physics World. “Finally, we aim to investigate the generalizability of this phenomenon in other materials, particularly those exhibiting enhanced fluctuations near room temperature.”

The work is detailed in Nature.

The post Terahertz light produces a metastable magnetic state in an antiferromagnet appeared first on Physics World.

Why electrochemistry lies at the heart of modern technology

This episode of the Physics World Weekly podcast features a conversation with Colm O’Dwyer, who is professor of chemical energy at University College Cork in Ireland and president of the Electrochemical Society.

He talks about the role that electrochemistry plays in the development of modern technologies including batteries, semiconductor chips and pharmaceuticals. O’Dwyer chats about the role that the Electrochemical Society plays in advancing the theory and practice of electrochemistry and solid-state science and technology. He also explains how electrochemists collaborate with scientists and engineers in other fields including physics – and he looks forward to the future of electrochemistry.

Courtesy: American Elements

 

This podcast is supported by American Elements. Trusted by researchers and industries the world over, American Elements is helping shape the future of battery and electrochemistry technology.

The post Why electrochemistry lies at the heart of modern technology appeared first on Physics World.

China’s Experimental Advanced Superconducting Tokamak smashes fusion confinement record

A fusion tokamak in China has smashed its own record for maintaining a steady-state plasma. This week, scientists working on the Experimental Advanced Superconducting Tokamak (EAST) announced that they had produced a steady-state high-confinement plasma for 1066 seconds, breaking the machine’s 2023 record of 403 seconds.

EAST is an experimental superconducting tokamak fusion device located in Hefei, China. Operated by the Institute of Plasma Physics (ASIPP) at the Hefei Institute of Physical Science, it began operations in 2006. It is the first tokamak to contain a deuterium plasma using superconducting niobium-titanium toroidal and poloidal magnets.

EAST has recently undergone several upgrades, notably with new plasma diagnostic tools and a doubling in the power of the plasma heating system. EAST is also acting as a testbed for the ITER fusion reactor that is currently being built in Cadarache, France.

The EAST tokamak is able to maintain a plasma in the so-called “H‐mode”. This is the high-confinement regime that modern tokamaks, including ITER, employ. It occurs when the plasma undergoes intense heating by a neutral beam and results in a sudden improvement of plasma confinement by a factor of two.

In 2017 scientists at EAST broke the 100 second barrier for a steady-state H-mode plasma and then in 2023 achieved 403 seconds, a world record at the time. On Monday, EAST officials announced that they had almost tripled that time, delivering H-mode operation for 1066 seconds.

ASIPP director Song Yuntao notes that the new record is “monumental” and represents a “critical step” toward realizing a functional fusion reactor. “A fusion device must achieve stable operation at high efficiency for thousands of seconds to enable the self-sustaining circulation of plasma,” he says, “which is essential for the continuous power generation of future fusion plants”.

The post China’s Experimental Advanced Superconducting Tokamak smashes fusion confinement record appeared first on Physics World.

New candidate emerges for a universal quantum electrical standard

Physicists in Germany have developed a new way of defining the standard unit of electrical resistance. The advantage of the new technique is that because it is based on the quantum anomalous Hall effect rather than the ordinary quantum Hall effect, it does not require the use of applied magnetic fields. While the method in its current form requires ultracold temperatures, an improved version could allow quantum-based voltage and resistance standards to be integrated into a single, universal quantum electrical reference.

Since 2019, all base units in the International System of Units (SI) have been defined with reference to fundamental constants of nature. For example, the definition of the kilogram, which was previously based on a physical artefact (the international prototype kilogram), is now tied to Planck’s constant, h.

These new definitions do come with certain challenges. For example, today’s gold-standard way to experimentally determine the value of h (as well as the elementary charge e, another base SI constant) is to measure a quantized electrical resistance (the von Klitzing constant RK = h/e²) and a quantized voltage (the Josephson constant KJ = 2e/h). With RK and KJ pinned down, scientists can then calculate e and h.
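To make that last step concrete (this worked example is not from the article), the two defining relations can be inverted to give e = 2/(RK·KJ) and h = 4/(RK·KJ²). A minimal Python sketch using nominal values of the two constants:

# Nominal values of the two measured constants
R_K = 25812.80745        # von Klitzing constant, ohms
K_J = 483597.8484e9      # Josephson constant, hertz per volt

# Inverting R_K = h/e**2 and K_J = 2e/h:
e = 2 / (R_K * K_J)      # elementary charge
h = 4 / (R_K * K_J**2)   # Planck constant

print(f"e = {e:.6e} C")   # about 1.602e-19 C
print(f"h = {h:.6e} J s") # about 6.626e-34 J s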

To measure RK with high precision, physicists use the fact that it is related to the quantized values of the Hall resistance of a two-dimensional electron system (such as the ones that form in semiconductor heterostructures) in the presence of a strong magnetic field. This quantized change in resistance is known as the quantum Hall effect (QHE), and in semiconductors like GaAs or AlGaAs, it shows up at fields of around 10 Tesla. In graphene, a two-dimensional carbon sheet, fields of about 5 T are typically required.

The problem with this method is that KJ is measured by means of a separate phenomenon known as the AC Josephson effect, and the large external magnetic fields that are so essential to the QHE measurement render Josephson devices inoperable. According to Charles Gould of the Institute for Topological Insulators at the University of Würzburg (JMU), who led the latest research effort, this makes it difficult to integrate a QHE-based resistance standard with the voltage standard.

A way to measure RK at zero external magnetic field

Relying on the quantum anomalous Hall effect (QAHE) instead would solve this problem. This variant of the QHE arises from electron transport phenomena recently identified in a family of materials known as ferromagnetic topological insulators. Such materials conduct electricity along their (quantized) edge channels or surfaces, but act as insulators in their bulk. In these materials, spontaneous magnetization means the QAHE manifests as a quantization of resistance even at weak (or indeed zero) magnetic fields.

In the new work, Gould and colleagues made Hall resistance quantization measurements in the QAHE regime on a device made from V-doped (Bi,Sb)2Te3. These measurements showed that the relative deviation of the Hall resistance from RK at zero external magnetic field is just (4.4 ± 8.7) nΩ/Ω. The method thus makes it possible to determine RK at zero magnetic field with the needed precision — something Gould says was not previously possible.

The snag is that the measurement only works under demanding experimental conditions: extremely low temperatures (below about 0.05 K) and low electrical currents (below 0.1 µA). “Ultimately, both these parameters will need to be significantly improved for any large-scale use,” Gould explains. “To compare, the QHE works at temperatures of 4.2 K and electrical currents of about 10 µA, making its detection much easier and cheaper to operate.”

Towards a universal electrical reference instrument

The new study, which is detailed in Nature Electronics, was made possible thanks to a collaboration between two teams, he adds. The first is at Würzburg, which has pioneered studies on electron transport in topological materials for some two decades. The second is at the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, which has been establishing QHE-based resistance standards for even longer. “Once the two teams became aware of each other’s work, the potential of a combined effort was obvious,” Gould says.

Because the project brings together two communities with very different working methods and procedures, they first had to find a window of operations where their work could co-exist. “As a simple example,” explains Gould, “the currents of ~100 nA used in the present study are considered extremely low for metrology, and extreme care was required to allow the measurement instrument to perform under such conditions. At the same time, this current is some 200 times larger than that typically used when studying topological properties of materials.”

As well as simplifying access to the constants h and e, Gould says the new work could lead to a universal electrical reference instrument based on the QAHE and the Josephson effect. Beyond that, it could even provide a quantum standard of voltage, resistance, and (by means of Ohm’s law) current, all in one compact experiment.

The possible applications of the QAHE in metrology have attracted a lot of attention from the European Union, he adds. “The result is a Europe-wide EURAMET metrology consortium QuAHMET aimed specifically at further exploiting the effect and operation of the new standard at more relaxed experimental conditions.”

The post New candidate emerges for a universal quantum electrical standard appeared first on Physics World.

Nanocrystals measure tiny forces on tiny length scales

Two independent teams in the US have demonstrated the potential of using the optical properties of nanocrystals to create remote sensors that measure tiny forces on tiny length scales. One team is based at Stanford University and used nanocrystals to measure the micronewton-scale forces exerted by a worm as it chewed bacteria. The other team is based at several institutes and used the photon avalanche effect in nanocrystals to measure sub-nanonewton to micronewton forces. The latter technique could potentially be used to study forces involved in processes such as stem cell differentiation.

Remote sensing of forces at small scales is challenging, especially inside living organisms. Optical tweezers cannot make remote measurements inside the body, while fluorophores – molecules that absorb and re-emit light – can measure forces in organisms, but have limited range, problematic stability or, in the case of quantum dots, toxicity. Nanocrystals with optical properties that change when subjected to external forces offer a way forward.

At Stanford, materials scientist Jennifer Dionne led a team that used nanocrystals doped with ytterbium and erbium. When two ytterbium atoms absorb near-infrared photons, they can then transfer energy to a nearby erbium atom. In this excited state, the erbium can either decay directly to its lowest energy state by emitting red light, or become excited to an even higher-energy state that decays by emitting green light. These processes are called upconversion.

Colour change

The ratio of green to red emission depends on the separation between the ytterbium and erbium atoms, and on the separation between the erbium atoms, explains Dionne’s PhD student Jason Casar, who is lead author of a paper describing the Stanford research. Forces on the nanocrystal can change these separations and therefore affect that ratio.

The researchers encased their nanocrystals in polystyrene vessels approximately the size of an E. coli bacterium. They then mixed the encased nanoparticles with E. coli bacteria that were then fed to tiny nematode worms. To extract the nutrients, the worm’s pharynx needs to break open the bacterial cell wall. “The biological question we set out to answer is how much force is the bacterium generating to achieve that breakage?” explains Stanford’s Miriam Goodman.

The researchers shone near-infrared light on the worms, allowing them to monitor the flow of the nanocrystals. By measuring the colour of the emitted light when the particles reached the pharynx, they determined the force it exerted with micronewton-scale precision.

Meanwhile, a collaboration of scientists at Columbia University, Lawrence Berkeley National Laboratory and elsewhere has shown that a process called photon avalanche can be used to measure even smaller forces on nanocrystals. The team’s avalanching nanoparticles (ANPs) are sodium yttrium fluoride nanocrystals doped with thulium – and were discovered by the team in 2021.

The fun starts here

The sensing process uses a laser tuned off-resonance from any transition from the ground state of the ANP. “We’re bathing our particles in 1064 nm light,” explains James Schuck of Columbia University, whose group led the research. “If the intensity is low, that all just blows by. But if, for some reason, you do eventually get some absorption – maybe a non-resonant absorption in which you give up a few phonons…then the fun starts. Our laser is resonant with an excited state transition, so you can absorb another photon.”

This creates a doubly excited state that can decay radiatively directly to the ground state, producing an upconverted photon. Or its energy can be transferred to a nearby thulium atom, which becomes resonant with the excited-state transition and can excite more thulium atoms into resonance with the laser. “That’s the avalanche,” says Schuck. “We find on average you get 30 or 40 of these events – it’s analogous to a chain reaction in nuclear fission.”

Now, Schuck and colleagues have shown that the number of photons produced in each avalanche decreases when the nanoparticle experiences compressive force. One reason is that the phonon frequencies are raised as the lattice is compressed, making non-radiative decay energetically more favourable.

The thulium-doped nanoparticles decay by emitting either red or near infrared photons. As the force increases, the red dims more quickly, causing a change in the colour of the emitted light. These effects allowed the researchers to measure forces from the sub-nanonewton to the micronewton range – at which point the light output from the nanoparticles became too low to detect.

Not just for forces

Schuck and colleagues are now seeking practical applications of their discovery, and not just for measuring forces.

“We’re discovering that this avalanching process is sensitive to a lot of things,” says Schuck. “If we put these particles in a cell and we’re trying to measure a cellular force gradient, but the cell also happened to change its temperature, that would also affect the brightness of our particles, and we would like to be able to differentiate between those things. We think we know how to do that.”

If the technique could be made to work in a living cell, it could be used to measure tiny forces such as those involved in the extra-cellular matrix that dictate stem cell differentiation.

Andries Meijerink of Utrecht University in the Netherlands believes both teams have done important work that is impressive in different ways: Schuck and colleagues for unveiling a fundamentally new force-sensing technique, and Dionne’s team for demonstrating a remarkable practical application.

However, Meijerink is sceptical that photon avalanching will be useful for sensing in the short term. “It’s a very intricate process,” he says, adding, “There’s a really tricky balance between this first absorption step, which has to be slow and weak, and this resonant absorption”. Nevertheless, he says that researchers are discovering other systems that can avalanche. “I’m convinced that many more systems will be found,” he says.

Both studies are described in Nature.

The post Nanocrystals measure tiny forces on tiny length scales appeared first on Physics World.

IOP president Keith Burnett outlines a ‘pivotal’ year ahead for UK physics

Last year was the year of elections and 2025 is going to be the year of decisions.

After many countries, including the UK, Ireland and the US, went to the polls in 2024, the start of 2025 will see governments at the beginning of new terms, forced to respond swiftly to mounting economic, social, security, environmental and technological challenges.

These issues would be difficult to address at any given time, but today they come amid a turbulent geopolitical context. Governments are often judged against short milestones – the first 100 days or a first budget – but urgency should not come at the cost of thinking long-term, because the decisions over the next few months will shape outcomes for years, perhaps decades, to come. This is no less true for science than it is for health and social care, education or international relations.

In the UK, the first half of the year will be dominated by the government’s spending review. Due in late spring, it could be one of the toughest political tests for UK science, as the implications of the tight spending plans announced in the October budget become clear. Decisions about departmental spending will have important implications for physics funding, from research to infrastructure, facilities and teaching.

One of the UK government’s commitments is to establish 10-year funding cycles for key R&D activities – a policy that could be a positive improvement. Physics discoveries often take time to realise in full, but their transformational nature is indisputable. From fibre-optic communications to magnetic resonance imaging, physics has been indispensable to many of the world’s most impactful and successful innovations.

Emerging technologies, enabled by physicists’ breakthroughs in fields such as materials science and quantum physics, promise to transform the way we live and work, and create new business opportunities and open up new markets. A clear, comprehensive and long-term vision for R&D would instil confidence among researchers and innovators, and long-term and sustainable R&D funding would enable people and disruptive ideas to flourish and drive tomorrow’s breakthroughs.

Alongside the spending review, we are also expecting the publication of the government’s industrial strategy. The focus of the green paper published last year was an indication of how the strategy will place significance on science and technology in positioning the UK for economic growth.

If we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead

Physics-based industries are a foundation stone for the UK economy and are highly productive, as highlighted by research commissioned by the Institute of Physics, which publishes Physics World. Across the UK, the physics sector generates £229bn gross value added, or 11% of total UK gross domestic product. It creates a collective turnover of £643bn, or £1380bn when indirect and induced turnover is included.

Labour productivity in physics-based businesses is also strong at £84 300 per worker, per year. So, if physics is not at the heart of this effort, then the government’s mission of economic revival is in danger of failing to get off the launch pad.

A pivotal year

Another of the new government’s policy priorities is the strategic defence review, which is expected to be published later this year. It could have huge implications for physics given its core role in many of the technologies that contribute to the UK’s defence capabilities. The changing geopolitical landscape, and potential for strained relations between global powers, may well bring research security to the front of the national mind.

Intellectual property, and scientific innovation, are some of the UK’s greatest strengths and it is right to secure them. But physics discoveries in particular can be hampered by overzealous security measures. So much of the important work in our discipline comes from years of collaboration between researchers across the globe. Decisions about research security need to protect, not hamper, the future of UK physics research.

This year could also be pivotal for UK universities, as securing their financial stability and future will be one of the major challenges. Last year, the pressures faced by higher education institutions became apparent, with announcements of course closures, redundancies and restructures as a way of saving money. The rise in tuition fees has far from solved the problem, so we need to be prepared for more turbulence coming for the higher education sector.

These things matter enormously. We have heard that universities are facing a tough situation, and it’s getting harder for physics departments to exist. But if we don’t recognise the need to fund more physicists, we will miss so many of the opportunities that lie ahead.

As we celebrate the International Year of Quantum Science and Technology, which marks the centenary of the initial development of quantum mechanics by Werner Heisenberg, 2025 is a reminder of how the benefits of physics span decades.

We need to enhance all the vital and exciting developments that are happening in physics departments. The country wants and needs a stronger scientific workforce – just think about all those individuals who studied physics and now work in industries that are defending the country – and that workforce will be strongly dependent on physics skills. So our priority is to make sure that physics departments keep doing world-leading research and preparing the next generation of physicists that they do so well.

The post IOP president Keith Burnett outlines a ‘pivotal’ year ahead for UK physics appeared first on Physics World.

Why telling bigger stories is the only way to counter misinformation

If aliens came to Earth and tried to work out how we earthlings make sense of our world, they’d surely conclude that we take information and slot it into pre-existing stories – some true, some false, some bizarre. Ominously, these aliens would be correct. You don’t need to ask earthling philosophers, just look around.

Many politicians and influencers, for instance, are convinced that scientific evidence does not tell the real story about autism or AIDS, the state of the atmosphere or the legitimacy of elections, or even about aliens. Truth comes to light only when you “know the full story”, which will eventually reveal the scientific data to be deceptive or irrelevant.

To see how this works in practice, suppose you hear someone say that a nearby lab is leaking x picocuries of a radioactive substance, potentially exposing you to y millirems of dose. How do you know if you’re in danger? Well, you’ll instinctively start going through a mental checklist of questions.

Who’s speaking – scientist, politician, reporter or activist? If it’s a scientist, are they from the government, a university, or an environmental or anti-nuclear group? You might then wonder: how trustworthy are the agencies that regulate the substance? Is the lab a good neighbour, or did it cover up past incidents? How much of the substance is truly harmful?

Your answers to all these questions will shape the story you tell yourself. You might conclude: “The lab is a responsible organization and will protect me”. Or perhaps you’ll think: “The lab is a thorn in the side of the community and is probably doing weapons-related work. The leak’s a sign of something far worse.”

Perhaps your story will be: “Those environmentalists are just trying to scare us and the data indicate the leak is harmless”. Or maybe it’ll be: “I knew it! The lab’s sold out, the data are terrifying, and the activists are revealing the real truth”. Such stories determine the meaning of the picocuries and millirems for humans, not the other way around.

Acquiring data

Humans gain a sense of what’s happening in several ways. Three of them, to use philosophical language, are deferential, civic and melodramatic epistemology.

In “deferential epistemology”, citizens habitually take the word of experts and institutions about things like the dangers of picocuries and exposures of millirems. In his 1624 book New Atlantis, the philosopher Francis Bacon famously crafted a fictional portrait of an island society where deferential epistemology rules and people instinctively trust the scientific infrastructure.

Earthlings haven’t seen deferential epistemology in a while

We may think this is how people ought to behave. But Bacon, who was also a politician, understood that deference to experts is not automatic and requires constantly curating the public face of the scientific infrastructure. Earthlings haven’t seen deferential epistemology in a while.

“Civic epistemology”, meanwhile, is how people acquire knowledge in the absence of that curation. Such people don’t necessarily reject experts but hear their voices alongside many others claiming to know best how to pursue our interests and values. Civic epistemology is when we negotiate daily life not by first consulting scientists but by pursuing our concerns with a mix of habit, trust, experience and friendly advice.

We sometimes don’t, in fact, take scientific advice when it collides with how we already behave; we may smoke or drink, for instance, despite warnings not to. Or we might seek guidance from non-scientists about things like the harms of radiation.

Finally, what I call “melodramatic epistemology” draws on the word “melodrama”, a genre of theatre involving extreme plots, obvious villains, emotional appeal, sensational language and moral outrage (the 1939 film Gone with the Wind comes to mind).

A melodramatic lens can be a powerful and irresistible way for humans to digest difficult and emotionally charged events

Melodramas were once considered culturally insignificant, but scholars such as Peter Brooks from Yale University have shown that a melodramatic lens can be a powerful and irresistible way for humans to digest difficult and emotionally charged events. The clarity, certainty and passion provided by a melodramatic read on a situation tends to displace the complexities, uncertainties and dispassion of scientific evaluation and evidence.

One example from physics occurred at the Lawrence Berkeley Laboratory in the late 1990s when activists fought, successfully, for the closing of its National Tritium Labeling Facility (NTLF). As I have written before, the NTLF had successfully developed techniques for medical studies while releasing tritium emissions well below federal and state environmental standards.

Activists, however, used melodramatic epistemology to paint the NTLF’s scientists as villains spreading breast cancer throughout the area, and denounced them as making “a terrorist attack on the citizens of Berkeley”. One activist called the scientists “piano players in a nuclear whorehouse.”

The critical point

The aliens studying us would worry most about melodramatic epistemology. Melodramatic epistemology, though dangerous, is nearly impervious to being altered, for any contrary data, studies and expert judgment are considered to spring from the villain’s allies and therefore to incite rather than allay fear.

Two US psychologists – William Brady from Northwestern University and Molly Crockett from Princeton University – recently published a study of how and why misinformation spreads (Science 386 991). By analysing data from Facebook and Twitter and by conducting real experiments with participants, they found that sources of misinformation evoke more outrage than trustworthy sources. Worse still, the outrage encourages us to share the misinformation even if we haven’t fully read the original source.

This makes it hard to counter misinformation. As the authors tactfully conclude: “Outrage-evoking misinformation may be difficult to mitigate with interventions that assume users want to share accurate information”.

The best, and perhaps only, way to challenge melodramatic stories is to write bigger, more encompassing stories that reveal that a different plot is unfolding

In my view, the best, and perhaps only, way to challenge melodramatic stories is to write bigger, more encompassing stories that reveal that a different plot is unfolding. Such a story about the NTLF, for instance, would comprise story lines about the benefits of medical techniques, the testing of byproducts, the origin of regulations of toxins, the perils of our natural environment, the nature of fear and its manipulation, and so forth. In such a big story, those who promote melodramatic epistemology show up as an obvious, and dangerous, subplot.

If the aliens see us telling such bigger stories, they might not give up earthlings for lost.

The post Why telling bigger stories is the only way to counter misinformation appeared first on Physics World.

SMART spherical tokamak produces its first plasma

A novel fusion device based at the University of Seville in Spain has achieved its first plasma. The SMall Aspect Ratio Tokamak (SMART) is a spherical tokamak that can operate with a “negative triangularity” – the first spherical tokamak specifically designed to do so. Work performed on the machine could be useful when designing compact fusion power plants based on spherical tokamak technology.

SMART has been constructed by the University of Seville’s Plasma Science and Fusion Technology Laboratory. With a vessel dimension of 1.6 × 1.6 m, SMART has a 30 cm diameter solenoid wrapped around 12 toroidal field coils while eight poloidal field coils are used to shape the plasma.

Triangularity refers to the shape of the plasma relative to the tokamak. The cross section of the plasma in a tokamak is typically shaped like a “D”. When the straight part of the D faces the centre of the tokamak, it is said to have positive triangularity. When the curved part of the plasma faces the centre, however, the plasma has negative triangularity.

It is thought that negative triangularity configurations can better suppress plasma instabilities that expel particles and energy from the plasma, helping to prevent damage to the tokamak wall.
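For readers who want to see triangularity quantitatively, plasma shapes are often described with the analytic boundary parameterization R(θ) = R0 + a·cos(θ + δ·sin θ), Z(θ) = κ·a·sin θ, where δ is the triangularity. The Python sketch below uses illustrative numbers, not SMART’s actual design parameters, to show how the sign of δ shifts the top of the plasma towards or away from the centre column:

import numpy as np

R0, a, kappa = 0.4, 0.25, 1.8        # illustrative major radius, minor radius and elongation
theta = np.linspace(0, 2 * np.pi, 400)

def boundary(delta):
    """Analytic D-shaped plasma boundary; delta > 0 means positive triangularity."""
    R = R0 + a * np.cos(theta + delta * np.sin(theta))
    Z = kappa * a * np.sin(theta)
    return R, Z

for delta in (+0.5, -0.5):
    R, Z = boundary(delta)
    R_top = R[np.argmax(Z)]          # major-radius position of the top of the plasma
    if R_top < R0:
        side = "towards the centre column (the usual D shape, positive triangularity)"
    else:
        side = "away from the centre column (negative triangularity)"
    print(f"delta = {delta:+.1f}: plasma top at R = {R_top:.2f} m, shifted {side}")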

Last year, researchers at the University of Seville began to prepare the tokamak’s inner walls for a high pressure plasma by heating argon gas with microwaves. When those tests were successful, engineers then worked toward producing the first plasma.

“This is an important achievement for the entire team as we are now entering the operational phase,” notes SMART principal investigator Manuel García Muñoz. “The SMART approach is a potential game changer with attractive fusion performance and power handling for future compact fusion reactors. We have exciting times ahead.”

The post SMART spherical tokamak produces its first plasma appeared first on Physics World.

When charging quantum batteries, decoherence is a friend, not a foe

Devices like lasers and other semiconductor-based technologies operate on the principles of quantum mechanics, but they only scratch the surface. To fully exploit quantum phenomena, scientists are developing a new generation of quantum-based devices. These devices are advancing rapidly, fuelling what many call the “second quantum revolution”.

One exciting development in this domain is the rise of next-generation energy storage devices known as quantum batteries (QBs).  These devices leverage exotic quantum phenomena such as superposition, coherence, correlation and entanglement to store and release energy in ways that conventional batteries cannot. However, practical realization of QBs has its own challenges  such as reliance on fragile quantum states and difficulty in operating at room temperature.

A recent theoretical study by Rahul Shastri and colleagues from IIT Gandhinagar, India, in collaboration with researchers at China’s Zhejiang University and the China Academy of Engineering Physics takes significant strides towards understanding how QBs can be charged faster and more efficiently, thereby lowering some of the barriers restricting their use.

How does a QB work?

The difference between charging a QB and charging a mobile phone is that with a QB, both the battery and the charger are quantum systems. Shastri and colleagues focused on two such systems: a harmonic oscillator (HO) and a two-level system. While a two-level system can exist in just two energy states, a harmonic oscillator has an evenly spaced range of energy levels. These systems therefore represent two extremes – one with a discrete, bounded energy range and the other with a more complex, unbounded energy spectrum approaching a continuous limit – making them ideal for exploring the versatility of QBs.

In the quantum HO-based setup, a higher-energy HO acts as the charger and a lower-energy one as the battery. When the two are connected, or coupled, energy transfers from the charger to the battery. The two-level system follows the same working principle.  Such coupled quantum systems are routinely realized in experiments.

Using decoherence as a tool to improve QB performance

The study’s findings, which are published in npj Quantum Information, are both surprising and promising, illustrating how a phenomenon typically seen as a challenge in quantum systems – decoherence – can become a solution.

The term “decoherence” refers to the process where a quantum system loses its unique quantum properties (such as quantum correlation, coherence and entanglement). The key trigger for decoherence is quantum noise caused by interactions between a quantum system and its environment.

Since no real-world physical system is perfectly isolated, such noise is unavoidable, and even minute amounts of environmental noise can lead to decoherence. Maintaining quantum coherence is thus extremely challenging even in controlled laboratory settings, let alone industrial environments producing large-scale practical devices. For this reason, decoherence represents one of the most significant obstacles in advancing quantum technologies towards practical applications.

Shastri and colleagues, however, discovered a way to turn this foe into a friend. “Instead of trying to eliminate these naturally occurring environmental effects, we ask: why not use them to our advantage?” Shastri says.

The method they developed speeds up the charging process using a technique called controlled dephasing. Dephasing is a form of decoherence that usually involves the gradual loss of quantum coherence, but the researchers found that when managed carefully, it can actually boost the battery’s performance.

Dissipative effects, traditionally seen as a hindrance, can be harnessed to enhance performance

Rahul Shastri

To understand how this works, it’s important to note that at low levels of dephasing, the battery undergoes smooth energy oscillations. Too much dephasing, however, freezes these oscillations in what’s known as the quantum Zeno effect, essentially stalling the energy transfer. But with just the right amount of dephasing, the battery charges faster while maintaining stability. By precisely controlling the dephasing rate, therefore, it becomes possible to strike a balance that significantly improves charging speed while still preserving stability. This balance leads to quicker, more robust charging that could overcome challenges posed by environmental factors.
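This behaviour can be reproduced in a toy model: a two-level charger exchanging a single excitation with a two-level battery while the battery is subject to pure dephasing. The Python sketch below is a minimal illustration of that general mechanism, not the model used in the paper, and the coupling and dephasing rates are arbitrary illustrative values.

import numpy as np

# Single-qubit operators; |0> = ground, |1> = excited
sm = np.array([[0, 1], [0, 0]], dtype=complex)     # lowering operator
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Two-qubit operators: charger (left factor) and battery (right factor)
sm_c, sm_b = np.kron(sm, I2), np.kron(I2, sm)
sz_b = np.kron(I2, sz)
n_b = sm_b.conj().T @ sm_b                         # battery excitation number

g = 1.0                                            # charger-battery coupling; sets the time unit
H = g * (sm_c.conj().T @ sm_b + sm_b.conj().T @ sm_c)   # resonant excitation exchange

def rhs(rho, gamma):
    """Lindblad right-hand side with pure dephasing (rate gamma) on the battery."""
    L = np.sqrt(gamma / 2.0) * sz_b
    return (-1j * (H @ rho - rho @ H)
            + L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))

def charge_curve(gamma, t_max=10.0, dt=1e-3):
    """Battery excitation versus time, starting from an excited charger and an empty battery."""
    psi0 = np.kron([0, 1], [1, 0]).astype(complex)
    rho = np.outer(psi0, psi0.conj())
    pops = []
    for _ in range(int(t_max / dt)):
        pops.append(np.real(np.trace(n_b @ rho)))
        k1 = rhs(rho, gamma)
        k2 = rhs(rho + 0.5 * dt * k1, gamma)
        k3 = rhs(rho + 0.5 * dt * k2, gamma)
        k4 = rhs(rho + dt * k3, gamma)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.array(pops)

for gamma in (0.0, 2.0, 200.0):   # no, moderate and very strong dephasing, in units of g
    p = charge_curve(gamma)
    tail = p[int(0.8 * p.size):]
    print(f"gamma/g = {gamma:6.1f}: peak charge {p.max():.2f}, "
          f"final charge {p[-1]:.2f}, late-time wobble {tail.max() - tail.min():.2f}")

In this toy model the three regimes appear directly in the output: with no dephasing the stored energy oscillates back and forth and never settles, with a moderate dephasing rate it stabilizes quickly, and with a very large rate the transfer is almost frozen, the Zeno regime described above.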

“Our study shows how dissipative effects, traditionally seen as a hindrance, can be harnessed to enhance performance,” Shastri notes. This opens the door to scalable, robust quantum battery designs, which could be extremely useful for energy management in quantum computing and other quantum-enabled applications.

Implications for scalable quantum technologies

The results of this study are encouraging for the quantum-technology industry. According to Shastri, using dephasing to optimize the charging speed and stability of QBs not only advances fundamental understanding but also addresses practical challenges in quantum energy storage.

“Our proposed method could be tested on existing platforms such as superconducting qubits and NMR systems, where dephasing control is already experimentally feasible,” he says. These platforms offer experimentalists a tangible starting point for verifying the study’s predictions and further refining QB performance.

Experimentalists testing this theory will face challenges. Examples include managing additional decoherence mechanisms like amplitude damping and achieving the ideal balance of controlled dephasing in realistic setups. However, Shastri says that these challenges present valuable opportunities to refine and expand the proposed theoretical model for optimizing QB performance under practical conditions. The second quantum revolution is already underway, and QBs might just be the power source that charges our quantum future.

The post When charging quantum batteries, decoherence is a friend, not a foe appeared first on Physics World.

Microbeams plus radiosensitizers could optimize brain cancer treatment

Brain tumours are notoriously difficult to treat, resisting conventional treatments such as radiation therapy, where the deliverable dose is limited by normal tissue tolerance. To better protect healthy tissues, researchers are turning to microbeam radiation therapy (MRT), which uses spatially fractionated beams to spare normal tissue while effectively killing cancer cells.

MRT is delivered using arrays of ultrahigh-dose rate synchrotron X-ray beams tens of microns wide (high-dose peaks) and spaced hundreds of microns apart (low-dose valleys). A research team from the Centre for Medical Radiation Physics at the University of Wollongong in Australia has now demonstrated that combining MRT with targeted radiosensitizers – such as nanoparticles or anti-cancer drugs – can further boost treatment efficacy, reporting their findings in Cancers.

“MRT is famous for its healthy tissue-sparing capabilities with good tumour control, whilst radiosensitizers are known for their ability to deliver targeted dose enhancement to cancer,” explains first author Michael Valceski. “Combining these modalities just made sense, with their synergy providing the potential for the best of both worlds.”

Enhancement effects

Valceski and colleagues combined MRT with thulium oxide nanoparticles, the chemotherapy drug methotrexate and the radiosensitizer iododeoxyuridine (IUdR). They examined the response of monolayers of rodent brain cancer cells to various therapy combinations. They also compared conventional broadbeam orthovoltage X-ray irradiation with synchrotron broadbeam X-rays and synchrotron MRT.

Synchrotron irradiations were performed on the Imaging and Medical Beamline at the ANSTO Australian Synchrotron, using ultrahigh dose rates of 74.1 Gy/s for broadbeam irradiation and 50.3 Gy/s for MRT. The peak-to-valley dose ratio (PVDR, used to characterize an MRT field) of this set-up was measured as 8.9.

Using a clonogenic assay to measure cell survival, the team observed that synchrotron-based irradiation enhanced cell killing compared with conventional irradiation at the same 5 Gy dose (for MRT this is the valley dose; the peaks experience an 8.9 times higher dose), demonstrating the increased cell-killing effect of these ultrahigh-dose rate X-rays.
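To make the dose numbers explicit: with a 5 Gy valley dose and the measured PVDR of 8.9, the peak regions receive 44.5 Gy. The snippet below also estimates an area-averaged dose, assuming (purely for illustration, since the article only says the beams are tens of microns wide) a 50 µm peak width on the reported 400 µm spacing:

valley_dose_Gy = 5.0      # prescribed valley dose used in the study
pvdr = 8.9                # measured peak-to-valley dose ratio
peak_width_um = 50.0      # assumed microbeam width ("tens of microns" in the article)
spacing_um = 400.0        # centre-to-centre microbeam spacing reported in the study

peak_dose_Gy = valley_dose_Gy * pvdr
mean_dose_Gy = (peak_width_um * peak_dose_Gy
                + (spacing_um - peak_width_um) * valley_dose_Gy) / spacing_um

print(f"peak dose ~ {peak_dose_Gy:.1f} Gy")
print(f"area-averaged dose ~ {mean_dose_Gy:.1f} Gy for an assumed {peak_width_um:.0f} um peak width")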

Adding radiosensitizers further increased the impact of synchrotron broadbeam irradiation, with DNA-localized IUdR killing more cells than cytoplasm-localized nanoparticles. Methotrexate, meanwhile, halved cell survival compared with conventional irradiation.

The team observed that at 5 Gy, MRT showed equivalent cell killing to synchrotron broadbeam irradiation. Valceski explains that this demonstrates MRT’s tissue-sparing potential, by showing how MRT can maintain treatment efficacy while simultaneously protecting healthy cells.

MRT also showed enhanced cell killing when combined with radiosensitizers, with the greatest effect seen for IUdR and IUdR plus methotrexate. This local dose enhancement, attributed to the DNA localization of IUdR, could further improve the tissue-sparing capabilities of MRT by enabling a lower per-fraction dose to reduce patient exposure whilst maintaining tumour control.

Imaging valleys and peaks

To link the biological effects with the physical collimation of MRT, the researchers performed confocal microscopy (at the Fluorescence Analysis Facility in Molecular Horizons, University of Wollongong) to investigate DNA damage following treatment at 0.5 and 5 Gy. Twenty minutes after irradiation, they imaged fixed cells to visualize double-strand DNA breaks (DSBs), as shown by γH2AX foci (representing a nuclear DSB site).

Spatially fractionated beams Imaging DNA damage following MRT confirms that the cells’ biological responses match the beam collimation. The images show double-strand DNA breaks (green) overlaid on a nuclear counterstain (blue). (Courtesy: CC BY/Cancers 10.3390/cancers16244231)

The images verified that the cells’ biological responses corresponded with the MRT beam patterns, with the 400 µm microbeam spacing clearly seen in all treated cells, both with and without radiosensitizers.

In the 0.5 Gy images, the microbeam tracks were consistent in width, while the 5 Gy MRT tracks were wider as DNA damage spread from peaks into the valleys. This radiation roll-off was also seen with IUdR and IUdR plus methotrexate, with numerous bright foci visible in the valleys, demonstrating dose enhancement and improved cancer-killing with these radiosensitizers.

The researchers also analysed the MRT beam profiles using the γH2AX foci intensity across the images. Cells treated with radiosensitizers had broadened peaks, with the largest effect seen with the nanoparticles. As nanoparticles can be designed to target tumours, this broadening (roughly 30%) can be used to increase the radiation dose to cancer cells in nearby valleys.

“Peak broadening adds a novel benefit to radiosensitizer-enhanced MRT. The widening of the peaks in the presence of nanoparticles could potentially ‘engulf’ the entire cancer, and only the cancer, whilst normal tissues without nanoparticles retain the protection of MRT tissue sparing,” Valceski explains. “This opens up the potential for MRT radiosurgery, something our research team has previously investigated.”

Finally, the researchers used γH2AX foci data for each peak and valley to determine a biological PVDR. The biological PVDR values matched the physical PVDR of 8.9, confirming for the first time a direct relationship between physical dose delivered and DSBs induced in the cancer cells. They note that adding radiosensitizers generally lowered the biological PVDRs from the physical value, likely due to additional DSBs induced in the valleys.

The next step will be to perform preclinical studies of MRT. “Trials to assess the efficacy of this multimodal therapy in treating aggressive cancers in vivo are key, especially given the theragnostic potential of nanoparticles for image-guided treatment and precision planning, as well as cancer-specific dose enhancement,” senior author Moeava Tehei tells Physics World. “Considering the radiosurgical potential of stereotactic, radiosensitizer-enhanced MRT fractions, we can foresee a revolutionary multimodal technique with curative potential in the near future.”

The post Microbeams plus radiosensitizers could optimize brain cancer treatment appeared first on Physics World.

Wrinkles in space–time could remember the secrets of exploding stars

Permanent distortions in space–time caused by the passage of gravitational waves could be detectable from Earth. Known as “gravitational memory”, such distortions are predicted to occur most prominently when the core of a supernova collapses. Observing them could therefore provide a window into the death of massive stars and the creation of black holes, but there’s a catch: the supernova might have to happen in our own galaxy.

Physicists have been detecting gravitational waves from colliding stellar-mass black holes and neutron stars for almost a decade now, and theory predicts that core-collapse supernovae should also produce them. The difference is that unlike collisions, supernovae tend to be lopsided – they don’t explode outwards equally in all directions. It is this asymmetry – in both the emission of neutrinos from the collapsing core and the motion of the blast wave itself – that produces the gravitational-wave memory effect.

“The memory is the result of the lowest frequency aspects of these motions,” explains Colter Richardson, a PhD student at the University of Tennessee in Knoxville, US and co-lead author (with Haakon Andresen of Sweden’s Oskar Klein Centre) of a Physical Review Letters paper describing how gravitational-wave memory detection might work on Earth.

Filtering out seismic noise

Previously, many physicists assumed it wouldn’t be possible to detect the memory effect from Earth. This is because it manifests at frequencies below 10 Hz, where noise from seismic events tends to swamp detectors. Indeed, Harvard astrophysicist Kiranjyot Gill argues that detecting gravitational memory “would require exceptional sensitivity in the millihertz range to separate it from background noise and other astrophysical signals” – a sensitivity that she says Earth-based detectors simply don’t have.

Anthony Mezzacappa, Richardson’s supervisor at Tennessee, counters this by saying that while the memory signal itself cannot be detected, the ramp-up to it can. “The signal ramp-up corresponds to a frequency of 20–30 Hz, which is well above 10 Hz, below which the detector response needs to be better characterized for what we can detect on Earth, before dropping down to virtually 0 Hz where the final memory amplitude is achieved,” he tells Physics World.

The key, Mezzacappa explains, is a “matched filter” technique in which templates of what the ramp-up should look like are matched to the signal to pick it out from low-frequency background noise. Using this technique, the team’s simulations show that it should be possible for Earth-based gravitational-wave detectors such as LIGO to detect the ramp-up even though the actual deformation effect would be tiny – around 10-16 cm “scaled to the size of a LIGO detector arm”, Richardson says.
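The matched-filter idea itself is simple to sketch. The toy Python example below correlates a smooth “ramp-up” template with synthetic noisy data to recover a weak injected signal; the waveform shape, noise level and sample rate are invented for illustration and bear no relation to real detector data or to the templates used by the team.

import numpy as np

rng = np.random.default_rng(1)
fs = 4096                                    # sample rate in Hz (illustrative only)
t = np.arange(0, 8.0, 1 / fs)                # 8 s of synthetic data

# Toy "ramp-up" template: a smooth rise lasting about half a second
# (a stand-in shape, not a real core-collapse waveform)
t_tmpl = np.arange(0, 0.5, 1 / fs)
template = 0.5 * (1 + np.tanh((t_tmpl - 0.25) / 0.05))
template -= template.mean()                  # remove the DC offset

# Synthetic detector output: the ramp injected at t = 5 s, buried in Gaussian noise
noise_sigma = 2.0
data = noise_sigma * rng.standard_normal(t.size)
start = int(5.0 * fs)
data[start:start + template.size] += template

# Matched filter: slide the template along the data and express the output in noise units
mf = np.correlate(data, template, mode="valid")
snr = mf / (noise_sigma * np.sqrt(np.sum(template**2)))
t_best = t[np.argmax(snr)]

print(f"loudest matched-filter output: SNR = {snr.max():.1f} at t = {t_best:.2f} s (injected at 5.00 s)")

Even though the injected ramp is invisible in any single sample, the template correlation picks it out, which is the essence of how a known ramp-up shape can be dug out of low-frequency detector noise.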

The snag is that for the ramp-up to be detectable, the simulations suggest the supernova would need to be close – probably within 10 kiloparsecs (32,615 light-years) of Earth. That would place it within our own galaxy, and galactic supernovae are not exactly common. The last to be observed in real time was spotted by Johannes Kepler in 1604; though there have been others since, we’ve only identified their remnants after the fact.

Going to the Moon

Mezzacappa and colleagues are optimistic that multimessenger astronomy techniques such as gravitational-wave and neutrino detectors will help astronomers identify future Milky Way supernovae as they happen, even if cosmic dust (for example) hides their light for optical observers.

Gill, however, prefers to look towards the future. In a paper under revision at Astrophysical Journal Letters, and currently available as a preprint, she cites two proposals for detectors on the Moon that could transform gravitational-wave physics and extend the range at which gravitational memory signals can be detected.

The first, called the Lunar Gravitational Wave Antenna, would use inertial sensors to detect the Moon shaking as gravitational waves ripple through it. The other, known as the Laser Interferometer Lunar Antenna, would be like a giant, triangular version of LIGO with arms spanning tens of kilometres open to space. Both are distinct from the European Space Agency’s Laser Interferometer Space Antenna, which is due for launch in the 2030s, but is optimized to detect gravitational waves from supermassive black holes rather than supernovae.

“Lunar-based detectors or future space-based observatories beyond LISA would overcome the terrestrial limitations,” Gill argues. Such detectors, she adds, could register a memory effect from supernovae tens or even hundreds of millions of light-years away. This huge volume of space would encompass many galaxies, making the detection of gravitational waves from core-collapse supernovae almost routine.

The memory of something far away

In response, Richardson points out that his team’s filtering method could also work at longer ranges – up to approximately 10 million light-years, encompassing our own Local Group of galaxies and several others – in certain circumstances. If a massive star is spinning very quickly, or it has an exceptionally strong magnetic field, its eventual supernova explosion will be highly collimated and almost jet-like, boosting the amplitude of the memory effect. “If the amplitude is significantly larger, then the detection distance is also significantly larger,” he says.

Whatever technologies are involved, both groups agree that detecting gravitational-wave memory is important. It might, for example, tell us whether a supernova has left behind a neutron star or a black hole, which would be valuable because the reasons one forms and not the other remain a source of debate among astrophysicists.

“By complementing other multimessenger observations in the electromagnetic spectrum and neutrinos, gravitational-wave memory detection would provide unparalleled insights into the complex interplay of forces in core-collapse supernovae,” Gill says.

Richardson agrees that a detection would be huge and hopes that his work and that of others “motivates new investigations into the low-frequency region of gravitational-wave astronomy”.

The post Wrinkles in space–time could remember the secrets of exploding stars appeared first on Physics World.

‘Why do we have to learn this?’ A physics educator’s response to every teacher’s least favourite question

Several years ago I was sitting at the back of a classroom supporting a newly qualified science teacher. The lesson was going well, a pretty standard class on Hooke’s law, when a student leaned over to me and asked “Why are we doing this? What’s the point?”.

Having taught myself, this was a question I had been asked many times before. I suspect that when I was a teacher, I went for the knee-jerk “it’s useful if you want to be an engineer” response, or something similar. This isn’t a very satisfying answer, but I never really had the time to formulate a real justification for studying Hooke’s law, or physics in general for that matter.

Who is the physics curriculum designed for? Should it be designed for the small number of students who will pursue the subject, or subjects allied to it, at the post-16 and post-18 level? Or should we be reflecting on the needs of the overwhelming majority who will never use most of the curriculum content again? Only about 10% of students pursue physics or physics-rich subjects post-16 in England, and at degree level, only around 4000 students graduate with physics degrees in the UK each year.

One argument often levelled at me is that learning this is “useful”, to which I retort – in a similar vein to the student from the first paragraph – “In what way?” In the 40 years or so since first learning Hooke’s law, I can’t remember ever explicitly using it in my everyday life, despite being a physicist. Whenever I give a talk on this subject, someone often pipes up with a tenuous example, but I suspect they are in the minority. An audience member once said they consider the elastic behaviour of wire when hanging pictures, but I suspect that many thousands of pictures have been successfully hung with no recourse to F = –kx.

Hooke’s law is incredibly important in engineering but, again, most students will not become engineers or rely on a knowledge of the properties of springs, unless they get themselves a job in a mattress factory.

From a personal perspective, Hooke’s law fascinates me. I find it remarkable that we can see the macroscopic properties of materials being governed by microscopic interactions and that this can be expressed in a simple linear form. There is no utilitarianism in this, simply awe, wonder and aesthetics. I would always share this “joy of physics” with my students, and it was incredibly rewarding when this was reciprocated. But for many, if not most, my personal perspective was largely irrelevant, and they knew that the curriculum content would not directly support them in their future careers.

At this point, I should declare my position – I don’t think we should take Hooke’s law, or physics, off the curriculum, but my reason is not the one often given to students.

A series of lessons on Hooke’s law is likely to include: experimental design; setting up and using equipment; collecting numerical data using a range of devices; recording and presenting data, including graphs; interpreting data; modelling data and testing theories; devising evidence-based explanations; communicating ideas; evaluating procedures; critically appraising data; collaborating with others; and working safely.
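To make the “modelling data and testing theories” strand concrete, here is the sort of minimal analysis a class might run on its measurements, fitting F = kx to a set of spring-extension data (the numbers below are invented for illustration):

import numpy as np

# Hypothetical classroom data: masses hung from a spring and the measured extensions
masses_kg = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
extensions_m = np.array([0.012, 0.025, 0.036, 0.050, 0.061])

forces_N = masses_kg * 9.81                  # weight of each hanging mass

# Least-squares fit of F = k x (a straight line through the origin)
k = np.sum(forces_N * extensions_m) / np.sum(extensions_m**2)
residuals = forces_N - k * extensions_m

print(f"spring constant k = {k:.1f} N/m")
print(f"largest residual  = {np.abs(residuals).max() * 1000:.1f} mN")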

Science education must be about preparing young people to be active and critical members of a democracy, equipped with the skills and confidence to engage with complex arguments that will shape their lives. For most students, this is the most valuable lesson they will take away from Hooke’s law. We should encourage students to find our subject fascinating and relevant, and in doing so make them receptive to the acquisition of scientific knowledge throughout their lives.

At a time when pressures on the education system are greater than ever, we must be able to articulate and justify our position within a crowded curriculum. I don’t believe that students should simply accept that they should learn something because it is on a specification. But they do deserve a coherent reason that relates to their lives and their careers. As science educators, we owe it to our students to have an authentic justification for what we are asking them to do. As physicists, even those who don’t have to field tricky questions from bored teenagers, I think it’s worthwhile for all of us to ask ourselves how we would answer the question “What is the point of this?”.

The post ‘Why do we have to learn this?’ A physics educator’s response to every teacher’s least favourite question appeared first on Physics World.
