
A bursting bubble can make a puddle jump

Breaking the capillary length limitation: Jiangtao Cheng of Virginia Tech and collaborators have found a way to launch much larger water droplets into the air than scientists once thought possible. (Courtesy: Jiangtao Cheng)

On a quiet spring morning, when dew settles on leaves, something curious sometimes happens. A droplet sitting there peacefully will suddenly lift off. No wind. No vibration. Just a tiny leap into the air.

Physicists call this phenomenon droplet jumping. In simple terms, it means that a droplet lifts off from the surface it sits on. If a raindrop hits a leaf and rebounds upward, that rebound can also be considered droplet jumping.

While this may seem like a minor detail in fluid behaviour, removing liquid from surfaces is important for many technologies. When droplets detach from a contaminated surface, they can carry away particles, a process that forms the basis of self-cleaning materials. When droplets leave hot surfaces, they remove heat. And on cold surfaces, quickly removing droplets can help prevent ice buildup.

For years, scientists believed that there was a physical limit to how large these jumping droplets could be. A new study published in Nature has now shown that this limit can be broken, with the help of a bubble.

The research was headed up by Jiangtao Cheng’s lab at Virginia Tech, and performed in collaboration with researchers from the Hong Kong University of Science and Technology and Wuhan University of Technology.

A stubborn limit in droplet physics

Within a droplet, two forces compete constantly: surface tension and gravity.

Surface tension tries to pull the droplet into a sphere, which minimizes its surface area and, therefore, its energy. Gravity, meanwhile, pulls the droplet downward, flattening it against the surface.

The balance between these two forces defines the so-called capillary length – which for water is 2.7 mm. Below this length, surface tension dominates and droplets can sometimes propel themselves upward; above it, gravity takes over.
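The balance above is usually written as the capillary length λ = √(γ/ρg), where γ is the surface tension and ρ the density. As a quick sanity check, here is a short calculation using approximate textbook values for water (an illustrative sketch, not code from the study):

```python
from math import sqrt

def capillary_length(surface_tension, density, g=9.81):
    """Length scale at which surface tension and gravity balance: sqrt(gamma / (rho * g))."""
    return sqrt(surface_tension / (density * g))

# Approximate room-temperature values for water
lc = capillary_length(surface_tension=0.072, density=1000.0)
print(f"capillary length of water: {lc * 1000:.1f} mm")  # ~2.7 mm
```

Plugging in γ ≈ 0.072 N/m and ρ ≈ 1000 kg/m³ recovers the 2.7 mm figure quoted above.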

This balance has long been a fundamental barrier in the field of self-propelled droplet jumping. “For droplets larger than the capillary length, gravity dominates,” Cheng tells Physics World. “Simply releasing surface energy from shape relaxation is no longer sufficient to generate enough upward momentum for jumping.”

That is why most previous studies have observed droplets no larger than about 3 mm jumping on their own.

Inspiration from nature

The idea behind the new research began with observations in nature. First author Wenge Huang, who grew up in rural South China, often saw dew droplets on lotus leaves containing tiny air bubbles. Occasionally, when those bubbles burst, the droplets moved.

Years later, that observation led to a question: “could a bubble trapped inside a droplet provide the extra energy needed for jumping?”

A bubble-powered launch

To test this idea, the researchers placed a water droplet on a superhydrophobic surface, which strongly repels water. They then injected air into the droplet using a fine needle, forming a bubble inside the liquid. After a short time, the bubble burst.

High-speed cameras captured what happened next: the droplet lifted cleanly off the surface.

What surprised the researchers most was that droplets nearly 1 cm wide were able to jump – far exceeding the previously accepted capillary-length limit.

A bubble inside the droplet creates additional air–liquid interfaces, increasing the system’s stored surface energy while adding almost no mass. When the bubble bursts, that energy is released as capillary waves that focus momentum upward.

“Embedding a bubble increases the system’s surface energy without increasing its weight,” explains Cheng.
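To get a feel for the scale of this mechanism, here is a rough back-of-envelope estimate. All the numbers (bubble and droplet radii, and treating the stored interfacial energy as an upper bound on lift) are illustrative assumptions, not values from the paper:

```python
from math import pi

# Illustrative assumptions: a 2 mm-radius air bubble inside a 5 mm-radius water droplet
gamma = 0.072      # surface tension of water, N/m
rho = 1000.0       # density of water, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2
r_bubble = 2e-3    # bubble radius, m
R_drop = 5e-3      # droplet radius, m

# The bubble adds an extra air-liquid interface of area 4*pi*r^2 ...
extra_surface_energy = gamma * 4 * pi * r_bubble**2  # J

# ... while barely changing the liquid mass (the bubble is air)
mass = rho * (4 / 3) * pi * (R_drop**3 - r_bubble**3)  # kg

# If all of the stored energy went into lifting the droplet, h = E / (m * g)
jump_height = extra_surface_energy / (mass * g)  # m
print(f"stored surface energy ~ {extra_surface_energy * 1e6:.1f} microjoules")
print(f"upper-bound jump height ~ {jump_height * 1e3:.2f} mm")
```

Even this crude upper bound shows a few microjoules of interfacial energy is enough to lift a half-gram droplet a measurable distance, which is the essence of the energy argument.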

Small bubbles, strong possibilities

The researchers also found that the mechanism was extremely efficient, converting more than 90% of the released energy into the droplet’s upward motion – well above the efficiency of many conventional droplet-propulsion methods.

The implications extend beyond basic physics; the discovery could help improve self-cleaning surfaces, heat transfer systems and anti-icing technologies. The bubble-burst process can also create directional liquid jets, which could be useful for microscale 3D printing and material deposition.

In simple terms, the study revealed something unexpected. A single bursting bubble can launch a much larger droplet than scientists once thought possible, even at the centimetre scale.

The post A bursting bubble can make a puddle jump appeared first on Physics World.

  •  

Droplet scientists push the boundary between living and non-living matter

In this episode of the Physics World Weekly podcast, we hear from a trio of scientists with a common interest in the physics of droplets. Specifically, Joe Forth, Rob Malinowski and Giorgio Volpe share a fascination with droplets that are “animate” – that is, capable of responding to their surroundings in ways that resemble the behaviour of living organisms.

As they explain in the podcast, systems must tick three boxes to qualify as animate. First, they must be active, able to use energy from their environment to do work and perform tasks. Second, they must be adaptive, able to move between different dynamical states in response to changes to their environment or their own internal states. Finally, they must be autonomous, able to process multiple inputs and choose how to respond to them without intervention from the outside world.

Incorporating all these behaviours into a droplet – or a system of many droplets – is challenging. The boundary between autonomous and non-autonomous systems is proving especially hard to overcome, and Volpe, Malinowski and Forth have a friendly disagreement over whether any droplet-based system has managed it yet.

Crossing disciplinary borders

Part of the challenge, they say, is that the field crosses disciplinary borders. Although Volpe thinks the community of droplet researchers is getting better at finding a common vocabulary for discussions, Forth jokes that it is still the case that “the chemists are scared of physics, the physicists are scared of chemists, everyone is scared of biology”. The potential rewards of overcoming these fears are great, however, with possible future applications of animate droplets ranging from consumer products such as deodorant to oil spill clean-up.

This discussion is based on a Perspective article that Volpe (a professor of soft matter in the chemistry department at University College London, UK), Malinowski (a research fellow in soft matter physics in the same department) and Forth (a colloid scientist and lecturer in the chemistry department at the University of Liverpool, UK) wrote for the journal EPL, which sponsors this episode of the podcast.


  •  

The American Physical Society’s 2026 Global Physics Summit opens in Denver

The Global Physics Summit (GPS) bills itself as “the world’s largest physics research conference”. Organized by the American Physical Society (APS), it combines the previously separate APS March and April meetings, with at least 14,000 people expected to attend this year’s event in Denver, Colorado, which has the theme “science for a shared future”.

The two APS meetings (especially APS March) have long been pilgrimages for physicists. They’re a chance to meet people whose papers you’ve read, learn about new research, land a dream job or perhaps decide what your future physics career should look like. They offer unparalleled opportunities for gossiping, networking and making your name.

Sometimes they even host extraordinary announcements, such as in 2023 when one group claimed to have discovered room-temperature superconductors, or in 1987 when several groups really did present the first data on high-temperature ones.

Due to the current state of US politics, however, physicists from many countries may well have second thoughts about travelling to this and other scientific meetings in the US.

Indeed, if you’re from one of almost 40 nations to which the US government has partially or fully suspended issuing visas – supposedly “to protect the security of the United States” – you probably won’t be able to get into the country at all.

Among the countries affected by the Trump administration’s ban is Ethiopia, which is home to people like the physicist Mulugeta Bekele, who almost single-handedly kept Ethiopian physics alive in the 1970s and 1980s despite being jailed and tortured.

As Robert P Crease recounts in his latest feature, Mulugeta was awarded the APS’s Sakharov human-rights prize in 2012, picking up his award at that year’s APS March meeting in Boston. Would Mulugeta, I wonder, be able to enter the US in current circumstances?

One US physicist told me that outsiders should respond to the situation in America by boycotting the US entirely. To me, that’s a step too far, not least because breaking contact would show a lack of solidarity with US-based scientists suffering from funding cuts or worse. After all, physics is a global enterprise, as two recent Physics World articles make clear.

The first is a feature about quantifying the environmental impact of military conflicts by Ben Skuse. Numbers are hard to come by, but according to a 2022 estimate extrapolated from the small number of nations that do share their data, the total military carbon footprint is about 5.5% of global emissions. This would make the world’s militaries the fourth biggest carbon emitter if they were a nation.

In another feature, Michael Allen examines how climate change could trigger extreme changes in the activity of earthquakes and volcanoes. Worryingly, increased volcanic eruptions not only contribute to the build-up of greenhouse gases but also create other problems. In particular, a warming climate melts ice caps, lowering surface loads and potentially causing more earthquakes to occur.

Both issues – and many more besides – will only be solved through global, interdisciplinary collaborations. As the theme of the GPS quite rightly puts it, we need science for a shared future.

That’s why it’s great that the APS, along with AIP Publishing and IOP Publishing – which together form the Purpose-led Publishing (PLP) Coalition – is hosting a network of 23 satellite events in Africa, Asia and South America to expand participation in this year’s GPS.

PLP’s satellite hubs, which will take place both in person and online, aim to let researchers engage with the summit programme, contribute to discussions, and take part in locally organised workshops and presentations.

Taking place in countries ranging from Brazil and Benin to the Philippines and Pakistan, the events will host livestreamed and recorded content from Denver as well as offering debates, expert-led sessions and opportunities for networking.

One event will be held in Ethiopia, which, I hope, Mulugeta at least will be pleased to hear.


  •  

Interplaying hazards: can you solve our crossword on geophysical processes?

See how much you know about the subject by trying our interactive crossword. Most of the clues are based on the article, but there are a few additional brain teasers thrown in. If you’re feeling stuck, check out the “assist” menu for help.

If you would like to sponsor a puzzle on Physics World please contact Edward Jost at: edward.jost@ioppublishing.org.


  •  

Lunar magnetic field mystery may finally have an explanation

When the Apollo astronauts returned from the Moon, they brought a puzzle back with them. Some of the rocks they collected were so strongly magnetic, it implied that the Moon’s magnetic field must have been stronger than the Earth’s when the rocks formed 3.9‒3.5 billion years ago. “That doesn’t make any sense with the physics that we understand about how planets generate magnetic fields,” says Claire Nichols, a planetary geologist at the University of Oxford, UK.

Nichols and her Oxford colleagues Jon Wade and Simon N Stephenson have now identified a possible explanation. The key, they say, lies in the rocks’ composition, which happens to provide ideal spacecraft landing sites, leading to sampling bias. “It was a proper kind of Eureka moment,” Nichols says.

The lunar dynamo

The magnetic fields of planets and moons stem from convective currents in their largely iron cores. Scientists expect that objects with smaller cores, such as the Moon, will have lower magnetic field strengths. But measurements of the Apollo samples suggested that the magnetic field strength might, in some cases, have exceeded 100 μT – higher than the typical value of 40 μT on the surface of the Earth. It’s as if an AA battery were somehow powering a fridge.

“The dynamo modelling community have been trying to come up with all sorts of mechanisms to give you these really strong fields,” Nichols tells Physics World.

When Nichols mentioned this problem to Wade, a petrologist, his response intrigued her. “He said, kind of as a throwaway comment, ‘Have you looked to see if there’s any link between the composition and the intensities?’”

Upon inspecting the data, Nichols realized that Wade could be onto something. While all the lunar basalt samples with high magnetization contained large quantities of titanium, samples with low magnetization contained little.

A possible mechanism

Other researchers had previously suggested a process that could have supercharged the Moon’s dynamo, boosting the magnetization of titanium-bearing basalt in the process. When the Moon formed, an ocean of molten magma developed that gradually crystallized into today’s lunar mantle. The last material to solidify was a titanium-rich mineral called ilmenite. Solid ilmenite is incredibly dense, so once it solidified, it sank towards the Moon’s core.

According to the hypothesis, heat transfer across the core-mantle boundary then pushed the ilmenite to its melting point and increased the local temperature gradient, thereby boosting convection and, by extension, magnetic field strength. This means that the ilmenite-bearing rocks supercharged the dynamo behind the Moon’s magnetic field and became unusually highly magnetized in the process. Eventually, volcanic activity brought the rocks to the lunar surface, where the Apollo astronauts collected them.

The problem with this explanation, Nichols says, is that the heat flux at the boundary would only be raised for brief periods, meaning that by this mechanism, only two in every thousand Apollo samples would be strongly magnetized. The real figure is roughly half.

A further role for heat transfer?

Nichols and her colleagues therefore dug deeper into the process. They realized that while the period of melting was brief, it played a crucial role in creating the samples the Apollo astronauts found. “Those samples are all being erupted only at the times where the heat flux is high,” Nichols tells Physics World. And when they eventually made their way to the lunar surface, they did so as part of basaltic flows, which happen to make perfect landing sites for spacecraft.

Case solved? Not quite. According to widely accepted theories of convection in the lunar mantle, the ilmenite lumps could not have got as far as the boundary between the core and mantle, because if they did, they would have lacked the buoyancy to rise again. Still, John Tarduno, whose research at the University of Rochester, US, centres on the origins of Earth’s dynamo, describes Nichols and colleagues’ ideas as “intriguing and certainly worth further consideration through data collection and modelling”.

Tarduno, who was not involved in this work, adds that he isn’t sure that core heat flux alone would ensure that the lunar core once had an intermittent strong dynamo. “The work should motivate numerical dynamo simulations as well as modelling of mantle evolution to test the authors’ ideas,” he says.

Nichols is up for the challenge. By studying additional Apollo samples, together with new ones from the Artemis and Chang’e missions to other parts of the Moon, she aims to determine whether magnetization intensity really does correlate with titanium content, and thereby lay the mystery to rest.

The study appears in Nature Geoscience.


  •  

Licensing puts the power into nuclear fusion

Superheated: A growing number of companies are aiming to build compact reactors that will deliver electricity from nuclear fusion (Credit: shutterstock/Love Employee)

Nuclear fusion has long held the promise of providing an unlimited supply of clean energy, but turning such a compelling concept into a practical reality has always seemed just beyond reach. That could be about to change, with a new wave of commercial operators developing compact nuclear reactors that they believe could be providing the grid with useful amounts of electricity within the next 10 years.

Leading the way is the US, where a combination of federal grants and private capital is fuelling the drive towards commercial production. One company grabbing the headlines is Helion, which has broken ground on a power plant that is due to supply 50 MW of power to Microsoft by 2028. Commonwealth Fusion Systems, set up with the backing of the Massachusetts Institute of Technology, has also announced an agreement with Google that trades an early strategic investment for 200 MW of power when the company’s first reactor comes online in the early 2030s.

Such commercial interest has been buoyed by a clarification in the licensing regime, at least within the US. In 2023 the Nuclear Regulatory Commission (NRC), the federal agency responsible for nuclear safety, ruled that fusion reactors need not be governed by the highly restrictive framework that applies to existing power plants based on nuclear fission. Instead, fusion developers must comply with the part of the code that is primarily focused on the handling of radioactive material.

“That was a big win for the industry,” says Steve Bump, an expert in radiation safety and licensing at consultancy firm Dade Moeller, part of the NV5 group. “Fusion is a much safer process because there is no spent fuel to deal with and there is no risk of the reaction running out of control. In the event of a system failure, everything just stops.”

Growth industry

Almost 50 companies are now actively involved in fusion development and research within the US, while others are active in the UK, China and Europe. Different reactor designs are being pursued, but each relies on heating a plasma containing deuterium and tritium to extreme temperatures and then confining the superheated plasma. When the light atomic nuclei collide and fuse together – which requires the plasma to reach temperatures above 100 million degrees Celsius – the nuclear reaction releases helium and high-energy neutrons, along with a vast amount of energy.

Nuclear fusion has already been shown to deliver intense bursts of energy that exceed the power needed to generate and sustain the plasma, but no-one has yet managed to produce a steady supply of electricity from the process. “The fusion industry is often characterized as a race,” says Bump. “There are many new companies that are aiming to build a commercially viable power plant that can be scaled up and replicated in multiple locations.”

Amid this rapid expansion, one upshot from the NRC ruling is that state-level regulators now have the authority to award licences for fusion reactors, provided that they follow the framework set out by the federal agency. But these state regulators are more accustomed to issuing licences to healthcare providers or research institutes that need to handle small amounts of radioactive material, and they are often wary of applications from fusion developers that ask for large quantities of radioactive tritium. “The amounts required for fusion can produce thousands and thousands of curies, while most other applications need less than a microcurie,” says Bump. “That makes it very different from a licensing standpoint, and the state agencies don’t have much experience with activities that use that much material. It makes them nervous.”

A big priority for them is to ensure that people in and around the plant are safe from any exposure, and we can help to ensure that the information provided by the company is clear, thorough and accurate.

Bump and his colleagues can help fusion companies to reassure the state regulators that all the evaluations have been done correctly. “Each state agency is a little different, and we need to work with each one to find out what they need and what they will accept,” adds Bump. “They need to consider the impact of the facility on public safety and the local environment, and they are going to ask questions before they are confident enough to issue a licence.”

That abundance of caution means that each application must be customized to address the concerns of each regulator. One area that receives particular scrutiny is the amount of shielding needed to protect people from the energetic neutrons produced by the fusion reaction. Slowing down and absorbing these neutral particles is a difficult process, requiring a multi-stage strategy that typically includes water-cooled steel and walls made of reinforced concrete.

As part of the licence application, companies need to demonstrate that their shielding mechanisms reduce the radiation dose to acceptable levels, both for people working inside the facility and those living and working in the neighbourhood. “We can review the shielding evaluations produced by companies before they are submitted to the state regulators,” says Bump. “A big priority for them is to ensure that people in and around the plant are safe from any exposure, and we can help to ensure that the information provided by the company is clear, thorough and accurate.”

Practical advice

The experts at Dade Moeller can also help fusion developers to make a realistic assessment of the amount of tritium they will need, since any licence will place a limit on how much radioactive material can be held within the facility. In addition, they can advise companies on how to establish and document failsafe procedures for storing and using tritium, along with real-time monitoring systems to ensure that emissions of tritium gas are kept within regulated limits. “We also look at the potential dose consequences if there is an accidental release, along with any emergency planning that may be needed if any radioactive material does escape,” adds Bump.

As well as providing the technical documentation needed by the regulators, fusion companies need to gain the support of local residents and businesses. Outreach events and public meetings are critical to explain how the technology works, openly discuss the risks and mitigation strategies, and highlight the benefits to the surrounding community. “We have attended some of the public meetings where people have had the opportunity to ask questions and voice their concerns,” says Bump. “We can help companies to prepare helpful and informative answers, particularly when questions are submitted prior to the meeting.”

If these efforts are successful, many local communities welcome the economic boost that could be produced by a commercial power plant, such as the creation of highly skilled jobs and the potential to attract other businesses to the area. Several fusion companies are planning to build their production facilities on the sites of previous coal-fired power stations, potentially breathing new life into small cities suffering from a post-industrial malaise.

These sites also provide prospective commercial operators with easy access to the existing electrical infrastructure. “It’s convenient for them because there is no need to install new transmission lines,” says Bump. “If they can make electricity, they can simply connect to the grid through the existing substation.”

Most commercial developers are currently building and testing pilot machines, with commercial production expected in the 2030s. As they make that transition, Bump and his colleagues can provide the expertise needed to navigate the licensing requirements across different states. “We can offer advice on how to get started, and how to set up a framework for radiation protection that will support companies as they scale up their operations,” says Bump. “It’s a growing industry, and we are here to help.”

 


  •  

Celebrating 100 years of physics at Tsinghua University

Can you tell us about your career in physics?

My academic path studying physics at Tsinghua University began in 1981, when I completed a Bachelor’s and then a Master’s before earning a PhD in 1992. I then did a postdoc at the Central Iron & Steel Research Institute in Beijing before returning to Tsinghua University in 1994 as a faculty member in the physics department.

Have you always studied and worked in China?

During my time at Tsinghua I carried out two research visits abroad, first at the University of Minnesota from 1996 to 1999 and then at the University of California, Berkeley from 2002 to 2003.

What is your research focus?

My career has been centred on employing and developing theoretical computational methods to understand, predict and design the physical properties of materials from the microscopic level of atoms and electrons. My work is an attempt to use a “computational microscope” to probe the fundamental nature of materials and sketch blueprints for new ones. This journey from fundamental theory to potential application has been continuously challenging and immensely rewarding.

Can you explain some examples?

One is in the theoretical study of topological quantum materials. We have performed theoretical work predicting the potential for the quantum spin Hall effect in two-dimensional systems and we have explored new states of matter such as topological semimetals. Another avenue of research is on the physics of low-dimensional and artificial microstructures. My group has a long-standing interest in the electronic structure, magnetic properties, and optical responses of low-dimensional systems like graphene and two-dimensional magnetic materials. Recently, our team discovered a novel spin chirality-driven nonlinear optical effect in a 2D magnetic material.

Are you using AI in this endeavour?

Yes. A significant recent focus is pioneering the integration of artificial intelligence with computational materials science. We are developing deep-learning models that are compatible with mainstream computational frameworks to increase the efficiency of simulating complex material systems and accelerate the discovery of new materials.

What areas of physics research is Tsinghua active in?

Our department boasts a robust and comprehensive research portfolio. Our research can be mainly outlined as three core directions. The first is condensed-matter physics, which has historically been one of our largest and most prominent areas. Research here spans from fundamental quantum phenomena to materials design for future technologies.

Experimentally we work in areas such as topological quantum materials, high-temperature superconductivity, two-dimensional systems, and novel magnetic phenomena. The recent experimental discovery of the quantum anomalous Hall effect at Tsinghua is one example. Theoreticians, including my group, focus on predicting new quantum states and understanding complex electronic behaviours using first-principles calculations and model analysis.

A more diverse international community brings essential perspectives that challenge assumptions, spark innovation and elevate our collective work to a global standard

What about the other two areas?

The second area is atomic, molecular, and optical physics. Key topics include ultra-cold atoms for quantum simulation of complex many-body problems, quantum optics and quantum communication and precision measurement science. Work here often provides the physical platforms and techniques that enable advances in quantum-information science.

The other area is nuclear and particle physics. In particle physics, our faculty and students work in major international collaborations such as those at the Large Hadron Collider. Besides these core directions, we also have research programmes in astrophysics and cosmology and in biophysics. The emergent field of quantum-information science connects nearly all of these areas, making it a defining feature of our current research environment.

Are there some areas of physics that Tsinghua might increase its efforts in?

One is the integration of artificial intelligence and machine learning with fundamental physics research. In my own field of computational materials science, we are already using AI to accelerate the discovery of new quantum materials and predict complex properties with unprecedented speed. This approach should be expanded and deepened across the department — from using AI to analyse data from particle colliders and gravitational-wave detectors, to developing new algorithms for quantum many-body problems and astrophysical simulations.

Any other areas?

We must also intensify our efforts in the development and application of quantum technologies. We already have excellent groups in quantum information, quantum optics and quantum materials so the next step is to combine these strengths towards the engineering of functional quantum systems.

What are some of the major international institutions that Tsinghua collaborates with?

Internationally, our researchers are embedded in several “big science” projects such as the XENON collaboration for direct dark-matter detection, particle physics experiments like ATLAS, CMS and FASER at CERN as well as the LIGO collaboration in gravitational-wave astronomy.

What about those closer to home?

Domestically, we work with the Institute of Physics at the Chinese Academy of Sciences and the Beijing Academy of Quantum Information Sciences, particularly in areas like condensed matter and quantum science. We also value industry partnerships, a notable example being our long-standing collaboration with Foxconn, which formed the joint Foxconn Nanotechnology Center within our department.

How many students and staff are there in Tsinghua’s physics department?

We have an academic community of more than 900 people: 85 faculty members, around 100 staff members, 420 graduate students and 320 undergraduate students.

How many foreign staff and students do you have?

We currently have four foreign professors together with 11 international undergraduates and five international PhD candidates – from Malaysia, Germany, Belarus, Russia, and Iran.

Would you like to see these numbers increase?

Yes, but my emphasis is more on qualitative enhancement than just quantitative increase. A more diverse international community brings essential perspectives that challenge assumptions, spark innovation and elevate our collective work to a global standard. We are working to create an even more welcoming and supportive environment – through dedicated discussions on internationalization, fostering research collaborations, and hosting global conferences.

I hope we are known not just for our discoveries, but for building essential research “bridges” that solve big problems

Why is Tsinghua an attractive place to work?

Its appeal lies not in any single attribute, but in a unique ecosystem that fosters research and innovation. First is Tsinghua’s strength across science and engineering, which creates a natural incubator for interdisciplinary work. My own research, particularly in integrating advanced computational methods with materials discovery, has been significantly accelerated by collaboration with leading experts in adjacent fields.

Second is the balance of academic freedom and responsibility. The university provides substantial intellectual freedom and long-term support, allowing researchers to pursue high-risk, fundamental questions without being bound solely by short-term deliverables. Coupled with this freedom is a profound sense of responsibility to contribute to national and global scientific efforts, an ethos deeply embedded in Tsinghua’s tradition.

Third is the quality of the students. Engaging with some of China’s most talented and driven young minds is perhaps the greatest privilege. Their curiosity, rigour and fresh perspectives constantly challenge and renew my own thinking. Mentoring them from promising undergraduates to independent researchers is a core part of the scientific legacy we build here.

What events do you have planned to mark the centenary of physics at Tsinghua?

We have a number of activities planned, including the publication of an updated departmental history book that formally documents our century-long journey from 1926 to the present, as well as producing a centennial documentary film. We also have an alumni interview series and department exhibitions to visually narrate our history and scientific contributions.

We are collaborating with the Chinese Physical Society, the Chinese Academy of Sciences and the National Natural Science Foundation, as well as IOP Publishing, to publish commemorative special issues throughout the year. There will also be a series of high-level academic forums and lectures at Tsinghua. The culmination of the year’s celebrations will be a Centennial Commemoration Conference on Saturday 5 September.

What do you hope for Tsinghua in the coming 100 years?

First, I hope we become the world’s leading centre for a new way of doing physics: integrating AI directly into the core of our research cycle. This means moving beyond using AI just as a tool. I envision a future where AI actively helps us formulate new theories about quantum materials, guides the design of critical experiments in astrophysics and particle detection, and even controls advanced instruments to run complex measurements. Our goal should be to pioneer an “AI-scientist” partnership, making it as natural as using a microscope.

Second, I hope we are known not just for our discoveries, but for building essential research “bridges” that solve big problems. This means deeply partnering with our engineering schools to turn quantum science into reliable technology as well as with life sciences and environmental science to apply physical principles to global challenges in health and sustainability. We aim to educate students who are not just technically able, but who are also ethically grounded and driven.

If we succeed, then Tsinghua Physics will continue to contribute meaningfully, not just to the scientific community, but to the broader human endeavour of understanding our world. That is the enduring legacy we strive for.

The post Celebrating 100 years of physics at Tsinghua University appeared first on Physics World.

  •  

Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods

Pt-alloy/C catalysts, such as PtₓCo/C, are used as cathode catalysts in proton-exchange membrane (PEM) fuel cells due to their exceptionally high kinetic activity for the oxygen reduction reaction (ORR). However, the performance and durability of membrane electrode assemblies (MEAs) with a PtₓCo/C cathode catalyst are impaired by the dissolution of Co²⁺ cations in the ionomer phase of the MEA.

In the first part of this webinar, an in situ method to quantify the amount of Co²⁺ contamination in an MEA via electrochemical impedance spectroscopy (EIS) is presented. Pt/C model MEAs doped with different amounts of Co²⁺ ions are used to analyze the effects of Co²⁺ contamination on the H₂/air performance and on ionic resistances under various conditions, highlighting the role of the inactive membrane area. Based on these model MEAs, a calibration curve is established that correlates the high-frequency resistance (HFR) under dry conditions to the amount of Co²⁺ in the MEA. Due to the high sensitivity of the dry HFR to metal cations, this method enables the tracking of Co²⁺ leaching from a Pt₂.₅Co/C MEA in voltage-cycling accelerated stress tests.
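The idea of a calibration curve linking dry HFR to Co²⁺ content can be sketched in a few lines. The data points and the linear form below are purely hypothetical, for illustration only; the actual calibration presented in the webinar may take a different shape:

```python
import numpy as np

# Hypothetical calibration points: dry high-frequency resistance (HFR)
# measured on model MEAs doped with known amounts of Co2+ (illustrative units)
co_doping = np.array([0.0, 5.0, 10.0, 20.0])   # doping level (arbitrary units)
hfr_dry = np.array([50.0, 62.5, 75.0, 100.0])  # dry HFR (mOhm cm^2)

# Fit a linear calibration curve: HFR = a * doping + b
a, b = np.polyfit(co_doping, hfr_dry, 1)

def estimate_co(hfr_measured):
    """Invert the calibration to estimate Co2+ contamination from a dry-HFR value."""
    return (hfr_measured - b) / a

print(estimate_co(87.5))  # -> 15.0, halfway between the last two calibration points
```

In practice such a fit would be built from doped model MEAs, then inverted to track Co²⁺ leaching during accelerated stress tests, as described above.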

In the second part, a recovery method to remove cationic contaminants from an MEA using CO₂–O₂ cathode gas feeds is presented. With this method, cation-induced performance losses of aged PtₓCo/C MEAs can be largely recovered. The mechanism of cation removal and opportunities for improving the durability of Pt-alloy/C MEAs are discussed.

Markus Schilling

Markus Schilling is a PhD student at the chair of technical electrochemistry under the supervision of Prof Hubert A Gasteiger at the Technische Universität München. In his research, he investigates the degradation of Pt-alloy on carbon cathode catalysts (e.g., PtCo/C) for PEM fuel cells, with the aim of deepening the understanding of aging mechanisms and identifying strategies to increase durability. Current works include catalyst pre-treatments, development of diagnostic methods on the cell level, voltage cycling accelerated stress testing, and recovery methods.

Schilling received his BSc in 2019 from the Universität Konstanz and his MSc in 2022 from the Technische Universität München, where he investigated PEM fuel cell catalyst inks in his thesis, supervised by Prof Gasteiger.

The post Cobalt dissolution from PtₓCo/C cathode catalysts in PEM fuel cells: in situ quantification and removal methods appeared first on Physics World.

  •  

Compact optical amplifier is efficient enough for on-chip integration

Light forms the backbone of many of today’s advanced technologies, offering the ability to transmit data far more quickly than electrons can. Within optical networks, optical amplifiers are used to increase the intensity of light and enable its transmission over long distances. Without this ability to amplify optical signals, satellite technology, long-distance fibre-optic communications and quantum information processing would not be possible. But many optical amplifiers consume a lot of power, limiting where they can be deployed.

Modern photonic devices are continually getting smaller and more efficient, and researchers from Stanford University have now developed a fingertip-sized optical amplifier that consumes very little energy – achieved by recycling the energy used to power it.

The low-power optical amplifier operates across the optical spectrum and is small and efficient enough to be integrated on a chip. The device achieved more than 17 dB gain using less than 200 mW of input power – an order of magnitude improvement over previous optical amplifiers of a similar size.
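For context, a gain quoted in decibels converts to a linear power ratio as G = 10^(dB/10), so the 17 dB figure corresponds to boosting the signal power roughly 50-fold. A quick check of that standard conversion:

```python
def db_to_linear(gain_db):
    """Convert a gain in decibels to a linear power ratio: G = 10^(dB/10)."""
    return 10 ** (gain_db / 10)

# The amplifier's quoted gain of 17 dB as a linear power ratio
print(round(db_to_linear(17), 1))  # -> 50.1, i.e. the signal power grows about 50x
```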

“We wanted to store up optical energy and release it in intense bursts, kind of like how a Q-switched laser works, but now with an optical resonator being the store of energy that fills up,” explains senior author Amir Safavi-Naeini. “After a few months we started to see that it could address other challenges we had in the lab, like building a broadband low-power amplifier for squeezing light in a chip-scale device.”

Optical parametric amplifiers

There are many types of optical amplifiers. Erbium-doped amplifiers are common in telecommunications but only work within specific wavelength bands, while semiconductor amplifiers function over a larger range of wavelengths but are limited by high noise. Optical parametric amplifiers (OPAs) are seen as the bridge between the two. OPAs, which use nonlinear interactions to transfer energy from a pump beam into signal photons, offer high gain, wide bandwidth and low noise.

A high gain boosts signals above noise levels, while the broad bandwidth enables amplification of ultrafast or wavelength-division-multiplexed optical signals. However, as they typically require watt-level power, OPAs have been difficult to miniaturize and integrate onto tiny photonic chips. For most amplifiers, achieving a high gain requires a high power input, which is counterproductive to miniaturization.

Integrating lasers into the photonic chip is not ideal, so an external optical pump is seen as an alternative option – but this usually requires a pump at the second harmonic (twice the frequency of the light being amplified). In the new design, the researchers use an external pump laser at the fundamental wavelength, coupled by lensed fibre onto the chip, where it generates the resonant second-harmonic pump – using a new loop design to reduce power requirements.

“The trick is that we trap and recirculate the shorter-wavelength pump light in a loop, not the signal,” Safavi-Naeini explains. “This gives you the efficiency boost of a resonator without narrowing the amplification bandwidth.”

A low-power optical amplifier

The team built the low-power OPA using thin-film lithium niobate, which offers large second-order nonlinearity and tight optical confinement. The big advantage, however, lies in its second-harmonic resonant design, in which the optical pump is doubled into a second harmonic inside a cavity. The pump light travels in a circular loop, increasing in intensity until the desired power is reached. Once amplification is complete, the signal is output with near-quantum-limited noise performance over a broad amplification bandwidth of 110 nm.

Performing the amplification inside the cavity reduces the required power because the OPA is powered by energy stored inside the light beam. “The pump light is generated inside the pump resonator, not coupled in. This means we can efficiently fill up this resonator without dealing with impedance matching constraints that limit other nonlinear devices,” explains Safavi-Naeini. “The pump field is therefore larger than what we can even couple into the chip, so we get a boost that otherwise wouldn’t be possible.”

The small-scale and low-power architecture could be used to build on-chip OPAs across a range of applications, including data communications technology, biosensors and novel light sources. The amplifier is also small and efficient enough to be powered by a battery, making it suitable for use in laptops and smartphones.

Looking ahead, Safavi-Naeini says that the goal is “to combine this amplifier with a small on-chip laser, so the whole thing is self-contained without bulky external equipment, and use it to generate large amounts of quantum squeezing in an integrated device”. In the short-term, he suggests that fabrication improvements could cut the power requirements by another factor of ten. “We’re looking to push the sensitivity beyond what’s currently possible with classical technologies.”

The research is reported in Nature.

The post Compact optical amplifier is efficient enough for on-chip integration appeared first on Physics World.

  •  

The search for new bosons beyond Higgs

Particle physicists have been searching for new fundamental scalar and pseudoscalar bosons because, if discovered, they could reveal physics beyond the Standard Model and help explain mysteries such as dark matter and even why the Higgs exists. The Higgs remains the only confirmed scalar boson, and no pseudoscalar bosons have yet been observed, though they are predicted, for example, in theories involving axions and axion‑like particles. One promising way to find them is to look for their decay into a top quark and antiquark pair (tt̄).

Using the CMS detector at the Large Hadron Collider, researchers analysed 138 fb⁻¹ of proton–proton collision data. They reconstructed the invariant mass of the tt̄ system and used angular variables sensitive to its spin and parity to distinguish potential signals from the Standard Model background. Crucially, the analysis includes interference between any new boson and the Standard Model tt̄ production, which can create peak-dip distortions in the invariant mass of the tt̄ system rather than a simple bump. The observed event yield is consistent with the Standard Model prediction over the majority of the invariant mass spectrum, thus excluding a contribution from a potential new boson.
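The invariant-mass reconstruction mentioned above follows the standard relativistic relation m² = (ΣE)² − |Σp|² (in natural units, with c = 1). A minimal sketch with illustrative four-momenta – not actual CMS data:

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of particles given as (E, px, py, pz) four-momenta,
    in natural units (c = 1): m = sqrt((sum E)^2 - |sum p|^2)."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Two back-to-back decay products, each with E = 5 and |p| = 3 (illustrative numbers):
# the momenta cancel, so the parent mass is simply the total energy
print(invariant_mass([(5, 3, 0, 0), (5, -3, 0, 0)]))  # -> 10.0
```

In the CMS analysis the same relation is applied to the reconstructed top quark and antiquark to build the tt̄ mass spectrum in which a new boson would appear.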

However, CMS observed a significant excess near the threshold of tt̄ production, where the energy of the colliding particles is just enough to produce top quark–antiquark pairs. This excess has a local significance above five standard deviations, and the kinematics of these events are more consistent with a pseudoscalar than a scalar interpretation. The excess could also be explained by a predicted tt̄ quasi-bound state, known as toponium, which fits the data without requiring new particles beyond the Standard Model.

The researchers set upper limits on how strongly new bosons could couple to top quarks across masses from 365 to 1000 GeV and widths from 0.5% to 25%. These constraints exclude couplings down to around 0.3 for pseudoscalars and 0.4 for scalars, providing the most stringent limits to date for scalar resonances decaying to tt̄.

Do you want to learn more about this topic?

Prospects for Higgs physics at energies up to 100 TeV by Julien Baglio, Abdelhak Djouadi and Jérémie Quevillon (2016)

The post The search for new bosons beyond Higgs appeared first on Physics World.

  •  

Pushing thermopower to the 2D limit

Thermoelectric materials convert heat into electricity, and their effectiveness is largely determined by their thermopower, which reflects how charge carriers respond to their environment. Designing materials with very high thermopower is important because it boosts overall thermoelectric efficiency, enabling sensors with stronger voltage output, higher sensitivity, and the ability to detect smaller temperature changes. High thermopower also allows for thinner, lighter, and potentially flexible devices that use less material. In 2D materials, electrons become confined to very thin layers, altering their energy levels in ways that can dramatically increase thermopower.

The researchers explore this effect using superlattices made of La-doped EuTiO₃ and EuTiO₃ (LETO/ETO), where both dimensional confinement and electronic correlation effects play key roles. These structures achieve stronger 2D confinement than the commonly used SrTiO₃, which has a large Bohr radius that prevents electrons from being tightly localized. In contrast, the LETO/ETO system has a much smaller effective Bohr radius, allowing electrons to behave more like a true 2D gas. The Eu 4f electrons further modify the local potential landscape, strengthening confinement and producing orbital-selective localization, particularly of the Ti 3dₓᵧ states that dominate the enhanced thermopower response.

A group photo of the Epitaxial Complex Oxide Laboratory at the summit of Halla Mountain on Jeju Island. Pictured is first author Dr. Dongwon Shin (front row, centre) alongside corresponding author Prof. Woo Seok Choi (back row, second from the right). (Courtesy: Seok, Sungkyunkwan University)

As a result, the thermopower becomes extremely large, up to 950 μV K⁻¹, and as much as 20 times higher in the 2D configuration than in the 3D case, an improvement roughly twice that achieved in comparable SrTiO₃-based superlattices. Thermopower measurements and hybrid density functional theory calculations confirm that this enhancement arises from the combined effects of strong confinement, modified band structure, and correlation-driven changes to the Ti 3d electron distribution.

Overall, the study demonstrates a new design strategy for thermoelectric materials that combines material selection (small Bohr radius, 4f-assisted confinement) with dimensional engineering to create ultrathin superlattices that force electrons into 2D behaviour. The authors note that future Hall measurements and conductivity optimization will be important for evaluating the power factor and ZT (the dimensionless figure of merit that quantifies how good a thermoelectric material is), and that integrating these oxide superlattices with emerging freestanding membrane techniques could enable flexible, high-sensitivity thermal sensing platforms.
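The figure of merit mentioned above is conventionally defined as ZT = S²σT/κ, where S is the thermopower (Seebeck coefficient), σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature. A minimal sketch of the formula, with purely illustrative numbers (the superlattices’ σ and κ are exactly what the future Hall and conductivity measurements would supply):

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.

    seebeck:     Seebeck coefficient S in V/K
    sigma:       electrical conductivity in S/m
    kappa:       thermal conductivity in W/(m K)
    temperature: absolute temperature in K
    """
    return seebeck**2 * sigma * temperature / kappa

# Illustrative values only: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K), T = 300 K
print(round(figure_of_merit(200e-6, 1e5, 1.5, 300), 3))  # -> 0.8
```

The S² dependence is why the 950 μV K⁻¹ thermopower reported here is significant: at fixed σ, κ and T, a 20-fold gain in S translates into a 400-fold gain in ZT.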

Read the full article

Improving 2D-ness to enhance thermopower in oxide superlattices

Dongwon Shin et al 2026 Rep. Prog. Phys. 89 010501

Do you want to learn more about this topic?

Tuning phonon properties in thermoelectric materials by G P Srivastava (2015)

The post Pushing thermopower to the 2D limit appeared first on Physics World.

  •  

A physicist’s journey into nuclear energy

When I started my physics degree, I knew it could open the door to a range of career opportunities, but I wasn’t sure what path it would take me down. In the end, it was the optional modules that encouraged my interest in nuclear energy physics, steering me towards my current job as a nuclear safety engineer.

When I was looking at university degrees, I thought about studying chemical engineering, but my A-level physics teacher inspired me to consider physics instead. I’d always been fascinated with the subject, and enjoyed maths (and a challenge) too, so I thought why not give it a go.

I went on to study physics at the University of Liverpool, graduating in 2021. I absolutely loved the city and would highly recommend it to anyone considering physics – or any degree, for that matter. The campus is fantastic and Liverpool is an amazing place to be a student.

My undergraduate experience was incredibly rewarding. I met some of my closest friends and had countless memorable adventures. While the course was challenging at times, I have no regrets about choosing physics. I particularly enjoyed being able to pick specialist optional modules as it meant I could follow my interest in applied physics with topics such as nuclear power and medical physics.

Making a difference

In my final year, I started doing the obligatory job applications for those wanting to go into industry. But after receiving some rejections, I decided to explore an opportunity outside of science and ended up working for nearly a year in the charity sector as a Climate Action intern. There I undertook research projects related to decolonization in international development, and anti-racism and social justice, supporting the delivery of international development programmes.

While my time at Climate Action was rewarding and worthwhile, I wanted to move back into science and use my degree. Nuclear physics had been an area of interest for me since school, and my modules at university had encouraged that, so I turned my attention to the nuclear energy sector. Having worked for a charity, I was keen to find an organization whose values aligned with mine. Employee-owned engineering, management and development consultancy, Mott MacDonald, caught my eye, with its commitment to net zero, social outcomes and the UN’s Sustainable Development Goals.

I joined the company’s three-year graduate scheme and, although I didn’t have any direct experience in safety, was offered a graduate nuclear safety position. It is a great role that ties in skills from my degree and my interest in nuclear, while still presenting challenges and an opportunity to learn.

After two years at Mott MacDonald, I won Graduate of the Year at the UK Nuclear Skills Awards 2024. My colleagues had kindly nominated me, recognizing my dedication and drive, and the contribution I’d made to the organization. This opportunity was highly valuable for me and elevated my profile not only at Mott MacDonald but also within the sector. Then, after only two and a half years in the graduate scheme, I was promoted to my current position of nuclear safety engineer.

My role focuses on developing nuclear safety cases with the guidance and support of our experienced team. The work involves analysing potential hazards and risks, outlining safety measures, and presenting a structured, evidence-based argument that the facility is safe for operation. I’ve worked on a variety of different projects including small modular reactors, nuclear medicine and flood alleviation schemes.

A typical day for me involves project meetings, writing safety reports, conducting hazard identification studies, and reviewing documents. A key aspect of the work is identifying, assessing and effectively controlling all project-related risks.

Nuclear reactor at night
Wealth of opportunity Natasha Khan believes that now is a great time to join the nuclear industry. (Courtesy: US Nuclear Regulatory Commission)

Beyond my technical role at Mott MacDonald, I am also part of committees for our internal Women in Nuclear and Europe and UK Advancing Race and Culture networks. These positions allow me to contribute to a range of equality, diversity and inclusion (EDI) initiatives. Creating an inclusive environment is important to allow people the space to be authentically themselves, share and bring diverse perspectives and feel psychologically safe. This is a big driver for me – by supporting equity and equal opportunities, I am helping ensure others like me have role models in the sector.

A nuclear skillset

Physics plays a crucial role in nuclear safety by providing the fundamental principles underlying nuclear processes. Studying nuclear physics at university has helped me understand and analyse reactor behaviour, radiation effects and potential hazards. This knowledge forms the basis for designing nuclear facility safety systems, for the protection of the workforce, environment and general public.

Throughout my degree, I also developed transferable skills such as analytical thinking, logical problem-solving and teamwork, all of which I apply daily in my role. As a safety-case engineer, I work as part of a team, and collaborate with specialists across fields, including process engineering, mechanical engineering and radioactive waste management. My ability to work effectively in teams and maintain strong interpersonal relationships has been key to success in my role.

I would encourage other physics students to explore a career in the nuclear industry

Applying the research and scientific report-writing skills I developed at university, I can identify relevant information for safety-case updates, and present safety claims, arguments and evidence in a way that is understandable to a broad, non-specialist audience.

I also mentor and support more junior colleagues with various project and non-project related issues. Skills like critical thinking and the ability to tailor my communication style directly influence how I approach my work and support others.

I would encourage other physics students to explore a career in the nuclear industry. It offers a broad range of career paths, and the opportunity to contribute to some of the most diverse, exciting and challenging projects within the energy sector. You don’t need an engineering background to have a career in nuclear – there are many ways to contribute, including outside the technical route. As physicists we have a wide range of transferable skills, often more than we realize, making us highly adaptable and valuable in this sector.

It’s an incredible time to join the nuclear industry. With advancements like Sizewell C, small modular reactors and cutting-edge medical nuclear-research facilities, there’s a wealth of diverse projects happening right now to get involved in. I hadn’t planned on a career in nuclear safety, but honestly, I’m really glad my path led this way. I am passionate about driving innovative nuclear solutions and supporting progress towards reduced emissions and the global transition to net zero.

While I may be early on in my nuclear career, I have already worked on some interesting projects and met fantastic people. Now, I’m going through a structured training programme at Mott MacDonald to help me achieve chartership status with the Institute of Physics. I look forward to seeing what the future has to offer.

The post A physicist’s journey into nuclear energy appeared first on Physics World.

  •  

A glimpse into the future of particle therapy

Particle therapy is an incredibly powerful cancer treatment. But it is also an incredibly expensive option that relies on massive, bulky accelerator systems. As such, in 2025 there were only 137 proton and carbon-ion therapy facilities in operation worldwide. So how can more people benefit?

Hoping to resolve this challenge, the LhARA collaboration is investigating a new take on particle therapy delivery: a laser-hybrid accelerator for radiobiological applications. The idea is to use laser-driven proton and ion beams to create a compact, high-throughput treatment facility to advance our understanding of cancer and its response to radiation (see: “A novel hybrid design”).

Last month, in the first of a series of CP4CT workshops, experts in the field came together at Imperial College London to discuss the potential advantages of laser-driven charged particles. The workshop aimed to examine the current status of particle therapy technology, assess how the unique properties of laser-driven beams could revolutionize particle therapy, and identify the key research needed to develop personalized cancer therapy with laser-driven ions.

“We want to lay the foundation for the transformation of ion beam therapy,” said Kenneth Long (Imperial College London/STFC), who co-organized the event together with Richard Amos (University College London). “We are aiming to engage with the communities that we will target when the technology is mature.”

A novel hybrid design

LhARA uses a high-power, fast-pulsed laser to create high-flux proton and ion beams with arbitrary spatial and time structures, such as bunches as short as 10 to 40 ns. The beams are captured and focused by a novel electron-plasma lens, and then accelerated using a fixed-field alternating gradient accelerator, to energies of 15–127 MeV for protons and 5–34 MeV/u for ion beams.

LhARA concept
Courtesy: LhARA collaboration

The LhARA team recently completed its conceptual design report for the proposed new accelerator facility and is now running radiobiology programmes to prove the feasibility of laser-driven hybrid acceleration, for both radiation biology and clinical studies.

Particle therapy today

The day’s first speaker, Alejandro Mazal (Centro de Protonterapia Quirónsalud), pointed out that despite huge clinical potential, only about 400,000 patients have been treated with proton therapy to date (and 65,000 with carbon ions), with a typical saturation of about 250 patients per year per treatment room. To increase this throughput, factors such as image guidance, adaptive tools, uptime and modularity for upgrades could prove vital.

Mazal cited some development priorities to address, including cost control, vendor robustness, system reliability and throughput optimization. It’s also vital to consider biological modulation techniques, integration into hospitals and the generation of clinical evidence. “We used to say that randomized trials are not ethical with particle therapy, but this is not always true; evidence must guide expansion,” he said.

Mazal emphasized that technology itself is not the endpoint, but that specifications must be driven by clinical benefit. “The goal is to be transformative, but only when we can measure a clinical value,” he explained.

Sandro Rossi (CNAO) then presented an update on the latest developments at the National Centre of Oncological Hadrontherapy (CNAO) in Italy. Since starting clinical treatments in 2011, the facility has now treated over 6000 patients – roughly half with protons and half with carbon ions. He noted that for some of the most challenging tumours, CNAO’s particle therapy delivered considerably better local tumour control than conventional X-ray treatments.

CNAO is also a research facility, currently hosting 17 funded research projects and seven active clinical trials. Looking forward, an expansion project will see the centre commission an additional proton therapy gantry, introduce boron neutron capture therapy (BNCT) and install an upright positioning system (from Leo Cancer Care) in one of the treatment rooms.

The killer biological questions

In parallel with the development of laser-based accelerators, researchers are investigating various radiobiological modulation strategies that could enhance the impact of particle therapy. The workshop examined three such options: proton minibeams, FLASH irradiation and combination with immunotherapies.

Minibeam therapy uses an array of submillimetre-sized radiation beams to deliver a pattern of alternating high-dose peaks and low-dose valleys. This spatially fractionated dose greatly reduces treatment toxicity while providing excellent tumour control, as demonstrated in extensive preclinical experiments.

Richard Amos, Yolanda Prezado and Kenneth Long
Strategic discussions Left to right: Richard Amos, Yolanda Prezado and Kenneth Long. (Courtesy: Tami Freeman)

The first patient treatments (using X-ray minibeams) took place in 2024, and clinical investigations on proton minibeams are just starting, explained Yolanda Prezado (CiMUS). Recent studies revealed that minibeams induce a favourable immune response, with high T cell infiltration, vascular renormalization and reduced hypoxia dependence. Further evaluation is essential to explore the underlying radiobiological mechanisms, but Prezado noted that existing accelerators are limited in their ability to modulate treatment beams.

“It would be really interesting to have a system where we can flexibly vary all of the parameters to understand all of these techniques; LhARA could be a very interesting facility for this,” she suggested.

As for the second option, FLASH therapy, this is an emerging treatment approach in which radiation delivery at ultrahigh dose rates reduces normal tissue damage while effectively killing cancer cells. But how the FLASH effect works, and how to optimize this approach, remain key questions.

Joao Seco (DKFZ) presented a novel interpretation of FLASH, focusing on radiation chemistry and emphasizing the role of H₂O₂ generation in the FLASH process. Production of H₂O₂, a key molecule in cell damage, depends on the activity of a particular enzyme called superoxide dismutase 1 (SOD1). Seco hypothesized that inhibiting SOD1 could control H₂O₂ production and thus control cellular damage, effectively mimicking the FLASH effect.

“Forget radiation biology, we are missing a key component: redox chemistry,” he said. “If we know the redox chemistry, we can predict the response before we give radiotherapy.”

Marco Durante (GSI) suggested that the most urgent challenge for radiotherapy may be to combine it with immunotherapy, noting that charged particle beams offer both physical and biological advantages to achieve this. Citing various trials of combined immunotherapy and X-ray-based radiotherapy for cancer treatment, he showed some impressive examples of the benefit of the combination, but also cases with negative results.

“The question to understand is why doesn’t it always work,” he explained, suggesting that this may be due to the timing and sequencing of the two therapies, the fractionation scheme or biological factors. But perhaps a more promising approach would be to combine immunotherapy with particle therapy, he said, sharing examples where immunotherapy plus carbon-ions had better clinical outcomes than combinations with X-ray radiotherapy.

This superior outcome may arise from the various biological advantages of high-LET irradiation. In addition, the lower integral dose from particle therapy compared with X-rays results in less lymphopenia (a shortage of lymphocytes, a type of white blood cell), which is associated with improved prognosis.

“Pre-clinical studies are essential to address timing and sequencing,” he concluded. “We also need more clinical trials to determine the impact of physical and biological properties of charged particles in radioimmunotherapy.”

Democratizing access

Manjit Dosanjh (University of Oxford) discussed the continuing need to increase global access to radiotherapy, noting that while radiotherapy is a key tool for over 50% of cancer patients, not all countries have access to sufficient treatment systems, nor to the expert personnel needed to run them.

Across Africa, for instance, there is just one linac per 3.5 million people, in stark contrast to the one per 86,000 people in the US. Many European countries also lack sufficient quality or quantity of radiotherapy facilities – a disparity that’s mirrored in terms of access to CT scanners, oncologists and medical physicists, which must be addressed in tandem. “If we could improve imaging, treatments and care quality, we could prevent 9.6 million deaths per year worldwide,” Dosanjh said.

Manjit Dosanjh
Addressing global disparity Manjit Dosanjh emphasized the importance of collaborations in improving access to cancer therapy. (Courtesy: Alex Gerbershagen)

She described some initiatives designed to encourage collaboration and increase access, including ENLIGHT, the European Network for Light Ion Hadron Therapy. Launched in 2002 at CERN, ENLIGHT brings together clinicians, physicists, biologists and engineers working within particle therapy to develop new technologies and provide training, education and access to beams to move the field forward.

More recently, the STELLA (smart technologies to extend lives with linear accelerators) project was established to create a cost-effective, robust radiotherapy linac with lower staff requirements and maximal uptime. A global collaboration, STELLA aims to expand access to high-quality cancer treatment for all patients via innovative transformation of the treatment system, as well as providing training, education and mentoring.

Dosanjh also introduced SAPPHIRE, a UK-led initiative that partners with institutions in Ghana and South Africa to strengthen radiotherapy services across Africa. She stressed that improving access to radiotherapy is a big challenge that can only be achieved by building really good collaborations. “Collaboration is the invisible force that makes the impossible possible,” she said.

Konrad Nesteruk (Harvard) continued the theme of democratizing particle therapy, noting that advancement of beam technologies calls for innovations in space (the facility size), time (both irradiation and total treatment time) and dose (via techniques such as FLASH, proton arc and minibeams). All of these factors interact to create a multidimensional optimization problem, he explained.

The final speaker in this session, Rock Mackie (University of Wisconsin) examined how to translate innovative radiotherapy technology into clinical practice. Academia is the source of breakthrough ideas, he said, but most R&D is funded and refined by companies. And forming a company involves a series of key tasks: identifying an important problem; developing a technical solution; patenting it; customer testing; and procuring investment. If this final stage doesn’t happen, Mackie remarked, it wasn’t an important enough problem.

In particle therapy, the main problems are size and cost limiting patient access, a lack of effective imaging solutions and the fact that the gain in therapeutic ratio does not compensate for increased costs. Aiming to solve these problems, Mackie co-founded Leo Cancer Care in 2018 to commercialize an upright patient positioning system and CT scanner. This approach enables a proton therapy machine to fit into a photon vault, as well as easing patient positioning, thus reducing installation costs while simultaneously increasing throughput.

Mackie applied this startup scenario to LhARA. Here, the problem to solve is achieving high-energy, multi-ion, high-intensity beams for radiotherapy, FLASH, spatial fractionation and proton imaging. The solution is the development of a low-cost particle accelerator that meets all of these needs and fits in a single-storey vault. He also emphasized the importance of consulting with as many potential customers as time permits before defining specifications.

“The most important problem is finding a big enough problem to solve,” he concluded. “It will find a market if the product is less costly, works better and is easier to use.”

Development roadmap

Alexander Gerbershagen (PARTREC) told delegates about PARTREC, the particle therapy research centre at the University Medical Center Groningen. The facility’s superconducting accelerator, AGOR, provides protons with energies up to 190 MeV, as well as ion beams of all elements up to xenon. Ongoing projects at PARTREC include: developing glioblastoma treatments using boron proton capture therapy (NuCapCure); production of terbium isotopes for theranostics; image-guided pharmacotherapy using photon-activated drugs; and real-time in vivo verification of proton therapy dose.

The day closed with a look at the potential of LhARA as an international research facility. Kenneth Long emphasized the importance of investigating how ionizing radiation interacts with tissue, in vivo and in vitro, while considering all of the factors that may impact outcome. This includes time and space domains, different ion species and energies, and combinations with chemo- and immunotherapy. “If one flexible beam facility can do all that, it’s a substantial opportunity for a step change in understanding,” he said.

Long presented some initial cell irradiations using laser-driven beams at the SCAPA research centre in Strathclyde, and noted that component optimization is also underway in Swansea. He also shared designs for the envisaged research facility, with various in vivo and in vitro end-stations and robotic automation to move experiments around. “We have written a mission statement, now our business is to execute that programme,” he concluded.

The post A glimpse into the future of particle therapy appeared first on Physics World.


Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive

Mulugeta Bekele paid a heavy price for remaining in Ethiopia in the 1970s and 1980s. While many other academics had fled their homeland to avoid being targeted by its military rulers, Mulugeta did not. He stayed to teach physics, almost single-handedly keeping it alive in the country. But Mulugeta was arrested and brutally tortured by members of the Derg, Ethiopia’s ruling military junta. “I still have scars,” he says when we meet at his tiny, second-floor office at Addis Ababa University (AAU) in January 2026.

Gentle and softly spoken, Mulugeta, 79, is formally retired but still active as a research physicist. In 2012 his efforts led to him being awarded the Sakharov prize by the American Physical Society (APS) “for his tireless efforts in defence of human rights and freedom of expression and education anywhere in the world, and for inspiring students, colleagues and others to do the same”.

Mulugeta was born in 1947 near Asela, a small town south of Ethiopia’s capital Addis Ababa. The district had only a single secondary school that depended on volunteer teachers from other countries. One was a US Peace Corps volunteer named Ronald Lee, who taught history, maths and science for two years. Mulugeta recalls Lee as a dramatic and inventive teacher, who would climb trees in physics classes to demonstrate the actions of pulleys and hold special after-school calculus classes for advanced students.

Mulugeta and other Asela students were entranced. So when he entered AAU – then called Haile Selassie I University – in 1965, Mulugeta declared he wanted to study both mathematics and physics. Impossible, he was informed; he could do one or the other but not both. “I told myself that if I choose mathematics I will miss physics,” Mulugeta says. “But if I do physics, I will be continually engaged with mathematics.” Physics it was.

At the end of his third year, Mulugeta’s studies appeared in doubt. The university’s only physics teacher was an American named Ennis Pilcher, who was about to return to Union College in Schenectady, New York, after spending a year in Addis on a fellowship from the Fulbright Program. Pilcher, though, managed to convince Union to support Mulugeta so he could travel to the US and study physics there for his final year.

As I talk to Mulugeta, he pulls a dusty book off his shelf. “This was given to me by Pilcher,” he says, pointing to Walter Meyerhof’s classic undergraduate textbook Elements of Nuclear Physics. Mulugeta turns to the inside of the front cover and proudly shows me the inscription: “Mulugeta Bekele, Union College. Schenectady, 1969–1970”.

When Mulugeta returned to AAU in the summer of 1970, he was awarded a BSc in physics. He then received a grant from the US Agency for International Development (USAID) to attend the University of Maryland for a master’s degree. After two more years in the US, Mulugeta returned to Addis Ababa in 1973. As an accomplished researcher and teacher, he was made department chair and began to expand the physics programme at the university.

In the firing line

It was a time when political turmoil was upending Ethiopia, as well as the lives of Mulugeta and many other academics. For centuries the country had been ruled by a dynasty whose present emperor was Haile Selassie. Having come to the throne in 1930, he had tried to reform Ethiopia by bringing it into the League of Nations, drawing up a constitution, and taking measures to abolish slavery.

When fascist Italy invaded Ethiopia in May 1935, Selassie left, spending six years in exile in the UK during the Italian occupation of the country. He returned as emperor in 1941 after British and Ethiopian forces recaptured Addis Ababa. But famine, unemployment and corruption, as well as a brief unsuccessful coup attempt, undermined his rule and made him unexpectedly vulnerable.

While in Maryland, Mulugeta and other Ethiopian students in the US started supporting the Ethiopian People’s Revolutionary Party (EPRP) – a pro-democracy group that sought to build popular momentum against the monarchy. In September 1974 Selassie was deposed by the Derg – a repressive military junta named after the word for “committee” in Amharic, the most widely spoken language in Ethiopia. Selassie was assassinated the following year.

Mengistu Haile Mariam - official portrait plus leaders of the Derg
Ruthless ruler Mengistu Haile Mariam (left) was leader of the Derg military junta and communist dictator in Ethiopia between 1977 and 1991. Mengistu is also shown (right) with two other senior members of the Derg: Tafari Benti (middle) and Atnafu Abate (right). (Images: Public Domain)

Led by an army officer named Mengistu Haile Mariam, the Derg’s radical totalitarianism was in sharp contrast to the student-led EPRP’s efforts, and the junta’s agenda included seizing property from landowners. Mulugeta’s family lost all its land, and his father was killed fighting the Derg. “Land ownership was still inequitable,” Mulugeta remarks ruefully, “only the landlords changed.”

In September 1976 the EPRP tried, unsuccessfully, to assassinate Mengistu. The following February, on becoming chairman of the Derg – and therefore head of state – Mengistu began ruthlessly to crush any opposition, particularly the EPRP, in what he himself called the “Red Terror” campaign of political suppression. About half a million people in Ethiopia were killed.

“It was a police state,” recalls Solomon Bililign, Mulugeta’s then graduate assistant, now a professor of atomic and molecular physics at North Carolina Agricultural and Technical State University. “The police didn’t need any reason to arrest you. They would arrest people openly in the streets, break into homes, and left people dead in roads and parks. Many were tortured; others simply disappeared.”

Captured and tortured

Mulugeta himself was a target. In the summer of 1977, a policeman showed up at his office with an informant. Mulugeta was arrested and imprisoned for his role in helping to organize anti-Derg activities, as was Bililign. Mulugeta still recalls exactly how long he was jailed for: “Eight months and 20 days”.

After his release, Mulugeta knew it would be unsafe to stay in Addis and lived in hiding for several months. So he devised a plan to travel 500 km north to a holdout region not controlled by the Derg. However, while using a fake ID to pass through checkpoints to reach a compatriot, he was betrayed again, captured, and taken back to Addis.

Mulugeta was savagely tortured using a method that the Derg meted out on thousands of other prisoners

En route to Addis, he managed to steal back the fake ID that he’d been using from the pocket of the policeman travelling with him. He then tore it up to shield the identity of his compatriot, and tossed the pieces into a toilet. But the policeman noticed and retrieved the pieces. Mulugeta was then savagely tortured using a method that the Derg meted out on thousands of other prisoners. His arms and legs were tied around a pole, and he was hung in the foetal position between two chairs, upside down. His feet were then beaten until he could no longer walk.

Mulugeta was sent to Maekelawi, an infamous jail in Addis, in which up to 70 prisoners could be jammed in rooms each barely four metres long and four metres wide. Inmates were tortured without warning, could not have visitors, never had trials, were denied books and paper, and at night heard screams from periodic executions. Mulugeta helped those who were beaten by tending to their wounds.

“People who knew him in prison told me that his mental strength helped all of them endure,” remembers Mesfin Tsige, an undergraduate student of Mulugeta at the time, who is now a polymer physicist at the University of Akron in Ohio. Despite the awful conditions, Mulugeta managed to continue working on physics by surreptitiously taking paper from the foil linings of cigarette packets to compose problems.

Mulugeta, Bililign and Mekonnen
Happier times Mulugeta Bekele (front centre in the white top), Solomon Bililign (next to him in the purple shirt) and Nebiy Mekonnen (back row, with the hat) pictured with their family and friends. All three were incarcerated together at the notorious Maekelawi prison.

Another prisoner was Nebiy Mekonnen, a chemistry student of Mulugeta. Later a gifted artist, translator and newspaper editor, Mekonnen began translating the US writer Margaret Mitchell’s classic 1936 book Gone with the Wind into Amharic. It was the one book that the Maekelawi prisoners had in their hands, having retrieved it from the possessions of someone who had been executed.

Surreptitiously writing his translation onto the foil linings of cigarette packets, Mekonnen would read passages to fellow prisoners in the evening for what passed for entertainment. Mekonnen’s translation of Mitchell’s almost 1000-page book was recorded onto 3000 of the linings, which were then smuggled out of the prison stuffed in tobacco pouches and published years later.

Gone with the Wind might seem a strange choice to translate, but as Mulugeta reminds me: “It was the only book we had at the time”. More smuggled books did eventually arrive at the prison, but Gone with the Wind, which describes life in a war-torn country, has several passages that resonated with prisoners. One was: “In the end what will happen will be what has happened whenever a civilization breaks up. The people with brains and courage come through and the ones who haven’t are winnowed out.”

Release and recapture

In 1982 Mulugeta was moved to Kerchele, another prison. There, as at Maekelawi, inmates were forced to listen to Mengistu’s pompous speeches on radio and TV. During one, Mengistu pontificated that he would turn prisons into places of education. A clever inmate, knowing that the prison wardens were also cowering in terror, proposed that Kerchele establish a school with the prisoners as teachers.

The wardens found this a great idea, not least because it let them show off their loyalty to Mengistu. The Kerchele prisoners were promptly put to work erecting a schoolhouse of half a dozen rooms out of asbestos slabs. Unlike schools in the rest of Ethiopia, the Kerchele prison school was not short of teachers, as the prisoners included a wide range of professionals, such as architects, scientists and engineers.

Students included prison guards and their families, along with numerous inmates who had been jailed for non-political reasons. Mulugeta and Bililign taught physics. “It was therapy for us,” Bililign says – and the school was soon known as one of the best in Ethiopia.

When I ask Mulugeta how he maintained his interest in physics in jail, despite being locked up for so many years, he becomes animated

When I ask Mulugeta how he maintained his interest in physics in jail, despite being locked up for so many years, he becomes animated. “In those days, prisons were full of ideas,” he smiles. “We were university students, university teachers. We had a cause. It was exciting. Intellectually, we flourished.”

In the summer of 1985 Mulugeta was released. Many colleagues were not. “They were given release papers and as they left the building, one by one, they were strangled. I had a tenth-grade student who was one of the best; he didn’t make it. There were plenty of stories like this.” Mulugeta pauses. “Somehow we survived. But not them.”

Mulugeta returned to the university, now renamed from Haile Selassie University to Addis Ababa University, and started teaching physics full time. As the Derg was in full control no opposition was possible except in outer regions of Ethiopia. In summer 1991, after Mulugeta had taught physics for another six years, political turmoil erupted yet again.

Mengistu was overthrown that May by a political coalition representing pro-democracy groups from five of Ethiopia’s ethnic regions, the Ethiopian People’s Revolutionary Democratic Front (EPRDF). But ethnic tensions rose and human rights violations continued. “Even though the Derg was overthrown,” Mulugeta recalls, “we knew we were entering another dark age.”

In the same year Mulugeta was put in touch with a Swedish programme seeking to build networks of scientists across countries in the southern hemisphere. Mulugeta knew a physicist from Bangalore, India, who had visited Addis twice as an examiner for his master’s programme and arranged to work with him for his PhD.

That July, Mulugeta married Malefia, who worked in the university’s registrar office, and the two left for Bangalore. As a wedding present, his student Mekonnen painted a picture of two hands coming together, each with a ring on a finger, against a black Sun in the background. “Two rings, in the time of a dark sun” Mekonnen’s caption read, “Happy marriage!” Mulugeta still has the painting.

Mulugeta thrived in Bangalore. Here, he was finally able to combine his two loves, physics and maths, studying statistical physics and stochastic processes and applying them to issues in non-equilibrium thermodynamics. He has worked in that field ever since. He received his PhD in 1998 from the Indian Institute of Science in Bangalore and returned to Addis once more to teach.

Shortly after Mulugeta’s return from Bangalore to Ethiopia in August 1998, some of his former students formed the Ethiopian Physical Society, electing him as its first president. Other students of his who had taken positions in the US created the Ethiopian Physical Society of North America (EPSNA), formally established in 2008. Bililign organized and convened its first meeting.

In 2007 Philip Taylor, a soft-condensed-matter physicist from Case Western Reserve University in the US, who had been Tsige’s PhD supervisor, heard the story of Mulugeta’s imprisonment. Astonished, he spearheaded the successful 2012 application for Mulugeta to receive the APS’s Sakharov prize, which is given every two years to physicists who have displayed “outstanding leadership and achievements of scientists in upholding human rights”.

Mulugeta Bekele with his wife Malefia
Honoured figure Mulugeta Bekele with his wife Malefia at the March 2012 meeting of the American Physical Society in Boston, where he was awarded the Sakharov medal for his “tireless efforts in defence of human rights and freedom of expression”. (Courtesy: Solomon Bililign)

Unsure that he would receive travel funds to attend a special award ceremony at that year’s APS March meeting in Boston, the EPSNA raised money for Mulugeta and his wife to attend. Jetlagged, worn out by the cold, and somewhat overwhelmed by the attention, Mulugeta could not be found as the ceremony began. EPSNA members tracked him down to his hotel room, where he was dressing in traditional Ethiopian clothes for the occasion – all white from head to toe, including shoes.

Under a dark Sun

In recent years, Mulugeta has continued to teach and collaborate with students and former students, publishing in a wide range of journals, as well as helping out with the Ethiopian Physical Society. But while I was in Ethiopia to talk to Mulugeta at the start of 2026, the Trump administration curtailed immigrant visas from Ethiopia and almost half of all nations in Africa, supposedly in an attempt to “protect the security of the United States”. A few months before, it had imposed a $100,000 fee on work visas, all but preventing US universities from hiring non-US citizens. It had also killed the USAID programme that had once sent Mulugeta to the US for his master’s degree.

The Trump administration has also withdrawn the US from international scientific organizations, conventions and panels, and has gutted the most important US scientific agencies. These and other measures are destroying the networks of international physics collaborations of the kind that Mulugeta both promoted and benefited from – networks that nurture education, careers and knowledge.

“We are not yet in good hands,” Mulugeta warns me as I start to leave. “We are,” he says, “still under the dark Sun.”

The post Mulugeta Bekele: the jailed and tortured scientist who kept Ethiopian physics alive appeared first on Physics World.


Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87

The British-American theoretical physicist Anthony Leggett died on 8 March at the age of 87. Leggett shared the 2003 Nobel Prize in Physics with Alexei Abrikosov and Vitaly Ginzburg for their “pioneering contributions to the theory of superconductors and superfluidity”.

Born on 26 March 1938 in London, UK, Leggett graduated in literae humaniores (classical languages and literature, philosophy and Greco-Roman history) at the University of Oxford in 1959.

While philosophy was Leggett’s strongest subject, he did not envisage a career as a philosopher because he felt that the subject depended more on turns of phrase than objective criteria.

As part of an experiment at Oxford to see if it was possible to convert a classicist with minimal qualifications in maths and science into a physicist, Leggett was awarded a degree in physics in 1961.

Leggett then embarked on a DPhil in physics, which he completed at Oxford in 1964, followed by postdocs at the University of Illinois Urbana-Champaign in the US and Kyoto University, Japan.

In 1967 he moved back to the UK, spending the next 15 years at Sussex University. It was at Sussex that he carried out his Nobel-prize-winning work on the theory of superfluidity – the ability of a fluid to flow without viscosity.

Superfluidity in helium-4 was discovered in the 1930s, and in the 1960s several theorists predicted that helium-3 might also be a superfluid.

However, the two forms of helium are fundamentally different. Helium-4 atoms are bosons and can all condense into the same quantum ground state at low enough temperatures – an essential feature of both superfluidity and superconductivity.

Helium-3 atoms, on the other hand, are fermions and the Pauli exclusion principle prevents them from entering such a quantum state.

Electrons, which are also fermions, overcome this problem by forming Cooper pairs as described by the BCS theory of superconductivity that was developed in 1957 by John Bardeen, Leon Cooper and Robert Schrieffer.

Theorists predicted that helium-3 atoms could do something similar and in 1972 superfluidity in helium-3 was finally observed at Cornell University – a feat that earned David Lee, Douglas Osheroff and Robert Richardson the 1996 Nobel Prize in Physics.

Yet many of the results puzzled theorists. In particular there were three different superfluid phases, and the results of nuclear magnetic resonance experiments on the samples could not be explained.

Leggett showed that these results could be explained by the spontaneous breaking of various symmetries in the superfluid and for the work he was awarded a third of the 2003 Nobel Prize in Physics, with Abrikosov and Ginzburg being honoured for their work on type-II superconductors.

A life in science

In 1983 Leggett moved to the University of Illinois at Urbana-Champaign where he remained for the rest of his career until retiring in 2019. There he focussed on problems in high-temperature superconductivity, superfluidity in quantum gases and the fundamentals of quantum mechanics.

In 1998 he was elected an Honorary Fellow of the Institute of Physics and in 2004 was appointed Knight Commander of the Order of the British Empire (KBE) “for services to physics”. In 2023 the Institute for Condensed Matter Theory at the University of Illinois at Urbana-Champaign was renamed the Sir Anthony Leggett Institute.

As well as the Nobel prize, Leggett won many other awards including the 2002 Wolf Prize for physics. He also published two books: The Problems of Physics (Oxford University Press, 1987) and Quantum Liquids (Oxford University Press, 2006).

Peter McClintock from Lancaster University, who has carried out work in superfluidity, says he is “very sad” to hear the news. “[Leggett] was a brilliant physicist whose genius was to comprehend underlying mechanisms and processes and explain their physical essence in comprehensible ways,” says McClintock. “My dominant memory is of the discovery of the superfluid phases of helium-3 and of the way in which [Leggett] was able to interpret each new item of experimental information and slot it into a nascent theoretical framework to build up a coherent picture of what was going on – while always enumerating the remaining loose ends and possible alternative explanations.”

James Sauls, a theorist at Louisiana State University, says that Leggett made discoveries in several areas of physics such as the foundations of quantum mechanics, quantum tunnelling in amorphous glasses and superconducting devices as well as the theory of heat transport at ultra-low temperatures. “Leggett’s contributions to quantum mechanics and low-temperature physics are remarkable and enduring,” adds Sauls. “[Leggett’s] style in theoretical physics was unique in its clarity and originality.”

In a statement, Makoto Gonokami, president of the RIKEN labs in Japan, said that he too is “deeply saddened” by the news and that Leggett had “provided warm support for researchers in Japan” through his many trips to the country.

“Leggett made pioneering contributions to our understanding of how quantum mechanics manifests itself in macroscopic matter [and] his theoretical work on superfluid helium-3 provided profound insights into quantum order in strongly interacting fermionic systems,” notes Gonokami. “His work significantly advanced the study of quantum condensed matter and macroscopic quantum coherence.”

The post Condensed-matter physics pioneer and Nobel laureate Anthony Leggett dies aged 87 appeared first on Physics World.


Physicists identify unexpected quantum advantage in a permutation parity task

Imagine all the different ways you can rearrange a list of labelled items. If you can see only a tiny fraction of the labels, it’s easy to assume you have almost no information about how the list as a whole was rearranged. After all, if you shuffle a large deck of cards and then hide most of the labels on the cards, how could anyone possibly tell what permutations you made?

Recent theoretical work by physicists at Universitat Autonoma de Barcelona (UAB), Spain, and Hunter College of the City University of New York (CUNY), US, reveals that this intuition can fail in surprising ways, hinting at deep links between information, symmetry and computation. Specifically, the UAB-CUNY team found that quantum mechanics plays a key role in preserving parity – a global property of a permutation – even when most local information is erased.

An impressive parity identification

Imagine a clever magician named Alice. She hands you a stack of n coloured disks in a known order and leaves the room while you shuffle them. When she returns, she asks: “Can I tell how you permuted the disks?”

If every disk has its own unique label, the answer is obviously “yes”. But if Alice removes some of the labels, she can pose a subtler challenge: “Can I at least tell whether your shuffle swapped the positions of the disks an even or odd number of times?”

Classically, the answer is “no”. With fewer labels than disks, some labels must be repeated. Swapping two disks with the same label leaves the observed configuration unchanged, yet flips the parity of the underlying permutation. As a result, determining parity with certainty requires one unique label per disk. Anything less, and the information is fundamentally lost.
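This pigeonhole argument is easy to check directly. The sketch below (an illustration of the classical argument only; the four-disk labelling and the specific permutations are hypothetical, not taken from the paper) confirms that swapping two disks carrying the same label flips the parity of the permutation while leaving the observed sequence of labels unchanged:

```python
def parity(perm):
    """Parity of a permutation given as a tuple of positions 0..n-1.
    Counted via inversions: even -> +1, odd -> -1."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return 1 if inversions % 2 == 0 else -1

# Four disks but only two distinct labels, 'A' and 'B'
labels = ['A', 'A', 'B', 'B']

# The identity vs the permutation that swaps the two 'A' disks
identity = (0, 1, 2, 3)
swap_As = (1, 0, 2, 3)

observed_identity = tuple(labels[i] for i in identity)
observed_swap = tuple(labels[i] for i in swap_As)

print(observed_identity == observed_swap)  # True: observations identical
print(parity(identity), parity(swap_As))   # 1 -1: parities differ
```

Since the two permutations produce identical observations but opposite parities, no classical strategy with repeated labels can determine parity with certainty.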

Quantum mechanics changes this conclusion. In their paper, which is published in Physical Review Letters, UAB’s Arnau Diebra and colleagues showed that as long as there are at least √n labels – far fewer than the total number of disks – one can still determine the parity of any permutation applied to the system, provided the game follows the rules of quantum mechanics. The problem remains the same; the only difference is that the initial state is prepared as a quantum state. In other words, even when most of the detailed information about individual elements is erased, a global feature of the transformation survives, and exploiting quantum features makes it possible to extract it with carefully chosen measurements. This is not sleight of hand: it is a genuine mathematical insight into how much information certain global properties retain under massive data reduction.

Quantum advantage

In the field of quantum science, it’s common to ask whether quantum systems can outperform classical ones at specific tasks, a phenomenon known as quantum advantage. Here, “advantage” does not necessarily mean doing everything faster, but rather the ability to solve carefully chosen problems using fewer resources such as time, memory or information. Notable examples include quantum algorithms that factor large numbers more efficiently than any known classical method, and quantum communication protocols that achieve tasks that would be impossible with classical correlations alone.

The parity-identification problem fits naturally into this landscape. Parity is a global property, insensitive to most local details. In this respect, it resembles many other quantities studied in quantum physics, from topological invariants to entanglement measures.

What makes quantum advantage possible in this problem is entanglement – and lots of it. A compound quantum system is said to be entangled when its subsystems are correlated in a nonclassical way. A simple example might be a pair of qubits (quantum bits) for which measuring the state of one qubit gives you information about the state of the other in a way that cannot be reproduced by any classical correlation. In their work, the UAB-CUNY physicists used a geometric measure of entanglement: the “distance” between the state of the system and a state in which all subsystems are separable (that is, not entangled). If this distance is too short, the protocol fails entirely.

The crucial point is that entanglement allows information about the permutation to be stored in genuinely nonlocal correlations among particles (the “cards” in the deck), rather than in properties of each particle/card individually. In effect, the “memory” needed to identify the parity is written into the joint quantum state. No single particle carries the answer, but the system as a whole does. This is precisely what classical systems cannot replicate: once local labels are lost, there is nowhere left for the information to hide.
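To make the classical notion concrete, here is a minimal Python sketch of permutation parity, computed by counting inversions. The function name and the example permutations are our own illustration, not anything from the paper:

```python
from itertools import combinations

def parity(perm):
    """Return +1 for an even permutation of 0..n-1, -1 for an odd one,
    by counting inversions (pairs of positions that appear out of order)."""
    inversions = sum(1 for i, j in combinations(range(len(perm)), 2)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

print(parity((1, 0, 2)))     # a single swap is odd: prints -1
print(parity((1, 0, 3, 2)))  # two swaps are even: prints 1
```

Classically, evaluating this function requires knowing where every labelled element ended up. The quantum result says the ±1 answer can still be extracted when only of order √n labels survive.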

Can one do better than √n?

The fact that the threshold for quantum advantage scales with √n is one of the most intriguing aspects of the work. At present, the reason for this remains an open question. While Diebra and colleagues emphasize that the scaling is provably optimal within quantum mechanics, they acknowledge that a more intuitive or fundamental explanation is still missing. Finding such an explanation could illuminate broader principles governing how quantum systems compress and protect global information.

While the parity-identification problem has no immediate known applications, understanding how properties can be inferred from limited information is also crucial when dealing with realistic quantum devices, where noise, decoherence and imperfect measurements severely restrict what information can be accessed. Results like this therefore suggest that some computational or informational tasks may remain feasible even when our view of the system is drastically incomplete.

Speaking more broadly, the conceptual implications of proving new examples of quantum advantage are clear: even for extremely simple inference tasks, quantum strategies can outperform classical ones in unexpected and qualitative ways. The result therefore provides a clean testing ground for deeper questions about quantum resources, symmetry and information compression. Which specific features of entanglement are responsible for the advantage? Can similar thresholds be found for other groups or more complex symmetries? And does the square-root scaling reflect a universal principle?

For now, the work serves as a reminder that – even decades into the development of quantum information theory – basic questions about how information is stored, hidden, and revealed in quantum systems can still produce genuine surprises.

The post Physicists identify unexpected quantum advantage in a permutation parity task appeared first on Physics World.


Long-distance quantum sensor network advances the search for dark matter

A new way of searching for dark-matter candidate particles called axions has produced the tightest constraint yet on how they can interact with normal matter. Using a two-city network of quantum sensors based on nuclear spins, physicists in China narrowed the possible values of a parameter known as the axion-nucleon coupling below a limit previously set by astrophysical observations. As well as providing insights into the nature of dark matter, the technique could aid investigations of other beyond-the-Standard-Model physics phenomena such as axion stars, axion strings and Q-balls.

Dark matter is thought to make up over 25% of the universe's total mass-energy content, but it has never been detected directly. Instead, we infer its existence from its gravitational interactions with visible matter and its effect on the large-scale structure of the universe.

While the Standard Model of particle physics does not incorporate dark matter, several physicists have proposed ideas for how to bring it into the fold. One of the most promising involves particles called axions. First hypothesized in the 1970s as a way of explaining unresolved questions about charge-parity violation, axions are chargeless and much less massive than electrons. This means they interact only weakly with matter and electromagnetic radiation.

According to theoretical calculations, the Big Bang should have produced axions in abundance. During phase transitions in the early universe, these axions would have formed topological defects – defects that study leader Xinhua Peng of the University of Science and Technology of China (USTC) says should, in principle, be detectable. “These defects are expected to interact with nuclear spins and induce signals as the Earth crosses them,” Peng explains.

A new axion search method

The problem, Peng continues, is that such signals are expected to be extremely weak and transient. She and her colleagues therefore developed an alternative axion search method that exploits a different predicted behaviour.

When fermions (particles with half-integer spin) interact, or couple, with axions, they should produce a pseudo-magnetic field. Peng and colleagues looked for evidence of this interaction using a network of five quantum sensors, four in Hefei and one in Hangzhou. These sensors combined a large ensemble of polarized rubidium-87 (⁸⁷Rb) atoms with polarized xenon-129 (¹²⁹Xe) nuclear spins.

“Using nuclear spins has many advantages,” Peng explains. “These include a higher energy resolution detection for topological dark matter (TDM) axions thanks to a much smaller gyromagnetic ratio of nuclear spins; substantial spin amplification owing to the high ensemble density of noble-gas spins; and efficient optimal filtering enabled by the long nuclear-spin coherence time.”

The USTC researchers’ setup also has other advantages over previous laboratory-based TDM searches, including the Global Network of Optical Magnetometers for Exotic physics searches (GNOME). While GNOME operates in a steady-state detection mode, the USTC researchers use a detection scheme that probes transient “free-decay oscillating” signals generated on spins after a TDM crossing. The USTC team also implemented a dual-phase optimal filtering algorithm to extract TDM signals with a signal-to-noise ratio at the theoretical maximum.

Peng tells Physics World that these advantages enabled the team to explore regions of TDM parameter space well beyond limits set by astrophysical searches. The transient-state detection scheme also enables sensitive searches for TDM in the region where the axion mass exceeds 100 peV – a region that GNOME cannot access.

Most stringent constraints

The researchers have not yet recorded a statistically significant topological crossing event using their setup, so the dark matter search is not over. However, they have set more stringent constraints on axion-nucleon coupling across a range of axion masses from 10 peV to 0.2 μeV. Notably, they calculated that the coupling strength must be greater than 4.1 × 10¹⁰ GeV at an axion mass of 84 peV. This limit is stricter than those obtained from astrophysical observations, though Peng notes that these rely on different assumptions.

Peng says the technique developed in this study, which is published in Nature, could lead to the development of even larger, more sensitive networks for detecting transient spin signals such as those from TDM. It also opens new avenues for investigating other physical phenomena beyond the Standard Model that have been theoretically proposed, but have so far lacked a pathway for experimental exploration.

The researchers now plan to increase the number of sensor stations in their network and extend their geographical baselines to intercontinental and even space-based scales. Peng explains that doing so will enhance the network’s detection sensitivity and boost signal confidence. “We also want to enhance the sensitivity of individual sensors via better spin polarization, longer coherence times and advanced quantum control techniques,” she says. Switching to a ³He–K system, she adds, could boost their current spin-rotation sensitivity by up to four orders of magnitude.

The post Long-distance quantum sensor network advances the search for dark matter appeared first on Physics World.


Pathways to a career in quantum: what skills do you need?

Careers in Quantum, which was held on 5 March 2026, is an unusual event. Now in its seventh year, it’s entirely organized by PhD students who are part of the Quantum Engineering Centre for Doctoral Training (CDT) at the University of Bristol in the UK.

As well as giving them valuable practical experience of creating an event featuring businesses in the burgeoning quantum sector, it also lets them build links with the very firms they – and the students and postdocs who attended – might end up working for.

A clever win-win, if you like, with the day featuring talks, a panel discussion and a careers fair made up of companies such as Applied Quantum Computing, Duality, Hamamatsu, Orca Computing, Phasecraft, QphoX, Riverlane, Siloton and Sparrow Quantum.

IOP Publishing featured too with Antigoni Messaritaki talking about her journey from researcher to senior publisher and Physics World features and careers editor Tushna Commissariat taking part in a panel discussion on careers in quantum.

The importance of communication and other “soft skills” was emphasized by all speakers in the discussion, but what struck me most was a comment by Carrie Weidner, a lecturer in quantum engineering at Bristol, who underlined that it’s fine – in fact important – to learn to fail.

“If you’re resilient and can think critically, you can do anything,” said Weidner, who is also director of the quantum-engineering CDT. She warned too of the dangers of generative AI, joking that “every time you use ChatGPT, your brain is atrophying”.

Photo of Diya Nair
Breaking barriers Diya Nair explains the aims and activities of Girls in Quantum. (Courtesy: Matin Durrani)

Another great talk was by Diya Nair, a computer-science undergraduate at the University of Birmingham, who is head of global outreach and UK ambassador for Girls in Quantum.

The organization is now active in almost 70 countries around the world, with the aim of “democratizing quantum education”. As Nair explained, Girls in Quantum does everything from arranging quantum-computing courses and hackathons to creating its crowdfunded quantum-computing game, Hop.

The event also included a discussion about taking quantum research “from concept to commercialization”. It featured Jack Russel Bruce from Universal Quantum, Euan Allen from eye-imaging tech firm Siloton, Joe Longden from Duality Quantum Photonics, and Stewart Noakes, who has mentored numerous companies over the years.

Noakes emphasized that all hi-tech firms have three main needs: talent, money and ideas. In fact, as he explained, companies can sometimes suffer from having too much money as well as too little, especially if they grow too fast and hire people on big salaries who might then need to be let go if funding dries up.

Bruce, though, was positive about the overall state of the quantum-tech sector. “For me, the future is bright,” he said. But as all speakers underlined, if you want to join the industry, make sure you’ve got good communication skills, an open-minded attitude – and a willingness to learn on the go.

The post Pathways to a career in quantum: what skills do you need? appeared first on Physics World.


Metamaterial antennas enhance MR images of the eye and brain

In vivo MR imaging
In vivo imaging T2-weighted MRI of three healthy volunteers (left to right columns) using the bend-MTMA and bend-loop antennas reveals increased intraocular signal for the metamaterial-based bend-MTMA configuration. (Courtesy: CC BY 4.0/Advanced Materials 10.1002/adma.202517760)

MRI is one of the most important imaging tools employed in medical diagnostics. But for deep-lying tissues or complex anatomic features, MRI can struggle to create clear images in a reasonable scan time. A research team led by Thoralf Niendorf at the Max Delbrück Center in Germany is using metamaterials to create a compact radiofrequency (RF) antenna that enhances image quality and enables faster MRI scanning.

Imaging the subtle structures of the eye and orbit (the surrounding eye socket) is a particular challenge for MRI, due to the high spatial resolution and small fields-of-view required, which standard MRI systems struggle to achieve. These limitations are generally due to the antennas (or RF coils) that transmit and receive the RF signals. Increasing the sensitivity of these antennas will increase signal strength and improve the resolution of the resulting MR images.

To achieve this, Niendorf and colleagues turned to electromagnetic metamaterials – artificially manufactured, regularly arranged structures made of periodic subwavelength unit cells (UCs) that interact with electromagnetic waves in ways that natural materials do not. They designed the metamaterial UCs based on a double-square split-ring resonator design, tailored for operation at a high magnetic field strength of 7.0 T.

Metamaterials improve transmit–receive performance

In their latest study, led by doctoral student Nandita Saha and reported in Advanced Materials, the researchers created a metamaterial-integrated RF antenna (MTMA) by fabricating the UCs into a 5 × 8 array. They built two configurations: a planar antenna (planar-MTMA) and a version with a 90° bend in the centre (bend-MTMA) to conform to the human face. For comparison, they also built conventional counterparts without the metamaterial (planar-loop and bend-loop).

The researchers simulated the MRI performances of the four antennas and validated their findings via measurements at 7.0 T. Tests in a rectangular phantom showed that the planar-MTMA demonstrated between 14% and 20% higher transmit efficiency than the planar-loop (assessed via B₁+ mapping).

They next imaged a head phantom, placing planar antennas behind the head to image the occipital lobe (the part of the brain involved in visual processing) and bend antennas over the eyes for ocular imaging. For the planar antennas, B₁+ mapping revealed that the planar-MTMA generated around 21% (axial), 19% (sagittal) and 13% (coronal) higher intensity than the planar-loop. Gradient-echo imaging showed that planar-MTMA also improved the receive sensitivity, by 106% (axial), 94% (sagittal) and 132% (coronal).

Antenna design and deployment
Antenna design and deployment Layout of the planar and bend antennas, and the experimental setups for imaging an anatomical head phantom and a volunteer in a 7.0 T whole-body MRI system. (Courtesy: CC BY 4.0/Advanced Materials 10.1002/adma.202517760)

The bend antennas exhibited similar trends, with B₁+ maps showing transmit gains of roughly 20% for the bend-MTMA over the bend-loop. The bend-MTMA also outperformed the bend-loop in terms of receive signal intensity, by approximately 30%.

“With the metamaterials we developed, we were able to guide and modulate the RF fields generated in MRI more efficiently,” says Niendorf. “By integrating metamaterials into MRI antennas, we created a new type of transmitter and detector hardware that increases signal strength from the target tissue, improves image sharpness and enables faster data acquisition.”

In vivo imaging

Importantly, the new MRI antenna design is compatible with existing MRI scanners, meaning that no new infrastructure is needed for use in the clinic. The researchers validated their technology in a group of volunteers, working closely with partners at Rostock University Medical Center.

Before use on human subjects, the researchers evaluated the MRI safety of the four antennas. All configurations remained well below the IEC’s specific absorption rate (SAR) limit. They also assessed the bend-MTMA (which showed the highest SAR) using MR thermometry and fibre optic sensors. After 30 min at 10 W input power, the temperature increased by about 1.5°C. At 5 W, the increase was below 0.5°C, well within IEC safety thresholds, so this power level was used for the in vivo MRI exams.

The team first performed MRI of the eye and orbit in three healthy adults, using the bend-loop and bend-MTMA antennas positioned over the eyes. Across all volunteers, the bend-MTMA exhibited better transmit performance in the ocular region than the bend-loop.

The bend-MTMA antenna also generated larger intraocular signals than the bend-loop (assessed via T2-weighted turbo spin-echo imaging), with signal increases of 51%, 28% and 25% in the left eyes, for volunteers 1, 2 and 3, respectively, and corresponding gains of 27%, 26% and 29% for their right eyes. Overall, the bend-MTMA provided more uniform and higher-intensity signal coverage of the ocular region at 7.0 T than the bend-loop.

To further demonstrate clinical application of the bend-MTMA, the team used it to image a volunteer with a retinal haemangioma in their left eye. A 7.0 T MRI scan performed 16 days after treatment revealed two distinct clusters of structural change due to the therapy. In addition, one of the volunteer’s ocular scans revealed a sinus cyst, an unexpected finding that showed the diagnostic benefit of the bend-MTMA being able to image beyond the orbit and into the paranasal sinuses and inferior frontal lobe.

The team used the planar antennas to image the occipital lobe, a clinically relevant target for neuro-ophthalmic examinations. The planar-MTMA exhibited significantly higher transmit efficiency than the planar-loop, as well as higher signal intensity and wider coverage, enhancing the anatomical depiction of posterior brain regions.

“Clearer signals and better images could open new doors in diagnostic imaging,” says Niendorf. “Early ophthalmology applications could include diagnostic confirmation of ambiguous ophthalmoscopic findings, visualization and local staging of ocular masses, 3D MRI, fusion with colour Doppler ultrasound, and physio-metabolic imaging to probe iron concentration or water diffusion in the eye.”

He notes that with slight modifications, the new antennas could enable MRI scans depicting the release and transport of drugs within the body. Their geometry and design could also be tuned to image organs such as the heart, kidneys or brain. “Another pioneering clinical application involves thermal magnetic resonance, which adds a thermal intervention dimension to an MRI device and integrates diagnostic guidance, thermal treatment and therapy monitoring facilitated by metamaterial RF antenna arrays,” he tells Physics World.

The post Metamaterial antennas enhance MR images of the eye and brain appeared first on Physics World.


Laser-written glass plates could store data for thousands of years

Humans are generating more data than ever before. While much of these data do not need to be stored long-term, some – such as scientific and historical records – would ideally still be retrievable in decades, or even centuries. The problem is that modern digital archive systems such as hard disk drives do not last that long. This means that data must regularly be transferred to new media, which is costly and time-consuming.

A team at Microsoft Research now claims to have found a solution. By using ultrashort, intense laser pulses to “write” data units called phase voxels into glass chips, the team says it has created a medium that could store 4.8 terabytes (TB) of data error-free for more than 10,000 years – a span that exceeds the age of history’s oldest surviving written records.

Direct laser writing

The idea of writing data into glass or other durable media with lasers is not new. Direct laser writing, as it is known, involves focusing high-power pulses, usually just femtoseconds (10⁻¹⁵ s) long, on a three-dimensional region within a medium. This modifies the medium’s optical properties in that region, and each modified region becomes a data-storage unit known as a voxel, which is the 3D equivalent of a pixel.

Because the laser’s energy is focused on a very small volume, the voxels created with this method can be very densely packed. Changing the amplitude and polarization of the laser’s output changes what information gets encoded at each voxel, and an optical microscope can “read out” this information by picking up changes in the light as it passes through each modified region. In terms of the media used, glass is particularly promising because it is thermally and chemically stable and is robust to moisture and electromagnetic interference.

Direct laser writing does have some limitations, however. In particular, encoding information generally requires multiple laser pulses per voxel, restricting the technique’s throughput and efficiency.

Two types of voxel, one laser pulse

Microsoft Research’s “Project Silica” team says it overcame this problem by encoding information in two types of voxel: phase voxels and birefringent voxels. Both types involve modifying the refractive index of the medium, and thus the speed of light within it. The difference is that whereas phase voxels create an isotropic change in the refractive index, birefringent voxels create an anisotropic change by rotating the voxel in the plane of the 120-mm square, 2-mm-thick glass chip.

Crucially, both types of voxel can be produced using a single laser pulse. According to Project Silica team leader Richard Black, this makes the modified region smaller and more uniform, minimizing effects such as light scattering that can interfere with read-outs from neighbouring voxels. It also allows many voxel layers to be written into, and then read out from, a single glass chip. The result is a system that can generate up to 10 million voxels per second, which equates to 25.6 million bits of data per second (25.6 Mbit s⁻¹).

Performance of different types of glass

The Microsoft researchers studied two types of glass, both of which have better mechanical properties than ordinary window glass. In 301 layers of fused silica glass, they achieved a data density of 1.59 Gbit mm⁻³ using birefringent voxels, with a write throughput of 25.6 Mbit s⁻¹ and a write efficiency of 10.1 nJ per bit. In 258 layers of borosilicate glass, the data density reached 0.678 Gbit mm⁻³ using phase voxels. Here, the write throughput was 18.4 Mbit s⁻¹ and the write efficiency 8.85 nJ per bit.
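As a sanity check on these figures, here is a back-of-envelope calculation of our own. It assumes the entire chip volume is writable, which real margins and formatting overheads will reduce, so it gives only an upper bound:

```python
# Figures quoted in the article; the full-volume assumption is ours.
chip_volume_mm3 = 120 * 120 * 2        # 120-mm square, 2-mm-thick chip
density_gbit_mm3 = 1.59                # fused silica, birefringent voxels
voxel_rate = 10e6                      # voxels written per second
bit_rate = 25.6e6                      # bits written per second

capacity_tb = chip_volume_mm3 * density_gbit_mm3 / 1000 / 8
print(f"upper-bound capacity: {capacity_tb:.1f} TB")   # prints 5.7 TB
print(f"bits per voxel: {bit_rate / voxel_rate:.2f}")  # prints 2.56
```

An idealized 5.7 TB upper bound sits comfortably above the 4.8 TB actually demonstrated, and each single-pulse voxel carries about 2.56 bits on average.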

“The phase voxel discovery in particular is quite notable because it lets us store data in ordinary borosilicate glass, rather than pure fused silica; do it with a single laser pulse per voxel; and do it highly parallel in close proximity,” says Black. “That combination of cheaper material and much simpler and faster writing and reading was a genuinely exciting moment for us.”

The researchers also showed that they could directly inscribe the glass using four independent laser beams in parallel, further increasing the write speeds for both types of glass.

Surviving “benign neglect”

To determine how long these inscribed glass plates could store data, the team repeatedly heated them to 500 °C, simulating their long-term ageing at lower temperatures. The results of these experiments suggest that encoded data could be retrieved after 10,000 years of storage at 290 °C. However, Black acknowledges that this figure does not account for external effects such as mechanical stress or chemical corrosion that could degrade the glass and the data it stores. Another unaddressed challenge is that storage capacity and writing speed will both need to grow before the technology can compete with today’s data centres.
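This kind of accelerated ageing relies on the Arrhenius picture, in which a thermally activated degradation process speeds up exponentially with temperature. A rough sketch of the extrapolation follows; the 2 eV activation energy is purely illustrative and not a value from the paper:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def equivalent_time(t_hot_s, T_hot_K, T_cold_K, Ea_eV):
    """Time at T_cold_K producing the same thermally activated ageing
    as t_hot_s spent at T_hot_K (simple Arrhenius scaling)."""
    return t_hot_s * math.exp((Ea_eV / K_B) * (1 / T_cold_K - 1 / T_hot_K))

# One hour of ageing at 500 °C, extrapolated to 290 °C with an
# illustrative 2 eV activation energy, equates to several years.
years = equivalent_time(3600, 500 + 273.15, 290 + 273.15, 2.0) / 3.15e7
print(f"{years:.1f} years")
```

The exponential sensitivity to temperature is what lets short, hot bake tests stand in for millennia of storage, and also why the extrapolated lifetime depends strongly on the activation energy measured for the real material.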

If these deficiencies can be remedied, Black thinks the clearest potential applications would be in national libraries and other facilities that store scientific data and cultural records. “It’s also compelling for cloud archives where data is written once and kept indefinitely,” Black says. He points out that the team has already demonstrated proofs of concept with Warner Bros., the Global Music Vault and the Golden Record 2.0 project, a “cultural time capsule” inspired by the literal golden records launched on the Voyager spacecraft in the 1970s.

A common factor across all these organizations, Black explains, is that they need media that can survive “benign neglect” – something he says Project Silica delivers. He adds that the project also provides what he calls operational proportionality, meaning that its costs are primarily a function of the operations performed on the data, not the length of time the data are kept. “This completely alters the way we think about keeping archival material,” he says. “Once you have paid to keep the data, there is little point in deleting it, and you might as well keep it.”

Microsoft began exploring direct laser data storage in glass nearly a decade ago thanks to team member Ant Rowstron, who recognized the potential of work being done by physicist Peter Kazansky and colleagues at the University of Southampton, UK. The latest version of the technique, which is detailed in Nature, grew out of that collaboration, and Black says its capabilities are limited only by the power and speed of the femtosecond laser being used. “We have now concluded our research study and are sharing our results so that others may build on our work,” he says.

The post Laser-written glass plates could store data for thousands of years appeared first on Physics World.


Ultrasound system solves the ‘unsticking problem’ in biomedical research

“Surround sound for biological cells,” is how Luke Cox describes the ultrasound technology that Impulsonics has developed to solve the “unsticking problem” in biomedical science. Cox is co-founder and chief executive of UK-based Impulsonics, which spun out of the University of Bristol in 2023.

He is also my guest in this episode of the Physics World Weekly podcast. He explains why living cells grown in a petri dish tend to stick together, and why this can be a barrier to scientific research and the development of new medical treatments.

The system uses an array of ultrasound transducers to focus sound so that it frees up and manipulates cells in a way that does not alter their biological properties. This is unlike chemical unsticking processes, which can change cells and affect research results.

We also chat about Cox’s career arc from PhD student to chief executive and explore opportunities for physicists in the biomedical industry.


The post Ultrasound system solves the ‘unsticking problem’ in biomedical research appeared first on Physics World.


Scientists are failing to disclose their use of AI despite journal mandates, finds study

An analysis of more than 5.2 million papers in 5000 different journals has revealed a dramatic rise in the use of artificial intelligence (AI) tools in academic writing across all scientific disciplines, especially physics.

However, the analysis has revealed a big gap between the number of researchers who use AI and those who admit to doing so – even though most scientific journals have policies requiring the use of AI to be disclosed.

Carried out by data scientist Yi Bu from Peking University and colleagues, the analysis looks at papers that are listed in the OpenAlex dataset and were published between 2021 and 2025.

To assess the impact of editorial guidelines introduced in response to the growing use of generative AI tools such as ChatGPT, they examined journal AI-writing policies, looked at author disclosures and used AI to see if papers had been written with the help of technology.

The AI detection analysis reveals that the use of AI writing tools has increased dramatically across all scientific disciplines since 2023. It also finds that 70% of journals have adopted AI policies, which primarily require authors to disclose the use of AI-writing tools.

IOP Publishing, which publishes Physics World, for example, has a journals policy that supports authors who use AI in a “responsible and appropriate” manner. It encourages authors, however, to be “transparent about their use of any generative AI tools in either the research or the drafting of the manuscript”.

A new framework

Yet in the new study, a full-text analysis of 75 000 papers published since 2023 reveals that only 76 articles (about 0.1% of the total) explicitly disclosed the use of AI writing tools.

In addition, the study finds no significant difference in the use of AI between journals that have disclosure policies and those that do not, which suggests that disclosure requirements are being ignored – what the authors call a “transparency gap”.

The study also finds that researchers from non-English-speaking countries are more likely to rely on AI writing tools than native English speakers. Increases in the use of AI writing tools are found to be particularly rapid in journals with high levels of open-access publishing.

The authors now call for a re-evaluation of ethical frameworks to foster responsible AI integration in science. They state that prohibition or disclosure requirements are insufficient to regulate AI use, with their results showing that researchers are not complying with policies.

The authors argue that instead of “opposition and resistance”, “proactive engagement and institutional innovation” is needed “to ensure AI technology truly enhances the value of science”.

The post Scientists are failing to disclose their use of AI despite journal mandates, finds study appeared first on Physics World.


The humanity of machines: the relationship between technology and our bodies

Humanity has had a complicated relationship with machines and technology for centuries. While we created these inventions to make our lives easier, and have become heavily reliant upon them, we have often feared their impact on society.

In her debut book, The Body Digital: a Brief History of Humans and Machines from Cuckoo Clocks to ChatGPT, Vanessa Chang tells the story of this symbiotic partnership, covering tools as diverse as the self-playing piano and generative AI products. The short book combines creative storytelling, an inward look at our bodies and interpersonal relationships, and a detailed history of invention. Chang – who is the director of programmes at Leonardo, the International Society for the Arts, Sciences, and Technology in California – offers us a framework for examining future worlds based on the relationship between humanity and machines.

“Technology” has no easy definition. The Body Digital therefore takes a broad approach, looking at software, machines, infrastructure and tools. Chang examines objects as mundane as the pen and as complex as the road networks that define our cities. She focuses on the interplay between machine and human: how tools have lightened our load and become embedded in our behaviour. In doing this she asks the reader: is it possible for the human body to extract itself from technology?

Each chapter of the book centres on a different part of the human anatomy – hand, voice, ear, eye, foot, body and mind – looking at the historical relationship between that body part and technology. Chang follows this thread through to the modern day and the large-scale impact these technologies have had on the development of our communities, communications and social structures. The chapters are a vehicle for Chang to present interesting pieces of history and discussions about society and culture. Her explanations are tightly knit, and the book covers huge ground in its relatively concise page count.

Chang avoids “doomerism”, remaining even-handed about our reservations towards technological advancement. She is careful in her discussion of new technologies, particularly those that are often fraught in public discourse, such as the use of generative AI in creating art, and the potential harms of facial-recognition software.

She includes genuine concerns – like biases creeping into training data for large language models – but mitigates these fears by discussing how technologies have become enmeshed in human culture through history. Our fear of some technologies has been unfounded – take, for example, the idea that the self-playing piano would supersede live piano concerts. These debates, Chang argues, have happened throughout the history of technology, and some of the same arguments from the past can easily be applied to future technology.

While this commentary is often thought-provoking, it sometimes doesn’t go as far as it might. There is relatively limited discussion throughout the book of the technological ecosystem we currently live in and how that might affect our level of optimism about the future. In particular, the topics of human labour being supplanted by machine labour, and the influence of tech monoliths like Apple and Google, receive relatively little attention.

In one example, Chang discusses the ways in which “telecommunication technologies might serve as channels into the afterlife”, allowing us to use technology to artificially recreate the voices of our loved ones after death. While the book contains a full discussion of how uncanny and alarming this type of “artistic necrophilia” might be, Chang tempers fear by pointing out that by being careful with our data and our digital selves, we might be able to “mitigate the transformation of [our] voices into pure commodities”. However, her exploration of who controls our data, the relationship between data and capital, and the degree of control we have over how our data are used remains somewhat limited.

Poetic technology

The difference between offering interesting ideas and overexplaining is a hard needle to thread, and one that Chang navigates successfully. One striking feature of The Body Digital is the quality of the prose. Chang has a background in fiction writing and her descriptions reflect this. An automaton is anthropomorphized as a “petite, barefoot boy” with a “cloud of brown hair”; and the humble footpath is described as “veer[ing] at a jaunty angle from the pavement, an unruly alternative to concrete”. As a consequence, her ideas are interesting and memorable, making the book readable and often moving.

Particularly impressive is Chang’s attitude to exposition, which mimics fiction’s age-old adage of “show, don’t tell”. She gives the reader enough information to learn something new in context and ask follow-up questions, without banging the reader over the head with an answer to these questions. The book embodies the same relationship between the written word and human consciousness that Chang discusses within it. The Body Digital marinates with the reader in the way any good novel might, while teaching them something new.

The result is a poetic and well-observed text, which offers the reader a different way of understanding humanity’s relationship with technology. It reminds us that we have coexisted with machines throughout the history of our species, and that they have been helpful and positively shaped the direction of our world. While she covers too much ground to gaze in any one direction for too long, the reader is likely to come away enriched and perhaps even hopeful. And, as Chang points out, we have the opportunity to shape the future of technology, by “attending to the rich, idiosyncratic intelligence of our bodies”.

  • 2025 Melville House Publishing 256pp £14.99 pb / £9.49 ebook

The post The humanity of machines: the relationship between technology and our bodies appeared first on Physics World.