Quantum states that won’t entangle

Quantum entanglement is a uniquely quantum link between particles that makes their properties inseparable. It underlies the power of many quantum technologies, from secure communication to quantum computing, by enabling correlations that are impossible in classical physics.

Entanglement nevertheless remains poorly understood and is therefore the subject of much research, both in quantum technologies and in fundamental physics.

In this context, the idea of separability refers to a composite system that can be written as a simple product (or a mixture of products) of the states of its individual parts. This implies there is no entanglement between the parts; to create entanglement, a global transformation is needed.

A system that remains completely free of entanglement, even after any possible global invertible transformation is applied, is called absolutely separable. In other words, it can never become entangled under the action of quantum gates.

Absolutely separable
Separable, Absolutely Separable and Entangled sets: It is impossible to make absolutely separable states entangled with a global transformation (Courtesy: J. Abellanet Vidal and A. Sanpera Trigueros)

Necessary and sufficient conditions to ensure separability exist only in the simplest cases or for highly restricted families of states. In fact, entanglement verification and quantification is known to be generically an NP-hard problem.
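
The two-qubit case is one of those rare tractable examples: a known closed-form test, due to Verstraete and co-workers, states that a two-qubit state with eigenvalues λ1 ≥ λ2 ≥ λ3 ≥ λ4 is absolutely separable exactly when λ1 ≤ λ3 + 2√(λ2λ4). The sketch below implements that known special case (not the new arbitrary-dimension criteria introduced in the paper):

```python
import numpy as np

def is_absolutely_separable_2qubit(rho: np.ndarray) -> bool:
    """Known two-qubit test: with eigenvalues sorted so that
    l1 >= l2 >= l3 >= l4, the state is absolutely separable
    iff l1 <= l3 + 2*sqrt(l2*l4)."""
    l1, l2, l3, l4 = sorted(np.linalg.eigvalsh(rho), reverse=True)
    return bool(l1 <= l3 + 2 * np.sqrt(max(l2 * l4, 0.0)))

# The maximally mixed state can never be entangled by quantum gates...
print(is_absolutely_separable_2qubit(np.eye(4) / 4))   # True

# ...whereas a pure Bell state is already maximally entangled.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(is_absolutely_separable_2qubit(np.outer(bell, bell)))  # False
```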

Recent research published by a team of researchers from Spain and Poland has tackled this problem head-on. By introducing new analytical tools such as linear maps and their inverses, they were able to identify when a quantum state is guaranteed to be absolutely separable.

These tools work in any number of dimensions and allow the authors to pinpoint specific states that are on the border of being absolutely separable or not (mathematically speaking, ones that lie on the boundary of the set). They also show how different criteria for absolute separability, which may not always agree with each other, can be combined and refined using convex geometry optimisation.

Being able to more easily and accurately determine whether a quantum state is absolutely separable will be invaluable in quantum computation and communication.

The team’s results for multipartite systems (systems with more than two parts) also reveal how little we currently understand about the entanglement properties of mixed, noisy states. This knowledge gap suggests that much more research is needed in this area.

Read the full article

Sufficient criteria for absolute separability in arbitrary dimensions via linear map inverses – IOPscience

J. Abellanet Vidal et al, 2025 Rep. Prog. Phys. 88 107601


The secret limits governing quantum relaxation

When we interact with everyday objects, we take for granted that physical systems naturally settle into stable, predictable states. A cup of coffee cools down. A playground swing slows down after being pushed. Quantum systems, however, behave very differently.

These systems can exist in multiple states at once, and their evolution is governed by probabilities rather than certainties. Nevertheless, even these strange systems do eventually relax and settle down, losing information about their earlier state. The speed at which this happens is called the relaxation rate.

Relaxation rates tell us how fast a quantum system forgets its past, how quickly it thermalises, reaches equilibrium, decoheres, or dissipates energy. These rates are important not just for theorists but also for experimentalists, who can measure them directly in the lab.

Recently, researchers discovered that these rates obey a surprisingly universal rule. For a broad class of quantum processes (those described by what physicists call Markovian semigroups), the fastest possible relaxation rate cannot exceed a certain limit. Specifically, it must be no larger than the sum of all relaxation rates divided by the system’s dimension. This constraint, originally a conjecture, was first proven using tools from classical mathematics known as Lyapunov theory.
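
Rendering the verbal statement above as a formula: for a system of Hilbert-space dimension d with relaxation rates Γ1, Γ2, …, the bound reads

\[
\Gamma_{\max} \;\le\; \frac{1}{d}\sum_{k} \Gamma_{k} .
\]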

In a new paper, an international team of researchers has provided a more direct algebraic proof of this universal bound. The new proof has several advantages over the older one, including that it can be generalised more easily – but that’s not all.

The very surprising outcome of their work is that the rule doesn’t require complete positivity. Instead, a weaker condition – two-positivity – is enough. The distinction between these two requirements is crucial.

Essentially, both are measures of how well-behaved a quantum system’s evolution is – how well it is protected from producing nonsensical results. The difference is that two-positivity is slightly less stringent but far more general, and hence very useful for many real-world applications.
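
For readers who want the formal statement (these are standard definitions, not specific to the new paper): a linear map Φ is called k-positive if

\[
(\Phi \otimes \mathrm{id}_k)(X) \ge 0 \quad \text{for every matrix } X \ge 0,
\]

and completely positive if this holds for every k ≥ 1. Two-positivity is simply the k = 2 case, which is why it is the weaker requirement.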

The fact that the new proof only requires two-positivity means that this universal bound on relaxation rates can be applied to many more scenarios.

What’s more, a slightly softer version of the universal constraint still holds even when the condition is weakened further. This shows that the structure behind these bounds is richer and more subtle than previously understood.

Read the full article

A universal constraint for relaxation rates for quantum Markov generators: complete positivity and beyond – IOPscience

D. Chruściński et al, 2025 Rep. Prog. Phys. 88 097602


Implanted electrodes provide intuitive control of prosthetic hand

Loss of a limb can significantly impact a person’s independence and quality of life, with arm amputations particularly impeding routine daily activities. Prosthetic limbs can restore some of the lost function, but often rely on surface electrodes with low signal quality. A research team at the University of Michigan has now shown that implanted electrodes could provide more accurate and reliable control of hand and wrist prostheses.

Today, most upper-limb prostheses are controlled using surface electrodes placed on the skin to detect electrical activity from underlying muscles. The recorded electromyography (EMG) signals are then used to classify different finger and wrist movements. Under real-world conditions, however, these signals can be impaired by inconsistent electrode positioning, changes in limb volume, exposure to sweat and artefacts from user movements.

Implanted electrodes, tiny contacts that are surgically sutured into muscles, could do a better job. By targeting muscles deeper in the arm, they offer higher signal-to-noise ratios and less susceptibility to daily variations. And although amputation can eliminate many of the muscles that control hand functions, techniques such as regenerative peripheral nerve interface (RPNI) surgery – in which muscle tissue is grafted to nerves in the residual limb – enable electrodes to target missing muscles and record relevant signals for prosthetic control.

Senior author Cynthia Chestek points out that such RPNI grafts are also beneficial for the nerve itself. “They provide a target for nerve endings that prevent the formation of painful neuromas, and that may in turn help reduce phantom limb pain,” she explains. “In future, it would also be possible to place electrodes and a wireless transmitter during that same surgery, such that no additional surgeries are required other than the original amputation.”

In their latest work, reported in the Journal of Neural Engineering, Chestek and colleagues investigated whether implanted electrodes could provide stable and high-quality signals for controlling prosthetic hand and wrist function.

Performance comparisons

The study involved two individuals with forearm amputations and EMG electrodes implanted into RPNIs and muscles in their residual limb. The subjects performed various experiments, during which the team recorded EMG signals from the implanted electrodes plus dry-domed and gelled surface electrodes (the gel is used to improve contact with the skin).

In one experiment, participants were tasked with controlling a virtual hand and wrist in real time by mimicking movements (various grips) on a screen. The researchers used the recorded EMG signals to train linear discriminant analysis classifiers to distinguish the cued grips, training separate classifiers for each electrode type.
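
As a rough illustration of this classification step (a minimal sketch, not the team’s actual pipeline – the feature choice, window length and labels here are placeholders), one could train scikit-learn’s linear discriminant analysis on windowed EMG features and score it with the per-bin accuracy reported later in the study:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mean_absolute_value(emg_window: np.ndarray) -> np.ndarray:
    """A common EMG feature: mean absolute value per channel.
    emg_window has shape (n_samples, n_channels)."""
    return np.mean(np.abs(emg_window), axis=0)

# Placeholder data: one EMG window per time bin, labelled with the
# grip cued on screen during that bin.
rng = np.random.default_rng(0)
n_bins, n_samples, n_channels, n_grips = 600, 200, 8, 4
windows = rng.normal(size=(n_bins, n_samples, n_channels))
X = np.array([mean_absolute_value(w) for w in windows])
y = rng.integers(0, n_grips, size=n_bins)

# A separate classifier per electrode type would be trained the same way.
clf = LinearDiscriminantAnalysis().fit(X[:500], y[:500])

# Per-bin accuracy: the fraction of correctly classified time bins.
accuracy = np.mean(clf.predict(X[500:]) == y[500:])
print(f"per-bin accuracy: {accuracy:.1%}")
```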

They then evaluated the performance of these grip classifiers during a posture classification experiment, in which the subjects actively controlled hand or wrist movements of a virtual hand. Participants achieved faster, more accurate and more reliable control using the implanted electrodes than the surface electrodes.

With participants sitting and keeping their arm still, the implanted electrodes achieved average per-bin accuracies (the percentage of correctly classified time bins) of 82.1% and 91.2% for subjects 1 and 2, respectively. The surface electrodes performed worse, with accuracies of 77.1% and 81.3% for gelled electrodes, and 58.2% and 67.1% for dry-domed electrodes, for subjects 1 and 2, respectively.

The researchers repeated this experiment with the subjects standing and moving their arm to mimic daily activities. Adding movement reduced the classification accuracy in all cases, but affected the implanted electrodes to a far smaller degree. The control success rate (the ability to hold a grip for at least 1 s, within 3 s of seeing a movement cue) also diminished between still and moving conditions, but again, the implanted electrodes experienced smaller decreases.

Overall, the performance of online classifiers using implanted electrodes was only slightly affected by arm movements, while classifiers trained on surface electrodes became unstable. Investigating the reasons underlying this difference revealed that implanted electrodes exhibited higher EMG signal amplitudes, lower cross-correlation between channels, and smaller signal deviations between still and moving conditions.

The Coffee Task

To examine a real-world scenario, subject 1 completed the “Coffee Task”, which involves performing the various grips and movements required to: place a cup into a coffee machine; place a coffee pod into the machine; push the start button; move the filled cup onto a table; and open a sugar packet and pour it into the cup.

The subject performed the task using an iLimb Quantum myoelectric prosthetic hand controlled by either implanted or dry surface electrodes, with and without control of wrist rotation. The participant performed the task faster using implanted electrodes, successfully completing the task on all three attempts. For surface-based control, they reached the maximum time limit of 150 s in two out of three attempts.

Although gelled electrodes are the gold standard for surface EMG, they cannot be used whilst wearing a standard prosthetic socket. “With the Coffee Task, use of the physical prosthetic hand is needed, so this was only performed with dry-domed surface electrodes and implanted electrodes,” explains first author Dylan Wallace.

The researchers also assessed whether simultaneous wrist and hand control can reduce compensatory body movements (measured using reflective markers on the subject’s torso), compared with hand control alone. Without wrist rotation, the subject had to lean their entire upper body to complete the pouring task. With wrist rotation enabled, this lean was greatly reduced.

This finding emphasizes how wrist control provides significant functional benefit for prosthesis users during daily activities. Chestek notes that in a previous study where participants wore a prosthesis without an active wrist, “almost everything we asked them to do required large body movements”.

“Fortunately, the implantable electrodes provide highly specific and high-amplitude signals, such that we were able to add that wrist movement without losing the ability to classify multiple different grasps,” she explains. “The next step would be to pursue continuous, rather than discrete, movement for all of the individual joints of the hand – though that will not happen quickly.”


Flight heritage? It isn’t what you think

Falcon 9 launch

In space procurement, there are few phrases that carry more weight than “flight heritage.” Once a supplier claims it, the rest of the room can relax. The hardware has flown, goes the thinking. It worked. The risk of using such hardware is vanishingly small, even absent. This is understandable. Space is famously unforgiving, and if […]


New cosmic map will put dark-matter theories to the test

Astronomers have created the most detailed map to date of the vast structures of dark matter that appear to permeate the universe. Using the James Webb Space Telescope (JWST), the team, led by Diana Scognamiglio at NASA’s Jet Propulsion Laboratory, used gravitational lensing to map the dark matter filaments and clusters with unprecedented resolution. As a result, physicists have new and robust data to test theories of dark matter.

Dark matter is a hypothetical substance that appears to account for about 85% of the mass in the universe – yet it has never been observed directly. Physicists invoke dark matter to explain the dynamics and evolution of large-scale structures in the universe, including the gravitational formation of galaxy clusters and the cosmic filaments connecting them over distances of more than 100 million light-years.

Light from very distant objects beyond these structures is deflected by the gravitational tug of the dark matter within the clusters and filaments. Observed from Earth, this gravitational lensing distorts the images of the distant objects and affects their observed brightness – effects that can be used to determine the dark-matter content of the clusters and filaments.
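
The standard workhorse for turning these measured distortions (shear) into a mass map is the Kaiser–Squires inversion. Below is a minimal flat-sky sketch of that textbook technique – purely illustrative, not the COSMOS-Web pipeline, which involves far more careful shape measurement and noise treatment:

```python
import numpy as np

def kaiser_squires(gamma1: np.ndarray, gamma2: np.ndarray) -> np.ndarray:
    """Recover the lensing convergence (projected mass) from the two
    shear components on a regular grid, via the flat-sky
    Kaiser-Squires relation in Fourier space."""
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[np.newaxis, :]
    k2 = np.fft.fftfreq(ny)[:, np.newaxis]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0  # avoid 0/0; the k = 0 mode is fixed below

    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    kappa_hat = ((k1**2 - k2**2) * g1_hat + 2 * k1 * k2 * g2_hat) / k_sq
    kappa_hat[0, 0] = 0.0  # the mean mass sheet is unconstrained by lensing
    return np.real(np.fft.ifft2(kappa_hat))

# Hypothetical shear grid, e.g. averaged galaxy ellipticities per pixel:
rng = np.random.default_rng(42)
kappa_map = kaiser_squires(rng.normal(scale=0.03, size=(128, 128)),
                           rng.normal(scale=0.03, size=(128, 128)))
```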

In 2007, the Cosmic Evolution Survey (COSMOS) used the Hubble Space Telescope to create a map of cosmic filaments in an area of the sky about nine times larger than that occupied by the Moon.

“The COSMOS field was published by Richard Massey and my advisor, Jason Rhodes,” Scognamiglio recounts. “It has a special place in the history of dark-matter mapping, with the first wide-area map of space-based weak lensing mass.”

However, Hubble’s limited resolution meant that many smaller-scale features remained invisible in COSMOS. In a new survey called COSMOS-Web, Scognamiglio’s team harnessed the vastly improved imaging capabilities of the JWST, which offers over twice the resolution of its predecessor.

Sharp and sensitive

“We used JWST’s exceptional sharpness and sensitivity to measure the shapes of many more faint, distant galaxies in the COSMOS-Web field – the central part of the original COSMOS field,” Scognamiglio describes. “This allowed us to push weak gravitational lensing into a new regime, producing a much sharper and more detailed mass map over a contiguous area.”

With these improvements, the team could measure the shapes of 129 galaxies per square arcminute in an area of sky the size of 2.5 full Moons. With thorough mathematical analysis, they could then identify which of these galaxies had been distorted by dark-matter lensing.

“The map revealed fine structure in the cosmic web, including filaments and mass concentrations that were not visible in previous space-based maps,” Scognamiglio says.

Peak star formation

The map allowed the team to identify lensing structures out to distances of roughly 5 billion light-years, corresponding to the universe’s peak era of star formation. Beyond this point, galaxies became too sparse and dim for their shapes to be measured reliably, placing a new limit on the COSMOS-Web map’s resolution.

With this unprecedented resolution, the team could also identify features as small as the dark matter halos encircling small clusters of galaxies, which were invisible in the original COSMOS survey. The astronomers hope their result will set a new, higher-resolution benchmark for future studies using JWST’s observations to probe the elusive nature of dark matter, and its intrinsic connection with the formation and evolution of the universe’s largest structures.

“It also sets the stage for current and future missions like ESA’s Euclid and NASA’s Nancy Grace Roman Space Telescope, which will extend similar dark matter mapping techniques to much larger areas of the sky,” Scognamiglio says.

The observations are described in Nature Astronomy.


Top-cited authors from India and North America share their tips for early-career researchers

Some 20 papers from researchers based in North America have been recognized with a Top Cited Paper award for 2025 from IOP Publishing, which publishes Physics World.

The prize is given to corresponding authors of papers published between 2022 and 2024 in IOP Publishing’s own journals or those of its partners that are in the top 1% of the most-cited papers.

Meanwhile, 29 papers from India have been recognized with a Top Cited Paper award for 2025.

Below, some of the winners of the 2025 top-cited paper award from India and North America outline their tips for early-career researchers who are looking to boost the impact of their work.

Answers have been edited for clarity and brevity.

Shikhar Mittal from Tata Institute of Fundamental Research in Mumbai: Early-career researchers, especially PhD students, often underestimate the importance of presentation and visibility when it comes to their work. While doing high-quality research is, of course, essential, it is equally important to write your paper clearly and professionally. Even the tiniest of details, such as consistent scientific notation, clean figures, correct punctuation and avoiding typos can make a big difference. A paper full of careless errors may not be taken seriously, even if it contains strong scientific results.

Another crucial aspect is visibility. It is important to actively advertise your research by presenting your work at conferences and reaching out to researchers who are working on related topics. If someone misses citing your relevant work, a polite message can often lead to recognition and even collaboration. Being proactive in how you communicate and share your research can significantly improve its impact.

Sandip Mondal from the Indian Institute of Technology Bombay: Don’t try to solve everything at once. Pick a focused, well-motivated question and go deep into it. It’s tempting to jump on “hot topics”, but the work that lasts – and gets cited – is methodologically sound, reproducible and well-characterized. Even incremental advances, if rigorously done, can be very impactful.

Another tip is to work with people who bring complementary skills – whether in theory, device fabrication or characterization. Collaboration isn’t just about co-authors; it’s about deepening the quality of your work. And once your paper is published, your job isn’t done. Promote it: visibility breeds engagement, which leads to impact.

Sarika Jalan from the Indian Institute of Technology Indore: Try to go in-depth into the problem you are working on. Publications alone cannot give visibility; it’s understanding and creativity that will matter in the long run.

Marcia Rieke from the University of Arizona: Write clearly and concisely. I would also suggest being careful with your choice of journal – high-impact-factor journals can be great but may lead to protracted refereeing while other journals are very reputable and sometimes have faster publication rates.

Dan Scolnic from Duke University: At some point there needs to be a transition from thinking about “number of papers” to “number of citations” instead. Graduate students typically talk about writing as many papers as possible – that’s the metric. But at some point scientists start getting judged on the impact of their papers, which is most easily understood with citations. I’m not saying one should e-mail anyone with a paper to cite them, but rather, to think about what one wants to put time in to work on. One should say “I’d like to work on this because I think it can have a big impact”.

P Veeresha from CHRIST University in Bangalore: Build a strong foundation in the fundamentals and always think critically about what society truly needs. Also focus on how your research can be different, novel, and practically useful. It’s important to present your work in a simple and clear way so that it connects with both the academic community and real-world applications.

Parasuraman Swaminathan from the Indian Institute of Technology Madras: Thoroughness is critical for good-quality research. Be bold and try to push the boundaries of your chosen topic.

Arnab Pal from the Institute of Mathematical Sciences in Chennai: Focus on asking meaningful, well-motivated questions rather than just solving technically difficult problems. Write clearly and communicate your ideas with simplicity and purpose. Engage with the research community early through talks, preprints and collaborations. Above all, be patient and consistent; impactful work often takes time to be recognized.

Steven Finkelstein from the University of Texas at Austin: Work on topics that you find interesting and that others find interesting too – and, above all, work with people who you trust.


Twenty-three nominations, yet no Nobel prize: how Chien-Shiung Wu missed out on the top award in physics

The facts seem simple enough. In 1957 Chen Ning Yang and Tsung-Dao Lee won the Nobel Prize for Physics “for their penetrating investigation of the so-called parity laws which has led to important discoveries regarding the elementary particles”. The idea that parity is violated shocked physicists, who had previously assumed that every process in nature remains the same if you reverse all three spatial co-ordinates.

Thanks to the work of Lee and Yang, who were Chinese-American theoretical physicists, it now appeared that this fundamental physics concept wasn’t true (see box below). As Yang once told Physics World columnist and historian of science Robert Crease, the discovery of parity violation was like having the lights switched off and being so confused that you weren’t sure you’d be in the same room when they came back on.

But one controversy has always surrounded the prize.

Lee and Yang published their findings in a paper in October 1956 (Phys. Rev. 104 254), meaning that their Nobel prize was one of the rare occasions that satisfied Alfred Nobel’s will, which says the award should go to work done “during the preceding year”. However, the first verification of parity violation was published in February 1957 (Phys. Rev. 105 1413) by a team of experimental physicists led by Chien-Shiung Wu at Columbia University, where Lee was also based. (Yang was at the Institute for Advanced Study in Princeton at the time.)

Surely Wu, an eminent experimentalist (see box below “Chien-Shiung Wu: a brief history”), deserved a share of the prize for contributing to such a fundamental discovery? In her paper, entitled “Experimental Test of Parity Conservation in Beta Decay”, Wu says she had “inspiring discussions” with Lee and Yang. Was gender bias at play, did her paper miss the deadline, or was she simply never nominated?

The Wu experiment

Wu's parity conservation experimental results
(Courtesy: IOP Publishing)

Parity is a property of elementary particles that says how they behave when reflected in a mirror. If the parity of a particle does not change during reflection, parity is said to be conserved. In 1956 Tsung-Dao Lee and Chen Ning Yang realized that while parity conservation had been confirmed in electromagnetic and strong interactions, there was no compelling evidence that it should also hold in weak interactions, such as radioactive decay. In fact, Lee and Yang thought parity violation could explain the peculiar decay patterns of K mesons, which are governed by the weak interaction.

Chien-Shiung Wu suggested an experiment to check this, based on unstable cobalt-60 nuclei radioactively decaying into nickel-60 while emitting beta rays (electrons). Working at very low temperatures to ensure almost no random thermal motion – and thereby enabling a strong magnetic field to align the cobalt nuclei with their spins parallel – Wu found that far more electrons were emitted in a downward direction than upward.

In the figure, (a) shows how a mirror image of this experiment should also produce more electrons going down than up. But when the experiment was repeated, with the direction of the magnetic field reversed to change the direction of the spin as it would be in the mirror image, Wu and colleagues found that more electrons were produced going upwards (b). The fact that the real-life experiment with reversed spin direction behaved differently from the mirror image proved that parity is violated in the weak interaction of beta decay.
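
Formally, the tell-tale observable here is a pseudoscalar: the correlation between the nuclear spin J (an axial vector, unchanged by mirror reflection) and the electron emission direction p̂ (a polar vector, which flips sign). Since a parity transformation sends p̂ to −p̂ while leaving J alone, a parity-conserving theory would force

\[
\langle \mathbf{J} \cdot \hat{\mathbf{p}} \rangle = 0 ,
\]

so the non-zero up–down asymmetry Wu measured is direct evidence of parity violation.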

Back then, the Nobel statutes stipulated that all details about who had been nominated for a Nobel prize – and why the winners were chosen by the Nobel committee – were to be kept secret forever. Later, in 1974, the rules were changed, allowing the archives to be opened 50 years after an award had been made. So why did the mystery not become clear in 2007, half a century after the 1957 prize?

The reason is that there is a secondary criterion for prizes awarded by the Royal Swedish Academy of Sciences – in physics and chemistry – which is that the archive must stay shut for as long as a laureate is still alive. Lee and Yang were in their early 30s when they were awarded the prize and both went on to live very long lives. Lee died on 24 August 2024 aged 97 and it was not until the death of Yang on 18 October 2025 at 103 that the chance to solve the mystery finally arose.

Chien-Shiung Wu: a brief history

Chien-Shiung Wu by some experimental apparatus
Overlooked for a Nobel Chien-Shiung Wu in 1963 at Columbia University, by which time she had already received the first three of her 23 known nominations for a Nobel prize. (Courtesy: Smithsonian Institution)

Born on 31 May 1912 in Jiangsu province in eastern China, Chien-Shiung Wu graduated with a degree in physics from National Central University in Nanjing. After a few years of research in China, she moved to the US, gaining a PhD at the University of California at Berkeley in 1940. Three years later Wu took up a teaching job at Princeton University in New Jersey – a remarkable feat given that women were not then even allowed to study at Princeton.

During the Second World War, Wu joined the Manhattan atomic-bomb project, working on radiation detectors at Columbia University in New York. After the conflict was over, she started studying beta decay – one of the weak interactions associated with radioactive decay. Wu famously led a crucial experiment studying the beta decay of cobalt-60 nuclei, which confirmed a prediction made in October 1956 by her Columbia colleague Tsung-Dao Lee and Chen Ning Yang in Princeton that parity can be violated in the weak interaction.

Lee and Yang went on to win the 1957 Nobel Prize for Physics but the Nobel Committee was not aware that Lee had in fact consulted Wu in spring 1956 – several months before their paper came out – about potential experiments to prove their prediction. As she was to recall in 1973, studying the decay of cobalt-60 was “a golden opportunity” to test their ideas that she “could not let pass”.

The first woman in the Columbia physics department to get a tenured position and a professorship, Wu remained at Columbia for the rest of her career. Taking an active interest in physics well into retirement, she died on 16 February 1997 at the age of 84. Only now, with the publication of this Physics World article, has it become clear that despite receiving 23 nominations from 18 different physicists in 16 years between 1958 and 1974, she never won a Nobel prize.

Entering the archives

As two physicists based in Stockholm with a keen interest in the history of science, we had already examined the case of Lise Meitner, another female physicist who never won a Nobel prize – in her case for fission. We’d published our findings about Meitner in the December 2023 issue of Fysikaktuellt – the journal of the Swedish Physical Society. So after Yang died, we asked the Center for History of Science at the Royal Swedish Academy of Sciences if we could look at the 1957 archives.

A previous article in Physics World from 2012 by Magdolna Hargittai, who had spoken to Anders Bárány, former secretary of the Nobel Committee for Physics, seemed to suggest that Wu wasn’t awarded the 1957 prize because her Physical Review paper had been published in February of that year. This was after the January cut-off and therefore too late to be considered on that occasion (although the trio could have been awarded a joint prize in a subsequent year).

Mats Larsson and Ramon Wyss at the Center for History of Science at the Royal Swedish Academy of Sciences in Stockholm, Sweden
History in the making Left image: Mats Larsson (centre) and Ramon Wyss (left) at the Center for History of Science at the Royal Swedish Academy of Sciences in Stockholm, Sweden, on 13 November 2025, where they became the first people to view the archive containing information about the nominations for the 1957 Nobel Prize for Physics. They are shown here in the company of centre director Karl Grandin (right). Right image: Larsson and Wyss with their hands on the archives, on which this Physics World article is based. (Courtesy: Anne Miche de Malleray)

After receiving permission to access the archives, we went to the centre on Thursday 13 November 2025, where – with great excitement – we finally got our hands on the thick, black, hard-bound book containing information about the 1957 Nobel prizes in physics and chemistry. About 500 pages long, the book revealed that there were a total of 58 nominations for the 1957 Nobel Prize for Physics – but none at all for Wu that year. As we shall go on to explain, she did, however, receive a total of 23 nominations over the next 16 years.

Lee and Yang, we discovered, received just a single nomination for the 1957 prize, submitted by John Simpson, an experimental physicist at the University of Chicago in the US. His nomination reached the Nobel Committee on 29 January 1957, just before the deadline of 31 January. Simpson clearly had a lot of clout with the committee, which commissioned two reports from its members – both Swedish physicists – based on his recommendation. One was by Oskar Klein on the theoretical aspects of the prize and the other by Erik Hulthén on the experimental side of things.

Report revelations

Klein devotes about half of his four-page report to the Hungarian-born theorist Eugene Wigner, who – we discovered – received seven separate nominations for the 1957 prize. In his opening remarks, Klein notes that Wigner’s work on symmetry principles in physics, first published in 1927, had gained renewed relevance in light of recent experiments by Wu, Leon Lederman and others. According to Klein, these experiments cast a new light on the fundamental symmetry principles of physics.

Klein then discusses three important papers by Wigner and concludes that he, more than any other physicist, established the conceptual background on symmetry principles that enabled Lee and Yang to clarify the possibilities of experimentally testing parity non-conservation. Klein also analyses Lee and Yang’s award-winning Physical Review paper in some detail and briefly mentions subsequent articles of theirs as well as papers by two future Nobel laureates – Lev Landau and Abdus Salam.

Klein does not end his report with an explicit recommendation, but identifies Lee, Yang and Wigner as having made the most important contributions. It is noteworthy that every physicist mentioned in Klein’s report – apart from Wu – eventually went on to receive a Nobel Prize for Physics. Wigner did not have to wait long, winning the 1963 prize together with Maria Goeppert Mayer and Hans Jensen, who had also been nominated in 1957.

As for Hulthén’s experimental report, it acknowledges that Wu’s experiment started after early discussions with Lee and Yang. In fact, Lee had consulted Wu at Columbia on the subject of parity conservation in beta-decay before Lee and Yang’s famous paper was published. According to Wu, she mentioned to Lee that the best way would be to use a polarized cobalt-60 source for testing the assumption of parity violation in beta-decay.

Many physicists were aware of Lee and Yang’s paper, but it was widely seen as highly speculative; Wu, however, realized the opportunity to test its far-reaching consequences. Since she was not a specialist in low-temperature nuclear alignment, she contacted Ernest Ambler at the National Bureau of Standards in Washington DC, who became a co-author on her Physical Review paper of 15 February 1957.

Hulthén describes in detail the severe technical challenges that Wu’s team had to overcome to carry out the experiment. These included achieving an exceptionally low temperature of 0.001 K, placing the detector inside the cryostat, and mitigating perturbations from the crystalline field that weakened the magnetic field’s effectiveness.

Despite these difficulties, the experimentalists managed to obtain a first indication of parity violations, which they presented on 4 January 1957 at a regular lunch that took place at Columbia every Friday. The news of these preliminary results spread like wildfire throughout the physics community, prompting other groups to immediately follow suit.

Hulthén mentions, for example, a measurement of the magnetic moment of the mu (μ) meson (now known as the muon) that Richard Garwin, Leon Lederman and Marcel Weinrich performed at Columbia’s cyclotron almost as soon as Lederman had learned of Wu’s work. He also cites work at the University of Leiden in the Netherlands led by C J Gorter that apparently had started to look into parity violation independently of Wu’s experiment (Physica 23 259).

Wu’s nominations

It is clear from Hulthén’s report that the Nobel Physics Committee was well informed about the experimental work carried out in the wake of Lee and Yang’s paper of October 1956, in particular the groundbreaking results of Wu. However, it is not clear from a subsequent report dated 20 September 1957 (see box below) from the Nobel Committee why Wigner was not awarded a share of the 1957 prize, despite his seven nominations. Nor is there any suggestion of postponing the prize a year in order to include Wu. The report was discussed on 23 October 1957 by members of the “Physics Class” – a group of physicists in the academy who always consider the committee’s recommendations – who unanimously endorsed it.

The Nobel Committee report of 1957

Sheet from a Nobel report written on 20 September 1957 by the Nobel Committee for Physics
(Courtesy: The Nobel Archive, The Royal Swedish Academy of Sciences, Stockholm)

This image shows the final page of a report written on 20 September 1957 by the Nobel Committee for Physics about who should win that year’s Nobel Prize for Physics. Published here for the first time since it was written, the English translation is as follows. “Although much experimental and theoretical work remains to be done to fully clarify the necessary revision of the parity principle, it can already be said that a discovery with extremely significant consequences has emerged as a result of the above-mentioned study by Lee and Yang. In light of the above, the committee proposes that the 1957 Nobel Prize in Physics be awarded jointly to: Dr T D Lee, New York, and Dr C N Yang, Princeton, for their profound investigation of the so-called parity laws, which has led to the discovery of new properties of elementary particles.” The report was signed by Manne Siegbahn (chair), Gudmund Borelius, Erik Hulthén, Oskar Klein, Erik Rudberg and Ivar Waller.

Most noteworthy with regard to this meeting of the Physics Class was that Meitner – who had also been overlooked for the Nobel prize – took part in the discussions. Meitner, who was Austrian by birth, had been elected a foreign member of the Royal Swedish Academy of Sciences in 1945, becoming a “Swedish member” after taking Swedish citizenship in 1951. In the wake of these discussions, the academy decided on 31 October 1957 to award the 1957 Nobel Prize for Physics to Lee and Yang. We do not know, though, if Meitner argued for Wu to be awarded a share of that year’s prize.

A total of 23 nominations to give a Nobel prize to Wu reached the Nobel Committee on 10 separate years and she was nominated by 18 leading physicists, including various Nobel-prize winners and Tsung-Dao Lee himself

Although Wu did not receive any nominations in 1957, she was nominated the following year by the 1955 Nobel laureates in physics, Willis Lamb and Polykarp Kusch. In fact, after Lee and Yang won the prize, nominations to give a Nobel prize to Wu reached the committee on 10 separate years out of the next 16 (see graphic below). She was nominated by a total of 18 leading physicists, including various Nobel-prize winners and Lee himself. In fact, Lee nominated Wu for a Nobel prize on three separate occasions – in 1964, 1971 and 1972.

However, it appears she was never nominated by Yang (at the time of writing, we only have archive information up to 1974). Lee’s support and Yang’s silence could perhaps be traced to the early discussions that Lee had with Wu, which influenced the famous Lee and Yang paper and of which Yang may not have been aware. It is also not clear why Lee and Yang never acknowledged their discussion with Wu about the cobalt-60 experiment proposed in their paper; further research may shed more light on this topic.

Following Wu’s nomination in 1958, the Nobel Committee simply re-examined the investigations already carried out by Klein and Hulthén. The same procedure was repeated in subsequent years, but no new investigations into Wu’s work were carried out until 1971 when she received six nominations – the highest number she got in any one year.

Nominations for Wu from 1958 to 1974

Diagram showing the nominations for Wu from 1958 to 1974
(Courtesy: IOP Publishing)

Our examination of the newly released Nobel archive from 1957 indicates that although Chien-Shiung Wu was not nominated for that year’s prize, which was won by Chen Ning Yang and Tsung-Dao Lee, she did receive a total of 23 nominations over the next 16 years (1974 being the last open archive at the time of writing). Those 23 nominations were made by 18 different physicists, with Lee nominating Wu three times and Herwig Schopper, Emilio Segrè and Ryoyu Utiyama each doing so twice. The peak year for nominations for her was 1971, when she received six. The archives also show that in October 1957 Werner Heisenberg submitted a nomination for Lee (but not Yang); it was registered as a nomination for 1958. The nomination is very short and it is not clear why Heisenberg did not nominate Yang.

That year the committee decided to ask Bengt Nagel, a theorist at KTH Royal Institute of Technology, to investigate the theoretical importance of Wu’s experiments. The nominations she received for the Nobel prize concerned three experiments. In addition to her 1957 paper on parity violation there was a 1949 article she’d written with her Columbia colleague R D Albert verifying Enrico Fermi’s theory of beta decay (Phys. Rev. 75 315) and another she wrote in 1963 with Y K Lee and L W Mo on the conserved vector current, which is a fundamental hypothesis of the Standard Model of particle physics (Phys. Rev. Lett. 10 253).

After pointing out that four of the 1971 nominations came from Wu’s colleagues at Columbia – which to us hints at a kind of lobbying campaign on her behalf – Nagel stated that the three experiments had “without doubt been of great importance for our understanding of the weak interaction”. However, he added, “the experiments, at least the last two, have been conducted to certain aspects as commissioned or direct suggestions of theoreticians”.

In Nagel’s view, Wu’s work therefore differed significantly from, for example, James Cronin and Val Fitch’s famous discovery in 1964 of charge-parity (CP) violation in the decay of K⁰ mesons. They had made their discovery under their own steam, whereas (Nagel suggested) Wu’s work had been carried out only after being suggested by theorists. “I feel somewhat hesitant whether their theoretical importance is a sufficient motivation to render Wu the Nobel prize,” Nagel concluded.

Missed opportunity

The Nobel archives are currently not open beyond 1974, so we don’t know if Wu received any further nominations in the 23 years before her death in 1997. Of course, had Wu not carried out her experimental test of parity violation, it is perfectly possible that another physicist or group of physicists would have done something similar in due course.

Nevertheless, to us it was a missed opportunity not to include Wu as the third prize winner alongside Lee and Yang. Sure, she could not have won the prize in 1957 as she was not nominated for it and her key publication did not appear before the January deadline. But it would simply have been a case of waiting a year and giving Wu and her theoretical colleagues the prize jointly in 1958.

Another possible course of action would have been to single out the theoretical aspects of symmetry violation and award the prize to Lee, Wigner and Yang, as Klein had suggested in his report. Unfortunately, full details of the physics committee’s discussions are not contained in the archives, which means we don’t know if this was a genuine possibility being considered at the time.

But what is clear is that the Nobel committee knew full well the huge importance of Wu’s experimental confirmation of parity violation following the bold theoretical insights of Lee and Yang. Together, their work opened a new chapter in the world of physics. Without Wu’s interest in parity violation and her ingenious experimental knowledge, Lee and Yang would never have won the Nobel prize.


Multi-ion cancer therapy tackles the LET trilemma

Cancer treatments using heavy ions offer several key advantages over conventional proton therapy: a sharper Bragg peak and small lateral scattering for precision tumour targeting, as well as high linear energy transfer (LET). High-LET radiation induces complex DNA damage in cancer cells, enabling effective treatment of even hypoxic, radioresistant tumours. A team at the National Institutes for Quantum Science and Technology (QST) in Japan is now exploring the potential benefits of multi-ion therapy combining beams of carbon, oxygen and neon ions.

“Different ions exhibit distinct physical and biological characteristics,” explains QST researcher Takamitsu Masuda. “Combining them in a way that is tailored to the specific characteristics of a tumour and its environment allows us to enhance tumour control while reducing damage to surrounding healthy tissues.”

The researchers are using multi-ion therapy to increase the dose-averaged LET (LETd) within the tumour, performing a phase I trial at the QST Hospital to evaluate the safety and feasibility of this LETd escalation for head-and-neck cancers. But while high-LETd prescriptions can improve treatment efficacy, increasing LETd can also degrade plan robustness. This so-called “LET trilemma” – a complex trade-off between target dose homogeneity, range robustness and high LETd – is a major challenge in particle therapy optimization.
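
For concreteness, the LETd being escalated is simply the dose-weighted mean of the LET of every contribution to a voxel. A minimal sketch of that bookkeeping (illustrative only, not QST’s treatment-planning system; the numbers are hypothetical):

```python
import numpy as np

def dose_averaged_let(doses: np.ndarray, lets: np.ndarray) -> np.ndarray:
    """Dose-averaged LET per voxel: LETd = sum_i(d_i * L_i) / sum_i(d_i),
    where d_i and L_i are the dose and LET of each contribution i
    (e.g. pencil beam or ion species). Shapes: (n_contributions, n_voxels)."""
    total_dose = doses.sum(axis=0)
    weighted = (doses * lets).sum(axis=0)
    return np.where(total_dose > 0, weighted / np.maximum(total_dose, 1e-12), 0.0)

# Hypothetical voxel: 1 Gy at 50 keV/um from carbon ions plus
# 1 Gy at 130 keV/um from neon ions gives LETd = 90 keV/um,
# the target value prescribed in the trial.
print(dose_averaged_let(np.array([[1.0], [1.0]]),
                        np.array([[50.0], [130.0]])))  # [90.]
```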

In their latest study, reported in Physics in Medicine & Biology, Masuda and colleagues evaluated the impact of range and setup uncertainties on LETd-optimized multi-ion treatment plans, examining strategies that could potentially overcome this LET trilemma.

Robustness evaluation

The team retrospectively analysed the data of six patients who had previously been treated with carbon-ion therapy. Patients 1, 2 and 3 had small, medium and large central tumours, respectively, and adjacent dose-limiting organs-at-risk (OARs); and patients 4, 5 and 6 had small, medium and large peripheral tumours and no dose-limiting OARs.

Multi-ion therapy plans
Multi-ion therapy plans Reference dose and LETd distributions for patients 1, 2 and 3 for multi-ion therapy with a target LETd of 90 keV/µm. The GTV, clinical target volume (CTV) and OARs are shown in cyan, green and magenta, respectively. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/ae387b)

For each case, the researchers first generated baseline carbon-ion therapy plans and then incorporated oxygen- or neon-ion beams and tuned the plans to achieve a target LETd of 90 keV/µm to the gross tumour volume (GTV).

Particle therapy plans can be affected by both range uncertainties and setup variations. To assess the impact of these uncertainties, the researchers recalculated the multi-ion plans to incorporate range deviations of +2.5% (overshoot) and –2.5% (undershoot) and various setup uncertainties, evaluating their combined effects on dose and LETd distributions.

They found that range uncertainty was the main contributor to degraded plan quality. In general, range overshoot increased dose to the target, while undershoot decreased dose. Range uncertainties had the largest effect on small and central tumours: patient 1 exhibited a dose deviation of around ±6% from the reference, while patient 3 showed a deviation of just ±1%. Robust target coverage was maintained in all large or peripheral tumours, but deteriorated in patient 1, leading to an uncertainty band of roughly 11%.

“Wide uncertainty bands indicate a higher risk that the intended dose may not be accurately delivered,” Masuda explains. “In particular, a pronounced lower band for the GTV suggests the potential for cold spots within the tumour, which could compromise local tumour control.”

The team also observed that range undershoot increased LETd and overshoot decreased it, although absolute differences in LETd within the entire target were small. Importantly, all OAR dose constraints were satisfied even in the largest error scenarios, with uncertainty bands comparable to those of conventional carbon-ion treatment plans.

Addressing the LET trilemma

To investigate strategies to improve plan robustness, the researchers created five new plans for patient 1, who had a small, central tumour that was particularly susceptible to uncertainties. They modified the original multi-ion plan (carbon- and oxygen-ion beams delivered at 70° and 290°) in five ways: expanding the target; altering the beam angles to orthogonal or opposing arrangements; increasing the number of irradiation fields to a four-field arrangement; and using oxygen ions for both beam ports (“heavier-ion selection”).

The heavier-ion selection plan proved the most effective in mitigating the effects of range uncertainty, substantially narrowing the dose uncertainty bands compared with the original plan. The team attribute this to the inherently higher LETd in heavier ions, making the 90 keV/µm target easier to achieve with oxygen-ion beams alone. The other plan modifications led to limited improvements.

Dose–volume histograms
Improving robustness Dose–volume histograms for patient 1, for the original multi-ion plan and the heavier-ion selection plan, showing the combined effects of range and setup uncertainties. Solid, dashed and dotted curves represent the reference plans, and upper and lower uncertainty scenarios, respectively. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/ae387b)

These findings suggest that strategically employing heavier ions to enhance plan robustness could help control the balance among range robustness, uniform dose and high LETd – potentially offering a practical strategy to overcome the LET trilemma.

“Clinically, this strategy is particularly well-suited for small, deep-seated tumours and complex, variable sites such as the nasal cavity, where range uncertainties are amplified by depth, steep dose gradients and daily anatomical changes,” says Masuda. “In such cases, the use of heavier ions enables robust dose delivery with high LETd.”

The researchers are now exploring the integration of emerging technologies – such as robust optimization, arc therapy, dual-energy CT, in-beam PET and online adaptation – to minimize uncertainties. “This integration is highly desirable for applying multi-ion therapy to challenging cases such as pancreatic cancer, where uncertainties are inherently large, or hypofractionated treatments, where even a single error can have a significant impact,” Masuda tells Physics World.
