
Twenty-three nominations, yet no Nobel prize: how Chien-Shiung Wu missed out on the top award in physics

The facts seem simple enough. In 1957 Chen Ning Yang and Tsung-Dao Lee won the Nobel Prize for Physics “for their penetrating investigation of the so-called parity laws which has led to important discoveries regarding the elementary particles”. The idea that parity is violated shocked physicists, who had previously assumed that every process in nature remains the same if you reverse all three spatial co-ordinates.

Thanks to the work of Lee and Yang, who were Chinese-American theoretical physicists, it now appeared that this fundamental physics concept wasn’t true (see box below). As Yang once told Physics World columnist and historian of science Robert Crease, the discovery of parity violation was like having the lights switched off and being so confused that you weren’t sure you’d be in the same room when they came back on.

But one controversy has always surrounded the prize.

Lee and Yang published their findings in a paper in October 1956 (Phys. Rev. 104 254), meaning that their Nobel prize was one of the rare occasions that satisfied Alfred Nobel’s will, which says the award should go to work done “during the preceding year”. However, the first verification of parity violation was published in February 1957 (Phys. Rev. 105 1413) by a team of experimental physicists led by Chien-Shiung Wu at Columbia University, where Lee was also based. (Yang was at the Institute for Advanced Study in Princeton at the time.)

The Wu experiment

Wu's parity conservation experimental results
(Courtesy: IOP Publishing)

Parity is a property of elementary particles that says how they behave when reflected in a mirror. If the parity of a particle does not change during reflection, parity is said to be conserved. In 1956 Tsung-Dao Lee and Chen Ning Yang realized that while parity conservation had been confirmed in electromagnetic and strong interactions, there was no compelling evidence that it should also hold in weak interactions, such as radioactive decay. In fact, Lee and Yang thought parity violation could explain the peculiar decay patterns of K mesons, which are governed by the weak interaction.

In 1956 Chien-Shiung Wu suggested an experiment to check this based on unstable cobalt-60 nuclei radioactively decaying into nickel-60 while emitting beta rays (electrons). Working at very low temperatures to ensure almost no random thermal motion – and thereby enabling a strong magnetic field to align the cobalt nuclei with their spins parallel – Wu found that far more electrons were emitted in a downward direction than upward.

In the figure, (a) shows how a mirror image of this experiment should also produce more electrons going down than up. But when the experiment was repeated, with the direction of the magnetic field reversed to change the direction of the spin as it would be in the mirror image, Wu and colleagues found that more electrons were produced going upwards (b). The fact that the real-life experiment with reversed spin direction behaved differently from the mirror image proved that parity is violated in the weak interaction of beta decay.
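For readers who want the quantitative statement behind the figure, the standard angular distribution for beta electrons emitted from polarized nuclei – not spelled out in the article itself, but textbook material – is

```latex
W(\theta) \;\propto\; 1 + A\,P\,\frac{v}{c}\cos\theta
```

where $\theta$ is the angle between the nuclear spin and the electron momentum, $P$ is the degree of nuclear polarization, $v/c$ is the electron speed, and $A$ is the asymmetry coefficient (close to $-1$ for the decay of cobalt-60). Because the quantity $\langle \mathbf{J}\rangle\cdot\mathbf{p}$ changes sign under mirror reflection, parity conservation would force $A = 0$; Wu’s observed excess of electrons emitted opposite to the spin direction is precisely a measurement of a non-zero $A$.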

Surely Wu, an eminent experimentalist (see box below “Chien-Shiung Wu: a brief history”), deserved a share of the prize for contributing to such a fundamental discovery? In her paper, entitled “Experimental Test of Parity Conservation in Beta Decay”, Wu says she had “inspiring discussions” with Lee and Yang. Was gender bias at play, did her paper miss the deadline, or was she simply never nominated?

Back then, the Nobel statutes stipulated that all details about who had been nominated for a Nobel prize – and why the winners were chosen by the Nobel committee – were to be kept secret forever. Later, in 1974, the rules were changed, allowing the archives to be opened 50 years after an award had been made. So why did the mystery not become clear in 2007, half a century after the 1957 prize?

The reason is that there is a secondary criterion for prizes awarded by the Royal Swedish Academy of Sciences – in physics and chemistry – which is that the archive must stay shut for as long as a laureate is still alive. Lee and Yang were in their early 30s when they were awarded the prize and both went on to live very long lives. Lee died on 24 August 2024 aged 97 and it was not until the death of Yang on 18 October 2025 at 103 that the chance to solve the mystery finally arose.

Chien-Shiung Wu: a brief history

Chien-Shiung Wu by some experimental apparatus
Overlooked for a Nobel: Chien-Shiung Wu in 1963 at Columbia University, by which time she had already received the first three of her 23 known nominations for a Nobel prize. (Courtesy: Smithsonian Institution)

Born on 31 May 1912 in Jiangsu province in eastern China, Chien-Shiung Wu graduated with a degree in physics from National Central University in Nanjing. After a few years of research in China, she moved to the US, gaining a PhD at the University of California at Berkeley in 1940. Three years later Wu took up a teaching job at Princeton University in New Jersey – a remarkable feat given that women were not then even allowed to study at Princeton.

During the Second World War, Wu joined the Manhattan atomic-bomb project, working on radiation detectors at Columbia University in New York. After the conflict was over, she started studying beta decay – one of the weak interactions associated with radioactive decay. Wu famously led a crucial experiment studying the beta decay of cobalt-60 nuclei, which confirmed a prediction made in October 1956 by her Columbia colleague Tsung-Dao Lee and Chen Ning Yang in Princeton that parity can be violated in the weak interaction.

Lee and Yang went on to win the 1957 Nobel Prize for Physics but the Nobel Committee was not aware that Lee had in fact consulted Wu in spring 1956 – several months before their paper came out – about potential experiments to prove their prediction. As she was to recall in 1973, studying the decay of cobalt-60 was “a golden opportunity” to test their ideas that she “could not let pass”.

The first woman in the Columbia physics department to get a tenured position and a professorship, Wu remained at Columbia for the rest of her career. Taking an active interest in physics well into retirement, she died on 16 February 1997 at the age of 84. Only now, with the publication of this Physics World article, has it become clear that despite receiving 23 nominations from 18 different physicists in 16 years between 1958 and 1974, she never won a Nobel prize.

Entering the archives

As two physicists based in Stockholm with a keen interest in the history of science, we had already examined the case of Lise Meitner, another female physicist who never won a Nobel prize – in her case for fission. We’d published our findings about Meitner in the December 2023 issue of Fysikaktuellt – the journal of the Swedish Physical Society. So after Yang died, we asked the Center for History of Science at the Royal Swedish Academy of Sciences if we could look at the 1957 archives.

A previous article in Physics World from 2012 by Magdolna Hargittai, who had spoken to Anders Bárány, former secretary of the Nobel Committee for Physics, seemed to suggest that Wu wasn’t awarded the 1957 prize because her Physical Review paper had been published in February of that year. This was after the January cut-off and therefore too late to be considered on that occasion (although the trio could have been awarded a joint prize in a subsequent year).

Mats Larsson and Ramon Wyss at the Center for History of Science at the Royal Swedish Academy of Sciences in Stockholm, Sweden
History in the making: Left image: Mats Larsson (centre) and Ramon Wyss (left) at the Center for History of Science at the Royal Swedish Academy of Sciences in Stockholm, Sweden, on 13 November 2025, where they became the first people to view the archive containing information about the nominations for the 1957 Nobel Prize for Physics. They are shown here in the company of centre director Karl Grandin (right). Right image: Larsson and Wyss with their hands on the archives, on which this Physics World article is based. (Courtesy: Anne Miche de Malleray)

After receiving permission to access the archives, we went to the centre on Thursday 13 November 2025, where – with great excitement – we finally got our hands on the thick, black, hard-bound book containing information about the 1957 Nobel prizes in physics and chemistry. About 500 pages long, the book revealed that there were a total of 58 nominations for the 1957 Nobel Prize for Physics – but none at all for Wu that year. As we shall go on to explain, she did, however, receive a total of 23 nominations over the next 16 years.

Lee and Yang, we discovered, received just a single nomination for the 1957 prize, submitted by John Simpson, an experimental physicist at the University of Chicago in the US. His nomination reached the Nobel Committee on 29 January 1957, just before the deadline of 31 January. Simpson clearly had a lot of clout with the committee, which commissioned two reports from its members – both Swedish physicists – based on his recommendation. One was by Oskar Klein on the theoretical aspects of the prize and the other by Erik Hulthén on the experimental side of things.

Report revelations

Klein devotes about half of his four-page report to the Hungarian-born theorist Eugene Wigner, who – we discovered – received seven separate nominations for the 1957 prize. In his opening remarks, Klein notes that Wigner’s work on symmetry principles in physics, first published in 1927, had gained renewed relevance in light of recent experiments by Wu, Leon Lederman and others. According to Klein, these experiments cast a new light on the fundamental symmetry principles of physics.

Klein then discusses three important papers by Wigner and concludes that he, more than any other physicist, established the conceptual background on symmetry principles that enabled Lee and Yang to clarify the possibilities of experimentally testing parity non-conservation. Klein also analyses Lee and Yang’s award-winning Physical Review paper in some detail and briefly mentions subsequent articles of theirs as well as papers by two future Nobel laureates – Lev Landau and Abdus Salam.

Klein does not end his report with an explicit recommendation, but identifies Lee, Yang and Wigner as having made the most important contributions. It is noteworthy that every physicist mentioned in Klein’s report – apart from Wu – eventually went on to receive a Nobel Prize for Physics. Wigner did not have to wait long, winning the 1963 prize together with Maria Goeppert Mayer and Hans Jensen, who had also been nominated in 1957.

As for Hulthén’s experimental report, it acknowledges that Wu’s experiment started after early discussions with Lee and Yang. In fact, Lee had consulted Wu at Columbia on the subject of parity conservation in beta-decay before Lee and Yang’s famous paper was published. According to Wu, she mentioned to Lee that the best way would be to use a polarized cobalt-60 source for testing the assumption of parity violation in beta-decay.

Many physicists were aware of Lee and Yang’s paper, which was certainly seen as highly speculative, whereas Wu realized the opportunity to test the far-reaching consequences of parity violation. Since she was not a specialist in low-temperature nuclear alignment, she contacted Ernest Ambler at the National Bureau of Standards in Washington DC, who became a co-author on her Physical Review paper of 15 February 1957.

Hulthén describes in detail the severe technical challenges that Wu’s team had to overcome to carry out the experiment. These included achieving an exceptionally low temperature of 0.001 K, placing the detector inside the cryostat, and mitigating perturbations from the crystalline field that weakened the magnetic field’s effectiveness.

Despite these difficulties, the experimentalists managed to obtain a first indication of parity violation, which they presented on 4 January 1957 at the regular Friday lunch at Columbia. The news of these preliminary results spread like wildfire throughout the physics community, prompting other groups to follow suit immediately.

Hulthén mentions, for example, a measurement of the magnetic moment of the mu (μ) meson (now known as the muon) that Richard Garwin, Leon Lederman and Marcel Weinrich performed at Columbia’s cyclotron almost as soon as Lederman had learned of Wu’s work. He also cites work at the University of Leiden in the Netherlands led by C J Gorter that had apparently started to look into parity violation independently of Wu’s experiment (Physica 23 259).

Wu’s nominations

It is clear from Hulthén’s report that the Nobel Physics Committee was well informed about the experimental work carried out in the wake of Lee and Yang’s paper of October 1956, in particular the groundbreaking results of Wu. However, it is not clear from a subsequent report dated 20 September 1957 (see box below) from the Nobel Committee why Wigner was not awarded a share of the 1957 prize, despite his seven nominations. Nor is there any suggestion of postponing the prize a year in order to include Wu. The report was discussed on 23 October 1957 by members of the “Physics Class” – a group of physicists in the academy who always consider the committee’s recommendations – who unanimously endorsed it.

The Nobel Committee report of 1957

Sheet from a Nobel report written on 20 September 1957 by the Nobel Committee for Physics
(Courtesy: The Nobel Archive, The Royal Swedish Academy of Sciences, Stockholm)

This image is the final page of a report written on 20 September 1957 by the Nobel Committee for Physics about who should win that year’s Nobel Prize for Physics. Published here for the first time since it was written, it translates into English as follows. “Although much experimental and theoretical work remains to be done to fully clarify the necessary revision of the parity principle, it can already be said that a discovery with extremely significant consequences has emerged as a result of the above-mentioned study by Lee and Yang. In light of the above, the committee proposes that the 1957 Nobel Prize in Physics be awarded jointly to: Dr T D Lee, New York, and Dr C N Yang, Princeton, for their profound investigation of the so-called parity laws, which has led to the discovery of new properties of elementary particles.” The report was signed by Manne Siegbahn (chair), Gudmund Borelius, Erik Hulthén, Oskar Klein, Erik Rudberg and Ivar Waller.

Most noteworthy with regard to this meeting of the Physics Class was that Meitner – who had also been overlooked for the Nobel prize – took part in the discussions. Meitner, who was Austrian by birth, had been elected a foreign member of the Royal Swedish Academy of Sciences in 1945, becoming a “Swedish member” after taking Swedish citizenship in 1951. In the wake of these discussions, the academy decided on 31 October 1957 to award the 1957 Nobel Prize for Physics to Lee and Yang. We do not know, though, if Meitner argued for Wu to be awarded a share of that year’s prize.


Although Wu did not receive any nominations in 1957, she was nominated the following year by the 1955 Nobel laureates in physics, Willis Lamb and Polykarp Kusch. In fact, after Lee and Yang won the prize, nominations to give a Nobel prize to Wu reached the committee on 10 separate years out of the next 16 (see graphic below). She was nominated by a total of 18 leading physicists, including various Nobel-prize winners and Lee himself. In fact, Lee nominated Wu for a Nobel prize on three separate occasions – in 1964, 1971 and 1972.

However, it appears she was never nominated by Yang (at the time of writing, we only have archive information up to 1974). Lee’s support and Yang’s silence could perhaps be traced to the early discussions that Lee had with Wu – discussions that influenced the famous Lee and Yang paper but of which Yang may not have been aware. It is also not clear why Lee and Yang never acknowledged their discussion with Wu about the cobalt-60 experiment proposed in their paper; further research may shed more light on this topic.

Following Wu’s nomination in 1958, the Nobel Committee simply re-examined the investigations already carried out by Klein and Hulthén. The same procedure was repeated in subsequent years, but no new investigations into Wu’s work were carried out until 1971 when she received six nominations – the highest number she got in any one year.

Nominations for Wu from 1958 to 1974

Diagram showing the nominations for Wu from 1958 to 1974
(Courtesy: IOP Publishing)

Our examination of the newly released Nobel archive from 1957 indicates that although Chien-Shiung Wu was not nominated for that year’s prize, which was won by Chen Ning Yang and Tsung-Dao Lee, she did receive a total of 23 nominations over the next 16 years (1974 being the last open archive at the time of writing). Those 23 nominations were made by 18 different physicists, with Lee nominating Wu three times and Herwig Schopper, Emilio Segrè and Ryoyu Utiyama each doing so twice. The peak year for nominations for her was 1971 when she received six nominations. The archives also show that in October 1957 Werner Heisenberg submitted a nomination for Lee (but not Yang); it was registered as a nomination for 1958. The nomination is very short and it is not clear why Heisenberg did not nominate Yang.

In 1971 the committee decided to ask Bengt Nagel, a theorist at KTH Royal Institute of Technology, to investigate the theoretical importance of Wu’s experiments. The nominations she received for the Nobel prize concerned three experiments. In addition to her 1957 paper on parity violation there was a 1949 article she’d written with her Columbia colleague R D Albert verifying Enrico Fermi’s theory of beta decay (Phys. Rev. 75 315) and another she wrote in 1963 with Y K Lee and L W Mo on the conserved vector current, which is a fundamental hypothesis of the Standard Model of particle physics (Phys. Rev. Lett. 10 253).

After pointing out that four of the 1971 nominations came from Wu’s colleagues at Columbia – which to us hints at a kind of lobbying campaign on her behalf – Nagel stated that the three experiments had “without doubt been of great importance for our understanding of the weak interaction”. However, he added, “the experiments, at least the last two, have been conducted to certain aspects as commissioned or direct suggestions of theoreticians”.

In Nagel’s view, Wu’s work therefore differed significantly from, for example, James Cronin and Val Fitch’s famous discovery in 1964 of charge-parity (CP) violation in the decay of neutral K mesons. They had made their discovery under their own steam, whereas (Nagel suggested) Wu’s work had been carried out only after being suggested by theorists. “I feel somewhat hesitant whether their theoretical importance is a sufficient motivation to render Wu the Nobel prize,” Nagel concluded.

Missed opportunity

The Nobel archives are currently not open beyond 1974, so we don’t know if Wu received any further nominations over the following 23 years until her death in 1997. Of course, had Wu not carried out her experimental test of parity violation, it is perfectly possible that another physicist or group of physicists would have done something similar in due course.

Nevertheless, to us it was a missed opportunity not to include Wu as the third prize winner alongside Lee and Yang. Sure, she could not have won the prize in 1957 as she was not nominated for it and her key publication did not appear before the January deadline. But it would simply have been a case of waiting a year and giving Wu and her theoretical colleagues the prize jointly in 1958.

Another possible course of action would have been to single out the theoretical aspects of symmetry violation and award the prize to Lee, Wigner and Yang, as Klein had suggested in his report. Unfortunately, full details of the physics committee’s discussions are not contained in the archives, which means we don’t know if this was a genuine possibility being considered at the time.

But what is clear is that the Nobel committee knew full well the huge importance of Wu’s experimental confirmation of parity violation following the bold theoretical insights of Lee and Yang. Together, their work opened a new chapter in the world of physics. Without Wu’s interest in parity violation and her ingenious experimental knowledge, Lee and Yang would never have won the Nobel prize.



Multi-ion cancer therapy tackles the LET trilemma

Cancer treatments using heavy ions offer several key advantages over conventional proton therapy: a sharper Bragg peak and small lateral scattering for precision tumour targeting, as well as high linear energy transfer (LET). High-LET radiation induces complex DNA damage in cancer cells, enabling effective treatment of even hypoxic, radioresistant tumours. A team at the National Institutes for Quantum Science and Technology (QST) in Japan is now exploring the potential benefits of multi-ion therapy combining beams of carbon, oxygen and neon ions.

“Different ions exhibit distinct physical and biological characteristics,” explains QST researcher Takamitsu Masuda. “Combining them in a way that is tailored to the specific characteristics of a tumour and its environment allows us to enhance tumour control while reducing damage to surrounding healthy tissues.”

The researchers are using multi-ion therapy to increase the dose-averaged LET (LETd) within the tumour, performing a phase I trial at the QST Hospital to evaluate the safety and feasibility of this LETd escalation for head-and-neck cancers. But while high LETd prescriptions can improve treatment efficacy, increasing LETd can also deteriorate plan robustness. This so-called “LET trilemma” – a complex trade-off between target dose homogeneity, range robustness and high LETd – is a major challenge in particle therapy optimization.
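The quantity being escalated here has a simple definition: the LET of every beam contribution to a voxel, weighted by the dose that contribution deposits. A minimal sketch of that weighting – the function name and the numbers are our own illustration, not taken from the paper:

```python
def dose_averaged_let(contributions):
    """Dose-averaged LET (LETd) in one voxel.

    contributions: iterable of (dose, let) pairs, one per beam
    contribution, giving the dose (Gy) it deposits and its LET
    (keV/um) in that voxel.
    """
    contributions = list(contributions)
    total_dose = sum(d for d, _ in contributions)
    if total_dose == 0:
        return 0.0
    return sum(d * let for d, let in contributions) / total_dose

# Mixing an equal-dose, higher-LET oxygen contribution with a
# carbon contribution raises the voxel's LETd (made-up numbers):
voxel = [(1.0, 50.0), (1.0, 120.0)]
print(dose_averaged_let(voxel))  # -> 85.0
```

This is why adding oxygen or neon beams to a carbon plan lets the optimizer push LETd in the tumour upwards without changing the physical dose.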

In their latest study, reported in Physics in Medicine & Biology, Masuda and colleagues evaluated the impact of range and setup uncertainties on LETd-optimized multi-ion treatment plans, examining strategies that could potentially overcome this LET trilemma.

Robustness evaluation

The team retrospectively analysed the data of six patients who had previously been treated with carbon-ion therapy. Patients 1, 2 and 3 had small, medium and large central tumours, respectively, and adjacent dose-limiting organs-at-risk (OARs); and patients 4, 5 and 6 had small, medium and large peripheral tumours and no dose-limiting OARs.

Multi-ion therapy plans
Multi-ion therapy plans: Reference dose and LETd distributions for patients 1, 2 and 3 for multi-ion therapy with a target LETd of 90 keV/µm. The GTV, clinical target volume (CTV) and OARs are shown in cyan, green and magenta, respectively. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/ae387b)

For each case, the researchers first generated baseline carbon-ion therapy plans and then incorporated oxygen- or neon-ion beams and tuned the plans to achieve a target LETd of 90 keV/µm to the gross tumour volume (GTV).

Particle therapy plans can be affected by both range uncertainties and setup variations. To assess the impact of these uncertainties, the researchers recalculated the multi-ion plans to incorporate range deviations of +2.5% (overshoot) and –2.5% (undershoot) and various setup uncertainties, evaluating their combined effects on dose and LETd distributions.
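A scenario-based evaluation of this kind amounts to recomputing a plan-quality metric under each error scenario and reporting the envelope of the results. The toy sketch below illustrates only that idea – the function names, the "metric" and its assumed 4%-dose-per-1%-range sensitivity are invented for illustration, not taken from the study, and a real evaluation recomputes the full 3D dose and LETd distributions:

```python
def uncertainty_band(metric, scenarios):
    """Evaluate a plan-quality metric under each error scenario
    and return the (lowest, highest) values: the uncertainty band."""
    values = [metric(s) for s in scenarios]
    return min(values), max(values)

def target_dose(range_scale):
    """Toy target-dose model versus range-scaling error (1.0 = nominal).
    The sensitivity is an assumption made for this sketch."""
    sensitivity = 4.0  # assumed: 4% dose change per 1% range error
    return 100.0 * (1.0 + sensitivity * (range_scale - 1.0))

scenarios = [0.975, 1.0, 1.025]  # -2.5% undershoot, nominal, +2.5% overshoot
lo, hi = uncertainty_band(target_dose, scenarios)
print(f"band: [{lo:.1f}, {hi:.1f}]")  # band: [90.0, 110.0]
```

A wide band like this toy one signals exactly the risk Masuda describes below: the delivered dose may sit anywhere inside the envelope, including cold spots at its lower edge.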

They found that range uncertainty was the main contributor to degraded plan quality. In general, range overshoot increased dose to the target, while undershoot decreased it. Range uncertainties had the largest effect on small and central tumours: patient 1 exhibited a dose deviation of around ±6% from the reference, while patient 3 showed a deviation of just ±1%. Robust target coverage was maintained in all large or peripheral tumours, but deteriorated in patient 1, leading to an uncertainty band of roughly 11%.

“Wide uncertainty bands indicate a higher risk that the intended dose may not be accurately delivered,” Masuda explains. “In particular, a pronounced lower band for the GTV suggests the potential for cold spots within the tumour, which could compromise local tumour control.”

The team also observed that range undershoot increased LETd and overshoot decreased it, although absolute differences in LETd within the entire target were small. Importantly, all OAR dose constraints were satisfied even in the largest error scenarios, with uncertainty bands comparable to those of conventional carbon-ion treatment plans.

Addressing the LET trilemma

To investigate strategies to improve plan robustness, the researchers created five new plans for patient 1, who had a small, central tumour that was particularly susceptible to uncertainties. They modified the original multi-ion plan (carbon- and oxygen-ion beams delivered at 70° and 290°) in five ways: expanding the target; altering the beam angles to orthogonal or opposing arrangements; increasing the number of irradiation fields to a four-field arrangement; and using oxygen ions for both beam ports (“heavier-ion selection”).

The heavier-ion selection plan proved the most effective in mitigating the effects of range uncertainty, substantially narrowing the dose uncertainty bands compared with the original plan. The team attribute this to the inherently higher LETd in heavier ions, making the 90 keV/µm target easier to achieve with oxygen-ion beams alone. The other plan modifications led to limited improvements.

Dose–volume histograms
Improving robustness: Dose–volume histograms for patient 1, for the original multi-ion plan and the heavier-ion selection plan, showing the combined effects of range and setup uncertainties. Solid, dashed and dotted curves represent the reference plans, and upper and lower uncertainty scenarios, respectively. (Courtesy: Phys. Med. Biol. 10.1088/1361-6560/ae387b)

These findings suggest that strategically employing heavier ions to enhance plan robustness could help control the balance among range robustness, uniform dose and high LETd – potentially offering a practical strategy to overcome the LET trilemma.

“Clinically, this strategy is particularly well-suited for small, deep-seated tumours and complex, variable sites such as the nasal cavity, where range uncertainties are amplified by depth, steep dose gradients and daily anatomical changes,” says Masuda. “In such cases, the use of heavier ions enables robust dose delivery with high LETd.”

The researchers are now exploring the integration of emerging technologies – such as robust optimization, arc therapy, dual-energy CT, in-beam PET and online adaptation – to minimize uncertainties. “This integration is highly desirable for applying multi-ion therapy to challenging cases such as pancreatic cancer, where uncertainties are inherently large, or hypofractionated treatments, where even a single error can have a significant impact,” Masuda tells Physics World.



New project takes aim at theory-experiment gap in materials data

Condensed-matter physics and materials science have a silo problem. Although researchers in these fields have access to vast amounts of data – from experimental records of crystal structures and conditions for synthesizing specific materials to theoretical calculations of electron band structures and topological properties – these datasets are often fragmented. Integrating experimental and theoretical data is a particularly significant challenge.

Researchers at the Beijing National Laboratory for Condensed Matter Physics and the Institute of Physics (IOP) of the Chinese Academy of Sciences (CAS) recently decided to address this challenge. Their new platform, MaterialsGalaxy, unifies data from experiment, computation and scientific literature, making it easier for scientists to identify previously hidden relationships between a material’s structure and its properties. In the longer term, their goal is to establish a “closed loop” in which experimental results validate theory and theoretical calculations guide experiments, accelerating the discovery of new materials by leveraging modern artificial intelligence (AI) techniques.

Physics World spoke to team co-leader Quansheng Wu to learn more about this new tool and how it can benefit the materials research community.

How does MaterialsGalaxy work?

The platform works by taking the atomic structure of materials and mathematically mapping it into a vast, multidimensional vector space. To do this, every material – regardless of whether its structure is known from experiment, from a theoretical calculation or from simulation – must first be converted into a unique structural vector that acts like a “fingerprint” for the material.

Then, when a MaterialsGalaxy user focuses on a material, the system automatically identifies its nearest neighbours in this vector space. This allows users to align heterogeneous data – for example, linking a synthesized crystal in one database with its calculated topological properties in another – even when different data sources define the material slightly differently.

The vector-based approach also enables the system to recommend “nearest neighbour” materials (analogs) to fill knowledge gaps, effectively guiding researchers from known data into unexplored territories. It does this by performing real-time vector similarity searches to dynamically link relevant experimental records, theoretical calculations and literature information. The result is a comprehensive profile for the material.
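The platform's actual descriptors and search infrastructure are not described in detail here, but the "nearest neighbour in fingerprint space" idea can be sketched with a cosine-similarity search. Everything below – the function, the four-component vectors and the database entries – is invented for illustration; real structural fingerprints are far higher-dimensional:

```python
import numpy as np

def nearest_neighbours(query_vec, fingerprints, k=3):
    """Rank materials by cosine similarity of their structural
    'fingerprint' vectors to a query fingerprint."""
    names = list(fingerprints)
    mat = np.array([fingerprints[n] for n in names], dtype=float)
    q = np.asarray(query_vec, dtype=float)
    # Cosine similarity of the query against every stored fingerprint
    sims = (mat @ q) / (np.linalg.norm(mat, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)[:k]  # indices of the k most similar entries
    return [(names[i], float(sims[i])) for i in order]

# Toy fingerprint database (hypothetical values)
db = {
    "NaCl": [1.0, 0.0, 0.2, 0.1],
    "KCl":  [0.9, 0.1, 0.2, 0.1],
    "Si":   [0.0, 1.0, 0.8, 0.3],
}
print(nearest_neighbours([1.0, 0.05, 0.2, 0.1], db, k=2))
```

Because the ranking tolerates small differences between vectors, a slightly different definition of "the same" crystal in two databases still lands next to its counterpart, which is what makes the cross-database alignment work.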

Where does data for MaterialsGalaxy come from?

We aggregated data from three primary channels: public databases; our institute’s own high-quality internal experimental records (known as the MatElab platform); and the scientific literature. All data underwent rigorous standardization using tools such as the pymatgen (Python Materials Genomics) materials analysis code and the spglib crystal structure library to ensure consistent definitions for crystal structures and physical properties.

Who were your collaborators on this project?

This project is a multi-disciplinary effort involving a close-knit collaboration among several research groups at the IOP, CAS and other leading institutions. My colleague Hongming Weng and I supervised the core development and design under the strategic guidance of Zhong Fang, while Tiannian Zhu (the lead author of our Chinese Physics B paper about MaterialsGalaxy) led the development of the platform’s architecture and core algorithms, as well as its technical implementation.

We enhanced the platform’s capabilities by integrating several previously published AI-driven tools developed by other team members. For example, Caiyuan Ye contributed the Con-CDVAE model for advanced crystal structure generation, while Jiaxuan Liu contributed VASPilot, which automates and streamlines first-principles calculations. Meanwhile, Qi Li contributed PXRDGen, a tool for simulating and generating powder X-ray diffraction patterns.

Finally, much of the richness of MaterialsGalaxy stems from the high-quality data it contains. This came from numerous collaborators, including Weng (who contributed the comprehensive topological materials database, Materiae), Youguo Shi (single-crystal growth), Shifeng Jin (crystal structure and diffraction), Jinbo Pan (layered materials), Qingbo Yan (2D ferroelectric materials), Yong Xu (nonlinear optical materials), and Xingqiu Chen (topological phonons). My own contribution was a library of AI-generated crystal structures produced by the Con-CDVAE model.

What does MaterialsGalaxy enable scientists to do that they couldn’t do before?

One major benefit is that it prevents researchers from becoming stalled when data for a specific material is missing. By leveraging the tool’s “structural analogs” feature, they can look to the properties or growth paths of similar materials for insights – a capability not available in traditional, isolated databases.
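The idea behind a "structural analogs" lookup can be sketched as a nearest-neighbour search over material fingerprints (a toy illustration under assumed descriptors; the platform's real similarity measure is not described in the interview):

```python
import math

# Toy sketch: each material is reduced to a small numeric fingerprint and
# neighbours are ranked by cosine similarity. The fingerprints here
# (lattice constant, density, coordination number) are illustrative only.

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def structural_analogs(query, library, k=3):
    """Return the k entries most similar to `query` as (name, similarity)."""
    ranked = sorted(
        ((name, cosine(query, vec)) for name, vec in library.items()),
        key=lambda item: item[1],
        reverse=True,
    )
    return ranked[:k]

library = {
    "Si":   (5.43, 2.33, 4),
    "Ge":   (5.66, 5.32, 4),
    "NaCl": (5.64, 2.17, 6),
}
# A hypothetical new material with no measured properties of its own:
print(structural_analogs((5.45, 2.40, 4), library, k=2))
```

Given a material with missing data, the researcher would then consult the properties or growth recipes of the top-ranked analogs, exactly the workflow described above.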

We also hope that MaterialsGalaxy will offer a bridge between theory and experiment. Traditionally, experimentalists tend to consult the Inorganic Crystal Structure Database while theorists check the Materials Project. Now, they can view the entire lifecycle of a material – from how to grow a single crystal (experiment) to its topological invariants (theory) – on a single platform.

Beyond querying known materials, MaterialsGalaxy also allows researchers to use integrated generative AI models to create new structures. These can be immediately compared against the known database to assess synthesis feasibility and potential performance through the “vertical comparison” workflow.


What do you plan to do next?

We’re focusing on enhancing the depth and breadth of the tool’s data fusion. For example, we plan to develop representations based on graph neural networks (GNNs) to better handle experimental data that may contain defects or disorder, thereby improving matching accuracy.

We’re also interested in moving beyond crystal structure by introducing multi-modal anchors such as electronic band structures, X-ray diffraction (XRD) patterns and spectroscopic data. To do this, we plan to use techniques derived from contrastive language–image pretraining (CLIP) to enable cross-modal retrieval – for example, searching for theoretical band data by uploading an experimental XRD pattern.
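A CLIP-style approach means training two encoders that map different modalities into one shared embedding space, after which cross-modal retrieval is just a nearest-neighbour search in that space. A heavily hedged sketch (the "encoders" below are stand-ins that merely normalize a vector; in practice each would be a trained neural network):

```python
import math

# Stand-in encoders: in a real CLIP-style system these would be trained
# networks producing embeddings in a shared space. Here they only
# L2-normalize their input, which is enough to show the retrieval step.

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

def encode_xrd(pattern):      # stand-in for a trained XRD encoder
    return normalize(pattern)

def encode_bands(band_data):  # stand-in for a trained band-structure encoder
    return normalize(band_data)

def retrieve(query_embedding, index):
    """Return the index entry whose embedding best matches the query."""
    return max(index, key=lambda name: sum(
        q * e for q, e in zip(query_embedding, index[name])))

# A tiny index of band-structure embeddings, keyed by material name.
index = {
    "mat_A": encode_bands((0.9, 0.1, 0.0)),
    "mat_B": encode_bands((0.1, 0.9, 0.2)),
}
query = encode_xrd((0.8, 0.2, 0.1))  # the uploaded experimental pattern
print(retrieve(query, index))  # mat_A
```

The hard part, of course, is training the encoders so that an XRD pattern and the band structure of the same material land near each other in the shared space; the retrieval step itself stays this simple.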

Separately, we want to continue to expand our experimental data coverage, specifically targeting synthesis recipes and “failed” experimental records, which are crucial for training the next generation of “AI-enabled” scientists. Ultimately, we plan to connect an even wider array of databases, establishing robust links between them to realize a true Materials Galaxy of interconnected knowledge.

The post New project takes aim at theory-experiment gap in materials data appeared first on Physics World.


The pros and cons of patenting

For any company or business, it’s important to recognize and protect intellectual property (IP). In the case of novel inventions, which can include machines, processes and even medicines, a patent offers IP protection and lets firms control how those inventions are used. Patents, which in most countries can be granted for up to 20 years, give the owner exclusive rights so that others can’t directly copy the creation. A patent essentially prevents others from making, using or selling your invention.

But there are more reasons for holding a patent than IP protection alone. In particular, patents go some way to protecting the investment that may have been necessary to generate the IP in the first place, such as the cost of R&D facilities, materials, labour and expertise. Those factors need to be considered when you’re deciding if patenting is the right approach or not.

Patents are tangible assets that can be sold to other businesses or licensed for royalties to provide your company with regular income

Patents are in effect a form of currency. Counting as tangible assets that add to the overall value of a company, they can be sold to other businesses or licensed for royalties to provide regular income. Some companies, in fact, build up or acquire significant patent portfolios, which can be used for bargaining with competitors, potentially leading to cross-licensing agreements where both parties agree to use each other’s technology.

Patents also say something about the competitive edge of a company, by demonstrating technical expertise and market position through the control of a specific technology. Essentially, patents give credibility to a company’s claims of its technical know-how: a patent shows investors that a firm has a unique, protected asset, making the business more appealing and attractive to further investment.

However, it’s not all one-way traffic and there are obligations on the part of the patentee. Firstly, a patent holder has to reveal to the world exactly how their invention works. Governments favour this kind of public disclosure as it encourages broader participation in innovation. The downside is that whilst your competitors cannot directly copy you, they can enhance and improve upon your invention, provided those changes aren’t covered by the original patent.

It’s also worth bearing in mind that a patent holder is responsible for patent enforcement and any ensuing litigation; a patent office will not do this for you. So you’ll have to monitor what your competitors are up to and decide on what course of action to take if you suspect your patent’s been infringed. Trouble is, it can sometimes be hard to prove or disprove an infringement – and getting the lawyers in can be expensive, even if you win.

Money talks

Probably the biggest consideration of all is the cost and time involved in making a patent application. Filing a patent requires a rigorous understanding of “prior art” – the existing body of relevant knowledge on which novelty is judged. You’ll therefore need to do a lot of work finding out about relevant established patents, any published research and journal articles, along with products or processes publicly disclosed before the patent’s filing date.

Before it can be filed with a patent office, a patent needs to be written up as a legal description, which involves plenty of legwork: an abstract, background, detailed specifications, drawings and the claims of the invention. Once filed, an examiner with expertise in the relevant technical field will be assigned to assess the application; they must be satisfied that the invention is both unique and “non-obvious” before the patent is granted.

Even when the invention is judged to be technically novel, to count as non-obvious it must also involve an “inventive step” that would not have been obvious to a person with “ordinary skill” in that technical field at the time of filing. The assessment phase can involve significant to-ing and fro-ing between the examiner and the applicant to determine exactly what is patentable, and if the examiner is not satisfied, the application will be refused.

Patents are only ever granted in a particular country or region, such as Europe, and the application process has to be repeated for each new place (although the information required is usually pretty similar). Translations may be required for some countries, there are fees for each application and, even if a patent is granted, you have to pay an additional annual bill to maintain the patent (which in the UK rises year on year).

Patents can take years to process, which is why many companies pay specialized firms to support their applications

Patent applications, in other words, can be expensive and can take years to process. That’s why many companies pay specialized firms to support their patent applications. Those firms employ patent attorneys – legal experts with a technical background who help inventors and companies manage their IP rights by drafting patent applications, navigating patent office procedures and advising on IP strategy. Attorneys can also represent their clients in disputes or licensing deals, thereby acting as a crucial bridge between science/engineering and law.

Perspiration and aspiration

It’s impossible to write about patents without mentioning the impact that Thomas Edison had as an inventor. During the 20th century, he became the world’s most prolific inventor with a staggering 1093 US patents granted in his lifetime. This monumental achievement remained unsurpassed until 2003, when it was overtaken by the Japanese inventor Shunpei Yamazaki and, more recently, by the Australian “patent titan” Kia Silverbrook in 2008.

Edison clearly saw there was a lot of value in patents, but how did he achieve so much? His approach was grounded in systematic problem solving, which he accomplished through his Menlo Park lab in New Jersey. Dedicated to technological development and invention, it was effectively the world’s first corporate R&D lab. And whilst Edison’s name appeared on all the patents, many were primarily the work of his staff, meaning he was effectively credited with his employees’ inventions.

I have a love–hate relationship with patents or at least the process of obtaining them

I will be honest; I have a love–hate relationship with patents or at least the process of obtaining them. As a scientist or engineer, it’s easy to think all the hard work is getting an invention over the line, slogging your guts out in the lab. But applying for a patent can be just as expensive and time-consuming, which is why you need to be clear on what and when to patent. Even Edison grew tired of being hailed a genius, stating that his success was “1% inspiration and 99% perspiration”.

Still, without the sweat of patents, your success might be all but 99% aspiration.

The post The pros and cons of patenting appeared first on Physics World.


Starlink and the unravelling of digital sovereignty

Wind sweeps dust across southeastern Iran in January 2025. Credit: NASA Earth Observatory image by Michala Garrison

In January 2026, Iranian authorities shut down landline and mobile telecommunications infrastructure in the country to clamp down on coordinated protests. Starlink terminals, which were discreetly mounted on rooftops, helped Iranian protesters bypass this internet blackout. The role played by Starlink in the recent Iranian protests challenges the notion of digital sovereignty and promotes corporate […]

The post Starlink and the unravelling of digital sovereignty appeared first on SpaceNews.


Practical impurity analysis for biogas producers

Biogas is a renewable energy source formed when bacteria break down organic materials such as food waste, plant matter, and landfill waste in an oxygen‑free (anaerobic) process. It contains methane and carbon dioxide, along with trace amounts of impurities. Because of its high methane content, biogas can be used to generate electricity and heat, or to power vehicles. It can also be upgraded to almost pure methane, known as biomethane, which can directly replace fossil natural gas.

Strict rules apply to the amount of impurities allowed in biogas and biomethane, as these contaminants can damage engines, turbines, and catalysts during upgrading or combustion. EN 16723 is the European standard that sets maximum allowable levels of siloxanes and sulfur‑containing compounds for biomethane injected into the natural gas grid or used as vehicle fuel. These limits are extremely low, meaning highly sensitive analytical techniques are required. However, most biogas plants do not have the advanced equipment needed to measure these impurities accurately.
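Operationally, compliance against a standard like EN 16723 reduces to comparing measured concentrations with configured maximum limits. A minimal sketch of that check (the numeric limits below are placeholders, not the actual EN 16723 values; consult the standard for the real thresholds and units):

```python
# PLACEHOLDER limits -- illustrative only, NOT the EN 16723 values.
PLACEHOLDER_LIMITS_MG_PER_M3 = {
    "total_siloxanes": 0.3,   # placeholder number
    "total_sulfur": 20.0,     # placeholder number
}

def check_compliance(measurements, limits):
    """Return the species whose measured value exceeds its configured limit."""
    return [species for species, value in measurements.items()
            if species in limits and value > limits[species]]

# A hypothetical sample: siloxanes over the placeholder limit, sulfur under.
sample = {"total_siloxanes": 0.5, "total_sulfur": 4.0}
print(check_compliance(sample, PLACEHOLDER_LIMITS_MG_PER_M3))
# ['total_siloxanes']
```

The analytical challenge the article describes is not this comparison but producing measurements sensitive and unbiased enough for it to be meaningful at such low concentrations.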

Researchers from the Paul Scherrer Institute, Switzerland: Julian Indlekofer (left) and Ayush Agarwal (right), with the Liquid Quench Sampling System
Researchers from the Paul Scherrer Institute, Switzerland: Julian Indlekofer (left) and Ayush Agarwal (right), with the Liquid Quench Sampling System (Courtesy: Markus Fischer/Paul Scherrer Institute PSI)

The researchers developed a new, simpler method to sample and analyse biogas using GC‑ICP‑MS. Gas chromatography (GC) separates chemical compounds in a gas mixture based on how quickly they travel through a column. Inductively Coupled Plasma Mass Spectrometry (ICP‑MS) then detects the elements within those compounds at very low concentrations. Crucially, this combined method can measure both siloxanes and sulfur compounds simultaneously. It avoids matrix effects that can limit other detectors and cause biased or ambiguous results. It also achieves the very low detection limits required by EN 16723.

The sampling approach and centralized measurement enable biogas plants to meet regulatory standards using an efficient, less complex, and more cost‑effective method with fewer errors. Overall, this research provides a practical, high‑accuracy tool that makes reliable biogas impurity monitoring accessible to plants of all sizes, strengthening biomethane quality, protecting infrastructure, and accelerating the transition to cleaner energy systems.

Read the full article

Sampling to analysis: simultaneous quantification of siloxanes and sulfur compounds in biogas for cleaner energy

Ayush Agarwal et al 2026 Prog. Energy 8 015001

Do you want to learn more about this topic?

Household biogas technology in the cold climate of low-income countries: a review of sustainable technologies for accelerating biogas generation Sunil Prasad Lohani et al. (2024)

The post Practical impurity analysis for biogas producers appeared first on Physics World.
