
Theoretical physicist Michael Berry wins 2025 Isaac Newton Medal and Prize

13 October 2025 at 12:45

The theoretical physicist Michael Berry from the University of Bristol has won the 2025 Isaac Newton Medal and Prize for his “profound contributions across mathematical and theoretical physics in a career spanning over 60 years”. Presented by the Institute of Physics (IOP), which publishes Physics World, the international award is given annually for “world-leading contributions to physics by an individual of any nationality”.

Born in 1941 in Surrey, UK, Berry earned a BSc in physics from the University of Exeter in 1962 and a PhD from the University of St Andrews in 1965. He then moved to Bristol, where he has remained for the rest of his career.

Berry is best known for his work in the 1980s in which he showed that, under certain conditions, quantum systems can acquire what is known as a geometric phase. He was studying quantum systems in which the Hamiltonian describing the system is slowly changed so that it eventually returns to its initial form.

Berry showed that the adiabatic theorem widely used to describe such systems was incomplete and that a system acquires a phase factor that depends on the path followed, but not on the rate at which the Hamiltonian is changed. This geometric phase factor is now known as the Berry phase.
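
For readers who want the textbook expression (our addition, not a quote from the article): for a state |n(R)⟩ carried adiabatically around a closed circuit C in the space of parameters R, the geometric phase is usually written as

\gamma_n(C) = i \oint_C \langle n(\mathbf{R}) \,|\, \nabla_{\mathbf{R}}\, n(\mathbf{R}) \rangle \cdot d\mathbf{R}

which depends only on the circuit traced out in parameter space, not on how quickly it is traversed.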

Over his career, Berry has written some 500 papers on a wide range of topics. In physics, Berry’s ideas have applications in condensed matter, quantum information and high-energy physics, as well as optics, nonlinear dynamics, and atomic and molecular physics. In mathematics, meanwhile, his work forms the basis for research in analysis, geometry and number theory.

Berry told Physics World that the award is “unexpected recognition for six decades of obsessive scribbling…creating physics by seeking ‘claritons’ – elementary particles of sudden understanding – and evading ‘anticlaritons’ that annihilate them” as well as “getting insights into nature’s physics” such as studying tidal bores, tsunamis, rainbows and “polarised light in the blue sky”.

Over the years, Berry has won many other honours, including the IOP’s Dirac Medal and the Royal Medal from the Royal Society, both awarded in 1990. He was also given the Wolf Prize in Physics in 1998 and the 2014 Lorentz Medal from the Royal Netherlands Academy of Arts and Sciences. In 1996 he received a knighthood for his services to science.

Berry will also be a speaker at the IOP’s International Year of Quantum celebrations on 4 November.

Celebrating success

Berry’s latest honour forms part of the IOP’s wider 2025 awards, which recognize everyone from early-career scientists and teachers to technicians and subject specialists. Other winners include Julia Yeomans, who receives the Dirac Medal and Prize for her work highlighting the relevance of active physics to living matter.

Lok Yiu Wu, meanwhile, receives the Jocelyn Bell Burnell Medal and Prize for her work on the development of a novel magnetic radical filter device, and for her ongoing support of women and underrepresented groups in physics.

In a statement, IOP president Michele Dougherty congratulated all the winners. “It is becoming more obvious that the opportunities generated by a career in physics are many and varied – and the potential our science has to transform our society and economy in the modern world is huge,” says Dougherty. “I hope our winners appreciate they are playing an important role in this community, and know how proud we are to celebrate their successes.”

The full list of 2025 award winners is available here.


Phase shift in optical cavities could detect low-frequency gravitational waves

13 October 2025 at 10:00

A network of optical cavities could be used to detect gravitational waves (GWs) in an unexplored range of frequencies, according to researchers in the UK. Using technology already within reach, the team believes that astronomers could soon be searching for ripples in space–time in the milli-Hz frequency band, spanning roughly 10⁻⁵ Hz to 1 Hz.

GWs were first observed a decade ago and since then the LIGO–Virgo–KAGRA detectors have spotted GWs from hundreds of merging black holes and neutron stars. These detectors work in the 10 Hz–30 kHz range. Researchers have also had some success at observing a GW background at nanohertz frequencies using pulsar timing arrays.

However, GWs have yet to be detected in the milli-Hz band, which should include signals from binary systems of white dwarfs, neutron stars, and stellar-mass black holes. Many of these signals would emanate from the Milky Way.

Several projects are now in the works to explore these frequencies, including the space-based interferometers LISA, Taiji and TianQin, as well as satellite-borne networks of ultra-precise optical clocks. However, these projects are still some years away.

Multidisciplinary effort

Joining these efforts was a collaboration called QSNET, part of the UK’s Quantum Technology for Fundamental Physics (QTFP) programme. “The QSNET project was a network of clocks for measuring the stability of fundamental constants,” explains Giovanni Barontini at the University of Birmingham. “This programme brought together physics communities that normally don’t interact, such as quantum physicists, technologists, high-energy physicists, and astrophysicists.”

QTFP ended this year, but not before Barontini and colleagues had made important strides in demonstrating how milli-Hz GWs could be detected using optical cavities.

Inside an ultrastable optical cavity, light at specific resonant frequencies bounces constantly between a pair of opposing mirrors. When this resonant light is produced by a specific atomic transition, the frequency of the light in the cavity is very precise and can act as the ticking of an extremely stable clock.
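
As a rough illustration of why such a cavity behaves like a clock, the sketch below evaluates the standing-wave condition that pins the light’s frequency to the mirror spacing. The cavity length and laser wavelength are assumptions chosen for illustration, not values taken from the article.

# Illustrative numbers only: the resonance condition of a two-mirror optical cavity.
c = 299_792_458.0        # speed of light, m/s
L = 0.10                 # assumed cavity length, m
wavelength = 1.064e-6    # assumed laser wavelength, m

q = round(2 * L / wavelength)   # integer number of half-wavelengths that fit between the mirrors
nu_q = q * c / (2 * L)          # resonant optical frequency of that mode (~282 THz here)
fsr = c / (2 * L)               # spacing between neighbouring resonant modes (~1.5 GHz here)
print(f"mode number q = {q}, resonance = {nu_q / 1e12:.1f} THz, mode spacing = {fsr / 1e9:.2f} GHz")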

“Ultrastable cavities are a main component of modern optical atomic clocks,” Barontini explains. “We demonstrated that they have reached sufficient sensitivities to be used as ‘mini-LIGOs’ and detect gravitational waves.”

When such a GW passes through an optical cavity, the spacing between its mirrors does not change in any detectable way. However, QSNET results have led Barontini’s team to conclude that milli-Hz GWs alter the phase of the light inside the cavity. What is more, they conclude that this effect would be detectable in the most precise optical cavities currently available.

“Methods from precision measurement with cold atoms can be transferred to gravitational-wave detection,” explains team member Vera Guarrera. “By combining these toolsets, compact optical resonators emerge as credible probes in the milli-Hz band, complementing existing approaches.”

Ground-based network

Their compact detector would comprise two optical cavities at 90° to each other – each operating at a different frequency – and an atomic reference at a third frequency. The phase shift caused by a passing gravitational wave is revealed in a change in how the three frequencies interfere with each other. The team proposes linking multiple detectors to create a global, ground-based network. This, they say, could detect a GW and also locate the position of its source in the sky.

By harnessing this existing technology, the researchers now hope that future studies could open up a new era of discovery of GWs in the milli-Hz range, far sooner than many projects currently in development.

“This detector will allow us to test astrophysical models of binary systems in our galaxy, explore the mergers of massive black holes, and even search for stochastic backgrounds from the early universe,” says team member Xavier Calmet at the University of Sussex. “With this method, we have the tools to start probing these signals from the ground, opening the path for future space missions.”

Barontini adds: “Hopefully this work will inspire the build-up of a global network of sensors that will scan the skies in a new frequency window that promises to be rich in sources – including many from our own galaxy.”

The research is described in Classical and Quantum Gravity.

 



The physics behind why cutting onions makes us cry

10 October 2025 at 16:30

Researchers in the US have studied the physics of how cutting onions can produce a tear-jerking reaction.

While it is known that the volatile chemical released from the onion – propanethial S-oxide – irritates the nerves in the cornea to produce tears, how such chemical-laden droplets reach the eyes, and whether they are influenced by the knife or cutting technique, remain less clear.

To investigate, Sunghwan Jung from Cornell University and colleagues built a guillotine-like apparatus and used high-speed video to observe the droplets released from onions as they were cut by steel blades.

“No one had visualized or quantified this process,” Jung told Physics World. “That curiosity led us to explore the mechanics of droplet ejection during onion cutting using high-speed imaging and strain mapping.”

They found that droplets, which can reach up to 60 cm high, were released in two stages – the first being a fast mist-like outburst that was followed by threads of liquid fragmenting into many droplets.
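
A quick kinematic estimate (ours, not the researchers’; it neglects air drag, which is significant for fine mist) gives the launch speed implied by that 60 cm rise.

g = 9.81                        # gravitational acceleration, m/s^2
h = 0.60                        # reported maximum droplet height, m
v_launch = (2 * g * h) ** 0.5   # from v^2 = 2*g*h for a ballistic droplet
print(f"launch speed ~ {v_launch:.1f} m/s")   # roughly 3.4 m/s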

The most energetic droplets were released during the initial contact between the blade and the onion’s skin.

When they began varying the sharpness of the blade and the cutting speed, they discovered that a greater number of droplets were released by blunter blades and faster cutting speeds.

“That was even more surprising,” notes Jung. “Blunter blades and faster cuts – up to 40 m/s – produced significantly more droplets with higher kinetic energy.”

Another surprise was that refrigerating the onions prior to cutting also produced an increased number of droplets of similar velocity, compared to unchilled vegetables.

So if you want to reduce the chances of welling up when making dinner, sharpen your knives, cut slowly and perhaps don’t keep the bulbs in the fridge.

The researchers say there are many more layers to the work and now plan to study how different onion varieties respond to cutting as well as how cutting could influence the spread of airborne pathogens such as salmonella.


Motion blur brings a counterintuitive advantage for high-resolution imaging

10 October 2025 at 10:00

Blur benefit: images on the left were taken by a camera that was moving during exposure; images on the right used the researchers’ algorithm to increase their resolution with information captured by the camera’s motion. (Courtesy: Pedro Felzenszwalb/Brown University)

Images captured by moving cameras are usually blurred, but researchers at Brown University in the US have found a way to sharpen them up using a new deconvolution algorithm. The technique could allow ordinary cameras to produce gigapixel-quality photos, with applications in biological imaging and archival/preservation work.

“We were interested in the limits of computational photography,” says team co-leader Rashid Zia, “and we recognized that there should be a way to decode the higher-resolution information that motion encodes onto a camera image.”

Conventional techniques to reconstruct high-resolution images from low-resolution ones involve relating the low-res images to a high-res one via a mathematical model of the imaging process. The effectiveness of these techniques is limited, however, as they produce only relatively small increases in resolution. If the initial image is blurred due to camera motion, this also limits the maximum resolution possible.

Exploiting the “tracks” left by small points of light

Together with Pedro Felzenszwalb of Brown’s computer science department, Zia and colleagues overcame these problems, successfully reconstructing a high-resolution image from one or several low-resolution images produced by a moving camera. The algorithm they developed to do this takes the “tracks” left by light sources as the camera moves and uses them to pinpoint precisely where the fine details must have been located. It then reconstructs these details on a finer, sub-pixel grid.

“There was some prior theoretical work that suggested this shouldn’t be possible,” says Felzenszwalb. “But we show that there were a few assumptions in those earlier theories that turned out not to be true. And so this is a proof of concept that we really can recover more information by using motion.”

Application scenarios

When they tried the algorithm out, they found that it could indeed exploit the camera motion to produce images with much higher resolution than those without the motion. In one experiment, they used a standard camera to capture a series of images in a grid of high-resolution (sub-pixel) locations. In another, they took one or more images while the sensor was moving. They also simulated recording single images or sequences of pictures while vibrating the sensor and while moving it along a linear path. These scenarios, they note, could be applicable to aerial or satellite imaging. In each case, they used their algorithm to construct a single high-resolution image from the shots captured by the camera.
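
The toy sketch below (our own simplification, not the Brown team’s algorithm) mimics the first experiment: several low-resolution frames taken at known sub-pixel offsets are modelled as box-averaged copies of an unknown high-resolution signal, and a least-squares solve recovers detail that no single frame contains.

import numpy as np

def forward_matrix(hi, lo, shift):
    # Map a 1D high-res signal (length hi) to one low-res frame (length lo)
    # captured at an integer sub-pixel offset "shift".
    factor = hi // lo
    A = np.zeros((lo, hi))
    for i in range(lo):
        start = i * factor + shift
        A[i, start:start + factor] = 1.0 / factor   # each coarse pixel averages a block of fine pixels
    return A

rng = np.random.default_rng(0)
hi, lo = 64, 16
truth = rng.random(hi)                                  # unknown high-res scene
shifts = [0, 1, 2, 3]                                   # known sub-pixel camera offsets
A = np.vstack([forward_matrix(hi, lo, s) for s in shifts])
frames = A @ truth                                      # stack of simulated low-res frames
recovered, *_ = np.linalg.lstsq(A, frames, rcond=None)  # combine the frames on a finer grid
print("max reconstruction error:", np.abs(recovered - truth).max())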

“Our results are especially interesting for applications where one wants high resolution over a relatively large field of view,” Zia says. “This is important at many scales from microscopy to satellite imaging. Other areas that could benefit are super-resolution archival photography of artworks or artifacts and photography from moving aircraft.”

The researchers say they are now looking into the mathematical limits of this approach as well as practical demonstrations. “In particular, we hope to soon share results from consumer camera and mobile phone experiments as well as lab-specific setups using scientific-grade CCDs and thermal focal plane arrays,” Zia tells Physics World.

“While there are existing systems that cameras use to take motion blur out of photos, no one has tried to use that to actually increase resolution,” says Felzenszwalb. “We’ve shown that’s something you could definitely do.”

The researchers presented their study at the International Conference on Computational Photography and their work is also available on the arXiv pre-print server.


Hints of a boundary between phases of nuclear matter found at RHIC

9 October 2025 at 17:30

In a major advance for nuclear physics, scientists working on the STAR detector at the Relativistic Heavy Ion Collider (RHIC) in the US have spotted subtle but striking fluctuations in the number of protons emerging from high-energy gold–gold collisions. The observation might be the most compelling sign yet of the long-sought “critical point” marking a boundary between different phases of nuclear matter, much as water can exist in liquid or vapour phases depending on temperature and pressure.

Team member Frank Geurts at Rice University in the US tells Physics World that these findings could confirm that the “generic physics properties of phase diagrams that we know for many chemical substances apply to our most fundamental understanding of nuclear matter, too.”

A phase diagram maps how a substance transforms between solid, liquid, and gas. For everyday materials like water, the diagram is familiar, but the behaviour of nuclear matter under extreme heat and pressure remains a mystery.

Atomic nuclei are made of protons and neutrons tightly bound together. These protons and neutrons are themselves made of quarks that are held together by gluons. When nuclei are smashed together at high energies, the protons and neutrons “melt” into a fluid of quarks and gluons called a quark–gluon plasma. This exotic high-temperature state is thought to have filled the universe just microseconds after the Big Bang.

Smashing gold ions

The quark–gluon plasma is studied by accelerating heavy ions like gold nuclei to nearly the speed of light and smashing them together. “The advantage of using heavy-ion collisions in colliders such as RHIC is that we can repeat the experiment many millions, if not billions, of times,” Geurts explains.

By adjusting the collision energy, researchers can control the temperature and density of the fleeting quark–gluon plasma they create. This allows physicists to explore the transition between ordinary nuclear matter and the quark–gluon plasma. Within this transition, theory predicts the existence of a critical point where gradual change becomes abrupt.

Now, the STAR Collaboration has focused on measuring the minute fluctuations in the number of protons produced in each collision. These “proton cumulants,” says Geurts, are statistical quantities that “help quantify the shape of a distribution – here, the distribution of the number of protons that we measure”.

In simple terms, the first two cumulants correspond to the average and width of that distribution, while higher-order cumulants describe its asymmetry and sharpness. Ratios of these cumulants are tied to fundamental properties known as susceptibilities, which become highly sensitive near a critical point.
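
A minimal numerical illustration of those definitions (ours; real analyses also apply detector-efficiency and volume corrections) for a toy set of event-by-event proton counts:

import numpy as np

rng = np.random.default_rng(1)
protons = rng.poisson(lam=12, size=100_000)   # toy proton count for each collision event

mean = protons.mean()
dev = protons - mean
C1 = mean                        # first cumulant: the average
C2 = np.mean(dev**2)             # second cumulant: the width (variance)
C3 = np.mean(dev**3)             # third cumulant: the asymmetry
C4 = np.mean(dev**4) - 3*C2**2   # fourth cumulant: the sharpness
# Cumulant ratios cancel the (unknown) collision volume; for this Poisson toy
# distribution they all come out close to 1.
print(C2 / C1, C3 / C2, C4 / C2)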

Unexpected discovery

Over three years of experiments, the STAR team studied gold–gold collisions at a wide range of energies, using sophisticated detectors to track and identify the protons and antiprotons created in each event. By comparing how the number of these particles changed with energy, the researchers discovered something unexpected.

As the collision energy decreased, the fluctuations in proton numbers did not follow a smooth trend. “STAR observed what it calls non-monotonic behaviour,” Geurts explains. “While at higher energies the ratios appear to be suppressed, STAR observes an enhancement at lower energies.” Such irregular changes, he said, are consistent with what might happen if the collisions pass near the critical point — the boundary separating different phases of nuclear matter.

For Volodymyr Vovchenko, a physicist at the University of Houston who was not involved in the research, the new measurements represent “a major step forward”. He says that “the STAR Collaboration has delivered the most precise proton-fluctuation data to date across several collision energies”.

Still, interpretation remains delicate. The corrections required to extract pure physical signals from the raw data are complex, and theoretical calculations lag behind in providing precise predictions for what should happen near the critical point.

“The necessary experimental corrections are intricate,” Vovchenko said, and some theoretical models “do not yet implement these corrections in a fully consistent way.” That mismatch, he cautions, “can blur apples-to-apples comparisons.”

The path forward

The STAR team is now studying new data from lower-energy collisions, focusing on the range where the signal appears strongest. The results could reveal whether the observed pattern marks the presence of a nuclear matter critical point or stems from more conventional effects.

Meanwhile, theorists are racing to catch up. “The ball now moves largely to theory’s court,” Vovchenko says. He emphasizes the need for “quantitative predictions across energies and cumulants of various order that are appropriate for apples-to-apples comparisons with these data.”

Future experiments, including RHIC’s fixed-target program and new facilities such as the FAIR accelerator in Germany, will extend the search even further. By probing lower energies and producing vastly larger datasets, they aim to map the transition between ordinary nuclear matter and quark–gluon plasma with unprecedented precision.

Whether or not the critical point is finally revealed, the new data are a milestone in the exploration of the strong force and the early universe. As Geurts put it, these findings trace “landmark properties of the most fundamental phase diagram of nuclear matter,” bringing physicists one step closer to charting how everything  – from protons to stars – first came to be.

The research is described in Physical Review Letters.


From quantum curiosity to quantum computers: the 2025 Nobel Prize for Physics

9 October 2025 at 15:50

This year’s Nobel Prize for Physics went to John Clarke, Michel Devoret and John Martinis “for the discovery of macroscopic quantum mechanical tunnelling and energy quantization in an electric circuit”.

That circuit was a superconducting device called a Josephson junction and their work in the 1980s led to the development of some of today’s most promising technologies for quantum computers.

To chat about this year’s laureates, and the wide-reaching scientific and technological consequences of their work, I am joined by Ilana Wisby – a quantum physicist, deep-tech entrepreneur and former CEO of UK-based Oxford Quantum Circuits. We chat about the trio’s breakthrough and its influence on today’s quantum science and technology.

(Courtesy: American Elements) This podcast is supported by American Elements, the world’s leading manufacturer of engineered and advanced materials. The company’s ability to scale laboratory breakthroughs to industrial production has contributed to many of the most significant technological advancements since 1990 – including LED lighting, smartphones, and electric vehicles.


The power of physics: what can a physicist do in the nuclear energy industry?

9 October 2025 at 11:00

Nuclear power in the UK is on the rise – and so too are the job opportunities for physicists. Whether it’s planning and designing new reactors, operating existing plants safely and reliably, or dealing with waste management and decommissioning, physicists play a key role in the burgeoning nuclear industry.

The UK currently has nine operational reactors across five power stations, which together provided 12% of the country’s electricity in 2024. But the government wants that figure to reach 25% by 2050 as part of its goal to move away from fossil fuels and reach net zero. Some also think that nuclear energy will be vital for powering data centres for AI in a clean and efficient way.

While many see fusion as the future of nuclear power, it is still in the research and development stages, so fission remains where most job opportunities lie. Although eight of the current fleet of nuclear reactors are to be retired by the end of this decade, the first of the next generation are already under construction. At Hinkley Point C in Somerset, two new reactors are being built with costs estimated to reach £46bn, and in July 2025 Sizewell C in Suffolk got the final go-ahead.

Rolls-Royce, meanwhile, has just won a government-funded bid to develop small modular reactors (SMRs) in the UK. Although the technology is currently unproven, the hope is that SMRs will be cheaper and quicker to build than traditional plants, with proponents saying that each reactor could produce enough affordable, emission-free energy to power about 600,000 homes for at least 60 years.

The renaissance of the nuclear power industry has led to employment in the sector growing by 35% between 2021 and 2024, with the workforce reaching over 85,000. However – as highlighted in a 2025 members survey by the Nuclear Institute – there are concerns about a skills shortage. Indeed, the Nuclear Skills Delivery Group drew up the Nuclear Skills Plan in 2024 with the aim of addressing this problem.

Supported by an investment of £763m by 2030 from the UK government and industry, the plan’s objectives include quadrupling the number of PhDs in nuclear fission, and doubling the number of graduates entering the workforce. It also aims to provide opportunities for people to “upskill” and join the sector mid-career. The overall hope is to fill 40,000 new jobs by the end of the decade.

Having a degree in physics can open the door to any part of the nuclear-energy industry, from designing, operating or decommissioning a reactor, to training staff, overseeing safety or working as a consultant. We talk to six nuclear experts who all studied physics at university but now work across the sector, for a range of companies – including EDF Energy and Great British Energy–Nuclear. They give a quick snapshot of their “nuclear journeys”, and offer advice to those thinking of following in their footsteps.

Design and construction

(Courtesy: Michael Hodgson)

Michael Hodgson, lead engineer, Rolls-Royce SMR

My interest in nuclear power started when I did a project on energy at secondary school. I learnt that there were significant challenges around the world’s future energy demands, resource security, and need for clean generation. Although at the time these were not topics commonly talked about, I could see they were vital to work on, and thought nuclear would play an important role.

I went on to study physics at the University of Surrey, with a year at Michigan State University in the US and another at CERN. After working for a couple of years, I returned to Surrey to do a part-time masters in radiation detection and instrumentation, followed a few years later by a PhD in radiation-hard semiconductor neutron detectors.

Up until recently, my professional work has mainly been in the supply chain for nuclear applications, working for Thermo Fisher Scientific, Centronic and Exosens. Nuclear power isn’t made by one company: it’s a combination of thousands of suppliers and sub-suppliers, the majority of which are small to medium-sized enterprises that need to operate across multiple industries. I was primarily a technical design authority for manufacturers of radiation detectors and instruments, used in applications such as reactor power monitoring, health physics, industrial controls and laboratory equipment, to name but a few. Now I work at Rolls-Royce SMR as a lead engineer for the control and instrumentation team. This role involves selecting and qualifying the thousands of different detectors and control instruments that will support the operation of small modular reactors.

Logical, evidence-based problem solving is the cornerstone of science and a powerful tool in any work setting

Beyond the technical knowledge I’ve gained throughout my education, studying physics has also given me two important skills. Firstly, learning how to learn – this is critical in academia but it also helps you step into any professional role. The second skill is the logical, evidence-based problem solving that is the cornerstone of science, which is a powerful tool in any work setting.

A career in nuclear energy can take many forms. The industry comprises a range of sectors and thousands of organizations that together form a complex support structure. My advice for any role is that knowledge is important, but experience is critical. While studying, try to look for opportunities to gain professional experience – this may be industry placements, research projects or even volunteering. And it doesn’t have to be in your specific area of interest – cross-disciplinary experience breeds novel thinking. Utilizing these opportunities can guide your professional interests, set your CV apart from your peers, and bring pragmatism to your future roles.

Reactor operation

(Courtesy: Katie Barber)

Katie Barber, nuclear reactor operator and simulator instructor at Sizewell B, EDF

I studied physics at the University of Leicester simply because it was a subject I enjoyed – at the time I had no idea what I wanted to do for a career. I first became interested in nuclear energy when I was looking for graduate jobs. The British Energy (now EDF) graduate scheme caught my eye because it offered a good balance of training and on-the-job experience. I was able to spend time in multiple different departments at different power stations before I decided which career path was right for me.

At the end of my graduate scheme, I worked in nuclear safety for several years. This involved reactor physics testing and advising on safety issues concerning the core and fuel. It was during that time I became interested in the operational response to faults. I therefore applied for the company’s reactor operator training programme – a two-year course that was a mixture of classroom and simulator training. I really enjoyed being a reactor operator, particularly during outages when the plant would be shut down, cooled, depressurized and disassembled for refuelling before reversing the process to start up again. But after almost 10 years in the control room, I wanted a new challenge.

Now I develop and deliver the training for the control-room teams. My job, which includes simulator and classroom training, covers everything from operator fundamentals (such as reactor physics and thermodynamics) and normal operations (e.g. start up and shutdown), through to accident scenarios.

My background in physics gives me a solid foundation for understanding the reactor physics and thermodynamics of the plant. However, there are also a lot of softer skills essential for my role. Teaching others requires the ability to present and explain technical material; to facilitate a constructive debrief after a simulator scenario; and to deliver effective coaching and feedback. The training focuses as much on human performance as it does technical knowledge, highlighting the importance of effective teamwork, error prevention and clear communications.

A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines

With Hinkley Point C construction progressing well and the recent final investment decision for Sizewell C, now is an exciting time to join the nuclear industry. A graduate training scheme is an excellent way to get an overview of the business, and gain experience across many different departments and disciplines, before making the decision about which area is right for you.

Nuclear safety

(Courtesy: Jacob Plummer)

Jacob Plummer, principal nuclear safety inspector, Office for Nuclear Regulation

I’d been generally interested in nuclear science throughout my undergraduate physics degree at the University of Manchester, but this really accelerated after studying modules in applied nuclear and reactor physics. The topic was engaging, and the nuclear industry offered a way to explore real-world implementation of physics concepts. This led me to do a masters in nuclear science and technology, also at Manchester (under the Nuclear Technology Education Consortium), to develop the skills the UK nuclear sector required.

My first job was as a graduate nuclear safety engineer at Atkins (now AtkinsRealis), an engineering consultancy. It opened my eyes to the breadth of physics-related opportunities in the industry. I worked on new and operational power station projects for Hitachi-GE and EDF, as well as a variety of defence new-build projects. I primarily worked in hazard analysis, using modelling and simulation tools to generate evidence on topics like fire, blast and flooding to support safety case claims and inform reactor designs. I was also able to gain experience in project management, business development, and other energy projects, such as offshore wind farms. The analytical and problem solving skills I had developed during my physics studies really helped me to adapt to all of these roles.

Currently I work as a principal nuclear safety inspector at the Office for Nuclear Regulation. My role is quite varied. Day to day I might be assessing safety case submissions from a prospective reactor vendor; planning and delivering inspections at fuel and waste sites; or managing fire research projects as part of an international programme. A physics background helps me to understand complex safety arguments and how they link to technical evidence; and to make reasoned and logical regulatory judgements as a result.

Physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more

It’s a great time to join the nuclear industry with a huge amount of activity and investment across the nuclear lifecycle. I’d advise early-career professionals to cast the net wide when looking for roles. There are some obvious physics-related areas such as health physics, fuel and core design, and criticality safety, but physics skills and experience are valued across the nuclear industry, from hazards and fault assessment to security, safeguards, project management and more. Don’t be limited by the physicist label.

Waste and decommissioning

(Courtesy: Egis)

Becky Houghton, principal consultant, Galson Sciences Ltd

My interest in a career in nuclear energy sparked mid-way through my degree in physics and mathematics at the University of Sheffield, when I was researching “safer nuclear power” for an essay. Several rabbit holes later, I had discovered a myriad of opportunities in the sector that would allow me to use the skills and knowledge I’d gained through my degree in an industrial setting.

My first job in the field was as a technical support advisor on a graduate training scheme, where I supported plant operations on a nuclear licensed site. Next, I did a stint working in strategy development and delivery across the back end of the fuel cycle, before moving into consultancy. I now work as a principal consultant for Galson Sciences Ltd, part of the Egis group. Egis is an international multi-disciplinary consulting and engineering firm, within which Galson Sciences provides specialist nuclear decommissioning and waste management consultancy services to nuclear sector clients worldwide.

Ultimately, my role boils down to providing strategic and technical support to help clients make decisions. My focus these days tends to be around radioactive waste management, which can mean anything from analysing radioactive waste inventories to assessing the environmental safety of disposal facilities.

In terms of technical skills needed for the role, data analysis and the ability to provide high-quality reports on time and within budget are at the top of the list. Physics-wise, an understanding of radioactive decay, criticality mechanisms and the physico-chemical properties of different isotopes is a fairly fundamental requirement. Meanwhile, as a consultant, some of the most important soft skills are being able to lead, teach and mentor less experienced colleagues; develop and maintain strong client relationships; and look after the well-being and deployment of my staff.

Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference

My advice to anyone looking to go into nuclear energy is to go for it. There are lots of really interesting things happening right now across the industry, all the way from building new reactors and operating the current fleet, to decommissioning, site remediation and waste management activities. Whichever part of the nuclear fuel cycle you end up in, the work you do makes a difference, whether that’s by cleaning up the legacy of years gone by or by helping to meet the UK’s energy demands. Don’t be afraid to say “yes” to opportunities even if they’re outside your comfort zone, keep learning, and keep being curious about the world around you.

Uranium enrichment

(Courtesy: Mark Savage)

Mark Savage, nuclear licensing manager, Urenco UK

As a child, I remember going to the visitors’ centre at the Sellafield nuclear site – a large nuclear facility in the north-west of England that’s now the subject of a major clean-up and decommissioning operation. At the centre, there was a show about splitting the atom that really sparked my interest in physics and nuclear energy.

I went on to study physics at Durham University, and did two summer placements at Sellafield, working with radiometric instruments. I feel these placements helped me get a place on the Rolls-Royce nuclear engineering graduate scheme after university. From there I joined Urenco, an international supplier of uranium enrichment services and fuel cycle products for the civil nuclear industry.

While at Urenco, I have undertaken a range of interesting roles in nuclear safety and radiation physics, including criticality safety assessment and safety case management. Highlights have included being the licensing manager for a project looking to deploy a high-temperature gas-cooled reactor design, and presenting a paper at a nuclear industry conference in Japan. These roles have allowed me to directly apply my physics background – such as using Monte Carlo radiation transport codes to model nuclear systems and radiation sources – as well as develop broader knowledge and skills in safety, engineering and project management.

My current role is nuclear licensing manager at the Capenhurst site in Cheshire, where we operate a number of nuclear facilities including three uranium enrichment plants, a uranium chemical deconversion facility, and waste management facilities. I lead a team who ensure the site complies with regulations, and achieves the required approvals for our programme of activities. Key skills for this role include building relationships with internal and external stakeholders; being able to understand and explain complex technical issues to a range of audiences; and planning programmes of work.

I would always recommend anyone interested in working in nuclear energy to look for work experience

Some form of relevant experience is always advantageous, so I would always recommend anyone interested in working in nuclear energy to look for work experience visits, summer placements or degree schemes that include working with industry.

Skills initiatives

(Courtesy: Great British Energy – Nuclear)

Saralyn Thomas, skills lead, Great British Energy – Nuclear

During my physics degree at the University of Bristol, my interest in energy led me to write a dissertation on nuclear power. This inspired me to do a masters in nuclear science and technology at the University of Manchester under the Nuclear Technology Education Consortium. The course opened doors for me, such as a summer placement with the UK National Nuclear Laboratory, and my first role as a junior safety consultant with Orano.

I worked in nuclear safety for roughly 10 years, progressing to principal consultant with Abbott Risk Consulting, but decided that this wasn’t where my strengths and passions lay. During my career, I volunteered for the Nuclear Institute (NI), and worked with the society’s young members group – the Young Generation Network (YGN). I ended up becoming chair of the YGN and a trustee of the NI, which involved supporting skills initiatives including those feeding into the Nuclear Skills Plan. Having a strategic view of the sector and helping to solve its skills challenges energized me in a new way, so I chose to change career paths and moved to Great British Energy – Nuclear (GBE-N) as skills lead. In this role I plan for what skills the business and wider sector will need for a nuclear new build programme, as well as develop interventions to address skills gaps.

GBE-N’s current remit is to deliver Europe’s first fleet of small modular reactors, but there is relatively limited experience of building this technology. Problem-solving skills from my background in physics have been essential to understanding what assumptions we can put in place at this early stage, learning from other nuclear new builds and major infrastructure projects, to help set us up for the future.

The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap

To anyone interested in nuclear energy, my advice is to get involved now. The UK’s nuclear sector is seeing significant government commitment, but there is a major skills gap. Nuclear offers a lifelong career with challenging, complex projects – ideal for physicists who enjoy solving problems and making a difference.

 


A record-breaking anisotropic van der Waals crystal?

9 October 2025 at 10:13

In general, when you measure material properties such as optical permittivity, your measurement doesn’t depend on the direction in which you make it.

However, recent research has shown that this is not the case for all materials. In some cases, their optical permittivity is directional. This is commonly known as in-plane optical anisotropy. A larger difference between optical permittivity in different directions means a larger anisotropy.
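
One common way to express this (our shorthand, not the paper’s notation): if the in-plane permittivities along the two crystal axes are ε_x and ε_y, then for a transparent medium the refractive indices are n_i ≈ √ε_i and the in-plane birefringence is

\Delta n = \left| \sqrt{\varepsilon_x} - \sqrt{\varepsilon_y} \right|

so a larger permittivity contrast between the two directions means a larger anisotropy.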

Materials with very large anisotropies have applications in a wide range of fields, from photonics and electronics to medical imaging. However, for most materials available today, the value remains relatively low.

These potential applications, combined with this limitation, have driven a large amount of research into novel anisotropic materials.

In this latest work, a team of researchers studied the quasi-one-dimensional van der Waals crystal Ta2NiSe5.

Van der Waals (vdW) crystals are made up of chains, ribbons, or layers of atoms that stick together through weak van der Waals forces.

In quasi-one-dimensional vdW crystals, the atoms are strongly connected along one direction, while the connections in the other directions are much weaker, making their properties very direction-dependent.

This structure makes quasi-one-dimensional vdW crystals a good place to search for large optical anisotropy values. The researchers studied the crystal using a range of measurement techniques, such as ellipsometry and spectroscopy, as well as state-of-the-art first-principles computer simulations.

The results show that Ta2NiSe5 has a record-breaking in-plane optical anisotropy across the visible to infrared spectral region, representing the highest value reported among van der Waals materials to date.

The study therefore has large implications for next-generation devices in photonics and beyond.

Read the full article

Giant in-plane anisotropy in novel quasi-one-dimensional van der Waals crystal – IOPscience

Zhou et al. 2025 Rep. Prog. Phys. 88 050502

 


Unlocking the limits of quantum security

9 October 2025 at 10:12

In quantum information theory, secret-key distillation is a crucial process for enabling secure communication across quantum networks. It works by extracting confidential bits from shared quantum states or channels using local operations and limited classical communication, ensuring privacy even over insecure links.

A bipartite quantum state is a system shared between two parties (often called Alice and Bob) that may exhibit entanglement. If they successfully distil a secret key, they can encrypt and decrypt messages securely, using the key like a shared password known only to them.
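
As a purely classical illustration of what the distilled key is ultimately used for (this is ordinary one-time-pad encryption, not part of the quantum protocol itself):

import secrets

message = b"meet at dawn"
key = secrets.token_bytes(len(message))                     # the shared secret key
ciphertext = bytes(m ^ k for m, k in zip(message, key))     # Alice encrypts with XOR
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))   # Bob decrypts with the same key
assert recovered == message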

To achieve this, Alice and Bob use point-to-point quantum channels and perform local operations, meaning each can only manipulate their own part of the system. They also rely on one-way classical communication, where Alice sends messages to Bob, but Bob cannot reply. This constraint reflects realistic limitations in quantum networks and helps researchers identify the minimum requirements for secure key generation.

This paper investigates how many secret bits can be extracted under these conditions. The authors introduce a resource-theoretic framework based on unextendible entanglement, which is a form of entanglement that cannot be shared with additional parties. This framework allows them to derive efficiently computable upper bounds on secret-key rates, helping determine how much security is achievable with limited resources.

Their results apply to both one-shot scenarios, where the quantum system is used only once, and asymptotic regimes, where the same system is used repeatedly and statistical patterns emerge. Notably, they extend their approach to quantum channels assisted by forward classical communication, resolving a long-standing open problem about the one-shot forward-assisted private capacity.

Finally, they show that error rates in private communication can decrease exponentially with repeated channel use, offering a scalable and practical path toward building secure quantum messaging systems.

Read the full article

Extendibility limits quantum-secured communication and key distillation

Vishal Singh and Mark M Wilde 2025 Rep. Prog. Phys. 88 067601

Do you want to learn more about this topic?

Distribution of entanglement in large-scale quantum networks by S Perseguers, G J Lapeyre Jr, D Cavalcanti, M Lewenstein and A Acín (2013)


Optical gyroscope detects Earth’s rotation with the highest precision yet

8 October 2025 at 18:11

As the Earth moves through space, it wobbles. Researchers in Germany have now directly observed this wobble with the highest precision yet thanks to a large ring laser gyroscope they developed for this purpose. The instrument, which is located in southern Germany and operates continuously, represents an important advance in the development of super-sensitive rotation sensors. If further improved, such sensors could help us better understand the interior of our planet and test predictions of relativistic effects, including the distortion of space-time due to Earth’s rotation.

The Earth rotates once every day, but there are tiny fluctuations, or wobbles, in its axis of rotation. These fluctuations are caused by several factors, including the gravitational forces of the Moon and Sun and, to a lesser extent, the neighbouring planets in our Solar System. Other, smaller fluctuations stem from the exchange of momentum between the solid Earth and the oceans, atmosphere and ice sheets. The Earth’s shape, which is not a perfect sphere but is flattened at the poles and thickened at the equator, also contributes to the wobble.

These different types of fluctuations produce effects known as precession and nutation that cause the extension of the Earth’s axis to trace a wrinkly circle in the sky. At the moment, this extended axis is aligned precisely with the North Star. In the future, it will align with other stars before returning to the North Star again in a cycle that lasts 26,000 years.

Most studies of the Earth’s rotation involve combining data from many sources. These sources include very long baseline radio-astronomy observations of quasars; global satellite navigation systems (GNSS); and GNSS observations combined with satellite laser ranging (SLR) and Doppler orbitography and radiopositioning integrated by satellite (DORIS). These techniques are based on measuring the travel time of light, and because it is difficult to combine them, only one such measurement can be made per day.

An optical interferometer that works using the Sagnac effect

The new gyroscope, which is detailed in Science Advances, is an optical interferometer that operates using the Sagnac effect. At its heart is an optical cavity that guides a light beam around a square path 16 m long. Depending on the rate of rotation it experiences, this cavity selects two different frequencies from the beam to be coherently amplified. “The two frequencies chosen are the only ones that have an integer number of waves around the cavity,” explains team leader Ulrich Schreiber of the Technische Universität München (TUM). “And because of the finite velocity of light, the co-rotating beam ‘sees’ a slightly larger cavity, while the anti-rotating beam ‘sees’ a slightly shorter one.”

The frequency shift in the interference pattern produced by the co-rotating beam is projected onto an external detector and is strictly proportional to the Earth’s rotation rate. Because the accuracy of the measurement depends, in part, on the mechanical stability of the set-up, the researchers constructed their gyroscope from a glass ceramic that does not expand much with temperature. They also set it up horizontally in an underground laboratory, the Geodetic Observatory Wettzell in southern Bavaria, to protect it as much as possible from external vibrations.
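
A back-of-the-envelope version of that proportionality (our own estimate, not figures from the paper): for a horizontal square ring of 16 m perimeter, and assuming a helium–neon laser wavelength of 633 nm and a site latitude of about 49° north, the Sagnac formula gives a beat frequency of a few hundred hertz.

import math

side = 4.0                       # m, square ring with a 16 m perimeter
area = side**2                   # m^2, enclosed area A
perimeter = 4 * side             # m, beam path length P
wavelength = 633e-9              # m, assumed helium-neon laser line
omega_earth = 7.292e-5           # rad/s, Earth's rotation rate
latitude = math.radians(49.0)    # assumed latitude of the Wettzell site

# Sagnac beat frequency for a horizontal ring: (4A / (lambda * P)) * Omega * sin(latitude)
delta_f = 4 * area / (wavelength * perimeter) * omega_earth * math.sin(latitude)
print(f"Sagnac beat frequency ~ {delta_f:.0f} Hz")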

The instrument can sense the Earth’s rotation to within an accuracy of 48 parts per billion (ppb), which corresponds to a few picoradians per second. “This is about a factor of 100 better than any other rotation sensor,” says Schreiber, “and, importantly, is less than an order of magnitude away from the regime in which relativistic effects can be measured – but we are not quite there yet.”

An increase in the measurement accuracy and stability of the ring laser by a factor of 10 would, Schreiber adds, allow the researchers to measure the space-time distortion caused by the Earth’s rotation. For example, it would permit them to conduct a direct test for the Lense-Thirring effect — that is, the “dragging” of space by the Earth’s rotation – right at the Earth’s surface.

To reach this goal, the researchers say they would need to amend several details of their sensor design. One example is the composition of the thin-film coatings on the mirrors inside their optical interferometer. “This is neither easy nor straightforward,” explains Schreiber, “but we have some ideas to try out and hope to progress here in the near future.

“In the meantime, we are working towards implementing our measurements into a routine evaluation procedure,” he tells Physics World.


Susumu Kitagawa, Richard Robson and Omar Yaghi win the 2025 Nobel Prize for Chemistry

8 October 2025 at 12:01

Susumu Kitagawa, Richard Robson and Omar Yaghi have been awarded the 2025 Nobel Prize for Chemistry “for developing metal-organic frameworks”.

The award includes a SEK 11m prize ($1.2m), which is shared equally by the winners. The prize will be presented at a ceremony in Stockholm on 10 December.

The prize was announced this morning by members of the Royal Swedish Academy of Science. Speaking on the phone during the press conference, Kitagawa noted that he was “deeply honoured and delighted” that his research had been recognized. 

A new framework

Beginning in the late 1980s and over the following couple of decades, the trio, who are all trained chemists, developed a new form of molecular architecture in which metal ions function as cornerstones that are linked by long organic carbon-based molecules.

Together, the metal ions and molecules form crystals that contain large cavities through which gases and other chemicals can flow.

“It’s a little like Hermione’s handbag – small on the outside, but very large on the inside,” noted Heiner Linke, chair of the Nobel Committee for Chemistry.

Yet the trio had to overcome several challenges before the frameworks could be used, such as making them stable and flexible, which Kitagawa noted “was very tough”.

These porous materials are now called metal-organic frameworks (MOFs). By varying the building blocks used in the MOFs, researchers can design them to capture and store specific substances, as well as drive chemical reactions or conduct electricity.

“Metal-organic frameworks have enormous potential, bringing previously unforeseen opportunities for custom-made materials with new functions,” added Linke.

Following the laureates’ work, chemists have built tens of thousands of different MOFs.

3D MOFs are an important class of materials that could be used in applications as diverse as sensing, gas storage, catalysis and optoelectronics.  

MOFs are now able to capture water from air in the desert, sequester carbon dioxide from industry effluents, store hydrogen gas, recover rare-earth metals from waste, break down oil contamination as well as extract “forever chemicals” such as PFAS from water.

“My dream is to capture air and to separate air into CO2, oxygen and water and convert them to usable materials using renewable energy,” noted Kitagawa. 

Their 2D versions might even be used as flexible material platforms to realize exotic quantum phases, such as topological and anomalous quantum Hall insulators.

Life scientific

Kitagawa was born in 1951 in Kyoto, Japan. He obtained a PhD from Kyoto University, Japan, in 1979 and then held positions at Kindai University before joining Tokyo Metropolitan University in 1992. He then joined Kyoto University in 1998 where he is currently based.

Robson was born in 1937 in Glusburn, UK. He obtained a PhD from the University of Oxford in 1962. After postdoc positions at the California Institute of Technology and Stanford University, in 1966 he moved to the University of Melbourne, where he remained for the rest of his career.

Yaghi was born in 1965 in Amman, Jordan. He obtained a PhD from the University of Illinois Urbana-Champaign, US, in 1990. He then held positions at Arizona State University, the University of Michigan and the University of California, Los Angeles, before joining the University of California, Berkeley, in 2012, where he is currently based.


Machine learning optimizes nanoparticle design for drug delivery to the brain

8 October 2025 at 10:30

Neurodegenerative diseases affect millions of people worldwide, but treatment of such conditions is limited by the blood–brain barrier (BBB), which blocks the passage of drugs to the brain. In the quest for more effective therapeutic options, a multidisciplinary research team has developed a novel machine learning-based technique to predict the behaviour of nanoparticles as drug delivery systems.

The work focuses on nanoparticles that can cross the BBB and provide a promising platform for enhancing drug transport into the brain. But designing specific nanoparticles to target specific brain regions is a complex and time-consuming task; there’s a need for improved design frameworks to identify potential candidates with desirable bioactivity profiles. For this, the team – comprising researchers from the University of the Basque Country (UPV/EHU) in Spain and Tulane University in the USA, led by the multicentre CHEMIF.PTML Lab – turned to machine learning.

Machine learning uses molecular and clinical data to detect trends that may lead to novel drug delivery strategies with improved efficiency and reduced side effects. In contrast to slow and costly trial-and-error or physical modelling approaches, machine learning could provide efficient initial screening of large combinations of nanoparticle compositions. Traditional machine learning, however, can be hindered by the lack of suitable data sets.

To address this limitation, the CHEMIF.PTML Lab team developed the IFE.PTML method – an approach that integrates information fusion, Python-based encoding and perturbation theory with machine learning algorithms – which it describes in Machine Learning: Science and Technology.

“The main advantage of our IFE.PTML method lies in its ability to handle heterogeneous nanoparticle data,” corresponding author Humberto González-Díaz explains. “Standard machine learning approaches often struggle with disperse and multi-source datasets from nanoparticle experiments. Our approach integrates information fusion to combine diverse data types – such as physicochemical properties, bioassays and so on – and applies perturbation theory to model these uncertainties as probabilistic perturbations around baseline conditions. This results in more robust, generalizable predictions of nanoparticle behaviour.”

To build the predictive models, the researchers created a database containing physicochemical and bioactivity parameters for 45 different nanoparticle systems across 41 different cell lines. They used these data to train IFE.PTML models with three machine learning algorithms – random forest, extreme gradient boosting and decision tree – to predict the drug delivery behaviour of various nanomaterials. The random forest-based model showed the best overall performance, with accuracies of 95.1% and 89.7% on training and testing data sets, respectively.
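
A minimal sketch of that kind of workflow (synthetic data and hypothetical descriptors; this is not the IFE.PTML pipeline itself, which layers information fusion and perturbation-theory operators on top of the classifier):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical nanoparticle descriptors: core diameter (nm), zeta potential (mV), coating type
X = np.column_stack([
    rng.uniform(10, 60, n),     # diameter
    rng.uniform(-40, 40, n),    # zeta potential
    rng.integers(0, 2, n),      # 0 = PMAO, 1 = PMAO-PEI (toy encoding)
])
# Toy rule standing in for measured "desired (1) / undesired (0)" bioactivity labels
y = ((X[:, 0] < 40) & (np.abs(X[:, 1]) > 15)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))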

Experimental demonstration

To illustrate the real-world applicability of the random forest-based IFE.PTML model, the researchers synthesized two novel magnetite nanoparticle systems (the 31 nm-diameter Fe3O4_A and the 26 nm-diameter Fe3O4_B). Magnetite-based nanoparticles are biocompatible, can be easily functionalized and have a high surface area-to-volume ratio, making them efficient drug carriers. To make them water soluble, the nanoparticles were coated with either PMAO (poly(maleic anhydride-alt-1-octadecene)) or PMAO plus PEI (poly(ethyleneimine)).

Preparation process Functionalization of Fe3O4 nanoparticles with PMAO and PEI polymers. (Courtesy: Mach. Learn.: Sci. Technol. 10.1088/2632-2153/ae038a)

The team characterized the structural, morphological and magnetic properties of the four nanoparticle systems and then used the optimized model to predict their likelihood of favourable bioactivity for drug delivery in various human brain cell lines, including models of neurodegenerative disease, brain tumour models and a cell line modelling the BBB.

As inputs for their model, the researchers used a reference function based on the bioactivity parameters for each system, plus perturbation theory operators for various nanoparticle parameters. The IFE.PTML model calculated key bioactivity parameters, focusing on indicators of toxicity, efficacy and safety. These included the 50% cytotoxic, inhibitory, lethal and toxic concentrations (at which 50% of the biological effect is observed) and the zeta potential, which affects the nanoparticles’ capacity to cross the BBB. For each parameter, the model returned a binary result: “0” for undesired and “1” for desired bioactivities.

The model identified PMAO-coated nanoparticles as the most promising candidates for BBB and neuronal applications, due to their potentially favourable stability and biocompatibility. Nanoparticles with PMAO-PEI coatings, on the other hand, could prove optimal for targeting brain tumour cells.

The researchers point out that, where comparisons were possible, the trends predicted by the RF-IFE.PTML model agreed with the experimental findings, as well as with previous studies reported in the literature. As such, they conclude that their model is efficient and robust and offers valuable predictions on nanoparticle–coating combinations designed to act on specific targets.

“The present study focused on the nanoparticles as potential drug carriers. Therefore, we are currently implementing a combined machine learning and deep learning methodology with potential drug candidates for neurodegenerative diseases,” González-Díaz tells Physics World.

The post Machine learning optimizes nanoparticle design for drug delivery to the brain appeared first on Physics World.

Advances in quantum error correction showcased at Q2B25

7 octobre 2025 à 17:00

This year’s Q2B meeting took place at the end of last month in Paris at the Cité des Sciences et de l’Industrie, a science museum in the north-east of the city. The event brought together more than 500 attendees and 70 speakers – world-leading experts from industry, government institutions and academia. All major quantum technologies were highlighted: computing, AI, sensing, communications and security.

Among the quantum computing topics was quantum error correction (QEC) – something that will be essential for building tomorrow’s fault-tolerant machines. Indeed, it could even be the technology’s most important and immediate challenge, according to the speakers on the State of Quantum Error Correction Panel: Paul Hilaire of Telecom Paris/IP Paris, Michael Vasmer of Inria, Quandela’s Boris Bourdoncle, Riverlane’s Joan Camps and Christophe Vuillot from Alice & Bob.

As was clear from the conference talks, quantum computers are undoubtedly advancing in leaps and bounds. One of their most important weak points, however, is that their fundamental building blocks (quantum bits, or qubits) are highly prone to errors. These errors are caused by interactions with the environment – also known as noise – and correcting them will require innovative software and hardware. Today’s machines are only capable of running on average a few hundred operations before an error occurs; but in the future, we will have to develop quantum computers capable of processing a million error-free quantum operations (known as a MegaQuOp) or even a trillion error-free operations (TeraQuOps).

QEC works by distributing one quantum bit of information – called a logical qubit – across several different physical qubits, such as superconducting circuits or trapped atoms. Each physical qubit is noisy, but they work together to preserve the quantum state of the logical qubit – at least for long enough to perform a calculation. It was Peter Shor who first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A technique known as syndrome decoding is then used to diagnose which error was the likely source of corruption on an encoded state. The error can then be reversed by applying a corrective operation depending on the syndrome.
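
The idea is easiest to see in the simplest possible case – a three-qubit bit-flip repetition code rather than Shor's nine-qubit code. The toy simulation below treats the errors classically (bit flips only, no phase errors), so it is a sketch of the logic of syndrome decoding rather than a faithful quantum model.

import random

def encode(bit):
    # One logical bit spread across three physical bits
    return [bit, bit, bit]

def apply_noise(bits, p):
    # Flip each physical bit independently with probability p
    return [b ^ (random.random() < p) for b in bits]

def syndrome(bits):
    # Parity checks on neighbouring pairs diagnose the error
    # without ever reading the encoded value directly
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    # Map each syndrome to the single flip most likely to have caused it
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

trials, p, failures = 100_000, 0.05, 0
for _ in range(trials):
    if correct(apply_noise(encode(0), p)) != [0, 0, 0]:
        failures += 1
print(f"logical error rate ~ {failures/trials:.4f} versus physical error rate {p}")

With a 5% physical error rate, the logical failure rate comes out at roughly 0.7%, because the code only fails when two or more of the three bits flip at once.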

Prototype quantum computer from NVIDIA
Computing advances A prototype quantum computer from NVIDIA that makes use of seven qubits. (Courtesy: Isabelle Dumé)

While error correction should become more effective as the number of physical qubits in a logical qubit increases, adding more physical qubits to a logical qubit also adds more noise. Much progress has been made in addressing this and other noise issues in recent years, however.

“We can say there’s a ‘fight’ when increasing the length of a code,” explains Hilaire. “Doing so allows us to correct more errors, but we also introduce more sources of errors. The goal is thus being able to correct more errors than we introduce. What I like with this picture is the clear idea of the concept of a fault-tolerant threshold below which fault-tolerant quantum computing becomes feasible.”
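
A commonly quoted rule of thumb for surface-code-like codes – given here as context rather than as a result presented by the panel – captures this fight between code size and error rate:

p_{\mathrm{L}} \;\approx\; A\left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor}

Here p is the physical error rate per operation, p_th is the fault-tolerant threshold, d is the code distance and A is a constant of order one. Below threshold (p < p_th), every increase in distance suppresses the logical error rate p_L exponentially; above threshold, making the code bigger only makes things worse.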

Developments in QEC theory

Speakers at the Q2B25 meeting shared a comprehensive overview of the most recent advances in the field – and they are varied. First up: concatenated error-correction codes. Prevalent in the early days of QEC, these fell by the wayside in favour of codes like the surface code, but recent work has shown they are making a return. Concatenated codes can achieve constant encoding rates, and a quantum computer architecture based on them that operates with linear, nearest-neighbour connectivity was recently put forward. Directional codes, the likes of which are being developed by Riverlane, are also being studied. These leverage native transmon qubit logic gates – for example, iSWAP gates – and could potentially outperform surface codes in some aspects.

The panellists then described bivariate bicycle codes, which are being developed by IBM and offer better encoding rates than surface codes. While their decoding can be challenging for real-time applications, IBM’s “relay belief propagation” (relay BP) decoder has made progress here by simplifying decoding strategies that previously involved combining BP with post-processing. Better still, this decoder is very general and works for all “low-density parity check” codes – one of the most studied classes of high-performance QEC codes, which also includes, for example, surface codes and directional codes.

There is also renewed interest in decoders that can be parallelized and operate locally within a system, they said. These have shown promise for codes like the 1D repetition code, which could revive the concept of self-correcting or autonomous quantum memory. Another possibility is the increased use of the graphical language ZX calculus as a tool for optimizing QEC circuits and understanding spacetime error structures.

Hardware-specific challenges

The panel stressed that to achieve robust and reliable quantum systems, we will need to move beyond so-called hero experiments. For example, the demand for real-time decoding at megahertz frequencies with microsecond latencies is an important and unprecedented challenge. Indeed, breaking down the decoding problem into smaller, manageable bits has proven difficult so far.

There are also issues with qubit platforms themselves that need to be addressed: trapped ions and neutral atoms allow for high fidelities and long coherence times, but they are roughly 1000 times slower than superconducting and photonic qubits and therefore require algorithmic or hardware speed-ups. And that is not all: solid-state qubits (such as superconducting and spin qubits) suffer from a “yield problem”, with dead qubits on manufactured chips. Improved fabrication methods will thus be crucial, said the panellists.


Collaboration between academia and industry

The discussions then moved towards the subject of collaboration between academia and industry. In the field of QEC, such collaboration is highly productive today, with joint PhD programmes and shared conferences like Q2B, for example. Large companies also now boast substantial R&D departments capable of funding high-risk, high-reward research, blurring the lines between fundamental and application-oriented research. Both sectors also use similar foundational mathematics and physics tools.

At the moment there’s an unprecedented degree of openness and cooperation in the field. This situation might change, however, as commercial competition heats up, noted the panellists. In the future, for example, researchers from both sectors might be less inclined to share experimental chip details.

Last, but certainly not least, the panellists stressed the urgent need for more PhDs trained in quantum mechanics to address the talent deficit in both academia and industry. So, if you were thinking of switching to another field, perhaps now could be the time to jump.

The post Advances in quantum error correction showcased at Q2B25 appeared first on Physics World.

A low vibration wire scanner fork for free electron lasers

7 octobre 2025 à 16:51
High performance, proven, wire scanner for transverse beam profile measurement for the latest generation of low emittance accelerators and FELs. (Courtesy: UHV Design)

A new high-performance wire scanner fork that the latest generation of free electron lasers (FELs) can use for measuring beam profiles has been developed by UK-based firm UHV Design. Produced using technology licensed from the Paul Scherrer Institute (PSI) in Switzerland, the device could be customized for different FELs and low emittance accelerators around the world. It builds on the company’s PLSM range, which allows heavy objects to be moved very smoothly and with minimal vibrations.

The project began 10 years ago when the PSI was starting to build the Swiss Free Electron Laser and equipping the facility, explains Jonty Eyres. The remit for UHV Design was to provide a stiff, very smooth, bellows sealed, ultra-high vacuum compatible linear actuator that could move a wire fork without vibrating it adversely. The fork, designed by PSI, can hold wires in two directions and can therefore scan the intensity of the beam profile in both X and Y planes using just one device as opposed to two or more as in previous such structures.

“We decided to employ an industrial integrated ball screw and linear slide assembly with a very stiff frame around it, the construction of which provides the support and super smooth motion,” he says. “This type of structure is generally not used in the ultra-high vacuum industry.”

The position of the wire fork is determined by a (radiation-hard) side-mounted linear optical encoder, used in conjunction with PSI’s own motor and gearbox assembly. A power-off brake is also incorporated to avoid any issues with back-driving under vacuum load if electrical power were lost to the PLSM. All electrical connections are terminated with UTO-style connectors to PSI’s specification.

Long-term reliability was important to avoid costly and unnecessary downtime, particularly between planned FEL maintenance shutdowns. The industrial ball screw and slide assembly was, by design, well suited to this, used in conjunction with a bellows assembly rated for 500,000 cycles, with an option to increase this to 1 million cycles.

Eyres and his UHV Design team began by building a prototype that PSI tested with a high-speed camera. Once the design was validated, the UHV engineers built a batch of 20 identical units to prove that the device could be replicated within the specified constraints and tolerances.

The real challenge in constructing this device, says Eyres, was to minimize the amount of vibration on the wire, which, for PSI, is typically between 5 and 25 microns thick. This is only possible if the vibration of the wire during a scan is small compared to the cross-section of the wire – that is, about a micron for a 25-micron wire. “Otherwise, you are just measuring noise,” explains Eyres. “The small vibration we achieved can be corrected for in calculations, providing an accurate value for the beam profile intensity.”

UHV Design holds the intellectual property rights for the linear actuator and PSI those for the fork. Following the success of the project and a subsequent agreement between the two, it was recently decided that UHV Design would buy a licence to promote the wire fork, allowing the company to sell the device, or a version of it, to any institution or company operating a FEL or low-emittance accelerator. “The device is customizable and can be adapted to different types of fork, wires, motors or encoders,” says Eyres. “The heart of the design remains the same: a very stiff structure and its integrated ball screw and linear slide assembly. But it can be tailored to meet the requirements of different beam lines in terms of stroke size, specific wiring and the components employed.”

UHV Design’s linear actuator was installed on the Swiss FEL in 2016 and has been performing very well since, says Eyres.

A final and important point, he adds, is that when UHV Design took on the licence agreement it built an identical copy of its actuator to prove that it could still reproduce the same performance. “We built an exact copy of the wire scanner, including the PSI fork assembly, and sent it to PSI, who then used the very same high-speed camera rig that they’d employed in 2015 to directly compare the new actuator with the original ones supplied. They reported that the results were indeed comparable, meaning that if fitted to the Swiss FEL today, it would perform in the same way.”

For more information: https://www.uhvdesign.com/products/linear-actuators/wire-scanner/

The post A low vibration wire scanner fork for free electron lasers appeared first on Physics World.

Rapid calendar life screening of electrolytes for silicon anodes using voltage holds

7 octobre 2025 à 15:48


Silicon-based lithium-ion batteries exhibit severe time-based degradation, resulting in poor calendar lives. In this webinar, we will talk about how calendar aging is measured, why traditional measurement approaches are time-intensive, and why new approaches are needed to optimize materials for next-generation silicon-based systems. Using this new approach, we also screen multiple new electrolyte systems that can lead to calendar-life improvements in silicon-containing batteries.

An interactive Q&A session follows the presentation.

Ankit Verma

Ankit Verma’s expertise is in physics-based and data-driven modeling of lithium-ion and next-generation lithium-metal batteries. His interests lie in unraveling the coupled reaction–transport–mechanics behavior in these electrochemical systems, with experiment-driven validation to provide predictive insights for practical advancements. Predominantly, he is working on improving the energy density and calendar life of silicon anodes as part of the Silicon Consortium Project, and on understanding solid-state battery limitations and the upcycling of end-of-life electrodes as part of the ReCell Center.

Verma’s past work includes the optimization of lithium-ion battery anodes and cathodes for high-power and fast-charge applications, and understanding electrodeposition stability in metal anodes.

 


The post Rapid calendar life screening of electrolytes for silicon anodes using voltage holds appeared first on Physics World.

John Clarke, Michel Devoret and John Martinis win the 2025 Nobel Prize for Physics

7 octobre 2025 à 11:52

John Clarke, Michel Devoret and John Martinis share the 2025 Nobel Prize for Physics “for the discovery of macroscopic quantum mechanical tunnelling and energy quantization in an electric circuit”. 

The award includes a SEK 11m prize ($1.2m), which is shared equally by the winners. The prize will be presented at a ceremony in Stockholm on 10 December.

The prize was announced this morning by members of the Royal Swedish Academy of Science. Olle Eriksson of Uppsala University and chair of the Nobel Committee for Physics commented, “There is no advanced technology today that does not rely on quantum mechanics.”

Göran Johansson of Chalmers University of Technology explained that the three laureates took quantum tunnelling from the microscopic world and onto superconducting chips, allowing physicists to study quantum physics and ultimately create quantum computers.

Speaking on the telephone, John Clarke said of his win, “To put it mildly, it was the surprise of my life,” adding “I am completely stunned. It had never occurred to me that this might be the basis of a Nobel prize.” On the significance of the trio’s research, Clarke said, “The basis of quantum computing relies to quite an extent on our discovery.”

As well as acknowledging the contributions of Devoret and Martinis, Clarke also said that their work was made possible by Anthony Leggett and Brian Josephson, who laid the groundwork for studies of tunnelling in superconducting circuits. Both are previous Nobel prize winners.

As well as having scientific significance, the trio’s work has led to the development of nascent commercial quantum computers that employ superconducting circuits. Physicist and tech entrepreneur Ilana Wisby, who co-founded Oxford Quantum Circuits, told Physics World, “It’s such a brilliant and well-deserved recognition for the community”.

A life in science

Clarke was born in 1942 in Cambridge, UK. He received his BA in physics from the University of Cambridge in 1964 before completing a PhD there in 1968. He then moved to the University of California, Berkeley, to carry out a postdoc, before joining the physics faculty in 1969, where he has remained since.

Devoret was born in Paris, France in 1953. He graduated from Ecole Nationale Superieure des Telecommunications in Paris in 1975 before earning a PhD from the University of Paris, Orsay, in 1982. He then moved to the University of California, Berkeley, to work in Clarke’s group collaborating with Martinis who was a graduate student at the time. In 1984 Devoret returned to France to start his own research group at the Commissariat à l’Energie Atomique in Saclay (CEA-Saclay) before heading to the US to Yale University in 2002. In 2024 he moved to the University of California, Santa Barbara, and also became chief scientist at Google Quantum AI.

Martinis was born in the US in 1958. He received a BS in physics in 1980 and a PhD in physics both from the University of California, Berkeley. He then carried out postdocs at CEA-Saclay, France, and the National Institute of Standards and Technology in Boulder, Colorado, before moving to the University of California, Santa Barbara, in 2004. In 2014 Martinis and his team joined Google with the aim of building the first useful quantum computer before he moved to Australia in 2020 to join the start-up Silicon Quantum Computing. In 2022 he co-founded the company Qolab, of which he is currently the chief technology officer.

The trio did its prizewinning work in the mid-1980s at the University of California, Berkeley. At the time Devoret was a postdoctoral fellow and Martinis was a graduate student – both working for Clarke. They were looking for evidence of macroscopic quantum tunnelling (MQT) in a device called a Josephson junction. This comprises two pieces of superconductor that are separated by an insulating barrier. In 1962 the British physicist Brian Josephson predicted how the Cooper pairs of electrons that carry current in a superconductor can tunnel across the barrier unscathed. This Josephson effect was confirmed experimentally in 1963.

Single wavefunction

The lowest-energy (ground) state of a superconductor is a macroscopic quantum state in which all Cooper pairs are described by a single quantum-mechanical wavefunction. In the late 1970s, the British–American physicist Anthony Leggett proposed that the tunnelling of this entire macroscopic state could be observed in a Josephson junction.

The idea is to put the system into a metastable state in which electrical current flows without resistance across the junction – resulting in zero voltage across the junction. If the system is indeed a macroscopic quantum state, then it should be able to occasionally tunnel out of this metastable state, resulting in a voltage across the junction.

This tunnelling can be observed by increasing the current through the junction and measuring the current at which a voltage occurs – obtaining an average value over many such measurements. As the temperature of the device is reduced, this average current increases – something that is expected regardless of whether the system is in a macroscopic quantum state.

However, at very low temperatures the average current becomes independent of temperature, which is the signature of macroscopic quantum tunnelling that Martinis, Devoret and Clarke were seeking. Their challenge was to reduce the noise in their experimental apparatus, because noise has a similar effect to tunnelling on their measurements.
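
In the standard treatment of a current-biased Josephson junction – given here as context rather than as the trio's exact analysis – the two escape routes out of the metastable state scale very differently with temperature:

\Gamma_{\mathrm{thermal}} \;\sim\; \frac{\omega_p}{2\pi}\,\exp\!\left(-\frac{\Delta U}{k_{\mathrm B}T}\right),
\qquad
\Gamma_{\mathrm{MQT}} \;\sim\; \frac{\omega_p}{2\pi}\,\exp\!\left(-\,7.2\,\frac{\Delta U}{\hbar\omega_p}\right)

Here ΔU is the height of the potential barrier and ω_p is the junction's plasma frequency. Thermal activation dies away as the device is cooled, but the quantum tunnelling rate does not depend on temperature, so below a crossover temperature of roughly ħω_p/(2π k_B) the measured escape rate flattens out – the temperature-independent signature the team was looking for.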

Multilevel system

As well as observing the signature of tunnelling, they were also able to show that the macroscopic quantum state exists in several different energy states. Such a multilevel system is essentially a macroscopic version of an atom or nucleus, with its own spectroscopic structure.

The noise-control techniques developed by the trio to observe MQT and the fact that a Josephson junction can function as a macroscopic multilevel quantum system have led to the development of superconducting quantum bits (qubits) that form the basis of some nascent quantum computers.

The post John Clarke, Michel Devoret and John Martinis win the 2025 Nobel Prize for Physics appeared first on Physics World.

Is materials science the new alchemy for the 21st century?

6 octobre 2025 à 14:00

For many years, I’ve been a judge for awards and prizes linked to research and innovation in engineering and physics. It’s often said that it’s better to give than to receive, and it’s certainly true in this case. But another highlight of my involvement with awards is learning about cutting-edge innovations I either hadn’t heard of or didn’t know much about.

One area that never fails to fascinate me is the development of new and advanced materials. I’m not a materials scientist – my expertise lies in creating monitoring systems for engineering – so I apologize for any over-simplification in what follows. But I do want to give you a sense of just how impressive, challenging and rewarding the field of materials science is.

It’s all too easy to take advanced materials for granted. We are in constant contact with them in everyday life, whether it’s through applications in healthcare, electronics and computing or energy, transport, construction and process engineering. But what are the most important materials innovations right now – and what kinds of novel materials can we expect in future?

Drivers of innovation

There are several – and all equally important – drivers when it comes to materials development. One is the desire to improve the performance of products we’re already familiar with. A second is the need to develop more sustainable materials, whether that means replacing less environmentally friendly solutions or enabling new technology. Third, there’s the drive for novel developments, which is where some of the most ground-breaking work is occurring.

On the environmental front, we know that there are many products with components that could, in principle, be recycled. However, the reality is that many products end up in landfill because of how they’ve been constructed. I was recently reminded of this conundrum when I heard a research presentation about the difficulties of recycling solar panels.

Green problem Solar panels often fail to be recycled at the end of their life despite containing reusable materials. (Courtesy: iStock/Milos Muller)

Photovoltaic cells become increasingly inefficient with time and most solar panels aren’t expected to last more than about 30 years. Trouble is, solar panels are so robustly built that recycling them requires specialized equipment and processes. More often than not, solar panels just get thrown away despite mostly containing reusable materials such as glass, plastic and metals – including aluminium and silver.

It seems ironic that solar panels, which enable sustainable living, could also contribute significantly to landfill. In fact, the problem could escalate significantly if left unaddressed. There are already an estimated 1.8 million solar panels in use in the UK, and potentially billions around the world, with a rapidly increasing install base. Making solar panels more sustainable is surely a grand challenge in materials science.

Waste not, want not

Another vital issue concerns our addiction to new tech, which means we rarely hang on to objects until the end of their life; I mean, who hasn’t been tempted by a shiny new smartphone even though the old one is perfectly adequate? That urge for new objects means we need more materials and designs that can be readily re-used or recycled, thereby reducing waste and resource depletion.

As someone who works in the aerospace industry, I know first-hand how companies are trying to make planes more fuel efficient by developing composite materials that are stronger and can survive higher temperatures and pressures – for example carbon fibre and composite matrix ceramics. The industry also uses “additive manufacturing” to enable more intricate component design with less resultant waste.

Plastics are another key area of development. Many products are made from single-type, recyclable materials, such as polyethylene or polypropylene, which benefit from being light, durable and capable of withstanding chemicals and heat. Trouble is, while polyethylene and polypropylene can be recycled, they both create the tiny “microplastics” that, as we know all too well, are not good news for the environment.

Sustainable challenge Material scientists will need to find practical bio-based alternatives to conventional plastics to avoid polluting microplastics entering the seas and oceans. (Courtesy: iStock/Dmitriy Sidor)

Bio-based materials are becoming more common for everyday items. Think about polylactic acid (PLA), which is a plant-based polymer derived from renewable resources such as cornstarch or sugar cane. Typically used for food or medical packaging, it’s usually said to be “compostable”, although this is a term we need to view with caution.

Sadly, PLA does not degrade readily in natural environments or landfill. To break it down, you need high-temperature, high-moisture industrial composting facilities. So whilst PLA comes from natural plants, it is not straightforward to recycle, which is why single-use disposable items, such as plastic cutlery, drinking straws and plates, are no longer permitted to be made from it.

Thankfully, we’re also seeing greater use of more sustainable, natural fibre composites, such as flax, hemp and bamboo (have you tried bamboo socks or cutlery?). All of which brings me to an interesting urban myth, which is that in 1941 legendary US car manufacturer Henry Ford built a car apparently made entirely of a plant-based plastic – dubbed the “soybean” car (see box).

The soybean car: fact or fiction?

Crazy or credible? Soybean car frame patent signed by Henry Ford and Eugene Turenne Gregorie. (Courtesy: image in public domain)

Henry Ford’s 1941 “soybean” car, which was built entirely of a plant-based plastic, was apparently motivated by a need to make vehicles lighter (and therefore more fuel efficient), less reliant on steel (which was in high demand during the Second World War) and safer too. The exact ingredients of the plastic are, however, not known since there were no records kept.

Speculation is that it was a combination of soybeans, wheat, hemp, flax and ramie (a kind of flowering nettle). Lowell Overly, a Ford designer who had major involvement in creating the car, said it was “soybean fibre in a phenolic resin with formaldehyde used in the impregnation”. Despite being a mix of natural and synthetic materials – and not entirely made of soybeans – the car was nonetheless a significant advancement for the automotive industry more than eight decades ago.

Avoiding the “solar-panel trap”

So what technology developments do we need to take materials to the next level? The key will be to avoid what I coin the “solar-panel trap” and find materials that are sustainable from cradle to grave. We have to create an environmentally sustainable economic system that’s based on the reuse and regeneration of materials or products – what some dub the “circular economy”.

Sustainable composites will be essential. We’ll need composites that can be easily separated, such as adhesives that dissolve in water or a specific solvent, so that we can cleanly, quickly and cheaply recover valuable materials from complex products. We’ll also need recycled composites, using recycled carbon fibre, or plastic combined with bio-based resins made from renewable sources like plant-based oils, starches and agricultural waste (rather than fossil fuels).

Vital too will be eco-friendly composites that combine sustainable composite materials (such as natural fibres) with bio-based resins. In principle, these could be used to replace traditional composite materials and to reduce waste and environmental impact.

Another important trend is developing novel metals and complex alloys. As well as enhancing traditional applications, these are addressing future requirements for what may become commonplace applications, such as wide-scale hydrogen manufacture, transportation and distribution.

Soft and stretchy

Then there are “soft composites”. These are advanced, often biocompatible materials that combine softer, rubbery polymers with reinforcing fibres or nanoparticles to create flexible, durable and functional materials that can be used for soft robotics, medical implants, prosthetics and wearable sensors. These materials can be engineered for properties like stretchability, self-healing, magnetic actuation and tissue integration, enabling innovative and patient-friendly healthcare solutions.

Medical magic Wearable electronic materials could transform how we monitor human health. (Courtesy: Shutterstock/Guguart)

And have you heard of e-textiles, which integrate electronic components into everyday fabrics? These materials could be game-changing for healthcare applications by offering wearable, non-invasive monitoring of physiological information such as heart rate and respiration.

Further applications could include advanced personal protective equipment (PPE), smart bandages and garments for long-term rehabilitation and remote patient care. Smart textiles could revolutionize medical diagnostics, therapy delivery and treatment by providing personalized digital healthcare solutions.

Towards “new gold”

I realize I have only scratched the surface of materials science – an amazing cauldron of ideas where physics, chemistry and engineering work hand in hand to deliver groundbreaking solutions. It’s a hugely and truly important discipline. With far greater success than the original alchemists, materials scientists are adept at creating the “new gold”.

Their discoveries and inventions are making major contributions to our planet’s sustainable economy, from the design, deployment and decommissioning of everyday items to novel solutions that will positively impact the way we live today. Surely it’s an area we should celebrate and, as physicists, become more closely involved in.

The post Is materials science the new alchemy for the 21st century? appeared first on Physics World.

Perovskite detector could improve nuclear medicine imaging

6 octobre 2025 à 10:30

A perovskite semiconductor that can detect and image single gamma-ray photons with both high-spatial and high-energy resolution could be used to create next-generation nuclear medicine scanners that can image faster and provide clearer results. The perovskite is also easier to grow and much cheaper than existing detector materials such as cadmium zinc telluride (CZT), say the researchers at Northwestern University in the US and Soochow University in China who developed it.

Nuclear medicine imaging techniques like single-photon emission computed tomography (SPECT) work by detecting the gamma rays emitted by a short-lived radiotracer delivered to a specific part of a patient’s body. Each gamma ray can be thought of as being a pixel of light, and after millions of these pixels have been collected, a 3D image of the region of interest can be built up by an external detector.

Such detectors are today made from either semiconductors like CZT or scintillators such as NaI:Tl, CsI and LYSO, but CZT detectors are expensive – often costing hundreds of thousands to millions of dollars. CZT crystals are also brittle, making the detectors difficult to manufacture. While NaI is cheaper than CZT, detectors made of this material end up being bulky and generate blurrier images.

High-quality crystals of CsPbBr3

To overcome these problems, researchers led by Mercouri Kanatzidis and Yihui He studied the lead halide perovskite crystal CsPbBr3. They already knew that this was an efficient solar cell material and recently, they discovered that it also showed promise for detecting X-rays and gamma rays.

In the new work, detailed in Nature Communications, the team grew high-quality crystals of CsPbBr3 and fabricated them into detector devices. “When a gamma-ray photon enters the crystal, it interacts with the material and produces electron–hole pairs,” explains Kanatzidis. “These charge carriers are collected as an electrical signal that we can measure to determine both the energy of the photon and its point of interaction.”

The researchers found that their detectors could resolve individual gamma rays at the energies used in SPECT imaging with high resolution. They could also sense extremely weak signals from the medical tracer technetium-99m, which is routinely employed in hospital settings. They were thus able to produce sharp images that could distinguish features as small as 3.2 mm. This fine sensitivity means that patients would be exposed to shorter scan times or smaller doses of radiation compared with NaI or CZT detectors.

Ten years of optimization

“Importantly, a parallel study published in Advanced Materials the same week as our Nature Communications paper directly compared perovskite performance with CZT, the only commercial semiconductor material available today for SPECT, which showed that perovskites can even surpass CZT in certain aspects,” says Kanatzidis.

“The result was possible thanks to our efforts over the last 10 years in optimizing the crystal growth of CsPbBr3, improving the electrode contacts in the detectors and carrier transport and nuclear electronics therein,” adds He. “Since the first demonstration of high spectral resolution by CsPbBr3 in our previous work, it has gradually been recognized as the most promising competitor to CZT.”

Looking forward, the Northwestern–Soochow team is now busy scaling up detector fabrication and improving its long-term stability. “We are also trying to better understand the fundamental physics of how gamma rays interact in perovskites, which could help optimize future materials,” says Kanatzidis. “A few years ago, we established a new company, Actinia, with the goal of commercializing this technology and moving it toward practical use in hospitals and clinics,” he tells Physics World.

“High-quality nuclear medicine shouldn’t be limited to hospitals that can afford the most expensive equipment,” he says. “With perovskites, we can open the door to clearer, faster, safer scans for many more patients around the world. The ultimate goal is better scans, better diagnoses and better care for patients.”

The post Perovskite detector could improve nuclear medicine imaging appeared first on Physics World.

Radioactive BEC could form a ‘superradiant neutrino laser’

4 octobre 2025 à 14:48

Radioactive atoms in a Bose–Einstein condensate (BEC) could form a “superradiant neutrino laser” in which the atomic nuclei undergo accelerated beta decay. The hypothetical laser has been proposed by two US-based researchers, who say that it could be built and tested. While such a neutrino laser has no obvious immediate applications, further developments could potentially assist in the search for background neutrinos from the Big Bang – an important goal of neutrino physicists.

Neutrinos – the ghostly particles produced in beta decay – are notoriously difficult to detect or manipulate because of the weakness of their interaction with matter. They cannot be used to produce a conventional laser because they would pass straight through mirrors unimpeded. More fundamentally, neutrinos are fermions rather than bosons such as photons. This prevents neutrinos forming a two-level system with a population inversion as only one neutrino can occupy each quantum state in a system.

However, another quantum phenomenon called superradiance can also increase the intensity and coherence of emitted radiation. This occurs when the emitters are sufficiently close together to become indistinguishable. The emission then comes not from any single entity but from the collective ensemble. As it does not require the emitted particles to be quantum degenerate, this is not theoretically forbidden for fermions. “There are devices that use superradiance to make light sources, and people call them superradiant lasers – although that’s actually a misnomer,” explains neutrino physicist Benjamin Jones of the University of Texas at Arlington and a visiting professor at the University of Manchester. “There’s no stimulated emission.”

In their new work, Jones and colleague Joseph Formaggio of the Massachusetts Institute of Technology propose that, in a BEC of radioactive atoms, superradiance could enhance the neutrino emission rate and therefore speed up beta decay, with an initial burst before the expected exponential decay commences. “That has not been seen for nuclear systems so far – only for electronic ones,” says Formaggio. Rubidium was used to produce the first ever condensate in 1995 by Carl Wieman and Eric Cornell of the University of Colorado Boulder, and conveniently, one of its isotopes decays by beta emission with a half-life of 86 days.
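
To get a sense of the numbers involved, the textbook Dicke picture of superradiance (an idealization, not the detailed rates computed by Formaggio and Jones) gives:

I_{\mathrm{peak}} \;\propto\; N^{2}\,\Gamma,
\qquad
t_{\mathrm{burst}} \;\sim\; \frac{\ln N}{N\,\Gamma}

where N is the number of indistinguishable emitters and Γ is the single-atom decay rate. The emitted intensity peaks at a value scaling as N² rather than N, with the emission compressed into a burst far shorter than the single-atom lifetime.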

Radioactive vapour

The presence of additional hyperfine states would make direct laser cooling of rubidium-83 more challenging than for the rubidium-87 isotope used by Wieman and Cornell, but not significantly more so than the condensation of rubidium-85, which has also been achieved. Alternatively, the researchers propose that a dual condensate could be created in which rubidium-83 is cooled sympathetically with rubidium-87. The bigger challenge, says Jones, is the Bose–Einstein condensation of a radioactive atom, which has yet to be achieved: “It’s difficult to handle in a vacuum system,” he explains. “You have to be careful to make sure you don’t contaminate your laboratory with radioactive vapour.”

If such a condensate were produced, the researchers predict that superradiance would increase with the size of the BEC. In a BEC of 10⁶ atoms, for example, more than half the atoms would decay within three minutes. The researchers now hope to test this prediction. “This is one of those experiments that does not require a billion dollars to fund,” says Formaggio. “It is done in university laboratories. It’s a hard experiment but it’s not out of reach, and I’d love to see it done and be proven right or wrong.”

If the prediction were proved correct, the researchers suggest it could eventually lead towards a benchtop neutrino source. As the same physics applies to neutrino capture, this could theoretically assist the detection of neutrinos that decoupled from the hot plasma of the universe just seconds after the Big Bang – hundreds of thousands of years before photons in the cosmic microwave background. The researchers emphasize, however, that this would not currently be feasible.

Sound proposal

Neutrino physicist Patrick Huber of Virginia Tech is impressed by the work. “I think for a first, theoretical study of the problem this is very good,” he says. “The quantum mechanics seems to be sound, so the question is if you try to build an experiment what kind of real-world obstacles are you going to encounter?” He predicts that, if the experiment works, other researchers would quite likely find hitherto unforeseen applications.

Atomic, molecular and optical physicist James Thompson of University of Colorado Boulder is sceptical, however. He says several important aspects are either glossed over or simply ignored. Most notably, he calculates that the de Broglie wavelength of the neutrinos would be below the Bohr radius – which would prevent a BEC from feasibly satisfying the superradiance criterion that the atoms be indistinguishable.

“I think it’s a really cool, creative idea to think about,” he concludes, “but I think there are things we’ve learned in atomic physics that haven’t really crept into [the neutrino physics] community yet. We learned them the hard way by building experiments, having them not work and then figuring out what it takes to make them work.”

The proposal is described in Physical Review Letters.

The post Radioactive BEC could form a ‘superradiant neutrino laser’ appeared first on Physics World.

Bayes’ rule goes quantum

3 octobre 2025 à 15:00

How would Bayes’ rule – a technique to calculate probabilities – work in the quantum world? Physicists at the National University of Singapore, Japan’s University of Nagoya, and the Hong Kong University of Science and Technology in Guangzhou have now put forward a possible explanation. Their work could help improve quantum machine learning and quantum error correction in quantum computing.

Bayes’ rule is named after Thomas Bayes who first defined it for conditional probabilities in “An Essay Towards Solving a Problem in the Doctrine of Chances” in 1763.  It describes the probability of an event based on prior knowledge of conditions that might be related to the event. One area in which it is routinely used is to update beliefs based on new evidence (data). In classical statistics, the rule can be derived from the principle of minimum change, meaning that the updated beliefs must be consistent with the new data while only minimally deviating from the previous belief.

In mathematical terms, the principle of minimum change minimizes the distance between the joint probability distributions of the initial and updated belief. Simply put, this is the idea that for any new piece of information, beliefs are updated in the smallest possible way that is compatible with the new facts. For example, when a person tests positive for Covid-19, they may have suspected that they were ill, but the new information confirms this. Bayes’ rule is therefore a way to calculate the probability of having contracted Covid-19 based not only on the test result and the chance of the test yielding a false negative, but also on the patient’s initial suspicions.
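
In symbols, with H the hypothesis (say, "I have Covid-19") and E the evidence (a positive test), the rule reads:

P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) \;+\; P(E \mid \neg H)\,P(\neg H)}

where P(H) encodes the initial suspicion, P(E | H) the test's sensitivity (one minus its false-negative rate) and P(E | ¬H) its false-positive rate.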

Quantum analogue

Quantum versions of Bayes’ rule have been around for decades, but the approach through the minimum change principle had not been tried before. In the new work, a team led by Ge Bai, Francesco Buscemi and Valerio Scarani set out to do just that.

“We found which quantum Bayes’ rule is singled out when one maximizes the fidelity (which is equivalent to minimizing the change) between two processes,” explains Bai. “In many cases, the solution is the ‘Petz recovery map’, proposed by Dénes Petz in the 1980s and which was already considered as being one of the best candidates for the quantum Bayes’ rule. It is based on the rules of information processing, crucial not only for human reasoning, but also for machine learning models that update their parameters with new data.”

Quantum theory is counter-intuitive, and the mathematics is hard, says Bai. “Our work provides a mathematically sound way to update knowledge about a quantum system, rigorously derived from simple principles of reasoning,” he tells Physics World. “It demonstrates that the mathematical description of a quantum system – the density matrix – is not just a predictive tool, but is genuinely useful for representing our understanding of an underlying system. It effectively extends the concept of gaining knowledge, which mathematically corresponds to a change in probabilities, into the quantum realm.”

A conservative stance

The “simple principles of reasoning” encompass the minimum change principle, adds Buscemi. “The idea is that while new data should lead us to update our opinion or belief about something, the change should be as small as possible, given the data received.

“It’s a conservative stance of sorts: I’m willing to change my mind, but only by the amount necessary to accept the hard facts presented to me, no more.”

“This is the simple (yet powerful) principle that Ge mentioned,” he says, “and it guides scientific inference by preventing unwanted biases from entering the reasoning process.”

An axiomatic approach to the Petz recovery map

While several quantum versions of Bayes’ rule have been put forward before now, these were mostly based on having analogous properties to the classical counterpart, adds Scarani. “Recently, Francesco and one co-author proposed an axiomatic approach to the most frequently used quantum Bayes rule, the one using the Petz recovery map. Our work is the first to derive a quantum Bayes rule from an optimization principle, which works very generally for classical information, but which has been used here for the first time in quantum information.”

The result is very intriguing, he says: “We recover the Petz map in many cases, but not all. If we take it that our new approach is the correct way to define a quantum Bayes rule, then previous constructions based on analogies were correct very often, but not quite always; and one or more of the axioms are not to be enforced after all. Our work is therefore a major advance, but it is not the end of the road – and this is nice.”

Indeed, the researchers say they are now busy further refining their quantum Bayes’ rule. They are also looking into applications for it. “Beyond machine learning, this rule could be powerful for inference—not just for predicting the future but also retrodicting the past,” says Bai. “This is directly applicable to problems in quantum communication, where one must recover encoded messages, and in quantum tomography, where the goal is to infer a system’s internal state from observations.

“We will be using our results to develop new, hopefully more efficient, and mathematically well-founded methods for these tasks,” he concludes.

The present study is detailed in Physical Review Letters.

The post Bayes’ rule goes quantum appeared first on Physics World.

The top five physics Nobel prizes of the 21st century revealed

3 octobre 2025 à 12:00

With the 2025 Nobel Prize for Physics due to be unveiled on Tuesday 7 October, Physics World has been getting in the mood by speculating who might win. It’s a prediction game we have fun with every year – and you can check out our infographic to make your own call.

Quantum physics is our hot favourite this time round – it’s the International Year of Quantum Science and Technology and the Nobel Committee for Physics aren’t immune to wider events. But whoever wins, you know that the prize will have been very carefully considered by committee members.

Over the 125 years since the prize was first awarded, almost every seminal finding in physics has been honoured – from the discovery of the electron, neutrino and positron to the development of quantum mechanics and the observation of high-temperature superconductivity.

But what have been the most significant physics prizes of the 21st century? I’m including 2000 as part of this century (ignoring pedants who say it didn’t start till 1 January 2001). During that time, the Nobel Prize for Physics has been awarded 25 times and gone to 68 different people, averaging out at about 2.7 people per prize.

Now, my choice is entirely subjective, but I reckon the most significant prizes are those that:

  • are simple to understand;
  • were an experimental or theoretical tour-de-force;
  • have long-term implications for science and open new paths;
  • expose deeper questions at their heart;
  • were on people’s bucket lists and/or have long, historical links;
  • were won by people we’d heard of at the time;
  • are of wider interest to non-physicists or those with only a passing interest in the subject.

So with that in mind, here’s my pick of the five top physics Nobel prizes of the 21st century. You’ll probably disagree violently with my choice so e-mail us with your thoughts.

5. Neutrino oscillation – 2015 prize

Tiny success The Super-Kamiokande detector in Japan, where neutrino oscillations were first spotted, led to the 2015 Nobel Prize for Physics for Takaaki Kajita and Art McDonald. (Courtesy: Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo)

Coming in at number five in our list of top physics Nobels of the 21st century is the discovery of neutrino oscillation, which went to Takaaki Kajita and Art McDonald in 2015. The neutrino was first hypothesized by Wolfgang Pauli back in 1930 as “a desperate remedy” for the fact that energy didn’t seem to be conserved when a nucleus emits an electron via beta decay. Fred Reines shared a Nobel prize in 1995 for the original detection of neutrinos – made with the late Clyde Cowan – which are chargeless particles that interact with matter via the weak force and are fiendishly hard to detect.

But what Kajita (at the Super-Kamiokande experiment in Japan) and McDonald (at the Sudbury Neutrino Observatory in Canada) did was to see them switch, or “oscillate”, from one type to another. Their work proved that these particles, which physicists had assumed to be massless, do have mass after all. This was at odds with the Standard Model of particle physics – and isn’t it fun when physics upends conventional wisdom?

What’s more, the discovery of neutrino oscillation explained why Ray Davis and John Bahcall had seen only a third of the solar neutrinos predicted by theory in their famous experiment begun in the 1960s. This discrepancy arose because solar neutrinos oscillate between flavours as they travel to the Earth – and the experiment detected only a third of them because it was sensitive mainly to electron neutrinos, not the other types.
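
In the simplest two-flavour picture, the probability that a neutrino born as one flavour is detected as the other after travelling a distance L with energy E is:

P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)

The oscillation only happens if the mass-squared splitting Δm² is non-zero – which is why seeing neutrinos change flavour is direct evidence that they have mass – and the dependence on L/E is what makes experiments at different baselines sensitive to different oscillations.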

4. Bose–Einstein condensation – 2001 prize

Cool finding The first Bose–Einstein condensate (BEC) was created in 1995 from a cloud of cold rubidium atoms by Eric Cornell and Carl Wieman, with the “spike” in the density of atoms indicating many atoms occupying the same quantum state – the signature of a BEC. Cornell and Wieman won the 2001 Nobel Prize for Physics along with Wolfgang Ketterle, who made a BEC a few months later. (Courtesy: NIST/JILA/CU-Boulder)

At number four in our list of the best physics Nobel prizes of the 21st century is the 2001 award, which went to Eric Cornell, Wolfgang Ketterle and Carl Wieman for creating the first Bose–Einstein condensates (BECs). I love the idea that Cornell and Wieman created a new state of matter – in which particles are locked together in their lowest quantum state – at exactly 10.54 a.m. on Monday 5 June 1995 at the JILA laboratory in Boulder, Colorado.

First envisaged by Satyendra Nath Bose and Albert Einstein in 1924, a BEC was finally realized when Cornell and Wieman cooled 2000 rubidium-87 atoms to 170 nK using the then-new techniques of laser and evaporative cooling. Within a few months, Wolfgang Ketterle over at the Massachusetts Institute of Technology had also made a BEC, from 500,000 sodium-23 atoms at 2 μK.
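
Why such absurdly low temperatures? For a uniform, non-interacting Bose gas the condensation threshold can be written down in one line (trapped atomic clouds like those at JILA obey a slightly different, harmonic-trap version):

k_{\mathrm B}T_c \;=\; \frac{2\pi\hbar^{2}}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}

where n is the number density, m the atomic mass and ζ(3/2) ≈ 2.612. Equivalently, condensation sets in once the thermal de Broglie wavelength λ_dB = h/√(2π m k_B T) grows so large that nλ_dB³ ≳ 2.612 – the point at which the atoms' wavefunctions start to overlap.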

Since then hundreds of groups around the world have created BECs, which have been used for everything from slowing light to making “atom lasers” and even modelling the behaviour of black holes. Moreover, the interactions between the atoms can be finely controlled, meaning BECs can be used to simulate properties of condensed-matter systems that are extremely difficult – or impossible – to probe in real materials.

3. Higgs boson – 2013 prize

Particle pioneers Peter Higgs (right) in the CERN auditorium with François Englert on 4 July 2012, when the discovery of the Higgs boson was announced, for which the pair won the 2013 Nobel Prize for Physics. (Courtesy: CERN/Maximilien Brice)

Coming in at number three is the 2013 prize, which went to François Englert and the late Peter Higgs for discovering the mechanism by which subatomic particles get mass. Their work was confirmed in 2012 by the discovery of the so-called Higgs boson at the ATLAS and CMS experiments at CERN’s Large Hadron Collider.

Higgs and Englert didn’t, of course, win for detecting the Higgs boson, although the Nobel citation does credit the ATLAS and CMS teams. What they were being recognized for was work done back in the early 1960s, when they published papers independently of each other that provided a mechanism by which particles can have the masses we observe.

Higgs had been studying spontaneous symmetry breaking, which led to the notion of massless, force-carrying particles, known as Goldstone bosons. But what Higgs realized was that Goldstone bosons don’t necessarily occur when a symmetry is spontaneously broken – they could be reinterpreted as an additional quantum (polarization) state of a force-carrying particle.

The leftover terms in the equations represented a massive particle – the Higgs boson – avoiding the need for a massless unobserved particle. Writing in his now-famous 1964 paper (Phys. Rev. Lett. 13 508), Higgs highlighted the possibility of a massive spin-zero boson, which is what was discovered at CERN in 2012.

That work probably got more media attention than any other Nobel prize this century, because who doesn’t love a huge international collaboration tracking down a particle on the biggest physics experiment of all time? Especially as the Standard Model doesn’t predict what its mass should be, so it’s hard to know where to look. But it doesn’t take top slot in my book because it “only” confirmed what we had expected, and we’re still on the look-out for “new physics” beyond the Standard Model.

2. Dark energy – 2011 prize

Cosmic discovery The universe has been expanding since the dawn of time, but instead of slowing down, in the last five or six billion years the expansion has sped up, bagging the 2011 Nobel prize for Saul Perlmutter, Adam Riess and Brian Schmidt. (Courtesy: NASA/WMAP Science Team)

Taking second place in our list is the discovery that the expansion of the universe is not slowing down but accelerating, thanks to studies of exploding stars called supernovae. As with so many Nobel prizes these days, the 2011 award went to three people: Brian Schmidt, who led the High-Z Supernova Search Team, his colleague Adam Riess, and Saul Perlmutter, who led the rival Supernova Cosmology Project.

Theirs was a pretty sensational finding that implied that about three-quarters of the mass–energy content of the universe must consist of some weird, gravitationally repulsive substance, dubbed “dark energy”, about which even now we still know virtually nothing. It had previously been assumed that the universe would – depending on how much matter it contains – either collapse eventually in a big crunch or go on expanding forever, albeit at an ever more gentle pace.

The teams had been studying type Ia supernovae, which always blow up in the same way when they reach the same mass, meaning they can be used as “standard candles” to measure distances in the universe accurately. Such supernovae are very rare, and the two groups had to carry out painstaking surveys using ground-based telescopes and the Hubble Space Telescope to find enough of them.
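
The article doesn’t give the relation, but the textbook version of the standard-candle idea is simple: if a supernova’s intrinsic peak brightness (absolute magnitude $M$) is known, comparing it with the apparent magnitude $m$ yields the luminosity distance $d_L$ directly via the distance modulus

$$m - M = 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right).$$

Plotting these distances against the supernovae’s redshifts showed that distant explosions were fainter – and hence further away – than any decelerating universe would allow.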

The teams thought they’d find that the expansion of the universe is decelerating, but as more and more data piled up, the results only made sense if the universe has a force pushing matter apart. The Royal Swedish Academy of Sciences said the discovery was “as significant” as the 2006 prize, which had gone to John Mather and the late George Smoot for their discovery in 1992 of the minute temperature variations in the cosmic microwave background – the fossil radiation that seeded the large-scale structure of today’s universe.

But to me, the accelerating expansion has the edge as the implications are even more profound, pointing as they do to the composition and fate of the cosmos.

1. Gravitational waves – 2017 prize

Artist's impression of gravitational waves from a black-hole binary
Space–time collision Artist’s impression of a black-hole binary system generating gravitational waves, the discovery of which led to the 2017 Nobel Prize for Physics for Barry Barish, Kip Thorne and Rainer Weiss – (so far) the top physics prize of the 21st century. (Courtesy: LIGO/T Pyle)

And finally, the winner of the greatest Nobel Prize for Physics of the 21st century is the 2017 award, which went to Barry Barish, Kip Thorne and the late Rainer Weiss for the discovery of gravitational waves. Not only is it the most recent prize on my list, it’s also memorable for being a genuine first – discovering the “ripples in space–time” originally predicted by Einstein. The two LIGO detectors in Livingston, Louisiana, and Hanford, Washington, are also astonishing feats of engineering, capable of detecting changes in distance far smaller than the radius of a proton.
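
The article doesn’t do the arithmetic, but a rough back-of-the-envelope figure shows just how small that is. GW150914 produced a peak strain of about $h \sim 10^{-21}$, so over LIGO’s 4 km arms the change in length was roughly

$$\Delta L = hL \sim 10^{-21} \times 4\,\mathrm{km} \approx 4\times10^{-18}\,\mathrm{m},$$

a few hundred times smaller than the proton’s charge radius of about $0.84\times10^{-15}$ m.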

The story of how gravitational waves were first observed is now well known. It was in the early hours of the morning of Monday 14 September 2015 – just after staff who had been calibrating the LIGO detector in Livingston had gone to bed – that gravitational waves created by the collision of two black holes 1.3 billion light-years away swept through the LIGO detectors in the US. The historic measurement, dubbed GW150914, made headlines around the world.

More than 200 gravitational-wave events have so far been detected – and observing these ripples, which had long been on many physicists’ bucket lists, has over the last decade become almost routine. Most detections have been binary black-hole mergers, though there have also been a few neutron-star/black-hole collisions and some binary neutron-star mergers. Gravitational-wave astronomy is now a well-established field, thanks not just to LIGO but also to Virgo in Italy and KAGRA in Japan. There are also plans for an even more advanced Einstein Telescope, which could detect in a day what it took LIGO a decade to spot.

Gravitational waves also opened the whole new field of “multimessenger astronomy” – the idea that you observe a cosmic event with gravitational waves and then do follow-up studies using other instruments, measuring it with cosmic rays, neutrinos and photons. Each of these cosmic messengers is produced by distinct processes and so carries information about different mechanisms within its source.

The messengers also differ widely in how they carry this information to the astronomer: gravitational waves and neutrinos, for example, pass almost unimpeded through matter and intergalactic magnetic fields, giving an unobstructed view of their sources. Combining observations of different messengers will therefore let us see more and look further.

  • Think we’re right or spectacularly wrong with our pick of the top five Nobel physics prizes of the 21st century? Get in touch by e-mailing us with your thoughts.

The post The top five physics Nobel prizes of the 21st century revealed appeared first on Physics World.

ASTRO 2025: expanding the rules of radiation therapy

3 October 2025 at 10:00

“ASTRO 2025 has opened with a palpable sense of momentum. The turnout has been really strong and the energy is unmistakable,” said Catheryn Yashar, president-elect of the American Society for Radiation Oncology (ASTRO). “There’s a buzz in the exhibit hall, lots of talking in the lobby. And the sessions have generated excitement – it’s data that’s challenging our long-held standards and testing the expanding rules of radiation therapy.”

Yashar was speaking at a news briefing arranged to highlight a select few high-impact abstracts. And in accord with the ASTRO 2025 meeting’s theme of “rediscovering radiation medicine and exploring new indications”, the chosen presentations included examples of innovative techniques and less common indications, including radiotherapy treatments of non-malignant disease and a novel combination of external-beam radiation with radioligand therapy.

Keeping heart rhythm under control

Ventricular tachycardia (VT) is a life-threatening heart rhythm disorder that’s usually treated with medication, implantation of a cardiac device and then catheter ablation, an invasive procedure in which a long catheter is inserted via a leg vein into the heart to destroy abnormal cardiac tissue. A research team at Washington University School of Medicine has now shown that stereotactic arrhythmia radiation therapy (STAR) could provide an equally effective and potentially safer treatment alternative.

Shannon Jiang at ASTRO 2025
STAR researcher Shannon Jiang from Washington University School of Medicine. (Courtesy: ©ASTRO/Nick Agro 2025)

STAR works by delivering precision beams of radiation to the scarred tissue that drives the abnormal heart rhythm, without requiring invasive catheters or anaesthesia.

“Over the past several years, STAR has emerged as a novel non-invasive treatment for patients with refractory VT,” said Shannon Jiang, who presented the team’s findings at ASTRO. “So far, there have been several single-arm studies showing promising results for STAR, but there are currently no data that directly compare STAR to catheter ablation, and that’s the goal for our study.”

Jiang and colleagues retrospectively analysed data from 43 patients with recurrent refractory VT (which no longer responds to treatment). Patients were treated with either STAR or repeat catheter ablation at a single institution. The team found that both treatments were similarly effective at controlling arrhythmia, but patients receiving radiation had far fewer serious side effects.

Within one year of the procedure, eight patients (38%) in the ablation group experienced treatment-related serious adverse events, compared with just two (9%) in the STAR group. These complications occurred sooner after ablation (median six days) than after radiation (10 months). In four cases, patients receiving ablation died within a month of treatment, soon after experiencing an adverse event, and one patient did not survive the procedure. In contrast, in the STAR group, there were no deaths attributed to treatment-related side effects. One year after treatment, overall survival was 73% following radiation and 58% after ablation; at three years (the median follow-up time), it was 45% in both groups.

“Despite the fact that this is a retrospective, non-randomized analysis, our study provides some important preliminary data that support the use of STAR as a potentially safer and equally effective treatment option for patients with high-risk refractory VT,” Jiang concluded.

Commenting on the study, Kenneth Rosenzweig from the Icahn School of Medicine at Mount Sinai emphasized that the vast majority of patients with VT will be well cared for by standard cardiac ablation, but that radiation can help in certain situations. “This study shows that for patients where the ablation just isn’t working anymore, there’s another option. Some patients will really need the help of radiation medicine to get them through, and work like this will help us figure out who those patients are and what we can do to improve their quality of life.”

A radiation combination

A clinical trial headed up at the University of California, Los Angeles, has shown that adding radioligand therapy to metastasis-directed radiation therapy more than doubles progression-free survival in men with oligometastatic prostate cancer, without increasing toxicity.

“When we pair external-beam radiation directed to tumours we can see with a radiopharmaceutical to reach microscopic disease we can’t see, patients can experience a notably longer interval before progression,” explained principal investigator Amar Kishan.

Patients with oligometastatic prostate cancer (up to five metastases outside the prostate after initial therapy) are increasingly treated with metastasis-directed stereotactic body radiation therapy (SBRT). While this treatment can delay progression and the need for hormone therapy, in most patients the cancer recurs, likely due to the presence of undetectable microscopic disease.

Amar Kishan at ASTRO 2025
Delaying cancer progression Amar Kishan from the University of California, Los Angeles. (Courtesy: ©ASTRO/Scott Morgan 2025)

Radioligand therapy uses a radiopharmaceutical drug to deliver precise radiation doses directly to tumours. For prostate cancer, the drug combines the radioactive isotope lutetium-177 with a ligand that targets the prostate-specific membrane antigen (PSMA) found on cancer cells. Following its promising use in men with advanced prostate cancer, the team examined whether adding radioligand therapy to SBRT could also improve progression-free survival in men with early metastatic disease.

The phase II LUNAR trial included 92 men with oligometastatic prostate cancer and one to five distant lesions as seen on a PSMA PET/CT scan. The patients were randomized to receive either SBRT alone (control arm) or two cycles of the investigational PSMA-targeting drug 177Lu-PNT2002, eight weeks apart, followed by SBRT.

At a median follow-up of 22 months, adding radioligand therapy improved median progression-free survival from 7.4 to 17.3 months. Hormone therapy was also delayed, from 14.1 months in the control group to 24.3 months. Of 65 progression events observed, 64 were due to new lesions rather than regrowth at previously treated sites. Both treatments were well tolerated, with no difference in severe side effects between the two groups.

“We conclude that adding two cycles of 177Lu-PNT2002 to SBRT significantly improves progression-free survival in men with oligorecurrent prostate cancer, presumably by action on occult metastatic disease, without an increase in toxicity,” said Kishan. “Ultimately, while this intervention worked well, 64% of patients even on the investigational arm still had some progression, so we could further optimize the dose and cycle and other variables for these patients.”

Pain relief for knee osteoarthritis

Osteoarthritis is a painful joint disease that arises when the cartilage cushioning the ends of bones wears down. Treatments range from pain medication, which can cause significant side effects with long-term use, to invasive joint replacement surgery. Byoung Hyuck Kim from Seoul National University College of Medicine described how low-dose radiotherapy (LDRT) could help bridge this treatment gap.

Byoung Hyuck Kim at ASTRO 2025
Easing arthritis pain Byoung Hyuck Kim from Seoul National University College of Medicine. (Courtesy: ©ASTRO/Scott Morgan 2025)

LDRT could provide a non-invasive alternative treatment for knee osteoarthritis, a leading cause of disability, Kim explained. But while it is commonly employed in Europe to treat joint pain, its use in other countries is limited by low awareness and a lack of high-quality randomized evidence. To address this shortfall, Kim and colleagues performed a randomized, placebo-controlled trial designed to provide sufficient evidence to incorporate LDRT into the clinical standard of care.

“There’s a clinical need for moderate interventions between weak pain medications and aggressive surgery, and we think radiation may be a suitable option for those patients, especially when drugs and injections are poorly tolerated,” said Kim.

The multicentre trial included 114 patients with mild to moderate knee osteoarthritis. Participants were randomized to receive one of three treatments: 0.3 Gy radiotherapy in six fractions; 3 Gy in six fractions; or sham irradiation where the treatment system did not deliver radiation – an approach that had not been tested in previous studies.

The use of pain medication was limited, to avoid masking effects from the radiation itself. Response was considered positive if the patients (who did not know which treatment they had received) exhibited improvements in pain levels, physical function and overall condition.

“Interestingly, at one month [after treatment], the response rates were very similar across all groups, which reflects a strong placebo effect from the sham group,” said Kim. “At four months, after the placebo effect had diminished, the 3 Gy group demonstrated a significantly higher response rate compared to the sham control group; however, the 0.3 Gy group did not.”

The response rates at four months were 70.3%, 58.3% and 41.7%, for the 3 Gy, 0.3 Gy and sham groups, respectively. As expected, with radiation doses less than 5% of those typically used for cancer treatments, no radiation-related side effects were observed.
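
As a rough sanity check – the comparison isn’t spelled out in the article, and this assumes the 3 Gy arm refers to the total dose – a typical curative course of radiotherapy delivers around 60–70 Gy, so

$$\frac{3\,\mathrm{Gy}}{60\text{–}70\,\mathrm{Gy}} \approx 4\text{–}5\%,$$

consistent with the statement that the osteoarthritis doses are less than about 5% of those used in cancer treatment.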

“Our study shows that a single course of low-dose radiotherapy improves knee osteoarthritis symptoms and function at four months, with no treatment-related toxicity observed,” Kim concluded. “So our trial could provide objective evidence and suggest that LDRT is a non-pharmacologic scalable option that merits further trials.”

“While small, [the study] was really well executed in terms of being placebo controlled. It clearly showed that the 3 Gy arm was superior to the placebo control arm and there was a 30% benefit,” commented Kristina Mirabeau-Beale from GenesisCare. “So I think we can say definitively that the benefit is from radiation more than just the placebo effect of interacting with our healthcare system.”

The post ASTRO 2025: expanding the rules of radiation therapy appeared first on Physics World.

Quantum information or metamaterials: our predictions for this year’s Nobel Prize for Physics

2 October 2025 at 19:30
Infographic showing Nobel physics prizes in terms of field of research
Courtesy: Alison Tovey/IOP Publishing

On Tuesday 7 October the winner(s) of the 2025 Nobel Prize for Physics will be announced. The process of choosing the winners is highly secretive, so looking for hints about who will be this year’s laureates is futile. Indeed, in the immediate run-up to the announcement, only members of the Nobel Committee for Physics and the Class for Physics at the Royal Swedish Academy of Sciences know who will be minted as the latest Nobel laureates. What is more, recent prizes provide little guidance because the deliberations and nominations are kept secret for 50 years. So we really are in the dark when it comes to predicting who will be named next week.

If you would like to learn more about how the Nobel Prize for Physics is awarded, check out this profile of Lars Brink, who served on the Nobel Committee for Physics on eight occasions.

But this level of secrecy doesn’t stop people like me from speculating about this year’s winners. Before I explain the rather lovely infographic that illustrates this article – and how it could be used to predict future Nobel winners – I am going to share my first prediction for next week.

Inspired by last year’s physics Nobel prize, which went to two computer scientists for their work on artificial intelligence, I am predicting that the 2025 laureates will be honoured for their work on quantum information and algorithms. Much of the pioneering work in this field was done several decades ago, and has come to fruition in functioning quantum computers and cryptography systems. So the time seems right for an award and I have four people in mind. They are Peter Shor, Gilles Brassard, Charles Bennett and David Deutsch. However, only three can share the prize.

Moving on to our infographic, which gives a bit of pseudoscientific credibility to my next predictions! It charts the history of the physics Nobel prize in terms of field of endeavour. One thing that is apparent from the infographic is that since about 1990 there have been clear gaps between awards in certain fields. If you look at “atomic, molecular and optical physics”, for example, there are gaps between awards of about 5–10 years. One might conclude, therefore, that the Nobel committee considers the field of an award and tries to avoid bunching together awards in the same field.

Judging by the infographic, we are long overdue a prize in nuclear and particle physics – the last was 10 years ago. However, we haven’t had many big breakthroughs in this field lately. Two aspects of particle physics that have been very fruitful in the 21st century have been the study of the quark–gluon plasma formed when heavy nuclei collide, and the precise study of antimatter – observing how it behaves under gravity, for example. But I think it might be a bit too early for Nobels in these fields.

One possibility for a particle-physics Nobel is the development of the theory of cosmic inflation, which seeks to explain the observed nature of the current universe by invoking an exponential expansion of the universe in its very early history. If an award were given for inflation, it would most certainly go to Alan Guth and Andrei Linde. A natural for the third slot would have been Alexei Starobinsky, who sadly died in 2023 – and Nobels are not awarded posthumously. If there were a third winner for inflation, it would probably be Paul Steinhardt.

Invisibility cloaks

2016 was the last year when we had a Nobel prize in condensed-matter physics, so what work in that field would be worthy of an award this year? There has been a lot of very interesting research done in the field of metamaterials – materials that are engineered to have specific properties, particularly in terms of how they interact with light or sound.

A Nobel prize for metamaterials would surely go to the theorist John Pendry, who pioneered the concept of transformation optics. This simplifies our understanding of how light interacts with metamaterials and helps with the design of objects and devices with amazing properties. These include invisibility cloaks – the first of which was built in 2006 by the experimentalist David Smith, who I think is also a contender for this year’s Nobel prize. Smith’s cloak works at microwave frequencies, but my nomination for the third slot has done an amazing amount of work on developing metamaterials for practical applications in optics. If you follow this field, you know that I am thinking of the applied physicist Federico Capasso – who is also known for the invention of the quantum cascade laser.

The post Quantum information or metamaterials: our predictions for this year’s Nobel Prize for Physics appeared first on Physics World.

US scientific societies blast Trump administration’s plan to politicize grants

2 October 2025 at 17:30

Almost 60 US scientific societies have signed a letter calling on the US government to “safeguard the integrity” of the peer-review process when distributing grants. The move is in response to an executive order issued by the Trump administration in August that places accountability for reviewing and awarding new government grants in the hands of agency heads.

The executive order – Improving Oversight of Federal Grantmaking – calls on each agency head to “designate a senior appointee” to review new funding announcements and to “review discretionary grants to ensure that they are consistent with agency priorities and the national interest.”

The order outlines several previous grants that it says have not aligned with the Trump administration’s current policies, claiming that in 2024 more than a quarter of new National Science Foundation (NSF) grants went to diversity, equity and inclusion efforts and what it calls “other far-left initiatives”.

“These NSF grants included those to educators that promoted Marxism, class warfare propaganda, and other anti-American ideologies in the classroom, masked as rigorous and thoughtful investigation,” the order states. “There is a strong need to strengthen oversight and coordination of, and to streamline, agency grantmaking to address these problems, prevent them from recurring, and ensure greater accountability for use of public funds more broadly.”

Increasing burdens

In response, the 58 societies – including the American Physical Society, the American Astronomical Society, the Biophysical Society, the American Geophysical Union and SPIE – have written to the majority and minority leaders of the US Senate and House of Representatives to voice their concerns that the order “raises the possibility of politicization” in federally funded research.

“Our nation’s federal grantmaking ecosystem serves as the gold standard for supporting cutting-edge research and driving technological innovation worldwide,” the letter states. “Without the oversight traditionally applied by appropriators and committees of jurisdiction, this [order] will significantly increase administrative burdens on both researchers and agencies, slowing, and sometimes stopping altogether, vital scientific research that our country needs.”

The letter says that more review and oversight by the US Congress is needed before the order goes into effect, adding that the scientific community “is eager” to work with Congress and the Trump administration “to strengthen our scientific enterprise”.

The post US scientific societies blast Trump administration’s plan to politicize grants appeared first on Physics World.
