
UK physics leaders express ‘deep concern’ over funding cuts in letter to science minister Patrick Vallance

3 March 2026 at 15:48

The heads of university physics departments in the UK have published an open letter expressing their “deep concern” about funding changes announced late last year by UK Research and Innovation (UKRI), the umbrella organisation for the UK’s research councils.

Addressed to science minister Patrick Vallance, the letter says the cuts are causing “reputational risk” and calls for “strategic clarity and stability” to ensure that UK physics can thrive.

It has so far been signed by 58 people who represent 45 different universities, including Birmingham, Bristol, Cambridge, Durham, Imperial College, Liverpool, Manchester and Oxford.

The letter says that the changes at UKRI “risk undermining science’s fundamental role in improving our prosperity, health and quality of life, as well as delivering sustainable growth through innovation, productivity and scientific leadership”.

The signatories warn that the UK’s international standing in physics is “a strategic asset” and that areas such as particle physics, astronomy and nuclear physics are “especially important”.

Raising concerns

The decision by the heads of physics to write to Vallance comes in the wake of UKRI announcing in December that it will adjust how it allocates government funding for scientific research and infrastructure.

The Science and Technology Facilities Council (STFC), which is part of UKRI, stated that projects would need to be cut given inflation, rising energy costs and “unfavourable movements in foreign exchange rates”, which together have increased STFC’s costs by over £50m a year.

The STFC noted that it would need to reduce spending from its core budget by at least 30% compared with 2024/2025 levels, while also cutting the number of projects financed by its infrastructure fund.

The council has already said two UK national facilities – the Relativistic Ultrafast Electron Diffraction and Imaging facility and a mass spectrometry centre dubbed C‑MASS – will now not be prioritised.

In addition, two international particle-physics projects will not be supported: a UK-led upgrade to the LHCb experiment at CERN, as well as a contribution to the Electron-Ion Collider, which is currently being built at the Brookhaven National Laboratory.

Philip Burrows, director of the John Adams Institute for Accelerator Science at the University of Oxford, who is one of the signatories of the letter, told Physics World that the cuts are “like buying a Formula-1 car but not being able to afford the driver”.

Burrows admits that the STFC has been hit “particularly hard” by its flat-cash settlement, given that a large fraction of its expenditure goes on the UK’s subscriptions to international facilities and on operating the UK’s flagship national facilities.

But because most of the rest of the STFC’s budget supports scientists to do research at those facilities, he is concerned that the funding cuts will fall disproportionately on the science programme.

“Constraining these areas risks weakening the very talent pipeline on which the UK’s innovation economy depends,” the letter states. “Fundamental physics also delivers substantial public engagement and cultural impact, strengthening public support for science and reinforcing the UK’s reputation as a global scientific leader.”

The signatories also say they are “particularly concerned” about the UK’s capacity to lead the scientific exploitation of major international projects. “An abrupt pause in funding for key international science programmes risks damaging UK researchers’ competitive advantage into the 2040s,” they note.

The letter now calls on the government to work with UKRI and STFC to “stabilise” curiosity-driven grants for physics within STFC “at a minimum of flat funding in real terms” as well as protect post-docs, students and technicians from the cuts.

It also calls on the UK to develop a long-term strategy for infrastructure, and urges the government to address facilities cost pressures through “dedicated and equitable mechanisms so that external shocks do not singularly erode the UK’s research base in STFC-funded research areas”.

The news comes as Michele Dougherty today formally stepped down from her role as president of the Institute of Physics (IOP). Dougherty, who also holds the position of executive chair of the STFC, had previously stepped back from presidential duties on 26 January due to a conflict of interest.

Paul Howarth, who has been IOP president-elect since September, will now become IOP president.


Ancient reversal of Earth’s magnetic field took an extraordinarily long time

3 March 2026 at 15:00

The Earth’s magnetic poles have reversed 540 times over the past 170 million years. Usually, these reversals are relatively speedy in geological terms, taking around 10,000 years to complete. Now, however, scientists in the US, France and Japan have found evidence of much slower reversals deep in Earth’s geophysical past. Their findings could have important implications for our understanding of Earth’s climate and evolutionary history.

Scientists think the Earth’s magnetic field arises from a dynamo effect created by molten metal circulating inside the planet’s outer core. Its consequences include the bubble-like magnetosphere, which shields us from the solar wind and cosmic radiation that would otherwise erode our atmosphere.

From time to time, this field weakens, and the Earth’s magnetic north and south poles switch places. This is known as a geomagnetic reversal, and we know about it because certain types of terrestrial rocks and marine sediment cores contain evidence of past reversals. Judging from this evidence, reversals usually take a few thousand years, during which time the poles drift before settling again on opposite sides of the globe.

Looking into the past

Researchers led by Yuhji Yamamoto of Kochi University, Japan, and Peter Lippert of the University of Utah, US, have now identified two major exceptions to this rule. Drawing on evidence obtained during the Integrated Ocean Drilling Program expedition in 2012, they say that around 40 million years ago, during the Eocene epoch, the Earth experienced two reversals that took 18,000 and 70,000 years respectively.

The team based these findings on cores of sediment extracted off the coast of Newfoundland, Canada, up to 250 metres below the seabed. These cores contain crystals of magnetite that were produced by a combination of ancient microorganisms and other natural processes. The iron oxide particles within these crystals align with the polarity of the Earth’s magnetic field at the time the sediments were deposited. Because marine sediments are far less affected by erosion and weathering than sediments onshore, Yamamoto says the information they preserve about past Earth environments – including geomagnetic conditions – is exceptionally clean.

Significance for evolutionary history

The team says the difference between a geomagnetic reversal that takes 10,000 years and one that takes 70,000 years is significant because prolonged intervals of weaker geomagnetic fields would have exposed the Earth to higher amounts of cosmic radiation for longer. The effects on living creatures could have been devastating, says Lippert. As well as higher rates of genetic mutations due to increased radiation, he points out that organisms from bacteria to birds use the Earth’s magnetic field while navigating. “A lower strength field would create sustained pressures on these organisms to adapt,” he says.

If humans had existed at the time of these reversals, the effects on our species could have been similarly profound. “Modern humans (Homo sapiens) are thought to have begun dispersing out of Africa only about 50,000 years ago,” Yamamoto observes. “If a geomagnetic reversal can persist for a period comparable to – or even longer than – this timescale, it implies that the Earth’s environment could undergo substantial and continuous change throughout the entire period of human evolution.”

Although our genetic ancestors dodged that particular bullet, Yamamoto thinks the team’s findings, which are published in Nature Communications Earth & Environment, offer a valuable perspective on how evolution and environmental change could interact in the future. “This period corresponds to an epoch when Earth was far warmer than it is today, and when Greenland is thought to have been a truly ‘green land’,” he explains. “We also know that atmospheric CO₂ concentrations during this era were comparable to levels projected for the end of this century, making it an important ‘climate analogue’ for understanding near‑future climate conditions.”

The discovery could also have more direct implications for future life on Earth. The magnitude of the Earth’s magnetic field has decreased by around 5% in each century since records began. This decrease, combined with the slow drift of our current magnetic North Pole towards Siberia, could indicate that we are in the early stages of a new geomagnetic reversal. Re‑evaluating the duration of such reversals is thus not only an issue for geophysicists, Yamamoto says. It’s also an important opportunity to reconsider fundamental questions about how we should coexist with our planet and how we ought to confront a continually changing environment.
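As a rough, purely illustrative extrapolation (assuming the roughly 5%-per-century decline simply continued unchanged, which nothing in the data guarantees), the field strength would decay as

B(t) = B_0 \times 0.95^{\,t/100\,\mathrm{yr}}, \qquad t_{1/2} = \frac{100\,\ln 2}{\ln(1/0.95)}\ \mathrm{yr} \approx 1350\ \mathrm{yr},

that is, the field would halve in roughly 13 centuries at the current rate – a reminder of how quickly, on geological timescales, such changes can accumulate.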

Motivation for future studies

John Tarduno, a geophysicist at the University of Rochester, US, who was not involved in the study, describes it as “outstanding” work that “documents an exciting discovery bearing on the nature of magnetic shielding through time and the geomagnetic reversal process”. He agrees that reduced shielding could have had biotic effects, and adds that the discovery of long reversal transitions could influence scientific thinking on the statistics of field reversals – including questions of whether the field retains some “memory” of previous events. “This new study will provide motivation to examine reversal transitions at very high resolution,” Tarduno says.

For their next project, Yamamoto and colleagues aim to use sequences of lava flows in Iceland to analyse how the Earth’s magnetic field evolved. Lippert’s team, for its part, will be studying features called geomagnetic excursions that appear in both deep sea and terrestrial sediments. Such excursions are evidence of short-lived, incomplete attempts at field reversals, and Lippert explains that they can be excellent stratigraphic markers, helping scientists correlate records on geological timescales and compare them with samples taken from different parts of the world. “Excursions, like long reversals, can inform our understanding of what ultimately causes a geomagnetic field reversal to start and persist to completion,” he says.


Focusing on fusion: Debbie Callahan talks commercial laser fusion

3 March 2026 at 12:00
Fusion adopter Debbie Callahan is chief strategy officer at Focused Energy. (Courtesy: Focused Energy)

With the world’s energy demands increasing, and our impact on the climate becoming ever clearer, the search is on for greener, cleaner energy production. That’s why research into fusion energy is undergoing something of a renaissance.

Construction of the International Thermonuclear Experimental Reactor (ITER) in France – the world’s largest fusion experiment – is currently under way, while there are numerous other large-scale facilities and academic research projects too. There has also been a rise in the number of smaller commercial companies joining the race.

One person at the forefront of fusion research is Debbie Callahan – a plasma physicist who spent 35 years working at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in the US. She is now chief strategy officer at Focused Energy, a laser-fusion firm based in Germany and California, which is trying to generate energy from the laser-driven fusion of hydrogen isotopes.

Callahan recently talked to Physics World online editor Hamish Johnston about working in the fusion sector, Focused Energy’s research and technology, and the career opportunities available. The following is an edited extract of their conversation, which you can hear in full on the Physics World Weekly podcast.

How does NIF’s approach to fusion differ from that taken by magnetic confinement facilities such as ITER?

To get fusion to happen, you need three elements that we sometimes call the triple product. You need a certain amount of density in your plasma, you need temperature, and you need time. The product of those has to be over a certain value.

Magnetic fusion and inertial fusion are kind of the opposite of each other. In a magnetic fusion system like ITER, you have a low-density plasma, but you hold it for a long time. You do that by using magnetic fields that trap the plasma and keep it from escaping.

In inertial fusion – like at NIF – it’s the opposite. You don’t hold the plasma together at all, it’s only held by its own inertia, and you have a very high density for a short time. In both cases, you can make fusion happen.
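Callahan’s “triple product” is conventionally expressed as the Lawson ignition criterion; for deuterium–tritium fuel the commonly quoted threshold is roughly

n \, T \, \tau_E \gtrsim 3 \times 10^{21}\ \mathrm{keV\,s\,m^{-3}},

where n is the plasma density, T its temperature and \tau_E the energy confinement time. Magnetic confinement satisfies this with modest densities held for seconds; inertial confinement does it with enormous densities held for fractions of a nanosecond.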

What is the current state of the art at NIF, in terms of how much energy you have to put in to achieve fusion versus how much you get out?

To date, the best shot at NIF – by which I mean an individual, high-energy laser bombardment of the target capsule – occurred during an experiment in April 2025, which had a target gain of about 4.1. That means that they got out 4.1 times the amount of energy that they put in. The incident laser energy for those shots is around two megajoules, so they got out about eight megajoules.

This is a tremendous accomplishment that’s taken decades to get to. But to make inertial fusion energy successful and use it in a power plant, we need significantly higher gains of more like 50 to 100.
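Target gain is simply the fusion energy released divided by the laser energy delivered to the target, so the figures quoted above are self-consistent:

G \equiv \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}} \approx \frac{8\ \mathrm{MJ}}{2\ \mathrm{MJ}} \approx 4.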

Captured beams The target chamber of the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory. NIF has demonstrated that inertial fusion can work with deuterium–tritium fuel, but it is a research facility not a commercial endeavour. (Courtesy: Lawrence Livermore National Laboratory/Damien Jemison)

Can you explain Focused Energy’s approach to fusion?

Focused Energy was founded in July 2021, and has offices in the US and Germany. Just a month later, NIF achieved fusion ignition, which is when the fusion fuel becomes hot enough for the reactions to sustain themselves through their own internal heating (it is not the same as gain).

At NIF lasers are fired into a small cylinder of gold or depleted uranium and the energy is converted into X-rays, which then drive the capsule. It’s what’s called laser indirect drive. At Focused Energy, however, we’re directly driving the capsule. The laser energy is put directly on the capsule, with no intermediate X-rays.

The motivation for this approach is that converting laser energy to X-rays is not very efficient, which makes it much harder to get the high target gains that we need. At Focused Energy, we believe that direct drive is the best option for fusion energy to get us to a gain of over 50.

So is boosting efficiency one of your key goals to make fusion practical at an industrial level?

Yes, exactly. You have to remember that NIF was funded for national security purposes, not for fusion energy. It wasn’t designed to be a power plant – the goal was just to generate fusion energy for the first time.

In particular, the laser at NIF is less than 1% efficient but we believe that for fusion power generation, the laser needs to be about 10% efficient.

So one of the big thrusts for our company is to develop more efficient lasers that are driven by diodes – called diode-pumped solid-state lasers.

Can you tell us about Focused Energy’s two technologies called LightHouse and Pearl Fuel?

LightHouse is our fusion pilot plant. When operational, it will be the first power plant to produce engineering gain greater than one, meaning it will produce more energy than it took to drive it. In other words, we’ll be producing net electricity.

For NIF, in contrast, gain is the amount of energy out relative to the amount of laser energy in. But the laser is very inefficient, so the amount of electricity they had to put in to produce that eight megajoules of fusion energy is a lot.
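As an illustrative back-of-envelope estimate (combining numbers quoted elsewhere in this interview, not an official NIF figure): a laser with roughly 1% wall-plug efficiency needs about

E_{\mathrm{elec}} \approx \frac{2\ \mathrm{MJ}}{0.01} = 200\ \mathrm{MJ}

of electricity to deliver a 2 MJ shot, so an 8 MJ fusion yield corresponds to an engineering gain of only about 8/200 = 0.04 – which is why engineering gain greater than one is the benchmark for LightHouse.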

Meanwhile, Pearl is the capsule the laser is aimed at in our direct drive system. It’s filled with deuterium–tritium fuel derived from sea water and lithium.

Rejuvenating nuclear A rendering of Focused Energy’s proposed fusion power plant at the Biblis fission power plant in Germany, which was shut down in 2011. (Courtesy: Focused Energy)

How do you develop the capsule to absorb the laser energy and give as much of it to the fuel as possible?

The development of the capsule for a fusion power plant is quite complicated. First, we need it to be a perfect sphere so it compresses spherically. The materials also need to efficiently absorb the laser light so you can minimize the size of that laser.

You have to be able to cheaply and quickly mass produce these targets too. While NIF does 400 shots per year, we will need to do about 900,000 shots a day – about 10 per second. We’ll also have to efficiently remove the exploded target material from the reactor chamber so that it can be cleared for the next shot.
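Those two numbers are consistent: at 10 shots per second,

10\ \mathrm{s^{-1}} \times 86\,400\ \mathrm{s/day} \approx 864\,000 \approx 900\,000\ \mathrm{shots\ per\ day}.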

It’s a very complicated design that needs to bring together all the pieces of the power plant in a consistent way.

When you are designing these elements, what plays a bigger role – computer simulations or experiments?

Computer simulations play a large part in developing these designs. But one of the lessons that I learned from NIF was that, although the simulation codes are state of the art, you need very precise answers, and the codes are not quite good enough – experimental data play a huge role in optimizing the design. I expect the same will be true at Focused Energy.

A third factor that’s developing is artificial intelligence (AI) and machine learning. In fact, at Livermore, a project working on AI contributed to achieving gain for the first time in December 2022. I only see AI’s role in fusion getting bigger, especially once we are able to do higher repetition rate experiments, which will provide more training data.

What intellectual property (IP) does Focused Energy have in addition to that for the design of the Pearl target and the LightHouse plant?

We also have IP in the design of the lasers – they are not the same lasers as used at NIF. And I think there’ll be a lot of IP around how we fabricate the targets. After all, it’s pretty complicated to figure out how to build 900,000 targets a day at a reasonable cost.

We’ll see a lot of IP coming out of this project in those areas, but there’s also the act of putting it all together. How we integrate these things in order to make a successful plant is important.

What are the challenges of working with deuterium and tritium as materials for fusion?

We chose deuterium and tritium because they are the easiest isotopes to fuse, and they have been successfully demonstrated as fusion fuel by NIF.

Deuterium can be found naturally in sea water, but getting tritium – which is radioactive – is more complicated. We breed it from lithium. Our reactor designs have lithium in them, and the neutrons from the fusion reactions breed the tritium.

Making sure that we have enough tritium, and figuring out how to extract that material to use it for future shots, is a big task. We have to be able to breed enough tritium to keep the plant going.

To work on this, we have a collaboration funded by the US Department of Energy to work with Savannah River National Lab in South Carolina. They have a lot of expertise in designing these tritium-extraction systems.

How will you capture the heat from the deuterium–tritium fusion reaction?

We will use a conventional steam cycle to convert the heat into electricity. It’s funny – we’ll have this very hi-tech way of producing heat, but at the end of the day, we will use a traditional system to produce the electricity from that heat.

So what’s the timeline on development?

Our plan is to have a pilot plant up by the end of the 2030s. It’s a fairly aggressive timeline given the things that we have to do. But that’s part of being a start-up – we have to take some risks and try to move quickly to achieve our goal.

To help that we have, in my view, a superpower – we have one foot in Europe and one foot in the US. There are a lot of opportunities between the two continents to partner with other companies, universities and governments. I think that makes us strong because we have access to some of the best talent from around the world.

How does working at Focused Energy compare with life as an academic at Lawrence Livermore?

There are a lot of similarities. My role now is to bring the knowledge and skills I learned at NIF to Focused Energy, so it’s been a natural transition.

In fact, there was a lot of pressure working at NIF. We were trying to move very quickly, so it’s actually very similar to working in a start-up like Focused Energy.

One of the big differences is the level of bureaucracy. Working for a government-funded lab meant there were lots of rules and paperwork, which take up your time and whose value you don’t always see.

In contrast, working for a small start-up means we can move more quickly because we don’t have as many of those kinds of constraints. Personally, I find that great because it leaves more time for the fun and interesting things – like trying to get fusion on the grid.

Are you still involved in academic research in any way?

As a firm, we are still out there collaborating with academics. Last year, for example, we gave four separate presentations at the American Physical Society Division of Plasma Physics meeting.

Active collaboration Debbie Callahan presenting the work of Focused Energy at the IEEE Pulsed Power and Plasma Science Conference in Berlin in June 2025. (Courtesy: Focused Energy)

I feel very strongly about peer review. Of course, publishing isn’t our number one priority, but we need feedback from others. We’re trying to do something that no-one’s done before, so it’s important to have our colleagues give us feedback on what we’re doing, point out mistakes we’re making or things we’re forgetting.

Working with universities and national labs in both Europe and the US is vital. Communicating with others in the field is important for us to get to where we want to go.

And of course, being an active part of the fusion community is good for recruitment too. We regularly give presentations at conferences that students attend. We meet those students and they learn about our work – and they might be future employees for our company.

What’s your advice for early-career physicists keen on joining the fusion industry?

There are so many opportunities right now, especially compared to the start of my career when the work was mainly just at universities or national labs. Nowadays, there are a lot of companies in the sector. Not all of them will survive because there’s only so much money, but there are still lots of opportunities. If you’re interested in fusion energy, go for it.

The field is always developing. There’s new stuff happening every day – and new problems. So if you like problem-solving, it’s great, especially if you want to do something good for the world.

There are also opportunities for people who are not plasma physicists. At Focused Energy we have people across so many fields – those who work on lasers, others who work on reactor design, some developing the AI and machine learning, and those who work on target physics, like me. To achieve fusion energy, we need physicists, engineers, mathematicians and computer scientists. We need researchers, technicians and operators. There’s going to be tremendous growth in this sector.


Shadow sculptures evoke quantum physics

2 March 2026 at 17:50

This winter in Bristol has been even gloomier than usual – so I was really looking forward to the Bristol Light Festival 2026. We went on the last evening of the event (28 February) and we were blessed with dry weather and warmish temperatures.

The festival featured 10 illuminated installations that were scattered throughout Bristol and the crowds were out in force to enjoy them. I wasn’t expecting to be thinking about physics as I wandered through town, but that’s exactly what I found myself doing at an installation called The Midnight Ballet by the British sculptor Will Budgett. Rather appropriately, it was located next to the HH Wills Physics Laboratory at the University of Bristol.

The display comprises seven sculptures that are illuminated from two different directions. The result is two very different images of ballerinas projected onto two screens (see image).

Art and science

So, why was I thinking about physics while admiring the work? To me the pieces embody – in a purely artistic way – the idea of superposition and measurement in quantum mechanics. A sculpture is capable of producing two different images (a superposition of states), but neither of these images is observable until a sculpture is illuminated from specific directions (the measurements).
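In textbook notation – offered here only as a loose metaphor, not a physical claim – the idea might be sketched as

|\mathrm{sculpture}\rangle = \tfrac{1}{\sqrt{2}}\big(|\mathrm{image}_1\rangle + |\mathrm{image}_2\rangle\big),

with each direction of illumination playing the role of a projective measurement that picks out one of the two images.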

Now, I know that this analogy is far from perfect. The sculpture can be illuminated – and therefore “measured” – in two orthogonal planes simultaneously, for example, which incompatible quantum measurements would not allow. But Budgett’s beautiful artworks really made me think about quantum physics. Given the exhibit’s close proximity to the university’s physics department, I suspect I am not the only one.


Nuclear-powered transport – how far can it take us?

2 March 2026 at 17:22

In 1942 physicists in Chicago, led by Enrico Fermi, famously produced the world’s first self-sustaining nuclear chain reaction. But it was to be another nine years before electricity was generated from fission for the first time. That landmark event occurred in 1951 when the Experimental Breeder Reactor-I in southern Idaho powered a string of four 200-watt light bulbs.

Our ability to harness nuclear power has been under constant development since then. In fact, according to the Nuclear Energy Association, a record 2667 terawatt-hours of electricity was generated by nuclear reactors around the world in 2024 – up 2.5% on the year before. But what, I wonder, is the potential of nuclear-powered transport?

A “nuclear engine” has many advantages, notably providing a vehicle with an almost unlimited supply of onboard power, with no need for regular refuelling. That’s particularly attractive for large ships and submarines, where fuel stops at sea are few and far between. It’s even better for spacecraft, which cannot refuel at all.

The downside is that a vehicle needs to be fairly large to carry even a small nuclear fission reactor – plus all the heavy shielding to protect passengers onboard. Stringent safety requirements also have to be met. If the vehicle were to crash or explode, the shield around the reactor needs to stay fully intact.

Ships and planes

Perhaps the best known transport application of nuclear power is at sea, where it’s used for warships, submarines and supercarriers. The world’s first nuclear-powered ship was the US Navy submarine Nautilus, which was launched in 1954. As the first vessel to have a nuclear reactor for propulsion, it revolutionized naval capabilities.

Compared to oil- or coal-fired ships, nuclear-powered vessels can travel far greater distances. All the fuel is in the reactor, which means there is no need for additional fuel to be carried onboard – or for exhaust chimneys or air intakes. Even better, the fuel is relatively cheap. But operating and infrastructure costs are steep, which is why almost all nuclear-powered marine vessels belong to the military.

There have, however, been numerous attempts to develop other forms of nuclear-powered transport. While a nuclear-powered aircraft might seem unlikely, the idea of flying non-stop to the other side of the world, without giving off any greenhouse-gas emissions, is appealing. Incredible as it might seem, airborne nuclear reactors were actually trialled in the mid-1950s.

That was when the United States Air Force converted a B-36 bomber to carry an operational air-cooled reactor, weighing around 18 tons. The aircraft was not actually nuclear powered but it was operated in this configuration to assess the feasibility of flying a nuclear reactor. The aircraft made a total of 47 flights between July 1955 and March 1957.

In 1955, the Soviet Union also ran a project to adapt a Tupolev Tu-95 “Bear” aircraft for nuclear power. However, because of the radiation hazard to the crew and the difficulties in providing adequate shielding, the project was soon abandoned. Neither the American nor the Soviet atomic-powered aircraft ever flew and – because the technology was inherently dangerous – it was never considered for commercial aviation.

Cars and trains

The same fate befell nuclear-powered trains. In 1954 the US nuclear physicist Lyle Borst, then at the University of Utah, proposed a 360-tonne locomotive carrying a uranium-235 fuelled nuclear reactor. Several other countries, including Germany, Russia and the UK, also had schemes for nuclear locos. But public concerns about safety could not be overcome and nuclear trains were never built. The $1.2m price tag of Borst’s train didn’t help either.

Nuclear nightmare Ford’s Nucleon car thankfully never got past the concept stage.

In the late 1950s, meanwhile, there were at least four theoretical nuclear-powered “concept cars”: the Ford Nucleon, the Studebaker Packard Astral, the Simca Fulgur and the Arbel Symétric. Based on the assumption that nuclear reactors would get much smaller over time, it was felt that such a car would need relatively light radiation shielding. I certainly wouldn’t have wanted to take one of those for a spin; in the end none got beyond concept stage.


But perhaps the real success story of nuclear propulsion has been in space. Between 1967 and 1988, the Soviet Union pioneered the use of fission reactors for powering surveillance satellites, with over 30 nuclear-powered satellites being launched during that period. And since the early 1960s, radioisotopes have been a key source of energy in space.

Driven by the desire for faster, more capable and longer duration space missions to the Moon, Mars and beyond, China, Russia and the US are now investing significantly in the next generation of nuclear reactor technology for space propulsion, where solar or radioisotope power will be inadequate. Several options are on the table.

One is nuclear thermal propulsion, whereby energy from a fission reactor heats a propellant fuel. Another is nuclear electric propulsion, in which the fission energy ionizes a gas that gets propelled out the back of the spacecraft. Both involve using tiny nuclear reactors of the kind used in submarines, except they’re cooled by gas, not water. Key programmes are aiming for in-space demonstrations in the next 5–10 years.
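The payoff is usually quantified through specific impulse and the Tsiolkovsky rocket equation (the figures below are commonly quoted ballpark values, not from any specific programme):

\Delta v = I_{\mathrm{sp}}\, g_0 \ln\frac{m_0}{m_f}, \qquad I_{\mathrm{sp}}^{\mathrm{chemical}} \approx 450\ \mathrm{s}, \quad I_{\mathrm{sp}}^{\mathrm{nuclear\ thermal}} \approx 900\ \mathrm{s},

so a nuclear thermal rocket roughly doubles the \Delta v available from the same ratio of initial to final mass m_0/m_f.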

Where next?

Many of the first ideas for nuclear-powered transport were dreamed up little more than a decade after the first self-sustaining chain reaction. The appeal was clear: compared to other fuels, nuclear power has a high energy density and lasts much longer. It also has zero carbon emissions. Nuclear power must have seemed a panacea for all our energy needs – using it for cars and planes must have seemed an obvious next step.

However, there are major safety issues to address when nuclear sources are mobilized, from protecting passengers and crew, to ensuring appropriate safeguards should anything go wrong. And today we understand all too well the legacy of nuclear systems, from the safe disposal of spent fuel to the decommissioning of nuclear infrastructure and equipment.


Here on Earth, I think we’ve struck the right balance when it comes to using nuclear power, confining it to sea-faring vessels under the watchful eye of the military. But as human-crewed, deep-space exploration beckons, a whole new set of issues will arise. There will, of course, be lots of technical and engineering challenges.

How, for example, will we maintain, repair and decommission nuclear-powered spacecraft? How will we avoid endangering crews or polluting the environment, especially when craft take off? Who should set appropriate legislation – and how do we police those rules? When it comes to space, nuclear will help us “to boldly go”; but it will also require bold regulation.


Bubbles, foams and self-assembly: a conversation with Early Career Award winner Aurélie Hourlier-Fargette

2 March 2026 at 11:45

Congratulations on winning the 2025 JPhys Materials Early Career Award. What does this mean for you at this stage of your career?

I am really grateful to the Editorial Board of JPhys Materials for this award and for highlighting our work. This is a key recognition for the whole team behind the results presented in this research paper. We were taking a new turn in our research with this topic – trying to convince bubbles to assemble into crystalline structures towards architected materials – and this award is an important encouragement to continue pushing in this direction. At the crossroads of physics, physical chemistry, materials science and mechanics, we hope that this is only the beginning of our interdisciplinary journey around bubble assemblies and foam-based materials.

Your research explores elasto-capillarity and foam architectures. What inspired you to work in this fascinating area?

I always say that research is a series of encounters – with people, and with scientific themes and objects. I was lucky to discover this interdisciplinary world as an undergraduate, during an internship on elasto-capillarity at the intersection of physics and mechanics. The scientific communities working on these topics – and also on foams – are fantastic. In both fields, I was fortunate to meet talented people who inspired my future work, combining scientific skills and creativity.

In France, the GDR MePhy (mechanics and physics of complex systems) played a key role in broadening my perspective, by organizing workshops on many different topics, always with interdisciplinarity in mind.

You have demonstrated mechanically guided self-assembly of bubbles leading to crystalline foam structures. What’s the significance of this finding and how could it impact materials design?

In the paper, part of the journal’s Emerging Leaders collection, we provide a proof-of-concept with alginate and polyurethane materials to demonstrate that it is possible to use a fibre array to order bubbles into a crystalline structure, which can be tuned by choosing the fibre pattern, and to keep this ordering upon solidification to provide an alternative approach to additive manufacturing. This work is mainly fundamental, and we hope it paves the way toward a wider use of mechanical self-assembly principles in the context of porous architected materials.

The use of solidifying materials for those studies is two-fold: first, it allows us to observe the systems with X-ray microtomography once solidified, and second, it demonstrates that we could use such techniques to build actual solid materials.

Guiding bubbles with fibre arrays Arrangements of bubbles constrained by a network of fibres (highlighted with red dots) can exhibit long-range order and even include Kelvin cell arrangements. (Courtesy: J. Phys. Mater. 10.1088/2515-7639/adaa21)

What excites you most about this field right now, and where do you see the biggest opportunities for breakthroughs?

Combining physical understanding and materials science is certainly a great area of opportunity to better exploit mechanical self-assembly. It is very compelling to search for strategies based on physical principles to generate materials with non-trivial mechanical or acoustic properties. Capillarity, elasticity, stimuli-induced modification of systems, as well as geometrical considerations, all offer a great playground to explore. Curiosity-driven research has many advantages, and often, unexpected observations completely reshape the trajectory that we had in mind.

Could you tell us about your team’s current research priorities and the directions you are most focused on?

We believe that focusing first on the underlying physical principles, especially in terms of mechanical self-assembly, will provide the building blocks to generate novel materials. One key research axis we are exploring now is widening the range of materials that can be used for “liquid foam templating” (a general approach that involves controlling the properties of a foam in its liquid state to control the resulting properties of the foam after solidification). We focus on the solidification mechanisms, either by playing with external stimuli or by controlling the solidification reactions via the introduction of catalysts or solidifying agents.

What are the key challenges in achieving ordered structures during solidification?

Liquid foams provide beautiful hierarchical structures that are also short-lived. To take advantage of the mechanical self-assembly of bubbles to build solid materials, understanding the relevant timescales is key: depending on whether the foam has time to drain and destabilize before solidification or not, its final morphology can be completely different. Controlling both the ageing mechanisms and the solidification of the matrix is particularly challenging.
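Schematically (the timescale symbols here are illustrative shorthand, not notation from the paper), the templated order survives only if the race is won in the right order:

\tau_{\mathrm{solidification}} \ll \tau_{\mathrm{drainage}},\ \tau_{\mathrm{coarsening}},

that is, the matrix must set before gravity-driven drainage and bubble coarsening have time to scramble the liquid-state structure.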

How do you see foam-based materials impacting real-world applications?

Both biomedical devices and soft robots often rely on soft materials – either to match the mechanical properties of biological tissues or to provide the compliance that soft robots need to move. Being able to customize self-assembled hierarchical structures could allow us to explore a wider range of even softer materials, with specific properties resulting from their structural features. Applications could also extend to stiffer materials, mainly in the context of acoustic properties and wave propagation in such architected structures.

What are the most surprising behaviours you have observed during the processes of self-assembly and solidification of foams?

For the experiments detailed in the paper, the structures revealed their beauty once the X-ray tomography scans were performed. When we varied the parameters, we could only guess what was going to happen before getting the visual confirmation a few hours later. We were really happy to see that changing the pattern of the fibre array could indeed provide different ordered foam structures. In some other projects we are working on, foam stability has been a real challenge. We were sometimes surprised to obtain long-lasting liquid systems.

Creating order X-ray tomography scans of foams without a fibre array (left), showing a disordered structure, and with a square fibre array (right), showing large ordered zones. (Courtesy: J. Phys. Mater. 10.1088/2515-7639/adaa21)

Looking ahead, what are the next big questions you hope to tackle in your field?

In the fundamental context of the physics and mechanics of elasto-capillarity, the study of model systems involving self-assembly mechanisms will be a key aspect of our research. I then hope to successfully identify key applications for such architected systems – mainly in the fields of mechanical or acoustic metamaterials, but also for biomedical engineering. Regarding foam solidification, understanding the mechanisms of pore opening during the solidification process – leading to either closed-cell or open-cell foams – is also an important question for the community.

You worked on bio-integrated electronics during your postdoc and contributed to a seminal paper on skin-interfaced biosensors for wireless monitoring in neonatal ICUs. How has that shaped your current research interests?

That fantastic experience allowed me to work in a group with numerous people from many different backgrounds, pushing the frontiers of interdisciplinarity in ways I could not have imagined before joining the Rogers group as a postdoc. At the moment, I am focusing on more fundamental questions, but it is definitely important to keep in mind what physics and materials science can bring to a broad variety of applications that offer solutions for society, in biomedical engineering and beyond.

Your research often combines theory and experiment and involves interdisciplinary collaboration. How do you see these collaborations shaping the future of your field?

It is always the scientific questions we want to answer – or the goals we aim to achieve – that should define the collaborations, bringing together multiple skills and backgrounds to tackle a shared challenge. Clearly, at the intersection of physics, physical chemistry, materials science and mechanics, there are many interesting questions that require contributions from different disciplines and skillsets. A key aspect is how people trained in different areas learn to “speak the same language” in order to advance interdisciplinary topics.

3D structural analysis The team’s foam research projects make extensive use of X-ray microtomography on the MINAMEC platform at Institut Charles Sadron. (Courtesy: Aurélie Hourlier-Fargette)

How do you envision your research evolving over the next 5–10 years?

I hope to be able to combine fundamental research and meaningful applications successfully – perhaps in the form of medical devices or tools for soft robots. There are many exciting possibilities, but it is certainly still too early for me to predict.

What advice would you give early-career researchers pursuing interdisciplinary projects?

Believe in what you are doing! We push boundaries more easily in areas we are passionate about, and we are also more productive when we work on topics for which we have found a supportive environment – with a unique combination of collaborators and access to state-of-the-art equipment.

In research, and especially in interdisciplinary fields, a key challenge is finding the right balance: you need to stay focused on the research projects that matter for you, while also keeping an open mind and staying aware of what others are doing. This broader vision helps you understand how your work integrates into a larger, more complex landscape.

Finally, what inspires you most as a scientist, and what keeps you motivated during challenging phases of research?

I have always liked working with desktop-scale experiments, where we can touch the objects and have an intuition for the physical mechanisms behind the observed phenomena.

Another source of inspiration is the beauty of the scientific objects that we study. With droplets, bubbles and foams – which are not only scientifically interesting but also beautiful – there is a strong connection with art and photography.

And finally, a key aspect of our professional life is the people we work with. It is clearly an additional motivation to feel part of a community where we can discuss both scientific questions and ways to improve how research is organized, as well as help younger students, PhDs and postdocs find their professional path. Working with amazing colleagues definitely helps when the path is longer or more difficult than expected.


From bunkers to bright spaces: the future of smart shielded radiosurgery treatment rooms

2 March 2026 at 10:38

This webinar explores how smart shielding is transforming the design of Leksell Gamma Knife radiosurgery environments, shifting from bunker‑like spaces to open, patient‑centric treatment rooms. Drawing from dose‑rate maps, room‑dimension considerations and modern shielding innovations, we’ll demonstrate how treatment rooms can safely incorporate features such as windows and natural light, improving both functionality and patient experience.

Dr Riccardo Bevilacqua will walk through the key questions that clinicians, planners and hospital administrators should ask when evaluating new builds or upgrading existing treatment rooms. We will highlight how modern shielding approaches expand design possibilities, debunk outdated assumptions and offer practical guidance on evaluating sites and educating stakeholders on what lies “beyond bunkers”.

Dr Riccardo Bevilacqua

Dr Riccardo Bevilacqua, a nuclear physicist with a PhD in neutron data for Generation IV nuclear reactors from Uppsala University, has worked as a scientist for the European Commission and at various international research facilities. His career has transitioned from research to radiation safety and back to medical physics, the field that first interested him as a student in Italy. Based in Stockholm, Sweden, he leads global radiation‑safety initiatives at Elekta. Outside of work, Riccardo is a father, a stepfather and writes popular‑science articles on physics and radiation.


The physics of why basketball shoes are so squeaky

27 February 2026 at 16:00

If you have ever watched a basketball match, you will know that along with the sound of the ball being bounced, there is also the constant squeaking of shoes as the players move across the court.

Such noise is a common occurrence in everyday life, from the scraping of chalk on a blackboard to the squeal of bicycle brakes.

Physicists in France, Israel, the UK and the US have now recreated the phenomenon in a lab and discovered that the squeaking is due to a previously unseen mechanism.

Katia Bertoldi from the Harvard John A. Paulson School of Engineering and Applied Sciences and colleagues slid a basketball shoe, or a rubber sample, across a smooth glass plate and used high-speed imaging and audio measurements to analyse the squeak.

Previous studies looking at the effect suggested that “pulses” are created when two materials “stick and slip”, but such studies focused on slow movements, which do not create squeaks.

The team instead found that the noise was not caused by random stick-slip events, but rather deformations of the rubber sole pulsing in bursts, or rippling, across the surface.

In this case, small parts of the sole change shape and lose and regain contact with the surface, with the “ripple” travelling at near supersonic speeds.

The pitch of the squeak even matches the rate of the “bursts”, which is determined by the stiffness and thickness of the shoe sole.

The authors also found that if a soft surface is smooth, the pulses are irregular and produce no sharp sounds, whereas ridged surfaces – like the grip patterns on sports shoes – produce consistent pulse frequencies, resulting in a high-pitched squeak.

In another twist, lab experiments showed that in some instances, the slip pulses are triggered by triboelectric discharges – miniature lightning bolts caused by the friction of the rubber.

Indeed, the physics of these pulses shares similar features with fracture fronts in plate tectonics, and so a better understanding of the dynamics that occur between two surfaces may offer insights into friction across a range of systems.

“These results bridge two fields that are traditionally disconnected: the tribology of soft materials and the dynamics of earthquakes,” notes Shmuel Rubinstein from Hebrew University. “Soft friction is usually considered slow, yet we show that the squeak of a sneaker can propagate as fast as, or even faster than, the rupture of a geological fault, and that their physics is strikingly similar.”


Dark optical cavity alters superconductivity

27 February 2026 at 13:09

An international team of researchers has shown that superconductivity can be modified by coupling a superconductor to a dark electromagnetic cavity. The research opens the door to the control of a material’s properties by modifying its electromagnetic environment.

Electronic structure defines many material properties – and this means that some properties can be changed by applying electromagnetic fields. The destruction of superconductivity by a magnetic field and the use of electric fields to control currents in semiconductors are two familiar examples.

There is growing interest in how electronic properties could be controlled by placing a material in a dark electromagnetic cavity that resonates with an electronic transition in that material. In this scenario, an external field is not applied to the material. Rather, interactions occur via quantum vacuum fluctuations within the cavity.

Holy Grail

“The Holy Grail of cavity materials research is to alter the properties of complex materials by engineering the electromagnetic environment,” explains the team – which includes Itai Keren, Tatiana Webb and Dmitri Basov at Columbia University in the US.

They created an optical cavity from a small slab of hexagonal boron nitride. This was interfaced with a slab of κ-ET, which is an organic low-temperature superconductor. The cavity was designed to resonate with an infrared transition in κ-ET involving the vibrational stretching of carbon–carbon bonds.

Hexagonal boron nitride was chosen because it is a hyperbolic van der Waals material. Van der Waals materials are stacks of atomically-thin layers. Atoms are strongly bound within each layer, but the layers are only weakly bound to each other by the van der Waals force. The gaps between layers can act as waveguides, confining light that bounces back and forth within the slab. As a result the slab behaves like an optical cavity with an isofrequency surface that is a hyperboloid in momentum space. Such a cavity supports a large number of modes and vacuum fluctuations, which enhances interactions with the superconductor.
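For a uniaxial medium like hexagonal boron nitride, this follows from the standard extraordinary-wave dispersion relation (a textbook result, not specific to this experiment):

\frac{k_x^2 + k_y^2}{\epsilon_z} + \frac{k_z^2}{\epsilon_t} = \frac{\omega^2}{c^2},

where \epsilon_t and \epsilon_z are the permittivities perpendicular and parallel to the optic axis. When these have opposite signs, the isofrequency surface opens into a hyperboloid, admitting arbitrarily large wavevectors – and therefore a very large number of modes – at a single frequency.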

Superfluid suppression

The researchers found that the presence of the cavity caused a strong suppression of superfluid density in κ-ET (a superconductor can be thought of as a superfluid of charged particles). The team mapped the superfluid density using magnetic force microscopy. This involved placing a tiny magnetic tip near to the surface of the superconductor. The magnetic field of the tip cannot penetrate into the superconductor (the Meissner effect) and this results in a force on the tip that is related to the superfluid density. They found that the density dropped by as much as 50% near the cavity interface.
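The link between the measured force and the superfluid density runs through the London penetration depth – standard superconductivity theory rather than anything specific to this experiment:

\lambda_L = \sqrt{\frac{m^*}{\mu_0\, n_s\, (2e)^2}} \;\;\Rightarrow\;\; n_s \propto \frac{1}{\lambda_L^2},

so a 50% drop in n_s lengthens \lambda_L by a factor of \sqrt{2}, letting the tip’s field penetrate deeper and measurably weakening the Meissner response.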

The team also investigated the optical properties of the cavity using a scattering-type scanning near-field optical microscope (s-SNOM). This involves firing tightly-focused laser light at an atomic force microscope (AFM) tip that is tapping on the surface of the cavity. The scattered light is processed to reveal the near-field component of light from just the region of the cavity below the tip.

The tapping tip creates phonon polaritons in the cavity, which are particle-like excitations that couple lattice vibrations to light. Analysing the near-field light across the cavity confirmed that the carbon stretching mode of κ-ET is coupled to the cavity. Calculations done by the team suggest that cavity coupling reduces the amplitude of the stretching mode vibrations.

Physicists know that superconductivity can arise from interactions between electrons and phonons (lattice vibrations). So it is possible that the reduction in superfluid density is related to the suppression of stretching-mode vibrations. This, however, is not certain because κ-ET is an unconventional superconductor, which means that physicists do not understand the mechanism that causes its superconductivity. Further experiments could therefore shed light on the mysteries of unconventional superconductors.

“We are confident that our experiments will prompt further theoretical pursuits,” the team tells Physics World. The researchers also believe that practical applications could be possible. “Our work shows a new path towards the manipulation of superconducting properties.”

The research is described in Nature.


Chernobyl at 40: physics, politics and the nuclear debate today

27 février 2026 à 10:27

On 26 April 2026, it will be 40 years since the explosion at Unit 4 of the Chernobyl Nuclear Power Plant – the worst nuclear accident the world has known. In the early hours of 26 April 1986, a badly designed reactor, operated under intense pressure during a safety test, ran out of control. A powerful explosion and prolonged fire followed, releasing radioactive material across Ukraine, Belarus and Russia, with smaller quantities spreading across Europe.

In this episode of Physics World Stories, host Andrew Glester speaks with Jim Smith, an environmental physicist at the University of Portsmouth. Smith began his academic life studying astrophysics, but always had an interest in environmental issues. His PhD in applied mathematics at Liverpool focused on modelling how radioactive material from Chernobyl was transported through the atmosphere and deposited as far away as the Lake District in north-western England.

Smith recounts his visits to the abandoned Chernobyl plant and the 1000-square-mile exclusion zone, now home to roaming wolves and other thriving wildlife. He wants a rational debate about the relative risks, arguing that the accident’s social and economic consequences have significantly outweighed the long-term impacts of radiation itself.

The discussion ranges from the politics of nuclear energy and the hierarchical culture of the Soviet system, to lessons later applied during the Fukushima accident. Smith makes the case for nuclear power as a vital complement to renewables.

He also shares the story behind the Chernobyl Spirit Company – a social enterprise he has launched with Ukrainian colleagues, producing safe, high-quality spirits to support Ukrainian communities. Listen to find out whether Andrew Glester dared to try one.


LHCb upgrade: CERN collaboration responds to UK funding cut

26 February 2026 at 15:47

Later this year, CERN’s Large Hadron Collider (LHC) and its huge experiments will shut down for the High Luminosity upgrade. When the upgrade is complete in 2030, the particle-collision rate in the LHC will be increased by a factor of 10 and the experiments will be upgraded so that they can better capture and analyse the results of these collisions. This will allow physicists to study particle interactions at unprecedented precision and could even reveal new physics beyond the Standard Model.

Earlier this year, however, the UK government announced that it will no longer fund the upgrade of the LHCb experiment on the LHC, which is run by a collaboration of more than 1700 physicists worldwide. The UK had promised to contribute about £50 million to the upgrade – which is a significant chunk of the overall cost.

In this episode of the Physics World Weekly podcast I am in conversation with the particle physicist Tim Gershon, who is based at the UK’s University of Warwick. Gershon is spokesperson-elect for the LHCb collaboration and is playing a leading role in the upgrade.

Gershon explains that UK participation and leadership has been crucial for the success of LHCb and cautions that the future of the experiment and the future of UK particle physics have been imperilled by the funding cut.

We also chat about recent discoveries made by LHCb and look forward to what new physics the experiment could find after the upgrade.

The post LHCb upgrade: CERN collaboration responds to UK funding cut appeared first on Physics World.

Read-out of Majorana qubits reveals their hidden nature

26 February 2026 at 12:23

Quantum computers could solve problems that are out of reach for today’s classical machines. However, the quantum states they rely on are prone to decohering – that is, losing their quantum information due to local noise. One possible way around this is to use quantum bits (qubits) constructed from quasiparticle states known as Majorana zero modes (MZMs) that are protected from this noise. But there’s a catch. To perform computations, you need to be able to measure, or read out, the states of your qubits. How do you do that in a system that is inherently protected from its environment?

Scientists at QuTech in the Netherlands, together with researchers from the Madrid Institute of Materials Science (ICMM) in Spain, say they may have found an answer. By measuring a property known as quantum capacitance, they report that they have read out the parity of their MZM system, backing up an earlier readout demonstration from a team at Microsoft Quantum Hardware on a different Majorana platform.

Measuring parity

The QuTech/ICMM researchers generated their MZMs across two quantum dots – semiconductor structures that can confine electrons – connected by a superconducting nanowire. Electrons can transfer, or tunnel, between the quantum dots through this wire. Majorana-based qubits store their quantum information across these separated MZMs, with both elements in the pair required to encode a single “parity” bit. A pair of parity bits (combining four MZMs in total) forms a qubit.

A parity bit has two possible states. When the two quantum dots are in a superposition of both holding one electron and both holding none, the system is said to have even parity (a “0”). When the system is instead in a superposition of states in which only one of the two dots holds an electron, the parity is said to be odd (a “1”). Importantly, these even and odd parity states have the same average value of electric charge, meaning that a charge sensor cannot tell them apart.
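Written out in symbols – illustrative notation on our part, not taken from the paper – with |n1n2⟩ denoting the charge occupations of the two dots, the two states are

\[ |\text{even}\rangle = \alpha\,|00\rangle + \beta\,|11\rangle , \qquad |\text{odd}\rangle = \gamma\,|10\rangle + \delta\,|01\rangle . \]

Assuming the equal-weight superpositions expected at the Majorana “sweet spot”, the even state carries an average charge of one electron (half-way between zero and two), exactly matching the odd state – which is why a plain charge measurement cannot tell them apart.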

The key to measuring parity lies in the electrons’ behaviour. In the even-parity state, an even number of electrons can pair up and enter the superconductor together as a Cooper pair. In the odd-parity state, however, the lone electron lacks a partner and cannot flow through the wire in the same way. By measuring the charge flowing into the superconductor, the team was therefore able to determine the parity state. The researchers also determined that the lifetimes of these states were in the millisecond range, which they say is promising for quantum computations.

Competing platforms

According to Nick van Loo, a quantum engineer at QuTech and the first author of a Nature paper on the work, similar chains of quantum dots (known as Kitaev chains) are a promising platform for realizing Majorana modes because each element in the chain can be controlled and tuned. This control, he adds, makes results easier to reproduce, helping to overcome some of the interpretation challenges that have affected Majorana results over the past decade.

Van Loo also stresses that his team uses a different architecture from the Microsoft Quantum Hardware team to create its Majorana modes – one that he says allows for better tuneability as well as easier and more scalable readout. He adds that this architecture also allows an independent charge sensor to be used to confirm the MZM’s charge neutrality.

In response, Chetan Nayak, a technical fellow at Microsoft Quantum Hardware, says it is important that the QuTech/ICMM team independently measured a millisecond time scale for parity fluctuations. However, he notes that the team did not extend this parity lifetime and adds that the so-called “poor man’s Majoranas” used in this research do not constitute a scalable platform for topological qubits, as they lack topological protection.

Seeking full protection

Van Loo acknowledges that the team’s two-site Kitaev chain is not topologically protected. However, he says the degree of protection is expected to improve exponentially as more sites are added. In the near term, he and his colleagues hope to operate their qubit by inducing rotations through coupling pairs of Majorana modes. Once these hurdles are overcome, he tells Physics World that “one major milestone will still remain: demonstrating braiding of Majorana modes to establish their non-Abelian exchange statistics”.

Jay Deep Sau, a physicist at the University of Maryland, US, who was not involved in either the QuTech/ICMM or the Microsoft Quantum Hardware research, describes this as the first measurement of fermion parity in the smallest quantum dot chain platform for creating MZMs. Compared to the Microsoft result, Sau agrees that the quantum dot chain is more controlled. However, he is sceptical that this control will apply to larger chains, casting doubt on whether this is truly a scalable way of realizing MZMs. The significance of these results, he adds, will only be apparent if the quantum dot chain approach can demonstrate a coherent qubit before its semiconductor nanowire counterpart.

The post Read-out of Majorana qubits reveals their hidden nature appeared first on Physics World.

Quantum-secure Internet expands to citywide scale

25 February 2026 at 17:25

Researchers in China have distributed device-independent quantum cryptographic keys over city-scale distances for the first time – a significant improvement compared to the previous record of a few hundred metres. Led by Jian-Wei Pan of the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS), the researchers say the achievement brings the world a step closer to a completely quantum-secure Internet.

Many of us use Internet encryption almost daily, for example when transferring sensitive information such as bank details. Today’s encryption techniques use keys based on mathematical algorithms, and classical supercomputers cannot crack them in any practical amount of time. Powerful quantum computers could change this, however, which has driven researchers to explore potential alternatives.

One such alternative, known as quantum key distribution (QKD), encrypts information by exploiting the quantum properties of photons. The appeal of this approach is that when quantum-entangled photons transmit a key between two parties, any attempted hack by a third party will be easy to detect because their intervention will disturb the entanglement.

While the basic form of QKD enables information to be transmitted securely, it does have some weak points. One of them is that a malicious third party could steal the key by hacking the devices the sender and/or receiver is using.

A more advanced version of QKD is device-independent QKD (DI-QKD). As its name suggests, this version does not depend on trusting the inner workings of the devices used. Instead, it derives its security directly from fundamental quantum phenomena – namely, the violation of conditions known as Bell’s inequalities. Establishing this violation ensures that a third party has not interfered with the process employed to generate the secure key.
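The condition most often tested in practice is the Clauser–Horne–Shimony–Holt (CHSH) form of Bell’s inequality (our gloss – the article does not specify which inequality the team used). For two measurement settings per party (a, a′ and b, b′) and correlators E, any local “classical” description obeys

\[ S = E(a,b) + E(a,b') + E(a',b) - E(a',b') , \qquad |S| \le 2 , \]

whereas quantum mechanics allows values up to \(2\sqrt{2} \approx 2.83\). Observing |S| > 2 certifies shared entanglement regardless of how the devices work internally – the property that makes the scheme “device-independent”.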

The main drawback of DI-QKD is that it is extremely technically demanding, requiring high-quality entanglement and an efficient means of detecting it. “Until now, this has only been possible over short distances – 700 m at best – and in laboratory-based proof-of-principle experiments,” says Pan.

High-fidelity entanglement over 11 km of fibre

In the latest work, Pan and colleagues constructed two quantum nodes consisting of single trapped atoms. Each node was equipped with four high-numerical-aperture lenses to efficiently collect single photons emitted by the atoms. These photons have a wavelength of 780 nm, which is not optimal for transmission through optical fibres. The team therefore used a process known as quantum frequency conversion to shift the emitted photons to a longer wavelength of 1315 nm, which is less prone to optical loss in fibres.
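Energy conservation fixes the wavelength bookkeeping here. Assuming the conversion is difference-frequency generation – the standard mechanism for such shifts, although the pump wavelength below is our own back-of-envelope inference rather than a figure from the paper – a pump photon supplies the energy difference:

\[ \frac{1}{\lambda_{\text{pump}}} = \frac{1}{780\ \text{nm}} - \frac{1}{1315\ \text{nm}} \;\;\Rightarrow\;\; \lambda_{\text{pump}} \approx 1.92\ \mu\text{m} , \]

placing the converted photons at 1315 nm, in the low-loss O-band of telecom fibre.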

By interfering the photons emitted by the two nodes and detecting a single photon, the team was able to generate what’s known as heralded entanglement between the two quantum nodes – something Pan describes as “an essential resource” for DI-QKD. While significant progress has been made in extending the entangling distance for qubits of this type, Pan notes that these advances have been hampered by low fidelities and low entangling rates.

To address this, Pan and his colleagues employed a single-photon-based entangling scheme that boosts remote entangling probability by more than two orders of magnitude. They also placed their atoms in highly excited Rydberg states to generate single photons with high purity and low noise. “It is these innovations that allow us to achieve high-fidelity and high-rate entanglement over a long distance,” Pan explains.

Using this setup, the researchers explored the feasibility of performing DI-QKD between two entangled atoms linked by optical fibres up to 100 km in length. In this study, which is detailed in Science, they demonstrated practical DI-QKD under finite-key security over 11 km of fibre.

Metropolitan-scale quantum key distribution

Based on the technologies they developed, Pan thinks it could now be possible to implement DI-QKD over metropolitan scales with existing optical fibres. Such a system could provide encrypted communication with the highest level of physical security, but Pan notes that it could also have other applications. For example, high-fidelity entanglement could also serve as a fundamental building block for constructing quantum repeaters and scaling up quantum networks.

Carlos Sabín, a physicist at the Autonomous University of Madrid (UAM), Spain, who was not involved in the study, says that while the work is an important step, there is still a long way to go before we are able to perform completely secure and error-free quantum key distribution on an inter-city scale. “This is because quantum entanglement is an inherently fragile property,” Sabín explains. “As light travels through the fibre, small losses accumulate and the entanglement generated is of poorer quality, which translates into higher error rates in the cryptographic keys generated. Indeed, the results of the experiment show that errors in the key range from 3% when the distance is 11 km to more than 7% for 100 km.”
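A back-of-envelope estimate shows the scaling. Taking a typical attenuation of α ≈ 0.32 dB/km for standard fibre near 1315 nm (our ballpark figure, not one quoted in the paper), the probability that a photon survives a fibre of length L is

\[ T = 10^{-\alpha L / 10} , \]

which gives T ≈ 0.44 at 11 km but only T ≈ 6 × 10⁻⁴ at 100 km – so entangling events become far rarer, and noise makes up a proportionally larger share of the detected signal.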

Pan and colleagues now plan to add more atoms to each node and to use techniques like tweezer arrays to further enhance both the entangling rate and the secure key rate over longer distances. “We are aiming for 1000 km, over which we hope to incorporate quantum repeaters,” Pan tells Physics World. “By using processes like ‘entanglement swapping’ to connect a series of such two-node entangled links, we anticipate that we will be able to maintain a similar entangling rate over much longer distances.”

The post Quantum-secure Internet expands to citywide scale appeared first on Physics World.

Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans

25 February 2026 at 14:00

Todd McNutt is a radiation oncology physicist at Johns Hopkins University in the US and the co-founder of Oncospace, where he led the development of an artificial intelligence (AI)-powered tool that simultaneously accelerates radiation planning and elevates plan quality and consistency. The software, now rebranded as Plan AI and available from US manufacturer Sun Nuclear, draws upon data from thousands of previous radiotherapy treatments to predict the lowest possible dose to healthy tissues for each new patient. Treatment planners then use this information to define goals that streamline and automate the creation of a best achievable plan.

Physics World’s Tami Freeman spoke with McNutt about the evolution of Oncospace and the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres.

Can you describe how the Oncospace project began?

Back in 2007, several groups were discussing how we could better use clinical data for discovery and knowledge generation. I had several meetings with folks at Johns Hopkins, including Alex Szalay, who helped develop the Sloan Digital Sky Survey. He built a large database of galaxies and stars, and it became a huge research platform for both amateur and professional astronomers.

From that discussion, and other initiatives, we looked at moving towards structured data collection for patients in the clinical environment. By marrying these data with radiation treatment plans we could study how dose distributions across the anatomy affect patient outcomes. And we took that opportunity to build a database for radiotherapy.

What inspired the transition from academic research to founding the company Oncospace Inc in 2019?

After populating the database with data from many patients, we could examine which anatomic features impact our ability to generate a plan that minimizes radiation dose to normal tissues while treating target volumes as best as possible. We came up with a feature set that characterized the relationships between normal anatomy and targets, as well as target complexity.

This early work allowed us to predict expected doses from these shape-relationship features, and it worked well. At that point, we knew we could tap into this database and generate a prediction that could help create treatment plans for new patients. We thought of this as personalized medicine: for the first time, we could see the level of treatment plan quality that we could achieve for a specific patient.

I thought that this was useful commercially and that we should get it out to other clinics. Praveen Sinha, who I’d known from my previous work at Philips and now leads Sun Nuclear’s software business line, asked if I wanted to create a startup. The timing was right for both of us and I had a team here ready to go, so we went ahead and did it. With his knowledge of startups and my knowledge of what we wanted to achieve, we had perfect timing and a perfect group to work with.

Plan AI enables both predictive planning and peer review – how do these functions work?

The idea behind predictive planning is that, for a given patient, I can predict the expected dose that I should be able to achieve for them.

Plan AI software Comparing dose–volume histogram prediction bands with clinical goals (arrows) provides users with valuable feedback on what can be achieved before the planning process begins. The screen shows the prediction sent to the treatment planning system.
Clinical plan The screen shows a review of the results that the treatment planning system achieved, with dose–volume histograms shown by the solid lines.

Treatment planning involves specifying dosimetric objectives to the planning system and asking it to optimize radiation delivery to meet these. But nobody really knows what the right objectives even are – it is just a trial-and-error process. Plan AI’s prediction provides a rational set of objectives for plan optimization, allowing the planning system’s algorithm to move towards a good solution and making treatment planning an easier problem to solve.

Peer review involves a peer physician looking at every treatment plan to evaluate it for quality and safety. But again, people don’t really know the level of quality you can generate – it depends on the patient’s anatomy. Providing a predicted dose alongside clinical dose goals enables a rapid review to see whether or not a plan is high quality.

In the past we looked at simple things like whether a contour is missing slices or contains discontinuities, and Plan AI checks for this, but you can do far more with AI. For example, you could compare a new contour against all the contoured rectums in the system: if it extends too far into the sigmoid colon, it may be mis-contoured. We have research software that can flag such potential anomalies so they don’t get overlooked.

The Plan AI models are developed using Oncospace’s database of previous treatments; can you describe this data lake?

When we first started, we developed a large SQL database containing all the shape-relationship features and dosimetry features. The SQL language is ideal for being able to query and sift through the data, but when the company was formed, we recognized that there was some age to that technology.

So for the Plan AI data lake, we extracted all the different shape-relationship and shape-complexity features and put them into a Parquet database in the cloud. This made the data lake much more amenable to machine-learning algorithms. The SQL data lake at Johns Hopkins is maintained separately and primarily used to investigate toxicity predictions and spatial dose patterns. But for Plan AI, the models are fixed and streamlined for the specific task of dose prediction.

What does the model training process entail?

One of the first tasks was to curate the data, using the AAPM’s standardized structure-naming model. Our data scientist Julie Shade wrote some tools for automatic name mapping and target identification; that helped us process much larger amounts of data for the model.

Once we had all the shape-relationship and shape-complexity features and all the doses, we trained the models by anatomical region. We have FDA-approved models for the male and female pelvis, thorax, abdomen and head-and-neck. For each of these, we predict the doses for every organ-at-risk. Then we used five-fold cross-validation to make sure that the predictions were good on an internal data set.

We also performed external validation at institutions including Johns Hopkins and Montefiore hospitals. We created predicted plans from recent treatment plans that had been evaluated by physicians. For almost all cases, both plan quality and plan efficiency were improved with Plan AI.

One aspect of this training is that whenever we drive optimization via predictive planning we want to push towards the best achievable dose. Regular machine learning predicts an expected, or average, dose across all patients. But you never want to drive a treatment plan towards the average dose, because then every plan you generate will be happy being average. Our model predicts both the average and the best achievable dose, and drives plan optimization towards the best achievable.
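To make the distinction concrete, here is a minimal sketch of the idea – emphatically not Oncospace’s actual model, with placeholder data and invented variable names – using quantile regression to predict both an expected dose and a “best achievable” low percentile from shape-relationship features:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# X: shape-relationship features from past patients (e.g. organ-target
# overlap and distance metrics); y: the organ-at-risk mean dose achieved.
rng = np.random.default_rng(0)
X = rng.random((500, 6))                    # placeholder features
y = 20 * X[:, 0] + 5 * rng.random(500)      # placeholder achieved doses (Gy)

# Median model approximates the "expected" dose; a low quantile
# approximates the "best achievable" dose for similar anatomy.
median_model = GradientBoostingRegressor(loss="quantile", alpha=0.5).fit(X, y)
best_model = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)

x_new = rng.random((1, 6))                  # features for a new patient
print("expected dose:        %.1f Gy" % median_model.predict(x_new)[0])
print("best achievable dose: %.1f Gy" % best_model.predict(x_new)[0])

Feeding the optimizer the low-quantile prediction, rather than the median, is what pushes every new plan towards what the best historical plans achieved for similar anatomy.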

When implementing new technology in the clinic, it’s important to fit into the existing treatment workflow. How clinic-ready are these AI tools?

Radiation therapy is protocol-driven: we know what technique we’re going to use to treat and what our clinical dose goals are for different structures. What we don’t know is the patient-specific part of that. So for each anatomical region, we built models out of a wide range of treatment protocols, with many different types of patients, to ensure that the same prediction model works for any protocol. This means a user can use any protocol for treatment and the predictions will work – they don’t have to retrain anything. It’s ready to go out of the box: there’s a library of protocols to start with, and you can change protocols as needed for your own clinic.

The other part of being clinic-ready is aligning with the way that planning is currently performed, which is using dose–volume histograms. Treatment plans are optimized by manipulating these dose objectives, and that’s exactly what we predict. So users aren’t changing the whole paradigm of how planners operate. They still use their treatment planning system (TPS) – we just put the objectives in there. Basically, a TPS script sends the patient’s CT and contours to the cloud, where Plan AI makes the predictions. The TPS then pulls back in the objectives built from the models, based on this specific patient’s anatomy. The TPS runs the optimization and, as a last step, can send the plan back to Plan AI to check that it fits within the best achievable predictions.
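Schematically, that round trip might look like the sketch below – a hypothetical illustration in which every endpoint and function name is invented; the real integration happens through the TPS vendor’s scripting interface:

import requests  # hypothetical REST transport

CLOUD = "https://planai.example.com/api"  # placeholder URL

def get_objectives(ct_volume: bytes, contours: bytes) -> dict:
    """Send the patient's CT and contours; receive per-organ dose objectives."""
    reply = requests.post(f"{CLOUD}/predict",
                          files={"ct": ct_volume, "rtstruct": contours})
    return reply.json()  # e.g. {"parotid_left": {"mean_dose_gy": 18.2}, ...}

def check_plan(plan_dose: bytes) -> bool:
    """Send the optimized dose back; True if it sits within the
    predicted best-achievable band."""
    reply = requests.post(f"{CLOUD}/review", files={"rtdose": plan_dose})
    return reply.json()["within_prediction"]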

Did you encounter any challenges bringing AI into a clinical setting?

Interestingly, the challenges aren’t technical – they are more human-related. One of the more systemic challenges is data security when using medical data for training. A nice thing about our system is that the features we generate from treatment plans are just mathematical shape-relationship features and don’t involve a lot of identifiable information.

AI has been used in radiation therapy for image contouring and auto-segmentation, and early efforts were not so good. So, there’s always a good, healthy scepticism. But once you show people that it works and works well, this can be overcome. I have seen some people worried about job security and AI taking over. We are medical professionals designing a treatment plan to care for a patient and there’s a lot of pride and art in that – if you automate that, it takes away some of this pride and art.

I tell people that if we automate the easier things, then they can spend their quality time on the more difficult and challenging cases, because that’s where their talent might be needed more.

Do you have any advice for clinics looking to adopt AI-driven planning?

Introduce it as an assistant, not as a solution. You want people that already know what they’re doing to be able to use their knowledge more efficiently. We want to make their jobs easier and show them that it also improves quality.

Dosimetrists, for example, create a plan and work hard getting the dose down – and then the physician looks at it and suggests that they can do better. Predictive planning gives them confidence that they are right and takes the uncertainty out of the physician review process. And once you’ve gained that level of confidence, you can start using it for adaptive planning or other technologies.

Where do you see predictive modelling and AI in oncology in five years from now?

Right now, there’s been a lot of data collected, but we want that data to advance and learn. Having multiple centres adding to this pool of knowledge and being able to continually update those models from new, broader data sets could be of huge value.

In terms of patient outcomes, we’ve done a lot of the work looking at how the spatial pattern of dose impacts toxicity and outcomes. This is part of the research being performed at Johns Hopkins and still in discovery mode. But down the road, some of these predictions of normal tissue outcomes could be fed into the planning process to help reduce toxicity at the patient level.

Finally, what’s been the most rewarding part of this journey for you?

During my prior experience building treatment planning systems, the biggest problem was always that nobody knew what the objective was. Nobody knew how to tell the system: “this is the dose I expect to receive, now optimize to get it for me”, because you didn’t know what you could do. For any given patient, you could ask for too much or too little. Now, for the first time, I argue that we actually know what our objective is in our treatment planning.

This levels the playing field between different environments, different countries, or even different dosimetrists with different levels of experience. The Plan AI tool brings all this to a consistent state and enables high quality, efficient planning everywhere. We can provide this predictive planning tool to clinics around the world. Now we just have to get everybody using it.


The post Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans appeared first on Physics World.

The future of particle physics: what can the past teach us?

25 February 2026 at 12:00

In his opening remarks to the 4th International Symposium on the History of Particle Physics, Chris Llewellyn Smith – who was a director-general of CERN in the 1990s – suggested participants should speak about “what’s not written in the journals”, including “mistakes, dead-ends and problems with getting funding”. Doing so, he said, would “provide insight into the way science really progresses”.

The symposium was not your usual science conference. Held last November at CERN, it took place inside the lab’s 400-seat main auditorium, which has been the venue for many historic announcements, including the discovery of the Higgs boson. Its brown-beige walls are covered with lively designs by the Finnish artist Ilona Rista, suggesting to me the aftermath of a collision of high-energy bar codes.


The focus of the meeting was the development of particle physics in the 1980s and 1990s – a period that saw the construction and operation of various important accelerators and detectors. At CERN, these included the UA1 and UA2 experiments at the Super Proton Synchrotron, where the W and Z bosons were discovered. Later, there was the Large Electron-Positron Collider (LEP), which came online in 1989, and the Large Hadron Collider (LHC), approved five years later.

Delegates also heard about the opening of various accelerators in the US during those two decades, including two at the Stanford Linear Accelerator Center – the Positron-Electron Project in 1980 and the Stanford Linear Collider in 1989. Most famous of all was the start-up of the Tevatron at Fermilab in 1983. Over at Dubna in the former Soviet Union, meanwhile, scientists built the Nuclotron, a superconducting synchrotron, which opened in 1992.

Conference speakers covered unfinished machines of the era as well. The US cancelled two proton–proton facilities – ISABELLE in 1983 and the Superconducting Super Collider (SSC) a decade later. The Soviet Union, meanwhile, abandoned the multi-TeV proton–proton collider UNK a few years later, though news has recently emerged that Russia might revive the project.

Several speakers recounted the discovery of the W and Z particles at CERN in 1983 and the discovery of the top quark at Fermilab in 1995. Others addressed the strange fact that fewer neutrinos from the Sun had been detected than theory suggested. The “solar-neutrino problem”, as it was known, was finally resolved through the discovery of neutrino oscillations – observed in atmospheric neutrinos by Takaaki Kajita’s Super-Kamiokande team in 1998 and confirmed for solar neutrinos by Art McDonald’s SNO collaboration – for which Kajita and McDonald shared the 2015 Nobel Prize for Physics.

The conference also addressed unsuccessful searches for proton decay, axions, magnetic monopoles, the Higgs boson, supersymmetry particles and other targets. Other speakers described projects with highly positive outcomes, such as the advent of particle cosmology, or what some have jokingly dubbed “the heavenly lab”. The development of string theory, grand unified theories and perturbative quantum chromodynamics was tackled too.

In an exchange during the question-and-answer session after one talk, the Greek physicist Kostas Gavroglu referred to many such quests as “failures”. That remark prompted the Australian-born US theoretical physicist Helen Quinn to say she preferred the term “falling forward”; such failures, she said, were instances of “I tried this, and it didn’t work, so I tried that”.

In relating his work on detecting gravitational waves, the US Nobel-prize-winning physicist Barry Barish said he felt his charge was not to celebrate the importance of his discoveries nor the ingenuity of the route he took. Instead, Barish explained, his job was to answer the much more informal question: “What made me do what?”.

His point was illustrated by the US theorist Alan Guth, who described the very human and serendipitous path he took to working on cosmic inflation – the super-fast expansion of the universe just after the Big Bang. When he started, Guth said, “all the ingredients were already invented”. But the startling idea of inflation hinged on accidental meetings, chance conversations, unexpected visits, a restricted word count for Physical Review Letters, competitions, insecurities and “spectacular realizations” coalescing.

Wider world

Another theme that arose in the conference was that science does not unfold inside its own bubble but can have extensive and immediate impacts on the world around it. Two speakers, for instance, recounted the invention of the World Wide Web at CERN in the late 1980s. It’s fair to say that no other invention by a single individual – Tim Berners-Lee – has so radically and quickly transformed the world.

The growing role of international politics in promoting and protecting projects was mentioned too, with various speakers explaining how high-level political negotiations enabled physicists to work at institutions and experiments in other nations. The Polish physicist Agnieszka Zalewska, for example, described her country’s path to membership in CERN, while Russian-born US physicist Vladimir Shiltsev spoke about the “diaspora” of Russian particle physicists after the fall of the Soviet Union in 1991.


Sometimes politics created destructive interference. The US physicist, historian and author Michael Riordan described how the US’s determination to “go it alone” to outcompete Europe in high-energy physics was a major factor in bringing about the opposite: the termination of the SSC in 1993. As a result of that project’s controversial closure, the centre of gravity of high-energy physics shifted to Europe.

Indeed, contemporary politics occasionally hit the conference itself in incongruous and ironic ways. Two US physicists, for example, were denied permission to attend because budgets had been cut and travel restrictions increased. In the end, one took personal time off and paid his own way, leaving his affiliation off the programme.

Before the conference, some people complained that conference organizers hadn’t paid enough attention to physicists who’d worked in the Soviet Union but were from occupied republics. Several speakers addressed this shortcoming by mentioning people like Gersh Budker (1918–1977). A Ukrainian-born physicist who worked and died in the Soviet Union, Budker was nominated for a Nobel Prize (1957) and even has a street named after him at CERN. Unmentioned, though, was that Budker was Jewish and that his father was killed by Ukrainian nationalists in a pogrom.

On the final day of the conference, which just happened to be World Science Day for Peace and Development, CERN mounted a public screening of the 2025 documentary film The Peace Particle. Directed by Alex Kiehl, much of it was about CERN’s internationalism, with a poster for the film describing the lab as “Mankind’s biggest experiment…science for peace in a divided world”.

But in the Q&A afterwards some audience members criticized CERN for allegedly whitewashing Russia over its invasion of Ukraine and Israel over genocide. Those onstage defended CERN on the grounds of its desire to promote internationalism.

The critical point

The keynote speaker of the conference was John Krige, a science historian from Georgia Tech who has worked on a three-volume history of CERN. Those who launched the lab, Krige reminded the audience, had radical “scientific, political and cultural aspirations” for the institution. Their dream was that CERN wouldn’t just revive European science and promote regional collaborative efforts after the Second World War, but also potentially improve the global world order.

Krige went on to quote one CERN founder, who’d said that international science facilities such as CERN would be “one of the best ways of saving Western civilization”. Recent events, however, have shown just how fragile those ambitions are. Alluding to CERN’s Future Circular Collider and other possible projects, Llewellyn Smith ended his closing remarks with a warning.

“The perennial hope that the next big high-energy project will be genuinely global,” he said, “seems to be receding over the horizon due to the polarization of world politics”.

The post The future of particle physics: what can the past teach us? appeared first on Physics World.

A breakthrough in modelling open quantum matter

25 February 2026 at 09:32

Attempts to understand quantum phase transitions in open systems usually rely on real‑time Lindbladian evolution, which tracks how a quantum state changes as it relaxes toward a steady state. This approach is powerful for studying decoherence, dissipation and long‑time behaviour, but it often fails to reveal the deeper structure of the system, including the phase transitions, critical points and hidden quantum order that define its underlying physics.

In this work, the researchers introduce a new framework called imaginary‑time Lindbladian evolution, which allows them to define and classify quantum phases in open systems using the spectrum of an imaginary‑Liouville superoperator. This approach works not only for pure ground states but also for finite‑temperature Gibbs states of stabilizer Hamiltonians, showing its relevance for realistic, mixed‑state conditions.

A key diagnostic in their method is the imaginary‑Liouville gap, the spectral gap between the lowest and next‑lowest decay modes. When this gap closes, the system undergoes a phase transition, a change that is accompanied by diverging correlation lengths and nonanalytic shifts in physical observables. The closing of this gap also coincides with the divergence of the Markov length, a recently proposed indicator of criticality in open quantum systems.
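Schematically – in our notation, which may differ from the authors’ – imaginary-time evolution under a superoperator L with eigenvalues λ0 < λ1 ≤ … takes the form

\[ \rho(\tau) \;\propto\; e^{-\mathcal{L}\tau}\,\rho(0) , \qquad \Delta = \lambda_1 - \lambda_0 , \]

so the slowest-decaying mode dominates at long τ and plays the role of a “ground state”, while a transition occurs wherever the gap Δ closes – in direct analogy with a Hamiltonian gap closing at a conventional quantum critical point.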

To demonstrate the power of their framework, the researchers map out phase diagrams for systems with \(\mathbb{Z}_2^{\sigma} \times \mathbb{Z}_2^{\tau}\) symmetry, capturing both spontaneous symmetry breaking and average symmetry‑protected topological phases. Their method reveals universal critical behaviour that real‑time Lindbladian steady states fail to detect, highlighting why imaginary‑time evolution fills a missing piece in the theory of open‑system phases.

Importantly, the authors emphasise that real‑time Lindbladians remain essential for modelling dissipation in practical settings. Their new framework complements this conventional approach, offering a systematic way to study phase transitions in open systems. They also outline how phase diagrams can be constructed using both bottom‑up (state‑based) and top‑down (Hamiltonian‑based) strategies, illustrating the method with a dissipative transverse‑field Ising model.

Overall, this work provides a unified and versatile way to understand quantum phases in open systems, revealing critical behaviour and topological structure that were previously inaccessible. It opens new directions for studying mixed‑state quantum matter and advances the theoretical foundations needed for future quantum technologies.

Read the full article

A new framework for quantum phases in open systems: steady state of imaginary-time Lindbladian evolution

Yuchen Guo et al 2025 Rep. Prog. Phys. 88 118001

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

The post A breakthrough in modelling open quantum matter appeared first on Physics World.

How reversibility becomes irreversible

25 February 2026 at 09:31

In the macroscopic world, we see irreversible processes everywhere: heat flowing from hot to cold, gases mixing, systems decaying. Yet at the microscopic level, quantum mechanics is perfectly reversible, with its equations running equally well forwards and backwards in time. How, then, does irreversibility emerge from fundamentally reversible dynamics?

A common explanation is coarse-graining, which simplifies a complex system by ignoring microscopic details and focusing only on large-scale behaviour. To make the micro–macro divide precise, however, one must first define what “macroscopic” means. Here it is given a quantitative inferential meaning: a state is macroscopic if it is perfectly inferable from the perspective of a specified measurement and prior. Central to this framework is a coarse-graining map built from the measurement and its optimal Bayesian recovery via the Petz map; macroscopic states are precisely its fixed points, turning macroscopicity into a sharp condition of perfect inferability. This construction is grounded in Bayesian retrodiction, which infers what a system likely was before it was measured, together with an observational deficit that quantifies how much information is lost in forming a macroscopic description.
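For reference, the Petz map in its standard form (our notation): given the measurement, treated as a quantum channel M, and a prior state γ, the optimal Bayesian recovery is

\[ \mathcal{R}_{\gamma,\mathcal{M}}(X) \;=\; \gamma^{1/2}\, \mathcal{M}^{\dagger}\!\left( \mathcal{M}(\gamma)^{-1/2}\, X\, \mathcal{M}(\gamma)^{-1/2} \right) \gamma^{1/2} . \]

The coarse-graining map is then the measurement followed by this recovery, \(\mathcal{R}\circ\mathcal{M}\), and a state ρ is macroscopic precisely when \(\mathcal{R}(\mathcal{M}(\rho)) = \rho\).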

States that are macroscopically inferable can be characterised in several equivalent ways, all tied to a new measure of disorder called macroscopic entropy, which captures how irreversible, or “uninferable”, a macroscopic process appears from the observer’s perspective. This perspective is formalised through inferential reference frames, built from the combination of a prior and a measurement, which determine what an observer can and cannot recover about the underlying quantum state.

The researchers also develop a resource theory of microscopicity, treating macroscopic states as free and identifying the operations that cannot generate microscopic detail. This unifies and extends existing resource theories of coherence, athermality, and asymmetry. They further introduce observational discord, a new way to understand quantum correlations when observational power is limited, and provide conditions for when this discord vanishes.

Altogether, this work reframes macroscopic irreversibility as an information-theoretic phenomenon, grounded not in a fundamental dynamical asymmetry but in an inferential asymmetry arising from the observer’s limited perspective. It offers a unified way to understand coarse-graining, entropy, and the emergence of classical behaviour from quantum mechanics. It deepens our understanding of time’s direction and has implications for quantum computing, thermodynamics, and the study of quantum correlations in realistic, constrained settings.

Read the full article

Macroscopicity and observational deficit in states, operations, and correlations

Teruaki Nagasawa et al 2025 Rep. Prog. Phys. 88 117601

Do you want to learn more about this topic?

Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)

The post How reversibility becomes irreversible appeared first on Physics World.

Visible light paints patterns onto chiral antiferromagnets

24 February 2026 at 17:44

Researchers at Los Alamos National Laboratory in New Mexico, US have used visible light to both image and manipulate the domains of a chiral antiferromagnet (AFM). By “painting” complex patterns onto samples of cobalt niobium sulfide (Co1/3NbS2), they demonstrated that it is possible to control AFM domain formation and dynamics, boosting prospects for data storage devices based on antiferromagnetic materials rather than the ferromagnetic ones commonly used today.

In antiferromagnetic materials, the spins of neighbouring atoms in the material’s lattice are opposed to each other (they are antiparallel). For this reason, they do not exhibit a net magnetization in the absence of a magnetic field. This characteristic makes them largely immune to disturbances from external magnetic fields, but it also makes them all but invisible to simple electrical and optical probes, and extremely difficult to manipulate.

A special structure

In the new work, a Los Alamos team led by Scott Crooker focused on Co1/3NbS2 because of its topological nature. In this material, layers of cobalt atoms are positioned, or intercalated, between monolayers of niobium disulfide, creating 2D triangular lattices with ABAB stacking. The spins of these cobalt atoms point either toward or away from the centers of the tetrahedra formed by the atoms. The result is a noncoplanar spin ordering that produces a chiral, or “handed,” spin texture.

This chirality affects the motion of electrons in the material because when an electron passes through a chiral pattern of spins, it picks up a geometrical phase known as a Berry phase. This makes it move as if it were “seeing” a region with a real magnetic field, giving the material a nonzero Hall conductivity which, in turn, affects how it absorbs circularly polarized light.

Characterizing a topological antiferromagnet

To characterize this behaviour, the researchers used an optical technique called magnetic circular dichroism (MCD) that measures the difference in absorption between left and right circularly polarized light and depends explicitly on the Hall conductivity.
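The connection can be written compactly. Schematically (neglecting sample-thickness and local-field corrections – our simplification), light of left and right circular polarization couples to the conductivity combinations

\[ \sigma_{\pm}(\omega) = \sigma_{xx}(\omega) \pm i\,\sigma_{xy}(\omega) , \qquad \mathrm{MCD}(\omega) \;\propto\; \mathrm{Re}\,\sigma_{+} - \mathrm{Re}\,\sigma_{-} \;=\; -2\,\mathrm{Im}\,\sigma_{xy}(\omega) , \]

so a non-zero MCD signal is a direct optical probe of the Hall conductivity – and hence, in this material, of the domain chirality.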

Similar to the MCD that is measured in well-known ferromagnets such as iron or nickel, the amplitude and sign of the MCD measured in Co1/3NbS2 varied as a function of the wavelength of the light. This dependence occurs because light prompts optical transitions between filled and empty energy bands. “In more complex materials like this, there is a whole spaghetti of bands, and one needs to consider all of them,” Crooker explains. “Precisely which mix of transitions are being excited depends of course on the photon energy, and this mix changes with energy. Sometimes the net response is positive, sometimes negative; it just depends on the details of the band structure.”

To understand the mix of transitions taking place, as well as the topological character of those transitions, scientists use the concept of Berry curvature, which is the momentum-space version of the magnetic field-like effect described earlier. If the accumulated Berry phase is positive (negative), the electron is moving through a spin texture of right-handed (left-handed) chirality, and this is captured by the Berry curvature of the band structure in momentum space.

Imaging and painting chiral AFM domains

To image directly the domains with positive and negative chirality, the researchers cooled the sample below its ordering temperature, shined light of a particular wavelength onto it, and measured its MCD using a scanning MCD microscope. The sign of the measured MCD value revealed the chirality of the AFM domains.

To “write” a different chirality into these AFM domains, the researchers again cooled the sample below its ordering temperature, this time in the presence of a small positive magnetic field B, which fixed the sample in a positive chiral AFM state. They then reversed the polarity of B and illuminated a spot of the sample to heat it above the ordering temperature. Once the spot cooled down, the negative-polarity B-field changed the AFM state in the illuminated region into a negative chirality. When the “painting” was finished, the researchers imaged the patterns with the MCD microscope.

In the past, a similar thermo-magnetic scheme gave rise to ferromagnetic-based data storage disks. This work, which is published in Physical Review Letters, marks the first time that light has been used to manipulate AFM chiral domains – a fundamental requirement for developing AFM-based information storage technology and spintronics. In the future, Crooker says the group plans to extend this technique to characterize other complex antiferromagnets with nontrivial magnetic configurations, use light to “write” interesting spatial patterns of chiral domains (patterns of Berry phase), and see how this influences electrical transport.

The post Visible light paints patterns onto chiral antiferromagnets appeared first on Physics World.

Green concrete: paving the way for sustainable structures

24 February 2026 at 12:00

Grey, ugly, dull. Concrete is not the most exciting material in the world. That is, until you start to think about its impact on our lives. Concrete is the second most consumed material on the planet after water. Humanity uses about 30 billion tonnes of the stuff every year, the equivalent of building an entire new New York City every month. Put another way, there is so much concrete in the world and so much being made that by the 2040s it will outweigh all living matter.

As the son of a builder, I have made a few concrete mixes over the years myself, usually following my father’s tried and trusted recipe. Take one part cement (fine mineral powder), two parts sand, and four parts aggregate (crushed stone), then mix and add enough water until it all goes gloopy.
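As a toy illustration of how that recipe scales – treating the “parts” as masses for simplicity (on site they are usually measured by volume) and taking a typical water–cement ratio of about 0.5, neither of which comes from my father’s recipe – a batch calculator is a few lines:

def batch(cement_kg: float, water_cement_ratio: float = 0.5) -> dict:
    """Scale a 1:2:4 cement:sand:aggregate mix from the cement quantity."""
    return {
        "cement_kg": cement_kg,
        "sand_kg": 2 * cement_kg,
        "aggregate_kg": 4 * cement_kg,
        "water_litres": water_cement_ratio * cement_kg,  # 1 litre of water ~ 1 kg
    }

print(batch(25))  # one 25 kg bag: 50 kg sand, 100 kg aggregate, ~12.5 l water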

The ubiquity and low cost of these simple ingredients are just two of the reasons for concrete’s global reach. In liquid form, it can be moulded into almost any shape, and once set, it is as hard and durable as stone. What’s more, it doesn’t burn, rot or get eaten by animals.

These factors make concrete the ideal material for everything from vast imposing dams to sleek kitchen floors. However, its gargantuan presence across society comes at an equally epic environmental cost. If concrete were a country, it would rank third behind only the US and China as a greenhouse gas emitter.

Though raw material processing and transport of concrete are part of the problem, concrete’s biggest environmental impact comes from the heat and chemical processes involved in producing cement. Ordinary cement clinker (the raw form of cement before it is ground to a powder) is the product of heating limestone up to 1450 °C until it breaks apart into lime and carbon dioxide (CO2). This heating requires lots of energy and the chemical process releases huge amounts of the greenhouse gas CO2 – meaning that cement makes up around 90% of the carbon footprint of an average concrete mix.
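The reaction at the heart of the problem is the calcination of calcium carbonate:

\[ \mathrm{CaCO_3} \;\xrightarrow{\;\sim 900\,^{\circ}\mathrm{C}\;}\; \mathrm{CaO} + \mathrm{CO_2} \]

With molar masses of roughly 100, 56 and 44 g/mol, about 44% of the limestone’s mass leaves as CO2 from this reaction alone – before counting the fuel burned to push the kiln up to 1450 °C.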

Cement factory at twilight
Tricky ingredient Concrete’s biggest environmental impact comes from the heat and chemical processes involved in making cement. It’s very energy-intensive and produces vast quantities of carbon dioxide. (Courtesy: Shutterstock/Bilanol)

In the UK and some other parts of the world, this climate impact is well recognized, with the industry having made significant efforts to decarbonize over the last few decades. “Since 1990, the UK concrete industry has decreased its direct and indirect environmental impacts by over 53% through various technology levers,” says Elaine Toogood – an architect and senior director at the Mineral Products Association’s Concrete Centre, the UK’s technical hub for all things concrete.

This reduction has been achieved through actions such as fuel switching, decarbonizing electricity and transport networks, and carbon capture technology. “For example, over 50% of all the heat that’s needed to make cement is now supplied by waste-derived fuels,” Toogood adds.

Yet the sheer scale of the global concrete industry means that much more needs to be done to fully mitigate concrete’s carbon impact. Can physics, and more specifically AI, lend a hand?

Low-carbon replacements

Replacing cement – concrete’s least green ingredient – with low-carbon alternatives seems like a good place to start. Two well-proven options have been available for decades.

Fly ash – the by-product of burning coal at power plants – can replace about 30% of cement in concrete mixes. It has been used in the construction of many prominent structures including the Channel Tunnel, which opened in 1994. Blast furnace slag – the by-product of iron and steel production – is another capable replacement, and can make up to 70% of cement content. Slag was used in 2009 to substitute half of the regular cement in the precast concrete units that now make up the sea defences on Blackpool beach.

Yet although these waste materials are currently extensively used as cement or concrete additions in the UK and elsewhere, they rely on very polluting sources (coal-fired power plants and blast furnaces) that are gradually being phased out globally to meet climate targets. As a result, fly ash and blast furnace slag are not long-term solutions. New low-carbon materials are needed, which is where physics can play a decisive role.

Based at Debre Tabor University in Ethiopia, Gashaw Abebaw Adanu is an expert in innovative construction materials. In 2021 he and colleagues investigated the potential of partially replacing standard cement – at levels of 0%, 5%, 10%, 15% and 20% – with ash from burnt lowland Ethiopian bamboo leaves, a common local construction waste material (Adv. Civ. Eng 10.1155/2021/6468444). The findings were encouraging. Though the concrete took longer to set as the bamboo leaf ash content increased, the material’s strength, water absorption and resistance to sulphate attack (concrete breakdown caused by sulphate ions reacting with the hardened cement paste) all improved for 5–10% bamboo leaf ash mixes. The results suggest that up to 10% of cement could be swapped for this local low-carbon alternative.

Steel, copper – or hair?

More recently, Adanu has turned his focus to concrete fibre reinforcement. Adding small amounts of steel, copper or polyethene fibres is known to increase concrete’s ductility and crack resistance by up to 200% and 90%, respectively. The tiny fibres act like micro-stitches throughout the entire mix, transforming concrete from a brittle material into a tough, energy-absorbing composite.

Fibre reinforcement also leads to major cost savings and a reduced carbon footprint, primarily by removing the need for traditional steel rebar and mesh, where 50 kg of steel fibres can often do the work of 100 kg+ of traditional rebar. Eliminating this expensive material also reduces labour and maintenance costs.

In his latest research, Adanu has explored an unexpected alternative fibre reinforcement material that would decrease costs further as it would otherwise go to landfill: human hair (Eng. Res. Express 7 015115). Adanu took waste hair from barbershops in Debre Tabor (with permission, of course), and added small amounts of it in different quantities to standard concrete mixes. “It’s not biodegradable, it’s not compostable, but as a fibre reinforcement material, we found that using 1–2% human hair improves the concrete’s tensile strength, compressive strength, cracking resistance and reduces shrinkage,” says Adanu. “It makes concrete more clean and sustainable, and because it improves the quality of the concrete, it reduces cost at the same time.”

Research like Adanu’s, involving experimentation with local materials, has been the driving force for innovation in construction for millennia. From the ancient Neolithic practice of boosting mudbricks’ strength by adding local straw, to the Romans using volcanic dust as high-quality cement for concrete constructions like the Pantheon in Rome – a structure that still stands to this day, with its 43.3-m diameter non-reinforced concrete dome remaining the largest in the world. But testing one material at a time is no longer the only way.

Four photos of concrete buildings
Shapely material Concrete is ubiquitous in modern buildings, from generic office blocks (top left) to some of the world’s most creative architecture, such as (top right) the Auditorio de Tenerife in Santa Cruz (designed by Santiago Calatrava) and (bottom left) the Metropolitan Cathedral Nossa Senhora Aparecida in Brasilia (designed by Oscar Niemeyer). But it can be found in much older buildings as well. The largest unreinforced concrete dome in the world (bottom right) is on the Pantheon in Rome, built in 126 CE. This structure uses volcanic dust as its cement. (Courtesy: Shutterstock/Snide12; Shutterstock/Framalicious; Shutterstock/Marcelo Moryan; Shutterstock/Sean Pavone)

Taking a more modern, wide-ranging approach, a team of researchers led by Soroush Mahjoubi and Elsa Olivetti of Massachusetts Institute of Technology (MIT), recently mined the cement and concrete literature, and a database of over one million rock samples, looking for cement ingredient substitutes (Communications Materials 6 99). The study not only confirmed the potential of the well-known alternatives fly ash and metallurgic slags, but also various biomass ashes like the bamboo leaf ash Adanu investigated, as well as rice husk, sugarcane bagasse, wood, tree bark and palm oil fuel ashes.

The meta-review also identified various other waste materials with high potential. These include construction and demolition wastes (ceramics, bricks, concrete), waste glass, municipal solid waste incineration ashes, and mine tailings (iron ore, copper, zinc), as well as 25 igneous rock types that could significantly reduce cement’s carbon impact.

AI to the rescue

Although a number of these alternative concrete materials have been known for some time, they have struggled to make an impact, with very few being used to partially replace regular cement in ready-mix concretes. Getting construction companies or concrete contractors to give them a try is no simple task.

“Concrete contractors are used to using certain mixes for certain jobs at certain times of the year, so they can plan a site and project based on how those materials are going to behave,” says Toogood. “Newer mixes act slightly differently when fresh,” she adds, which makes life tricky for those running a construction site, where concrete that behaves in a predictable manner is critical so that things run smoothly and efficiently.

Two physicists – Raphael Scheps and Gideon Farrell – aim to build this trust in low-carbon alternatives through their UK construction technology company Converge. Starting out using sensors to measure the real-time performance of different mixes of concrete in situ, they have built one of the world’s largest datasets on the performance of concrete.

Two photos of sensors on building sites - a macro shot of a probe and a wider shot of a person wearing hi-vis and a hard hat crouched on a concrete surface
Watch and learn UK construction technology company Converge uses sensors to measure the real-time performance of different mixes of concrete in situ, then adds the data to its AI program to model untested concrete mixes. (Left) Signal Long Range is Converge’s LoRaWAN-enabled, fully embedded concrete-monitoring sensor for large-scale construction. (Right) Installation of Converge’s Helix system at HS2 Old Oak Common – a long-range, reusable concrete-monitoring solution. (Courtesy: Converge)

They can now apply an AI model underpinned by physics principles. The program simulates the physical and chemical interactions of different components to predict the performance of a vast number of concrete mixes in a wide range of situations to a high level of accuracy. And this is key, as it builds trust to experiment with lower-carbon mixes. “With projects in the UK and Australia, we’ve helped people tweak the mix that they’re using and achieve quite major carbon savings,” says Scheps. “Anywhere from 10% all the way up to 44%.”

Currently used to recommend existing cost-saving concrete recipes, Scheps sees Converge’s AI model becoming more sophisticated over time. “As it starts to uncover the real fundamental physics-based rules for what drives concrete chemistry, our model will make projections for entirely new materials,” he enthuses.

Also exploring the power of AI to optimize concrete production is US company Concrete.ai. Like Converge, Concrete.ai was born from the idea of applying physics principles to optimize traditional materials and industries; specifically, how AI can be used to reduce the carbon footprint of concrete. And also like Converge, the company’s technology rests on one of the world’s largest concrete databases, consisting of vast amounts of different recipes and materials, alongside their associated performances.

Trained on this dataset, Concrete.ai’s generative AI model creates millions of possible mix designs to identify the optimal concrete recipe for any particular application. “The main difference between a solution like Concrete.ai’s and general models like ChatGPT or Gemini is that our goal is really to create recipes that don’t exist yet,” explains chief technology officer and co-founder Mathieu Bauchy. “Popular large language models regurgitate what they have been trained on and tend to hallucinate, whereas our model discovers new recipes that have never been produced before without breaking the laws of physics or chemistry, and in a reliable way.”

Bauchy sees Concrete.ai’s role as a bridge between concrete producers keen to cut their costs and carbon footprint, and innovators like Adanu or the MIT group exploring new low-carbon concrete materials who are unable to demonstrate the performance of these materials in real-world scenarios and at scale.

Circular benefits

It is perhaps apt that the industry most in need of AI insights from the likes of Converge, Concrete.ai and their growing number of competitors is the AI industry itself. New data centres being used to train, deploy and deliver AI applications and services are the cause of a huge spike in the greenhouse gas emissions of tech giants such as Google, Meta, Microsoft and Amazon. And one of the biggest contributors to those emissions is the concrete from which these hyperscale facilities are built.

Aerial view of large industrial building complex next to a solar farm
Feedback loop The Google Hyperscale Data Center for AI and Sustainable Energy opened in Winschoten, Netherlands, in November 2025. The massive growth in AI is leading to many more of these huge structures, and though the electricity they run on is increasingly from renewable sources, the concrete from which they are built is decidedly less green. But AI is also potentially the best resource we have to reduce the carbon cost of concrete. (Courtesy: Shutterstock/Make more Aerials)

This is the reason Meta recently partnered with concrete maker Amrize to develop AI-optimized concrete. For Meta’s new 66,500 m² data centre in Rosemount, Minnesota, the partners applied Meta’s AI models and Amrize’s materials-engineering expertise to deliver concrete that met key criteria including high strength and low carbon content, as well as practical performance characteristics like decent cure speed and surface quality. The partners estimate that the custom mix will reduce the total carbon footprint of this concrete by 35%.

“There is an interesting synergy between concrete and AI,” says Bauchy. “AI can help design greener concrete, and on the other hand, concrete can be used to build more sustainable data centres to power AI.” With other tech giants exploring AI’s potential in reducing the carbon footprint of the concrete they use too, it may well be that the very places in which AI is developed become the testbeds for AI-derived sustainable green concrete solutions.

The post Green concrete: paving the way for sustainable structures appeared first on Physics World.

New journal aims to advance the interdisciplinary field of personalized health

23 February 2026 at 11:00

Personalized health – the use of individualized measurements to address each patient’s specific needs – is a research field that’s evolving at pace. Bringing this level of personalization into the clinic is an interdisciplinary challenge, requiring the development of sensors that generate clinically meaningful data outside the hospital, new imaging modalities and analysis techniques, and computational tools that address the uncertainties of dealing with just one individual.

Much of the most impactful work in this field sits in the spaces between established disciplines. And for researchers looking to publish their findings or read about the latest breakthroughs, this work is often scattered across discipline-specific journals. A new open access journal from IOP Publishing – Medical Sensors & Imaging (MSI) – aims to remedy this shortfall, providing a dedicated home for authors working across sensing, imaging, modelling and data-driven healthcare.

Medical Sensors & Imaging
New launch Medical Sensors & Imaging is fully open for submissions. (Courtesy: IOP Publishing)

“We want a journal where physicists, engineers, computer scientists, biomedical researchers and clinicians can publish and read work that advances personalized health, without confinement into traditional silos,” explains founding editor-in-chief Marco Palombo from Cardiff University. “MSI also aims to play an important role in strengthening interdisciplinary exchange.”

“The community needs a specialized forum that doesn’t just report on new materials or a clinical trial, but validates innovations that can specifically solve complex biomedical challenges,” adds deputy editor Xiliang Luo from Qingdao University of Science and Technology. “I think this journal is a perfect fit for that gap.”

Connecting communities

Published by IOP Publishing on behalf of the Institute of Physics and Engineering in Medicine (IPEM), MSI aims to dismantle the barriers between engineering innovation and clinical application by creating a community of experts that work together to translate innovative technology into clinical settings.

MSI sits within IPEM’s journal portfolio, which includes Physics in Medicine & Biology, Physiological Measurement and Medical Engineering & Physics. Its aims and scope were designed to complement, rather than overlap with, these existing journals and provide a dedicated venue for translational work and practical applied research that may otherwise struggle to fit a traditional scope.

Marco Palombo
Marco Palombo: “We have been discussing green AI and green healthcare for at least 10 years. I think MSI can be one of the first journals to push this area forward.”

Being part of this established family of journals brings with it strong editorial standards, an established readership base and a commitment to scientific integrity. The journal also offers rapid, high-quality peer review, with feedback that’s constructive, rigorous and fair. MSI is fully open access, which maximizes the visibility, reach and impact of its published papers.

“For a new journal in a dynamic field, ensuring content is discoverable and barrier-free is essential for building an audience quickly and establishing credibility,” says Palombo. “We also wanted MSI to support global participation. Many excellent groups operate with limited budgets but make major scientific contributions. Open access reduces inequities in who can read and build on published work.”

“For the authors, we can provide a specialized platform for scientists whose work transcends traditional boundaries, offering visibility to a broad audience that’s eager for translational solutions,” says Luo. “And for the readers, I think we will be the go-to resource for academic researchers, industry R&D leaders, and healthcare innovators seeking the latest breakthroughs in personalized health monitoring and advanced diagnostics.”

Hot topics

Palombo contributed to the strategic development of the journal at an early stage, drawing upon his experience in healthcare and medical imaging research and engaging with the research community to identify the scientific niche that MSI could fill. Working with IOP Publishing, he helped shape the journal’s aims and scope and assembled a diverse, internationally recognized editorial board with knowledge aligned with the journal’s mission – including Luo, who brings specialist expertise in wearable technologies and biosensors.

Xiliang Luo
Xiliang Luo: “We hope to establish a forum where advanced sensing technology and imaging techniques can enable the next generation of personalized and predictive health.”

The journal will publish high-quality research on novel biomedical sensing and imaging techniques, along with the algorithms, validation frameworks and translational studies that demonstrate their application in real-world medicine. MSI also provides a platform to showcase research on hot topics such as wearable and implantable sensors for continuous physiological monitoring, microneedle-based sensing technologies and breath analysis.

The development of flexible and biocompatible materials will be key for the growth of bio-integrated devices and biodegradable or transient electronics, as will anti-fouling strategies that enable use of sensors in complex biological environments. On the imaging side, the journal scope encompasses mainstay medical imaging techniques such as MRI, CT, ultrasound, PET and SPECT, as well as emerging multimodal and hybrid approaches, with a focus on technical innovation and translational relevance.

“Given my own background, I’m particularly keen to see strong submissions in the area of MRI, including advanced quantitative biomarkers and approaches that probe tissue microstructure,” notes Palombo. “I also see huge potential in connecting imaging to computational modelling – particularly digital twins – and in building imaging pipelines that enable personalized diagnosis and prognosis.”

“Other exciting areas include combining sensing and imaging technologies into one system, and closed-loop ‘sense then act’ systems, which sense something and can then release medicine to treat the disease,” says Luo.

The rise of AI

Artificial intelligence (AI) is becoming increasingly central to both sensing and imaging, and will likely play a major role in the evolution of personalized health, enabling a shift towards multimodal fusion of sensor streams, imaging and clinical data. AI could also facilitate the introduction of integrated sensor systems that collect data and interpret signals in real time, and digital twins that link patient-specific data with computational models to simulate disease progression or treatment response.

Palombo emphasizes the importance of trustworthy AI: methods that don’t just provide an output, but are explainable, robust and explicitly handle uncertainty. This is a direction seen in the general field of AI, but is especially important within healthcare. He also cites the increasing momentum around green healthcare and green AI, with personalized health technologies designed to reduce waste and minimize energy consumption, and clinical models developed with far greater computational efficiency.

“It would be fantastic to have an AI model running directly on the sensor, for example, and this ties in with the environmental impact of AI,” he explains. “If we keep AI small and manageable, then it pollutes less, is more affordable for everybody and can be deployed on small, lightweight devices.”

A community focal point

Looking ahead, Palombo hopes that MSI will become a leading platform for interdisciplinary innovation in personalized health, and the routine home for publishing major advances in sensing, imaging, modelling and trustworthy AI. “Over time, I’d like the journal to build depth in core areas, while also actively shaping emerging directions such as digital twins, uncertainty-aware and explainable AI, multimodal integration and technologies that are genuinely deployable in clinical workflows.”

“Currently, the fields of sensor engineering and clinical medicine often run on parallel tracks. My hope is that this journal will force these tracks to converge over time,” adds Luo. “I see the journal fostering a new language where chemists, physicists, engineers and doctors can understand each other by publishing papers in MSI.”

  • Medical Sensors & Imaging is fully open for submissions, with the first issue expected to publish in Q2/Q3 of this year. During the launch phase, IOPP is covering the article processing charge (APC) for all accepted papers, enabling early contributors to publish at no cost while helping the journal establish a strong foundation of high-quality inaugural content. Beyond this period, many authors will benefit from support through IOPP’s transformative agreements, while others may be eligible for APC waivers and discounts.


The post New journal aims to advance the interdisciplinary field of personalized health appeared first on Physics World.

Olympian Eileen Gu rules the piste with physics and international relations

20 February 2026 at 18:32

Here at Physics World we are always on the lookout for physicists with extraordinary talents outside of science. In 2023, for example, we were in awe of Harvard University’s Jenny Hoffman, who ran across the US in 47 days, 12 hours and 35 minutes – shattering the previous record by one week.

Now, coverage of the Winter Olympics in Italy has revealed that the Chinese freestyle skier Eileen Gu studied physics at Stanford University. The most decorated female Olympic freestyle skier in history, US-born Gu bagged two gold medals and a silver at the 2022 Beijing Games and added three silvers at Milano Cortina.

Gu has subsequently switched majors to international relations at Stanford, but we can still celebrate her as an honorary physicist.

Physics-rich event

Indeed, freestyle skiing is quite possibly the most physics-rich of all Olympic events. Athletes must consider friction, gravity and the conservation of momentum and angular momentum to perfect their skiing.
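
As a textbook illustration of our own (not anything from Gu’s training regime): once an athlete is airborne, no external torque acts about their centre of mass, so angular momentum $L = I\omega$ is conserved and

$$ I_1\,\omega_1 = I_2\,\omega_2 \quad\Rightarrow\quad \omega_2 = \frac{I_1}{I_2}\,\omega_1, $$

meaning a skier who tucks mid-spin, halving their moment of inertia, doubles their spin rate – and opens up again to slow the rotation before landing.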

Now, I’m not suggesting that studying free-body diagrams of freestyle manoeuvres is essential for Olympic success, but I live in hope that an understanding of classical mechanics can improve one’s skiing. (I’m not sure why I believe this, because a PhD and decades of writing about physics certainly haven’t improved my skiing!)

As well as being lauded for her prowess on the snow, Gu has found herself at the centre of an international furore regarding her choice of competing for China rather than for the US. So, international relations combined with physics seems like a very good course of study!

  • Article has been updated to include Gu’s third silver medal at Milano Cortina.

The post Olympian Eileen Gu rules the piste with physics and international relations appeared first on Physics World.

Wobbling gyroscopes could harvest energy from ocean waves

20 February 2026 at 14:15

A new way of extracting energy from ocean waves has been proposed by a researcher in Japan. The system couples a gyroscope to an electrical generator and could be fine-tuned to extract energy from a wide range of wave conditions. A prototype of the design is currently being built for testing in a wave tank. If successful, the system could be used to generate electricity onboard ships.

Ocean waves contain huge amounts of energy and humans have tried to harness this energy for centuries. But, despite the development of myriad technologies and a number of trials, the widespread commercial conversion of wave energy remains an elusive goal. One important problem is that most generation schemes only work within a narrow range of wave conditions – and the ocean can be a very messy place.

Now, Takahito Iida at the University of Osaka has proposed a new energy-harvesting technology that uses a gyroscopic flywheel system, which can be tuned to absorb energy efficiently over a broad range of wave frequencies.

“Wave energy devices often struggle because ocean conditions are constantly changing,” says Iida. “However, a gyroscopic system can be controlled in a way that maintains high energy absorption, even as wave frequencies vary.”

Wobbling top

At the heart of the technology is gyroscopic precession, whereby a torque on a rotating object causes the object’s axis of rotation to sweep out a cone. This is familiar to anyone who has played with a spinning top, which will wobble (precess) when perturbed.
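
In quantitative terms (standard rigid-body physics, not a result specific to Iida’s paper): a flywheel with moment of inertia $I$ spinning at angular speed $\omega$ carries angular momentum $L = I\omega$, and a torque $\tau$ applied perpendicular to the spin axis drives precession at the rate

$$ \Omega = \frac{\tau}{I\omega}, $$

so the faster the flywheel spins, the slower – and the more controllable – its precession under a given wave torque.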

Iida’s device is called a gyroscopic wave energy converter and comprises a spinning flywheel mounted on a floating platform. On calm seas, the gyroscope’s axis of rotation points in a fixed direction thanks to the conservation of angular momentum. When waves cause the platform to rock, however, they exert torques on the gyroscope and cause it to precess. It is this precession that drives a generator to deliver electrical power.

To design the system, Iida used linear wave theory to model the coupled interactions between waves, the platform, the gyroscope and the generator. This allowed him to devise a scheme for tuning the gyroscope frequency and generator parameters so that an energy conversion efficiency of 50% is achieved for a variety of wave conditions.

The effect of the generator was modelled as a spring-damper. This is a system that responds to a torque by storing and then returning some energy to the gyroscope (the spring), and removing some energy by converting it to electricity (the damper). Iida discovered that a maximum conversion of 50% occurs when the spring coefficient of the generator is adjusted such that the gyroscope’s resonant frequency matches the resonant frequency of the floating platform.
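
As a minimal numerical sketch of that spring-damper picture – with arbitrary parameter values, not those of Iida’s model – sweeping the wave frequency shows the absorbed power peaking at the resonance set by the spring coefficient:

import numpy as np

# For a precession angle theta driven by a wave torque T0*cos(w*t):
#   I*theta'' + c*theta' + k*theta = T0*cos(w*t)
# the steady-state amplitude is |Theta| = T0 / sqrt((k - I w^2)^2 + (c w)^2)
# and the average power dissipated in the damper (the "generator") is
#   P(w) = 0.5 * c * w^2 * |Theta|^2.

I, c, T0 = 1.0, 0.5, 1.0        # inertia, damping, torque amplitude (arbitrary units)
w = np.linspace(0.2, 3.0, 500)  # drive (wave) frequencies

for k in (0.5, 1.0, 2.0):       # spring coefficient sets the resonance w0 = sqrt(k/I)
    amp = T0 / np.sqrt((k - I * w**2)**2 + (c * w)**2)
    P = 0.5 * c * w**2 * amp**2
    print(f"k = {k}: power peaks at w ≈ {w[np.argmax(P)]:.2f} (w0 = {np.sqrt(k / I):.2f})")

Retuning k shifts the resonance to match the incoming waves – the adjustment that, in Iida’s scheme, keeps absorption high as conditions change.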

Fundamental constraint

Iida explains that 50% is the maximum efficiency that can be achieved. “This efficiency limit is a fundamental constraint in wave energy theory. What is exciting is that we now know that it can be reached across broadband frequencies, not just at a single resonant condition.”

Iida tells Physics World that a small prototype (approximately 50 cm³) is being built and will be tested in a 100 m-long tank.

The next step will be the development of a system with a generating capacity of about 5 kW. Iida says that the ultimate goal is a 300 kW generator.

Iida also explains that the gyroscopic wave energy converter is designed to operate untethered to the seabed. As a result, he says, the system would be ideal for use as an auxiliary power system for a ship. “The target output of 300 kW is based on the assumed auxiliary power demand of a typical commercial vessel,” says Iida.

The research is described in the Journal of Fluid Mechanics.

The post Wobbling gyroscopes could harvest energy from ocean waves appeared first on Physics World.

World’s smallest QR code paves the way for ultralong-life data storage

20 February 2026 at 10:00

A team headed up at TU Wien in Austria has set the Guinness World Record for creating the world’s smallest QR code. Working with industry partner Cerabyte, the researchers produced a stable and repeatedly readable QR code with an area of just 1.977 µm². When read out – using an electron microscope, as its structure is too fine to be seen with a standard optical microscope – the QR code links to a scientific webpage at TU Wien.

But this wasn’t just a ploy to get into the record books: the QR code was created as part of the team’s research into ceramic data storage materials. Unlike conventional magnetic or electronic data storage media, which degrade within decades, ceramic-based storage is designed to withstand extreme temperatures, radiation, chemical corrosion and mechanical damage.

As such, information stored in ceramic materials could endure for centuries, or even millennia. And in contrast to today’s data centres, ceramics preserve stored information without any energy input and without requiring cooling.

Electron microscope image of QR code
Invisible code The world’s smallest QR code can only be read out using an electron microscope. (Courtesy: TU Wien)

To create these ultralong-life data storage systems, the researchers use focused ion beams to mill the QR code into a thin film of chromium nitride, a durable ceramic often used to coat high-performance cutting tools. As each individual pixel is just 49 nm in size – roughly a tenth of the wavelength of visible light – the code cannot be imaged optically. But when examined with an electron microscope, the QR code could indeed be read out reliably.
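
A quick back-of-envelope check (our arithmetic, not the team’s) shows how the reported numbers hang together:

import math

area_um2 = 1.977   # reported QR-code area in square micrometres
pixel_nm = 49      # reported pixel (module) size in nanometres

side_um = math.sqrt(area_um2)        # assuming a square code
modules = side_um * 1000 / pixel_nm  # modules per side
print(f"side ≈ {side_um:.2f} µm, ≈ {modules:.0f} modules per side")
# ≈ 1.41 µm per side and ≈ 29 modules per side – consistent with a
# version-3 QR code (29 × 29 modules), though the article does not state
# the version; that identification is our inference.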

After the writing process, the entire stack of ceramic films is subjected to extreme conditions, such as high temperatures, corrosive environments and mechanical stress, to evaluate the material’s long-term durability and readout stability.

Pushing storage to its limits

Creating a “tiny QR code” was not the team’s initial goal, but emerged as a natural outcome of pushing this storage technology to its limits, says Paul Mayrhofer from TU Wien’s Institute of Materials Science and Technology.

“During a discussion with one of my PhD students, Erwin Peck, we realised that the writing procedure we had developed already produced features smaller than what had previously been reported for QR codes,” he explains. “This sparked the idea: if we can reliably write structures at that scale, why not intentionally create the smallest QR code possible?”

To claim its place in the record books, the QR code was successfully milled and read out in the presence of witnesses and its size independently verified using calibrated scanning electron microscopy at the University of Vienna. It is now officially recognized by Guinness as the world’s smallest QR code, and is roughly one third the size of the previous record holder.

Mayrhofer points out that the storage capacity of the ceramic data storage technology far surpasses that of a single QR code. “Based on current estimates, a cartridge of 100 × 100 × 20 mm with ceramic storage medium could potentially store on the order of 290 terabytes of raw data,” he says.
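
As our own rough plausibility check – assuming, purely for illustration, the same raw areal bit density as the record QR code – the quoted figure holds up:

bits_per_um2 = 29 * 29 / 1.977      # ≈ 425 bits/µm², inferred from the record code
layer_area_um2 = 100_000 * 100_000  # one 100 mm × 100 mm layer, in µm²
bytes_per_layer = bits_per_um2 * layer_area_um2 / 8  # ≈ 0.53 TB per layer
layers_needed = 290e12 / bytes_per_layer             # ≈ 545 layers
print(f"≈ {bytes_per_layer / 1e12:.2f} TB per layer, ≈ {layers_needed:.0f} layers")
# ≈ 545 stacked films in a 20 mm cartridge is one film every ~37 µm –
# plausible for thin-film ceramics, though the real encoding will differ.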

As well as offering this impressive raw capacity, for practical applications it’s also crucial that the ceramic storage offers high writing speed, which determines how efficiently large datasets can be stored, and low energy consumption during writing, which will influence the potential for scalability and sustainability. The researchers are currently working to optimize both of these parameters.

“Humanity has preserved information for millennia when carved in stone, yet much of today’s digital information risks being lost within decades,” project leader Alexander Kirnbauer tells Physics World. “Our long-term goal is to create an ultrastable, sustainable data storage technology capable of preserving information for extremely long times – potentially thousands to millions of years. In essence, we want to develop a form of storage that ensures the knowledge of our digital age does not disappear over time.”

The post World’s smallest QR code paves the way for ultralong-life data storage appeared first on Physics World.

Quantum Systems Accelerator focuses on technologies for computing

19 February 2026 at 15:59

Developing practical technologies for quantum information systems requires the cooperation of academic researchers, national laboratories and industry. That is the mission of the Quantum Systems Accelerator (QSA), which is based at the Lawrence Berkeley National Laboratory in the US.

The QSA’s director Bert de Jong is my guest in this episode of the Physics World Weekly podcast. His academic research focuses on computational chemistry and he explains how this led him to realise that quantum phenomena can be used to develop technologies for solving scientific problems.

In our conversation, de Jong explains why the QSA is developing a range of qubit platforms – including neutral atoms, trapped ions and superconducting qubits – rather than focusing on a single architecture. He champions the co-development of quantum hardware and software to ensure that quantum computing is effective at solving a wide range of problems, from particle physics to chemistry.

We also chat about the QSA’s strong links to industry and de Jong reveals his wish list of scientific problems that he would solve if he had access today to a powerful quantum computer.

This podcast is supported by Oxford Ionics.

The post Quantum Systems Accelerator focuses on technologies for computing appeared first on Physics World.
