
Incoming CERN director-general Mark Thomson outlines his future priorities

How did you get interested in particle physics?

I studied physics at Oxford University and I was the first person in my family to go to university. I then completed a DPhil at Oxford in 1991 studying cosmic rays and neutrinos. In 1992 I moved to University College London as a research fellow. That was the first time I went to CERN and two years later I began working on the Large Electron-Positron Collider, which was the predecessor of the Large Hadron Collider. I was fortunate enough to work on some of the really big measurements of the W and Z bosons and electroweak unification, so it was a great time in my life. In 2000 I moved to the University of Cambridge, where I set up a neutrino group. It was then that I began working at Fermilab – the US’s premier particle-physics lab.

So you flipped from collider physics to neutrino physics?

Over the past 20 years, I have oscillated between them and sometimes have done both in parallel. Probably the biggest step forward was in 2013 when I became spokesperson for the Deep Underground Neutrino Experiment – a really fascinating, challenging and ambitious project. In 2018 I was appointed executive chair of the Science and Technology Facilities Council (STFC) – one of the main UK funding agencies. The STFC funds particle physics and astronomy in the UK and maintains relationships with organisations such as CERN and the Square Kilometre Array Observatory, as well as operating some of the UK’s biggest national infrastructures, such as the Rutherford Appleton Laboratory and the Daresbury Laboratory.

What did that role involve?

It covered strategic funding of particle physics and astronomy in the UK and also involved running a very large scientific organization with about 2800 scientific, technical and engineering staff. It was very good preparation for the role of CERN director-general.

What attracted you to become CERN director-general?

CERN is such an important part of the global particle-physics landscape. But I don’t think there was ever a moment where I just thought “Oh, I must do this”. I’ve spent six years on the CERN Council, so I know the organization well. I realized I had all of the tools to do the job – a combination of the science, knowing the organization and then my experience in previous roles. CERN has been a large part of my life for many years, so it’s a fantastic opportunity for me.

What were your first thoughts when you heard you had got the role?

It was quite a surreal moment. My first thoughts were “Well, OK, that’s fun”, so it didn’t really sink in until the evening. I’m obviously very happy and it was fantastic news but it was almost a feeling of “What happens now?”.

So what happens now as CERN director-general designate?

There will be a little bit of shadowing, but you can’t shadow someone for the whole year – that doesn’t make very much sense. So what I really have to do is understand the organization, how it works from the inside and, of course, get to know the fantastic CERN staff, which I’ve already started doing. A lot of my time at the moment is spent meeting people and understanding how things work.

How might you do things differently?

I don’t think I will do anything too radical. I will have a look at where we can make things work better. But my priority for now is putting in place the team that will work with me from January. That’s quite a big chunk of work.

What do you think your leadership style will be?

I like to put around me a strong leadership team and then delegate and trust the leadership team to deliver. I’m there to set the strategic direction but also to empower them to deliver. That means I can take an outward focus and engage with the member states to promote CERN. I think my leadership style is to put in place a culture where the staff can thrive and operate in a very open and transparent way. That’s very important to me because it builds trust both within the organization and with CERN’s partners. The final thing is that I’m 100% behind CERN being an inclusive organization.

So diversity is an important aspect for you?

I am deeply committed to diversity and CERN is deeply committed to it in all its forms, and that will not change. This is a common value across Europe: our member states absolutely see diversity as being critical, and it means a lot to our scientific communities as well. From a scientific point of view, if we’re not supporting diversity, we’re losing people who are no different from others who come from more privileged backgrounds. Also, diversity at CERN has a special meaning: it means all the normal protected characteristics, but also national diversity. CERN is a community of 24 member states and quite a few associate member states, and ensuring nations are represented is incredibly important. It’s the way you do the best science, ultimately, and it’s the right thing to do.

The LHC is undergoing a £1bn upgrade towards the High-Luminosity LHC (HL-LHC). What will that entail?

The HL-LHC is a big step up in terms of capability and the goal will be to increase the luminosity of the machine. We are also upgrading the detectors to make them even more precise. The HL-LHC will run from about 2030 to the early 2040s. So the data taken by the end of current LHC operations will amount to only about 10% of the overall data set once you add what the HL-LHC is expected to produce.

What physics will that allow?

There’s a very specific measurement that we would like to make around the nature of the Higgs mechanism. There’s something very special about the Higgs boson: it has a very strange vacuum potential, so it’s always there in the vacuum. With the HL-LHC, we’re going to start to study the structure of that potential. That’s a really exciting and fundamental measurement and it’s a place where we might start to see new physics.

Beyond the HL-LHC, you will also be involved in planning what comes next. What are the options?

We have a decision to make on what comes after the HL-LHC in the mid-2040s. It seems a long way off but these projects need a 20-year lead-in. I think the consensus amongst the scientific community for a number of years has been that the next machine must explore the Higgs boson. The motivation for a Higgs factory is incredibly strong.

Yet there has not been much consensus on whether that should be a linear or a circular machine?

My personal view is that a circular collider is the way forward. One option is the Future Circular Collider (FCC) – a 91 km circumference collider that would be built at CERN.

What would the benefits of the FCC be?

We know how to build circular colliders and it gives you significantly more capability than a linear machine by producing more Higgs bosons. It is also a piece of research infrastructure that will be there for many years beyond the electron-positron collider. The other aspect is that at some point in the future, we are going to want a high-energy hadron collider to explore the unknown.

But it won’t come cheap, with estimates of about £12–15bn for the electron–positron version, dubbed the FCC-ee?

While the price tag for the FCC-ee is significant, that is spread over 24 member states for 15 years, and contributions can also come from elsewhere. I’m not saying it’s going to be easy to actually secure that jigsaw puzzle of resources, because money will need to come from outside Europe as well.

China is also considering the Circular Electron Positron Collider (CEPC) that could, if approved, be built by the 2030s. What would happen to the FCC if the CEPC were to go ahead? 

I think that will be part of the European Strategy for Particle Physics, which will happen throughout this year, to think about the ifs and buts. Of course, nothing has really been decided in China. It’s a big project and it might not go ahead. I would say it’s quite easy to put down aggressive timescales on paper but actually delivering them is always harder. The big advantage of CERN is that we have the scientific and engineering heritage in building colliders and operating them. There is only one CERN in the world.

What do you make of alternative technologies such as muon colliders that could be built in the existing LHC tunnel and offer high energies?

It’s an interesting concept but technically we don’t know how to do it. There’s a lot of development work but it’s going to take a long time to turn that into a real machine. So looking at a muon collider on the time scale of the mid-2040s is probably unrealistic. What is critical for an organization like CERN and for global particle physics is that when the HL-LHC stops by 2040, there’s not a large gap without a collider project.

Last year CERN celebrated its 70th anniversary. What do you think particle physics might look like over the next 70 years?

If you look back at the big discoveries over the last 30 years, we’ve seen neutrino oscillations, the Higgs boson, gravitational waves and dark energy. That’s four massive discoveries. In the coming decade we will know a lot more about the nature of the neutrino and the Higgs boson via the HL-LHC. The big hope is that we find something else that we don’t expect.

‘Sneeze simulator’ could improve predictions of pathogen spread

A new “sneeze simulator” could help scientists understand how respiratory illnesses such as COVID-19 and influenza spread. Built by researchers at the Universitat Rovira i Virgili (URV) in Spain, the simulator is a three-dimensional model that incorporates a representation of the nasal cavity as well as other parts of the human upper respiratory tract. According to the researchers, it should help scientists to improve predictive models for respiratory disease transmission in indoor environments, and could even inform the design of masks and ventilation systems that mitigate the effects of exposure to pathogens.

For many respiratory illnesses, pathogen-laden aerosols expelled when an infected person coughs, sneezes or even breathes are important ways of spreading disease. Our understanding of how these aerosols disperse has advanced in recent years, mainly through studies carried out during and after the COVID-19 pandemic. Some of these studies deployed techniques such as spirometry and particle imaging to characterize the distributions of particle sizes and airflow when we cough and sneeze. Others developed theoretical models that predict how clouds of particles will evolve after they are ejected and how droplet sizes change as a function of atmospheric humidity and composition.

To build on this work, the URV researchers sought to understand how the shape of the nasal cavity affects these processes. They argue that neglecting this factor leads to an incomplete understanding of airflow dynamics and particle dispersion patterns, which in turn affects the accuracy of transmission modelling. As evidence, they point out that studies focused on sneezing (which occurs via the nose) and coughing (which occurs primarily via the mouth) detected differences in how far droplets travelled, the amount of time they stayed in the air and their pathogen-carrying potential – all parameters that feed into transmission models. The nasal cavity also affects the shape of the particle cloud ejected, which has previously been found to influence how pathogens spread.

The challenge they face is that the anatomy of the nasal cavity varies greatly from person to person, making it difficult to model. However, the URV researchers say that their new simulator, which is based on realistic 3D-printed models of the upper respiratory tract and nasal cavity, overcomes this limitation, precisely reproducing the way particles are produced when people cough and sneeze.

Reproducing human coughs and sneezes

One of the features that allows the simulator to do this is a variable nostril opening. This enables the researchers to control air flow through the nasal cavity, and thus to replicate different sneeze intensities. The simulator also controls the strength of exhalations, meaning that the team could investigate how this and the size of nasal airways affect aerosol cloud dispersion.

During their experiments, which are detailed in Physics of Fluids, the URV researchers used high-speed cameras and a laser beam to observe how particles disperse following a sneeze. They studied three airflow rates typical of coughs and sneezes and monitored what happened with and without nasal cavity flow. Based on these measurements, they used a well-established model to predict the range of the aerosol cloud produced.

Simulator: Team member Nicolás Catalán with the three-dimensional model of the human upper respiratory tract. The mask in the background hides the 3D model to simulate any impact of the facial geometry on the particle dispersion. (Courtesy: Bureau for Communications and Marketing of the URV)

“We found that nasal exhalation disperses aerosols more vertically and less horizontally, unlike mouth exhalation, which projects them toward nearby individuals,” explains team member Salvatore Cito. “While this reduces direct transmission, the weaker, more dispersed plume allows particles to remain suspended longer and become more uniformly distributed, increasing overall exposure risk.”

These findings have several applications, Cito says. For one, the insights gained could be used to improve models used in epidemiology and indoor air quality management.

“Understanding how nasal exhalation influences aerosol dispersion can also inform the design of ventilation systems in public spaces, such as hospitals, classrooms and transportation systems to minimize airborne transmission risks,” he tells Physics World.

The results also suggest that protective measures such as masks should be designed to block both nasal and oral exhalations, he says, adding that full-face coverage is especially important in high-risk settings.

The researchers’ next goal is to study the impact of environmental factors such as humidity and temperature on aerosol dispersion. Until now, such experiments have only been carried out under controlled isothermal conditions, which does not reflect real-world situations. “We also plan to integrate our experimental findings with computational fluid dynamics simulations to further refine protective models for respiratory aerosol dispersion,” Cito reveals.

Memory of previous contacts affects static electricity on materials

Physicists in Austria have shown that the static electricity acquired by identical material samples can evolve differently over time, based on each sample’s history of contact with other samples. Led by Juan Carlos Sobarzo and Scott Waitukaitis at the Institute of Science and Technology Austria, the team hope that their experimental results could provide new insights into one of the oldest mysteries in physics.

Static electricity – also known as contact electrification or triboelectrification – has been studied for centuries. However, physicists still do not understand some aspects of how it works.

“It’s a seemingly simple effect,” Sobarzo explains. “Take two materials, make them touch and separate them, and they will have exchanged electric charge. Yet, the experiments are plagued by unpredictability.”

This mystery is epitomized by an early experiment carried out by the German-Swedish physicist Johan Wilcke in 1757. When glass was touched to paper, Wilcke found that the glass gained a positive charge, while when paper was touched to sulphur, the paper itself became positively charged.

Triboelectric series

Wilcke concluded that glass will become positively charged when touched to sulphur. This concept formed the basis of the triboelectric series, which ranks materials according to the charge they acquire when touched to another material.

Yet in the intervening centuries, the triboelectric series has proven to be notoriously inconsistent. Despite our vastly improved knowledge of material properties since the time of Wilcke’s experiments, even the latest attempts at ordering materials into triboelectric series have repeatedly failed to hold up to experimental scrutiny.

According to Sobarzo and colleagues, this problem has been confounded by the diverse array of variables associated with a material’s contact electrification. These include its electronic properties, pH, hydrophobicity, and mechanochemistry, to name just a few.

In their new study, the team approached the problem from a new perspective. “In order to reduce the number of variables, we decided to use identical materials,” Sobarzo describes. “Our samples are made of a soft polymer (PDMS) that I fabricate myself in the lab, cut from a single piece of material.”

Starting from scratch

For these identical materials, the team proposed that triboelectric properties could evolve over time as the samples were brought into contact with other, initially identical samples. If this were the case, it would allow the team to build a triboelectric series from scratch.

At first, the results seemed as unpredictable as ever. However, as the same set of samples underwent repeated contacts, the team found that their charging behaviour became more consistent, gradually forming a clear triboelectric series.

Initially, the researchers attempted to uncover correlations between this evolution and variations in the parameters of each sample – with no conclusive results. This led them to consider whether the triboelectric behaviour of each sample was affected by the act of contact itself.

Contact history

“Once we started to keep track of the contact history of our samples – that is, the number of times each sample has been contacted to others – the unpredictability we saw initially started to make sense,” Sobarzo explains. “The more contacts samples would have in their history, the more predictable they would behave. Not only that, but a sample with more contacts in its history will consistently charge negative against a sample with less contacts in its history.”
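
As a toy illustration of that empirical rule (a sketch based only on the behaviour reported above, not the team’s own analysis), a triboelectric series follows directly from contact counts; the sample names and numbers below are hypothetical.

```python
# Illustrative sketch only: predicting which of two identical samples charges
# negative, using the empirical rule reported above -- the sample with more
# prior contacts in its history consistently charges negative.

def charges_negative(contacts_a: int, contacts_b: int) -> str:
    """Return which hypothetical sample (A or B) is expected to charge negative."""
    if contacts_a == contacts_b:
        return "unpredictable (equal contact histories)"
    return "A" if contacts_a > contacts_b else "B"

# A triboelectric series then emerges by sorting samples on contact count
# (fewest contacts at the positive end, most at the negative end).
samples = {"S1": 0, "S2": 5, "S3": 12}      # sample -> prior contacts (invented)
series = sorted(samples, key=samples.get)
print(series)                                           # ['S1', 'S2', 'S3']
print(charges_negative(samples["S3"], samples["S1"]))   # 'A' -> S3 charges negative
```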

To explain the origins of this history-dependent behaviour, the team used a variety of techniques to analyse differences between the surfaces of uncontacted samples, and those which had already been contacted several times. Their measurements revealed just one difference between samples at different positions on the triboelectric series. This was their nanoscale surface roughness, which smoothed out as the samples experienced more contacts.

“I think the main take away is the importance of contact history and how it can subvert the widespread unpredictability observed in tribocharging,” Sobarzo says. “Contact is necessary for the effect to happen, it’s part of the name ‘contact electrification’, and yet it’s been widely overlooked.”

The team is still uncertain how surface roughness affects their samples’ place within the triboelectric series. However, their results could now provide the first steps towards a comprehensive model that can predict a material’s triboelectric properties based on its contact-induced surface roughness.

Sobarzo and colleagues are hopeful that such a model could enable robust methods for predicting the charges which any given pair of materials will acquire as they touch each other and separate. In turn, it may finally help to provide a solution to one of the most long-standing mysteries in physics.

The research is described in Nature.

Wireless deep brain stimulation reverses Parkinson’s disease in mice

Nanoparticle-mediated DBS (I) Pulsed NIR irradiation triggers the thermal activation of TRPV1 channels. (II, III) NIR-induced β-syn peptide release into neurons disaggregates α-syn fibrils and thermally activates autophagy to clear the fibrils. This therapy effectively reverses the symptoms of Parkinson’s disease. Created using BioRender.com. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)

A photothermal, nanoparticle-based deep brain stimulation (DBS) system has successfully reversed the symptoms of Parkinson’s disease in laboratory mice. Under development by researchers in Beijing, China, the injectable, wireless DBS not only reversed neuron degeneration, but also boosted dopamine levels by clearing out the buildup of harmful fibrils around dopamine neurons. Following DBS treatment, diseased mice exhibited near comparable locomotive behaviour to that of healthy control mice.

Parkinson’s disease is a chronic brain disorder characterized by the degeneration of dopamine-producing neurons and the subsequent loss of dopamine in regions of the brain. Current DBS treatments focus on amplifying dopamine signalling and production, and may require permanent implantation of electrodes in the brain. Another approach under investigation is optogenetics, which involves gene modification. Both techniques increase dopamine levels and reduce Parkinsonian motor symptoms, but they do not restore degenerated neurons to stop disease progression.

Team leader Chunying Chen from the National Center for Nanoscience and Technology. (Courtesy: Chunying Chen)

The research team, at the National Center for Nanoscience and Technology of the Chinese Academy of Sciences, hypothesized that the heat-sensitive receptor TRPV1, which is highly expressed in dopamine neurons, could serve as a modulatory target to activate dopamine neurons in the substantia nigra of the midbrain. This region contains a large concentration of dopamine neurons and plays a crucial role in how the brain controls bodily movement.

Previous studies have shown that neuron degeneration is mainly driven by α-synuclein (α-syn) fibrils aggregating in the substantia nigra. Successful treatment, therefore, relies on removing this buildup, which requires restarting the intracellular autophagic process (in which a cell breaks down and removes unnecessary or dysfunctional components).

As such, principal investigator Chunying Chen and colleagues aimed to develop a therapeutic system that could reduce α-syn accumulation by simultaneously disaggregating α-syn fibrils and initiating the autophagic process. Their three-component DBS nanosystem, named ATB (Au@TRPV1@β-syn), combines photothermal gold nanoparticles, dopamine neuron-activating TRPV1 antibodies, and β-synuclein (β-syn) peptides that break down α-syn fibrils.

The ATB nanoparticles anchor to dopamine neurons through the TRPV1 receptor and then, acting as nanoantennae, convert pulsed near-infrared (NIR) irradiation into heat. This activates the heat-sensitive TRPV1 receptor and restores degenerated dopamine neurons. At the same time, the nanoparticles release β-syn peptides that clear out α-syn fibril buildup and stimulate intracellular autophagy.

The researchers first tested the system in vitro in cellular models of Parkinson’s disease. They verified that under NIR laser irradiation, ATB nanoparticles activate neurons through photothermal stimulation by acting on the TRPV1 receptor, and that the nanoparticles successfully counteracted the α-syn preformed fibril (PFF)-induced death of dopamine neurons. In cell viability assays, neuron death was reduced from 68% to zero following ATB nanoparticle treatment.

Next, Chen and colleagues investigated mice with PFF-induced Parkinson’s disease. The DBS treatment begins with stereotactic injection of the ATB nanoparticles directly into the substantia nigra. They selected this approach over systemic administration because it provides precise targeting, avoids the blood–brain barrier and achieves a high local nanoparticle concentration with a low dose – potentially boosting treatment effectiveness.

Following injection of either nanoparticles or saline, the mice underwent pulsed NIR irradiation once a week for five weeks. The team then performed a series of tests to assess the animals’ motor abilities (after a week of training), comparing the performance of treated and untreated PFF mice, as well as healthy control mice. This included the rotarod test, which measures the time until the animal falls from a rotating rod that accelerates from 5 to 50 rpm over 5 min, and the pole test, which records the time for mice to crawl down a 75 cm-long pole.

Motor tests Results of (left to right) rotarod, pole and open field tests, for control mice, mice with PFF-induced Parkinson’s disease, and PFF mice treated with ATB nanoparticles and NIR laser irradiation. (Courtesy: CC BY-NC/Science Advances 10.1126/sciadv.ado4927)

The team also performed an open field test to evaluate locomotive activity and exploratory behaviour. Here, mice are free to move around a 50 x 50 cm area, while their movement paths and the number of times they cross a central square are recorded. In all tests, mice treated with nanoparticles and irradiation significantly outperformed untreated controls, with near comparable performance to that of healthy mice.

Visualizing the dopamine neurons via immunohistochemistry revealed a reduction in neurons in PFF-treated mice compared with controls. This loss was reversed following nanoparticle treatment. Safety assessments determined that the treatment did not cause biochemical toxicity and that the heat generated by the NIR-irradiated ATB nanoparticles did not cause any considerable damage to the dopamine neurons.

Eight weeks after treatment, none of the mice experienced any toxicities. The ATB nanoparticles remained stable in the substantia nigra, with only a few particles migrating to cerebrospinal fluid. The researchers also report that the particles did not migrate to the heart, liver, spleen, lung or kidney and were not found in blood, urine or faeces.

Chen tells Physics World that having discovered the neuroprotective properties of gold clusters in Parkinson’s disease models, the researchers are now investigating therapeutic strategies based on gold clusters. Their current research focuses on engineering multifunctional gold cluster nanocomposites capable of simultaneously targeting α-syn aggregation, mitigating oxidative stress and promoting dopamine neuron regeneration.

The study is reported in Science Advances.

How should scientists deal with politicians who don’t respect science?

Three decades ago – in May 1995 – the British-born mathematical physicist Freeman Dyson published an article in the New York Review of Books. Entitled “The scientist as rebel”, it described how all scientists have one thing in common. No matter what their background or era, they are rebelling against the restrictions imposed by the culture in which they live.

“For the great Arab mathematician and astronomer Omar Khayyam, science was a rebellion against the intellectual constraints of Islam,” Dyson wrote. Leading Indian physicists in the 20th century, he added, were rebelling against their British colonial rulers and the “fatalistic ethic of Hinduism”. Even Dyson traced his interest in science as an act of rebellion against the drudgery of compulsory Latin and football at school.

“Science is an alliance of free spirits in all cultures rebelling against the local tyranny that each culture imposes,” he wrote. Through those acts of rebellion, scientists expose “oppressive and misguided conceptions of the world”. The discovery of evolution and of DNA changed our sense of what it means to be human, he said, while black holes and Gödel’s theorem gave us new views of the universe and the nature of mathematics.

But Dyson feared that this view of science was being occluded. Writing in the 1990s, a time of furious academic debate about the “social construction of science”, he worried that science’s liberating role was becoming hidden by a cabal of sociologists and philosophers who viewed scientists as being like any other humans, governed by social, psychological and political motives. Dyson didn’t disagree with that view, but he underlined that nature is the ultimate arbiter of what’s important.

Today’s rebels

One wonders what Dyson, who died in 2020, would make of current events were he alive today. It’s no longer just a small band of academics disputing science. Its opponents also include powerful and highly placed politicians, who tar scientists and scientific findings as lacking objectivity and being politically motivated. Science, they say, is politics by other means. They then use that charge to justify ignoring or openly rejecting scientific findings when creating regulations and making decisions.

Thousands of researchers, for instance, contribute to efforts by the United Nations Intergovernmental Panel on Climate Change (IPCC) to measure the impact and consequences of the rising amounts of carbon dioxide in the atmosphere. Yet US President Donald Trump – speaking after Hurricane Helene left a trail of destruction across the south-east US last year – called climate change “one of the great scams”. Meanwhile, US chief justice John Roberts once rejected using mathematics to quantify the partisan effects of gerrymandering, calling it “sociological gobbledygook”.

These attitudes are not only anti-science but also undermine democracy by sidelining experts and dissenting voices, curtailing real debate, scapegoating and harming citizens.

A worrying precedent for how things may play out in the Trump administration occurred in 2012, when North Carolina’s legislators passed House Bill 819. By prohibiting the use of models of sea-level rise in policies to protect people living near the coast from flooding, the bill damaged the ability of state officials to protect the state’s coastline, resources and citizens. It also prevented other officials from fulfilling their duty to advise and protect people against threats to life and property.

In the current superheated US political climate, many scientific findings are charged with being agenda-driven rather than the outcomes of checked and peer-reviewed investigations. In the first Trump administration, bills were introduced in the US Congress to stop politicians from using science produced by the Department of Energy in policies to avoid admitting the reality of climate change.

We can expect more anti-scientific efforts, if the first Trump administration is anything to go by. Dyson’s rebel alliance, it seems, now faces not just posturing academics but a Galactic Empire.

The critical point

In his 1995 essay, Dyson described how scientists can be liberators by abstaining from political activity rather than militantly engaging in it. But how might he have seen them meeting this moment? Dyson would surely not see them turning away from their work to become politicians themselves. After all, it’s abstaining from politics that empowers scientists to be “in rebellion against the restrictions” in the first place. But Dyson would also see them as aware that science is not the driving force in creating policies; political implementation of scientific findings ultimately depends on politicians appreciating the authority and independence of these findings.

One of Trump’s most audacious “Presidential Actions”, made in the first week of his presidency, was to define sex. The action makes a female “a person belonging, at conception, to the sex that produces the large reproductive cell” and a male “a person belonging, at conception, to the sex that produces the small reproductive cell”. Trump ordered the government to use this “fundamental and incontrovertible reality” in all regulations.

An editorial in Nature (563 5) said that this “has no basis in science”, while cynics, citing certain biological interpretations that all human zygotes and embryos are initially effectively female, gleefully insisted that the order makes all of us female, including the new US president. For me and other Americans, Trump’s action restructures the world as it has been since Genesis.

Still, I imagine that Dyson would still see his rebels as hopeful, knowing that politicians don’t have the last word on what they are doing. For, while politicians can create legislation, they cannot legislate creation.

Sometimes rebels have to be stoic.

Scientists discover secret of ice-free polar-bear fur

In the teeth of the Arctic winter, polar-bear fur always remains free of ice – but how? Researchers in Ireland and Norway say they now have the answer, and it could have applications far beyond wildlife biology. Having traced the fur’s ice-shedding properties to a substance produced by glands near the root of each hair, the researchers suggest that chemicals found in this substance could form the basis of environmentally-friendly new anti-icing surfaces and lubricants.

The substance in the bear’s fur is called sebum, and team member Julian Carolan, a PhD candidate at Trinity College Dublin and the AMBER Research Ireland Centre, explains that it contains three major components: cholesterol, diacylglycerols and anteisomethyl-branched fatty acids. These chemicals have a similar ice adsorption profile to that of perfluoroalkyl (PFAS) polymers, which are commonly employed in anti-icing applications.

“While PFAS are very effective, they can be damaging to the environment and have been dubbed ‘forever chemicals’,” explains Carolan, the lead author of a Science Advances paper on the findings. “Our results suggest that we could replace these fluorinated substances with these sebum components.”

With and without sebum

Carolan and colleagues obtained these results by comparing polar bear hairs naturally coated with sebum to hairs where the sebum had been removed using a surfactant found in washing-up liquid. Their experiment involved forming a 2 x 2 x 2 cm block of ice on the samples and placing them in a cold chamber. Once the ice was in place, the team used a force gauge on a track to push it off. By measuring the maximum force needed to remove the ice and dividing this by the area of the sample, they obtained ice adhesion strengths for the washed and unwashed fur.
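
The arithmetic behind those adhesion values is simple enough to sketch. In this illustrative example the 2 × 2 cm ice–fur contact face comes from the description above, while the peak force is an invented number chosen to land at the 50 kPa scale reported below.

```python
# Ice adhesion strength = maximum removal force / contact area.
# The 2 cm x 2 cm contact face comes from the experiment described above;
# the peak force below is purely illustrative.

area_m2 = 0.02 * 0.02        # 2 cm x 2 cm contact area, in square metres
max_force_n = 20.0           # hypothetical peak force read from the gauge (N)

adhesion_pa = max_force_n / area_m2
print(f"ice adhesion: {adhesion_pa / 1e3:.0f} kPa")   # 20 N / 4 cm^2 -> 50 kPa
```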

This experiment showed that the ice adhesion of unwashed polar bear fur is exceptionally low. While the often-accepted threshold for “icephobicity” is around 100 kPa, the unwashed fur measured as little as 50 kPa. In contrast, the ice adhesion of washed (sebum-free) fur is much higher, coming in at least 100 kPa greater than the unwashed fur.

What is responsible for the low ice adhesion?

Guided by this evidence of sebum’s role in keeping the bears ice-free, the researchers’ next task was to determine its exact composition. They did this using a combination of techniques, including gas chromatography, mass spectrometry, liquid chromatography-mass spectrometry and nuclear magnetic resonance spectroscopy. They then used density functional theory methods to calculate the adsorption energy of the major components of the sebum. “In this way, we were able to identify which elements were responsible for the low ice adhesion we had identified,” Carolan tells Physics World.

This is not the first time that researchers have investigated animals’ anti-icing properties. A team led by Anne-Marie Kietzig at Canada’s McGill University, for example, previously found that penguin feathers also boast an impressively low ice adhesion. Team leader Bodil Holst says that she was inspired to study polar bear fur by a nature documentary that depicted the bears entering and leaving water to hunt, rolling around in the snow and sliding down hills – all while remaining ice-free. She and her colleagues collaborated with Jon Aars and Magnus Andersen of the Norwegian Polar Institute, which carries out a yearly polar bear monitoring campaign in Svalbard, Norway, to collect their samples.

Insights into human technology

As well as solving an ecological mystery and, perhaps, inspiring more sustainable new anti-icing lubricants, Carolan says the team’s work is also yielding insights into technologies developed by humans living in the Arctic. “Inuit people have long used polar bear fur for hunting stools (nikorfautaq) and sandals (tuterissat),” he explains. “It is notable that traditional preparation methods protect the sebum on the fur by not washing the hair-covered side of the skin. This maintains its low ice adhesion property while allowing for quiet movement on the ice – essential for still hunting.”

The researchers now plan to explore whether it is possible to apply the sebum components they identified to surfaces as lubricants. Another potential extension, they say, would be to pursue questions about the ice-free properties of other Arctic mammals such as reindeer, the arctic fox and wolverine. “It would be interesting to discover if these animals share similar anti-icing properties,” Carolan says. “For example, wolverine fur is used in parka ruffs by Canadian Inuit as frost formed on it can easily be brushed off.”

Inverse design configures magnon-based signal processor

For the first time, inverse design has been used to engineer specific functionalities into a universal spin-wave-based device. It was created by Andrii Chumak and colleagues at Austria’s University of Vienna, who hope that their magnonic device could pave the way for substantial improvements to the energy efficiency of data processing techniques.

Inverse design is a fast-growing technique for developing new materials and devices that are specialized for highly specific uses. Starting from a desired functionality, inverse-design algorithms work backwards to find the best system or structure to achieve that functionality.

“Inverse design has a lot of potential because all we have to do is create a highly reconfigurable medium, and give it control over a computer,” Chumak explains. “It will use algorithms to get any functionality we want with the same device.”

One area where inverse design could be useful is creating systems for encoding and processing data using quantized spin waves called magnons. These quasiparticles are collective excitations that propagate in magnetic materials. Information can be encoded in the amplitude, phase, and frequency of magnons – which interact with radio-frequency (RF) signals.

Collective rotation

A magnon propagates by the collective rotation of stationary spins (no particles move), so it offers a highly energy-efficient way to transfer and process information. So far, however, magnonics has been limited by existing approaches to the design of RF devices.

“Usually we use direct design – where we know how the spin waves behave in each component, and put the components together to get a working device,” Chumak explains. “But this sometimes takes years, and only works for one functionality.”

Recently, two theoretical studies considered how inverse design could be used to create magnonic devices. These took the physics of magnetic materials as a starting point to engineer a neural-network device.

Building on these results, Chumak’s team set out to show how that approach could be realized in the lab using a 7×7 array of independently controlled current loops, each generating a small magnetic field.

Thin magnetic film

The team attached the array to a thin magnetic film of yttrium iron garnet. As RF spin waves propagated through the film, differences in the strengths of the magnetic fields generated by the loops induced a variety of effects, including phase shifts, interference and scattering. This in turn created complex patterns that could be tuned in real time by adjusting the current in each individual loop.

To make these adjustments, the researchers developed a pair of feedback-loop algorithms. These took a desired functionality as an input, and iteratively adjusted the current in each loop to optimize the spin wave propagation in the film for specific tasks.
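
The article does not spell out the algorithms’ internals, but the closed-loop idea can be sketched generically: treat the measured deviation from the target RF response as a black-box cost and greedily keep current adjustments that reduce it. Everything below (the stand-in cost function, step size, iteration count) is hypothetical, not the Vienna group’s code.

```python
import numpy as np

# Generic inverse-design sketch (illustrative only): iteratively nudge the
# current in each of the 7x7 loops and keep any change that improves a
# measured figure of merit for the desired RF functionality.

rng = np.random.default_rng(0)
currents = np.zeros((7, 7))          # control currents for the loop array (arb. units)

def cost(currents):
    """Black-box stand-in for the measured deviation between the device's
    RF response and the target response (e.g. a notch filter)."""
    target = np.linspace(1, 0, 49).reshape(7, 7)   # hypothetical target pattern
    return np.sum((currents - target) ** 2)

best = cost(currents)
for step in range(2000):
    i, j = rng.integers(0, 7, size=2)     # pick one loop at random
    trial = currents.copy()
    trial[i, j] += rng.normal(scale=0.1)  # small random nudge to its current
    c = cost(trial)
    if c < best:                          # greedy feedback: keep improvements
        currents, best = trial, c

print(f"final cost: {best:.4f}")
```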

This approach enabled them to engineer two specific signal-processing functionalities in their device. These are a notch filter, which blocks a specific range of frequencies while allowing others to pass through; and a demultiplexer, which separates a combined signal into its distinct component signals. “These RF applications could potentially be used for applications including cellular communications, WiFi, and GPS,” says Chumak.

While the device is a success in terms of functionality, it has several drawbacks, explains Chumak. “The demonstrator is big and consumes a lot of energy, but it was important to understand whether this idea works or not. And we proved that it did.”

Through their future research, the team will now aim to reduce these energy requirements, and will also explore how inverse design could be applied more universally – perhaps paving the way for ultra-efficient magnonic logic gates.

The research is described in Nature Electronics.

The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t

A tense particle-physics showdown will reach new heights in 2025. Over the past 25 years researchers have seen a persistent and growing discrepancy between the theoretical predictions and experimental measurements of an inherent property of the muon – its anomalous magnetic moment. Known as the “muon g-2”, this property serves as a robust test of our understanding of particle physics.

Theoretical predictions of the muon g-2 are based on the Standard Model of particle physics (SM). This is our current best theory of fundamental forces and particles, but it does not agree with everything observed in the universe. While the tensions between g-2 theory and experiment have challenged the foundations of particle physics and potentially offer a tantalizing glimpse of new physics beyond the SM, it turns out that there is more than one way to make SM predictions.

In recent years, a new SM prediction of the muon g-2 has emerged that questions whether the discrepancy exists at all, suggesting that there is no new physics in the muon g-2. For the particle-physics community, the stakes are higher than ever.

Rising to the occasion?

To understand how this discrepancy in the value of the muon g-2 arises, imagine you’re baking some cupcakes. A well-known and trusted recipe tells you that by accurately weighing the ingredients using your kitchen scales you will make enough batter to give you 10 identical cupcakes of a given size. However, to your surprise, after portioning out the batter, you end up with 11 cakes of the expected size instead of 10.

What has happened? Maybe your scales are imprecise. You check and find that you’re confident that your measurements are accurate to 1%. This means each of your 10 cupcakes could be 1% larger than they should be, or you could have enough leftover mixture to make 1/10th of an extra cupcake, but there’s no way you should have a whole extra cupcake.

You repeat the process several times, always with the same outcome. The recipe clearly states that you should have batter for 10 cupcakes, but you always end up with 11. Not only do you now have a worrying number of cupcakes to eat but, thanks to all your repeated experiments, you’re more confident that you are following all the steps and measurements accurately. You start to wonder whether something is missing from the recipe itself.

Before you jump to conclusions, it’s worth checking that there isn’t something systematically wrong with your scales. You ask several friends to follow the same recipe using their own scales. Amazingly, when each friend follows the recipe, they all end up with 11 cupcakes. You are more sure than ever that the cupcake recipe isn’t quite right.

You’re really excited now, as you have corroborating evidence that something is amiss. This is unprecedented, as the recipe is considered sacrosanct. Cupcakes have never been made differently and if this recipe is incomplete there could be other, larger implications. What if all cake recipes are incomplete? These claims are causing a stir, and people are starting to take notice.

Food for thought Just as a trusted cake recipe can be relied on to produce reliable results, so the Standard Model has been incredibly successful at predicting the behaviour of fundamental particles and forces. However, there are instances where the Standard Model breaks down, prompting scientists to hunt for new physics that will explain this mystery. (Courtesy: iStock/Shutter2U)

Then, a new friend comes along and explains that they checked the recipe by simulating baking the cupcakes using a computer. This approach doesn’t need physical scales, but it uses the same recipe. To your shock, the simulation produces 11 cupcakes of the expected size, with a precision as good as when you baked them for real.

There is no explaining this. You were certain that the recipe was missing something crucial, but now a computer simulation is telling you that the recipe has always predicted 11 cupcakes.

Of course, one extra cupcake isn’t going to change the world. But what if instead of cake, the recipe was particle physics’ best and most-tested theory of everything, and the ingredients were the known particles and forces? And what if the number of cupcakes was a measurable outcome of those particles interacting, one hurtling towards a pivotal bake-off between theory and experiment?

What is the muon g-2?

Muons are elementary particles in the SM with half-integer spin; they are similar to electrons but some 207 times heavier. Muons interact directly with other SM particles via electromagnetism (photons) and the weak force (W and Z bosons, and the Higgs particle). All quarks and leptons – such as electrons and muons – have a magnetic moment due to their intrinsic angular momentum or “spin”. Quantum theory dictates that the magnetic moment is related to the spin by a quantity known as the “g-factor”. Initially, this value was predicted to be g = 2 for both the electron and the muon.

However, these calculations did not take into account the effects of “radiative corrections” – the continuous emission and re-absorption of short-lived “virtual particles” (see box) by the electron or muon – which increase g by about 0.1%. This seemingly minute difference is referred to as the “anomalous g-factor”, aµ = (g – 2)/2. As well as the electromagnetic and weak interactions, the muon’s magnetic moment also receives contributions from the strong force, even though the muon does not itself participate in strong interactions. The strong contributions arise through the muon’s interaction with the photon, which in turn interacts with quarks. The quarks then themselves interact via the strong-force mediator, the gluon.
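
That “about 0.1%” is, at leading order, the famous single-virtual-photon correction calculated by Schwinger (discussed in the box below). A quick sketch of the numbers:

```python
import math

# Leading-order QED correction to the g-factor (Schwinger's result):
# a = (g - 2)/2 = alpha / (2*pi), the "about 0.1%" quoted above.

alpha = 1 / 137.035999       # fine-structure constant (approximate value)
a_leading = alpha / (2 * math.pi)

g = 2 * (1 + a_leading)
print(f"a = {a_leading:.6f}, g = {g:.5f}")   # a ~ 0.001161, g ~ 2.00232
```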

This effect, and any discrepancies, are of particular interest to physicists because the g-factor acts as a probe of the existence of other particles – both known particles such as electrons and photons, and other, as yet undiscovered, particles that are not part of the SM.

“Virtual” particles

Illustration of the subatomic particles of the Standard Model. (Courtesy: CERN)

The Standard Model of particle physics (SM) describes the basic building blocks – the particles and forces – of our universe. It includes the elementary particles – quarks and leptons – that make up all known matter as well as the force-carrying particles, or bosons, that influence the quarks and leptons. The SM also explains three of the four fundamental forces that govern the universe – electromagnetism, the strong force and the weak force. Gravity, however, is not adequately explained within the model.

“Virtual” particles arise from the universe’s underlying, non-zero background energy, known as the vacuum energy. Heisenberg’s uncertainty principle states that it is impossible to simultaneously measure both the position and momentum of a particle. A non-zero energy always exists for “something” to arise from “nothing” if the “something” returns to “nothing” in a very short interval – before it can be observed. Therefore, at every point in space and time, virtual particles are rapidly created and annihilated.

The “g-factor” in muon g-2 represents the total value of the magnetic moment of the muon, including all corrections from the vacuum. If there were no virtual interactions, the muon’s g-factor would be exactly g = 2. The first confirmation of g > 2 came in 1948 when Julian Schwinger calculated the simplest contribution from a virtual photon interacting with an electron (Phys. Rev. 73 416). His famous result explained a measurement from the same year that found the electron’s g-factor to be slightly larger than 2 (Phys. Rev. 74 250). This confirmed the existence of virtual particles and paved the way for the invention of relativistic quantum field theories like the SM.

The muon, the (lighter) electron and the (heavier) tau lepton all have an anomalous magnetic moment. However, because the muon is heavier than the electron, the impact of heavy new particles on the muon g-2 is amplified. While tau leptons are even heavier than muons, tau leptons are extremely short-lived (muons have a lifetime of 2.2 μs, while the lifetime of tau leptons is 0.29 ns), making measurements impracticable with current technologies. Neither too light nor too heavy, the muon is the perfect tool to search for new physics.

New physics beyond the Standard Model (commonly known as BSM physics) is sorely needed because, despite its many successes, the SM does not provide the answers to all that we observe in the universe, such as the existence of dark matter. “We know there is something beyond the predictions of the Standard Model, we just don’t know where,” says Patrick Koppenburg, a physicist at the Dutch National Institute for Subatomic Physics (Nikhef) in the Netherlands, who works on the LHCb Experiment at CERN and on future collider experiments. “This new physics will provide new particles that we haven’t observed yet. The LHC collider experiments are actively searching for such particles but haven’t found anything to date.”

Testing the Standard Model: experiment vs theory

In 2021 the Muon g-2 experiment at Fermilab in the US captured the world’s attention with the release of its first result (Phys. Rev. Lett. 126 141801). It had directly measured the muon g-2 to an unprecedented precision of 460 parts per billion (ppb). While the LHC experiments attempt to produce and detect BSM particles directly, the Muon g-2 experiment takes a different, complementary approach – it compares precision measurements of particles with SM predictions to expose discrepancies that could be due to new physics. In the Muon g-2 experiment, muons travel round and round a circular ring, confined by a strong magnetic field. In this field, the muons precess like spinning tops (see image at the top of this article). The frequency of this precession is the anomalous magnetic moment and it can be extracted by detecting where and when the muons decay.
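
For a rough sense of scale, the anomalous precession frequency the experiment extracts follows from ω_a = aµeB/mµ. This is a back-of-envelope sketch, not the collaboration’s analysis; the ~1.45 T storage-ring field used below is quoted from memory and should be treated as an assumption.

```python
import math

# Back-of-envelope anomalous precession frequency: omega_a = a_mu * e * B / m_mu.
# B = 1.45 T is taken as the nominal storage-ring field (an assumption);
# a_mu is the known value of the anomalous magnetic moment.

e = 1.602176634e-19          # elementary charge (C)
m_mu = 1.883531627e-28       # muon mass (kg)
B = 1.45                     # magnetic field (T), assumed
a_mu = 0.00116592            # anomalous magnetic moment, (g - 2)/2

omega_a = a_mu * e * B / m_mu                              # rad/s
print(f"f_a ~ {omega_a / (2 * math.pi) / 1e3:.0f} kHz")    # ~229 kHz
```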

Magnetic muons The Muon g-2 experiment at the Fermi National Accelerator Laboratory. (Courtesy: Reidar Hahn/Fermilab, US Department of Energy)

Muon g-2 is an awe-inspiring feat of science and engineering, involving more than 200 scientists from 35 institutions in seven countries. Having served as a manager and run co-ordinator on the experiment, I have been involved in both its operation and the analysis of its results. “A lot of my favourite memories from g-2 are ‘firsts’,” says Saskia Charity, a researcher at the University of Liverpool in the UK and a principal analyser of the Muon g-2 experiment’s results. “The first time we powered the magnet; the first time we stored muons and saw particles in the detectors; and the first time we released a result in 2021.”

The Muon g-2 result turned heads because the measured value was significantly higher than the best SM prediction (at that time) of the muon g-2 (Phys. Rep. 887 1). This SM prediction was the culmination of years of collaborative work by the Muon g-2 Theory Initiative, an international consortium of roughly 200 theoretical physicists (myself among them). In 2020 the collaboration published one community-approved number for the muon g-2. This value had a precision comparable to the Fermilab experiment – resulting in a deviation between the two that has a chance of 1 in 40,000 of being a statistical fluke – making the discrepancy all the more intriguing.
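
Translating those odds into the “number of sigma” physicists usually quote is a one-liner (assuming the conventional two-sided Gaussian tail):

```python
from scipy.stats import norm

# Convert the quoted 1-in-40,000 fluke probability into Gaussian sigmas,
# assuming a two-sided convention.

p = 1 / 40_000
sigma = norm.isf(p / 2)          # inverse survival function of the normal
print(f"{sigma:.1f} sigma")      # ~4.2 sigma
```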

While much of the SM prediction, including contributions from virtual photons and leptons, can be calculated from first principles alone, the strong force contributions involving quarks and gluons are more difficult. However, there is a mathematical link between the strong force contributions to muon g-2 and the probability of experimentally producing hadrons (composite particles made of quarks) from electron–positron annihilation. These so-called “hadronic processes” are something we can observe with existing particle colliders; much like weighing cupcake ingredients, these measurements determine how much each hadronic process contributes to the SM correction to the muon g-2. This is the approach used to calculate the 2020 result, producing what is called a “data-driven” prediction.
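
Schematically, that mathematical link is a dispersion integral (quoted here for orientation; normalizations vary between references):

$$a_\mu^{\mathrm{HVP,\,LO}} = \frac{\alpha^2}{3\pi^2}\int_{s_{\mathrm{th}}}^{\infty}\frac{\mathrm{d}s}{s}\,K(s)\,R(s),$$

where R(s) is the measured ratio of the e⁺e⁻ → hadrons cross-section to the e⁺e⁻ → µ⁺µ⁻ one, and K(s) is a known kinematic kernel. The measured hadronic cross sections enter through R(s), which is why the result is called “data-driven”.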

Measurements were performed at many experiments, including the BaBar Experiment at the Stanford Linear Accelerator Center (SLAC) in the US, the BESIII Experiment at the Beijing Electron–Positron Collider II in China, the KLOE Experiment at the DAFNE Collider in Italy, and the SND and CMD-2 experiments at the VEPP-2000 electron–positron collider in Russia. These different experiments measured a complete catalogue of hadronic processes in different ways over several decades. Together with other members of the Muon g-2 Theory Initiative, I combined these findings to produce the data-driven SM prediction of the muon g-2. There was (and still is) strong, corroborating evidence that this SM prediction is reliable.

At the time, this discrepancy indicated, to a very high level of confidence, the existence of new physics. It seemed more likely than ever that BSM physics had finally been detected in a laboratory.

1 Eyes on the prize

Muon g-2 results from five different experiments. (Courtesy: Muon g-2 collaboration/IOP Publishing)

Over the last two decades, direct experimental measurements of the muon g-2 have become much more precise. The predecessor to the Fermilab experiment was based at Brookhaven National Laboratory in the US, and when that experiment ended, the magnetic ring in which the muons are confined was transported to its current home at Fermilab.

That was until the release of the first SM prediction of the muon g-2 using an alternative method called lattice QCD (Nature 593 51). Like the data-driven prediction, lattice QCD is a way to tackle the tricky hadronic contributions, but it doesn’t use experimental results as a basis for the calculation. Instead, it treats the universe as a finite box containing a grid of points (a lattice) that represent points in space and time. Virtual quarks and gluons are simulated inside this box, and the results are extrapolated to a universe of infinite size and continuous space and time. This method requires a huge amount of computer power to arrive at an accurate, physical result but it is a powerful tool that directly simulates the strong-force contributions to the muon g-2.
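
The extrapolation step at the end can be illustrated with a toy fit (invented numbers, not a real lattice analysis): compute the observable at several lattice spacings a, fit the leading a² dependence and read off the a → 0 limit.

```python
import numpy as np

# Toy continuum extrapolation (illustrative only): fit results obtained at a
# few finite lattice spacings to f(a) = c0 + c1*a^2 and evaluate at a = 0.

a = np.array([0.12, 0.09, 0.06, 0.04])       # lattice spacings in fm (invented)
y = np.array([695.0, 701.0, 705.5, 707.5])   # observable at each spacing (invented)

c1, c0 = np.polyfit(a**2, y, deg=1)          # linear fit in a^2
print(f"continuum (a -> 0) value: {c0:.1f}")
```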

The researchers who published this new result are also part of the Muon g-2 Theory Initiative. Several other groups within the consortium have since published lattice QCD calculations, producing values for g-2 that are in good agreement with each other and with the experiment at Fermilab. “Striking agreement, to better than 1%, is seen between results from multiple groups,” says Christine Davies of the University of Glasgow in the UK, a member of the High-precision lattice QCD (HPQCD) collaboration within the Muon g-2 Theory Initiative. “A range of methods have been developed to improve control of uncertainties, meaning further, more complete lattice QCD calculations are now appearing. The aim is for several results with 0.5% uncertainty in the near future.”

If these lattice QCD predictions are the true SM value, there is no muon g-2 discrepancy between experiment and theory. However, this would conflict with the decades of experimental measurements of hadronic processes that were used to produce the data-driven SM prediction.

To make the situation even more confusing, a new experimental measurement of the muon g-2’s dominant hadronic process was released in 2023 by the CMD-3 experiment (Phys. Rev. D 109 112002). This result is significantly larger than all the other, older measurements of the same process, including that from its own predecessor experiment, CMD-2 (Phys. Lett. B 648 28). With this new value, the data-driven SM prediction of aµ = (g – 2)/2 is in agreement with the Muon g-2 experiment and lattice QCD. Over the last few years, the CMD-3 measurements (and all older measurements) have been scrutinized in great detail, but the source of the difference between them remains unknown.

2 Which Standard Model?

Chart of the Muon g-2 experiment results versus the various Standard Model predictions
(Courtesy: Alex Keshavarzi/IOP Publishing)

Summary of the four values of the anomalous magnetic moment of the muon aμ that have been obtained from different experiments and models. The 2020 and CMD-3 predictions were both obtained using a data-driven approach. The lattice QCD value is a theoretical prediction and the Muon g-2 experiment value was measured at Fermilab in the US. The positions of the points with respect to the y axis have been chosen for clarity only.

Since then, the Muon g-2 experiment at Fermilab has confirmed and improved on that first result to a precision of 200 ppb (Phys. Rev. Lett. 131 161802). “Our second result based on the data from 2019 and 2020 has been the first step in increasing the precision of the magnetic anomaly measurement,” says Peter Winter of Argonne National Laboratory in the US and co-spokesperson for the Muon g-2 experiment.

The new result is in full agreement with the SM predictions from lattice QCD and the data-driven prediction based on CMD-3’s measurement. However, with the increased precision, it now disagrees with the 2020 SM prediction by even more than in 2021.

The community therefore faces a conundrum. The muon g-2 either heralds a much-needed discovery of BSM physics or represents a remarkable, multi-method confirmation of the Standard Model.

On your marks, get set, bake!

In 2025 the Muon g-2 experiment at Fermilab will release its final result. “It will be exciting to see our final result for g-2 in 2025 that will lead to the ultimate precision of 140 parts-per-billion,” says Winter. “This measurement of g-2 will be a benchmark result for years to come for any extension to the Standard Model of particle physics.” Assuming this agrees with the previous results, it will further widen the discrepancy with the 2020 data-driven SM prediction.

For the lattice QCD SM prediction, the many groups calculating the muon’s anomalous magnetic moment have since corroborated and improved the precision of the first lattice QCD result. Their next task is to combine the results from the various lattice QCD predictions to arrive at one SM prediction from lattice QCD. While this is not a trivial task, the agreement between the groups means a single lattice QCD result with improved precision is likely within the next year, increasing the tension with the 2020 data-driven SM prediction.

New, robust experimental measurements of the muon g-2’s dominant hadronic processes are also expected over the next couple of years. The previous experiments will update their measurements with more precise results and a newcomer measurement is expected from the Belle-II experiment in Japan. It is hoped that they will confirm either the catalogue of older hadronic measurements or the newer CMD-3 result. Should they confirm the older data, the potential for new physics in the muon g-2 lives on, but the discrepancy with the lattice QCD predictions will still need to be investigated. If the CMD-3 measurement is confirmed, it is likely the older data will be superseded, and the muon g-2 will have once again confirmed the Standard Model as the best and most resilient description of the fundamental nature of our universe.

Large group of people stood holding a banner that says Muon g-2
International consensus The Muon g-2 Theory Initiative pictured at their seventh annual plenary workshop at the KEK Laboratory, Japan in September 2024. (Courtesy: KEK-IPNS)

The task before the Muon g-2 Theory Initiative is to solve these dilemmas and update the 2020 data-driven SM prediction. Two new publications are planned. The first will be released in 2025 (to coincide with the new experimental result from Fermilab). This will describe the current status and ongoing body of work, but a full, updated SM prediction will have to wait for the second paper, likely to be published several years later.

It’s going to be an exciting few years. Being part of both the experiment and the theory means I have been privileged to see the process from both sides. For the SM prediction, much work is still to be done but science with this much at stake cannot be rushed and it will be fascinating work. I’m looking forward to the journey just as much as the outcome.

The post The muon’s magnetic moment exposes a huge hole in the Standard Model – unless it doesn’t appeared first on Physics World.

Low-temperature plasma halves cancer recurrence in mice

Treatment with low-temperature plasma is emerging as a novel cancer therapy. Previous studies have shown that plasma can deactivate cancer cells in vitro, suppress tumour growth in vivo and potentially induce anti-tumour immunity. Researchers at the University of Tokyo are investigating another promising application – the use of plasma to inhibit tumour recurrence after surgery.

Lead author Ryo Ono and colleagues demonstrated that treating cancer resection sites with streamer discharge – a type of low-temperature atmospheric plasma – significantly reduced the recurrence rate of melanoma tumours in mice.

“We believe that plasma is more effective when used as an adjuvant therapy rather than as a standalone treatment, which led us to focus on post-surgical treatment in this study,” says Ono.

In vivo experiments

To create the streamer discharge, the team applied a high-voltage pulse (25 kV, 20 ns, 100 pulses/s) to a 3 mm-diameter rod electrode with a hemispherical tip. The rod was placed in a quartz tube with a 4 mm inner diameter, and the working gas – humid oxygen mixed with ambient air – was flowed through the tube. As electrons in the plasma collide with molecules in the gas, the mixture generates cytotoxic reactive oxygen and nitrogen species.

The researchers performed three experiments on mice with melanoma, a skin cancer with a local recurrence rate of up to 10%. In the first experiment, they injected 11 mice with mouse melanoma cells, resecting the resulting tumours eight days later. They then treated five of the mice with streamer discharge for 10 min, with the mouse placed on a grounded plate and the electrode tip 10 mm above the resection site.

Experimental setup for plasma generation
Experimental setup Streamer discharge generation and treatment. (Courtesy: J. Phys. D: Appl. Phys. 10.1088/1361-6463/ada98c)

Tumour recurrence occurred in five of the six control mice (no plasma treatment) and two of the five plasma-treated mice, corresponding to recurrence rates of 83% and 40%, respectively. In a second experiment with the same parameters, recurrence rates were 44% in nine control mice and 25% in eight plasma-treated mice.
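
For readers who want to check, the quoted percentages follow directly from the raw counts; here is a quick sketch (counts for the second experiment inferred from the reported rates and group sizes):

```python
# Recurrence rates and relative reductions from the counts quoted above.
experiments = {
    "experiment 1": {"control": (5, 6), "plasma": (2, 5)},
    "experiment 2": {"control": (4, 9), "plasma": (2, 8)},  # inferred counts
}
for name, arms in experiments.items():
    rates = {arm: events / total for arm, (events, total) in arms.items()}
    reduction = 1 - rates["plasma"] / rates["control"]
    print(name,
          f"control {rates['control']:.0%}, plasma {rates['plasma']:.0%},",
          f"relative reduction {reduction:.0%}")
```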

In a third experiment, the researchers delayed the surgery until 12 days after cell injection, increasing the size of the tumour before resection. This led to a 100% recurrence rate in the control group of five mice. Only one recurrence was seen in the five plasma-treated mice, although a second mouse that died of unknown causes was also counted as a recurrence, giving a recurrence rate of 40%.

All of the experiments showed that plasma treatment reduced the recurrence rate by roughly 50%. The researchers note that the plasma treatment did not affect the animals’ overall health.

Cytotoxic mechanisms

To further confirm the cytotoxicity of streamer discharge, Ono and colleagues treated cultured melanoma cells for between 0 and 250 s, at an electrode–surface distance of 10 mm. The cells were then incubated for 3, 6 or 24 h. Following plasma treatments of up to 100 s, most cells were still viable 24 h later. But between 100 and 150 s of treatment, the cell survival rate decreased rapidly.

The experiment also revealed a rapid transition from apoptosis (natural programmed cell death) to late apoptosis/necrosis (cell death due to external toxins) between 3 and 24 h post-treatment. Indeed, 24 h after a 150 s plasma treatment, 95% of the dead cells were in the late stages of apoptosis/necrosis. This finding suggests that the observed cytotoxicity may arise from direct induction of apoptosis and necrosis, combined with inhibition of cell growth at extended time points.

In a previous experiment, the researchers used streamer discharge to treat tumours in mice before resection. This treatment delayed tumour regrowth by at least six days, but all mice still experienced local recurrence. In contrast, in the current study, plasma treatment reduced the recurrence rate.

The difference may be due to the different mechanisms by which plasma inhibits tumour recurrence: cytotoxic reactive species killing residual cancer cells at the resection site, or reactive species triggering immunogenic cell death. The team note that either or both of these mechanisms may be at work in the current study.

“Initially, we considered streamer discharge as the main contributor to the therapeutic effect, as it is the primary source of highly reactive short-lived species,” explains Ono. “However, recent experiments suggest that the discharge within the quartz tube also generates a significant amount of long-lived reactive species (with lifetimes typically exceeding 0.1 s), which may contribute to the therapeutic effect.”

One advantage of the streamer discharge device is that it uses only room air and oxygen, without requiring the noble gases employed in other cold atmospheric plasmas. “Additionally, since different plasma types generate different reactive species, we hypothesized that streamer discharge could produce a unique therapeutic effect,” says Ono. “Conducting in vivo experiments with different plasma sources will be an important direction for future research.”

Looking ahead to use in the clinic, Ono believes that the low cost of the device and its operation should make it feasible to use plasma treatment immediately after tumour resection to reduce recurrence risk. “Currently, we have only obtained preliminary results in mice,” he tells Physics World. “Clinical application remains a long-term goal.”

The study is reported in Journal of Physics D: Applied Physics.

The post Low-temperature plasma halves cancer recurrence in mice appeared first on Physics World.

Ultra-high-energy neutrino detection opens a new window on the universe

Using an observatory located deep beneath the Mediterranean Sea, an international team has detected an ultra-high-energy cosmic neutrino with an energy greater than 100 PeV, which is well above the previous record. Made by the KM3NeT neutrino observatory, such detections could enhance our understanding of cosmic neutrino sources or reveal new physics.

“We expect neutrinos to originate from very powerful cosmic accelerators that also accelerate other particles, but which have never been clearly identified in the sky. Neutrinos may provide the opportunity to identify these sources,” explains Paul de Jong, a professor at the University of Amsterdam and spokesperson for the KM3NeT collaboration. “Apart from that, the properties of neutrinos themselves have not been studied as well as those of other particles, and further studies of neutrinos could open up possibilities to detect new physics beyond the Standard Model.”

Neutrinos are subatomic particles with masses less than a millionth of that of the electron. They are electrically neutral and interact with matter only rarely, via the weak force. As a result, neutrinos can travel vast cosmic distances without being deflected by magnetic fields or absorbed by interstellar material. “[This] makes them very good probes for the study of energetic processes far away in our universe,” de Jong explains.

Scientists expect high-energy neutrinos to come from powerful astrophysical accelerators – objects that are also expected to produce high-energy cosmic rays and gamma rays. These objects include active galactic nuclei powered by supermassive black holes, gamma-ray bursts, and other extreme cosmic events. However, pinpointing such accelerators remains challenging because their cosmic rays are deflected by magnetic fields as they travel to Earth, while their gamma rays can be absorbed on their journey. Neutrinos, however, move in straight lines and this makes them unique messengers that could point back to astrophysical accelerators.

Underwater detection

Because they rarely interact, neutrinos are studied using large-volume detectors. The largest observatories use natural environments such as deep water or ice, which are shielded from most background noise including cosmic rays.

The KM3NeT observatory is situated on the Mediterranean seabed, with detectors more than 2000 m below the surface. Occasionally, a high-energy neutrino will collide with a water molecule, producing a secondary charged particle. This particle moves faster than the speed of light in water, creating a faint flash of Cherenkov radiation. The detector’s array of optical sensors captures these flashes, allowing researchers to reconstruct the neutrino’s direction and energy.
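
The underlying condition can be written down in a couple of lines: light is emitted only when the particle’s speed exceeds c/n, the speed of light in the medium, and the flash forms a cone at an angle given by cos θ = 1/(nβ). A minimal sketch (approximate refractive index for seawater; the particle speed is illustrative):

```python
# Cherenkov threshold and emission angle for a relativistic charged particle
# in water. n is approximate for seawater; beta = v/c is illustrative.
import math

n = 1.34          # refractive index of (sea)water, approximate
beta = 0.9999     # v/c for a highly relativistic secondary particle

if beta > 1 / n:  # threshold: the particle outruns light in the medium
    theta = math.degrees(math.acos(1 / (n * beta)))
    print(f"Cherenkov cone angle: {theta:.1f} degrees")  # roughly 42 degrees
else:
    print("below threshold: no Cherenkov light")
```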

KM3NeT has already identified many high-energy neutrinos, but in 2023 it detected a neutrino with an energy far in excess of any previously detected cosmic neutrino. Now, analysis by de Jong and colleagues puts this neutrino’s energy at about 30 times higher than that of the previous record-holder, which was spotted by the IceCube observatory at the South Pole. “It is a surprising and unexpected event,” he says.

Scientists suspect that such a neutrino could originate from the most powerful cosmic accelerators, such as blazars. The neutrino could also be cosmogenic, being produced when ultra-high-energy cosmic rays interact with the cosmic microwave background radiation.

New class of astrophysical messengers

While this single neutrino has not been traced back to a specific source, it opens the possibility of studying ultra-high-energy neutrinos as a new class of astrophysical messengers. “Regardless of what the source is, our event is spectacular: it tells us that either there are cosmic accelerators that result in these extreme energies, or this could be the first cosmogenic neutrino detected,” de Jong notes.

Neutrino experts not associated with KM3NeT agree on the significance of the observation. Elisa Resconi at the Technical University of Munich tells Physics World, “This discovery confirms that cosmic neutrinos extend to unprecedented energies, suggesting that somewhere in the universe, extreme astrophysical processes – or even exotic phenomena like decaying dark matter – could be producing them”.

Francis Halzen at the University of Wisconsin-Madison, who is IceCube’s principal investigator, adds, “Observing neutrinos with a million times the energy of those produced at Fermilab (ten million for the KM3NeT event!) is a great opportunity to reveal the physics beyond the Standard Model associated with neutrino mass.”

With ongoing upgrades to KM3NeT and other neutrino observatories, scientists hope to detect more of these rare but highly informative particles, bringing them closer to answering fundamental questions in astrophysics.

Resconi explains: “With a global network of neutrino telescopes, we will detect more of these ultrahigh-energy neutrinos, map the sky in neutrinos, and identify their sources. Once we do, we will be able to use these cosmic messengers to probe fundamental physics in energy regimes far beyond what is possible on Earth.”

The observation is described in Nature.

The post Ultra-high-energy neutrino detection opens a new window on the universe appeared first on Physics World.

Threads of fire: uncovering volcanic secrets with Pele’s hair and tears

Volcanoes are awe-inspiring beasts. They spew molten rivers, towering ash plumes, and – in rarer cases – delicate glassy formations known as Pele’s hair and Pele’s tears. These volcanic materials, named after the Hawaiian goddess of volcanoes and fire, are the focus of the latest Physics World Stories podcast, featuring volcanologists Kenna Rubin (University of Rhode Island) and Tamsin Mather (University of Oxford).

Pele’s hair is striking: fine, golden filaments of volcanic glass that shimmer like spider silk in the sunlight. Formed when lava is ejected explosively and rapidly stretched into thin strands, these fragile fibres range from 1 to 300 µm thick – similar to human hair. Meanwhile, Pele’s tears – small, smooth droplets of solidified lava – can preserve tiny bubbles of volcanic gases trapped in cavities within them.

These materials are more than just geological curiosities. By studying their structure and chemistry, researchers can infer crucial details about past eruptions. Understanding these “fossil” samples provides insights into the history of volcanic activity and its role in shaping planetary environments.

Rubin and Mather describe what it’s like working in extreme volcanic landscapes. One day, you might be near the molten slopes of active craters, and then on another trip you could be exploring the murky depths of underwater eruptions via deep-sea research submersibles like Alvin.

For a deeper dive into Pele’s hair and tears, listen to the podcast and explore our recent Physics World feature on the subject.

The post Threads of fire: uncovering volcanic secrets with Pele’s hair and tears appeared first on Physics World.

Modelling the motion of confined crowds could help prevent crushing incidents

Researchers led by Denis Bartolo, a physicist at the École Normale Supérieure (ENS) of Lyon, France, have constructed a theoretical model that forecasts the movements of confined, densely packed crowds. The study could help predict potentially life-threatening crowd behaviour in confined environments. 

To investigate what makes some confined crowds safe and others dangerous, Bartolo and colleagues – also from the Université Claude Bernard Lyon 1 in France and the Universidad de Navarra in Pamplona, Spain – studied the Chupinazo opening ceremony of the San Fermín Festival in Pamplona in four different years (2019, 2022, 2023 and 2024).

The team analysed high-resolution video captured from two locations above the gathering of around 5000 people as the crowd grew in the 50 x 20 m city plaza: swelling from two to six people per square metre, and ultimately peaking at local densities of nine per square metre. A machine-learning algorithm enabled automated detection of the position of each person’s head, from which localized crowd density was then calculated.
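
The paper’s full pipeline is not spelled out here, but the density-estimation step can be sketched simply: place a smooth kernel of fixed width on every detected head position and sum the contributions at any point of interest. A minimal sketch (randomly generated stand-in positions; the smoothing scale is an assumption):

```python
# Local crowd density from detected head positions, smoothed with a Gaussian
# kernel. Positions here are random stand-ins for the real detections.
import numpy as np

rng = np.random.default_rng(0)
heads = rng.uniform([0.0, 0.0], [50.0, 20.0], size=(5000, 2))  # (x, y) in a 50 x 20 m plaza

def local_density(point, heads, sigma=1.0):
    """Estimated people per square metre around `point` (sigma = smoothing scale in m)."""
    d2 = np.sum((heads - point) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2 * sigma**2))) / (2 * np.pi * sigma**2)

print(f"{local_density(np.array([25.0, 10.0]), heads):.1f} people per square metre")
```

With 5000 people spread over the 1000 m² plaza this returns about five people per square metre, matching the average densities reported above.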

“The Chupinazo is an ideal experimental platform to study the spontaneous motion of crowds, as it repeats from one year to the next with approximately the same amount of people, and the geometry of the plaza remains the same,” says theoretical physicist Benjamin Guiselin, a study co-author formerly from ENS Lyon and now at the Université de Montpellier.

In a first for crowd studies, the researchers treated the densely packed crowd as a continuum like water, and “constructed a mechanics theory for the crowd movement without making any behavioural assumptions on the motion of individuals,” Guiselin tells Physics World.

Their studies, recently described in Nature, revealed a change in behaviour akin to a phase change when the crowd density passed a critical threshold of four individuals per square metre. Below this density the crowd remained relatively inactive. But above that threshold it started moving, exhibiting localized oscillations that were periodic over about 18 s and occurred without any external guidance such as corralling.

Unlike a back-and-forth oscillation, this motion – which involves hundreds of people moving over several metres – has an almost circular trajectory that shows chirality (or handedness) and a 50:50 chance of turning to either the right or left. “Our model captures the fact that the chirality is not fixed. Instead it emerges in the dynamics: the crowd spontaneously decides between clockwise or counter-clockwise circular motion,” explains Guiselin, who worked on the mathematical modelling.

“The dynamics is complicated because if the crowd is pushed, then it will react by creating a propulsion force in the direction in which it is pushed: we’ve called this the windsock effect. But the crowd also has a resistance mechanism, a counter-reactive effect, which is a propulsive force opposite to the direction of motion: what we have called the weathercock effect,” continues Guiselin, adding that it is these two competing mechanisms in conjunction with the confined situation that gives rise to the circular oscillations.

The team observed similar oscillations in footage of the 2010 tragedy at the Love Parade music festival in Duisburg, Germany, in which 21 people died and several hundred were injured during a crush.

Early results suggest that the oscillation period for such crowds is proportional to the size of the space they are confined in. But the team want to test their theory at other events, and learn more about both the circular oscillations and the compression waves they observed when people started pushing their way into the already crowded square at the Chupinazo.

If their model is proven to work for all densely packed, confined crowds, it could in principle form the basis for a crowd management protocol. “You could monitor crowd motion with a camera, and as soon as you detect these oscillations emerging try to evacuate the space, because we see these oscillations well before larger amplitude motions set in,” Guiselin explains.

The post Modelling the motion of confined crowds could help prevent crushing incidents appeared first on Physics World.

Freedom in the Equation exhibition opens at Harvard Science Centre

A new exhibition dedicated to Ukrainian scientists has opened at the Harvard Science Centre in Cambridge, Massachusetts, in the US.

The exhibition – Freedom in the Equation – shares the stories of 10 scientists to highlight Ukraine’s lost scientific potential due to Russia’s aggression towards the country while also shedding light on the contributions of Ukrainian scientists.

Among them are physicists Vasyl Kladko and Lev Shubnikov. Kladko worked on semiconductor physics and was deputy director of the Institute of Semiconductor Physics in Kyiv. He was killed in 2022 at the age of 65 as he tried to help his family flee Russia’s invasion.

Shubnikov, meanwhile, established a cryogenic lab at the Ukrainian Institute of Physics and Technology in Kharkiv (now known as the Kharkiv Institute of Physics and Technology) in the early 1930s. In 1937 Shubnikov was arrested during Stalin’s regime, accused of espionage and executed shortly afterwards.

The scientists were selected by Oleksii Boldyrev, a molecular biologist and founder of the online platform myscience.ua, together with Krystyna Semeryn, a literary scholar and publicist.

The portraits were created by Niklas Elmehed, who is the official artist of the Nobel prize, with the text compiled by Olesia Pavlyshyn, editor-in-chief at the Ukrainian popular-science outlet Kunsht.

The exhibition, which is part of the Science at Risk project, runs until 10 March. “Today, I witness scientists being killed, and preserving their names has become a continuation of my work in historical research and a continuation of resistance against violence toward Ukrainian science,” says Boldyrev.

The post Freedom in the Equation exhibition opens at Harvard Science Centre appeared first on Physics World.

Schrödinger’s cat states appear in the nuclear spin state of antimony

Physicists at the University of New South Wales (UNSW) are the first to succeed in creating and manipulating quantum superpositions of a single, large nuclear spin. The superposition involves spin states that are very far apart, and it is therefore considered a Schrödinger’s cat state. The work could be important for applications in quantum information processing and quantum error correction.

It was Erwin Schrödinger who, in 1935, devised his famous thought experiment involving a cat that could, worryingly, be both dead and alive at the same time. In his gedanken experiment, the decay of a radioactive atom triggers a mechanism (the breaking of a vial containing a poisonous gas) that kills the cat. However, since the decay of the radioactive atom is a quantum phenomenon, the atom is in a superposition of being decayed and not decayed. If the cat and poison are hidden in a box, we do not know if the cat is alive or dead. Instead, the state of the feline is a superposition of dead and alive – known as a Schrödinger’s cat state – until we open the box.

Schrödinger’s cat state (or just cat state) is now used to refer to a superposition of two very different states of a quantum system. Creating cat states in the lab is no easy task, but researchers have managed to do this in recent years using the quantum superposition of coherent states of a laser field with different amplitudes, or phases, of the field. They have also created cat states using a trapped ion (with the vibrational state of the ion in the trap playing the role of the cat) and coherent microwave fields confined to superconducting boxes combined with Rydberg atoms and superconducting quantum bits (qubits).

Antimony atom cat

The cat state in the UNSW study is hosted in an atom of antimony, a heavy atom with a large nuclear spin. The high spin value implies that, instead of just pointing up and down (that is, in one of two directions), the nuclear spin of antimony can be in spin states corresponding to eight different directions. This makes it a high-dimensional quantum system that is valuable for quantum information processing and for encoding error-correctable logical qubits. The atom was embedded in a silicon quantum chip that allows for readout and control of the nuclear spin state.

Normally, a qubit is described by just two quantum states, explains Xi Yu, who is lead author of a paper describing the study. For example, an atom with its spin pointing down can be labelled as the “0” state and the spin pointing up, the “1” state. The problem with such a system is that information contained in these states is fragile and can be easily lost when a 0 switches to a 1, or vice versa. The probability of this logical error occurring is reduced by creating a qubit using a system like the antimony atom. With its eight different spin directions, a single error is not enough to erase the quantum information – there are still seven quantum states left, and it would take seven consecutive errors to turn the 0 into a 1.
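
Concretely, a spin-7/2 nucleus has 2I + 1 = 8 basis states, and the cat state is an equal superposition of the two extreme spin projections. Here is a minimal sketch of that state vector (plain linear algebra, not the team’s control code):

```python
# The eight levels of a spin-7/2 nucleus as an 8-dimensional state vector,
# with a "cat" as an equal superposition of the extreme projections
# m = +7/2 and m = -7/2. Flipping one into the other takes seven single steps.
import numpy as np

dim = 8                                  # 2I + 1 levels for I = 7/2
up = np.zeros(dim);   up[0] = 1.0        # m = +7/2 ("logical 0")
down = np.zeros(dim); down[-1] = 1.0     # m = -7/2 ("logical 1")

cat = (up + down) / np.sqrt(2)           # Schroedinger-cat superposition
print(np.round(np.abs(cat) ** 2, 3))     # probability split between the two extremes
```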

More room for error

The information is still encoded in binary code (0 and 1), but there is more room for error between the logical codes, says team leader Andrea Morello. “If an error occurs, we detect it straight away, and we can correct it before further errors accumulate.”

The researchers say they were not initially looking to make and manipulate cat states but started with a project on high-spin nuclei for reasons unrelated to quantum information. They were in fact interested in observing quantum chaos in a single nuclear spin, which had been an experimental “holy grail” for a very long time, says Morello. “Once we began working with this system, we first got derailed by the serendipitous discovery of nuclear electric resonance,” he remembers. “We then became aware of some new theoretical ideas for the use of high-spin systems in quantum information and quantum error correcting codes.

“We therefore veered towards that research direction, and this is our first big result in that context,” he tells Physics World.

Scalable technology

The main challenge the team had to overcome in their study was to set up seven “clocks” that had to be precisely synchronized, so they could keep track of the quantum state of the eight-level system. Until quite recently, this would have involved cumbersome programming of waveform generators, explains Morello. “The advent of FPGA [field-programmable gate array] generators, tailored for quantum applications, has made this research much easier to conduct now.”

While there have already been a few examples of such physical platforms in which quantum information can be encoded in a (Hilbert) space of dimension larger than two – for example, microwave cavities or trapped ions – these were relatively large in size: bulk microwave cavities are typically the size of a matchbox, he says. “Here, we have reconstructed many of the properties of other high-dimensional systems, but within an atomic-scale object – a nuclear spin. It is very exciting, and quite plausible, to imagine a quantum processor in silicon, containing millions of such Schrödinger cat states.”

The fact that the cat is hosted in a silicon chip means that this technology could be scaled up in the long-term using methods similar to those already employed in the computer chip industry today, he adds.

Looking ahead, the UNSW team now plans to demonstrate quantum error correction in its antimony system. “Beyond that, we are working to integrate the antimony atoms with lithographic quantum dots, to facilitate the scalability of the system and perform quantum logic operations between cat-encoded qubits,” reveals Morello.

The present study is detailed in Nature Physics.

The post Schrödinger’s cat states appear in the nuclear spin state of antimony appeared first on Physics World.

Quantum superstars gather in Paris for the IYQ 2025 opening ceremony

The United Nations Educational, Scientific and Cultural Organization (UNESCO) has declared 2025 the International Year of Quantum Science and Technology – or IYQ.

UNESCO kicked off IYQ on 4–5 February at a gala opening ceremony in Paris. Physics World’s Matin Durrani was there, and he shares his highlights from the event in this episode of the Physics World Weekly podcast.

No fewer than four physics Nobel laureates took part in the ceremony alongside representatives from governments and industry. While some speakers celebrated the current renaissance in quantum research and the burgeoning quantum-technology sector, others called on the international community to ensure that people in all nations benefit from a potential quantum revolution – not just people in wealthier countries. The dangers of promising too much from quantum computers and other technologies were also discussed – as Durrani explains.

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum superstars gather in Paris for the IYQ 2025 opening ceremony appeared first on Physics World.

US science in chaos as impact of Trump’s executive orders sinks in

Scientists across the US have been left reeling after a spate of executive orders from US President Donald Trump has led to research funding being slashed, staff being told to quit and key programmes being withdrawn. In response to the orders, government departments and external organizations have axed diversity, equity and inclusion (DEI) programmes, scrubbed mentions of climate change from websites, and paused research grants pending tests for compliance with the new administration’s goals.

Since taking up office on 20 January, Trump has signed dozens of executive orders. One ordered the closure of the US Agency for International Development, which has supported medical and other missions worldwide for more than six decades. The administration said it was withdrawing almost all of the agency’s funds and wanted to sack its entire workforce. A federal judge has temporarily blocked the plans, saying they may violate the US’s constitution, which reserves decisions on funding to Congress.

Individual science agencies are under threat too. Politico reported that the Trump administration has asked the National Science Foundation (NSF), which funds much US basic and applied research, to lay off between a quarter and a half of its staff in the next two months. Another report suggests there are plans to cut the agency’s annual budget from roughly $9bn to $3bn. Meanwhile, former officials of the National Oceanic and Atmospheric Administration (NOAA) told CBS News that half its staff could be sacked and its budget slashed by 30%.

Even before they had learnt of plans to cut its staff and budget, officials at the NSF were starting to examine details of thousands of grants it had awarded for references to DEI, climate change and other topics that Trump does not like. The swiftness of the announcements has caused chaos, with recipients of grants suddenly finding themselves unable to access the NSF’s award cash management service, which holds grantees’ funds, including their salaries.

NSF bosses have taken some steps to reassure grantees. “Our top priority is resuming our funding actions and services to the research community and our stakeholders,” NSF spokesperson Mike England told Physics World in late January. In what is a highly fluid situation, there was some respite on 2 February when the NSF announced that access had been restored with the system able to accept payment requests.

“Un-American” actions

Trump’s anti-DEI orders have caused shockwaves throughout US science. According to 404 Media, NASA staff were told on 22 January to “drop everything” to remove mentions of DEI, Indigenous people, environmental justice and women in leadership, from public websites. Another victim has been NASA’s Here to Observe programme, which links undergraduates from under-represented groups with scientists who oversee NASA’s missions. Science reported that contracts for half the scientists involved in the programme had been cancelled by the end of January.

It is still unclear, however, what impact the Trump administration’s DEI rules will have on the make-up of NASA’s astronaut corps. Since choosing its first female astronaut in 1978, NASA has sought to make the corps more representative of US demographics. How exactly the agency should move forward will fall to Jared Isaacman, the space entrepreneur and commercial astronaut who has been nominated as NASA’s next administrator.

Anti-DEI initiatives have hit individual research labs too. Physics World understands that Fermilab – the US’s premier particle-physics lab – suspended its DEI office and its women in engineering group in January. Meanwhile, the Fermilab LGBTQ+ group, called Spectrum, was ordered to cease all activities and its mailing list deleted. Even the rainbow “Pride” flag was removed from the lab’s iconic Wilson Hall.

Some US learned societies, despite being formally unaffiliated with the government, have also responded to pressure from the new administration. The American Geophysical Union (AGU) removed the word “diversity” from its diversity and inclusion page, although it backtracked after criticism of the move.

There was also some confusion when the American Chemical Society appeared to have removed its webpage on diversity and inclusion; in fact, the society had published a new page and failed to put a redirect in place. “Inclusion and Belonging is a core value of the American Chemical Society, and we remain committed to creating environments where people from diverse backgrounds, cultures, perspectives and experiences thrive,” a spokesperson told Physics World. “We know the broken link caused confusion and some alarm, and we apologize.”

For the time being, the American Physical Society’s page on inclusion remains live, as does that of the American Institute of Physics.

Dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men

Neal Lane, Rice University

Such a response – which some opponents denounce as going beyond what is legally required for fear of repercussions if no action is taken – has left it up to individual leaders to underline the importance of diversity in science. Neal Lane, a former science adviser to President Clinton, told Physics World that “dismantling all federal DEI programmes and related activities will damage lives and careers of millions of American women and men, including scientists, engineers, technical workers – essentially everyone who contributes to advancing America’s global leadership in science and technology”.

Lane, who is now a science and technology policy fellow at Rice University in Texas, thinks that the new administration’s anti-DEI actions “will weaken the US” and believes they should be considered “un-American”. “The purpose of DEI policies, programmes and activities is to ensure all Americans have the opportunity to participate and the country is able to benefit from their participation,” he says.

One senior physicist at a US university, who wishes to remain anonymous, told Physics World that those behind the executive orders are relying on institutions and individuals to “comply in advance” with what they perceive to be the spirit of the orders. “They are relying on people to ignore the fine print, which says that executive orders can’t and don’t overwrite existing law. But it is up to scientists to do the reading — and to follow our consciences. More than universities are on the line: the lives of our students and colleagues are on the line.”

Education turmoil

Another target of the Trump administration is the US Department of Education, which was set up in 1978 to oversee everything from pre-school to postgraduate education. It has already put dozens of its civil servants on leave, ostensibly because their work involves DEI issues. Meanwhile, the withholding of funds has led to the cancellation of scientific meetings, mostly focusing on medicine and life sciences, that were scheduled in the US for late January and early February.

Colleges and universities in the US have also reacted to Trump’s anti-DEI executive order. Academic divisions at Harvard University and the Massachusetts Institute of Technology, for example, have already indicated that they will no longer require applicants for jobs to indicate how they plan to advance the goals of DEI. Northeastern University in Boston has removed the words “diversity” and “inclusion” from a section of its website.

Not all academic organizations have fallen into line, however. Danielle Holly, president of the women-only Mount Holyoke College in South Hadley, Massachusetts, says it will forgo contracts with the federal government if they require abolishing DEI. “We obviously can’t enter into contracts with people who don’t allow DEI work,” she told the Boston Globe. “So for us, that wouldn’t be an option.”

Climate concerns

For an administration that doubts the reality of climate change and opposes anti-pollution laws, the Environmental Protection Agency (EPA) is under fire too. Trump administration representatives were taking action even before the Senate approved Lee Zeldin, a former Republican Congressman from New York who has criticized much environmental legislation, as EPA Administrator. They removed all outside advisers on the EPA’s scientific advisory board and its clean air scientific advisory committee – purportedly to “depoliticize” the boards.

Once the Senate approved Zeldin on 29 January, the EPA sent an e-mail warning more than 1000 probationary employees who had spent less than a year in the agency that their roles could be “terminated” immediately. Then, according to the New York Times, the agency developed plans to demote longer-term employees who have overseen research, enforcement of anti-pollution laws, and clean-ups of hazardous waste. According to Inside Climate News, staff also found their individual pronouns scrubbed from their e-mails and websites without their permission – the result of an order to remove “gender ideology extremism”.

Critics have also questioned the nomination of Neil Jacobs to lead the NOAA. He was its acting head during Trump’s first term in office, serving during the 2019 “Sharpiegate” affair when Trump used a Sharpie pen to alter a NOAA weather map to indicate that Hurricane Dorian would affect Alabama. While conceding Jacobs’s experience and credentials, Rachel Cleetus of the Union of Concerned Scientists asserts that Jacobs is “unfit to lead” given that he “fail[ed] to uphold scientific integrity at the agency”.

Spending cuts

Another concern for scientists is the quasi-official team led by “special government employee” and SpaceX founder Elon Musk. The administration has charged Musk and his so-called “department of government efficiency”, or DOGE, to identify significant cuts to government spending. Though some of DOGE’s activities have been blocked by US courts, agencies have nevertheless been left scrambling for ways to reduce day-to-day costs.

The National Institutes of Health (NIH), for example, has said it will significantly reduce its funding for “indirect” costs of research projects it supported – the overheads that, for example, cover the cost of maintaining laboratories, administering grants, and paying staff salaries. Under the plans, indirect cost reimbursement for federally funded research would be capped at 15%, a drastic cut from its usual range.

NIH personnel have tried to put a positive gloss on its actions. “The United States should have the best medical research in the world,” a statement from NIH declared. “It is accordingly vital to ensure that as many funds as possible go towards direct scientific research costs rather than administrative overhead.”

Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives

US senator Patty Murray

Opponents of the Trump administration, however, are unconvinced. They argue that the measure will imperil critical clinical research because many academic recipients of NIH funds did not have the endowments to compensate for the losses. “Just because Elon Musk doesn’t understand indirect costs doesn’t mean Americans should have to pay the price with their lives,” says US senator Patty Murray, a Democrat from Washington state.

Slashing universities’ share of grants to below 15% could, however, force institutions to make up the lost income by raising tuition fees, which could “go through the roof”, according to the anonymous senior physicist contacted by Physics World. “Far from being a populist policy, these cuts to overheads are an attack on the subsidies that make university education possible for students from a range of socioeconomic backgrounds. The alternative is to essentially shut down the university research apparatus, which would in many ways be the death of American scientific leadership and innovation.”

Musk and colleagues have also gained unprecedented access to government websites related to civil servants and the country’s entire payments system. That access has drawn criticism from several commentators who note that, since Musk is a recipient of significant government support through his SpaceX company, he could use the information for his own advantage.

“Musk has access to all the data on federal research grantees and contractors: social security numbers, tax returns, tax payments, tax rebates, grant disbursements and more,” wrote physicist Michael Lubell from City College of New York. “Anyone who depends on the federal government and doesn’t toe the line might become a target. This is right out of (Hungarian prime minister) Viktor Orbán’s playbook.”

A new ‘dark ages’

As for the long-term impact of these changes, James Gates – a theoretical physicist at the University of Maryland and a past president of the US National Society of Black Physicists – is blunt. “My country is in for a 50-year period of a new dark ages,” he told an audience at the Royal College of Art in London, UK, on 7 February.

My country is in for a 50-year period of a new dark ages

James Gates, University of Maryland

Speaking at an event sponsored by the college’s association for Black students – RCA BLK – and supported by the UK’s organization for Black physicists, the Blackett Lab Family, he pointed out that the US has been through such periods before. As examples, Gates cited the 1950s “Red Scare” and the period after 1876 when the federal government abandoned efforts to enforce the civil rights of Black Americans in southern states and elsewhere.

However, he is not entirely pessimistic. “Nothing is permanent in human behaviour. The question is the timescale,” Gates said. “There will be another dawn, because that’s part of the human spirit.”

  • With additional reporting by Margaret Harris, online editor of Physics World, in London and Michael Banks, news editor of Physics World

The post US science in chaos as impact of Trump’s executive orders sinks in appeared first on Physics World.

Bacterial ‘cables’ form a living gel in mucus

Bacterial cells in solutions of polymers such as mucus grow into long cable-like structures that buckle and twist on each other, forming a “living gel” made of intertwined cells. This behaviour is very different from what happens in polymer-free liquids, and researchers at the California Institute of Technology (Caltech) and Princeton University, both in the US, say that understanding it could lead to new treatments for bacterial infections in patients with cystic fibrosis. It could also help scientists understand how cells organize themselves into polymer-secreting conglomerations of bacteria called biofilms that can foul medical and industrial equipment.

Interactions between bacteria and polymers are ubiquitous in nature. For example, many bacteria live as multicellular colonies in polymeric fluids, including host-secreted mucus, exopolymers in the ocean and the extracellular polymeric substance that encapsulates biofilms. Often, these growing colonies can become infectious, including in cystic fibrosis patients, whose mucus is more concentrated than it is in healthy individuals.

Laboratory studies of bacteria, however, typically focus on cells in polymer-free fluids, explains study leader Sujit Datta, a biophysicist and bioengineer at Caltech. “We wondered whether interactions with extracellular polymers influence proliferating bacterial colonies,” says Datta, “and if so, how?”

Watching bacteria grow in mucus

In their work, which is detailed in Science Advances, the Caltech/Princeton team used a confocal microscope to monitor how different species of bacteria grew in purified samples of mucus. The samples, Datta explains, were provided by colleagues at the Massachusetts Institute of Technology and the Albert Einstein College of Medicine.

Normally, when bacterial cells divide, the resulting “daughter” cells diffuse away from each other. However, in polymeric mucus solutions, Datta and colleagues observed that the cells instead remained stuck together and began to form long cable-like structures. These cables can contain thousands of cells, and eventually they start bending and folding on top of each other to form an entangled network.

“We found that we could quantitatively predict the conditions under which such cables form using concepts from soft-matter physics typically employed to describe non-living gels,” Datta says.

Support for bacterial colonies

The team’s work reveals that polymers, far from being a passive medium, play a pivotal role in supporting bacterial life by shaping how cells grow in colonies. The form of these colonies – their morphology – is known to influence cell-cell interactions and is important for maintaining their genetic diversity. It also helps determine how resilient a colony is to external stressors.

“By revealing this previously-unknown morphology of bacterial colonies in concentrated mucus, our finding could help inform ways to treat bacterial infections in patients with cystic fibrosis, in which the mucus that lines the lungs and gut becomes more concentrated, often causing the bacterial infections that take hold in that mucus to become life-threatening,” Datta tells Physics World.

Friend or foe?

As for why cable formation is important, Datta explains that there are two schools of thought. The first is that by forming large cables, bacteria may become more resilient against the body’s immune system, making them more infectious. The other possibility is that the reverse is true – that cable formation could in fact leave bacteria more exposed to the host’s defence mechanisms. These include “mucociliary clearance”, which is the process by which tiny hairs on the surface of the lungs constantly sweep up mucus and propel it upwards.

“Could it be that when bacteria are all clumped together in these cables, it is actually easier to get rid of them by expelling them out of the body?” Datta asks.

Investigating these hypotheses is an avenue for future research, he adds. “Ours is a fundamental discovery on how bacteria grow in complex environments, more akin to their natural habitats,” Datta says. “We also expect it will motivate further work exploring how cable formation influences the ways in which bacteria interact with hosts, phages, nutrients and antibiotics.”

The post Bacterial ‘cables’ form a living gel in mucus appeared first on Physics World.

Sarah Sheldon: how a multidisciplinary mindset can turn quantum utility into quantum advantage

IBM is on a mission to transform quantum computers from applied research endeavour to mainstream commercial opportunity. It wants to go beyond initial demonstrations of “quantum utility”, where these devices outperform classical computers only in a few niche applications, and reach the new frontier of “quantum advantage”. That’ll be where quantum computers routinely deliver significant, practical benefits beyond approximate classical computing methods, calculating solutions that are cheaper, faster and more accurate.

Unlike classical computers, which rely on binary bits that can be either 0 or 1, quantum computers exploit quantum bits (qubits), which can exist in a superposition of the 0 and 1 states. This superposition, coupled with quantum entanglement (a correlation between two or more qubits), enables quantum computers to perform some types of calculation significantly faster than classical machines, such as problems in quantum chemistry and molecular reaction kinetics.
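
The textbook example of these two resources is the Bell state, which a few lines of linear algebra reproduce. A minimal sketch (a generic illustration, not IBM code):

```python
# Building a Bell state: a Hadamard puts the first qubit into superposition,
# then a CNOT entangles it with the second. The result is (|00> + |11>)/sqrt(2).
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])        # flips qubit 2 when qubit 1 is |1>

state = CNOT @ np.kron(H, np.eye(2)) @ np.array([1, 0, 0, 0])  # start from |00>
print(np.round(state, 3))              # [0.707 0. 0. 0.707]
```

Measuring either qubit of this state gives 0 or 1 at random, but the two outcomes always agree – the correlation that defines entanglement.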

In the vanguard of IBM’s quantum R&D effort is Sarah Sheldon, a principal research scientist and senior manager of quantum theory and capabilities at the IBM Thomas J Watson Research Center in Yorktown Heights, New York. After a double-major undergraduate degree in physics and nuclear science and engineering at Massachusetts Institute of Technology (MIT), Sheldon received her PhD from MIT in 2013 – though she did much of her graduate research in nuclear science and engineering as a visiting scholar at the Institute for Quantum Computing (IQC) at the University of Waterloo, Canada.

At IQC, Sheldon was part of a group studying quantum control techniques, manipulating the spin states of nuclei in nuclear-magnetic-resonance (NMR) experiments. “Although we were using different systems to today’s leading quantum platforms, we were applying a lot of the same kinds of control techniques now widely deployed across the quantum tech sector,” Sheldon explains.

“Upon completion of my PhD, I opted instinctively for a move into industry, seeking to apply all that learning in quantum physics into immediate and practical engineering contributions,” she says. “IBM, as one of only a few industry players back then with an experimental group in quantum computing, was the logical next step.”

Physics insights, engineering solutions

Sheldon currently heads a cross-disciplinary team of scientists and engineers developing techniques for handling noise and optimizing performance in novel experimental demonstrations of quantum computers. It’s ambitious work that ties together diverse lines of enquiry spanning everything from quantum theory and algorithm development to error mitigation, error correction and techniques for characterizing quantum devices.

We’re investigating how to extract the optimum performance from current machines online today as well as from future generations of quantum computers.

Sarah Sheldon, IBM

“From algorithms to applications,” says Sheldon, “we’re investigating what can we do with quantum computers: how to extract the optimum performance from current machines online today as well as from future generations of quantum computers – say, five or 10 years down the line.”

A core priority for Sheldon and colleagues is how to manage the environmental noise that plagues current quantum computing systems. Qubits are all too easily disturbed, for example, by their interactions with environmental fluctuations in temperature, electric and magnetic fields, vibrations, stray radiation and even interference between neighbouring qubits.

The ideal solution – a strategy called error correction – involves storing the same information across multiple qubits, such that errors are detected and corrected when one or more of the qubits are impacted by noise. But the problem with these so-called “fault-tolerant” quantum computers is they need millions of qubits, which is impossible to implement in today’s small-scale quantum architectures. (For context, IBM’s latest Quantum Development Roadmap outlines a practical path to error-corrected quantum computers by 2029.)
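
The redundancy idea is easiest to see in its classical analogue, the repetition code, where one logical bit is stored as three physical copies and a single flip is outvoted. A minimal sketch (a deliberately simplified stand-in; real quantum codes must protect superpositions and need far more qubits, as noted above):

```python
# Three-bit repetition code: encode one bit as three copies, flip each copy
# independently with probability p, then decode by majority vote.
import random

def encode(bit):
    return [bit] * 3

def add_noise(codeword, p=0.10):
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    return int(sum(codeword) >= 2)   # majority vote corrects any single flip

trials = 100_000
failures = sum(decode(add_noise(encode(0))) != 0 for _ in range(trials))
print(f"logical error rate: {failures / trials:.3f} vs physical rate 0.100")
```

Two or more simultaneous flips still get through, which is why the logical error rate (about 0.028 here) beats the physical one only when the physical rate is already small.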

“Ultimately,” Sheldon notes, “we’re working towards large-scale error-corrected systems, though for now we’re exploiting near-term techniques like error mitigation and other ways of managing noise in these systems.” In practical terms, this means getting more out of quantum architectures without increasing the number of qubits – essentially, pairing them with classical computers and reducing noise by taking more samples on the quantum hardware and post-processing them classically.
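
One widely used technique of this kind is zero-noise extrapolation: the same circuit is run several times with the noise deliberately amplified, and classical post-processing extrapolates the measured expectation value back towards the zero-noise limit. A minimal sketch (illustrative numbers; not a description of IBM’s production stack):

```python
# Zero-noise extrapolation: measure an observable at amplified noise levels,
# then fit and extrapolate back to noise scale 0. Values here are illustrative.
import numpy as np

scales = np.array([1.0, 2.0, 3.0])       # noise amplification factors
measured = np.array([0.81, 0.66, 0.54])  # noisy expectation values at each scale

slope, intercept = np.polyfit(scales, measured, 1)
print(f"zero-noise estimate: {intercept:.2f}")  # linear fit evaluated at scale 0
```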

Strength in diversity

For Sheldon, one big selling point of the quantum tech industry is the opportunity to collaborate with people from a wide range of disciplines. “My team covers a broad-scope R&D canvas,” she says. There are mathematicians and computer scientists, for example, working on complexity theory and novel algorithm development; physicists specializing in quantum simulation and incorporating error suppression techniques; as well as quantum chemists working on simulations of molecular systems.

“Quantum is so interdisciplinary – you are constantly learning something new from your co-workers,” she adds. “I started out specializing in quantum control techniques, before moving onto experimental demonstrations of larger multiqubit systems while working ever more closely with theorists.”

A corridor in IBM's quantum lab
Computing reimagined Quantum scientists and engineers at the IBM Thomas J Watson Research Center are working to deliver IBM’s Quantum Development Roadmap and a practical path to error-corrected quantum computers by 2029. (Courtesy: Connie Zhou for IBM)

External research collaborations are also mandatory for Sheldon and her colleagues. Front-and-centre is the IBM Quantum Network, which provides engagement opportunities with more than 250 organizations across the “quantum ecosystem”. These range from top-tier labs – such as CERN, the University of Tokyo and the UK’s National Quantum Computing Centre – to quantum technology start-ups like Q-CTRL and Algorithmiq. It also encompasses established industry players aiming to be early-adopting end-users of quantum technologies (among them Bosch, Boeing and HSBC).

“There’s a lot of innovation happening across the quantum community,” says Sheldon, “so external partnerships are incredibly important for IBM’s quantum R&D programme. While we have a deep and diverse skill set in-house, we can’t be the domain experts across every potential use-case for quantum computing.”

Opportunity knocks

Notwithstanding the pace of innovation, there are troubling clouds on the horizon. In particular, there is a shortage of skilled workers in the quantum workforce, with established technology companies and start-ups alike desperate to attract more physical scientists and engineers. The task is to fill not only specialist roles – be it error-correction scientists or quantum-algorithm developers – but more general positions such as test and measurement engineers, data scientists, cryogenic technicians and circuit designers.

Yet Sheldon remains upbeat about addressing the skills gap. “There are just so many opportunities in the quantum sector,” she notes. “The field has changed beyond all recognition since I finished my PhD.” Perhaps the biggest shift has been the dramatic growth of industry engagement and, with it, all sorts of attractive career pathways for graduate scientists and engineers. Those range from firms developing quantum software or hardware to the end-users of quantum technologies in sectors such as pharmaceuticals, finance or healthcare.

“As for the scientific community,” argues Sheldon, “we’re also seeing the outline take shape for a new class of quantum computational scientist. Make no mistake, students able to integrate quantum computing capabilities into their research projects will be at the leading edge of their fields in the coming decades.”

Ultimately, Sheldon concludes, early-career scientists shouldn’t over-think their near-term professional pathway. “Keep it simple and work with people you like on projects that are going to interest you – whether quantum or otherwise.”

The post Sarah Sheldon: how a multidisciplinary mindset can turn quantum utility into quantum advantage appeared first on Physics World.

Nanoparticles demonstrate new and unexpected mechanism of coronavirus disinfection

The COVID-19 pandemic provided a driving force for researchers to seek out new disinfection methods that could tackle future viral outbreaks. One promising approach relies on the use of nanoparticles, with several metal and metal oxide nanoparticles showing anti-viral activity against SARS-CoV-2, the virus that causes COVID-19. With this in mind, researchers from Sweden and Estonia investigated the effect of such nanoparticles on two different virus types.

Aiming to elucidate the nanoparticles’ mode of action, they discovered a previously unknown antiviral mechanism, reporting their findings in Nanoscale.

The researchers – from the Swedish University of Agricultural Sciences (SLU) and the University of Tartu – examined triethanolamine-terminated titania (TATT) nanoparticles: spherical, 3.5 nm-diameter titanium dioxide (titania) particles that are expected to interact strongly with viral surface proteins.

They tested the antiviral activity of the TATT nanoparticles against two types of virus: swine transmissible gastroenteritis virus (TGEV) – an enveloped coronavirus that’s surrounded by a phospholipid membrane and transmembrane proteins; and the non-enveloped encephalomyocarditis virus (EMCV), which does not have a phospholipid membrane. SARS-CoV-2 has a similar structure to TGEV: an enveloped virus with an outer lipid membrane and three proteins forming the surface.

“We collaborated with the University of Tartu in studies of antiviral materials,” explains lead author Vadim Kessler from SLU. “They had found strong activity from cerium dioxide nanoparticles, which acted as oxidants for membrane destruction. In our own studies, we saw that TATT formed appreciably stable complexes with viral proteins, so we could expect potentially much higher activity at lower concentration.”

In this latest investigation, the team aimed to determine which of these potential mechanisms – blocking of surface proteins, or membrane disruption via oxidation by nanoparticle-generated reactive oxygen species – is the likely cause of TATT’s antiviral activity. The former usually occurs at low (nanomolar to micromolar) nanoparticle concentrations, the latter at higher (millimolar) concentrations.

Mode of action

To assess the nanoparticles’ antiviral activity, the researchers exposed viral suspensions to colloidal TATT solutions for 1 h, at room temperature and in the dark (without UV illumination). For comparison, they repeated the process with silicotungstate polyoxometalate (POM) nanoparticles, which are not able to bind strongly to cell membranes.

The nanoparticle-exposed viruses were then used to infect cells and the resulting cell viability served as a measure of the virus infectivity. The team note that the nanoparticles alone showed no cytotoxicity against the host cells.

Measuring viral infectivity after nanoparticle exposure revealed that POM nanoparticles did not exhibit antiviral effects on either virus, even at relatively high concentrations of 1.25 mM. TATT nanoparticles, on the other hand, showed significant antiviral activity against the enveloped TGEV virus at concentrations starting from 0.125 mM, but did not affect the non-enveloped EMCV virus.

Based on previous evidence that TATT nanoparticles interact strongly with proteins in darkness, the researchers expected to see antiviral activity at a nanomolar level. But the finding that TATT activity only occurred at millimolar concentrations, and only affected the enveloped virus, suggests that the antiviral effect is not due to blocking of surface proteins. And as titania is not oxidative in darkness, the team propose that the antiviral effect is actually due to direct complexation of nanoparticles with membrane phospholipids – a mode of antiviral action not previously considered.

“Typical nanoparticle concentrations required for effects on membrane proteins correspond to the protein content on the virus surface. With a 1:1 complex, we would need maximum nanomolar concentrations,” Kessler explains. “We saw an effect at about 1 mmol/l, which is far higher. This was the indication for us that the effect was on the membrane as a whole.”
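
The order-of-magnitude logic here can be made explicit. The sketch below uses hypothetical numbers for the viral titre and the counts of surface proteins and membrane lipids per virion (none of these figures are quoted above), purely to show why a millimolar threshold points away from 1:1 protein binding:

```python
N_A = 6.022e23             # Avogadro's number

virions_per_ml = 1e9       # assumed viral titre (hypothetical)
proteins_per_virion = 2e2  # assumed surface proteins per virion (hypothetical)
lipids_per_virion = 2e5    # membranes carry vastly more lipids (hypothetical)

# Molar concentration of each binding target in the suspension
protein_sites = virions_per_ml * 1e3 * proteins_per_virion / N_A  # mol/l
lipid_sites = virions_per_ml * 1e3 * lipids_per_virion / N_A      # mol/l

print(f"protein sites: ~{protein_sites:.0e} mol/l")  # sub-nanomolar
print(f"lipid sites:   ~{lipid_sites:.0e} mol/l")    # ~1000x higher

# A 1:1 nanoparticle-protein complex would saturate at roughly nanomolar
# concentrations, yet the antiviral effect only appeared at ~1e-3 mol/l,
# consistent with action on the membrane as a whole.
```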

Verifying the membrane effect

To corroborate their hypothesis, the researchers examined the leakage of dye-labelled RNA from the TGEV coronavirus after 1 h exposure to nanoparticles. The fluorescence signal from the dye showed that TATT-treated TGEV released significantly more RNA than non-exposed virus, attributed to the nanoparticles disrupting the virus’s phospholipid membrane.

Finally, the team studied the interactions between TATT nanoparticles and two model phospholipid compounds. Both molecules formed strong complexes with TATT nanoparticles, while their interaction with POM nanoparticles was weak. This additional verification led the researchers to conclude that the antiviral effect of TATT in dark conditions is due to direct membrane disruption via complexation of titania nanoparticles with phospholipids.

“To the best of our knowledge, [this] proves a new pathway for metal oxide nanoparticles’ antiviral action,” they write.

Importantly, the nanoparticles are non-toxic, and work at room temperature without requiring UV illumination – enabling simple and low-cost disinfection methods. “While it was known that disinfection with titania could work in UV light, we showed that no special technical measures are necessary,” says Kessler.

Kessler suggests that the nanoparticles could be used to coat surfaces to destroy enveloped viruses, or in cost-effective filters to decontaminate air or water. “[It should be] possible to easily create antiviral surfaces that don’t require any UV activation just by spraying them with a solution of TATT, or possibly other oxide nanoparticles with an affinity to phosphate, including iron and aluminium oxides in particular,” he tells Physics World.

The post Nanoparticles demonstrate new and unexpected mechanism of coronavirus disinfection appeared first on Physics World.

Organic photovoltaic solar cells could withstand harsh space environments

Carbon-based organic photovoltaics (OPVs) may be much better than previously thought at withstanding the high-energy radiation and sub-atomic particle bombardments of space environments. This finding, by researchers at the University of Michigan in the US, challenges a long-standing belief that OPV devices systematically degrade under conditions such as those encountered by spacecraft in low-Earth orbit. If verified in real-world tests, the finding suggests that OPVs could one day rival traditional thin-film photovoltaic technologies based on rigid semiconductors such as gallium arsenide.

Lightweight, robust, radiation-resilient photovoltaics are critical technologies for many aerospace applications. OPV cells are particularly attractive for this sector because they are ultra-lightweight, thermally stable and highly flexible. This last property allows them to be integrated onto curved surfaces as well as flat ones.

Today’s single-junction OPV devices also have a further advantage. Thanks to power conversion efficiencies (PCEs) that now exceed 20%, their specific power – that is, the power generated per unit weight – can be up to 40 W/g. This is significantly higher than that of traditional photovoltaic technologies, including devices based on silicon (1 W/g) and gallium arsenide (3 W/g) on flexible substrates. Devices with such a large specific power could provide energy for small spacecraft heading into low-Earth orbit and beyond.
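
To see what those specific-power figures mean in practice, here is a quick back-of-envelope comparison of the array mass needed for a hypothetical 1 kW small-spacecraft power budget, using only the W/g values quoted above:

```python
# Specific power figures quoted above (W per gram of active material)
specific_power = {
    "single-junction OPV": 40.0,
    "GaAs on flexible substrate": 3.0,
    "Si on flexible substrate": 1.0,
}

power_budget_W = 1000.0  # hypothetical small-spacecraft requirement

for tech, w_per_g in specific_power.items():
    mass_kg = power_budget_W / w_per_g / 1000.0
    print(f"{tech:28s} -> {mass_kg:5.2f} kg of active material")
# OPV needs ~0.03 kg where flexible silicon needs a full kilogram.
```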

Until now, however, scientists believed that these materials had a fatal flaw for space applications: they weren’t robust to irradiation by the energetic particles (predominantly fluxes of electrons and protons) that spacecraft routinely encounter.

Testing two typical OPV materials

In the new work, researchers led by electrical and computer engineer Yongxi Li and physicist Stephen Forrest analysed how two groups of typical OPV materials behave when exposed to protons of differing energies, characterizing their optoelectronic properties before and after irradiation. The first group was made up of small molecules (DBP, DTDCPB and C70) grown using a technique called vacuum thermal evaporation (VTE). The second consisted of solution-processed small molecules and polymers (PCE-10, PM6, BT-CIC and Y6).

The team’s measurements show that the OPVs grown by VTE retained their initial PV efficiency under proton fluences of up to 10¹² cm⁻². In contrast, the polymer-based OPVs lost 50% of their original efficiency under the same conditions. This, say the researchers, is because proton irradiation breaks carbon–hydrogen bonds in the polymers’ molecular alkyl side chains. This leads to polymer cross-linking and the generation of charge traps that imprison electrons and prevent them from generating useful current.

The good news, Forrest says, is that many of these defects can be mended by thermally annealing the materials at temperatures of 45 °C or less. After such an annealing, the cell’s PCE returns to nearly 90% of its value before irradiation. This means that Sun-facing solar cells made of these materials could essentially “self-heal”, though Forrest acknowledges that whether this actually happens in deep space is a question that requires further investigation. “It may be more straightforward to design the material so that the electron traps never appear in the first place or by filling them with other atoms, so eliminating this problem,” he says.

According to Li, the new study, which is detailed in Joule, could aid the development of standardized stability tests for how protons interact with OPV devices. Such tests already exist for c-Si and GaAs solar cells, but not for OPVs, he says.

The Michigan researchers say they will now be developing materials that combine high PCEs with strong resilience to proton exposure. “We will then use these materials to fabricate OPV devices that we will then test on CubeSats and spacecraft in real-world environments,” Li tells Physics World.

The post Organic photovoltaic solar cells could withstand harsh space environments appeared first on Physics World.

How international conferences can help bring women in physics together

International conferences are a great way to meet people from all over the world to share the excitement of physics and discuss the latest developments in the subject. But the International Conference on Women in Physics (ICWIP) offers more, allowing us to listen to the experiences of people from many diverse backgrounds and cultures. At the same time, it highlights the many challenges that women in physics still face.

The ICWIP series is organized by the International Union of Pure and Applied Physics (IUPAP) and the week-long event typically features a mixture of plenaries, workshops and talks. Prior to the COVID-19 pandemic, the conferences were held in various locations across the world, but the last two have been held entirely online. The last such meeting – the 8th ICWIP run from India in 2023 – saw around 300 colleagues from 57 countries attend. I was part of a seven-strong UK contingent – at various stages of our careers – who gave a presentation describing the current situation for women in physics in the UK.

Being held solely online didn’t stop delegates fostering a sense of community or discussing their predicaments and challenges. What became evident during the week was the extent and variety of issues that women from across the globe still have to contend with. One is the persistence of implicit and explicit gender bias in their institutions or workplaces. This, along with negative stereotyping of women, produces disparities between the numbers of men and women in institutions, particularly at postgraduate level and beyond. Women often end up choosing not to pursue physics further into their careers and are reluctant to take up leadership roles.

Much more needs to be done to ensure women are encouraged in their careers. Indeed, women often face challenging work–life balances, with some expected to play a greater role in family commitments than men, and have little support at their workplaces. One postdoctoral researcher at the 2023 meeting, for example, attempted to discuss her research poster in the virtual conference room while looking after her young children at home – the literal balancing of work and life in action.

A virtual presentation with five speakers' avatars stood in front of a slide showing their names
Open forum The author and co-presenters at the most recent International Conference on Women in Physics. Represented by avatars online, they gave a presentation on women in physics in the UK. (Courtesy: Chethana Setty)

To improve these circumstances, delegates suggested strengthening legislation to combat gender bias and improving institutional culture through education that reduces negative stereotypes. More should also be done to improve networks and professional associations for women in physics. Another factor mentioned at the meeting, meanwhile, was the importance of early education and issues related to equity of teaching, whether delivered face-to-face or online.

But women can face disadvantages other than their gender, such as socioeconomic status and identity, resulting in a unique set of challenges for them. This is the principle of intersectionality and was widely discussed in the context of problems in career progression.

In the UK, change is starting to happen. The Limit Less campaign by the Institute of Physics (IOP), which publishes Physics World, encourages students aged 16 and over to study physics. The annual Conference for Undergraduate Women and Non-binary Physicists provides individuals with support and encouragement in their personal and professional development. There are also other initiatives, such as the STEM Returner programme and the Daphne Jackson Trust, for those wishing to return to a physics career. The WISE Ten Steps framework helps employers build a positive workplace culture, while Athena SWAN and the IOP’s new Physics Inclusion Award aim to improve women’s prospects.

As we now look forward to the next ICWIP there is still a lot more to do. We must ensure that women can continue in their physics careers while recognizing that intersectionality will play an increasingly significant role in shaping future equity, diversity and inclusion policies. It is likely that a new team will soon be sought from academia and industry, comprising individuals at various career stages, to represent the UK at the next ICWIP. Please do get involved if you are interested. Participation is not limited to women.

Women are doing physics in a variety of challenging circumstances. Gaining an international outlook on different cultural perspectives, as is possible at an international conference like the ICWIP, helps to put things in context and highlights the many common issues faced by women in physics. Taking the time to listen and learn from each other is critical, a process that can facilitate collaboration on issues that affect us all. Fundamentally, we all share a passion for physics, and endeavour to be catalysts for positive change for future generations.

  • This article was based on discussions with Sally Jordan from the Open University; Holly Campbell, UK Atomic Energy Authority; Josie C, AWE; Wendy Sadler and Nils Rehm, Cardiff University; and Sarah Bakewell and Miriam Dembo, Institute of Physics

The post How international conferences can help bring women in physics together appeared first on Physics World.

Artisan, architect or artist: what kind of person are you at work?

We tend to define ourselves by the subjects we studied, and I am no different. I originally did physics before going on to complete a PhD in aeronautical engineering, which has led to a lifelong career in aerospace.

However, it took me quite a few years before I realized that there is more than one route to an enjoyable and successful career. I used to think that a career began at the “coal face” – doing things you were trained for or had a specialist knowledge of – before managing projects then products or people as you progressed to loftier heights.

At some point, I began to realize that while companies often adopt this linear approach to career paths, not everyone is comfortable with it. In fact, I now think that many of us naturally fall into one of three fundamental roles: artisan, architect or artist. So which are you?

Artisans are people who focus on creating functional, practical and often decorative items using hands-on methods or skills. Their work emphasizes craftsmanship, attention to detail and the quality of the finished product. For scientists and engineers, artisans are highly skilled people who apply their technical knowledge and know-how. Let’s be honest: they are the ones who get the “real work” done. From programmers to machinists and assemblers, these are the people who create detailed designs and make or maintain a high-quality product.

Architects, on the other hand, combine vision with technical knowledge to create functional and effective solutions. Their work involves designing, planning and overseeing. They have a broader view of what’s happening and may be responsible for delivering projects. They need to ensure tasks are appropriately prioritized and keep things on track and within budget.

Architects also help by guiding on best practice and resolving or unblocking issues. They are the people responsible for ensuring that the end result meets the needs of users and, where applicable, complies with regulations. Typically, this role involves running a project or team – think principal investigator, project manager, software architect or systems engineer.

As for artists, they are the people who have a big picture view of the world – they will not have eyes for the finer details. They are less constrained by a framework and are comfortable working with minimal formal guidance and definition. They have a vision of what will be needed for the future – whether that’s new products and strategic goals or future skills and technology requirements.

Artists set the targets for how an organization, department or business needs to grow and they define strategies for how a business will develop its competitive edge. Artists are often leaders and chiefs.

Which type are you?

To see how these personas work in practice, imagine working for a power utility provider. If there’s a power outage, the artisans will be the people who get the power back on by locating and fixing damaged power lines, repairing substations and so on. They are practical people who know how to make things work.

The architect will be organizing the repair teams, working out who goes to which location, and what to prioritize, ensuring that customers are kept happy and senior leaders are kept informed of progress. The artist, meanwhile, will be thinking about the future. How, for example, can utilities protect themselves better from storm damage and what new technologies or designs can be introduced to make the supply more resilient and minimize disruption?

Predominantly, artisans are practical, architects are tactical and artists are strategic, but there is an overlap between these qualities. Artisans, architects and artists differ in their goals and methods, but the boundaries between them are blurred. Based on my gut experience as a physicist in industry, I’d say the breakdown between different skills is roughly as shown in the figure below.

Pie chart of personal attributes
Varying values Artisans, architects and artists don’t have only one kind of attribute but are practical, tactical and strategic in different proportions. The numbers shown here are based on the author’s gut feeling after working in industry for more than 30 years.

Now this breakdown is not hard and fast. To succeed in your career, you need to be creative, inventive and skilful – whatever your role. While working with your colleagues, you need to engage in common processes such as adhering to relevant standards, regulations and quality requirements to deliver quality solutions and products. But thinking of ourselves as artisans, architects or artists may explain why each of us is suited to a certain role.

Know your strengths

Even though we all have something of the other personas in us, what’s important is to know what your own core strength is. I used to believe that the only route to a successful career was to work through each of these personas by starting out as an artisan, turning into an architect, and then ultimately becoming an artist. And to be fair, this is how many career paths are structured, which is why we’re often encouraged to think this way.

However, I have worked with people who liked “hands-on” work so much that they didn’t want to move to a different role, even though it meant turning down a significant promotion. I also know others who have indeed moved between different personas, only to discover the new type of work did not suit them.

Trouble is, although it’s usually possible to retrace steps, it’s not always straightforward to do so. Quite why that should be the case is not entirely clear. It’s certainly not because people are unwilling to accept a pay cut, but more because changing tack is seen as a retrograde step for both employees and their employers.

To be successful, any team, department or business needs to not only understand the importance of this skills mix but also recognize it’s not a simple pipeline – all three personas are critical to success. So if you don’t know already, I encourage you to think about what you enjoy doing most, using your insights to proactively drive career conversations and decisions. Don’t be afraid to emphasize where your “value add” lies.

If you’re not sure whether a change in persona is right for you, seek advice from mentors and peers or look for a secondment to try it out. The best jobs are the ones where you can spend most of your time doing what you love doing. Whether you’re an artisan, architect or artist – the most impactful employees are the ones who really enjoy what they do.

The post Artisan, architect or artist: what kind of person are you at work? appeared first on Physics World.

Thousands of nuclear spins are entangled to create a quantum-dot qubit

A new type of quantum bit (qubit) that stores information in a quantum dot with the help of an ensemble of nuclear spin states has been unveiled by physicists in the UK and Austria. Led by Dorian Gangloff and Mete Atatüre at the University of Cambridge, the team created a collective quantum state that could be used as a quantum register to store and relay information in a quantum communication network of the future.

Quantum communication networks are used to exchange and distribute quantum information between remotely-located quantum computers and other devices. As well as enabling distributed quantum computing, quantum networks can also support secure quantum cryptography. Today, these networks are in the very early stages of development and use the entangled quantum states of photons to transmit information. Network performance is severely limited by decoherence, whereby the quantum information held by photons is degraded as they travel long distances. As a result, effective networks need repeater nodes that receive and then amplify weakened quantum signals.

“To address these limitations, researchers have focused on developing quantum memories capable of reliably storing entangled states to enable quantum repeater operations over extended distances,” Gangloff explains. “Various quantum systems are being explored, with semiconductor quantum dots being the best single-photon generators delivering both photon coherence and brightness.”

Single-photon emission

Quantum dots are widely used for their ability to emit single photons at specific wavelengths. These photons are created by electronic transitions in quantum dots and are ideal for encoding and transmitting quantum information.

However, the electronic spin states of quantum dots are not particularly good at storing quantum information for long enough to be useful as stationary qubits (or nodes) in a quantum network. This is because they contain hundreds or thousands of nuclei with spins that fluctuate. The noise generated by these fluctuations causes the decoherence of qubits based on electronic spin states.

In their previous research, Gangloff and Atatüre’s team showed how this noise could be controlled by sensing how it interacts with the electronic spin states.

Atatüre says, “Building on our previous achievements, we suppressed random fluctuations in the nuclear ensemble using a quantum feedback algorithm. This is already very useful as it dramatically improves the electron spin qubit performance.”

Magnon excitation

Now, using a gallium arsenide quantum dot, the team has used the feedback algorithm to stabilize 13,000 nuclear spin states in a collective, entangled “dark state”. This is a stable quantum state that cannot absorb or emit photons. By introducing just a single nuclear magnon (spin flip) excitation, shared across all 13,000 nuclei, they could then flip the entire ensemble between two different collective quantum states.

Each of these collective states could be defined as the 0 and 1 of a binary quantum logic system. The team then showed how quantum information could be exchanged between the nuclear system and the quantum dot’s electronic qubit with a fidelity of about 70%.

“The quantum memory maintained the stored state for approximately 130 µs, validating the effectiveness of our protocol,” Gangloff explains. “We also identified unambiguously the factors limiting the current fidelity and storage time, including crosstalk between nuclear modes and optically induced spin relaxation.”

The researchers are hopeful that their approach could transform one of the biggest limitations to quantum dot-based communication networks into a significant advantage.

“By integrating a multi-qubit register with quantum dots – the brightest and already commercially available single-photon sources – we elevate these devices to a much higher technology readiness level,” Atatüre explains.

With some further improvements to their system’s fidelity, the researchers are now confident that it could be used to strengthen interactions between quantum dot qubits and the photonic states they produce, ultimately leading to longer coherence times in quantum communication networks. Elsewhere, it could even be used to explore new quantum phenomena, and gather new insights into the intricate dynamics of quantum many-body systems.

The research is described in Nature Physics.

The post Thousands of nuclear spins are entangled to create a quantum-dot qubit appeared first on Physics World.

Say hi to Quinnie – the official mascot of the International Year of Quantum Science and Technology

Whether it’s the Olympics or the FIFA World Cup, all big global events need a cheeky, fun mascot. So welcome to Quinnie – the official mascot for the International Year of Quantum Science and Technology (IYQ) 2025.

Unveiled at the launch of the IYQ at the headquarters of UNESCO in Paris on 4 February, Quinnie has been drawn by Jorge Cham, the creator of the long-running cartoon strip PHD Comics.

Quinnie was developed for UNESCO in a collaboration between Cham and Physics Magazine, which is published by the American Physical Society (APS) – one of the founding partners of IYQ.

Image of Quinnie, the mascot for the International Year of Quantum Science and Technology
Riding high Quinnie surfing on a quantum wave function. (Courtesy: Jorge Cham)

“Quinnie represents a young generation approaching quantum science with passion, ingenuity, and energy,” says Physics editor Matteo Rini. “We imagine her effortlessly surfing on quantum-mechanical wave functions and playfully engaging with the knottiest quantum ideas, from entanglement to duality.”

Quinnie is set to appear in a series of animated cartoons that the APS will release throughout the year.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post Say hi to Quinnie – the official mascot of the International Year of Quantum Science and Technology appeared first on Physics World.

New class of quasiparticle appears in bilayer graphene

A newly discovered class of quasiparticles known as fractional excitons offers fresh opportunities for condensed-matter research and could reveal unprecedented quantum phases, say physicists at Brown University in the US. The new quasiparticles, which are neither bosons nor fermions and carry no charge, could have applications in quantum computing and sensing.

In our everyday, three-dimensional world, particles are classified as either fermions or bosons. Fermions such as electrons follow the Pauli exclusion principle, which prevents them from occupying the same quantum state. This property underpins phenomena like the structure of atoms and the behaviour of metals and insulators. Bosons, on the other hand, can occupy the same state, allowing for effects like superconductivity and superfluidity.

Fractional excitons defy this traditional classification, says Jia Leo Li, who led the research. Their properties lie somewhere in between those of fermions and bosons, making them more akin to anyons, which are particles that exist only in two-dimensional systems. But that’s only one aspect of their unusual nature, Li adds. “Unlike typical anyons, which carry a fractional charge of an electron, fractional excitons are neutral particles, representing a distinct type of quantum entity,” he says.

The experiment

Li and colleagues created the fractional excitons using two sheets of graphene – a form of carbon just one atom thick – separated by a layer of another two-dimensional material, hexagonal boron nitride. This layered setup allowed them to precisely control the movement of electrons and positively-charged “holes” and thus to generate excitons, which are pairs of electrons and holes that behave like single particles.

The team then applied a 12 T magnetic field to their bilayer structure. This strong field caused the electrons in the graphene to split into fractional charges – a well-known phenomenon that occurs in the fractional quantum Hall effect. “Here, strong magnetic fields create Landau electronic levels that induce particles with fractional charges,” Li explains. “The bilayer structure facilitates pairing between these positive and negative charges, making fractional excitons possible.”
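
For context, the sketch below computes the magnetic length at the 12 T field used in the experiment: the characteristic radius of the quantum orbits into which the field squeezes the electrons. This is standard background physics, not the paper’s own analysis:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant (J s)
e = 1.602176634e-19     # electron charge (C)
B = 12.0                # applied field (T)

# Magnetic length: the size scale of Landau-level orbits, l_B = sqrt(hbar/eB)
l_B = math.sqrt(hbar / (e * B))
print(f"magnetic length at 12 T: {l_B * 1e9:.1f} nm")  # ~7.4 nm

# With electrons confined to orbits a few nanometres across, their mutual
# interactions dominate: the regime in which the fractionally charged
# quasiparticles of the fractional quantum Hall effect emerge.
```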

“Distinct from any known particles”

The fractional excitons represent a quantum system of neutral particles that obey fractional quantum statistics, interact via dipolar forces and are distinct from any known particles, Li tells Physics World. He adds that his team’s study, which is detailed in Nature, builds on prior works that predicted the existence of excitons in the fractional quantum Hall effect (see, for example, Nature Physics 13 751 (2017); Nature Physics 15 898–903 (2019); and Science 375 205–209 (2022)).

The researchers now plan to explore the properties of fractional excitons further. “Our key objectives include measuring the fractional charge of the constituent particles and confirming their anyonic statistics,” Li explains. Studies of this nature could shed light on how fractional excitons interact and flow, potentially revealing new quantum phases, he adds.

“Such insights could have profound implications for quantum technologies, including ultra-sensitive sensors and robust quantum computing platforms,” Li says. “As research progresses, fractional excitons may redefine the boundaries of condensed-matter physics and applied quantum science.”

The post New class of quasiparticle appears in bilayer graphene appeared first on Physics World.

European Space Agency’s Euclid mission spots spectacular Einstein ring

The European Space Agency (ESA) has released a spectacular image of an Einstein ring – a circle of light formed around a galaxy by gravitational lensing. Taken by the €1.4bn Euclid mission, the ring is a result of the gravitational effects of a galaxy located around 590 million light-years from Earth.

Euclid was launched in July 2023 and is currently located at a spot in space called Lagrange Point 2 – a gravitational balance point some 1.5 million kilometres beyond the Earth’s orbit around the Sun. Euclid has a 1.2 m-diameter telescope, a camera and a spectrometer that it uses to plot a 3D map of the distribution of more than two billion galaxies. The images it takes are about four times as sharp as those from current ground-based telescopes.

Einstein’s general theory of relativity predicts that light will bend around massive objects in space, so that they focus the light like a giant lens. This gravitational lensing effect is bigger for more massive objects and means we can sometimes see the light from distant galaxies that would otherwise be hidden.

If the alignment is just right, the light from the distant source galaxy bends to form a spectacular ring around the foreground object. In this case, the mass of the galaxy NGC 6505 is bending and magnifying the light from a more distant galaxy, about 4.42 billion light-years away, into a ring.
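
The scale of such a ring follows from the standard Einstein-radius formula, θ_E = √(4GM/c² · D_ls/(D_l·D_s)). The sketch below plugs in the two distances quoted above; the lens mass (~10¹¹ solar masses) and the flat-space treatment of the distances are simplifying assumptions for illustration only:

```python
import math

G = 6.674e-11      # gravitational constant (SI)
c = 2.998e8        # speed of light (m/s)
M_sun = 1.989e30   # solar mass (kg)
ly = 9.461e15      # light-year (m)

M = 1e11 * M_sun   # assumed lens mass for NGC 6505 (hypothetical)
D_l = 590e6 * ly   # distance to the lens galaxy
D_s = 4.42e9 * ly  # distance to the background source
D_ls = D_s - D_l   # crude stand-in for the lens-source distance

theta_E = math.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))  # radians
print(f"Einstein radius: ~{math.degrees(theta_E) * 3600:.1f} arcsec")
# Of order a couple of arcseconds, comfortably resolved by Euclid's
# sharp 1.2 m space telescope.
```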

Studying such rings can shed light on the expansion of the universe as well as the nature of dark matter.

Euclid’s first science results were released in May 2024, following its first shots of the cosmos in November 2023. Hints of the ring were first spotted in September 2023 while Euclid was being tested, with follow-up measurements now revealing it in exquisite detail.

The post European Space Agency’s Euclid mission spots spectacular Einstein ring appeared first on Physics World.

Quantum simulators deliver surprising insights into magnetic phase transitions

Unexpected behaviour at phase transitions between classical and quantum magnetism has been observed in different quantum simulators operated by two independent groups. One investigation was led by researchers at Harvard University and used Rydberg atoms as quantum bits (qubits). The other study was led by scientists at Google Research and involved superconducting qubits. Both projects revealed deviations from the canonical mechanisms of magnetic freezing, with unexpected oscillations near the phase transition.

A classical magnetic material can be understood as a fluid mixture of magnetic domains that are oriented in opposite directions, with the domain walls in constant motion. As a strengthening magnetic field is applied to the system, the energy associated with a domain wall increases, so the magnetic domains themselves become larger and less mobile. At some point, when the magnetism becomes sufficiently strong, a quantum phase transition occurs, causing the magnetism of the material to become fixed and crystalline: “A good analogy is like water freezing,” says Mikhail Lukin of Harvard University.

The traditional quantitative model for these transitions is the Kibble–Zurek mechanism, which was first formulated to describe cosmological phase transitions in the early universe. It predicts that the dynamics of a system begin to “freeze” when the system gets so close to the transition point that the domains crystallize more quickly than they can come to equilibrium.
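
The quantitative content of the Kibble–Zurek argument can be stated compactly: for a quench of duration τ_Q through the transition, the frozen-in domain size grows as τ_Q^(ν/(1+zν)), where ν and z are the critical exponents of the transition. The sketch below evaluates this scaling using the textbook exponents of a 1D transverse-field Ising transition, chosen purely for illustration:

```python
# Kibble-Zurek scaling of the frozen-in domain size with quench time.
# Exponents for a 1D transverse-field Ising transition (illustrative).
nu, z = 1.0, 1.0

def kz_exponent(nu, z):
    """Domain size at freeze-out scales as tau_Q to this power."""
    return nu / (1 + z * nu)

for tau_Q in (1, 10, 100):
    relative_domain_size = tau_Q ** kz_exponent(nu, z)
    print(f"quench time {tau_Q:>3} -> domain size ~{relative_domain_size:4.1f}")
# Slower quenches freeze in larger domains. Smooth, monotonic growth is
# the canonical prediction; the oscillations both groups observed are
# what makes their results surprising.
```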

“There are some very good theories of various types of quantum phase transitions that have been developed,” says Lukin, “but typically these theories make some approximations. In many cases they’re fantastic approximations that allow you to get very good results, but they make some assumptions which may or may not be correct.”

Highly reconfigurable platform

In their work, Lukin and colleagues utilized a highly reconfigurable platform using Rydberg atom qubits. The system was pioneered by Lukin and others in 2016 to study a specific type of magnetic quantum phase transition in detail. They used a laser to simulate the effect of a magnetic field on the Rydberg atoms, and adjusted the laser frequency to tune the field strength.

The researchers found that, rather than simply becoming progressively larger and less mobile as the field strength increased (a phenomenon called coarsening), the domain sizes underwent unexpected oscillations around the phase transition.

“We were really quite puzzled,” says Lukin. “Eventually we figured out that this oscillation is a sign of a special type of excitation mode similar to the Higgs mode in high-energy physics. This is something we did not anticipate…That’s an example where doing quantum simulations on quantum devices really can lead to new discoveries.”

Meanwhile, the Google-led study used a new approach to quantum simulation with superconducting qubits. Such qubits have proved extremely successful and scalable because they use solid-state technology – and they are used in most of the world’s leading commercial quantum computers, such as IBM’s Osprey and Google’s own Willow chips. Much of the previous work using such chips, however, has focused on sequential “digital” quantum logic, in which one set of gates is activated only after the previous set has concluded. The long times needed for such calculations allow the effects of noise to accumulate, resulting in computational errors.

Hybrid approach

In the new work, the Google team developed a hybrid analogue–digital approach in which a digital universal quantum gate set was used to prepare well-defined input qubit states. They then switched the processor to analogue mode, using capacitive couplers to tune the interactions between the qubits. In this mode, all the qubits were allowed to operate on each other simultaneously, without the quantum logic being shoehorned into a linear set of gate operations. Finally, the researchers characterized the output by switching back to digital mode.

The researchers used a 69-qubit superconducting system to simulate a similar, but non-identical, magnetic quantum phase transition to that studied by Lukin’s group. They were likewise puzzled by similar unexpected behaviour in their system. The groups subsequently became aware of each other’s work, as Google Research’s Trond Anderson explains: “It’s very exciting to see consistent observations from the Lukin group. This not only provides supporting evidence, but also demonstrates that the phenomenon appears in several contexts, making it extra important to understand.”

Both groups are now seeking to push their research deeper into the exploration of complex many-body quantum physics. The Google group estimates that simulating the highly entangled quantum states involved, at the same level of fidelity achieved in its experiments, would take the US Department of Energy’s Frontier supercomputer – one of the world’s most powerful – more than a million years. The researchers now want to look at problems that are completely intractable classically, such as magnetic frustration. “The analogue–digital approach really combines the best of both worlds, and we’re very excited about this as a new promising direction towards making discoveries in systems that are too complex for classical computers,” says Anderson.

The Harvard researchers are also looking to push their system to study more and more complex quantum systems. “There are many interesting processes where dynamics – especially across a quantum phase transition – remains poorly understood,” says Lukin. “And it ranges from the science of complex quantum materials to systems in high-energy physics such as lattice gauge theories, which are notorious for being hard to simulate classically to the point where people literally give up…We want to apply these kinds of simulators to real open quantum problems and really use them to study the dynamics of these systems.”

The research is described in side-by-side papers in Nature.

The post Quantum simulators deliver surprising insights into magnetic phase transitions appeared first on Physics World.

Supermassive black hole displays ‘unprecedented’ X-ray outbursts

An international team of researchers has detected a series of significant X-ray oscillations near the innermost orbit of a supermassive black hole – an unprecedented discovery that could indicate the presence of a nearby stellar-mass orbiter such as a white dwarf.

Optical outburst

The Massachusetts Institute of Technology (MIT)-led team began studying the extreme supermassive black hole 1ES 1927+654 – located around 270 million light years away and about a million times more massive than the Sun – in 2018, when it brightened by a factor of around 100 at optical wavelengths. Shortly after this optical outburst, X-ray monitoring revealed a period of dramatic variability as X-rays dropped rapidly – at first becoming undetectable for about a month, before returning with a vengeance and transforming into the brightest supermassive black hole in the X-ray sky.

“All of this dramatic variability seemed to be over by 2021, as the source appeared to have returned to its pre-2018 state. However, luckily, we continued to watch this source, having learned the lesson that this supermassive black hole will always surprise us. The discovery of these millihertz oscillations was indeed quite a surprise, but it gives us a direct probe of regions very close to the supermassive black hole,” says Megan Masterson, a fifth-year PhD candidate at the MIT Kavli Institute for Astrophysics and Space Research, who co-led the study with MIT’s Erin Kara – alongside researchers based elsewhere in the US, as well as at institutions in Chile, China, Israel, Italy, Spain and the UK.

“We found that the period of these oscillations rapidly changed – dropping from around 18 minutes in 2022 to around seven minutes in 2024. This period evolution is unprecedented, having never been seen before in the small handful of other supermassive black holes that show similar oscillatory behaviour,” she adds.

White dwarf

According to Masterson, one of the key ideas behind the study was that the rapid X-ray period change could be driven by a white dwarf – the compact remnant of a star like our Sun – orbiting around the supermassive black hole close to its event horizon.

“If this white dwarf is driving these oscillations, it should produce a gravitational wave signal that will be detectable with next-generation gravitational wave observatories, like ESA’s Laser Interferometer Space Antenna (LISA),” she says.
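
A Newtonian back-of-envelope shows why a roughly seven-minute period places the putative orbiter so close to the event horizon. The sketch below uses Kepler’s third law with the ~10⁶ solar-mass figure quoted above; it is a rough consistency check, not the team’s relativistic modelling:

```python
import math

G = 6.674e-11            # gravitational constant (SI)
c = 2.998e8              # speed of light (m/s)
M = 1e6 * 1.989e30       # black-hole mass: ~a million suns (kg)
P = 7 * 60.0             # orbital period matching the 2024 oscillations (s)

# Kepler's third law: a^3 = G M P^2 / (4 pi^2)
a = (G * M * P**2 / (4 * math.pi**2)) ** (1 / 3)

# Compare with the Schwarzschild radius r_s = 2GM/c^2
r_s = 2 * G * M / c**2

print(f"orbital radius:       {a / 1e9:.1f} million km")
print(f"Schwarzschild radius: {r_s / 1e9:.1f} million km")
print(f"ratio: ~{a / r_s:.1f} r_s")  # only a few r_s: close to the horizon
```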

To test their hypothesis, the researchers used X-ray data from ESA’s XMM-Newton observatory to detect the oscillations, which allowed them to track how the X-ray brightness changed over time. The findings were presented in mid-January at the 245th meeting of the American Astronomical Society in National Harbor, Maryland, and subsequently reported in Nature.

According to Masterson, these insights into the behaviour of X-rays near a black hole will have major implications for future efforts to detect multi-messenger signals from supermassive black holes.

“We really don’t understand how common stellar-mass companions around supermassive black holes are, but these findings tell us that it may be possible for stellar-mass objects to survive very close to supermassive black holes and produce gravitational wave signals that will be detected with the next-generation gravitational wave observatories,” she says.

Looking ahead, Masterson confirms that the immediate next step for MIT research in this area is to continue to monitor 1ES 1927+654 – with both existing and future telescopes – in an effort to deepen understanding of the extreme physics at play in and around the innermost environments of black holes.

“We’ve learned from this discovery that we should expect the unexpected with this source,” she adds. “We’re also hoping to find other sources like this one through large time-domain surveys and dedicated X-ray follow-up of interesting transients.”

The post Supermassive black hole displays ‘unprecedented’ X-ray outbursts appeared first on Physics World.

How the changing environment affects solar-panel efficiency: the Indian perspective

This episode of the Physics World Weekly podcast looks at how climate and environmental change affect the efficiency of solar panels. Our guest is the climate scientist Sushovan Ghosh, who is lead author of a paper that explores how aerosols, rising temperatures and other environmental factors will affect solar-energy output in India in the coming decades.

Today, India ranks fifth amongst nations in terms of installed solar-energy capacity, and boosting this capacity will be crucial for the country’s drive to reduce its greenhouse gas emissions by 45% by 2030, compared with 2005 levels.

While much of India is blessed with abundant sunshine, the country is experiencing a persistent decline in incoming solar radiation that is associated with aerosol pollution. What is more, the higher temperatures associated with climate change reduce the efficiency of solar cells – and their performance in India is also impacted by other climate-related phenomena.

In this podcast, Ghosh explains how changes in the climate and environment affect the generation of solar energy and what can be done to mitigate these effects.

Ghosh co-wrote the paper when at the Centre for Atmospheric Sciences at the Indian Institute of Technology Delhi and he is now at the Barcelona Supercomputing Center in Spain. His co-authors in Delhi were Dilip Ganguly, Sagnik Dey and Subhojit Ghoshal Chowdhury; and the paper is called, “Future photovoltaic potential in India: navigating the interplay between air pollution control and climate change mitigation”. It appears in Environmental Research Letters, which is published by IOP Publishing – which also brings you Physics World.

The post How the changing environment affects solar-panel efficiency: the Indian perspective appeared first on Physics World.

Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets

A new graphene nanostructure could become the basis for the first ferromagnets made purely from carbon. Known as an asymmetric or “Janus” graphene nanoribbon after the two-faced god in Roman mythology, the structure has opposite edges with different properties, one of which takes a zigzag form. Lu Jiong, a researcher at the National University of Singapore (NUS) who co-led the effort to make the structure, explains that it is this zigzag edge that gives rise to the ferromagnetic state, making the structure the first of its kind.

“The work is the first demonstration of the concept of a Janus graphene nanoribbon (JGNR) strand featuring a single ferromagnetic zigzag edge,” Lu says.

Graphene nanostructures with zigzag-shaped edges show much promise for technological applications thanks to their electronic and magnetic properties. Zigzag GNRs (ZGNRs) are especially appealing because the behaviour of their electrons can be tuned from metal-like to semiconducting by adjusting the length or width of the ribbons; modifying the structure of their edges; or doping them with non-carbon atoms. The same techniques can also be used to make such materials magnetic. This versatility means they can be used as building blocks for numerous applications, including quantum and spintronics technologies.

Previously, only two types of symmetric ZGNRs had been synthesized via on-surface chemistry: 6-ZGNR and nitrogen-doped 6-ZGNR, where the “6” refers to the number of carbon rows across the nanoribbon’s width. In the latest work, Lu and co-team leaders Hiroshi Sakaguchi of the University of Kyoto, Japan and Steven Louie at the University of California, Berkeley, US sought to expand this list.

“It has been a long-sought goal to make other forms of zigzag-edge related GNRs with exotic quantum magnetic states for studying new science and developing new applications,” says team member Song Shaotang, the first author of a paper in Nature about the research.

ZGNRs with asymmetric edges

Building on topological classification theory developed in previous research by Louie and colleagues, theorists in the Singapore-Japan-US collaboration predicted that it should be possible to tune the magnetic properties of these structures by making ZGNRs with asymmetric edges. “These nanoribbons have one pristine zigzag edge and another edge decorated with a pattern of topological defects spaced by a certain number m of missing motifs,” Louie explains. “Our experimental team members, using innovative z-shaped precursor molecules for synthesis, were able to make two kinds of such ZGNRs. Both of these have one edge that supports a benzene motif array with a spacing of m = 2 missing benzene rings in between. The other edge is a conventional zigzag edge.”

Crucially, the theory predicted that the magnetic behaviour – ranging from antiferromagnetism to ferrimagnetism to ferromagnetism – of these JGNRs could be controlled by varying the value of m. In particular, says Louie, the configuration of m = 2 is predicted to show ferromagnetism – that is, all electron spins aligned in the same direction – concentrated entirely on the pristine zigzag edge. This behaviour contrasts sharply with that of symmetric ZGNRs, where spin polarization occurs on both edges and the aligned edge spins are antiferromagnetically coupled across the width of the ribbon.

Precursor design and synthesis

To validate these theoretical predictions, the team synthesized JGNRs on a surface. They then used advanced scanning tunnelling microscope (STM) and atomic force microscope (AFM) measurements to visualize the materials’ exact real-space chemical structure. These measurements also revealed the emergence of exotic magnetic states in the JGNRs synthesized in Lu’s lab at the NUS.

atomic model of the JGNRs
Two sides: An atomic model of the Janus graphene nanoribbons (left) and its atomic force microscopic image (right). (Courtesy: National University of Singapore)

Sakaguchi explains that, in the past, GNRs were mainly synthesized using symmetric precursor chemical structures, largely because their asymmetric counterparts were so scarce. One of the challenges in this work, he notes, was to design asymmetric polymeric precursors that could undergo the essential fusion (dehydrogenation) process to form JGNRs. These molecules often orient randomly, so the researchers needed to use additional techniques to align them unidirectionally prior to the polymerization reaction. “Addressing this challenge in the future could allow us to produce JGNRs with a broader range of magnetic properties,” Sakaguchi says.

Towards carbon-based ferromagnets

According to Lu, the team’s research shows that JGNRs could become the first carbon-based spin transport channels to show ferromagnetism. They might even lead to the development of carbon-based ferromagnets, capping off a research effort that began in the 1980s.

However, Lu acknowledges that there is much work to do before these structures find real-world applications. For one, they are not currently very robust when exposed to air. “The next goal,” he says, “is to develop chemical modifications that will enhance the stability of these 1D structures so that they can survive under ambient conditions.”

A further goal, he continues, is to synthesize JGNRs with different values of m, as well as other classes of JGNRs with different types of defective edges. “We will also be exploring the 1D spin physics of these structures and [will] investigate their spin dynamics using techniques such as scanning tunnelling microscopy combined with electron spin resonance, paving the way for their potential applications in quantum technologies.”

The post Two-faced graphene nanoribbons could make the first purely carbon-based ferromagnets appeared first on Physics World.

Asteroid Bennu contains the stuff of life, sample analysis reveals

A sample of asteroid dirt brought back to Earth by NASA’s OSIRIS-REx mission contains amino acids and the nucleobases of RNA and DNA, plus brines that could have facilitated the formation of organic molecules, scanning electron microscopy has shown.

The 120 g of material came from the near-Earth asteroid 101955 Bennu, which OSIRIS-REx visited in 2020. The findings “bolster the hypothesis that asteroids like Bennu could have delivered the raw ingredients to Earth prior to the emergence of life,” Dan Glavin of NASA’s Goddard Space Flight Center tells Physics World.

Bennu has an interesting history. It is 565 m across at its widest point and was once part of a much larger parent body, possibly 100 km in diameter, that was smashed apart in a collision in the Asteroid Belt between 730 million and 1.55 billion years ago. Bennu coalesced from the debris as a rubble pile that found itself in Earth’s vicinity.

The sample from Bennu was parachuted back to Earth in 2023 and shared among teams of researchers. Now two new papers, published in Nature and Nature Astronomy, reveal some of the findings from those teams.

Saltwater residue

In particular, researchers identified a diverse range of salt minerals, including sodium-bearing phosphates and carbonates that formed brines when liquid water on Bennu’s parent body either evaporated or froze.

SEM images of minerals found in Bennu samples
Mineral rich SEM images of trona (water-bearing sodium carbonate) found in Bennu samples. The needles form a vein through surrounding clay-rich rock, with small pieces of rock resting on top of the needles. (Courtesy: Rob Wardell, Tim Gooding and Tim McCoy, Smithsonian)

The liquid water would have been present on Bennu’s parent body during the dawn of the Solar System, in the first few million years after the planets began to form. Heat generated by the radioactive decay of aluminium-26 would have kept pockets of water liquid deep inside the parent body. The brines that this liquid water bequeathed would have played a role in kickstarting organic chemistry.

Tim McCoy, of the Smithsonian’s National Museum of Natural History and the lead author of the Nature paper, says that “brines play two important roles”.

One of those roles is producing the minerals that serve as templates for organic molecules. “As an example, brines precipitate phosphates that can serve as a template on which sugars needed for life are formed,” McCoy tells Physics World. The phosphate is like a pegboard with holes, and atoms can use those spaces to arrange themselves into sugar molecules.

The second role that brines can play is to then release the organic molecules that have formed on the minerals back into the brine, where they can combine with other organic molecules to form more complex compounds.

Ambidextrous amino acids

Meanwhile, the study reported in Nature Astronomy, led by Dan Glavin and Jason Dworkin of NASA’s Goddard Space Flight Center, focused on the detection of 14 of the 20 amino acids used by life to build proteins, deepening the mystery of why life only uses “left-handed” amino acids.

Amino acid molecules are not superimposable on their mirror images – think of how, no matter how much you twist or turn your left hand, you will never be able to superimpose it on your right hand. As such, amino acids can randomly be either left- or right-handed, a property known as chirality.

However, for some reason that no one has been able to figure out yet, all life on Earth uses left-handed amino acids.

One hypothesis was that, thanks to some quirk, the amino acids that formed in space and were brought to Earth in impacts had a bias towards left-handedness. This possibility now looks unlikely after Glavin and Dworkin’s team discovered that the amino acids in the Bennu sample are a mix of left- and right-handed, with no evidence that one is preferred over the other.

“So far we have not seen any evidence for a preferred chirality,” Glavin says. This goes for both the Bennu sample and a previous sample from the asteroid 162173 Ryugu, collected by Japan’s Hayabusa2 mission, which contained 23 different forms of amino acid. “For now, why life turned left on Earth remains a mystery.”
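
Chemists quantify such a bias as an enantiomeric excess: the normalized difference between left- and right-handed abundances, which is zero for a perfectly racemic (50:50) mixture. Here is a minimal sketch of the arithmetic, using made-up abundances rather than values from the Bennu papers:

```python
def enantiomeric_excess(left: float, right: float) -> float:
    """Enantiomeric excess in percent: positive favours the left-handed (L) form,
    negative the right-handed (D) form; 0% is a racemic (50:50) mixture."""
    return 100.0 * (left - right) / (left + right)

# Hypothetical abundances in arbitrary units -- not data from the Bennu sample
print(enantiomeric_excess(50.0, 50.0))  # 0.0  -> no preferred handedness
print(enantiomeric_excess(99.9, 0.1))   # 99.8 -> strong bias, as in Earth biology
```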

A step closer to the origin of life

Another mystery is why the organic chemistry on Bennu’s parent body reached a certain point and then stopped. Why didn’t it form more complex organic molecules, or even life?

A mosaic image of Bennu
Near-Earth asteroid A mosaic image of Bennu, as observed by NASA’s OSIRIS-REx spacecraft. (Courtesy: NASA/Goddard/University of Arizona)

Amino acids are the building blocks of proteins. In turn, proteins are among the primary molecules of life, facilitating biological processes within cells. Nucleobases have also been identified in the Bennu sample, but although chains of nucleobases form the molecular backbone of RNA and DNA, neither nucleic acid has yet been found in an extraterrestrial sample.

“Although the wet and salty conditions inside Bennu’s parent body provided an ideal environment for the formation of amino acids and nucleobases, it is not clear yet why more complex organic polymers did not evolve,” says Glavin.

Researchers are still looking for that complex chemistry. McCoy cites the 5-carbon sugar ribose, which is a component of RNA, as an essential organic molecule for life that scientists hope to one day find in an asteroid sample.

“But as you might imagine, as organic molecules increase in complexity, they decrease in number,” says McCoy, explaining that we will need to search ever larger amounts of asteroidal material before we might get lucky and find them.

The answers will ultimately help astrobiologists figure out where life began. Could proteins, RNA or even biological cells have formed in the early Solar System within objects such as Bennu’s parent planetesimal? Or did complex biochemistry begin only on Earth once the base materials had been delivered from space?

“What is becoming very clear is that the basic chemical building blocks of life could have been delivered to Earth, where further chemical evolution could have occurred in a habitable environment, including the origin of life itself,” says Glavin.

What’s really needed are more samples. China’s Tianwen-2 mission is blasting off later this year on a mission to capture a 100 g sample from the small near-Earth asteroid 469219 Kamo‘oalewa. The findings are likely to be similar to those of OSIRIS-REx and Hayabusa2, but there’s always the chance that something more complex might be in that sample too. If and when those organic molecules are found, they will have huge repercussions for our understanding of the origin of life on Earth.

The post Asteroid Bennu contains the stuff of life, sample analysis reveals appeared first on Physics World.

International quantum year launches in style at UNESCO headquarters in Paris

More than 800 researchers, policy makers and government officials from around the world gathered in Paris this week to attend the official launch of the International Year of Quantum Science and Technology (IYQ). Held at the headquarters of the United Nations Educational, Scientific and Cultural Organisation (UNESCO), the two-day event included contributions from four Nobel prize-winning physicists – Alain Aspect, Serge Haroche, Anne l’Huillier and William Phillips.

Opening remarks came from Cephas Adjej Mensah, a research director in the Ghanaian government, which last year submitted the draft resolution to the United Nations for 2025 to be proclaimed as the IYQ. “Let us commit to making quantum science accessible to all,” Mensah declared, reminding delegates that the IYQ is intended to be a global initiative, spreading the benefits of quantum equitably around the world. “We can unleash the power of quantum science and technology to make an equitable and prosperous future for all.”

The keynote address was given by l’Huillier, a quantum physicist at Lund University in Sweden, who shared the 2023 Nobel Prize for Physics with Pierre Agostini and Ferenc Krausz for their work on attosecond pulses. “Quantum mechanics has been extremely successful,” she said, explaining how it was invented 100 years ago by Werner Heisenberg on the island of Helgoland. “It has led to new science and new technology – and it’s just the beginning.”

An on-stage panel in a large auditorium
Let’s go Stephanie Simmons, chief quantum officer at Photonic and co-chair of Canada’s National Quantum Strategy advisory council, speaking at the IYQ launch in Paris. (Courtesy: Matin Durrani)

Some of that promise was outlined by Phillips in his plenary lecture. The first quantum revolution led to lasers, semiconductors and transistors, he reminded participants, but said that the second quantum revolution promises more by exploiting effects such as quantum entanglement and superposition – even if its potential can be hard to grasp. “It’s not that there’s something deeply wrong with quantum mechanics – it’s that there’s something deeply wrong with our ability to understand it,” Phillips explained.

The benefits of quantum technology to society were echoed by leading Chinese quantum physicist Jian-Wei Pan of the University of Science and Technology of China in Hefei. “The second quantum revolution will likely provide another leap in human civilization,” said Pan, who was not at the meeting, in a pre-recorded video statement. “Sustainable funding from government and private sector is essential. Intensive and proactive international co-operation and exchange will undoubtedly accelerate the benefit of quantum information to all of humanity.”

Leaders of the burgeoning quantum tech sector were in Paris too. Addressing the challenges and opportunities of scaling quantum technologies to practical use was a panel made up of Quantinuum chief executive Rajeeb Hazra, QuEra president Takuya Kitagawa, IBM’s quantum-algorithms vice president Katie Pizzolato, ID Quantique boss Grégoire Ribordy and Microsoft technical fellow Krysta Svore. Also present was Alexander Ling from the National University of Singapore, co-founder of two hi-tech start-ups.

“We cannot imagine what weird and wonderful things quantum mechanics will lead to but you can be sure it’ll be marvellous,” said Celia Merzbacher, executive director of the Quantum Economic Development Consortium (QED-C), who chaired the session. All panellists stressed the importance of having a supply of talented quantum scientists and engineers if the industry is to succeed. Hazra also underlined that new products based on “quantum 2.0” technology had to be developed with – and to serve the needs of – users if they are to turn a profit.

The ethical challenges of quantum advancements were also examined in a special panel, as was the need for responsible quantum innovation to avoid a “digital divide” where quantum technology benefits some parts of society but not others. “Quantum science should elevate human dignity and human potential,” said Diederick Croese, a lawyer and director of the Centre for Quantum and Society at Quantum Delta NL in the Netherlands.

A man stood beside a large panel of coloured lights creating an abstract picture
Science in action German artist Robin Baumgarten explains the physics behind his Quantum Jungle art installation. (Courtesy: Matin Durrani)

The cultural impact of quantum science and technology was not forgotten in Paris either. Delegates flocked to an art installation created by Berlin-based artist and game developer Robin Baumgarten. Dubbed Quantum Jungle, it attempts to “visualize quantum physics in a playful yet scientifically accurate manner” by using an array of lights controlled by flickable, bendy metal door stops. Baumgarten claims it is a “mathematically accurate model of a quantum object”, with the brightness of each ring being proportional to the chance of an object being there.
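
That “proportional to the chance” mapping is just the Born rule: the probability of finding a quantum object at a given site is the squared magnitude of its wavefunction there. Baumgarten’s actual code isn’t public here, but a minimal sketch of the idea – our illustration, assuming a Gaussian wave packet – might look like this:

```python
import math

# Born-rule brightness map (illustrative sketch, not Baumgarten's actual model):
# a Gaussian wave packet over N sites; each "ring" glows in proportion to the
# probability |psi|^2 of finding the object at that site
N, centre, width = 16, 5.0, 2.0
psi = [math.exp(-((i - centre) ** 2) / (4 * width ** 2)) for i in range(N)]
norm = math.sqrt(sum(a * a for a in psi))
prob = [(a / norm) ** 2 for a in psi]                    # probabilities sum to 1
brightness = [round(255 * p / max(prob)) for p in prob]  # scale to 8-bit LED levels
print(brightness)
```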

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post International quantum year launches in style at UNESCO headquarters in Paris appeared first on Physics World.

Spacewoman: trailblazing astronaut Eileen Collins makes for a compelling and thoughtful documentary subject

“What makes a good astronaut?” asks director Hannah Berryman in the opening scene of Spacewoman. It’s a question few can answer better than Eileen Collins. As the first woman to pilot and command a NASA Space Shuttle, her career was marked by historic milestones, extraordinary challenges and personal sacrifices. Collins looks down the lens of the camera and, as she pauses for thought, we cut to footage of her being suited up in astronaut gear for the third time. “I would say…a person who is not prone to panicking.”

In Spacewoman, Berryman crafts a thoughtful, emotionally resonant documentary that traces Collins’s life from a determined young girl in Elmira, New York, to a spaceflight pioneer.

The film’s strength lies in its compelling balance of personal narrative and technical achievement. Through intimate interviews with Collins, her family and former colleagues, alongside a wealth of archival footage, Spacewoman paints a vivid portrait of a woman whose journey was anything but straightforward. From growing up in a working-class family affected by her parents’ divorce and Hurricane Agnes’s destruction, to excelling in the male-dominated world of aviation and space exploration, Collins’s resilience shines through.

Berryman wisely centres the film on the four key missions that defined Collins’s time at NASA. While this approach necessitates a brisk overview of her early military career, it allows for an in-depth exploration of the stakes, risks and triumphs of spaceflight. Collins’s pioneering 1995 mission, STS-63, saw her pilot the Space Shuttle Discovery in the first rendezvous with the Russian space station Mir, a mission fraught with political and technical challenges. The archival footage from this and subsequent missions provides gripping, edge-of-your-seat moments that demonstrate both the precision and unpredictability of space travel.

Perhaps Spacewoman’s most affecting thread is its examination of how Collins’s career intersected with her family life. Her daughter, Bridget, born shortly after her first mission, offers a poignant perspective on growing up with a mother whose job carried life-threatening risks. In one of the film’s most emotionally charged scenes, Collins recounts explaining the Challenger disaster to a young Bridget. Despite her mother’s assurances that NASA had learned from the tragedy, the Columbia disaster just two weeks after that conversation underscored the constant shadow of danger inherent in space exploration.

These deeply personal reflections elevate Spacewoman beyond a straightforward biographical documentary. Collins’s son Luke, though younger and less directly affected by his mother’s missions, also shares touching memories, offering a fuller picture of a family shaped by space exploration’s highs and lows. Berryman’s thoughtful editing intertwines these recollections with historic footage, making the stakes feel immediate and profoundly human.

The film’s tension peaks during Collins’s final mission, STS-114, the first “return to flight” after Columbia. As the mission teeters on the brink of disaster due to familiar technical issues, Berryman builds a heart-pounding narrative, even for viewers unfamiliar with the complexities of spaceflight. Without getting bogged down in technical jargon, she captures the intense pressure of a mission fraught with tension – for those on Earth, at least.

Berryman’s previous films include Miss World 1970: Beauty Queens and Bedlam and Banned, The Mary Whitehouse Story. In a recent episode of the Physics World Stories podcast, she told me that she was inspired to make the film after reading Collins’s autobiography Through the Glass Ceiling to the Stars. “It was so personal,” she said, “it took me into space and I thought maybe we could do that with the viewer.” Collins herself joined us for that podcast episode and I found her to be that same calm, centred, thoughtful person we see in the film – and whom NASA clearly chose very carefully to command such an important mission.

Spacewoman isn’t just about near-misses and peril. It also celebrates moments of wonder: Collins describing her first sunrise from space or recalling the chocolate shuttles she brought as gifts for the Mir cosmonauts. These light-hearted anecdotes reveal her deep appreciation for the unique experience of being an astronaut. On the podcast, I asked Collins what one lesson she would bring from space to life on Earth. After her customary moment’s pause for thought, she replied “Reading books about science fiction is very important.” She was a fan of science fiction in her younger years, which enabled her to dream of the future that she realized at NASA and in space. But, she told me, these days she also reads about the real science of the future (she was deep into a book on artificial intelligence when we spoke) and history too. Looking back at Collins’s history in space certainly holds lessons for us all.

Berryman’s directorial focus ultimately circles back to a profound question: how much risk is acceptable in the pursuit of human progress? Spacewoman suggests that those committed to something greater than themselves are willing to risk everything. Collins’s career embodies this ethos, defined by an unshakeable resolve, even in the face of overwhelming odds.

In the film’s closing moments, we see Collins speaking to a wide-eyed girl at a book signing. The voiceover from interviews talks of the women slated to be instrumental in humanity’s return to the Moon and future missions to Mars. If there’s one thing I would change about the film, it’s that the final word is given to someone other than Collins. The message is a fitting summation of her life and legacy, but I would like to have seen it delivered with the understated confidence of someone who has lived it. It’s a quibble, though, in a compelling film that I would recommend to anyone with an interest in space travel or the human experience here on Earth.

When someone as accomplished as Collins says that you need to work hard and practise, practise, practise, it has a gravitas few others can muster. After all, she spent 10 years practising to fly the Space Shuttle – and got to do it for real twice. We see Collins speak directly to the wide-eyed girl in a flight suit as she signs her book and, as she does so, you can feel the words really hit home precisely because of who says them: “Reach for the stars. Don’t give up. Keep trying because you can do it.”

Spacewoman is more than a tribute to a trailblazer; it’s a testament to human perseverance, curiosity and courage. In Collins’s story, Berryman finds a gripping, deeply personal narrative that will resonate with audiences across the planet.

  • Spacewoman premiered at DOC NYC in November 2024 and is scheduled for theatrical release in 2025. A Haviland Digital Film in association with Tigerlily Productions.

The post Spacewoman: trailblazing astronaut Eileen Collins makes for a compelling and thoughtful documentary subject appeared first on Physics World.

Introducing the Echo-5Q: a collaboration between FormFactor, Tabor Quantum Systems and QuantWare

Watch this short video filmed at the APS March Meeting in 2024, where Mark Elo, chief marketing officer of Tabor Quantum Solutions, introduces the Echo-5Q, which he explains is an industry collaboration between FormFactor and Tabor Quantum Solutions, using the QuantWare quantum processing unit (QPU).

Elo points out that it is an out-of-the-box solution, allowing customers to order a full-stack system including the software, refrigeration, control electronics and the actual QPU. The Echo-5Q is delivered and installed, so that the customer can start making quantum measurements immediately. He explains that the Echo-5Q is designed at a price and feature point that increases the accessibility of on-site quantum computing.

Brandon Boiko, senior applications engineer with FormFactor, describes how FormFactor developed the dilution refrigeration technology into which the qubits are installed. Boiko explains that the product has been designed to reduce the cost of entry into the quantum field – made accessible through FormFactor’s test-and-measurement programme, which allows people to bring their samples on site to take measurements.

Alessandro Bruno is founder and CEO of QuantWare, which provides the quantum processor for the Echo-5Q – the part that sits at the millikelvin stage of the dilution refrigerator and hosts five qubits. Bruno hopes that the Echo-5Q will democratize access to quantum devices – for education, academic research and start-ups.

The post Introducing the Echo-5Q: a collaboration between FormFactor, Tabor Quantum Systems and QuantWare appeared first on Physics World.

Tissue-like hydrogel semiconductors show promise for next-generation bioelectronics

Researchers at the University of Chicago’s Pritzker School of Molecular Engineering have created a groundbreaking hydrogel that doubles as a semiconductor. The material combines the soft, flexible properties of biological tissues with the electronic capabilities of semiconductors, making it ideal for advanced medical devices.

In a study published in Science, the research team, led by Sihong Wang, developed a stretchy, jelly-like material that provides the robust semiconducting properties necessary for use in devices such as pacemakers, biosensors and drug delivery systems.

Rethinking hydrogel design

Hydrogels are ideal for many biomedical applications because they are soft, flexible and water-absorbent – just like human tissues. Materials scientists, who have long recognized the vast potential of hydrogels, have pushed the boundaries of this class of material. One way is to create hydrogels with semiconducting abilities that are useful for transmitting information between living tissues and bioelectronic device interfaces – in other words, a hydrogel semiconductor.

Imparting semiconducting properties to hydrogels is no easy task, however. Semiconductors, while known for their remarkable electronic properties, are typically rigid, brittle and water-repellent, making them inherently incompatible with hydrogels. By overcoming this fundamental mismatch, Wang and his team have created a material that could revolutionize the way medical devices interface with the human body.

Traditional hydrogels are made by dissolving hydrogel precursors (monomers or polymers) in water and adding chemicals to crosslink the polymers into a water-swollen network. Since most polymers are inherently insulating, creating a hydrogel with semiconducting properties requires a special class of semiconducting polymers. The challenges do not stop there, however: these polymers typically only dissolve in organic solvents, not in water.

“The question becomes how to achieve a well-dispersed distribution of these semiconducting materials within a hydrogel matrix,” says first author Yahao Dai, a PhD student in the Wang lab. “This isn’t just about randomly dispersing particles into the matrix. To achieve strong electrical performance, a 3D interconnected network is essential for effective charge transport. So, the fundamental question is: how do you build a hydrophobic, 3D interconnected network within the hydrogel matrix?”

Sihong Wang and Yahao Dai
Innovative material Sihong Wang (left), Yahao Dai (right) and colleagues have developed a novel hydrogel with semiconducting properties. (Courtesy: UChicago Pritzker School of Molecular Engineering/John Zich)

To address this challenge, the researchers first dissolved the polymer in an organic solvent that is miscible with water, forming an organogel – a gel-like material composed of an organic liquid phase in a 3D gel network. They then immersed the organogel in water and allowed the water to gradually replace the organic solvent, transforming it into a hydrogel.

The researchers point out that this versatile solvent exchange process can be adapted to a variety of semiconducting polymers, opening up new possibilities for hydrogel semiconductors with diverse applications.

A two-in-one material

The result is a hydrogel semiconductor material that’s soft enough to match the feel of human tissue. With a Young’s modulus as low as 81 kPa – comparable to that of jelly – and the ability to stretch up to 150% of its original length, this material mimics the flexibility and softness of living tissue. These tissue-like characteristics allow the material to seamlessly interface with the human body, reducing the inflammation and immune responses that are often triggered by rigid medical implants.

The material also has a high charge-carrier mobility, a measure of its ability to efficiently transmit electrical signals, of up to 1.4 cm²/V/s. This makes it suitable for biomedical devices that require effective semiconducting performance.

The potential applications extend beyond implanted devices. The material’s high hydration and porosity enable efficient volumetric biosensing and mass transport throughout the entire thickness of the semiconducting layer, which is useful for biosensing, tissue engineering and drug delivery applications. The hydrogel also responds to light effectively, opening up possibilities for light-controlled therapies, such as light-activated wireless pacemakers or wound dressings that use heat to accelerate healing.

A vision for transforming healthcare

The research team’s hydrogel material is now patented and being commercialized through UChicago’s Polsky Center for Entrepreneurship and Innovation. “Our goal is to further develop this material system and enhance its performance and application space,” says Dai. While the immediate focus is on enhancing the electrical and light modulation properties of the hydrogel, the team envisions future work in biochemical sensing.

“An important consideration is how to functionalize various bioreceptors within the hydrogel semiconductor,” explains Dai. “As each biomarker requires a specific bioreceptor, the goal is to target as many biomarkers as possible.”

The team is already exploring new methods to incorporate bioreceptors, such as antibodies and aptamers, within the hydrogels. With these advances, this class of semiconductor hydrogels could act as next-generation interfaces between human tissues and bioelectronic devices, from sensors to tailored drug-delivery systems. This breakthrough material may soon bridge the gap between living systems and electronics in ways once thought impossible.

The post Tissue-like hydrogel semiconductors show promise for next-generation bioelectronics appeared first on Physics World.

Reliability science takes centre stage with new interdisciplinary journal

Journal of Reliability Science and Engineering (Courtesy: IOP Publishing)

As our world becomes ever more dependent on technology, an important question emerges: how much can we truly rely on that technology? To help researchers explore this question, IOP Publishing (which publishes Physics World) is launching a new peer-reviewed, open-access publication called Journal of Reliability Science and Engineering (JRSE). The journal will operate in partnership with the Institute of Systems Engineering (part of the China Academy of Engineering Physics) and will benefit from the editorial and commissioning support of the University of Electronic Science and Technology of China, Hunan University and the Beijing Institute of Structure and Environment Engineering.

“Today’s society relies much on sophisticated engineering systems to manufacture products and deliver services,” says JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. “Such systems include power plants, vehicles, transportation and manufacturing. The safe, reliable and economical operation of all these requires the continuing advancement of reliability science and engineering.”

Defining reliability

The reliability of an object is commonly defined as the probability that it will perform its intended function adequately for a specified period of time. “The object in question may be a human being, product, system, or process,” Zuo explains. “Depending on its nature, corresponding sub-disciplines are human-, material-, structural-, equipment-, software- and system reliability.”

Key concepts in reliability science include failure modes, failure rates, reliability functions and coherency, as well as measures such as mean time-to-failure, mean time between failures, availability and maintainability. “Failure modes can be caused by effects like corrosion, cracking, creep, fracture, fatigue, delamination and oxidation,” Zuo explains.
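
To make these quantities concrete, here is a minimal sketch (our illustration, not material from the journal) of the simplest reliability model – a constant failure rate λ, for which the reliability function is R(t) = exp(−λt) and the mean time-to-failure is 1/λ – together with the series/parallel compositions that underlie coherent-system analysis:

```python
import math

def reliability(t: float, failure_rate: float) -> float:
    """R(t) = exp(-lambda*t): survival probability under a constant failure rate."""
    return math.exp(-failure_rate * t)

def series(*r: float) -> float:
    """A series system works only if every component works."""
    return math.prod(r)

def parallel(*r: float) -> float:
    """A parallel (redundant) system fails only if every component fails."""
    return 1.0 - math.prod(1.0 - ri for ri in r)

lam = 1e-4                          # hypothetical failure rate: 1e-4 per hour
print(f"MTTF = {1/lam:.0f} hours")  # mean time-to-failure = 10000 hours
r = reliability(1000, lam)          # one component surviving 1000 h: ~0.905
print(f"series: {series(r, r):.3f}  parallel: {parallel(r, r):.3f}")
```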

To analyse such effects, researchers may use approaches such as fault tree analysis (FTA); failure modes, effects and criticality analysis (FMECA); and binary decomposition, he adds. These and many other techniques lie within the scope of JRSE, which aims to publish high-quality research on all aspects of reliability. This could, for example, include studies of failure modes and damage propagation as well as techniques for managing them and related risks through optimal design and reliability-centred maintenance.

A focus on extreme environments

To give the journal structure, Zuo and his colleagues identified six major topics: reliability theories and methods; physics of failure and degradation; reliability testing and simulation; prognostics and health management; reliability engineering applications; and emerging topics in reliability-related fields.

Mingjian Zuo
JRSE’s co-editor-in-chief, Mingjian Zuo, a professor of mechanical engineering at the University of Alberta, Canada. (Courtesy: IOP Publishing)

As well as regular issues published four times a year, JRSE will also produce special issues. A special issue on system reliability and safety in varying and extreme environments, for example, focuses on reliability and safety methods, physical/mathematical and data-driven models, reliability testing, system lifetime prediction and performance evaluation. Intelligent operation and maintenance of complex systems in varying and extreme environments are also covered.

Interest in extreme environments was one of the factors driving the journal’s development, Zuo says, due to the increasing need for modern engineering systems to operate reliably in highly demanding conditions. As examples, he cites wind farms being built further offshore; faster trains; and autonomous systems such as drones, driverless vehicles and social robots that must respond quickly and safely to ever-changing surroundings in close proximity to humans.

“As a society, we are setting ever higher requirements on critical systems such as the power grid and Internet, water distribution and transport networks,” he says. “All of these demand further advances in reliability science and engineering to develop tools for the design, manufacture and operation as well as the maintenance of today’s sophisticated engineering systems.”

The go-to platform for researchers and industrialists alike

Another factor behind the journal’s launch is that previously, there were no international journals focusing on reliability research by Chinese organizations. Since the discipline’s leaders include several such organizations, Zuo says the lack of international visibility has seriously limited scientific exchange and promotion of reliability research between China and the global community. He hopes the new journal will remedy this. “Notable features of the journal include gold open access (thanks to our partnership with IOP Publishing, a learned-society publisher that does not have shareholders) and a fast review process,” he says.

In general, the number of academic journals focusing on reliability science and engineering is limited, he adds. “JRSE will play a significant role in promoting the advances in reliability research by disseminating cutting-edge scientific discoveries and creative reliability assurance applications in a timely way.

“We are aiming that the journal will become the go-to platform for reliability researchers and industrialists alike.”

The first issue of JRSE will be published in March 2025, and its editors welcome submissions of original research reports as well as review papers co-authored by experts. “There will also be space for perspectives, comments, replies, and news insightful to the reliability community,” says Zuo. In the future, the journal plans to sponsor reliability-related academic forums and international conferences.

With over 100 experts from around the world on its editorial board, Zuo describes JRSE as scientist-led, internationally-focused and highly interdisciplinary. “Reliability is a critical measure of performance of all engineering systems used in every corner of our society,” he says. “This journal will therefore be of interest to disciplines such as mechanical-, electrical-, chemical-, mining- and aerospace engineering as well as the mathematical and life sciences.”

The post Reliability science takes centre stage with new interdisciplinary journal appeared first on Physics World.

Elastic response explains why cordierite has ultra-low thermal expansion

Hot material The crystal structure of cordierite gives the material its unique thermal properties. (Courtesy: M Dove and L Li/Matter)

The anomalous and ultra-low thermal expansion of cordierite results from the interplay between lattice vibrations and the elastic properties of the material. That is the conclusion of Martin Dove at China’s Sichuan University and Queen Mary University of London in the UK and Li Li at the Civil Aviation Flight University of China. They showed that the material’s unusual behaviour stems from direction-varying elastic forces in its lattice, which act to vary cordierite’s thermal expansion along different directions.

Cordierite is a naturally-occurring mineral that can also be synthesized. Thanks to its remarkable thermal properties, it is used in products ranging from pizza stones to catalytic converters. When heated to high temperatures, it undergoes ultra-low thermal expansion along two directions, and it shrinks a tiny amount along the third direction. This makes it incredibly useful as a material that can be heated and cooled without changing size or suffering damage.

Despite its widespread use, scientists lack a fundamental understanding of how cordierite’s anomalous thermal expansion arises from the properties of its crystal lattice. Normally, thermal expansion (positive or negative) is understood in terms of Grüneisen parameters. These describe how vibrational modes (phonons) in the lattice cause it to expand or contract along each axis as the temperature changes.

Negative Grüneisen parameters describe a lattice that shrinks when heated, and are seen as key to understanding the thermal contraction of cordierite. However, the material’s thermal response is not isotropic – it contracts along only one axis when heated to high temperatures – so understanding cordierite in terms of its Grüneisen parameters alone is difficult.

Advanced molecular dynamics

In their study, Dove and Li used advanced molecular dynamics simulations to accurately model the behaviour of atoms in the cordierite lattice. Their simulations closely matched experimental observations of the material’s thermal expansion, providing them with key insights into why the material has a negative thermal expansion in just one direction.

“Our research demonstrates that the anomalous thermal expansion of cordierite originates from a surprising interplay between atomic vibrations and elasticity,” Dove explains. The elasticity is described in the form of an elastic compliance tensor, which predicts how a material will distort in response to a force applied along a specific direction.

At lower temperatures, lattice vibrations occur at lower frequencies. In this case, the simulations predicted negative thermal expansion in all directions – which is in line with observations of the material.

At higher temperatures, the lattice becomes dominated by high-frequency vibrations. In principle, this should result in positive thermal expansion in all three directions. Crucially, however, Dove and Li discovered that this expansion is cancelled out by the material’s elastic properties, as described by its elastic compliance tensor.

What is more, the unique arrangement of the crystal lattice means that this tensor varies depending on the direction of the applied force, creating an imbalance that amplifies the differences between the material’s expansion along each axis.
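
In equation form, the interplay described here corresponds to the standard anisotropic Grüneisen relation (given in generic textbook notation; the paper’s own formulation may differ):

$$\alpha_i = \frac{1}{V}\sum_{j} s_{ij}\sum_{\nu} c_{\nu}\,\gamma_{j,\nu}$$

where $\alpha_i$ is the linear thermal expansion coefficient along axis $i$, $s_{ij}$ is the elastic compliance tensor, $c_{\nu}$ is the heat capacity of phonon mode $\nu$ and $\gamma_{j,\nu}$ is that mode’s Grüneisen parameter along axis $j$. Even when every $\gamma_{j,\nu}$ is positive, as at high temperature, the direction-dependent compliances $s_{ij}$ can flip the sign of one $\alpha_i$ – the cancellation mechanism described next.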

Cancellation mechanism

“This cancellation mechanism explains why cordierite exhibits small positive expansion in two directions and small negative expansion in the third,” Dove explains. “Initially, I was sceptical of the results. The initial data suggested uniform expansion behaviour at both high and low temperatures, but the final results revealed a delicate balance of forces. It was a moment of scientific serendipity.”

Altogether, Dove and Li’s result clearly shows that cordierite’s anomalous behaviour cannot be understood by focusing solely on the Grüneisen parameters of its three axes. It is crucial to take its elastic compliance tensor into account.

In solving this long-standing mystery, the duo now hope their results could help researchers to better predict how cordierite’s thermal expansion will vary at different temperatures. In turn, they could help to extend the useful applications of the material even further.

“Anisotropic materials like cordierite hold immense potential for developing high-performance materials with unique thermal behaviours,” Dove says. “Our approach can rapidly predict these properties, significantly reducing the reliance on expensive and time-consuming experimental procedures.”

The research is described in Matter.

The post Elastic response explains why cordierite has ultra-low thermal expansion appeared first on Physics World.

Researchers in China propose novel gravitational-wave observatory

Researchers in China have proposed a novel gravitational-wave observatory to search for cracks in Einstein’s general theory of relativity. The Tetrahedron Constellation Gravitational Wave Observatory (TEGO) would detect gravitational waves via four satellites that form a tetrahedral structure in space. Backers of the conceptual plan say TEGO offers significant advantages over designs consisting of a triangular configuration of three satellites.

Gravitational waves are distortions of space–time that occur when massive bodies, such as black holes, are accelerated. They were first detected in 2015 by researchers working on the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) located in Hanford, Washington and Livingston, Louisiana.

The current leading design for a space-based gravitational-wave detector is the Laser Interferometer Space Antenna (LISA). Led by the European Space Agency, it is expected to launch in 2035 and operate for at least four years, at an estimated cost of €1.5bn.

LISA comprises three identical satellites in an equilateral triangle in space, with each side of the triangle being 2.5 million kilometres – more than six times the distance between the Earth and the Moon.

While ground-based instruments detect gravitational waves with frequencies from a few hertz to a few kilohertz, a space-based mission could pick up gravitational waves with frequencies between 10⁻⁴ and 10⁻¹ Hz.
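
The arm length is what sets that band: above roughly the light-travel frequency f* = c/2πL, an interferometer’s response rolls off. A quick back-of-the-envelope check (our estimate, not a figure from the mission proposals):

```python
import math

c = 2.998e8  # speed of light in m/s
L = 2.5e9    # LISA-like arm length: 2.5 million km, in metres

# Transfer frequency f* = c/(2*pi*L): the rough upper edge of the band in which
# an interferometer arm of length L responds fully to a passing wave
f_star = c / (2 * math.pi * L)
print(f"f* = {f_star * 1e3:.0f} mHz")  # ~19 mHz, inside the 1e-4 to 1e-1 Hz band
```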

China has two proposals for a space-based gravitational-wave mission. Dubbed TAIJI and TianQin, they would be launched in the 2030s and, like LISA, consist of three spacecraft in a triangular formation, each separated by 2.5 million km.

According to Hong-Bo Jin of the National Astronomical Observatories, Chinese Academy of Sciences, in Beijing, one disadvantage of a triangular array is that when a gravitational wave – a transverse wave – propagates parallel to the plane of the triangle, it is more difficult to locate the source of the wave.

A tetrahedral configuration could get around this problem, and Jin says that an additional advantage is the extra combinations of optical paths possible with six arms. This means it could be sensitive to six polarization modes of gravitational waves. Einstein’s general theory of relativity predicts that gravitational waves have only two tensor polarization modes, so any detection of so-called vector or scalar polarization modes could signal new physics.
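
The arm count is simple combinatorics – n satellites admit n(n−1)/2 two-way laser links – and six is also the maximum number of polarization modes (two tensor, two vector, two scalar) that a general metric theory of gravity allows. A toy check of the link count:

```python
from itertools import combinations

# One laser arm per pair of satellites: n*(n-1)/2 links for n spacecraft
for n, shape in [(3, "triangle (LISA-like)"), (4, "tetrahedron (TEGO)")]:
    arms = list(combinations(range(n), 2))
    print(f"{shape}: {len(arms)} arms")  # 3 arms vs 6 arms
```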

“Detecting gravitational waves based on the TEGO configuration will possibly reveal more polarization modes of gravitational waves, which is conducive to deepening our understanding of general relativity and revealing the essence of gravity and spacetime,” says Jin.

Yet such a design will come with costs. Given that the amount of equipment for TEGO, including the telescopes and optical benches, is twice that of a triangular configuration, cost estimates for a tetrahedral set-up could also be double.

While TEGO follows a separate technical route from TAIJI, Jin says it can “refer” to some of TAIJI’s mature technologies. Given that many technologies still need to be demonstrated and developed, however, TEGO has no specific timeline for launch.

Italian gravitational-wave physicist Stefano Vitale, a former principal investigator of the LISA Pathfinder mission, told Physics World that “polyhedric” configurations of gravitational-wave detectors are “not new” and are much more difficult to implement than LISA. He adds that even aligning a three-satellite configuration such as LISA is “extremely challenging” and is something the aerospace community has never tried before.

“Going off-plane, like the TEGO colleagues want to do, with telescope add-ons, opens a completely new chapter [and] cannot be considered as incremental relative to LISA,” adds Vitale.

The post Researchers in China propose novel gravitational-wave observatory appeared first on Physics World.

From banking to quantum optics: Michelle Lollie’s unique journey

Michelle Lollie
Quantum attraction Michelle Lollie. (Courtesy: Michelle Lollie)

Michelle Lollie is an advanced laser scientist at Quantinuum, supporting the design, development and construction of complex optical systems that will serve as the foundations of world-class quantum computers. Lollie also participates in various diversity, equity, inclusion and accessibility initiatives, advocating for those who are marginalized in STEM fields, particularly in physics. Outside of wrangling photons, you can often find her at home practicing the violin.

Your initial bachelor’s degree was in finance, and you went on to work in the field through your 20s before pivoting to physics – what made you take the leap, and what inspired you to pick physics for your second bachelor’s degree?

I had dreams of working in finance since high school – indeed, at the time I was on my way to being the most dedicated, most fashionable, and most successful investment banker on Wall Street. I would like to think that, in some other quantum universe, there’s still a Michelle Lollie – investment banker extraordinaire.

So my interest in physics wasn’t sparked until much later in life, when I was 28 years old – I was no longer excited by a career in finance, and was looking for a professional pivot. I came across a groundbreaking theory paper about the quantum teleportation of states. I honestly thought that it referred to “Beam me up, Scotty” from Star Trek, and I was amazed.

But all jokes aside, quantum physics holds many a mystery that we’re still exploring. As a field, it’s quite new – there are approximately 100 years of dedicated quantum study and discovery, compared to millennia of classical physics. Perusing the paper and understanding about 2% of it, I just decided that this is what I would study. I wanted to learn about this “entanglement” business – a key concept of quantum physics. The rest is history.

Can you tell me a bit about your PhD pathway? You were a part of the APS Bridge Program at Indiana University – how did the programme help you?

After deciding to pursue a physics degree, I had to pick an academic institution to get said degree. What was news to me was that, for second baccalaureate degrees, funding at a public university was hard to come by. I was looking for universities with a strong optics programme, having decided that quantum optics was for me.

I learned about the Rose-Hulman Institute of Technology, in Terre Haute, Indiana by searching for optical engineering programmes. What I didn’t know was that, in terms of producing top engineers, you’d be hard pressed to find a finer institution. The same can be said for their pure science disciplines, although those disciplines aren’t usually ranked. I reached out to inquire about enrolment, was invited to visit and fell in love with the campus. I was funded and my physics journey began.

Prior to graduation, I was struggling with most of my grad-school applications being denied. I wasn’t the most solid student at Rose (it’s a rigorous place), but I wasn’t a poorly performing student, either. Enter the APS Bridge Program, which focuses on students who, for whatever reason, were having challenges applying to grad school. The programme funded two years of education, wherein the student could have more exposure to coursework (which was just what I needed) or have more opportunity for research, after which they could achieve a master’s degree and continue to a PhD.

I was accepted at a bridge programme site at Indiana University Bloomington. The additional two years allowed for a repeat of key undergraduate courses in the first year, with the second year filled with grad courses. I continued on and obtained my master’s degree. I decided to leave IU to collaborate with a professor at Louisiana State University (LSU) who I had always wanted to work with and had done prior research with. So I transferred to LSU and obtained my PhD, focusing on high-dimensional orbital angular momentum states of light for fibre-based quantum cryptography and communication protocols. Without the Bridge Program, it’s likely that you might not be reading this article.

You then went on to Louisiana State University where, in 2022, you were the first African American woman to complete a PhD in physics – what was that like?

It’s funny, but at the time, no-one was really talking about this. I think, for the individual who has to face various challenges due to race, sexual orientation and preference, gender, immigration status and the like, you just try to take your classes and do your research. But, just by your existence and certain aspects that may come along with that, you are often faced with a decision to advocate for yourself in a space that historically was not curated with you or your value in mind.

Michelle Lollie playing violin on stage for an audience
Beyond beamlines Alongside her days spent in the lab, Michelle Lollie is a keen violinist. (Courtesy: Samuel Cooper/@photoscoops)

So while no-one was going up and down the halls saying “Hey, look at us, we have five Black students in our department!”, most departments would bend over backwards for those diversity numbers. Note that five Black students in a department of well over 100 is nothing to write home about. It should be an order of magnitude higher, with 20–30 Black students at least. This is the sad state of affairs across physics and other sciences: people get excited about one Black student and think that they’re doing something great. But, once I brought this fact to the attention of those in the front office and my adviser, a bit of talk started. Consequently, and fortuitously, the president of the university happened to visit our lab the fall before my graduation. Someone at that event noticed me, a Black woman in the physics department, and reached out to have me participate in several high-profile opportunities within the LSU community. This sparked more interest in my identity as a Black woman in the field; and it turned out that I was the first Black woman who would be getting a PhD from the department, in 2022. I am happy to report that three more Black women have earned degrees (one master’s in medical physics, and two PhDs in physics) since then.

My family and I were featured on LSU socials for the historic milestone, especially thanks to Mimi LaValle, who is the media relations guru for the LSU Physics and Astronomy department. They even shared my grandmother’s experience as a Black woman growing up in the US during the 1930s, and the juxtaposition of her opportunities versus mine was highlighted. It was a great moment and I’m glad that LSU not only acknowledged this story, but emphasized and amplified it. I will always be grateful that I was able to hand my doctoral degree to my grandmother at graduation. She passed away in August 2024, but was always proud of my achievements. I was just as proud of her, for her determination to survive. Different times indeed.

What are some barriers and challenges you have faced through your education and career, if any?

The barriers have mostly been structural, embedded within the culture and fabric of physics. But this has made my dedication to be successful in the field a more unique and customized experience that only those who can relate to my identity will understand. There is a concerted effort to say that science doesn’t see colour, gender, etc., and so these societal aspects shouldn’t affect change within the field. I’d argue that human beings do science, so it is a decidedly “social” science, which is impacted significantly by culture – past and present. In fact, if we had more actual social scientists doing research on effecting change in the field for us physical scientists, the negative aspects of working in the field – as told by those who have lived experience – would be mitigated and true scientific broadening could be achieved.

What were the pitfalls, or stresses, of following this career random walk?

Other than the internal work of recognizing that, on a daily basis, I have to make space for myself in a field that’s not used to me, there hasn’t been anything of the sort. I have definitely had to advocate for myself and my presence within the field. But I love what I do and that I get to explore the mysteries of quantum physics. So, I’m not going anywhere anytime soon. The more space that I create, others can come in and feel just fine.

I want things to be as comfortable as possible for future generations of Black scientists. I am a Black woman, so I will always advocate for Black people within the space. This is unique to the history of the African Diaspora. I often advocate for those with cross-marginalized identities not within my culture, but no-one else has as much incentive to root for Black people but Black people. I urge everyone to do the same in highlighting those in their respective cultures and identities. If not you, then who?

What were the next steps for you after your PhD – how did you decide between staying in academia or pursuing a role in industry?

I always knew I was going to industry – I was actually surprised to learn that many physics graduates plan to go into academia. I started interviewing shortly before graduation; I knew which companies I had on my radar. I applied to them, received several offers and decided on Quantinuum.

A quantum optics lab bench
Tools of the trade At Quantinuum, Michelle Lollie works on the lasers and optics of quantum computers. (Courtesy: Quantinuum)

You are now an advanced laser scientist with Quantinuum – what does that involve, and what’s a “day in the life” like for you now?

Nowadays, I can be found either doing CAD models of beamlines, or in the lab building said beamlines. This involves a lot of lasers, alignment, testing and validation. It’s so cool to see an optical system that you’ve designed come to life on an optical table. It’s even more satisfying when it is integrated within a full ion-trap system, and it works. I love practical work in the lab – when I have been designing a system for too long, I often say “Okay, I’ve been in front of this screen long enough. Time to go get the goggles and get the hands dirty.”

What do you know today, that you wish you knew when you were starting your career?

Had I known what I would have had to go through, I might not have ever done it. So, the ignorance of my path was actually a plus. I had no idea what this road entailed so, although the journey was a course in who-is-Michelle-going-to-be-101, I would wish for the “ignorance is bliss” state – on any new endeavour, even now. It’s in the unknowing that we learn who we are.

Be direct and succinct, and leave no room for speculation about what you are saying

What’s your advice for today’s students hoping to pursue a career in the quantum sector?

I always highlight what I’ve learned from Garfield Warren, a physics professor at Indiana University, and one of my mentors. He always emphasized learning skills beyond science that you’ll need to be successful. Those who work in physics often lack direct communication skills, and there can be a lot of miscommunication. Be direct and succinct, and leave no room for speculation about what you are saying. This skill is key.

Also, learn the specific tools of your trade. If you’re in optics, for example, learn the ins and outs of how lasers work. If you have opportunities to build laser set-ups, do so. Learn what the knobs do. Determine what it takes for you to be confident that the readout data is what you want. You should understand each and every component that relates to work that you are doing. Learn all that you can for each project that you work on. Employers know that they will need to train you on company-specific tasks, but technical acumen is assumed to a point. Whatever the skills are for your area, the more that you understand the minutiae, the better.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.

Find out more on our quantum channel.

The post From banking to quantum optics: Michelle Lollie’s unique journey appeared first on Physics World.

Thermometer uses Rydberg atoms to make calibration-free measurements

A new way to measure the temperatures of objects by studying the effect of their black-body radiation on Rydberg atoms has been demonstrated by researchers at the US National Institute of Standards and Technology (NIST). The system, which provides a direct, calibration-free measure of temperature based on the fact that all atoms of a given species are identical, has a fractional systematic temperature uncertainty of around 0.6%.

The black-body temperature of an object is defined by the spectrum of the photons it emits. In the laboratory and in everyday life, however, temperature is usually measured by comparison to a reference. “Radiation is inherently quantum mechanical,” says NIST’s Noah Schlossberger, “but if you go to the store and buy a temperature sensor that measures the radiation via some sort of photodiode, the rate of photons converted into some value of temperature that you see has to be calibrated. Usually that’s done using some reference surface that’s held at a constant temperature via some sort of contact thermometer, and that contact thermometer has been calibrated to another contact thermometer – which in some indirect way has been tied into some primary standard at NIST or some other facility that offers calibration services.” However, each step introduces potential error.

This latest work offers a much more direct way of determining temperature. It involves measuring the black-body radiation emitted by an object directly, using atoms as a reference standard. Such a sensor does not need calibration because quantum mechanics dictates that every atom of the same type is identical. In Rydberg atoms the electrons are promoted to highly excited states. This makes the atoms much larger, less tightly bound and more sensitive to external perturbations. As part of an ongoing project studying their potential to detect electromagnetic fields, the researchers turned their attention to atom-based thermometry. “These atoms are exquisitely sensitive to black-body radiation,” explains NIST’s Christopher Holloway, who headed the work.

Packet of rubidium atoms

Central to the new apparatus is a magneto-optical trap inside a vacuum chamber containing a pure rubidium vapour. Every 300 ms, the researchers load a new packet of rubidium atoms into the trap, cool them to around 1 mK and excite them from the 5S energy level to the 32S Rydberg state using lasers. They then allow them to absorb black-body radiation from the surroundings for around 100 μs, causing some of the 32S atoms to change state. Finally, they apply a strong, ramped electric field, ionizing the atoms. “The higher energy states get ripped off easier than the lower energy states, so the electrons that were in each state arrive at the detector at a different time. That’s how we get this readout that tells us the population in each of the states,” explains Schlossberger, the work’s first author. The researchers can use this ratio to infer the spectrum of the black-body radiation absorbed by the atoms and, therefore, the temperature of the black body itself.
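
The physics behind that inversion is that the rate of black-body-driven transitions between Rydberg states scales with the mean photon occupation n̄(ω, T) = 1/(e^(ħω/kT) − 1) at the transition frequency. A toy version of the inference – our sketch, with a made-up transition frequency, not NIST’s actual analysis pipeline – looks like this:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K

def occupation(omega: float, temperature: float) -> float:
    """Mean black-body photon number of a mode at angular frequency omega."""
    return 1.0 / math.expm1(HBAR * omega / (KB * temperature))

def temperature_from_occupation(omega: float, n: float) -> float:
    """Invert the Bose-Einstein distribution to recover the temperature."""
    return HBAR * omega / (KB * math.log1p(1.0 / n))

# Hypothetical Rydberg-Rydberg transition near 100 GHz (illustrative value only)
omega = 2 * math.pi * 100e9
n = occupation(omega, 300.0)                  # occupation at 300 K
print(temperature_from_occupation(omega, n))  # recovers 300.0 K
```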

The researchers calculated the fractional systematic uncertainty of their measurement as 0.006, which corresponds to around 2 K at room temperature. Schlossberger concedes that this sounds relatively unimpressive compared to many commercial thermometers, but he notes that their thermometer measures absolute temperature, not relative temperature. “If I had two skyscrapers next to each other, touching, and they were an inch different in height, you could probably measure that difference to less than a millimetre,” he says, “If I asked you to tell me the total height of the skyscraper, you probably couldn’t.”

One application of their system, the researchers say, could lie in optical clocks, where frequency shifts due to thermal background noise are a key source of uncertainty. At present, researchers have to perform a lot of in situ thermometry to try to infer the black-body radiation experienced by the clock without disturbing the clock itself. Schlossberger says that, in future, one additional laser could potentially allow the creation of Rydberg states in the clock atoms. “It’s sort of designed so that all the hardware is the same as atomic clocks, so without modifying the clock significantly it would tell you the radiation experienced by the same atoms that are used in the clock in the location they’re used.”

The work is described in a paper in Physical Review Research. Atomic physicist Kevin Weatherill of Durham University in the UK says “it’s an interesting paper and I enjoyed reading it”. “The direction of travel is to look for a quantum measurement for temperature – there are a lot of projects going on at NIST and some here in the UK,” he says. He notes, however, that this experiment is highly complex, and says “I think at the moment just measuring the width of an atomic transition in a vapour cell [which is broadened by the Doppler effect as atoms move faster] gives you a better bound on temperature than what’s been demonstrated in this paper.”

The post Thermometer uses Rydberg atoms to make calibration-free measurements appeared first on Physics World.
