
KATRIN sets tighter limit on neutrino mass

Researchers from the Karlsruhe Tritium Neutrino experiment (KATRIN) have announced the most precise upper limit yet on the neutrino’s mass. Thanks to new data and upgraded techniques, the new limit – 0.45 electron volts (eV) at 90% confidence – is half that of the previous tightest constraint, and marks a step toward answering one of particle physics’ longest-standing questions.

Neutrinos are ghostlike particles that barely interact with matter, slipping through the universe almost unnoticed. They come in three types, or flavours: electron, muon, and tau. For decades, physicists assumed all three were massless, but that changed in the late 1990s when experiments revealed that neutrinos can oscillate between flavours as they travel. This flavour-shifting behaviour is only possible if neutrinos have mass.

Although neutrino oscillation experiments confirmed that neutrinos have mass, and showed that the masses of the three flavours are different, they did not divulge the actual scale of these masses. Doing so requires an entirely different approach.

Looking for clues in electrons

In KATRIN’s case, that means focusing on a process called tritium beta decay, where a tritium nucleus (a proton and two neutrons) decays into a helium-3 nucleus (two protons and one neutron) by releasing an electron and an electron antineutrino. Due to energy conservation, the total energy from the decay is shared between the electron and the antineutrino. The neutrino’s mass determines the balance of the split.

“If the neutrino has even a tiny mass, it slightly lowers the energy that the electron can carry away,” explains Christoph Wiesinger, a physicist at the Technical University of Munich, Germany, and a member of the KATRIN collaboration. “By measuring that [electron] spectrum with extreme precision, we can infer how heavy the neutrino is.”

Because the subtle effects of neutrino mass are most visible in decays where the neutrino carries away very little energy (most of it bound up in mass), KATRIN concentrates on measuring electrons that have taken the lion’s share. From these measurements, physicists can calculate neutrino mass without having to detect these notoriously weakly-interacting particles directly.
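As a rough guide to how this works (a simplified, phase-space-only form of the spectrum, not the full analysis used by KATRIN), the rate of electrons detected with energy E near the endpoint E₀ ≈ 18.6 keV of tritium beta decay behaves as

$$\frac{\mathrm{d}N}{\mathrm{d}E} \;\propto\; (E_0 - E)\sqrt{(E_0 - E)^2 - m_\nu^2 c^4}, \qquad E \le E_0 - m_\nu c^2,$$

so a non-zero neutrino mass m_ν both pulls the maximum electron energy below E₀ and distorts the shape of the spectrum in the last few electronvolts – precisely the region KATRIN measures.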

Improvements over previous results

The new neutrino mass limit is based on data taken between 2019 and 2021, with 259 days of operations yielding over 36 million electron measurements. “That’s six times more than the previous result,” Wiesinger says.

Other improvements include better temperature control in the tritium source and a new calibration method using a monoenergetic krypton source. “We were able to reduce background noise rates by a factor of two, which really helped the precision,” he adds.

Keeping track: Laser system for the analysis of the tritium gas composition at KATRIN’s Windowless Gaseous Tritium Source. Improvements to temperature control in this source helped raise the precision of the neutrino mass limit. (Courtesy: Tritium Laboratory, KIT)

At 0.45 eV, the new limit means the neutrino is at least a million times lighter than the electron. “This is a fundamental number,” Wiesinger says. “It tells us that neutrinos are the lightest known massive particles in the universe, and maybe that their mass has origins beyond the Standard Model.”
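The “million times lighter” comparison follows directly from the numbers: the electron’s rest energy is about 511 keV, so

$$\frac{m_e c^2}{m_\nu c^2} \;\gtrsim\; \frac{511\,000\ \text{eV}}{0.45\ \text{eV}} \;\approx\; 1.1 \times 10^{6}.$$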

Despite the new tighter limit, however, definitive answers about the neutrino’s mass are still some ways off. “Neutrino oscillation experiments tell us that the lower bound on the neutrino mass is about 0.05 eV,” says Patrick Huber, a theoretical physicist at Virginia Tech, US, who was not involved in the experiment. “That’s still about 10 times smaller than the new KATRIN limit… For now, this result fits comfortably within what we expect from a Standard Model that includes neutrino mass.”

Model independence

Though Huber emphasizes that there are “no surprises” in the latest measurement, KATRIN has a key advantage over its rivals. Unlike cosmological methods, which infer neutrino mass based on how it affects the structure and evolution of the universe, KATRIN’s direct measurement is model-independent, relying only on energy and momentum conservation. “That makes it very powerful,” Wiesinger argues. “If another experiment sees a measurement in the future, it will be interesting to check if the observation matches something as clean as ours.”

KATRIN’s own measurements are ongoing, with the collaboration aiming for 1000 days of operations by the end of 2025 and a final sensitivity approaching 0.3 eV. Beyond that, the plan is to repurpose the instrument to search for sterile neutrinos – hypothetical heavier particles that don’t interact via the weak force and could be candidates for dark matter.

“We’re testing things like atomic tritium sources and ultra-precise energy detectors,” Wiesinger says. “There are exciting ideas, but it’s not yet clear what the next-generation experiment after KATRIN will look like.”

The research appears in Science.


On the path towards a quantum economy

The high-street bank HSBC has worked with the NQCC, hardware provider Rigetti and the Quantum Software Lab to investigate the advantages that quantum computing could offer for detecting the signs of fraud in transactional data. (Courtesy: Shutterstock/Westend61 on Offset)

Rapid technical innovation in quantum computing has yielded an array of hardware platforms that can run increasingly sophisticated algorithms. In the real world, however, such technical advances will remain little more than a curiosity if they are not adopted by businesses and the public sector to drive positive change. As a result, one key priority for the UK’s National Quantum Computing Centre (NQCC) has been to help companies and other organizations to gain an early understanding of the value that quantum computing can offer for improving performance and enhancing outcomes.

To meet that objective the NQCC has supported several feasibility studies that enable commercial organizations in the UK to work alongside quantum specialists to investigate specific use cases where quantum computing could have a significant impact within their industry. One prime example is a project involving the high-street bank HSBC, which has been exploring the potential of quantum technologies for spotting the signs of fraud in financial transactions. Such fraudulent activity, which affects millions of people every year, now accounts for about 40% of all criminal offences in the UK and in 2023 generated total losses of more than £2.3 bn across all sectors of the economy.

Banks like HSBC currently exploit classical machine learning to detect fraudulent transactions, but these techniques require a large computational overhead to train the models and deliver accurate results. Quantum specialists at the bank have therefore been working with the NQCC, along with hardware provider Rigetti and the Quantum Software Lab at the University of Edinburgh, to investigate the capabilities of quantum machine learning (QML) for identifying the tell-tale indicators of fraud.

“HSBC’s involvement in this project has brought transactional fraud detection into the realm of cutting-edge technology, demonstrating our commitment to pushing the boundaries of quantum-inspired solutions for near-term benefit,” comments Philip Intallura, Group Head of Quantum Technologies at HSBC. “Our philosophy is to innovate today while preparing for the quantum advantage of tomorrow.”

Another study focused on a key problem in the aviation industry that has a direct impact on fuel consumption and the amount of carbon emissions produced during a flight. In this logistical challenge, the aim was to find the optimal way to load cargo containers onto a commercial aircraft. One motivation was to maximize the amount of cargo that can be carried; the other was to balance the weight of the cargo to reduce drag and improve fuel efficiency.

“Even a small shift in the centre of gravity can have a big effect,” explains Salvatore Sinno of technology solutions company Unisys, who worked on the project along with applications engineers at the NQCC and mathematicians at the University of Newcastle. “On a Boeing 747 a displacement of just 75 cm can increase the carbon emissions on a flight of 10,000 miles by four tonnes, and also increases the fuel costs for the airline company.”

A hybrid quantum–classical solution has been used to optimize the configuration of air freight, which can improve fuel efficiency and lower carbon emissions. (Courtesy: Shutterstock/supakitswn)

With such a large number of possible loading combinations, classical computers cannot produce an exact solution for the optimal arrangement of cargo containers. In their project the team improved the precision of the solution by combining quantum annealing with high-performance computing, a hybrid approach that Unisys believes can offer immediate value for complex optimization problems. “We have reached the limit of what we can achieve with classical computing, and with this work we have shown the benefit of incorporating an element of quantum processing into our solution,” explains Sinno.
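The article does not spell out how the loading problem was encoded, but tasks of this kind are typically cast as a quadratic unconstrained binary optimization (QUBO), the form accepted by quantum annealers. A minimal sketch, with illustrative symbols: let the binary variable x_{ij} be 1 if container i (mass m_i) is loaded at position j (a distance d_j from a reference point), and let λ₁, λ₂ be penalty weights. One possible cost function is

$$C(x) \;=\; -\sum_{i,j} m_i\, x_{ij} \;+\; \lambda_1 \sum_i \Big(\sum_j x_{ij} - 1\Big)^{2} \;+\; \lambda_2 \Big(\sum_{i,j} m_i d_j\, x_{ij} - D_{\mathrm{target}}\Big)^{2},$$

where the first term rewards loading as much mass as possible (summed over real positions only), the second enforces one placement decision per container (a dummy “not loaded” position, excluded from the reward and moment terms, absorbs unplaced containers), and the third keeps the weighted moment of the load – and hence the centre of gravity – close to a target value D_target. A similar penalty preventing two containers from sharing a position is omitted here for brevity. Minimizing C(x) on an annealer, with a classical solver refining the result, is the general shape of the hybrid approach described above.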

The HSBC project team also found that a hybrid quantum–classical solution could provide an immediate performance boost for detecting anomalous transactions. In this case, a quantum simulator running on a classical computer was used to run quantum algorithms for machine learning. “These simulators allow us to execute simple QML programmes, even though they can’t be run to the same level of complexity as we could achieve with a physical quantum processor,” explains Marco Paini, the project lead for Rigetti. “These simulations show the potential of these low-depth QML programmes for fraud detection in the near term.”
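Rigetti’s actual programmes are not described in the article, so the following is only an illustrative sketch of what a “low-depth QML” classifier looks like when run on a classical simulator: transaction features are encoded as single-qubit rotation angles, a shallow parameterized circuit is applied, and the expectation value of a Pauli-Z observable is thresholded to flag a transaction as suspicious. All feature values and parameters here are hypothetical, and the simulation uses plain NumPy rather than any particular quantum SDK.

```python
# Hypothetical low-depth QML sketch (not Rigetti's actual programme):
# a two-qubit variational classifier simulated with a classical statevector.
import numpy as np

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def circuit_expectation(features, params):
    """Depth-one circuit: data-encoding RYs, a CNOT, trainable RYs.

    features: two rescaled transaction features used as rotation angles.
    params:   two trainable angles (would normally come from training).
    Returns <Z> on qubit 0, a value in [-1, 1].
    """
    state = np.zeros(4); state[0] = 1.0                     # |00>
    state = kron(ry(features[0]), ry(features[1])) @ state  # encode data
    state = CNOT @ state                                     # entangle
    state = kron(ry(params[0]), ry(params[1])) @ state       # trainable layer
    return float(state @ kron(Z, I2) @ state)

def flag_anomaly(features, params, threshold=0.0):
    """Flag a transaction as suspicious if <Z> falls below the threshold."""
    return circuit_expectation(features, params) < threshold

# Toy usage with made-up, rescaled feature vectors
params = np.array([0.3, -1.1])
print(flag_anomaly(np.array([0.2, 0.5]), params))
print(flag_anomaly(np.array([2.8, 1.9]), params))
```

In a real workflow the parameters would be trained on labelled transactions, and the same shallow circuit could later be executed on quantum hardware instead of the simulator.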

The team also simulated more complex QML approaches using a similar but smaller-scale problem, demonstrating a further improvement in performance. This outcome suggests that running deeper QML algorithms on a physical quantum processor could deliver an advantage for detecting anomalies in larger datasets, even though the hardware does not yet provide the performance needed to achieve reliable results. “This initiative not only showcases the near-term applicability of advanced fraud models, but it also equips us with the expertise to leverage QML methods as quantum computing scales,” comments Intallura.

Indeed, the results obtained so far have enabled the project partners to develop a roadmap that will guide their ongoing development work as the hardware matures. One key insight, for example, is that even a fault-tolerant quantum computer would struggle to process the huge financial datasets produced by a bank like HSBC, since a finite amount of time is needed to run the quantum calculation for each data point. “From the simulations we found that the hybrid quantum–classical solution produces more false positives than classical methods,” says Paini. “One approach we can explore would be to use the simulations to flag suspicious transactions and then run the deeper algorithms on a quantum processor to analyse the filtered results.”

This particular project also highlighted the need for agreed protocols to navigate the strict rules on data security within the banking sector. For this project the HSBC team was able to run the QML simulations on its existing computing infrastructure, avoiding the need to share sensitive financial data with external partners. In the longer term, however, banks will need reassurance that their customer information can be protected when processed using a quantum computer. Anticipating this need, the NQCC has already started to work with regulators such as the Financial Conduct Authority, which is exploring some of the key considerations around privacy and data security. That initial work is feeding into international initiatives that are starting to consider the regulatory frameworks for using quantum computing within the financial sector.

For the cargo-loading project, meanwhile, Sinno says that an important learning point has been the need to formulate the problem in a way that can be tackled by the current generation of quantum computers. In practical terms that means defining constraints that reduce the complexity of the problem, but that still reflect the requirements of the real-world scenario. “Working with the applications engineers at the NQCC has helped us to understand what is possible with today’s quantum hardware, and how to make the quantum algorithms more viable for our particular problem,” he says. “Participating in these studies is a great way to learn and has allowed us to start using these emerging quantum technologies without taking a huge risk.”

Indeed, one key feature of these feasibility studies is the opportunity they offer for different project partners to learn from each other. Each project includes an end-user organization with a deep knowledge of the problem, quantum specialists who understand the capabilities and limitations of present-day solutions, and academic experts who offer an insight into emerging theoretical approaches as well as methodologies for benchmarking the results. The domain knowledge provided by the end users is particularly important, says Paini, to guide ongoing development work within the quantum sector. “If we only focused on the hardware for the next few years, we might come up with a better technical solution but it might not address the right problem,” he says. “We need to know where quantum computing will be useful, and to find that convergence we need to develop the applications alongside the algorithms and the hardware.”

Another major outcome from these projects has been the ability to make new connections and identify opportunities for future collaborations. As a national facility, the NQCC has played an important role in providing networking opportunities that bring diverse stakeholders together, creating a community of end users and technology providers, and supporting project partners with an expert and independent view of emerging quantum technologies. The NQCC has also helped the project teams to share their results more widely, generating positive feedback from the wider community that has already sparked new ideas and interactions.

“We have been able to network with start-up companies and larger enterprise firms, and with the NQCC we are already working with them to develop some proof-of-concept projects,” says Sinno. “Having access to that wider network will be really important as we continue to develop our expertise and capability in quantum computing.”


Microwaves slow down chemical reactions at low temperatures

In new experiments, researchers in Switzerland have tested models of how microwaves affect low-temperature chemical reactions between ions and atoms. Using an innovative setup, Valentina Zhelyazkova and colleagues at ETH Zurich showed for the first time how the application of microwave pulses can slow down reaction rates via nonthermal mechanisms.

Physicists have been studying chemical reactions between ions and neutral molecules for some time. At close to room temperature, classical models can closely predict how the electric fields emanating from ions will induce dipoles in nearby neutral molecules, allowing researchers to calculate these reaction rates with impressive accuracy. Yet as temperatures drop close to absolute zero, a wide array of more complex effects come into play, which have gradually been incorporated into the latest theoretical models.
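The classical picture referred to here is the Langevin capture model, in which the ion’s electric field polarizes the neutral molecule and the capture rate coefficient turns out to be independent of temperature:

$$k_{\mathrm{L}} \;=\; 2\pi q \sqrt{\frac{\alpha}{\mu}},$$

where q is the ion’s charge, α the polarizability of the neutral molecule and μ the reduced mass of the colliding pair (in Gaussian units). This is a standard textbook expression, quoted here for context; the departures from it at low temperature are exactly what the newer models and the ETH experiments probe.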

“At low temperatures, models of reactivity must include the effects of the permanent electric dipoles and quadrupole moments of the molecules, the effect of their vibrational and rotational motion,” Zhelyazkova explains. “At extremely low temperatures, even the quantum-mechanical wave nature of the reactants must be considered.”

Rigorous experiments

Although these low-temperature models have steadily improved in recent years, the ability to put them to the test through rigorous experiments has so far been hampered by external factors.

In particular, stray electric fields in the surrounding environment can heat the ions and molecules, so that any important quantum effects are quickly drowned out by noise. “Consequently, it is only in the past few years that experiments have provided information on the rates of ion–molecule reactions at very low temperatures,” Zhelyazkova explains.

In their study, Zhelyazkova’s team improved on these past experiments with an innovative approach that cools the internal motions of the molecules while shielding the reacting ions from heating by stray electric fields. Their experiment involved a reaction between positively-charged helium ions and neutral molecules of carbon monoxide (CO). This creates neutral atoms of helium and oxygen, and a positively-charged carbon atom.

To initiate the reaction, the researchers created separate but parallel supersonic beams of helium and CO that were combined in a reaction cell. “In order to overcome the problem of heating the ions by stray electric fields, we study the reactions within the distant orbit of a highly excited electron, which makes the overall system electrically neutral without affecting the ion–molecule reaction taking place within the electron orbit,” explains ETH’s Frédéric Merkt.

Giant atoms

In such a “Rydberg atom”, the highly excited electron is some distance from the helium nucleus and its other electron. As a result, a Rydberg helium atom can be considered an ion with a “spectator” electron, which has little influence over how the reaction unfolds. To ensure the best possible accuracy, “we use a printed circuit board device with carefully designed surface electrodes to deflect one of the two beams,” explains ETH’s Fernanda Martins. “We then merged this beam with the other, and controlled the relative velocity of the two beams.”
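The “distant orbit” picture can be made quantitative with the usual Rydberg scaling, in which the orbital radius grows as the square of the principal quantum number n:

$$r_n \;\approx\; n^{2} a_0, \qquad a_0 \approx 5.3\times 10^{-11}\ \mathrm{m}.$$

For n of a few tens (the specific value used in the experiment is not quoted here), the spectator electron orbits tens of nanometres out – far beyond the ångström-scale region where the helium ion and the CO molecule actually react.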

Altogether, this approach enabled the researchers to cool the molecules internally to temperatures below 10 K – where their quantum effects can dominate over externally induced noise. With this setup, Zhelyazkova, Merkt, Martins, and their colleagues could finally put the latest theoretical models to the test.

According to the latest low-temperature models, the rate of the CO–helium ion reaction should be determined by the quantized rotational states of the CO molecule – whose energies lie within the microwave range. In this case, the team used microwave pulses to put the CO into different rotational states, allowing them to directly probe their influence on the overall reaction rate.

Three important findings

Altogether, their experiment yielded three important findings. First, it confirmed that the reaction rate varies depending on the rotational state of the CO molecule. Second, it showed that this reactivity can be modified by using a short microwave pulse to excite the CO molecule from its ground state to its first excited state – with the first excited state being less reactive than the ground state.

The third and most counterintuitive finding is that microwaves can slow down the reaction rate, via mechanisms unrelated to the heat they impart on the molecules absorbing them. “In most applications of microwaves in chemical synthesis, the microwaves are used as a way to thermally heat the molecules up, which always makes them more reactive,” Zhelyazkova says.

Building on the success of their experimental approach, the team now hopes to investigate these nonthermal mechanisms in more detail – with the aim of shedding new light on how microwaves can influence chemical reactions via effects other than heating. In turn, their results could ultimately pave the way for advanced new techniques for fine-tuning the rate of reactions between ions and neutral molecules.

The research is described in Physical Review Letters.


Robert P Crease lifts the lid on 25 years as a ‘science critic’

A quarter of a century ago, in May 2000, I published an article entitled “Why science thrives on criticism”. The article, which ran to slightly over a page in Physics World magazine, was the first in a series of columns called Critical Point. Periodicals, I said, have art and music critics as well as sports and political commentators, and book and theatre reviewers too. So why shouldn’t Physics World have a science critic?

The implication that I had a clear idea of the “critical point” for this series was not entirely accurate. As the years go by, I have found myself improvising, inspired by politics, books, scientific discoveries, readers’ thoughts, editors’ suggestions and more. If there is one common theme, it’s that science is like a workshop – or a series of loosely related workshops – as I argued in The Workshop and the World, a book that sprang from my columns.

Workshops are controlled environments, inside which researchers can stage and study special things – elementary particles, chemical reactions, plant uptakes of nutrients – that appear rarely or in a form difficult to study in the surrounding world. Science critics do not participate in the workshops themselves or even judge their activities. What they do is evaluate how workshops and worlds interact.

This can happen in three ways.

Critical triangle

First is to explain why what’s going on inside the workshops matters to outsiders. Sometimes, those activities can be relatively simple to describe, which leads to columns concerning all manner of everyday activities. I have written, for example, about the physics of coffee and breadmaking. I’ve also covered toys, tops, kaleidoscopes, glass and other things that all of us – physicists and non-physicists alike – use, value and enjoy.

Sometimes I draw out more general points about why those activities are important. Early on, I invited readers to nominate their most beautiful experiments in physics. (Spoiler alert: the clear winner was the double-slit experiment with electrons.) I later did something similar about the cultural impact of equations – inviting readers to pick their favourites and reporting on their results (second spoiler alert: Maxwell’s equations came top). I also covered readers’ most-loved literature about laboratories.

When viewing science as workshops, a second role is to explain why what’s outside the workshops matters to insiders. That’s because physicists often engage in activities that might seem inconsequential to them – they’re “just what the rest of the world does” – yet are an intrinsic part of the practice of physics. I’ve covered, for example, physicists taking out patents, creating logos, designing lab architecture, taking holidays, organizing dedications, going into retirement and writing memorials for the deceased.

Such activities I term “black elephants”. That’s because they’re a cross between things physicists don’t want to talk about (“elephants in the room”) and things that force them to renounce cherished notions (just as “black swans” disprove that “all swans are white”).

A third role of a science critic is to explain things that matter both inside and outside the workshop. I’m thinking of things like competition, leadership, trust, surprise, workplace training courses, cancel culture and even jokes and funny tales. Interpretations of the meaning of quantum mechanics, such as “QBism”, which I covered both in 2019 and 2022, are an ongoing interest. That’s because they’re relevant both to the structure of physics and to philosophy as they disrupt notions of realism, objectivity, temporality and the scientific method.

Being critical

The term “critic” may suggest someone with a congenitally negative outlook, but that’s wrong. My friend Fred Cohn, a respected opera critic, told me that, in a conversation after a concert, he criticized the performance of the singer Luciano Pavarotti. His remark provoked a woman to shout angrily at him: “Could you do better?” Of course not! It’s the critic’s role to evaluate performances of an activity, not to perform the activity oneself.

illustration of a person sat at a desk using a typewriter
Working practices In his first Critical Point column for Physics World, philosopher and historian of science Robert P Crease interrogated the role of the science critic. (Courtesy: iStock/studiostockart)

Having said that, sometimes a critic must be critical to be honest. In particular, I hate it when scientists try to delegitimize the experience of non-scientists by saying, for example, that “time does not exist”. Or when they pretend that they see not rainbows but wavelengths of light, or that they see not sunrises or the moving plane of a Foucault pendulum but the Earth spinning. Comments like that turn non-scientists off science by making it seem elitist and other-worldly. It’s what I call “scientific gaslighting”.

Most of all, I hate it when scientists pontificate that philosophy is foolish or worthless, especially when it’s the likes of Steven Pinker, who ought to know better. Writing in Nature (518 300), I once criticized the great theoretical physicist Steven Weinberg, who I counted as a friend, for taking a complex and multivalent text, plucking out a single line, and misreading it as if the line were from a physics text.

The text in question was Plato’s Phaedo, where Socrates expresses his disappointment with his fellow philosopher Anaxagoras for giving descriptions of heavenly bodies “in purely physical terms, without regard to what is best”. Weinberg claimed this statement meant that Socrates “was not very interested in natural science”. Nothing could be further from the truth.

At that moment in the Phaedo, Socrates is recounting his intellectual autobiography. He has just come to the point where, as a youth, he was entranced by materialism and was eager to hear Anaxagoras’s opposing position. When Anaxagoras promised to describe the heavens both mechanically and as the product of a wise and divine mind but could do only the former, Socrates says he was disappointed.

Weinberg’s jibe ignores the context. Socrates is describing how he had once embraced Anaxagoras’s view of a universe ruled by a divine mind but later rejected that view. As an adult, Socrates learned to examine hypotheses and other claims by putting them to the test, just as modern-day scientists do. Weinberg was misrepresenting Socrates by attributing to him a position that Socrates himself later abandoned.

The critical point of the critical point

Ultimately, the “critical point” of my columns over the last 25 years has been to provoke curiosity and excitement about what philosophers, historians and sociologists do for science. I’ve also wanted to raise awareness that these fields are not just fripperies but essential if we are to fully understand and protect scientific activity.

As I have explained several times – especially in the wake of the US shutting its High Flux Beam Reactor and National Tritium Labeling Facility – scientists need to understand and relate to the surrounding world with the insight of humanities scholars. Because if they don’t, they are in danger of losing their workshops altogether.


Helium nanobubble measurements shed light on origins of heavy elements in the universe

New measurements by physicists from the University of Surrey in the UK have shed fresh light on where the universe’s heavy elements come from. The measurements, which were made by smashing high-energy protons into a uranium target to generate strontium ions, then accelerating these ions towards a second, helium-filled target, might also help improve nuclear reactors.

The origin of the elements that follow iron in the periodic table is one of the biggest mysteries in nuclear astrophysics. As Surrey’s Matthew Williams explains, the standard picture is that these elements were formed when other elements captured neutrons, then underwent beta decay. The two ways this can happen are known as the rapid (r) and slow (s) processes.

The s-process occurs in the cores of stars and is relatively well understood. The r-process is comparatively mysterious. It occurs during violent astrophysical events such as certain types of supernovae and neutron star mergers that create an abundance of free neutrons. In these neutron-rich environments, atomic nuclei essentially capture neutrons before the neutrons can turn into protons via beta-minus decay, which occurs when a neutron emits an electron and an antineutrino.

From the night sky to the laboratory

One way of studying the r-process is to observe older stars. “Studies on heavy element abundance patterns in extremely old stars provide important clues here because these stars formed at times too early for the s-process to have made a significant contribution,” Williams explains. “This means that the heavy element pattern in these old stars may have been preserved from material ejected by prior extreme supernovae or neutron star merger events, in which the r-process is thought to happen.”

Recent observations of this type have revealed that the r-process is not necessarily a single scenario with a single abundance pattern. It may also have a “weak” component that is responsible for making elements with atomic numbers ranging from 37 (rubidium) to 47 (silver), without getting all the way up to the heaviest elements such as gold (atomic number 79) or actinides like thorium (90) and uranium (92).

This weak r-process could occur in a variety of situations, Williams explains. One scenario involves radioactive isotopes (that is, those with a few more neutrons than their stable counterparts) forming in hot neutrino-driven winds streaming from supernovae. This “flow” of nucleosynthesis towards higher neutron numbers is caused by processes known as (alpha,n) reactions, which occur when a radioactive isotope fuses with a helium nucleus and spits out a neutron. “These reactions impact the final abundance pattern before the neutron flux dissipates and the radioactive nuclei decay back to stability,” Williams says. “So, to match predicted patterns to what is observed, we need to know how fast the (alpha,n) reactions are on radioactive isotopes a few neutrons away from stability.”
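In nuclear shorthand, an (α,n) reaction on a nucleus with mass number A and atomic number Z reads

$${}^{A}_{Z}\mathrm{X} \;+\; {}^{4}_{2}\mathrm{He} \;\rightarrow\; {}^{A+3}_{Z+2}\mathrm{Y} \;+\; n,$$

so each capture of a helium nucleus nudges the product two places up the periodic table while releasing a neutron that can feed further captures.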

The 94Sr(alpha,n)97Zr reaction

To obtain this information, Williams and colleagues studied a reaction in which radioactive strontium-94 absorbs an alpha particle (a helium nucleus), then emits a neutron and transforms into zirconium-97. To produce the radioactive 94Sr beam, they fired high-energy protons at a uranium target at TRIUMF, the Canadian national accelerator centre. Using lasers, they selectively ionized and extracted strontium from the resulting debris before selecting the 94Sr ions with a magnetic spectrometer.

The team then accelerated a beam of these 94Sr ions to energies representative of collisions that would happen when a massive star explodes as a supernova. Finally, they directed the beam onto a nanomaterial target made of a silicon thin film containing billions of small nanobubbles of helium. This target was made by researchers at the Materials Science Institute of Seville (CSIC) in Spain.

“This thin film crams far more helium into a small target foil than previous techniques allowed, thereby enabling the measurement of helium burning reactions with radioactive beams that characterize the weak r-process,” Williams explains.

To identify the 94Sr(alpha,n)97Zr reactions, the researchers used a mass spectrometer to select for 97Zr while simultaneously using an array of gamma-ray detectors around the target to look for the gamma rays it emits. When they saw both a heavy ion with an atomic mass of 97 and a 97Zr gamma ray, they knew they had identified the reaction of interest. In doing so, Williams says, they were able to measure the probability that this reaction occurs at the energies and temperatures present in supernovae.

Williams thinks that scientists should be able to measure many more weak r-process reactions using this technology. This should help them constrain where the weak r-process comes from. “Does it happen in supernovae winds? Or can it happen in a component of ejected material from neutron star mergers?” he asks.

As well as shedding light on the origins of heavy elements, the team’s findings might also help us better understand how materials respond to the high radiation environments in nuclear reactors. “By updating models of how readily nuclei react, especially radioactive nuclei, we can design components for these reactors that will operate and last longer before needing to be replaced,” Williams says.

The work is detailed in Physical Review Letters.


Schrödinger cat states like it hot

Superpositions of quantum states known as Schrödinger cat states can be created in “hot” environments with temperatures up to 1.8 K, say researchers in Austria and Spain. By reducing the restrictions involved in obtaining ultracold temperatures, the work could benefit fields such as quantum computing and quantum sensing.

In 1935, Erwin Schrödinger used a thought experiment now known as “Schrödinger’s cat” to emphasize what he saw as a problem with some interpretations of quantum theory. His gedankenexperiment involved placing a quantum system (a cat in a box with a radioactive sample and a flask of poison) in a state that is a superposition of two states (“alive cat” if the sample has not decayed and “dead cat” if it has). These superposition states are now known as Schrödinger cat states (or simply cat states) and are useful in many fields, including quantum computing, quantum networks and quantum sensing.

Creating a cat state, however, has typically required quantum particles to be in their ground state. This, in turn, means cooling them to extremely low temperatures. Even marginally higher temperatures were thought to destroy the fragile nature of these states, rendering them useless for applications. But the need for ultracold temperatures comes with its own challenges, as it restricts the range of possible applications and hinders the development of large-scale systems such as powerful quantum computers.

Cat on a hot tin…microwave cavity?

The new work, which was carried out by researchers at the University of Innsbruck and IQOQI in Austria together with colleagues at the ICFO in Spain, challenges the idea that ultralow temperatures are a must for generating cat states. Instead of starting from the ground state, they used thermally excited states to show that quantum superpositions can exist at temperatures of up to 1.8 K – an environment that might as well be an oven in the quantum world.

Team leader Gerhard Kirchmair, a physicist at the University of Innsbruck and the IQOQI, says the study evolved from one of those “happy accidents” that characterize work in a collaborative environment. During a coffee break with a colleague, he realized he was well-equipped to prove the hypothesis of another colleague, Oriol Romero-Isart, who had shown theoretically that cat states can be generated out of a thermal state.

The experiment involved creating cat states inside a microwave cavity that acts as a quantum harmonic oscillator. This cavity is coupled to a superconducting transmon qubit that behaves as a two-level system where the superposition is generated. While the overall setup is cooled to 30 mK, the cavity mode itself is heated by equilibrating it with amplified Johnson-Nyquist noise from a resistor, making it 60 times hotter than its environment.

To establish the existence of quantum correlations at this higher temperature, the team directly measured the Wigner functions of the states. Doing so revealed the characteristic interference patterns of Schrödinger cat states.

Benefits for quantum sensing and error correction

According to Kirchmair, being able to realize cat states without ground-state cooling could bring benefits for quantum sensing. The mechanical oscillator systems used to sense acceleration or force, for example, are normally cooled to the ground state to achieve the necessary high sensitivity, but such extreme cooling may not be necessary. He adds that quantum error correction schemes could also benefit, as they rely on being able to create cat states reliably; the team’s work shows that a residual thermal population places fewer limitations on this than previously thought.

“For next steps we will use the system for what it was originally designed, i.e. to mediate interactions between multiple qubits for novel quantum gates,” he tells Physics World.

Yiwen Chu, a quantum physicist from ETH Zürich in Switzerland who was not involved in this research, praises the “creativeness of the idea”. She describes the results as interesting and surprising because they seem to counter the common view that lack of purity in a quantum state degrades quantum features. She also agrees that the work could be important for quantum sensing, adding that many systems – including some more suited for sensing – are difficult to prepare in the ground state.

However, Chu notes that, for reasons stemming from the system’s parameters and the protocols the team used to generate the cat states, it should be possible to cool this particular system very efficiently to the ground state. This, she says, somewhat diminishes the argument that the method will be useful for systems where this isn’t the case. “However, these parameters and the protocols they showed might not be the only way to prepare such states, so on a fundamental level it is still very interesting,” she concludes.


Very high-energy electrons could prove optimal for FLASH radiotherapy

Electron therapy has long played an important role in cancer treatments. Electrons with energies of up to 20 MeV can treat superficial tumours while minimizing the dose delivered to underlying tissues; they are also ideal for performing total skin therapy and intraoperative radiotherapy. The limited penetration depth of such low-energy electrons, however, restricts the range of tumour sites that they can treat. And as photon-based radiotherapy technology continues to progress, electron therapy has somewhat fallen out of fashion.

That could all be about to change with the introduction of radiation treatments based on very high-energy electrons (VHEEs). Once realised in the clinic, VHEEs – with energies from 50 up to 400 MeV – will deliver highly penetrating, easily steerable, conformal treatment beams with the potential to enable emerging techniques such as FLASH radiotherapy. French medical technology company THERYQ is working to make this opportunity a reality.

Therapeutic electron beams are produced using radio frequency (RF) energy to accelerate electrons within a vacuum cavity. An accelerator just over 1 m in length can boost electrons to energies of about 25 MeV – corresponding to a tissue penetration depth of a few centimetres. It’s possible to create higher energy beams by simply daisy chaining additional vacuum chambers. But such systems soon become too large and impractical for clinical use.

THERYQ is focusing on a totally different approach to generating VHEE beams. “In an ideal case, these accelerators allow you to reach energy transfers of around 100 MeV/m,” explains THERYQ’s Sébastien Curtoni. “The challenge is to create a system that’s as compact as possible, closer to the footprint and cost of current radiotherapy machines.”
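Those two gradients set the scale of the engineering challenge. As a back-of-envelope comparison (ignoring focusing magnets and the rest of the beamline), a 150 MeV beam would need roughly

$$L \;\approx\; \frac{150\ \mathrm{MeV}}{100\ \mathrm{MeV/m}} \;=\; 1.5\ \mathrm{m}$$

of accelerating structure at the ~100 MeV/m figure Curtoni mentions, compared with about 6 m at the roughly 25 MeV per metre implied by the 1 m, 25 MeV example above.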

Working in collaboration with CERN, THERYQ is aiming to modify CERN’s Compact Linear Collider technology for clinical applications. “We are adapting the CERN technology, which was initially produced for particle physics experiments, to radiotherapy,” says Curtoni. “There are definitely things in this design that are very useful for us and other things that are difficult. At the moment, this is still in the design and conception phase; we are not there yet.”

VHEE advantages

The higher energy of VHEE beams provides sufficient penetration to treat deep tumours, with the dose peak region extending up to 20–30 cm in depth for parallel (non-divergent) beams using energy levels of 100–150 MeV (for field sizes of 10 x 10 cm or above). And in contrast to low-energy electrons, which have significant lateral spread, VHEE beams have extremely narrow penumbra with sharp beam edges that help to create highly conformal dose distributions.

“Electrons are extremely light particles and propagate through matter in very straight lines at very high energies,” Curtoni explains. “If you control the initial direction of the beam, you know that the patient will receive a very steep and well defined dose distribution and that, even for depths above 20 cm, the beam will remain sharp and not spread laterally.”

Electrons are also relatively insensitive to tissue inhomogeneities, such as those encountered as the treatment beam passes through different layers of muscle, bone, fat or air. “VHEEs have greater robustness against density variations and anatomical changes,” adds THERYQ’s Costanza Panaino. “This is a big advantage for treatments in locations where there is movement, such as the lung and pelvic areas.”

It’s also possible to manipulate VHEEs via electromagnetic scanning. Electrons have a charge-to-mass ratio roughly 1800 times higher than that of protons, meaning that they can be steered with a much weaker magnetic field than required for protons. “As a result, the technology that you are building has a smaller footprint and the possibility of costing less,” Panaino explains. “This is extremely important because the cost of building a proton therapy facility is prohibitive for some countries.”
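The factor of roughly 1800 is simply the proton-to-electron rest-mass ratio:

$$\frac{m_p}{m_e} \;=\; \frac{938.3\ \mathrm{MeV}/c^{2}}{0.511\ \mathrm{MeV}/c^{2}} \;\approx\; 1836.$$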

Enabling FLASH

In addition to expanding the range of clinical indications that can be treated with electrons, VHEE beams can also provide a tool to enable the emerging – and potentially game changing – technique known as FLASH radiotherapy. By delivering therapeutic radiation at ultrahigh dose rates (higher than 100 Gy/s), FLASH vastly reduces normal tissue toxicity while maintaining anti-tumour activity, potentially minimizing harmful side-effects.

The recent interest in the FLASH effect began back in 2014 with the report of a differential response between normal and tumour tissue in mice exposed to high dose-rate, low-energy electrons. Since then, most preclinical FLASH studies have used electron beams, as did the first patient treatment in 2019 – a skin cancer treatment at Lausanne University Hospital (CHUV) in Switzerland, performed with the Oriatron eRT6 prototype from PMB-Alcen, the French company from which THERYQ originated.

FLASH radiotherapy is currently being used in clinical trials with proton beams, as well as with low-energy electrons, where it remains intrinsically limited to superficial treatments. Treating deep-seated tumours with FLASH requires more highly penetrating beams. And while the most obvious option would be to use photons, it’s extremely difficult to produce an X-ray beam with a high enough dose rate to induce the FLASH effect without excessive heat generation destroying the conversion target.

“It’s easier to produce a high dose-rate electron beam for FLASH than trying to [perform FLASH] with X-rays, as you use the electron beam directly to treat the patient,” Curtoni explains. “The possibility to treat deep-seated tumours with high-energy electron beams compensates for the fact that you can’t use X-rays.”

Panaino points out that in addition to high dose rates, FLASH radiotherapy also relies on various interdependent parameters. “Ideally, to induce the FLASH effect, the beam should be pulsed at a frequency of about 100 Hz, the dose-per-pulse should be 1 Gy or above, and the dose rate within the pulse should be higher than 10⁶ Gy/s,” she explains.
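Taking those example figures at face value (a back-of-envelope reading, not a specification of any particular machine), a 1 Gy pulse delivered at an intra-pulse dose rate of 10⁶ Gy/s lasts about

$$t_{\mathrm{pulse}} \;\approx\; \frac{1\ \mathrm{Gy}}{10^{6}\ \mathrm{Gy/s}} \;=\; 1\ \mu\mathrm{s},$$

and repeating such pulses at 100 Hz gives a time-averaged dose rate of about 1 Gy × 100 Hz = 100 Gy/s – the ultrahigh dose-rate regime quoted above.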


Into the clinic

THERYQ is using its VHEE expertise to develop a clinical FLASH radiotherapy system called FLASHDEEP, which will use electrons at energies of 100 to 200 MeV to treat tumours at depths of up to 20 cm. The first FLASHDEEP systems will be installed at CHUV (which is part of a consortium with CERN and THERYQ) and at the Gustave Roussy cancer centre in France.

“We are trying to introduce FLASH into the clinic, so we have a prototype FLASHKNiFE machine that allows us to perform low-energy, 6 and 9 MeV, electron therapy,” says Charlotte Robert, head of the medical physics department research group at Gustave Roussy. “The first clinical trials using low-energy electrons are all on skin tumours, aiming to show that we can safely decrease the number of treatment sessions.”

While these initial studies are limited to skin lesions, clinical implementation of the FLASHDEEP system will extend the benefits of FLASH to many more tumour sites. Robert predicts that VHEE-based FLASH will prove most valuable for treating radioresistant cancers that cannot currently be cured. The rationale is that FLASH’s ability to spare normal tissue will allow delivery of higher target doses without increasing toxicity.

“You will not use this technology for diseases that can already be cured, at least initially,” she explains. “The first clinical trial, I’m quite sure, will be either glioblastoma or pancreatic cancers that are not effectively controlled today. If we can show that VHEE FLASH can spare normal tissue more than conventional radiotherapy can, we hope this will have a positive impact on lesion response.”

“There are a lot of technological challenges around this technology and we are trying to tackle them all,” Curtoni concludes. “The ultimate goal is to produce a VHEE accelerator with a very compact beamline that makes this technology and FLASH a reality for a clinical environment.”


Tiny sensor creates a stable, wearable brain–computer interface

Brain–computer interfaces (BCIs) enable the flow of information between the brain and an external device such as a computer, smartphone or robotic limb. Applications range from use in augmented and virtual reality (AR and VR), to restoring function to people with neurological disorders or injuries.

Electroencephalography (EEG)-based BCIs use sensors on the scalp to noninvasively record electrical signals from the brain and decode them to determine the user’s intent. Currently, however, such BCIs require bulky, rigid sensors that prevent use during movement and don’t work well with hair on the scalp, which affects the skin–electrode impedance. A team headed up at Georgia Tech’s WISH Center has overcome these limitations by creating a brain sensor that’s small enough to fit between strands of hair and is stable even while the user is moving.

“This BCI system can find wide applications. For example, we can realize a text spelling interface for people who can’t speak,” says W Hong Yeo, Harris Saunders Jr Professor at Georgia Tech and director of the WISH Center, who co-led the project with Tae June Kang from Inha University in Korea. “For people who have movement issues, this BCI system can offer connectivity with human augmentation devices, a wearable exoskeleton, for example. Then, using their brain signals, we can detect the user’s intentions to control the wearable system.”

A tiny device

The microscale brain sensor comprises a cross-shaped structure of five microneedle electrodes, with sharp tips (less than 30°) that penetrate the skin easily with nearly pain-free insertion. The researchers used UV replica moulding to create the array, followed by femtosecond laser cutting to shape it to the required dimensions – just 850 x 1000 µm – to fit into the space between hair follicles. They then coated the microsensor with a highly conductive polymer (PEDOT:Tos) to enhance its electrical conductivity.

Between the hairs The size and lightweight design of the sensor significantly reduces motion artefacts. (Courtesy: W Hong Yeo)

The microneedles capture electrical signals from the brain and transmit them along ultrathin serpentine wires that connect to a miniaturized electronics system on the back of the neck. The serpentine interconnector stretches as the skin moves, isolating the microsensor from external vibrations and preventing motion artefacts. The miniaturized circuits then wirelessly transmit the recorded signals to an external system (AR glasses, for example) for processing and classification.

Yeo and colleagues tested the performance of the BCI using three microsensors inserted into the scalp over the occipital lobe (the brain’s visual processing centre). The BCI exhibited excellent stability, offering high-quality measurement of neural signals – steady-state visual evoked potentials (SSVEPs) – for up to 12 h, while maintaining low contact impedance density (0.03 kΩ/cm²).

The team also compared the quality of EEG signals measured using the microsensor-based BCI with those obtained from conventional gold-cup electrodes. Participants wearing both sensor types closed and opened their eyes while standing, walking or running.

With the participant stood still, both electrode types recorded stable EEG signals, with an increased amplitude upon closing the eyes, due to the rise in alpha wave power. During motion, however, the EEG time series recorded with the conventional electrodes showed noticeable fluctuations. The microsensor measurements, on the other hand, exhibited minimal fluctuations while walking and significantly fewer fluctuations than the gold-cup electrodes while running.

Overall, the alpha wave power recorded by the microsensors during eye-closing was higher than that of the conventional electrode, which could not accurately capture EEG signals while the user was running. The microsensors only exhibited minor motion artefacts, with little to no impact on the EEG signals in the alpha band, allowing reliable data extraction even during excessive motion.

Real-world scenario

Next, the team showed how the BCI could be used within everyday activities – such as making calls or controlling external devices – that require a series of decisions. The BCI enables a user to make these decisions using their thoughts, without needing physical input such as a keyboard, mouse or touchscreen. And the new microsensors free the user from environmental and movement constraints.

The researchers demonstrated this approach in six subjects wearing AR glasses and a microsensor-based EEG monitoring system. They performed experiments with the subjects standing, walking or running on a treadmill, with two distinct visual stimuli from the AR system used to induce SSVEP responses. Using a train-free SSVEP classification algorithm, the BCI determined which stimulus the subject was looking at with a classification accuracy of 99.2%, 97.5% and 92.5%, while standing, walking and running, respectively.
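The article does not name the train-free algorithm, but canonical correlation analysis (CCA) is the standard training-free approach to SSVEP classification, so a sketch along those lines conveys the idea: for each candidate stimulus frequency, the multichannel EEG segment is correlated against sine and cosine reference signals at that frequency and its harmonics, and the frequency with the highest canonical correlation is taken as the user’s selection. The channel count, sampling rate and frequencies below are illustrative, not the values used in the study.

```python
# CCA-based SSVEP classification sketch; the study's actual train-free
# algorithm is not specified here, and all numerical values are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250            # sampling rate in Hz (assumed)
HARMONICS = 2       # number of harmonics in the reference set

def reference_signals(freq, n_samples, fs=FS, harmonics=HARMONICS):
    """Sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)               # shape (n_samples, 2*harmonics)

def cca_correlation(eeg, refs):
    """Largest canonical correlation between EEG channels and the references."""
    cca = CCA(n_components=1)
    cca.fit(eeg, refs)
    u, v = cca.transform(eeg, refs)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

def classify_ssvep(eeg, candidate_freqs):
    """Pick the stimulus frequency whose references best match the EEG.

    eeg: array of shape (n_samples, n_channels), e.g. the three microsensors.
    """
    scores = [cca_correlation(eeg, reference_signals(f, eeg.shape[0]))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Toy usage with synthetic data: a noisy 12 Hz SSVEP response on 3 channels
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
eeg = np.column_stack([np.sin(2 * np.pi * 12 * t + p) for p in (0.0, 0.4, 0.8)])
eeg += 0.5 * rng.standard_normal(eeg.shape)
print(classify_ssvep(eeg, candidate_freqs=[10.0, 12.0]))   # expect 12.0
```

Because the references are generated analytically, no per-user calibration data are needed, which is what makes this style of classifier “train-free”.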

The team also developed an AR-based video call system controlled by EEG, which allows users to manage video calls (rejecting, answering and ending) with their thoughts, demonstrating its use during scenarios such as ascending and descending stairs and navigating hallways.

“By combining BCI and AR, this system advances communication technology, offering a preview of the future of digital interactions,” the researchers write. “Additionally, this system could greatly benefit individuals with mobility or dexterity challenges, allowing them to utilize video calling features without physical manipulation.”

The microsensor-based BCI is described in Proceedings of the National Academy of Sciences.


How to Watch the Lyrids Meteor Shower

The second major meteor shower of the year starts today and peaks on the night of April 21–22. Here’s everything you need to know to watch it and the many other showers that will appear in 2025.
