Robert P Crease lifts the lid on 25 years as a ‘science critic’

A quarter of a century ago, in May 2000, I published an article entitled “Why science thrives on criticism”. The article, which ran to slightly over a page in Physics World magazine, was the first in a series of columns called Critical Point. Periodicals, I said, have art and music critics as well as sports and political commentators, and book and theatre reviewers too. So why shouldn’t Physics World have a science critic?

The implication that I had a clear idea of the “critical point” for this series was not entirely accurate. As the years go by, I have found myself improvising, inspired by politics, books, scientific discoveries, readers’ thoughts, editors’ suggestions and more. If there is one common theme, it’s that science is like a workshop – or a series of loosely related workshops – as I argued in The Workshop and the World, a book that sprang from my columns.

Workshops are controlled environments, inside which researchers can stage and study special things – elementary particles, chemical reactions, plant uptakes of nutrients – that appear rarely or in a form difficult to study in the surrounding world. Science critics do not participate in the workshops themselves or even judge their activities. What they do is evaluate how workshops and worlds interact.

This can happen in three ways.

Critical triangle

The first is to explain why what’s going on inside the workshops matters to outsiders. Sometimes, those activities can be relatively simple to describe, which leads to columns concerning all manner of everyday activities. I have written, for example, about the physics of coffee and breadmaking. I’ve also covered toys, tops, kaleidoscopes, glass and other things that all of us – physicists and non-physicists alike – use, value and enjoy.

Sometimes I draw out more general points about why those activities are important. Early on, I invited readers to nominate their most beautiful experiments in physics. (Spoiler alert: the clear winner was the double-slit experiment with electrons.) I later did something similar about the cultural impact of equations – inviting readers to pick their favourites and reporting on their results (second spoiler alert: Maxwell’s equations came top). I also covered readers’ most-loved literature about laboratories.

Physicists often engage in activities that might seem inconsequential to them yet are an intrinsic part of the practice of physics

When viewing science as workshops, a second role is to explain why what’s outside the workshops matters to insiders. That’s because physicists often engage in activities that might seem inconsequential to them – they’re “just what the rest of the world does” – yet are an intrinsic part of the practice of physics. I’ve covered, for example, physicists taking out patents, creating logos, designing lab architecture, taking holidays, organizing dedications, going on retirement and writing memorials for the deceased.

Such activities I term “black elephants”. That’s because they’re a cross between things physicists don’t want to talk about (“elephants in the room”) and things that force them to renounce cherished notions (just as “black swans” disprove that “all swans are white”).

A third role of a science critic is to examine things that matter both inside and outside the workshop. I’m thinking of things like competition, leadership, trust, surprise, workplace training courses, cancel culture and even jokes and funny tales. Interpretations of the meaning of quantum mechanics, such as “QBism”, which I covered both in 2019 and 2022, are an ongoing interest. That’s because they’re relevant both to the structure of physics and to philosophy as they disrupt notions of realism, objectivity, temporality and the scientific method.

Being critical

The term “critic” may suggest someone with a congenitally negative outlook, but that’s wrong. My friend Fred Cohn, a respected opera critic, told me that, in a conversation after a concert, he criticized the performance of the singer Luciano Pavarotti. His remark provoked a woman to shout angrily at him: “Could you do better?” Of course not! It’s the critic’s role to evaluate performances of an activity, not to perform the activity oneself.

Working practices In his first Critical Point column for Physics World, philosopher and historian of science Robert P Crease interrogated the role of the science critic. (Courtesy: iStock/studiostockart)

Having said that, sometimes a critic must be critical to be honest. In particular, I hate it when scientists try to delegitimize the experience of non-scientists by saying, for example, that “time does not exist”. Or when they pretend they don’t see rainbows but wavelengths of light or that they don’t see sunrises or the plane of a Foucault pendulum move but the Earth spinning. Comments like that turn non-scientists off science by making it seem elitist and other-worldly. It’s what I call “scientific gaslighting”.

Most of all, I hate it when scientists pontificate that philosophy is foolish or worthless, especially when it’s the likes of Steven Pinker, who ought to know better. Writing in Nature (518 300), I once criticized the great theoretical physicist Steven Weinberg, who I counted as a friend, for taking a complex and multivalent text, plucking out a single line, and misreading it as if the line were from a physics text.

The text in question was Plato’s Phaedo, where Socrates expresses his disappointment with his fellow philosopher Anaxagoras for giving descriptions of heavenly bodies “in purely physical terms, without regard to what is best”. Weinberg claimed this statement meant that Socrates “was not very interested in natural science”. Nothing could be further from the truth.

At that moment in the Phaedo, Socrates is recounting his intellectual autobiography. He has just come to the point where, as a youth, he was entranced by materialism and was eager to hear Anaxagoras’s opposing position. When Anaxagoras promised to describe the heavens both mechanically and as the product of a wise and divine mind but could do only the former, Socrates says he was disappointed.

Weinberg’s jibe ignores the context. Socrates is describing how he had once embraced Anaxagoras’s view of a universe ruled by a divine mind but later rejected that view. As an adult, Socrates learned to evaluate hypotheses and other claims by putting them to the test, just as modern-day scientists do. Weinberg was misrepresenting Socrates by describing a position that he later abandoned.

The critical point of the critical point

Ultimately, the “critical point” of my columns over the last 25 years has been to provoke curiosity and excitement about what philosophers, historians and sociologists do for science. I’ve also wanted to raise awareness that these fields are not just fripperies but essential if we are to fully understand and protect scientific activity.

As I have explained several times – especially in the wake of the US shutting its High Flux Beam Reactor and National Tritium Labeling Facility – scientists need to understand and relate to the surrounding world with the insight of humanities scholars. Because if they don’t, they are in danger of losing their workshops altogether.

The post Robert P Crease lifts the lid on 25 years as a ‘science critic’ appeared first on Physics World.

  •  

Helium nanobubble measurements shed light on origins of heavy elements in the universe

New measurements by physicists from the University of Surrey in the UK have shed fresh light on where the universe’s heavy elements come from. The measurements, which were made by smashing high-energy protons into a uranium target to generate strontium ions, then accelerating these ions towards a second, helium-filled target, might also help improve nuclear reactors.

The origin of the elements that follow iron in the periodic table is one of the biggest mysteries in nuclear astrophysics. As Surrey’s Matthew Williams explains, the standard picture is that these elements were formed when other elements captured neutrons, then underwent beta decay. The two ways this can happen are known as the rapid (r) and slow (s) processes.

The s-process occurs in the cores of stars and is relatively well understood. The r-process is comparatively mysterious. It occurs during violent astrophysical events such as certain types of supernovae and neutron star mergers that create an abundance of free neutrons. In these neutron-rich environments, atomic nuclei essentially capture neutrons before the neutrons can turn into protons via beta-minus decay, which occurs when a neutron emits an electron and an antineutrino.
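Written out schematically, the capture-then-decay chain the standard picture describes is (for a nucleus X with Z protons and mass number A, decaying to an element Y with one more proton):

```latex
{}^{A}_{Z}\mathrm{X} + n \longrightarrow {}^{A+1}_{Z}\mathrm{X} + \gamma,
\qquad
{}^{A+1}_{Z}\mathrm{X} \longrightarrow {}^{A+1}_{Z+1}\mathrm{Y} + e^- + \bar{\nu}_e
```

In the r-process, the first step repeats many times before the second can occur; in the s-process, each capture is typically followed by a decay.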

From the night sky to the laboratory

One way of studying the r-process is to observe older stars. “Studies on heavy element abundance patterns in extremely old stars provide important clues here because these stars formed at times too early for the s-process to have made a significant contribution,” Williams explains. “This means that the heavy element pattern in these old stars may have been preserved from material ejected by prior extreme supernovae or neutron star merger events, in which the r-process is thought to happen.”

Recent observations of this type have revealed that the r-process is not necessarily a single scenario with a single abundance pattern. It may also have a “weak” component that is responsible for making elements with atomic numbers ranging from 37 (rubidium) to 47 (silver), without getting all the way up to the heaviest elements such as gold (atomic number 79) or actinides like thorium (90) and uranium (92).

This weak r-process could occur in a variety of situations, Williams explains. One scenario involves radioactive isotopes (that is, those with a few more neutrons than their stable counterparts) forming in hot neutrino-driven winds streaming from supernovae. This “flow” of nucleosynthesis towards higher neutron numbers is caused by processes known as (alpha,n) reactions, which occur when a radioactive isotope fuses with a helium nucleus and spits out a neutron. “These reactions impact the final abundance pattern before the neutron flux dissipates and the radioactive nuclei decay back to stability,” Williams says. “So, to match predicted patterns to what is observed, we need to know how fast the (alpha,n) reactions are on radioactive isotopes a few neutrons away from stability.”

The 94Sr(alpha,n)97Zr reaction

To obtain this information, Williams and colleagues studied a reaction in which radioactive strontium-94 absorbs an alpha particle (a helium nucleus), then emits a neutron and transforms into zirconium-97. To produce the radioactive 94Sr beam, they fired high-energy protons at a uranium target at TRIUMF, the Canadian national accelerator centre. Using lasers, they selectively ionized and extracted strontium from the resulting debris before using a magnetic spectrometer to select the 94Sr ions.
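In standard notation, with charge and mass number balanced on both sides (Z: 38 + 2 = 40; A: 94 + 4 = 97 + 1), the reaction reads:

```latex
{}^{94}_{38}\mathrm{Sr} + {}^{4}_{2}\mathrm{He} \longrightarrow {}^{97}_{40}\mathrm{Zr} + n
```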

The team then accelerated a beam of these 94Sr ions to energies representative of collisions that would happen when a massive star explodes as a supernova. Finally, they directed the beam onto a nanomaterial target made of a silicon thin film containing billions of small nanobubbles of helium. This target was made by researchers at the Materials Science Institute of Seville (CSIC) in Spain.

“This thin film crams far more helium into a small target foil than previous techniques allowed, thereby enabling the measurement of helium burning reactions with radioactive beams that characterize the weak r-process,” Williams explains.

To identify the 94Sr(alpha,n)97Zr reactions, the researchers used a mass spectrometer to select for 97Zr while simultaneously using an array of gamma-ray detectors around the target to look for the gamma rays it emits. When they saw both a heavy ion with an atomic mass of 97 and a 97Zr gamma ray, they knew they had identified the reaction of interest. In doing so, Williams says, they were able to measure the probability that this reaction occurs at the energies and temperatures present in supernovae.
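The coincidence logic described above can be sketched in a few lines. This is an illustrative stand-in, not the team's actual analysis pipeline: the event format and the 97Zr gamma-ray energy used here are assumptions for the sake of the example.

```python
# Sketch of mass-97/gamma-ray coincidence gating (hypothetical event format;
# the gamma energy below is an illustrative placeholder, not a verified 97Zr line).
ZR97_GAMMA_KEV = 1103.0   # assumed gamma-ray energy for the example
TOLERANCE_KEV = 2.0       # assumed detector energy window

def is_candidate(event):
    """Flag events where the spectrometer sees mass 97 AND a matching gamma ray."""
    mass_ok = event["ion_mass"] == 97
    gamma_ok = any(abs(e - ZR97_GAMMA_KEV) < TOLERANCE_KEV
                   for e in event["gamma_energies_kev"])
    return mass_ok and gamma_ok

events = [
    {"ion_mass": 97, "gamma_energies_kev": [1102.9]},   # true coincidence
    {"ion_mass": 97, "gamma_energies_kev": [511.0]},    # mass match only
    {"ion_mass": 94, "gamma_energies_kev": [1103.1]},   # gamma match only
]
candidates = [ev for ev in events if is_candidate(ev)]
print(len(candidates))  # 1
```

Requiring both signatures at once is what suppresses the background from ions or gamma rays seen alone.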

Williams thinks that scientists should be able to measure many more weak r-process reactions using this technology. This should help them constrain where the weak r-process comes from. “Does it happen in supernovae winds? Or can it happen in a component of ejected material from neutron star mergers?” he asks.

As well as shedding light on the origins of heavy elements, the team’s findings might also help us better understand how materials respond to the high radiation environments in nuclear reactors. “By updating models of how readily nuclei react, especially radioactive nuclei, we can design components for these reactors that will operate and last longer before needing to be replaced,” Williams says.

The work is detailed in Physical Review Letters.


  •  

Schrödinger cat states like it hot

Superpositions of quantum states known as Schrödinger cat states can be created in “hot” environments with temperatures up to 1.8 K, say researchers in Austria and Spain. By reducing the restrictions involved in obtaining ultracold temperatures, the work could benefit fields such as quantum computing and quantum sensing.

In 1935, Erwin Schrödinger used a thought experiment now known as “Schrödinger’s cat” to emphasize what he saw as a problem with some interpretations of quantum theory. His gedankenexperiment involved placing a quantum system (a cat in a box with a radioactive sample and a flask of poison) in a state that is a superposition of two states (“alive cat” if the sample has not decayed and “dead cat” if it has). These superposition states are now known as Schrödinger cat states (or simply cat states) and are useful in many fields, including quantum computing, quantum networks and quantum sensing.
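In the harmonic-oscillator realizations used in such experiments, a cat state is conventionally written as a superposition of two coherent states of opposite phase, |α⟩ and |−α⟩:

```latex
|\mathrm{cat}_{\pm}\rangle
= \frac{1}{\sqrt{2\left(1 \pm e^{-2|\alpha|^{2}}\right)}}
\left( |\alpha\rangle \pm |-\alpha\rangle \right)
```

The normalization factor accounts for the overlap ⟨α|−α⟩ = e^(−2|α|²), which becomes negligible for large |α| – the regime where the two components are most "cat-like".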

Creating a cat state, however, requires quantum particles to be in their ground state. This, in turn, means cooling them to extremely low temperatures. Even marginally higher temperatures were thought to destroy the fragile nature of these states, rendering them useless for applications. But the need for ultracold temperatures comes with its own challenges, as it restricts the range of possible applications and hinders the development of large-scale systems such as powerful quantum computers.

Cat on a hot tin…microwave cavity?

The new work, which was carried out by researchers at the University of Innsbruck and IQOQI in Austria together with colleagues at the ICFO in Spain, challenges the idea that ultralow temperatures are a must for generating cat states. Instead of starting from the ground state, they used thermally excited states to show that quantum superpositions can exist at temperatures of up to 1.8 K – an environment that might as well be an oven in the quantum world.

Team leader Gerhard Kirchmair, a physicist at the University of Innsbruck and the IQOQI, says the study evolved from one of those “happy accidents” that characterize work in a collaborative environment. During a coffee break with a colleague, he realized he was well-equipped to prove the hypothesis of another colleague, Oriol Romero-Isart, who had shown theoretically that cat states can be generated out of a thermal state.

The experiment involved creating cat states inside a microwave cavity that acts as a quantum harmonic oscillator. This cavity is coupled to a superconducting transmon qubit that behaves as a two-level system where the superposition is generated. While the overall setup is cooled to 30 mK, the cavity mode itself is heated by equilibrating it with amplified Johnson-Nyquist noise from a resistor, making it 60 times hotter than its environment.

To establish the existence of quantum correlations at this higher temperature, the team directly measured the Wigner functions of the states. Doing so revealed the characteristic interference patterns of Schrödinger cat states.
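The Wigner function is the standard quasiprobability representation used for such measurements; for a state with density operator ρ it is

```latex
W(x,p) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty}
\langle x+y \,|\, \hat{\rho} \,|\, x-y \rangle \, e^{-2ipy/\hbar} \, \mathrm{d}y
```

Unlike a classical probability distribution, W can take negative values, and the oscillating fringes between the two coherent-state peaks are the tell-tale signature of a cat state's quantum coherence.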

Benefits for quantum sensing and error correction

According to Kirchmair, being able to realize cat states without ground-state cooling could bring benefits for quantum sensing. The mechanical oscillator systems used to sense acceleration or force, for example, are normally cooled to the ground state to achieve the necessary high sensitivity, but such extreme cooling may not be necessary. He adds that quantum error correction schemes could also benefit, as they rely on being able to create cat states reliably; the team’s work shows that a residual thermal population places fewer limitations on this than previously thought.

“For next steps we will use the system for what it was originally designed, i.e. to mediate interactions between multiple qubits for novel quantum gates,” he tells Physics World.

Yiwen Chu, a quantum physicist from ETH Zürich in Switzerland who was not involved in this research, praises the “creativeness of the idea”. She describes the results as interesting and surprising because they seem to counter the common view that lack of purity in a quantum state degrades quantum features. She also agrees that the work could be important for quantum sensing, adding that many systems – including some more suited for sensing – are difficult to prepare in the ground state.

However, Chu notes that, for reasons stemming from the system’s parameters and the protocols the team used to generate the cat states, it should be possible to cool this particular system very efficiently to the ground state. This, she says, somewhat diminishes the argument that the method will be useful for systems where this isn’t the case. “However, these parameters and the protocols they showed might not be the only way to prepare such states, so on a fundamental level it is still very interesting,” she concludes.


  •  

Very high-energy electrons could prove optimal for FLASH radiotherapy

Electron therapy has long played an important role in cancer treatments. Electrons with energies of up to 20 MeV can treat superficial tumours while minimizing delivered dose to underlying tissues; they are also ideal for performing total skin therapy and intraoperative radiotherapy. The limited penetration depth of such low-energy electrons, however, limits the range of tumour sites that they can treat. And as photon-based radiotherapy technology continues to progress, electron therapy has somewhat fallen out of fashion.

That could all be about to change with the introduction of radiation treatments based on very high-energy electrons (VHEEs). Once realised in the clinic, VHEEs – with energies from 50 up to 400 MeV – will deliver highly penetrating, easily steerable, conformal treatment beams with the potential to enable emerging techniques such as FLASH radiotherapy. French medical technology company THERYQ is working to make this opportunity a reality.

Therapeutic electron beams are produced using radio frequency (RF) energy to accelerate electrons within a vacuum cavity. An accelerator just over 1 m in length can boost electrons to energies of about 25 MeV – corresponding to a tissue penetration depth of a few centimetres. It’s possible to create higher energy beams by simply daisy chaining additional vacuum chambers. But such systems soon become too large and impractical for clinical use.

THERYQ is focusing on a totally different approach to generating VHEE beams. “In an ideal case, these accelerators allow you to reach energy transfers of around 100 MeV/m,” explains THERYQ’s Sébastien Curtoni. “The challenge is to create a system that’s as compact as possible, closer to the footprint and cost of current radiotherapy machines.”
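A rough back-of-the-envelope comparison shows why the gradient matters for footprint. The gradients below are the figures quoted above; the 200 MeV target energy is taken from the VHEE range discussed in this article, and structure overheads are ignored.

```python
# Back-of-the-envelope: active accelerating length needed for a given beam energy.
def length_m(energy_mev, gradient_mev_per_m):
    """Length required at a constant gradient, ignoring focusing/structure overheads."""
    return energy_mev / gradient_mev_per_m

conventional = length_m(200, 25)    # ~25 MeV/m: the "1 m gives ~25 MeV" figure above
high_gradient = length_m(200, 100)  # ~100 MeV/m: the ideal-case gradient quoted above
print(conventional, high_gradient)  # 8.0 2.0
```

An eight-metre linac is hospital-unfriendly; a two-metre one starts to approach the footprint of existing radiotherapy machines.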

Working in collaboration with CERN, THERYQ is aiming to modify CERN’s Compact Linear Collider technology for clinical applications. “We are adapting the CERN technology, which was initially produced for particle physics experiments, to radiotherapy,” says Curtoni. “There are definitely things in this design that are very useful for us and other things that are difficult. At the moment, this is still in the design and conception phase; we are not there yet.”

VHEE advantages

The higher energy of VHEE beams provides sufficient penetration to treat deep tumours, with the dose peak region extending up to 20–30 cm in depth for parallel (non-divergent) beams using energy levels of 100–150 MeV (for field sizes of 10 × 10 cm or above). And in contrast to low-energy electrons, which have significant lateral spread, VHEE beams have extremely narrow penumbra with sharp beam edges that help to create highly conformal dose distributions.

“Electrons are extremely light particles and propagate through matter in very straight lines at very high energies,” Curtoni explains. “If you control the initial direction of the beam, you know that the patient will receive a very steep and well defined dose distribution and that, even for depths above 20 cm, the beam will remain sharp and not spread laterally.”

Electrons are also relatively insensitive to tissue inhomogeneities, such as those encountered as the treatment beam passes through different layers of muscle, bone, fat or air. “VHEEs have greater robustness against density variations and anatomical changes,” adds THERYQ’s Costanza Panaino. “This is a big advantage for treatments in locations where there is movement, such as the lung and pelvic areas.”

It’s also possible to manipulate VHEEs via electromagnetic scanning. Electrons have a charge-to-mass ratio roughly 1800 times higher than that of protons, meaning that they can be steered with a much weaker magnetic field than required for protons. “As a result, the technology that you are building has a smaller footprint and the possibility of costing less,” Panaino explains. “This is extremely important because the cost of building a proton therapy facility is prohibitive for some countries.”
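The "roughly 1800 times" figure follows directly from the proton-to-electron mass ratio, since both particles carry one elementary charge. A quick check with CODATA mass values:

```python
# Charge-to-mass ratio comparison: both particles carry charge e, so the ratio of
# e/m values is just the inverse ratio of the masses (CODATA values, in kg).
M_E = 9.1093837015e-31   # electron mass
M_P = 1.67262192369e-27  # proton mass

charge_to_mass_factor = M_P / M_E   # (e/m_e) / (e/m_p)
print(round(charge_to_mass_factor))  # 1836
```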

Enabling FLASH

In addition to expanding the range of clinical indications that can be treated with electrons, VHEE beams can also provide a tool to enable the emerging – and potentially game changing – technique known as FLASH radiotherapy. By delivering therapeutic radiation at ultrahigh dose rates (higher than 100 Gy/s), FLASH vastly reduces normal tissue toxicity while maintaining anti-tumour activity, potentially minimizing harmful side-effects.

The recent interest in the FLASH effect began back in 2014 with the report of a differential response between normal and tumour tissue in mice exposed to high dose-rate, low-energy electrons. Since then, most preclinical FLASH studies have used electron beams, as did the first patient treatment in 2019 – a skin cancer treatment at Lausanne University Hospital (CHUV) in Switzerland, performed with the Oriatron eRT6 prototype from PMB-Alcen, the French company from which THERYQ originated.

FLASH radiotherapy is currently being used in clinical trials with proton beams, as well as with low-energy electrons, where it remains intrinsically limited to superficial treatments. Treating deep-seated tumours with FLASH requires more highly penetrating beams. And while the most obvious option would be to use photons, it’s extremely difficult to produce an X-ray beam with a high enough dose rate to induce the FLASH effect without excessive heat generation destroying the conversion target.

“It’s easier to produce a high dose-rate electron beam for FLASH than trying to [perform FLASH] with X-rays, as you use the electron beam directly to treat the patient,” Curtoni explains. “The possibility to treat deep-seated tumours with high-energy electron beams compensates for the fact that you can’t use X-rays.”

Panaino points out that in addition to high dose rates, FLASH radiotherapy also relies on various interdependent parameters. “Ideally, to induce the FLASH effect, the beam should be pulsed at a frequency of about 100 Hz, the dose-per-pulse should be 1 Gy or above, and the dose rate within the pulse should be higher than 10⁶ Gy/s,” she explains.
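The three numbers just quoted are mutually consistent, as a little arithmetic shows: 1 Gy delivered at an intra-pulse dose rate of 10⁶ Gy/s implies microsecond pulses, and at 100 Hz the time-averaged dose rate lands exactly at the >100 Gy/s FLASH threshold mentioned earlier.

```python
# Consistency check of the quoted FLASH beam parameters.
dose_per_pulse_gy = 1.0        # "dose-per-pulse should be 1 Gy or above"
intra_pulse_rate_gy_s = 1e6    # "dose rate within the pulse higher than 10^6 Gy/s"
rep_rate_hz = 100.0            # "pulsed at a frequency of about 100 Hz"

pulse_length_s = dose_per_pulse_gy / intra_pulse_rate_gy_s   # 1 microsecond
mean_dose_rate_gy_s = dose_per_pulse_gy * rep_rate_hz        # time-averaged rate
print(pulse_length_s, mean_dose_rate_gy_s)  # 1e-06 100.0
```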


Into the clinic

THERYQ is using its VHEE expertise to develop a clinical FLASH radiotherapy system called FLASHDEEP, which will use electrons at energies of 100 to 200 MeV to treat tumours at depths of up to 20 cm. The first FLASHDEEP systems will be installed at CHUV (which is part of a consortium with CERN and THERYQ) and at the Gustave Roussy cancer centre in France.

“We are trying to introduce FLASH into the clinic, so we have a prototype FLASHKNiFE machine that allows us to perform low-energy, 6 and 9 MeV, electron therapy,” says Charlotte Robert, head of the medical physics department research group at Gustave Roussy. “The first clinical trials using low-energy electrons are all on skin tumours, aiming to show that we can safely decrease the number of treatment sessions.”

While these initial studies are limited to skin lesions, clinical implementation of the FLASHDEEP system will extend the benefits of FLASH to many more tumour sites. Robert predicts that VHEE-based FLASH will prove most valuable for treating radioresistant cancers that cannot currently be cured. The rationale is that FLASH’s ability to spare normal tissue will allow delivery of higher target doses without increasing toxicity.

“You will not use this technology for diseases that can already be cured, at least initially,” she explains. “The first clinical trial, I’m quite sure, will be either glioblastoma or pancreatic cancers that are not effectively controlled today. If we can show that VHEE FLASH can spare normal tissue more than conventional radiotherapy can, we hope this will have a positive impact on lesion response.”

“There are a lot of technological challenges around this technology and we are trying to tackle them all,” Curtoni concludes. “The ultimate goal is to produce a VHEE accelerator with a very compact beamline that makes this technology and FLASH a reality for a clinical environment.”


  •  

Tiny sensor creates a stable, wearable brain–computer interface

Brain–computer interfaces (BCIs) enable the flow of information between the brain and an external device such as a computer, smartphone or robotic limb. Applications range from use in augmented and virtual reality (AR and VR), to restoring function to people with neurological disorders or injuries.

Electroencephalography (EEG)-based BCIs use sensors on the scalp to noninvasively record electrical signals from the brain and decode them to determine the user’s intent. Currently, however, such BCIs require bulky, rigid sensors that prevent use during movement and don’t work well with hair on the scalp, which affects the skin–electrode impedance. A team headed up at Georgia Tech’s WISH Center has overcome these limitations by creating a brain sensor that’s small enough to fit between strands of hair and is stable even while the user is moving.

“This BCI system can find wide applications. For example, we can realize a text spelling interface for people who can’t speak,” says W Hong Yeo, Harris Saunders Jr Professor at Georgia Tech and director of the WISH Center, who co-led the project with Tae June Kang from Inha University in Korea. “For people who have movement issues, this BCI system can offer connectivity with human augmentation devices, a wearable exoskeleton, for example. Then, using their brain signals, we can detect the user’s intentions to control the wearable system.”

A tiny device

The microscale brain sensor comprises a cross-shaped structure of five microneedle electrodes, with sharp tips (less than 30°) that penetrate the skin easily with nearly pain-free insertion. The researchers used UV replica moulding to create the array, followed by femtosecond laser cutting to shape it to the required dimensions – just 850 × 1000 µm – to fit into the space between hair follicles. They then coated the microsensor with a highly conductive polymer (PEDOT:Tos) to enhance its electrical conductivity.

Microscale brain sensor between hair strands
Between the hairs The size and lightweight design of the sensor significantly reduces motion artefacts. (Courtesy: W Hong Yeo)

The microneedles capture electrical signals from the brain and transmit them along ultrathin serpentine wires that connect to a miniaturized electronics system on the back of the neck. The serpentine interconnector stretches as the skin moves, isolating the microsensor from external vibrations and preventing motion artefacts. The miniaturized circuits then wirelessly transmit the recorded signals to an external system (AR glasses, for example) for processing and classification.

Yeo and colleagues tested the performance of the BCI using three microsensors inserted into the scalp of the occipital lobe (the brain’s visual processing centre). The BCI exhibited excellent stability, offering high-quality measurement of neural signals – steady-state visual evoked potentials (SSVEPs) – for up to 12 h, while maintaining low contact impedance density (0.03 kΩ/cm²).

The team also compared the quality of EEG signals measured using the microsensor-based BCI with those obtained from conventional gold-cup electrodes. Participants wearing both sensor types closed and opened their eyes while standing, walking or running.

With the participant standing still, both electrode types recorded stable EEG signals, with an increased amplitude upon closing the eyes, due to the rise in alpha wave power. During motion, however, the EEG time series recorded with the conventional electrodes showed noticeable fluctuations. The microsensor measurements, on the other hand, exhibited minimal fluctuations while walking and significantly fewer fluctuations than the gold-cup electrodes while running.

Overall, the alpha wave power recorded by the microsensors during eye-closing was higher than that of the conventional electrode, which could not accurately capture EEG signals while the user was running. The microsensors only exhibited minor motion artefacts, with little to no impact on the EEG signals in the alpha band, allowing reliable data extraction even during excessive motion.

Real-world scenario

Next, the team showed how the BCI could be used within everyday activities – such as making calls or controlling external devices – that require a series of decisions. The BCI enables a user to make these decisions using their thoughts, without needing physical input such as a keyboard, mouse or touchscreen. And the new microsensors free the user from environmental and movement constraints.

The researchers demonstrated this approach in six subjects wearing AR glasses and a microsensor-based EEG monitoring system. They performed experiments with the subjects standing, walking or running on a treadmill, with two distinct visual stimuli from the AR system used to induce SSVEP responses. Using a train-free SSVEP classification algorithm, the BCI determined which stimulus the subject was looking at with a classification accuracy of 99.2%, 97.5% and 92.5%, while standing, walking and running, respectively.
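A train-free SSVEP classifier can be surprisingly simple in principle: score each candidate stimulus frequency by the EEG power at that frequency and pick the strongest. The sketch below is an illustrative stand-in using a single-bin DFT; the article does not specify the team's actual algorithm, and real systems typically use more robust methods such as canonical correlation analysis.

```python
# Minimal train-free SSVEP classification: power at each candidate frequency
# via a single-bin DFT, then argmax. Illustrative only -- not the team's method.
import math

def band_power(signal, fs, freq):
    """Power of `signal` (sampled at fs Hz) at one frequency, via a DFT bin."""
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (re * re + im * im) / len(signal)

def classify_ssvep(signal, fs, candidate_freqs):
    """Return the candidate stimulus frequency with the most power in the trace."""
    return max(candidate_freqs, key=lambda f: band_power(signal, fs, f))

# Synthetic 1 s EEG-like trace: a 12 Hz SSVEP response plus a weaker 10 Hz term.
fs = 250
trace = [math.sin(2 * math.pi * 12 * i / fs) + 0.3 * math.sin(2 * math.pi * 10 * i / fs)
         for i in range(fs)]
print(classify_ssvep(trace, fs, [10.0, 12.0]))  # 12.0
```

Because no per-user calibration data are needed, such frequency-matching schemes are what make "train-free" operation possible.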

The team also developed an AR-based video call system controlled by EEG, which allows users to manage video calls (rejecting, answering and ending) with their thoughts, demonstrating its use during scenarios such as ascending and descending stairs and navigating hallways.

“By combining BCI and AR, this system advances communication technology, offering a preview of the future of digital interactions,” the researchers write. “Additionally, this system could greatly benefit individuals with mobility or dexterity challenges, allowing them to utilize video calling features without physical manipulation.”

The microsensor-based BCI is described in Proceedings of the National Academy of Sciences.

The post Tiny sensor creates a stable, wearable brain–computer interface appeared first on Physics World.


How to Watch the Lyrids Meteor Shower

The second major meteor shower of the year starts today and peaks on the night of April 21–22. Here’s everything you need to know to watch it and the many other showers that will appear in 2025.


Physicists gather in Nottingham for the IOP’s Celebration of Physics 2025

With so much turmoil in the world at the moment, it’s always great to meet enthusiastic physicists celebrating all that their subject has to offer. That was certainly the case when I travelled with my colleague Tami Freeman to the 2025 Celebration of Physics at Nottingham Trent University (NTU) on 10 April.

Organized by the Institute of Physics (IOP), which publishes Physics World, the event was aimed at “physicists, creative thinkers and anyone interested in science”. It also featured some of the many people who won IOP awards last year, including Nick Stone from the University of Exeter, who was awarded the 2024 Rosalind Franklin medal and prize.

Stone was honoured for his “pioneering use of light for diagnosis and therapy in healthcare”, including “developing novel Raman spectroscopic tools and techniques for rapid in vivo cancer diagnosis and monitoring”. Speaking in a Physics World Live chat, Stone explained why Raman spectroscopy is such a useful technique for medical imaging.

Nottingham is, of course, a city famous for medical imaging, thanks in particular to the University of Nottingham Nobel laureate Peter Mansfield (1933–2017), who pioneered magnetic resonance imaging (MRI). In an entertaining talk, Rob Morris from NTU explained how MRI is also crucial for imaging foodstuffs, helping the food industry to boost productivity, reduce waste – and make tastier pork pies.

Still on the medical theme, Niall Holmes from Cerca Magnetics, which was spun out from the University of Nottingham, explained how his company has developed wearable magnetoencephalography (MEG) sensors that can measure the magnetic fields generated by neuronal firing in the brain. In 2023 Cerca won one of the IOP’s business and innovation awards.

Richard Friend from the University of Cambridge, who won the IOP’s top Isaac Newton medal and prize, discussed some of the many recent developments that have followed from his seminal 1990 discovery that semiconducting polymers can be used in light-emitting diodes (LEDs).

The event ended with a talk from particle physicist Tara Shears from the University of Liverpool, who outlined some of the findings of the new IOP report Physics and AI, to which she was an adviser. Based on a survey with 700 responses and a workshop with experts from academia and industry, the report concludes that physics doesn’t just benefit from AI – it underpins it too.

I’m sure AI will be good for physics overall, but I hope it never removes the need for real-life meetings like the Celebration of Physics.

The post Physicists gather in Nottingham for the IOP’s Celebration of Physics 2025 appeared first on Physics World.


Two-dimensional metals make their debut

Researchers from the Institute of Physics of the Chinese Academy of Sciences have produced the first two-dimensional (2D) sheets of metal. At just angstroms thick, these metal sheets could be an ideal system for studying the fundamental physics of the quantum Hall effect, 2D superfluidity and superconductivity, topological phase transitions and other phenomena that feature tight quantum confinement. They might also be used to make novel electronic devices such as ultrathin low-power transistors, high-frequency devices and transparent displays.

Since the discovery of graphene – a 2D sheet of carbon just one atom thick – in 2004, hundreds of other 2D materials have been fabricated and studied. In most of these, layers of covalently-bonded atoms are separated by gaps. The presence of these gaps means that neighbouring layers are held together only by weak van der Waals (vdW) interactions, making it relatively easy to “shave off” single layers to make 2D sheets.

Making atomically thin metals would expand this class of technologically important structures. However, because each atom in a metal is strongly bonded to surrounding atoms in all directions, thinning metal sheets to this degree has proved difficult. Indeed, many researchers thought it might be impossible.

Melting and squeezing pure metals

The technique developed by Guangyu Zhang, Luojun Du and colleagues involves heating powders of pure metals between two monolayer-MoS2/sapphire vdW anvils. The team used MoS2/sapphire because both materials are atomically flat and lack dangling bonds that could react with the metals. They also have high Young’s moduli, of 430 GPa and 300 GPa respectively, meaning they can withstand extremely high pressures.

Once the metal powders melted into a droplet, the researchers applied a pressure of 200 MPa. They then continued this “vdW squeezing” until the opposite sides of the anvils cooled to room temperature and 2D sheets of metal formed.

The team produced five atomically thin 2D metals using this technique: tin (~5.8 Å), bismuth (~6.3 Å), lead (~7.5 Å), indium (~8.4 Å) and gallium (~9.2 Å).

“Arduous explorations”

Zhang, Du and colleagues started this project around 10 years ago after they decided it would be interesting to work on 2D materials other than graphene and its layered vdW cousins. At first, they had little success. “Since 2015, we tried out a host of techniques, including using a hammer to thin a metal foil – a technique that we borrowed from gold foil production processes – all to no avail,” Du recalls. “We were not even able to make micron-thick foils using these techniques.”

After 10 years of what Du calls “arduous explorations”, the team finally took a crucial step forward by developing the vdW squeezing method.

Writing in Nature, the researchers say that the five 2D metals they’ve realized so far are just the “tip of the iceberg” for their method. They now intend to increase this number. “In terms of novel properties, there is still a knowledge gap in the emerging electrical, optical, magnetic properties of 2D metals, so it would be nice to see how these materials behave physically as compared to their bulk counterparts thanks to 2D confinement effects,” says Zhang. “We would also like to investigate to what extent such 2D metals could be used for specific applications in various technological fields.”

The post Two-dimensional metals make their debut appeared first on Physics World.


Bilayer optical lattices could unravel the secret of high-temperature superconductivity

A proposed experiment that would involve trapping atoms on a two-layered laser grid could be used to study the mechanism behind high-temperature superconductivity. Developed by physicists in Germany and France led by Henning Schlömer, the new protocol could transform our understanding of this enigmatic phenomenon.

Superconductivity is a phenomenon in which the electrical resistance of certain materials drops abruptly to zero when they are cooled below a critical temperature. More than a century after its discovery, it continues to puzzle physicists. While scientists have a good understanding of “conventional” superconductors (which tend to have low critical temperatures), the physics of high-temperature superconductors remains poorly understood. A deeper understanding of the mechanisms responsible for high-temperature superconductivity could unveil the secrets behind macroscopic quantum phenomena in many-body systems.

Mimicking real crystalline materials

Optical lattices have emerged as a powerful tool to study such many-body quantum systems. Here, two counter-propagating laser beams overlap to create a standing wave. Extending this into two dimensions creates a grid (or lattice) of potential-energy minima where atoms can be trapped (see figure). The interactions between these trapped atoms can then be tuned to mimic real crystalline materials giving us an unprecedented ability to study their properties.
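The geometry described above can be made concrete with a minimal sketch: two orthogonal standing waves produce a potential V(x, y) = −V0[cos²(kx) + cos²(ky)], whose minima form a square grid of trapping sites with spacing half the laser wavelength. The depth V0, wavelength and sign convention below are illustrative assumptions, not values from the proposal.

```python
import numpy as np

def lattice_potential(x, y, V0=1.0, wavelength=1.0):
    """2-D optical lattice from two orthogonal standing waves:
    V(x, y) = -V0 [cos^2(kx) + cos^2(ky)], with k = 2*pi/wavelength.
    Trapping sites (minima) form a square grid of spacing wavelength/2."""
    k = 2 * np.pi / wavelength
    return -V0 * (np.cos(k * x) ** 2 + np.cos(k * y) ** 2)

print(lattice_potential(0.0, 0.0))                   # -2.0: a trapping site
print(abs(lattice_potential(0.25, 0.25)) < 1e-12)    # True: a maximum between sites
```

Tuning the laser intensity changes V0 and hence the tunnelling rate between neighbouring sites – one of the "knobs" that makes optical lattices such flexible quantum simulators.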

Superconductivity is characterized by the formation of long-range correlations between electron pairs. While the electronic properties of high-temperature superconductors can be studied in the lab, it can be difficult to test hypotheses because the properties of each superconductor are fixed. In contrast, correlations between atoms in an optical lattice can be tuned, allowing different models and parameters to be explored.

Henning Schlömer (left) and Hannah Lange, PhD students at the Ludwig Maximilian University of Munich, who collaborated on the proposal. (Courtesy: Henning Schlömer/Hannah Lange)

This could be done by trapping fermionic atoms (analogous to electrons in a superconducting material) in an optical lattice and enabling them to form pair correlations. However, this has proved challenging because these correlations only occur at very low temperatures that are experimentally inaccessible. Measuring them poses a further challenge: adding or removing atoms at specific sites in the lattice without disturbing the overall lattice state. Now, Schlömer and colleagues propose a new protocol to overcome these challenges.

The proposal

The researchers propose trapping fermionic atoms on a two-layered lattice. By introducing a potential-energy offset between the two layers, they ensure that the atoms can only move within a layer, with no hopping between layers. They enable magnetic interactions between the two layers, allowing the atoms to form spin correlations such as singlets, in which paired atoms have opposite spins. The dynamics of these interlayer correlations give rise to superconducting behaviour.

This system is modelled using a “mixed-dimensional bilayer” (MBD) model. It accounts for three phenomena: the hopping of atoms between lattice sites within a layer; the magnetic (spin) interaction between atoms in different layers; and the magnetic interactions between atoms within a layer.
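Schematically – with notation assumed for illustration rather than taken from the paper – the three terms just listed can be written as a Hamiltonian of the form:

```latex
% Sketch of a mixed-dimensional bilayer Hamiltonian (notation assumed):
% intralayer hopping t, interlayer spin exchange J_perp,
% intralayer spin exchange J_parallel; layers labelled by l = 1, 2.
H = -t \sum_{\langle i,j \rangle, \ell, \sigma}
      \left( c^{\dagger}_{i\ell\sigma} c_{j\ell\sigma} + \mathrm{h.c.} \right)
  + J_{\perp} \sum_{i} \mathbf{S}_{i,1} \cdot \mathbf{S}_{i,2}
  + J_{\parallel} \sum_{\langle i,j \rangle, \ell}
      \mathbf{S}_{i\ell} \cdot \mathbf{S}_{j\ell}
```

The potential-energy offset between the layers suppresses interlayer hopping, which is why no interlayer tunnelling term appears: the layers communicate only through the spin exchange J⊥.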

Numerical simulations of the MBD model suggest that superconductor-like behaviour occurs in optical lattices at critical temperatures much higher than those predicted by traditional models – temperatures that are readily accessible in experiments.

To measure the correlations, one needs to track pair formation in the lattice. One way to do this is to add or remove atoms from the lattice without disturbing the overall lattice state; however, this is experimentally infeasible. Instead, the researchers propose doping the energetically higher layer with holes – that is, removing atoms to create vacant sites – and the energetically lower layer with doublons: atom pairs that occupy a single lattice site. The potential offset between the two layers can then be tuned to enable controlled interactions between the doublons and holes. This would allow researchers to study pair formation via this interaction rather than having to add or remove atoms at specific lattice sites.

Clever mathematical trick

To study superconducting correlations in the doped system, the researchers employ a clever mathematical trick. Using a mathematical transformation, they map the model onto an equivalent one described only by “hole-type” dopants, without changing the underlying physics. This allows them to map superconducting correlations onto density correlations, which can be routinely accessed in existing experiments.

With their proposal, Schlömer and colleagues are able both to prepare the optical lattice in a state where superconducting behaviour occurs at experimentally accessible temperatures and to study this behaviour by measuring pair formation.

When asked about possible experimental realizations, Schlömer is optimistic: “While certain subtleties remain to be addressed, the technology is already in place – we expect it will become experimental reality in the near future”.

The research is described in PRX Quantum.

The post Bilayer optical lattices could unravel the secret of high-temperature superconductivity appeared first on Physics World.
