
Paradigm shifts: positivism, realism and the fight against apathy in the quantum revolution


Science can be a messy business. Scientists caught in the storm of a scientific revolution will try to react with calm logic and reasoning. But in a revolution the stakes are high, the atmosphere charged. Cherished concepts are abandoned as troubling new notions are cautiously embraced. And, as the paradigm shifts, the practice of science is overlaid with passionate advocacy and open hostility in near-equal measure. So it was – and, to a large extent, still is – with the quantum revolution.

Niels Bohr insisted that quantum theory is the result of efforts to describe a fundamentally statistical quantum world using concepts stolen from classical physics, which must therefore be interpreted “symbolically”. The calculation of probabilities, with no reference to any underlying causal mechanism that might explain how they arise, is the best we can hope for.

In the heat of the quantum revolution, Bohr’s “Copenhagen interpretation” was accused of positivism, the philosophy that valid knowledge of the physical world is derived only from direct experience. Albert Einstein famously disagreed, taking the time to explore alternatives more in keeping with a realist metaphysics, with a “trust in the rational character of reality and in its being accessible, to some extent, to human reason”, that had served science for centuries. Lest there be any doubt, Adam Forrest Kay’s Escape from Shadow Physics: the Quest to End the Dark Ages of Quantum Theory demonstrates that the Bohr–Einstein debate has never been resolved to anybody’s satisfaction and continues to this day.

Escape from Shadow Physics is a singular addition to the popular literature on quantum interpretations. Kay holds PhDs in both literature and mathematics and is currently a mathematics postdoc at the Massachusetts Institute of Technology. He stands firmly in Einstein’s corner, and his plea for a return to a realist programme is liberally sprinkled with passionate advocacy and open hostility in near-equal measure. He writes with the zeal of a true quantum reactionary.

Like many others before him, in arguing his case Kay needs first to build a monstrous, positivist Goliath that can be slain with the slingshot of realist logic and reasoning. This means embracing some enduring historical myths. These run as follows. The Bohr–Einstein debate was a direct confrontation between the subjectivism of the positivist and the objectivism of the realist. Bohr won the debate by browbeating the stubborn, senile and increasingly isolated Einstein into submission. Acting like some fanatical priesthood, physicists of Bohr’s church – such as Wolfgang Pauli, Werner Heisenberg and Léon Rosenfeld – shouted down all dissent, establishing the Copenhagen interpretation as a dogmatic orthodoxy.

Historical scientific myths are not entirely wrong, and typically hold some grains of truth. Rivals to the Copenhagen view were indeed given short shrift by the “Copenhagen hegemony”. Pauli sought to dismantle Louis de Broglie’s “pilot wave” interpretation soon after it was presented in 1927. He went on to dismiss its rediscovery by David Bohm in 1952 as “shadow physics beer-idea wish dreams”, and “not even new nonsense”. Rosenfeld dismissed Hugh Everett III’s “many worlds” interpretation of 1957 as “hopelessly wrong ideas”.

But Kay is not content with the myth as it is familiarly told, and so seeks to deepen it. He confers on Bohr “the charisma of the hypnotist, the charisma of the cult leader”, adding that “the Copenhagen group was, in a very real sense, a personality cult, centred on the special and wise Bohr”. Prosecuting such a case requires a selective reading of science history, snatching quotations where they fit the narrative, ignoring others where they don’t. In fact, Bohr did not deny objective reality, or the reality of electrons and atoms. In interviews conducted shortly before his death in 1962, Bohr reaffirmed that his core principle of “complementarity” (of waves and particles, for example) was “the only possible objective description”. Heisenberg, in contrast, was much less cautious in his use of language and makes an easier target for anti-positivist ire.

It can be argued that the orthodoxy, such as it is, is not actually based on philosophical pre-commitments. The post-war Americanization of physics drove what were judged to be pointless philosophical questions about the meaning of quantum theory to the fringes. Aside from those few physicists and philosophers who continued to nag at the problem, the majority of physicists just got on with their calculations, completely unconcerned about what the theory was supposed to mean. They just didn’t care.

As Bohm explained: “Everybody pays lip service to Bohr, but nobody knows what he says. People then get brainwashed into saying Bohr is right, but when the time comes to do their physics, they are doing something different.” Many who might claim to follow Bohr’s “dogma” satisfy their physical intuitions by continuing to think like Einstein.

Anton Zeilinger, who shared the 2022 Nobel Prize for Physics for his work on quantum entanglement and quantum information science, confessed that even physicists working in this new field consider foundations to be a bit dodgy: “We don’t understand the reason why. Must be psychological reasons, something like that, something very deep.” Kay admits this much when he writes: “Yes, many people think the debate is over and Bohr won, but that is actually a social phenomenon.” In other words, the orthodoxy is not philosophical, it is sociological. It has very little to do with Bohr and the Copenhagen interpretation. In truth, Kay is fighting for attention against the apathy and indifference characteristic of an orthodox mainstream physics, or what Thomas Kuhn called “normal science”.

As to how a modern-day realist programme might be pursued, Kay treats us to some visually suggestive experiments in which oil droplets follow trajectories determined by wave disturbances on the surface of the oil bath on which they move. He argues that such “quantum hydrodynamic analogues” show us that the pilot-wave interpretation merits much more attention than it has so far received. But while these analogues are intuitively appealing, the so-called “quantization” involved is as familiarly classical as musical notes generated by string or wind instruments. And, although such analogues may conjure surprising trajectories and patterns, they cannot conjure Planck’s constant. Or quantum entanglement.

But the pilot-wave interpretation demands a hefty trade-off. It features precisely the non-local, “peculiar mechanism of action at a distance” of the kind that Einstein abhorred, and which discouraged his own exploration of pilot waves in 1927. In an attempt to rescue the possibility that reality may yet be local, Kay reaches for a loophole in John Bell’s famous theorem and inequality. Yet he overlooks the enormous volume and variety of experiments that have been performed since the early 1980s, including tests of an inequality devised by the Nobel-prize-winning theorist Anthony Leggett that explicitly close the loophole he seeks to exploit.

Escape from Shadow Physics is a curate’s egg. Those readers who would condemn Bohr and the Copenhagen interpretation, for whatever reasons of their own, will likely cheer it on. Those looking for balanced arguments more reasoned than diatribe will likely be disappointed. Despite an extensive bibliography, Kay commits some curious sins of omission. But, while the journey that Kay takes may be flawed, there is yet sympathy for his destination. The debate does remain unresolved. Faced with the mystery of entanglement and non-locality, Bohr’s philosophy offers no solace. Kay (quoting a popular textbook) asks that we consider future generations in possession of a more sophisticated theory, who wonder how we could have been so gullible.

  • 2024 Weidenfeld & Nicolson 496pp £25 hb


‘Event-responsive’ electron microscopy focuses on fragile samples


A new scanning transmission electron microscope (STEM) technique that modulates the electron beam in response to the scattering rate allows images to be formed with the fewest electrons possible. The researchers hope their “event-responsive electron microscopy” could be used on fragile samples that are easily damaged by electron beams. The team is now working to implement their imaging paradigm with other microscopy techniques.

First developed in the 1930s, transmission electron microscopes have been invaluable for exploring almost all branches of science at tiny scales. These instruments rely on the fact that electrons can have far shorter de Broglie wavelengths than optical photons and hence can resolve much finer detail. Visible light microscopes cannot normally resolve features smaller than about 200 nm, but electron imaging can often achieve resolutions well below 0.1 nm. However, the higher energy of these electrons makes them more damaging to samples than light. Researchers must therefore keep the number of electrons scattered from a fragile sample to the absolute minimum needed to build up a clear image.
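
To put the numbers quoted above in context, the electron wavelength follows directly from the de Broglie relation. The short Python sketch below computes it for two assumed accelerating voltages (100 kV and 300 kV, typical of modern STEMs but not specified in the article) and shows why sub-0.1 nm resolution is possible in principle.

```python
# Illustrative check (not from the article): relativistic de Broglie wavelength
# of an electron at assumed STEM accelerating voltages of 100 kV and 300 kV.
import math

h = 6.62607015e-34      # Planck constant (J s)
m_e = 9.1093837015e-31  # electron rest mass (kg)
e = 1.602176634e-19     # elementary charge (C)
c = 2.99792458e8        # speed of light (m/s)

def electron_wavelength(voltage):
    """de Broglie wavelength (m) of an electron accelerated through `voltage` volts,
    including the relativistic correction to the momentum."""
    E = e * voltage                                        # kinetic energy (J)
    p = math.sqrt(2 * m_e * E * (1 + E / (2 * m_e * c**2)))
    return h / p

for kv in (100, 300):
    lam = electron_wavelength(kv * 1e3)
    print(f"{kv} keV electron: lambda = {lam * 1e12:.2f} pm")
# Roughly 3.7 pm at 100 keV and 2.0 pm at 300 keV, i.e. tens of thousands of times
# shorter than the ~200 nm resolution limit for visible light quoted above.
```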

In a STEM, an image is created by rapidly scanning a focused beam of electrons across a sample in a grid of pixels. Most of these electrons pass straight through the sample, but a small percentage are scattered sharply by collisions. Detectors that surround the beam path record these scattering events. The electron scattering rate from a particular point tells microscopists the density around that point, and thereby allows them to reconstruct an image of the sample.

Unnecessary radiation damage

Normally, the same number of incident electrons is fired at each pixel and the number of scattered electrons is counted. To create enough collisions at weakly scattering regions to resolve them properly, strongly scattering regions are exposed to far more incident electrons than necessary. As a result, samples may suffer unnecessary radiation damage.

In the new work, electron microscopists led by Jonathan Peters and Lewys Jones at Trinity College Dublin, together with Bryan Reed of Integrated Dynamic Electron Solutions in the US and colleagues in the UK and Japan, inverted the traditional measurement protocol by measuring the time required to achieve a fixed number of scattered electrons from every pixel. Jones offers an analogy: “If you look at the weather forecast on TV you see the rainfall in millimetres per hour,” he says; “If you look at how that’s measured by weather forecasters they go and put a beaker outside in the rain and, one hour later, they see how much is in the beaker…If I ask you how hard it’s raining, you’re going to go outside, stick your hand out and see how long it takes for, say, three drops to hit your hand…After you’ve reached some fixed [number of drops], you don’t wait for the rest of the hour in the rain.”

Event response

The researchers implemented an event-responsive microscopy protocol in which the individual scattered electrons from each pixel are recorded, and this information is fed back to the electron microscope. After the set number of scattered electrons is recorded from each individual pixel, a “beam blanker” is switched on until the end of the normal pixel waiting time. “A powerful voltage is applied to skew the beam off into the sidewall,” explains Jones. “It has the same effect of opening and closing a shutter on a camera.” This allowed the researchers to measure the scattering rate from all the sample points without subjecting any of them to unnecessary electron flux. “It’s not a slow process,” says Jones; “The image is formed in front of the user in real-time.”
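
The difference between the two dose strategies can be illustrated with a toy simulation. The Python sketch below uses invented scattering probabilities and electron budgets (none of these numbers come from the published work) to contrast a conventional fixed-dose scan with an event-responsive scan that blanks the beam once a target number of scattering events has been recorded.

```python
# Toy Monte Carlo (illustrative only; numbers are not from the published work):
# compare a conventional fixed-dose scan with the event-responsive scheme, in which
# each pixel is blanked as soon as a target number of scattered electrons is reached.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel scattering probabilities: a weak and a strong scatterer.
scatter_prob = np.array([0.01, 0.20])
target_counts = 5        # scattered electrons required per pixel (event-responsive)
fixed_dose = 500         # incident electrons per pixel (conventional)

def fixed_dose_scan(p, n_incident):
    """Fire a fixed number of electrons at a pixel; return (estimated rate, dose)."""
    hits = rng.binomial(n_incident, p)
    return hits / n_incident, n_incident

def event_responsive_scan(p, n_target, max_dose=10_000):
    """Fire electrons one at a time until n_target scattering events are recorded,
    then 'blank' the beam; return (estimated rate, dose actually delivered)."""
    hits = dose = 0
    while hits < n_target and dose < max_dose:
        dose += 1
        hits += rng.random() < p
    return hits / dose, dose

for p in scatter_prob:
    rate_fd, dose_fd = fixed_dose_scan(p, fixed_dose)
    rate_er, dose_er = event_responsive_scan(p, target_counts)
    print(f"p = {p:.2f}:  fixed dose -> rate {rate_fd:.3f} ({dose_fd} e-),  "
          f"event-responsive -> rate {rate_er:.3f} ({dose_er} e-)")
# The strongly scattering pixel reaches its target after only a few tens of electrons,
# so it receives far less dose than under the fixed-dose protocol.
```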

The researchers used their new protocol to produce images of biologically and chemically fragile samples with little to no radiation damage. They now hope it will prove possible to produce electron micrographs of samples such as some catalysts and drug molecules that are currently obliterated by electron beams before an image can be formed. They are also exploring the protocol’s use in other imaging techniques such as electron energy loss spectroscopy and X-ray microscopy. “It will probably take a number of years for us and other groups to fully unpick what such a fundamental shift in how measurements are made will mean for all the other kinds of techniques that people use microscopes for,” says Jones.

Electron microscopy expert Quentin Ramasse of the University of Leeds is enthusiastic about the work. “It’s inventive, it’s potentially changing the way we record data in a STEM and it’s doing so in a very simple fashion. It could provide an extra tool in our arsenal to not necessarily completely remove beam damage but certainly to minimize it,” he says. “It really is [the result of] clever electronics, clever hardware and a very clever take on how to drive the motion of the probe as a function of what the sample’s response has been up to that point.”

The research is described in Science.     


Introducing Python for electrochemistry research


To understand electrochemical behaviour and reaction mechanisms, electrochemists must analyze the correlation between current, potential, and other parameters, such as in situ information. As the experimental dataset becomes larger and the analysis task gets more complex, one may spend days sorting data, fitting models, and repeating these routine procedures. Moreover, sharing the analysis procedure and reproducing the results can be challenging as different commercial software, parameters, and steps can be involved. Therefore, an open-source, free, and all-in-one platform for electrochemistry research is needed.

Python is an interpreted programming language that has emerged as a transformative force within the scientific community. Its syntax prioritizes readability and simplicity, making analyses easy to reproduce and share across platforms. Furthermore, its rich ecosystem of community-provided packages enables multiple electrochemical tasks, from data analysis and visualization to fitting and simulation.

This webinar presents a general introduction to using Python for electrochemists new to programming concepts. Starting with the basic concepts, Python’s capability in electrochemistry research is demonstrated with examples, from data handling, treatment, fitting, and visualization to electrochemical simulation. Suggestions and resources on learning Python are provided.
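
As a flavour of the kind of workflow the webinar describes, the short Python sketch below generates a synthetic polarization curve and fits a Tafel slope to it with NumPy, SciPy and Matplotlib. The data and model are purely illustrative and are not taken from the presentation.

```python
# Minimal sketch of a typical electrochemistry workflow in Python (synthetic data,
# not from the webinar): fit a Tafel slope to polarization data and plot the result.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Synthetic polarization data: overpotential (V) vs current density (A/cm^2),
# following eta = a + b*log10(j) with some noise added.
rng = np.random.default_rng(1)
j = np.logspace(-4, -2, 30)                       # current density
eta_true = 0.05 + 0.12 * np.log10(j / 1e-3)       # "true" Tafel behaviour
eta = eta_true + rng.normal(0, 0.003, j.size)     # simulated measurement

def tafel(logj, a, b):
    return a + b * logj

popt, pcov = curve_fit(tafel, np.log10(j), eta)
a_fit, b_fit = popt
print(f"Tafel slope: {b_fit*1000:.1f} mV/decade")

plt.semilogx(j, eta * 1000, "o", label="data")
plt.semilogx(j, tafel(np.log10(j), *popt) * 1000, "-", label="fit")
plt.xlabel("Current density (A cm$^{-2}$)")
plt.ylabel("Overpotential (mV)")
plt.legend()
plt.show()
```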

An interactive Q&A session follows the presentation.

Weiran Zheng

Weiran Zheng is an associate professor in chemistry at the Guangdong Technion-Israel Institute of Technology (GTIIT), China. His research focuses on understanding the activation and long-term deactivation mechanisms of electrocatalysts at the atomic scale using operando techniques such as spectroscopy and surface probe microscopy. He is particularly interested in water electrolysis, ammonia electrooxidation, and sensing. His research also examines current practice in experimental electrochemistry with the aim of improving data accountability and reproducibility. Weiran Zheng received his BS (2009) and PhD (2015) from Wuhan University. Before joining GTIIT, he worked as a visiting researcher at the University of Oxford (2012–2014) and a research fellow at the Hong Kong Polytechnic University (2016–2021).

The Electrochemical Society

 


Smashing heavier ions creates superheavy livermorium


Physicists have used a beam of titanium-50 to create the element livermorium. This is the first time that nuclei heavier than calcium-48 have been used to synthesize a superheavy element. The international team, led by Jacklyn Gates at Lawrence Berkeley National Laboratory (LBNL) in California, hopes that their approach could pave the way for the discovery of entirely new elements.

Superheavy elements are found at the bottom right of the periodic table and have atomic numbers greater than 103. Creating and studying these huge elements pushes our experimental and theoretical capabilities and provides new insights into the forces that hold nuclei together.

Techniques for synthesizing these elements have vastly improved over the decades, and usually involve the irradiation of actinide targets (elements with atomic numbers from 89 to 102) with beams of transition metal ions.

Earlier in this century, superheavy elements were created by bombarding actinides with beams of calcium-48. “Using this technique, scientists managed to create elements up to oganesson, with an atomic number of 118,” says Gates. Calcium-48 is especially suited for this task because of its highly stable configuration of protons and neutrons, which allows it to fuse effectively with target nuclei.

Short-lived and difficult

Despite these achievements, the discovery of new superheavy elements has stalled. “To create elements beyond oganesson, we would need to use targets made from einsteinium or fermium,” Gates explains. “Unfortunately, these elements are short-lived and difficult to produce in large enough quantities for experiments.”

To try to move forward, physicists have explored alternative approaches. Instead of using heavier and less stable actinide targets, researchers considered how lighter, more stable actinide targets such as plutonium (atomic number 94) would interact with beams of heavier transition metal isotopes.

Several theoretical studies have proposed that new superheavy elements could be produced using specific isotopes of transition metals, such as titanium, vanadium, and chromium. These studies largely agreed that titanium-50 has the highest reaction cross-section with actinide elements, giving it the best chance of producing elements heavier than oganesson.

However, there is significant uncertainty surrounding the nuclear mechanisms involved in these reactions, which has hindered experimental efforts so far.

Theoretical decrease

“Based on theoretical predictions, we expected the production rate of superheavy elements to decrease when beams beyond calcium-48 were used to bombard actinide targets,” Gates explains. “However, we were unsure about the extent of this decrease and what it would mean for producing elements beyond oganesson.”

To address this uncertainty, Gates’ team implemented a reaction that has been explored in several theoretical studies – by firing a titanium-50 beam at a target of plutonium-244. Based on the nuclear mechanisms involved, this reaction has been predicted to produce the superheavy element livermorium, which has an atomic number of 116.
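
The element produced follows from simple nucleon bookkeeping, sketched in Python below: the proton numbers of plutonium (94) and titanium (22) add up to 116, which is livermorium. The four-neutron evaporation channel assumed here is inferred from the livermorium-290 decay chains reported further on, not stated in this paragraph.

```python
# Simple proton/nucleon bookkeeping for the reaction described above:
# a titanium-50 beam on a plutonium-244 target. The 4-neutron evaporation channel
# is an assumption consistent with the livermorium-290 product reported by the team.
Z_Pu, A_Pu = 94, 244   # plutonium-244 target
Z_Ti, A_Ti = 22, 50    # titanium-50 beam

Z_compound = Z_Pu + Z_Ti           # 116 -> livermorium
A_compound = A_Pu + A_Ti           # 294 (compound nucleus)
evaporated_neutrons = 4            # assumed evaporation channel
A_product = A_compound - evaporated_neutrons

print(f"Z = {Z_compound}, A = {A_product}")   # Z = 116, A = 290 -> livermorium-290
```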

To create the titanium-50 beam, the researchers used LBNL’s VENUS ion source. This uses a superconducting magnet to contain a plasma of highly ionized titanium-50. They then accelerated the ions using LBNL’s 88-Inch Cyclotron facility. After the reaction, the Berkeley Gas-filled Separator isolated livermorium nuclei from other reaction products. This allowed the team to measure the chain of products created as the nuclei decayed.

Altogether, the team detected two decay paths that could be attributed to livermorium-290. This is especially significant because the isotope is thought to lie tantalizingly close to an “island of stability” in the chart of the nuclides. This comprises a group of superheavy nuclei that physicists predict are highly resistant to decay through spontaneous fission. This gives these nuclei vastly longer half-lives compared with lighter isotopes of the same elements.

If the island is reached, it could be a crucial stepping stone for synthesizing new elements beyond oganesson. For now, Gates’ team is hopeful its result could pave the way for a new wave of experiments and plans to use its titanium-50 beam to bombard a heavier target of californium-249. If these experiments see similar levels of success, they could be a crucial next step toward discovering even heavier superheavy elements.

The research is described in a preprint on arXiv.


Icy exoplanet found to be potentially habitable


A research team headed up at the University of Montreal has discovered that the temperate exoplanet LHS 1140 b may have an atmosphere, could be covered in ice, and may even have an ocean of liquid water. If confirmed, this would make it only the third known planet in its host star’s habitable zone to have an atmosphere, after Earth and Mars.

Profound implications

LHS 1140 b, discovered in 2017, is a highly studied exoplanet with observations obtained by several telescopes, including the Transiting Exoplanet Survey Satellite (TESS), the Hubble Space Telescope, the Spitzer Space Telescope and the ESPRESSO spectrograph on the ESO/Very Large Telescope.

Earlier this year, the Montreal-led team reanalysed existing observations to update and refine a range of parameters, including the planet’s radius and mass. The researchers found that its density is inconsistent with a purely Earth-like rocky interior, suggesting the presence of a hydrogen envelope or a water layer atop a rocky, metal-rich core.

They then embarked on a further study of the nature of the exoplanet with the NIRISS (near-infrared imager and slitless spectrograph) instrument on the James Webb Space Telescope (JWST), aiming to distinguish between the “mini-Neptune” or “water world” scenarios.

Presenting the results in The Astrophysical Journal Letters, the astronomers describe how they studied the atmosphere of LHS 1140 b using the transmission spectroscopy technique, which involves observing a planet as it transits in front of its host star.

“Our findings indicate that LHS 1140 b’s atmosphere is not dominated by hydrogen, with the most likely scenario being a nitrogen-rich atmosphere consistent with the water world hypothesis,” says lead author Charles Cadieux, a PhD student at the University of Montreal’s Trottier Institute for Research on Exoplanets, supervised by René Doyon.

Charles Cadieux: “The most likely scenario is a nitrogen-rich atmosphere consistent with the water world hypothesis.” (Courtesy: C Cadieux)

“By collecting this light at different wavelengths using a spectrograph – in this case, the NIRISS instrument on JWST – we can infer the atmospheric composition. Molecules such as H2O, CH4 [methane], CO, CO2 and NH3 [ammonia] all absorb light at specific wavelengths, allowing us to identify their presence, or absence,” Cadieux explains.
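
A toy calculation illustrates why such absorption is detectable at all. The Python sketch below uses approximate literature values for the radii of LHS 1140 b and its host star purely for scale, together with an entirely schematic absorption band and scale height (all assumptions, not the team's analysis), to show how a molecular band deepens the transit by a few tens of parts per million.

```python
# Toy illustration (not the team's analysis): a transit looks slightly deeper at
# wavelengths where a molecule absorbs, because the effective planet radius grows
# by a few atmospheric scale heights. Radii are approximate literature values used
# only for scale; the "absorption band" and scale height are schematic assumptions.
import numpy as np

R_earth = 6.371e6     # m
R_sun = 6.957e8       # m

R_star = 0.21 * R_sun           # approximate radius of the M dwarf LHS 1140
R_planet = 1.7 * R_earth        # approximate radius of LHS 1140 b
H = 15e3                        # assumed atmospheric scale height (m)

wavelengths = np.linspace(1.0, 2.5, 6)                 # microns (schematic grid)
in_band = (wavelengths > 1.25) & (wavelengths < 1.75)  # pretend water band near 1.4 um
extra_height = np.where(in_band, 3 * H, 0.0)           # ~3 scale heights of extra opacity

depth = ((R_planet + extra_height) / R_star) ** 2
for wl, d in zip(wavelengths, depth):
    print(f"{wl:.2f} um: transit depth = {d*1e6:.0f} ppm")
# The depth rises by a few tens of ppm inside the band: the tiny, wavelength-dependent
# signal that transmission spectroscopy with JWST looks for.
```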

“Looking at the whole exoplanet population, no atmosphere on a rocky, terrestrial exoplanet has yet been detected to date, this is hard. Our tentative result of a nitrogen-rich atmosphere on LHS 1140 b, if firmly confirmed by additional observations, would be the first such detection,” he adds.

Cadieux notes that this discovery would place LHS 1140 b as only the third known planet with an atmosphere in the habitable zone of its host star, alongside Earth and Mars, confirming that the implications for future research are “profound” and “would provide a target for studying the habitability and the potential for life to exist on rocky, water-rich exoplanets around low-mass stars”.

Super-Earth

According to Cadieux, the next step is to repeat the observations with JWST to confirm the tentative detection of a nitrogen-rich atmosphere. While the current observations use the NIRISS instrument, the team also plans to use the NIRSpec (near infrared spectrograph) instrument on JWST, which extends further into the infrared and can probe the CO2 content of the atmosphere.

Understanding the CO2 content is crucial, as CO2’s greenhouse effect controls surface temperature and the potential size of a liquid water ocean on LHS 1140 b. Cadieux notes that clear detection of CO2 will require two to three years of observations with JWST and should provide definitive proof that LHS 1140 b is a super-Earth with a significant water reservoir.

One key challenge, besides securing JWST observation time, will be addressing stellar contamination in the transmission data. “Since LHS 1140 b orbits a smaller and cooler M-type star, stellar spots on the star’s surface can form molecules like water, which can be misinterpreted as a planetary signal,” Cadieux explains. “Even with additional data in the future, we must carefully correct for stellar contamination to ensure accurate results.”


Spins hop between quantum dots in new quantum processor


A quantum computing system that uses the spins of holes in germanium quantum dots has been unveiled by researchers in the Netherlands. Their platform is based on a 1998 proposal by two pioneers of quantum computation theory and could offer significant advantages over today’s technologies.

Today, the leading platforms for quantum bits (qubits) are superconducting quantum circuits and trapped ions, with neutral atoms trailing slightly behind. However, all of these qubits are difficult to scale up and integrate to create practical quantum computers. For example, IBM’s Condor – perhaps today’s most powerful quantum computer – uses 1121 superconducting qubits. These must all be kept at millikelvin temperatures, and as the number of qubits grows towards the tens of thousands or even millions, the energy cost and engineering challenges of keeping processors cold become daunting. Other platforms present other scaling challenges that are as significant.

Quantum dot qubits were proposed in 1998 by Daniel Loss of the University of Basel and David DiVincenzo, then at IBM. The qubit state is defined by the quantum state of a single charge on a semiconductor quantum dot, and shuttling the charge between quantum dots allows quantum gate operations to be performed. These systems would need less cooling, and they could potentially be fabricated in semiconductor foundries.

Quantum well

Menno Veldhorst of QuTech in the Netherlands, who co-led this latest research, describes the architecture of the quantum dots. “First, we have a semiconductor heterostructure,” he explains, “This is layers of silicon, silicon–germanium and germanium in our case. And then we have a 2D sheet of germanium, which is the quantum well that we’re interested in. We can confine charges in that 2D sheet, which is maybe 60 nm thick. And then we have electric gates on top so that, if we apply a voltage to those gates, we can define a potential well in which single charges can be trapped.”

In principle, this is a very attractive design, creating a transistor that is a qubit. In practice, however, it has been very difficult to achieve. Loss and DiVincenzo originally envisaged a series of adjacent quantum dots, all subject to different magnetic fields.

“If you have two quantum dots and you put a charge in one of them it will go to its ground state,” says Veldhorst. “If I want to do a qubit operation, say rotating the spin, I can flip it to another quantum dot. But in the other quantum dot, the ground state will be pointing in a different direction. What happens as a result is that the qubit starts to make oscillations as it precesses around this new quantization axis. If I wait a certain amount of time and then pop back to the original dot, I may have flipped the spin state completely.” This procedure would allow arbitrary qubit rotations to be performed with simple applied voltages.
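
The hopping-based rotation Veldhorst describes can be captured in a few lines of quantum mechanics. The Python sketch below evolves a spin, initially aligned with the first dot's quantization axis, about a second axis tilted by an assumed angle and at an assumed precession frequency (both values are arbitrary illustrations, not the QuTech device parameters), and shows how the flip probability depends on the waiting time.

```python
# Schematic illustration (arbitrary numbers, not the QuTech device parameters):
# a spin prepared "up" along the quantization axis of dot 1 hops to dot 2, whose
# quantization axis is tilted by an angle theta. It precesses about the new axis,
# so the probability of finding it flipped when it hops back depends on how long
# it waited: the hopping-based single-qubit rotation described above.
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = np.deg2rad(30)          # assumed tilt of dot 2's quantization axis
omega = 2 * np.pi * 100e6       # assumed precession (Larmor) frequency, 100 MHz
n = np.array([np.sin(theta), 0.0, np.cos(theta)])   # quantization axis of dot 2
n_dot_sigma = n[0] * sx + n[1] * sy + n[2] * sz

up = np.array([1, 0], dtype=complex)    # spin along dot 1's axis (z)
down = np.array([0, 1], dtype=complex)

for t in np.linspace(0, 10e-9, 6):      # waiting times up to 10 ns
    # Precession about n: U = cos(wt/2) I - i sin(wt/2) (n . sigma)
    U = np.cos(omega * t / 2) * np.eye(2) - 1j * np.sin(omega * t / 2) * n_dot_sigma
    p_flip = abs(down.conj() @ U @ up) ** 2
    # Analytic check: P_flip = sin^2(theta) * sin^2(wt/2)
    p_analytic = np.sin(theta) ** 2 * np.sin(omega * t / 2) ** 2
    print(f"t = {t*1e9:4.1f} ns: P(flip) = {p_flip:.3f} (analytic {p_analytic:.3f})")
```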

The challenge is creating different magnetic fields on adjacent quantum dots. For the past two decades groups such as Veldhorst’s have applied high-frequency oscillating external magnetic fields to manipulate the spins of electrons, usually in silicon, as they move between quantum dots. These fields increase the power requirements and add noise.

Spin-orbit interaction

In the new work Veldhorst and colleagues looked at Loss’s idea of using holes – quasiparticles created by the absence of electrons – instead of electrons. Like electrons, holes have an energetic coupling to an external magnetic field, giving rise to two distinct energy levels that can create a qubit. However, holes have a much stronger spin-orbit interaction, in which the momentum of the hole is coupled to its spin. Furthermore, this interaction varies between quantum dots.

The team used holes in their germanium quantum dots. By simply applying a static magnetic field of 40 mT and megahertz-frequency electric fields, the researchers were able to execute quantum logic with single-qubit gate fidelities of 99.97% and two-qubit gate fidelities of 99.3%. Their principal problem today is that they cannot control the anisotropy of the quantum dots. Veldhorst explains that researchers normally try to make quantum dots perfect circles, and their next step will be to make them slightly elliptical to introduce this anisotropy. “This is another thing that, on paper, sounds nice, but whether it will be the case we don’t know,” he says; “But I think we’ve opened a new direction of research with a lot of possibilities.”

Theorist Edwin Barnes of Virginia Tech in the US believes the work is “a pretty major advance in this area of quantum computing. In the past 20 years people have focused on applying microwaves to perform operations on the qubits, rather than doing this hopping which was in Loss and DiVincenzo’s original paper. In this recent [work] they’re showing that this hopping idea works, and that it seems to work extremely well.” He believes the stochastic variation of the magnetization between quantum dots should not prove insurmountable, as “it just means you just have to spend longer characterizing your device before you start using it”. The next step, he believes, is scaling up.

The research is described in Science and Nature Communications.


Tuberculosis-specific PET tracer could enable more effective treatment


In 2022, more than 10.5 million people fell ill with tuberculosis (TB) and an estimated 1.3 million people died from this curable and preventable disease. TB is currently diagnosed using clinical evaluations and lab tests. While spit (sputum) tests and chest X-rays are common, a physician may also recommend a PET scan.

Current PET scans use radiotracers that could indicate TB or other diseases. A PET scan that targets TB bacteria specifically could enable more effective treatment, say researchers at the Universities of Oxford and Pittsburgh, the Rosalind Franklin Institute and the NIH.

“TB normally requires quite long treatment regimens of many months, and it can be tricky for patients to stay enrolled in these,” explains Benjamin Davis, science director for next generation chemistry at the Rosalind Franklin Institute and a professor of chemical biology at the University of Oxford. “The ability now to readily track the disease and convince a patient to stay engaged and/or for the physician to know that treatment is complete we think could prove very valuable.”

Davis and his team have developed a radiotracer, called FDT (2-[18F]fluoro-2-deoxytrehalose), that is taken up by live tuberculosis bacteria. FDT-PET scans show signal where tuberculosis bacteria are active in a patient’s lungs and measure the metabolic activity of the bacteria, which should decrease as a patient receives treatment.

“We came up with a way of designing a selective sugar for tuberculosis by understanding the enzymology of the cell wall of TB,” Davis says. “Specifically, we target enzymes in the cell wall that modify FDT, so that it effectively embeds itself selectively into the bacterium. If you like, this is a sort of self-painting mechanism where we get the pathogen to use its own enzymes to modify FDT and so ‘paint itself’ with our radiotracer.”

The researchers have characterized FDT and tested the radiotracer in preclinical trials in rabbits and non-human primates. Phase I clinical trials in humans are set to begin in the next year or so. “We are in the process of identifying partners and sites as well as addressing the differing modes of trial registration in corresponding countries,” says Davis.

Another benefit of FDT is that it can be produced from FDG – a common PET radiotracer – without specialist expertise. FDT could thus be a viable option in low- and middle-income countries with less developed healthcare systems, the researchers say, though it does require that a hospital have a PET scanner.

Writing in a press release, Clifton Barry III from the National Institute of Allergy and Infectious Diseases at the NIH, says: “FDT will enable us to assess in real time whether the TB bacteria remain viable in patients who are receiving treatment, rather than having to wait to see whether or not they relapse with active disease. This means FDT could add significant value to clinical trials of new drugs, transforming the way they are tested for use in the clinic.”

The research is published in Nature Communications.


X(3960) is a tetraquark, theoretical analysis suggests


A theoretical study has confirmed that a particle observed at CERN’s LHCb experiment in 2022 is indeed a tetraquark – supporting earlier hypotheses that were based on the analysis of its observed decay products. Tetraquarks comprise four quarks and do not fit into the conventional classification of hadrons, which defines only mesons (a quark and an antiquark) and baryons (three quarks). Tetraquarks are of great interest to particle physicists because their exotic nature provides opportunities to deepen our understanding of the intricate physics of the strong interactions that bind quarks together in hadrons.

“X(3960) is a new hadron discovered at the Large Hadron Collider (LHC),” Bing-Dong Wan of Liaoning Normal University and Hangzhou Institute for Advanced Study, and the author of the study, tells Physics World. “Since 2003, many new hadrons have been discovered in experiments, and some of them appear to be tetraquarks, while only a few can be confirmed as such.”

Named for its mass of 3.96 GeV – about four times that of a proton – X(3960) stands out, even amongst exotic hadrons. Its decay into D mesons containing heavy charm quarks implies that X(3960) should contain charm quarks. The details of the interaction of charm quarks with other strongly interacting particles are rather poorly understood, making X(3960) interesting to study. Additionally, by the standards of unstable strongly interacting particles, X(3960) has a long lifetime – around 10⁻²³ s – indicating unique underlying quark dynamics.

These intriguing properties of X(3960) led Wan to investigate its structure theoretically to determine if it is a tetraquark or not. In a recent paper in Nuclear Physics B, he describes how he used Shifman-Vainshtein-Zakharov sum rules in his calculations. This approach examines strongly interacting particles by relating their properties to those of their constituent quarks and the gluons that bind them together. The dynamics of these constituents can be accurately described by the fundamental theory of strong interactions known as quantum chromodynamics (QCD).

Wan assumed that the X(3960) is composed of a strange quark, a charm quark and their antiparticles. Using the sum rules, he derived its mass and lifetime and compared these parameters with the observed values.

Mathematical machinery

Using the mathematical machinery of QCD and extensive numerical simulations, he found that the mass of the tetraquark he formulated is 3.98 ± 0.06 GeV. This is a close match to the measured mass of X(3960) at 3.956 ± 0.005 GeV. This confirms that X(3960) comprises a strange quark, a charm quark and their antiparticles. Furthermore, Wan was able to compute the lifetime of his model particle to be (1.389 ± 0.889) × 10⁻²³ s, which aligns well with the observed value of (1.53 +0.41/−0.26) × 10⁻²³ s, further validating his identification.
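
A rough way to gauge the agreement is to express the differences in units of the combined uncertainties. The short Python check below does this for the numbers quoted above, symmetrizing the asymmetric lifetime error; it is a back-of-the-envelope estimate, not part of Wan's analysis.

```python
# Rough consistency check of the numbers quoted above (not part of Wan's analysis):
# how many combined standard deviations separate prediction and measurement,
# adding uncertainties in quadrature and treating the asymmetric lifetime error
# as roughly symmetric.
import math

def pull(pred, sig_pred, meas, sig_meas):
    return abs(pred - meas) / math.hypot(sig_pred, sig_meas)

# Mass (GeV)
print(f"mass pull:     {pull(3.98, 0.06, 3.956, 0.005):.2f} sigma")
# Lifetime (units of 1e-23 s); measured error +0.41/-0.26 taken as ~0.4
print(f"lifetime pull: {pull(1.389, 0.889, 1.53, 0.4):.2f} sigma")
# Both differ by well under one standard deviation.
```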

While Wan’s work strongly supports the hypothesis that X(3960) is a charm–strange tetraquark, he acknowledges that it is not conclusive proof. In the subatomic world, particles can transform into others, and the match between the quark composition of the tetraquark he studied and the decay products of X(3960) is not enough. Indeed, in principle, X(3960) could be even better described by some other quark composition.

“There are many possible structures for tetraquarks, and my work finds that one possible structure can explain the properties of X(3960),” says Wan. “But some other researchers may be able to explain the properties of X(3960) using different quark structures.”

To further validate his approach, Wan applied the sum rule technique to a particle similar to X(3960), called X(4140), previously discovered at the Tevatron collider. His calculations yielded mass and lifetime values very close to the measured ones, further confirming his method’s accuracy.

However, to definitively determine the structure of X(3960), further theoretical and experimental studies are needed. Analysing a larger number of decay events will help reduce measurement errors. On the theoretical side, using the sum rules or other QCD techniques to more accurately analyse these parameters will help reduce computational uncertainties.

“Studying new hadrons may greatly enrich the hadron family and our knowledge of the nature of strong interactions,” Wan concludes. “It is highly expected that we are now at the dawn of enormous discoveries of novel hadronic structures, implying a renaissance in hadron physics.”


US plasma physicists propose construction of a ‘flexible’ stellarator facility


A group of 24 plasma physicists has called for the construction of a stellarator fusion facility in the US. The so-called Flexible Stellarator Physics Facility would test different approaches to stellarator confinement and whether some of the designs could be scaled up to a fusion plant.

Tokamak and stellarator fusion devices both emerged in the early 1950s. Both use magnetic fields to confine the plasma, but they differ in how the twisted confining field is produced. In a tokamak, part of the field is generated by a large electric current driven through the plasma itself, while a stellarator produces its helical magnetic field entirely with external coils.

Those different geometries give each approach a specific advantage. Tokamaks maintain the plasma temperature more effectively while stellarators do a better job of ensuring the plasma’s stability.

The ITER fusion reactor, currently being built in Cadarache, France, is the largest and most ambitious of the roughly 60 tokamak experiments worldwide. Yet there are only a handful of stellarators operational, the most notable being Germany’s Wendelstein 7-X device, which switched on in 2015 and has since achieved significant experimental advances.

The authors of the white paper write that delivering the “ambitious” US decadal strategy for commercial fusion energy, which was released in 2022, will require “a persuasive” stellarator programme in addition to supporting tokamak advances.

Tokamaks and stellarators “are very close relatives, with many aspects in common,” says Felix Parra Diaz, who is the lead author of the white paper, “physics discoveries that benefit one are usually of interest to the other.”

Yet Parra Diaz, who is head of theory at the Princeton Plasma Physics Laboratory and carries out research on both tokamaks and stellarators, told Physics World that recent advances, especially at Wendelstein 7-X, are propelling the stellarator device as the best route to a fusion power plant.

“Stellarators were widely considered to be difficult to build due to their complex magnets,” says Parra Diaz. “We now think that it is possible to design stellarators with similar or even better confinement than tokamaks. We also believe that it is possible to construct these devices at a reasonable cost due to new magnet designs.”

Multi-stage process

The white paper calls on the US to build a “flexible facility” that would test the validity of theoretical models that suggest where stellarator confinement can be improved and also where it fails.

The design will focus on “scientific gaps” on the path to stellarator fusion. One particular target is the demonstration of “quasi-symmetry” magnetic configurations, which the paper describes as “the most promising strategy to minimize both neoclassical losses and energetic particle transport.”

The authors of the white paper propose a two-stage approach to the new facility. The first stage would involve exploring a range of flexible magnetic configurations while the second would involve upgrading the heating and power systems to further investigate some of the promising configurations from the first stage.

“It will also serve as a testbed for methods to control how the hot fusion plasma interacts with the walls of stellarator pilot plants,” adds Parra Diaz, who says that designing and building such a device could take between six and nine years depending on “the level of funding”.

The move by the group comes as significant delays push back the international ITER fusion reactor’s switch-on to 2034, almost a decade later than the previous “baseline”.

At the same time alternative tokamak technologies continue to emerge from commercial fusion firms. Tokamak Energy of Abingdon, Oxfordshire, for example, is developing a spherical tokamak design that, the company claims, “is more efficient than the traditional ring donut shape.”


Gianluigi Botton: maintaining the Diamond synchrotron’s cutting edge


What is the Diamond Light Source?

The Diamond Light Source, which opened in 2007, is a 3 GeV synchrotron that provides intense beams of light that are used by researchers to study a wide range of materials. Diamond serves a user community of around 14,000 scientists working across all manner of fundamental and applied disciplines – from clean-energy technologies to pharma and healthcare; from food science to structural biology and cultural heritage.

And now you are planning a major upgrade, Diamond-II – what does that involve?

Diamond-II will consolidate our position as a world-leading facility, ensuring that we continue to attract the best researchers working in the physical sciences, life sciences and industrial R&D. At £519m, it’s an ambitious programme that will add three new beamlines – taking the total to some 35 – along with a comprehensive series of upgrades to the optics, detectors, sample environments, sample-delivery systems and computing resources across Diamond’s existing beamlines. Users will also benefit from new automation tools to enhance our beamline instrumentation and downstream data analysis.

What is the current status of Diamond-II?

Right now, we are in the planning and design phase of the project, although initial calls for proposals and specifications for core platform technologies have been put out to tender with industry suppliers. We will shut down the synchrotron in December 2027, with the bulk of the upgrade activity completed in summer 2029. From there, we will slowly ramp back up to fully operational by mid-2030, although some beamlines will be back online sooner.

What roles are other advanced light sources playing in Diamond-II?

Even though synchrotron facilities are effectively in competition with each other – to host the best scientists and to enable the best science – what always impresses me is that collaboration and partnership are hard-wired into our community model. At a very basic level, this enables phased scheduling of the Diamond-II upgrade in co-ordination with other large-scale facilities – mainly to avoid several light sources going dark simultaneously.

How is this achieved?

Diamond’s Machine Advisory Committee – which comprises technical experts from other synchrotron facilities – plays an important networking role in this regard, while also providing external challenge, sense-check and guidance when it comes to review and iteration of our technical ambitions. In the same way, we have engaged extensively with our user community – comprising some 14,000 scientists – over the past decade to ensure that the users’ priorities underpin the Diamond-II science case and, ultimately, that we deliver a next-generation facility with analytical instruments that meet their future research needs.

You’ve been chief executive officer of Diamond since October 2023. What does your typical day look like?

Every day is different at Diamond – always exciting, sometimes exhausting but never dull. At the outset, my number-one priority was to engage broadly with our key stakeholders – staff teams, the user community and the funding agencies – to build a picture of how things work and fit together. That engagement is ongoing, with a large part of the working day spent in meetings with division directors, senior managers and project leaders from across the organization. The task is to figure out what’s working, what isn’t and then quickly address any blockers.

I want to hear what our people really think; not what they think I want to hear

How do you approach this?

Alongside those formal meetings, I try to be visible and available whenever possible. Ad hoc visits to the beamlines and the control room, for example, mean that I can meet our scientists, technicians and support staff. It’s important for staff to have the opportunity to talk candidly and unfiltered with the chief executive officer so that I can understand the everyday issues arising. I want to hear what our people really think; not what they think I want to hear.

How do you attract and retain a diverse workforce?

Diamond is a highly competitive research facility and, by extension, we are able to recruit on a global basis. Diversity is our strength: the best talent, a range of backgrounds, plus in-depth scientific, technical and engineering experience. Ultimately, what excites many of our scientists and engineers is the opportunity to work at the cutting edge of synchrotron science, collaborating with external users and translating their research objectives into realistic experiments that will deliver results on the Diamond beamlines. One of my priorities as chief executive officer is to nurture and tap into that excitement, creating a research environment where all our people feel valued and can see how their individual contribution is making a difference.

How is Diamond optimizing its engagement with industry?

Industry users account for around 5% of beamtime at Diamond – and we co-ordinate that effort on multiple levels. To provide strategic direction, there’s the Diamond Industrial Science Committee, with senior scientists drawn from a range of industries advising on long-term applied research requirements. At an operational level, we have the Industrial Liaison Office, a multidisciplinary team of in-house scientists who work closely with industrial researchers to address R&D problems across diverse applications – from drug discovery and catalysis to aerospace and automotive.

A brilliant place The Diamond Light Source, the UK’s national synchrotron research facility, is located at the Harwell Science and Innovation Campus in Oxfordshire. (Courtesy: Diamond Light Source)

What about equipment manufacturers?

Our scientists and engineers also maintain ongoing collaborations with equipment manufacturers – in many cases, co-developing custom technologies and instrumentation to support our infrastructure and research capability. Those relationships are a win-win, with Diamond’s leading-edge requirements often shaping manufacturers’ broader product development roadmaps.   

Has Brexit had any impact on Diamond?

While Diamond’s relationship with Europe’s big-science community took a hit in the aftermath of Brexit, we are proactively rebuilding those bridges. Front-and-centre in this effort is our engagement in the League of European Accelerator-based Photon Sources (LEAPS), a strategic consortium initiated by the directors of Europe’s synchrotron and free-electron laser facilities. Working together, LEAPS provides a unified voice and advocacy for big science – engaging with funders at national and European level – to ensure that our scientists feel more valued, not only in terms of career pathways and progression, but also financial remuneration.

Is the future bright for synchrotron science?

We need big science to tackle humanity’s biggest challenges in areas such as health, medicine, energy, agriculture and sustainability. These grand challenges are a team effort, so the future is all about collaboration and co-ordination – not just between Europe’s advanced light sources, but other large-scale research facilities as well. To this end, Diamond has been, and remains, a catalyst in bringing together the global light sources community through the work of Lightsources.org.


Sun-like stars seen orbiting hidden neutron stars


Astronomers have found strong evidence that 21 Sun-like stars orbit neutron stars without losing any mass to their binary companions. Led by Kareem El-Badry at the California Institute of Technology, the international team spotted the binary systems in data taken by ESA’s Gaia satellite. The research offers new insights into how binary systems evolve after massive stars explode as supernovae. And like many scientific discoveries, the observations raise new questions for astronomers.

Neutron stars are created when massive stars reach the end of their lives and explode in dramatic supernovae, leaving behind dark and dense cores. So far, over 99% of the neutron stars discovered in the Milky Way have been solitary – but in some rare cases, they do exist in binary systems with Sun-like companion stars.

In every one of these previously discovered systems, the neutron star’s powerful gravitation field is ripping gas from its companion star. The gas is heated to extreme temperatures as it accretes onto the neutron star, causing it to shine with X-rays or other radiation.

No accretion

However, as El-Badry explains, “it has long been expected that there should be similar binaries of neutron stars and normal stars in which there is no accretion. Such binaries are harder to find because they produce no X-rays.”

Seeking these more elusive binaries, El-Badry’s team scoured data from ESA’s Gaia space observatory, which measures the positions, distances, and motions of stars with high precision.

The astronomers looked for Sun-like stars that “wobbled” in the sky as they orbited invisible companions. By measuring the wobble, they could then calculate the size and period of the orbit as well as the masses of both objects in the binary system.
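
The final step, turning an orbit into masses, is essentially Kepler's third law. The Python sketch below works through a simplified version with invented example numbers (a hypothetical period, relative-orbit size and Sun-like primary mass), ignoring the subtleties of converting a photocentre wobble into the full relative orbit.

```python
# Simplified sketch of the last step described above (invented example numbers):
# once astrometry yields the period P and the semi-major axis a of the relative
# orbit, Kepler's third law gives the total mass of the binary; subtracting the
# Sun-like star's mass (known from its spectrum) leaves the unseen companion's mass.
import math

G = 6.674e-11          # gravitational constant (SI)
AU = 1.496e11          # m
YEAR = 3.156e7         # s
M_SUN = 1.989e30       # kg

a = 2.1 * AU           # hypothetical semi-major axis of the relative orbit
P = 2.0 * YEAR         # hypothetical orbital period
M_star = 1.0           # assumed mass of the visible Sun-like star (solar masses)

M_total = 4 * math.pi**2 * a**3 / (G * P**2) / M_SUN
M_companion = M_total - M_star
print(f"total mass ~ {M_total:.2f} M_sun, dark companion ~ {M_companion:.2f} M_sun")
# ~1.3 M_sun for these example numbers, in the mass range expected for a neutron star.
```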

El-Badry explains that Gaia is best at discovering binaries with widely separated orbits. “Gaia monitors more than a billion stars, giving us good chances of finding even very rare objects.”

Gravitational influence

In total, Gaia’s data revealed 21 cases where Sun-like, main sequence stars appear to be orbiting around unseen neutron star companions, without losing any material. If this interpretation is correct, it would be the first time that neutron stars have been discovered purely as a result of their gravitational influence.

The researchers predict that these systems are likely to evolve in the future when their Sun-like stars approach the end of their lives. “When the [Sun-like] stars evolve and become red giants, they will expand and begin transferring mass to the neutron stars, so these systems are progenitors of X-ray binaries,” El-Badry says.

On top of this, the sizes of the orbits observed by the team could provide clues about the magnitudes of the supernovae that formed the neutron stars. “The wide orbits of the binaries in our sample would get unbound if the neutron stars had received significant kicks during the supernovae from which they are born,” El-Badry explains. “These objects imply that some neutron stars form with weak kicks.”

Could be white dwarfs

The team’s results throw up some important questions about the nature of these binaries and how they formed. For now, it remains possible that the unseen companions could be white dwarfs. These are the remnants of relatively small stars like the Sun – stars that have exhausted their nuclear fuel and then fade out, rather than exploding to form neutron stars.

If a companion is indeed a neutron star, its progenitor star would have experienced a red supergiant phase before going supernova. This would have created an envelope of gas large enough to affect the binary system. It is not clear why the two stars would not have drawn much closer together or even merged during this phase. Later, when the larger star exploded, it is not clear why the two objects did not go their separate ways.

El-Badry’s team hope that future studies of Gaia data could answer these challenging questions and explain how these curious binary systems form and evolve.

The observations are described in The Open Journal of Astrophysics.


Cosmic-ray physics: detector advances open up the ultrahigh-energy frontier


Physics at the extremes provides the raison d’être for the JEM-EUSO research collaboration – or, in long form, the Joint Exploratory Missions for Extreme Universe Space Observatory. Over the past 20 years or so, more than 300 scientists from 16 countries have been working collectively towards the JEM-EUSO end-game: the realization of a space-based, super-wide-field telescope that will scan the night sky and enable astrophysicists to understand the origin and nature of ultrahigh-energy cosmic rays (upwards of 5 × 10¹⁹ eV). In other words, JEM-EUSO promises to open a unique window into the Universe at energy regimes far beyond the current generation of man-made particle accelerators.

Looking at the sky from above

For context, cosmic rays are extraterrestrial particles comprising hydrogen nuclei (around 90% of the total) and helium nuclei (roughly 9%), with the remainder made up of heavier nuclei and electrons. Their energy range varies from about 10⁹ to 10²⁰ eV and beyond, while their flux is similarly spread across many orders of magnitude – ranging from 1 particle/m² per second at low energies (around 10¹¹ eV) out to roughly 1 particle/km² per century at extreme energies (around 10²⁰ eV).
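
A little arithmetic on these flux figures shows why the highest energies demand enormous collecting areas. The Python sketch below uses only the two numbers quoted above, plus round illustrative monitoring areas that are assumptions rather than JEM-EUSO specifications.

```python
# Quick arithmetic on the flux figures quoted above (round illustrative numbers):
# how much area must be watched, and for how long, to catch ultrahigh-energy events.
SECONDS_PER_YEAR = 3.156e7

flux_low = 1.0                      # particles per m^2 per second near 1e11 eV
flux_uhe = 1.0 / 100.0              # particles per km^2 per year at ~1e20 eV
                                    # (i.e. 1 per km^2 per century)

# A 1 m^2 detector at low energies:
print(f"low energy:  ~{flux_low * SECONDS_PER_YEAR:.1e} particles/m^2 per year")

# At the highest energies, events per year scale with the monitored area:
for area_km2 in (1, 1_000, 100_000):    # assumed illustrative monitoring areas
    print(f"~1e20 eV: {flux_uhe * area_km2:g} events/yr over {area_km2:,} km^2")
# Only by monitoring enormous areas of atmosphere (the goal of a space-based,
# wide-field telescope such as JEM-EUSO) do event rates become useful.
```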

While it’s possible to detect cosmic rays directly at low-to-intermediate energies (up to 10¹⁵ eV), the flux of particles is so low at higher energies that indirect detection is necessary. In short, that means observing the interaction of cosmic rays (as well as neutrino decays) with the outer layers of the atmosphere, where they produce cascades of subatomic particles known as “extensive air showers” (or secondary cosmic rays).

With this in mind, the JEM-EUSO collaboration has rolled out an ambitious R&D programme over the past decade, with a series of pathfinder experiments geared towards technology validation ahead of future space-based missions (aboard orbiting satellites) to observe cosmic rays at their highest energies. Projects to date include ground-based installations like the EUSO-TA (deployed at the Telescope Array site in Utah, US); various stratospheric balloons (the most recent of which is EUSO-SPB2); and MINI-EUSO (Multiwavelength Imaging New Instrument for the Extreme Universe Space Observatory), a telescope that’s been observing the Earth from inside the International Space Station (ISS) since 2019.

Proof of principle The JEM-EUSO collaboration has rolled out an ambitious programme of pathfinder experiments over the past decade. Above: Marco Casolino (far right), co-principal investigator on JEM-EUSO, and members of the Mini-EUSO development team with their assembled detector module. (Courtesy: JEM-EUSO)

All of these experiments operate at night and require clear weather conditions, surveying regions of the sky with low-artificial-light backgrounds. In each case, the instruments in question monitor Earth’s atmosphere by measuring the fluorescence emissions and Cherenkov light produced by extensive air showers. The fluorescence originates from the relaxation of nitrogen molecules excited by their interaction with charged particles in the air showers, while ultrahigh-energy particles travelling faster than the speed of light in air create a blue flash of Cherenkov light (like the sonic boom created by an aircraft exceeding the speed of sound).

Operationally, because those two light components exhibit different durations – of the order of microseconds for fluorescence light; a few nanoseconds for Cherenkov light – they require dedicated detectors and acquisition electronics: multi-anode photomultiplier tubes (MAPMTs) for fluorescence detection and silicon photomultipliers (SiPMs) for the Cherenkov detectors.
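
As a toy illustration of why those two light components call for different readout, the sketch below compares how many samples a digitizer of a given sampling period would collect across a microsecond-scale fluorescence pulse versus a nanosecond-scale Cherenkov flash. The durations follow the text; the sampling periods are purely illustrative assumptions, not the collaboration’s actual electronics parameters.

```python
# Toy comparison of signal durations versus readout sampling period.
# Durations follow the text (microseconds for fluorescence, a few
# nanoseconds for Cherenkov light); the sampling periods are assumptions.

signals = {
    "fluorescence (N2 de-excitation)": 2e-6,  # ~2 microseconds, illustrative
    "Cherenkov flash": 3e-9,                  # ~3 nanoseconds, illustrative
}

readouts = {
    "slow readout, 2.5 us sampling": 2.5e-6,
    "fast readout, 1 ns sampling": 1e-9,
}

for sig_name, duration in signals.items():
    for readout_name, dt in readouts.items():
        samples = duration / dt
        print(f"{sig_name} with {readout_name}: ~{samples:.1f} samples per pulse")
```

A nanosecond flash barely registers a fraction of a sample on the slow readout, which is why the Cherenkov channel needs its own fast detectors and acquisition chain.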

The win-win of technology partnership

So where do things stand with JEM-EUSO’s implementation of current- and next-generation detectors? Among the programme’s core technology partners in this regard is Hamamatsu Photonics, a Japanese optoelectronics manufacturer that operates across diverse industrial, scientific, and medical markets. It’s a long-standing collaboration, with Hamamatsu engineers co-developing and supplying MAPMT and SiPM solutions for various JEM-EUSO experiments.

“We have a close working relationship with Hamamatsu’s technical staff in Italy and, through them, a direct line to the product development team in Japan,” explains Marco Casolino, a research director at the National Institute of Nuclear Physics (INFN), Rome “Tor Vergata” section, and the co-principal investigator on JEM-EUSO (as well as project leader for Mini-EUSO).

The use of MAPMTs is well established within JEM-EUSO for indirect detection of ultrahigh-energy cosmic rays via fluorescence (with the focal surface of JEM-EUSO fluorescence telescopes fabricated from MAPMTs). “Yet although MAPMTs are a volume product line for Hamamatsu,” Casolino adds, “the solutions we employ [for JEM-EUSO experiments] are tailored by its design engineers to our exacting specifications, with an almost artisanal level of craftsmanship at times to ensure we get the best product possible for our applications.”

That same approach and attention to detail regarding JEM-EUSO’s evolving requirements also guide the R&D partnership around SiPM technology. Hamamatsu engineers are working to maximize the advantages of the SiPM platform, including significantly lower operating voltage (versus MAPMTs), lightweight and durable structure, and compatibility with magnetic fields. Another plus is that SiPMs are immune to excessive levels of incident light, although the cumulative advantages are offset to a degree by the strong influence of temperature on SiPM detection efficiency (and the consequent need for active compensation schemes).
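
One way that temperature sensitivity is commonly handled is by adjusting the SiPM bias voltage to track the temperature-dependent breakdown voltage, keeping the overvoltage – and hence the gain and detection efficiency – roughly constant. The sketch below shows the principle only; the coefficient and voltage values are illustrative assumptions, not Hamamatsu specifications or the values used in JEM-EUSO hardware.

```python
# Sketch of an active bias-compensation scheme for a SiPM.
# Principle: the breakdown voltage rises roughly linearly with temperature,
# so the applied bias is adjusted to keep the overvoltage constant.
# All numbers below are illustrative assumptions.

V_BREAKDOWN_25C = 53.0      # breakdown voltage at 25 degC (V), assumed
TEMP_COEFF = 0.054          # breakdown-voltage drift (V per degC), assumed
TARGET_OVERVOLTAGE = 3.0    # desired overvoltage (V), assumed

def compensated_bias(temperature_c: float) -> float:
    """Return the bias voltage that keeps the overvoltage constant."""
    v_breakdown = V_BREAKDOWN_25C + TEMP_COEFF * (temperature_c - 25.0)
    return v_breakdown + TARGET_OVERVOLTAGE

# Example: a daily temperature swing on an orbiting platform (illustrative).
for t in (-20.0, 0.0, 25.0, 40.0):
    print(f"T = {t:6.1f} degC -> bias = {compensated_bias(t):.2f} V")
```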

Massimo Aversa
Massimo Aversa “Our collaboration with JEM-EUSO is a win-win.” (Courtesy: Hamamatsu Photonics)

Currently, JEM-EUSO scientists are focused on an exhaustive programme of test, measurement and calibration to optimize their large-scale SiPM detector designs – considering geometry, weight, packaging and robustness – for mass-critical applications in satellite-based instrumentation. “We are being very cautious with the next steps because SiPM technology has never been tested in space using very large detector arrays [e.g. 1024 x 1024 pixels],” explains Casolino. “Ultimately, it’s all about technical readiness level – ensuring that SiPM modules can handle the harsh environment in open space and the daily swings in temperature and radiation levels for the duration of a three- or four-year mission.”

Those priorities for continuous improvement are echoed by Massimo Aversa, senior product manager for MAPMT and SiPM product lines in Hamamatsu’s Rome division. “Our collaboration with JEM-EUSO is a win-win,” he concludes. “On the one hand, we are working to develop higher-resolution SiPM detector arrays with enhanced radiation-hardness – products that can be deployed for observations in space over extended timeframes. By extension, the lessons we learn here are transferable and will inform Hamamatsu’s SiPM development roadmap for diverse applications in high-energy physics.”

Further reading

Simon Bacholle et al. 2021 Mini-EUSO mission to study Earth UV emissions on board the ISS ApJS 253 36

Francesca Bisconti 2023 Use of silicon photomultipliers in the detectors of the JEM-EUSO program Instruments 7 55

SiPMs: versatile by design

The SiPM – also known as a Multi-Pixel Photon Counter (MPPC) – is a solid-state photomultiplier comprising a high-density matrix of avalanche photodiodes operating in Geiger mode (such that a single electron–hole pair generated by absorption of a photon can trigger a strong “avalanche” effect). In this way, the technology provides the basis of an optical sensing platform that’s ideally suited to single-photon counting and other ultralow-light applications at wavelengths ranging from the vacuum-ultraviolet through the visible to the near-infrared.
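
A minimal way to picture that description is as a grid of Geiger-mode cells, each of which either fires or does not when a photon lands on it; counting fired cells is the basis of photon counting. The Monte Carlo sketch below illustrates the idea – the cell count and detection efficiency are assumptions chosen for the example, and effects such as crosstalk and dark counts are ignored.

```python
import random

# Toy Monte Carlo of single-photon counting with a SiPM / MPPC.
# Each microcell is an avalanche photodiode in Geiger mode: one absorbed
# photon is enough to trigger a full avalanche ("fire") in that cell.
# The cell count and detection efficiency below are assumptions; dark
# counts and optical crosstalk are neglected.

N_CELLS = 3600                  # number of microcells, assumed
PHOTON_DETECTION_EFF = 0.4      # probability a photon fires a cell, assumed

def fired_cells(n_photons: int, seed: int = 0) -> int:
    """Return how many distinct microcells fire for an incident pulse."""
    rng = random.Random(seed)
    fired = set()
    for _ in range(n_photons):
        if rng.random() < PHOTON_DETECTION_EFF:
            fired.add(rng.randrange(N_CELLS))   # photon lands on a random cell
    return len(fired)

for n in (1, 5, 20, 100):
    print(f"{n:4d} incident photons -> {fired_cells(n)} cells fired")
```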

Hamamatsu, for its part, currently supplies commercial SiPM solutions to a range of established and emerging applications spanning academic research (e.g. quantum computing and quantum communication experiments); nuclear medicine (e.g. positron emission tomography); hygiene monitoring in food production facilities; as well as light detection and ranging (LiDAR) systems for autonomous vehicles. Other customers include instrumentation OEMs specializing in areas such as fluorescence microscopy and scanning laser ophthalmoscopy.

Near term, Hamamatsu is also focused on emerging applications in astroparticle physics and gamma-ray astronomy, while further down the line there’s the promise of at-scale SiPM deployment within particle accelerator facilities like CERN, KEK and Fermilab.

Taken together, what underpins these diverse use-cases is the SiPM’s unique specification sheet, combining high photon detection efficiency with ruggedness, resistance to excess light and immunity to magnetic fields.

 

The post Cosmic-ray physics: detector advances open up the ultrahigh-energy frontier appeared first on Physics World.

Angels & Demons, Tom Hanks and Peter Higgs: how CERN sold its story to the world

Par : No Author

“Read this,” said my boss as he dropped a book on my desk sometime in the middle of the year 2000. As a dutiful staff writer at CERN, I ploughed my way through the chunky novel, which was about someone stealing a quarter of a gram of antimatter from CERN to blow up the Vatican. It seemed a preposterous story but my gut told me it might put the lab in a bad light. So when the book’s sales failed to take off, all of us in CERN’s communications group breathed a sigh of relief.

Little did I know that Dan Brown’s Angels & Demons would set the tone for much of my subsequent career. Soon after I finished the book, my boss left CERN and I became head of communications. I was now in charge of managing public relations for the Geneva-based lab and ensuring that CERN’s activities and functions were understood across the world.

I was to remain in the role for 13 eventful years that saw Angels & Demons return with a vengeance; killer black holes maraud the tabloids; apparently superluminal neutrinos have the brakes applied; and the start-up, breakdown and restart of the Large Hadron Collider (LHC). Oh, and the small business of a major discovery and the award of the Nobel Prize for Physics to François Englert and Peter Higgs in 2013.

Fear, black holes and social media

Back in 2000 the Large Electron-Positron collider, which had been CERN’s flagship facility since 1989, was reaching the end of its life. Fermilab was gearing up to give its mighty Tevatron one more crack at discovering the Higgs boson, and social media was just over the horizon. Communications teams everywhere struggled to work out how to adapt to this new-fangled phenomenon, which was giving a new platform to an old emotion.

Fear of the new is as old as humanity, so it’s not surprising that some people were nervous about big machines like the Tevatron, the Relativistic Heavy Ion Collider and the LHC. One individual had long been claiming that such devices would create “strangelets”, mini-black holes and other supposedly dangerous phenomena that, they said, would engulf the world. Before the Web, and certainly before social media, theirs was a voice in the wilderness. But social media gave them a platform and the tabloid media could not resist.

James Gillies
Comms boss James Gillies, shown here in 2013, ran CERN’s media relations with the world from 2003 to 2016. (Courtesy: CERN)

For the CERN comms team, it became almost a full-time job pointing out that the LHC was a minnow compared to the energies generated by the cosmos. All we were doing was bringing natural phenomena into the laboratory where they could be easily studied, as I wrote in Physics World at the time. Perhaps the Nobel-prize-winning physicist Sam Ting was right to switch his efforts from the terrestrial cacophony to the quiet of space, where his Alpha Magnetic Spectrometer on the International Space Station observes the colossal energies of the universe at first hand.

Despite our best efforts, the black-hole myth steadily grew. At CERN open days, we arranged public discussions on the subject for those who did not know quite what to make of it. Most people seemed to realize that it was no more than a myth. The British tabloid newspaper the Sun, for example, playfully reminded readers to cancel their subscriptions before LHC switch-on day.

But some still took it seriously. There were lawsuits, death threats and calls for CERN to be shut down. There were reports of schools being closed on start-up day so that children could be with their parents if the world really did end. Worse still, in 2005 the BBC made a drama documentary End Day, seemingly inspired by Martin Rees’s book Our Final Century. The film played out a number of calamitous scenarios for humankind, culminating with humanity taking on Pascal’s wager and losing. I have read the book. That is not what Rees was saying.

We were now faced with another worry. Brown’s follow-up book, The Da Vinci Code, had become a blockbuster and it was clear that Angels & Demons, after its slow start, would follow suit. I therefore found myself in a somewhat surreal meeting with CERN’s then director-general (DG) Robert Aymar mulling over how CERN should respond. I suggested that the book’s success was a great opportunity for us to talk about the real physics of antimatter, which is anyway far more interesting than the novel.

To my relief, Aymar agreed – and in 2005 visitors to CERN’s website were greeted with a picture of our top-secret space plane that the DG uses to hop around the world in minutes. Or does he? Anyone clicking on the picture would discover that CERN doesn’t actually have a space plane, but we do make antimatter. We could even make a quarter of a gram of it, given 250 million years.

More importantly, we hoped that visitors to the website would learn that the really interesting thing about antimatter is that nature seems to favour matter and we still don’t know why. They’d also discover that antimatter plays an important role in medicine, in the form of positron-emission tomography (PET) scanners, and that CERN has long played an important part in their development.

Thanks to our playful, interactive approach, many people did click through. In fact, CERN’s Web traffic jumped by a factor of 10 almost overnight. The lab was on its way to becoming a household name and, in time, a synonym for excellence. In 2005, however, that was yet to come. We still had several years of black-hole myth-busting ahead.

Collider countdown

A couple of years later, an unexpected ally appeared in the form of Hollywood, which came knocking to ask if we’d be comfortable working with them on a film version of Angels & Demons. Again, the DG agreed and in 2009 the film appeared, starring Tom Hanks, along with Ayelet Zurer as a brilliant female physicist who saves the day. Fortunately, much of the book’s dodgy science and misrepresentation of CERN didn’t make it onto the screen (see box below).

Of course, the angels, the demons and the black holes were all a distraction from CERN’s main thrust – launching the LHC. By 2008 Fermilab’s Tevatron was well into its second run, but the elusive Higgs boson remained undiscovered. The mass range available for it was increasingly constrained and particle physicists knew that if the Tevatron didn’t find it, the LHC would (assuming the Higgs existed). The stakes were high, and a date was set to thread the first beams around the LHC. First Beam Day would be 10 September 2008.

Angels & Demons: when Hollywood came to CERN

Tom Hanks, Ayelet Zurer and Ron Howard in front of the Globe at CERN
Movie magic Angels & Demons was previewed to the entertainment press at CERN in February 2009. Lead actors Tom Hanks (left) and Ayelet Zurer (centre) attended, while director Ron Howard (right) spoke to the press. (Courtesy: CERN)

Dan Brown’s 2000 mystery thriller Angels & Demons is a race against the clock to stop antimatter stolen from CERN from blowing up the Vatican. Despite initial slow sales, the book eventually proved so successful that it was turned into a 2009 movie of the same name, directed by Ron Howard. He visited CERN more than once and I was impressed by his wish to avoid the book’s shaky science.

In the movie version, which stars Tom Hanks and Ayelet Zurer, CERN is confined to the pre-opening title sequence, with the ATLAS cavern reconstructed in CGI. Howard’s team even gave me a watermarked script and asked for feedback on the science. Howard also made a short film about CERN for the movie’s Blu-ray release. Ahead of that event, we found ourselves fielding calls from Howard’s office at all times of day and night about the science.

The movie was officially launched at CERN to the entertainment press, with Howard, Hanks and Zurer in attendance, all of whom gushed about what an amazing place the lab is. Handled by Sony Pictures, the event proved much more tightly controlled than typical CERN gatherings, with Sony closely vetting which science journalists we’d invited. My colleague Rolf Landua and I ended up having dinner with Hanks, Zurer and Howard – something I could never have imagined happening when Angels & Demons first came out.

Any big new particle accelerator is its own prototype. Switching such a machine on is best done in peace and quiet, away from the media glare. But CERN’s new standing on the world’s stage, coupled with the still-present black-hole myth, dictated otherwise. Media outlets started contacting us – not to ask if they could come for the switch-on, but to tell us they would be there. Outside the CERN fence if necessary.

Another surreal conversation with the DG ensued. Media were coming, I told him, whether we liked it or not. Lots of them. We could either make plans to invite them in and allow them to follow the attempts to get beams around the LHC, or we could have them outside the lab reporting that CERN was starting the doomsday machine in secrecy behind the fence.

The DG agreed that it might be better to let them in, and so we did. Around 1000 media professionals representing some 350 outlets descended on the lab. Among them was a team from BBC Radio 4. Some months earlier, a producer called Sasha Feachem had rung CERN to say she’d been trying to persuade her boss, Mark Damazer, to do a full day’s outside broadcast from CERN, and would I come to London to convince him.

I tried, and in an oak-panelled room at Broadcasting House, failed completely to do so. But Damazer did accept an invitation to visit CERN. After hitting it off with the DG, Radio 4’s Big Bang Day was approved and an up-and-coming science presenter by the name of Brian Cox was chosen to anchor the BBC’s coverage. It was the first time a media team had ever broadcast wall-to-wall from a science lab and I don’t think Radio 4 has done anything like it since.

Journalists were accredited. A media centre was set up. Late-coming reporters were found places in the CERN main auditorium where they could watch a live feed from the control room, along with the physicists. We even installed openable windows in the conference room overlooking the control room so that TV crews could get clean shots of the action below.

The CERN Control Centre packed with staff and press
Global interest Around 1000 media professionals representing some 350 outlets arrived at CERN in September 2008 to see the first proton beams enter and travel round the Large Hadron Collider. (Courtesy: CERN/Maximilien Brice)

A time was set early that September morning for the first attempt at beam injection into the LHC, and the journalists were all in place. Then there was a glitch, and the timing was put back a couple of hours. Project leader Lyn Evans had agreed to give a countdown, and when conditions for injection were restored, he began. A dot appeared on a screen indicating that a proton beam had been injected.

After an agonising wait, a second dot appeared, indicating that the beam had gone round the 27 km-long machine once. There were tears and laughter, and the journalists who were parked in the auditorium with the physicists later said they’d had the best seats in the house. They were able to witness the magnitude of that moment alongside those whose lives it was about to change.

It was an exhausting but brilliant day. On my way home, I ran into Evans as he was driving out of the lab. He rolled down his window and said: “Just another day at the office, eh James!” Everyone was on top of the world. Media coverage was massive and positive, with many of those present telling us how refreshing it was to take part in something so clearly genuine in a world where much is hidden.

From joy to disaster

The joy proved short lived. The LHC has something like 10,000 high-current superconducting interconnects. One was not perfect, so it had a bit of resistance, which led to an electrical arc that released helium into the cryostat with enough force to knock several magnets off their stands. Nine days after switch-on, CERN suddenly had a huge and unexpected repair job on its hands.

The Higgs boson was still nowhere in sight. The Tevatron was still running and the painstaking task began of working out what had gone wrong at the LHC. CERN not only had to repair the damaged section, but also understand why it had happened and ensure it wouldn’t happen again. Other potentially imperfect interconnects had to be identified and remade. The machine also had to be equipped with systems that would release pressure should helium gas build up inside the cryostat.

Close-up of damage to superconducting magnet
What happened here? CERN had a job on its hands in 2008 explaining to the world how a damaged superconducting interconnect led to the Large Hadron Collider breaking down just nine days after the first beams had entered the machine. (Courtesy: CERN/Maximilien Brice)

My mantra throughout this period was that CERN had to be honest, open, trustworthy and timely in all communications – an approach that, I think, paid dividends. The media were kind to us, capturing the pioneering nature of our research and admiring the culture of an organization that sought not to attribute blame, but to learn and move on.

When beams were back in the LHC in November 2009, they cheered us on. By the end of the year, the first data had been recorded. LHC running began in earnest in 2010, and with the world clearly still in place, the black-hole myth gave way to excitement about a potential major discovery. The Tevatron collided its last beams in September 2011, leaving the field clear for the LHC.

As time progressed, hints of something began to appear in the data, and by 2012 there was a palpable sense of expectation. A Higgs Update Seminar was arranged at CERN for 4 July – the last day possible for the spokespeople of the LHC’s ATLAS and CMS experiments to be at CERN before heading to Melbourne for the 2012 International Conference on High-Energy Physics, which is always a highlight in particle physicists’ calendars.

Gerry Guralnik and Carl Hagen – early pioneers of spontaneous symmetry breaking – asked whether they could attend the CERN seminar, so we thought we’d better invite Peter Higgs and François Englert too. (Robert Brout, who had been co-author on Englert’s 1964 paper in Physical Review Letters (13 321) predicting what we now call the Brout–Englert–Higgs mechanism, had died in 2011.) Right up to the last minute, we didn’t know if we’d be making a discovery announcement, or just saying “Watch this space.” One person, however, did decide that he’d be able to say, “I think we have it.”

As DG since 2009, Rolf-Dieter Heuer had seen the results of both experiments, and was convinced that even if neither could announce the discovery individually, the combined data were sufficient. On the evening of 3 July 2012, as I left my office, which was next to the CERN main auditorium, I had to step over people laying out sleeping bags in the corridor to guarantee their places in the room the next day.

CERN auditorium full of people clapping and cheering
One famous day The discovery of the Higgs boson, announced on 4 July 2012, was the highlight of James Gillies’ career as CERN’s comms chief. Fabiola Gianotti (foreground, wearing red top) leads the applause in the packed CERN auditorium. (Courtesy: CERN)

As it turned out, both experiments had strong enough measurements to make a positive statement on the day, though the language was still cautious. The physicists talked simply about “the discovery of a new particle with features consistent with those of the Higgs boson predicted by the Standard Model of particle physics”. Higgs and Englert heard the news seated side by side, Higgs famously wiping a tear from his eye and saying that it was remarkable that the discovery had been made in his lifetime.

The media were present in force, and everyone wanted to talk to the theorists. It’s a sign of the kind of person Higgs was that he told them they’d have plenty of opportunity to talk to him later, but that today was a day to celebrate the experimentalists.

Nature versus nature

The Higgs discovery was undoubtedly the highlight of my career in communications at CERN, but the Higgs boson is just one aspect of CERN’s research programme. I could tell you about the incredible precision achieved by the LHCb experiment, seeking deviations from the Standard Model in very rare decays. I could talk about the discovery of a range of composite particles predicted by theory. Or about the insights brought by a mind-boggling range of research at low energies, from antimatter to climate change.

Then there is CERN’s neutrino programme. It’s now focused on the US long baseline project, but it brought its own headaches to the communications team when muon neutrinos from CERN’s Super Proton Synchrotron appeared to be arriving at the Gran Sasso Laboratory in Italy faster than the speed of light.

“Have you checked all the cables?” said one of our directors to the scientists involved, in a meeting in the DG’s office. “Of course,” they insisted. As it turned out, there had been a false reading – not strictly speaking from a poorly chosen cable, but a faulty fibre-optic connection. The laws of physics were safe. Unfortunately, this was not before a seminar was held in the CERN Main Auditorium in September 2011.

Had they held the seminar at Gran Sasso, I’m sure they’d have got less coverage. Our approach was to say: “This is how science works – you get a measurement that you don’t understand, and you put yourself up to scrutiny from your peers.” It led to a memorable editorial in Nature (484 287) entitled “No shame”, which concluded that “Scientists are not afraid to question the big ideas. They are not afraid to open themselves to public scrutiny. And they should not be afraid to be wrong.”

That remark in Nature was a positive outcome for CERN from a potentially embarrassing episode, but nature of another kind caught us off guard, not once but twice, when animals brought low the world’s mightiest machine. First, breadcrumbs and feathers led us to believe that a bird had had a lucky escape when it tripped an electrical substation. Later, a pine marten, which also caused a power outage after gnawing through a live cable, was not so lucky. It has now joined the gallery of animals that have met unusual ends in the Rotterdam Museum of Natural History.

A coiled wire sculpture hanging from a ceiling
Different worlds Visits are vital for CERN, which has hosted everyone from pupils and politicians to pop stars and artists – including Antony Gormley, whose metal sculpture Feeling Material XXXIV hangs in the lab’s main building. (Sculpture donated by the artist. Photo courtesy: CERN/Benoit Jeannet)

There were also visitors. Endless visitors, from school children to politicians and from pop stars to artists. When I made a return visit of my own to Antony Gormley’s London studio, after having given him a tour of CERN, he spontaneously presented me with one of his pieces. Feeling Material XXXIV – a metal sculpture that’s part of a series giving an impression of the artist’s body – now hangs proudly in CERN’s main building.

There was an incredible moment at one of the TEDxCERN events we organized when Will.i.am joined two local children’s choirs for a rendition of his song “Reach for the Stars”. And there were many visits from the late landscape architect Charles Jencks and Lily Jencks, who produced a marvellously intelligent design for a new visitor centre in the form of a cosmic Ouroboros – like a snake biting its own tail, it appeared as two mirror-image question marks forming a circle. One of my only regrets is that we were unable to fund its construction.

For a physicist-turned-science-communicator such as myself, there was no better place to be than at my desk through the opening years of the 21st century. CERN is a unique and remarkable institution that shows what humanity is capable of when differences are cast aside, and we focus on what we have in common. To paraphrase Charles Jencks, to whom I’m leaving the last word, CERN is perhaps the last bastion of the enlightenment.

The post Angels & Demons, Tom Hanks and Peter Higgs: how CERN sold its story to the world appeared first on Physics World.

Fluorescent dye helps reveal the secrets of ocean circulation

Par : No Author

Seawater located more than 2 km below the ocean’s surface drives the oceanic circulation that helps regulate the Earth’s climate. At these depths, turbulent mixing drives water towards the surface in a process called upwelling. How quickly this upwelling happens dictates how carbon and heat from the ocean are exchanged with the atmosphere.

It has been difficult to directly test how carbon storage in the ocean is controlled by deep-sea mixing processes. But with the help of a non-toxic fluorescein dye, a research team headed up at UC San Diego’s Scripps Institution of Oceanography has now directly measured cold, deep-water upwelling along the slope of a submarine canyon in the Atlantic Ocean.

Oceanic circulation: a key phenomenon

The Earth relies on large-scale ocean circulation – known as conveyor belt circulation – to maintain balance. In this process, seawater becomes cold and dense near the poles and sinks into deep oceans, eventually rising back up elsewhere and becoming warm again. The cycle is then repeated. This natural mechanism helps to maintain a regular turnover of heat, nutrients and carbon, which underpins marine ecosystems and the natural ability to mitigate human-driven climate change. However, the return of cold water from the deep ocean to the surface via upwelling has been difficult to measure.

Back in the 1960s, oceanographer Walter Munk predicted that the average speed of upwelling was 1 cm/day. But while upwelling at this speed would transport large volumes of water, directly measuring this rate across entire oceans is not feasible. Munk also suggested that upwelling was caused by turbulent mixing from internal waves breaking under the ocean’s surface; but more modern-day measurements have shown that turbulence is highest near the seafloor.

This created a paradox: if turbulence is highest at the seafloor, it would push cold water down instead of up, making the bottom waters colder and denser. But it has been confirmed in the field that the deep ocean is not completely filled with cold and dense water from the poles.

Direct evidence of diapycnal upwelling

In the last few years, a new theory has surfaced. Namely, that it is the steep slopes on the ocean’s seafloor (such as the walls of underwater canyons) that are responsible for the turbulent mixing that causes upwelling, because they provide the right type of turbulence to move water upwards.

Fluorescent dye in a barrel
Dye delivery The researchers released the non-toxic fluorescent dye in this barrel just above the sea floor. (Courtesy: San Nguyen)

To investigate this theory, first author Bethan Wynne-Cattanach and colleagues used a fluorescein dye to investigate the upwelling across isopycnals (layers of constant density). The research was conducted off the coast of Ireland at a 2000 m-deep canyon in the Rockall Trough.

The researchers released over 200 l of fluorescein dye 10 m above the canyon floor, where the local water temperature was 3.53 °C. They used a fastCTD (FCTD) rapid profiler housing a fluorometer (with resolution down to the parts-per-billion range) to investigate how the dye moved at depths down to 2200 m. The FCTD also carried a micro-conductivity probe to assess the dissipation rate of temperature variance – a key metric for determining turbulent mixing.

The team tracked the dye for 2.5 days. During this time, the dye’s movements showed a turbulence-driven bottom-focused diapycnal (across the isopycnals) upwelling along the slope of the canyon. They found that the flow was much faster than the original estimates, with measurements showing that the upwelling occurred at a rate of around 100 m per day. The researchers note that this first direct measurement of upwelling and its rapid speed, combined with measurements of downwelling in other parts of the oceans, suggests that there are upwelling hotspots.
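
The contrast with Munk’s classic estimate is easy to make concrete. The back-of-the-envelope sketch below converts a dye rise over the 2.5-day tracking period into a daily upwelling speed and compares it with the 1 cm/day figure; the vertical displacement used is an illustrative value consistent with the reported ~100 m/day rate, not a number taken from the paper.

```python
# Back-of-the-envelope comparison of measured versus predicted upwelling.
# The ~250 m rise over 2.5 days is an illustrative figure consistent with
# the reported ~100 m/day rate; 1 cm/day is Munk's classic estimate.

tracking_days = 2.5
dye_rise_m = 250.0            # illustrative vertical displacement (assumed)

measured_rate_m_per_day = dye_rise_m / tracking_days
munk_rate_m_per_day = 0.01    # 1 cm/day

print(f"measured upwelling: ~{measured_rate_m_per_day:.0f} m/day")
print(f"Munk (1960s) estimate: {munk_rate_m_per_day * 100:.0f} cm/day")
print(f"ratio: ~{measured_rate_m_per_day / munk_rate_m_per_day:.0f}x faster")
```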

The overarching conclusion of the study is that mixing of ocean waters at topographic features – such as canyons – leads to a globally significant upwelling, and that upwelling within canyons could have a more significant role in overturning deep water than previously thought. Given the number of submarine canyons across the globe, it’s also thought that previous global estimates, based on weaker upwelling velocities, could be significant underestimates.

This research is published in Nature.

The post Fluorescent dye helps reveal the secrets of ocean circulation appeared first on Physics World.

Why we need gender equality in big science

Par : No Author

Investments in “big-science” projects, whether it’s the next particle collider or a new synchrotron, are often justified in terms of the benefits for science, such as the discovery of a new particle or the opening of a new vista on the cosmos. The positive impact of large facilities on society and the economy are often cited too, such as spin-off technologies in medical physics. Gender equality, however, is rarely acknowledged as a necessary objective when building these multi-billion-euro facilities or investing in the research required to develop them. That lack of focus on gender equality is something that I believe must change.

The lack of gender-based targets for big science is laid bare in a tool created as part of the European Union’s Horizon 2020 funding programme. Produced by the Research Infrastructure Impact Assessment Pathways project, it assesses the impact of research infrastructures on the economy and society via 122 “impact indicators” in four areas: human resources; policy; society; and economy and innovation. But only one indicator – contribution to gender balance in society – makes any mention of gender equality.

Yet improvements can be made when it comes to supporting female scientists in big science. Take the EU-wide ATTRACT project, which funds the development of breakthrough technologies via proof-of-concept projects. It is led by large research organizations such as the CERN particle-physics laboratory near Geneva, the X-ray Free Electron Laser in Hamburg, Germany, and the European Southern Observatory. Between 2018 and 2020, ATTRACT supported 170 projects, half of which focused on health, environment and biological and related sciences. However, only 11% of the funded ATTRACT projects had a woman as the principal investigator, even though women receive almost half of doctoral degrees in those areas.

Such numbers tell us we have a long way to go. After all, big-science facilities receive significant amounts of public money and employ thousands of people in different professional roles. We need to promote big science as a career destination not only for science graduates but also those in law, management and policy. Monitoring gender balance among the staff and users of research infrastructures and the members of big-science projects is crucial to ensuring that women graduates, who outnumber men in Europe, see big science as a place where they can thrive professionally.

In that regard, there have been some positive developments. The EU’s €96bn Horizon Europe programme – the successor to Horizon 2020 – now requires that all benefiting organizations, many of which participate in big-science projects, have a gender equality plan. Several industry sectors are also doing lots to integrate equality, diversity and inclusion into human resources practices to attract talent.

Tapping into the talent pool

But more needs to be done. That’s why since 2020 the Women in Big Science Business Forum (WBSBF) has been promoting gender equality as part of the Big Science Business Forum (BSBF). The WBSBF was set up by a group of people at Fusion for Energy, which manages the EU’s contribution to the ITER fusion experiment being built in France. The BSBF itself is trying to advance gender equality across research infrastructures, universities and supplier companies. For instance, since research infrastructures distribute billions of euros of public money in procurement and investment, they can adopt procurement processes that question the supplier’s compliance with gender-equality legislation and ask for examples of efforts they have made to recruit and retain women.

“Gender budgeting” is a tool that big-science projects can also use to assess how their budget decisions impact gender equality. That could mean eliminating the gender pay gap, making provisions for equal parental leave or ensuring that research grants are the same for projects whether led by women or men. Budgets could also be earmarked to help staff achieve a work–life balance. I think it’s important as well that we improve training in gender equality and that we “gender proof” recruitment by identifying and removing potential biases to assessment criteria that could favour men. Big-science projects can also make use of the European Charter & Code for Researchers, which includes a dozen gender-equality indicators as part of the EU initiative “human resources strategy for researchers”.

At the BSBF meeting in Granada in 2022, the WBSBF launched a recognition award to acknowledge, celebrate and promote successful measures taken by big-science organizations to increase the proportion of women among their staff and users of research infrastructures. There are three categories: “advances in organizational culture”; “collaborative partnerships”; and “societal impact”. Some 13 organizations applied for an award in 2022, with organizations such as XFEL and CERN being recognized.

The WBSBF is building on that progress at this year’s BSBF event in Trieste, Italy, in October with activities on socially responsible procurement, gender balance in work policies, and the socioeconomic impact of investment in big science. There will also be a live-streamed round-table session with leaders from big science. At Trieste, we’ll also be introducing a WBSBF trainee scheme, which will place three to five students or recent graduates on in-house trainee programmes run by labs, companies or intergovernmental bodies taking part in BSBF. Those roles don’t have to be scientific or technical, but could also be in, say, legal, communication or human resources.

Big science needs more women and I hope these initiatives will help to turn the tide. The talent pool for women is already there and big science must get better at tapping into it, not only for the discoveries that lie ahead but also for building a better relationship with society.

  • The WBSBF group comprises Francesca Fantini, Aris Apollonatos, Romina Bemelmans, Silvia Bernal Blanco, Carmen Casteras Roman, Ana Belen Del Cerro Gordo, Pilar Rosado, Cristina Sordilli and Nikolaj Zangenberg
  • To find out more about the recognition award and the WBSBF trainee scheme, e-mail wbsbf@f4e.europa.eu

The post Why we need gender equality in big science appeared first on Physics World.

Robotic radiotherapy could ease treatment for eye disease

Par : No Author

A single dose of radiation can reduce the number of eye injections needed to treat patients with neovascular age-related macular degeneration (AMD). That’s the conclusion of a UK-based clinical trial of more than 400 patients with the debilitating eye disease.

AMD affects 8% of adults globally, and is a leading cause of central blindness in people over 60 in developed nations. Neovascular (or wet) AMD, the most advanced and aggressive form of the disease, causes new blood vessels to grow into the macula, the light-sensing layer of cells inside the back of the eye. Leakage of blood and fluid from these abnormal vessels can lead to a rapid, permanent and severe loss of sight.

The condition is treated with injections of drugs into the eye, with most people requiring an injection every 1–3 months to effectively control the disease. The drugs inhibit vascular endothelial growth factor (VEGF), a key driver of vascular leakage and proliferation. Reporting their findings in The Lancet, the investigators suggest that stereotactic radiotherapy (SRT) could eliminate 1.8 million anti-VEGF injections per year globally across all high-income countries.

STAR treatment

The STAR (stereotactic radiotherapy for wet AMD) study, led by Timothy Jackson of King’s College London, is a double-blinded trial that enrolled patients with previously treated chronic active neovascular AMD from 30 hospitals in the UK. All participants received the robotic treatment, with or without delivery of 16 Gy of radiation, at one of three UK treatment centres.

The team used a robotically controlled SRT system that delivers three highly collimated 5.33 Gy radiation beams, targeted to avoid lens irradiation and overlapping at the macula. To stabilize the eye being treated, a suction-coupled contact lens was secured to the cornea and connected to a positioning gimbal with infrared reflectors. The SRT device tracked the reflectors, stopping the treatment if the eye moved out of position.
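
The geometry is what keeps the dose tolerable for surrounding tissue: each beam on its own delivers only about a third of the prescription, and only at the macula, where the three beams cross, do the doses add up to the full 16 Gy. A minimal sketch of that bookkeeping is shown below – the regions and beam paths are schematic assumptions, not the device’s actual geometry.

```python
# Schematic dose bookkeeping for a three-beam stereotactic treatment.
# Each beam carries ~5.33 Gy; only the target (macula) sits in all three.
# The regions below are illustrative, not the device's actual beam paths.

BEAM_DOSE_GY = 16.0 / 3.0    # approximately 5.33 Gy per beam

# Which beams pass through which (schematic) region of the eye.
regions = {
    "macula (target)": [1, 2, 3],      # all three beams overlap here
    "sclera, beam-1 entry path": [1],  # traversed by a single beam only
    "sclera, beam-2 entry path": [2],
    "lens": [],                        # beams are aimed to avoid the lens
}

for region, beams in regions.items():
    dose = BEAM_DOSE_GY * len(beams)
    print(f"{region:28s}: {dose:5.2f} Gy")
```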

The researchers randomly allocated 274 patients to receive the 16 Gy SRT treatment and 137 to receive identical sham treatment without radiation. Immediately afterwards, all patients received a standard dose of the anti-VEGF drug ranibizumab injected into the eye.

After radiotherapy, participants visited their recruiting hospital for follow-up exams every four weeks up to the 96-week primary endpoint. During these review sessions, patients received an intraocular injection of ranibizumab each time they needed retreatment. The researchers are continuing assessments at three and four years to determine the safety and long-term efficacy of this approach.

The final study analysis included 241 participants in the 16 Gy SRT group and 118 participants in the sham group, with a total of 409 treated patients forming the safety population. The findings are encouraging: patients who received SRT required a mean of 10.7 injections after 96 weeks, compared with 13.3 injections for the conventional drug-only group.

Reducing the burden of anti-VEGF treatment would be highly beneficial for both patients and hospitals. Preliminary analyses suggest that the cost of the SRT treatment may be more than offset by the reduction in injections. The authors plan to prepare a detailed cost evaluation.

Vision outcome for both cohorts was comparable. While the sham group had slightly less worsening of best corrected visual acuity at two years, there was no statistically significant difference between the two. Systemic safety was also similar, with comparable rates of adverse events in the two groups. Evaluation using multimodal imaging determined that 35% of the SRT-treated participants and 12% of the sham group had retinal microvascular abnormalities.

The study outcome supports the findings of a similar phase II clinical trial, INTREPID, whose results were published in 2020. The INTREPID study of 230 randomized patients showed that a single radiation dose of 16 or 24 Gy administered by SRT reduced injections by 29% for the ensuing 12 months, compared with the control group.

Jackson tells Physics World that the researchers are currently analysing data from patients reporting for their three- and four-year anniversary examinations. The data suggest increasing benefit with respect to injection frequency over time. The investigators also note that the benefits of SRT may be eroded by the introduction of newer intravitreal drugs such as faricimab, or higher doses of existing anti-VEGF drugs, which have longer dosing intervals than ranibizumab.

Writing in an accompanying commentary in The Lancet, Gui-shuang Ying and Brian VanderBeek of the University of Pennsylvania Perelman School of Medicine state: “The STAR study has indicated a potential alternative treatment paradigm that appears to significantly reduce treatment burden without impacting visual acuity outcomes over two years, but additional gaps in knowledge need to be addressed before the widespread adoption of this therapy.”

They add: “If the reduction in anti-VEGF injection rate, non-inferior visual acuity results and acceptable safety profile of SRT remain through future studies, the STAR study will be a foundational piece in advancing a promising adjunctive therapy forward. Patients eagerly await the day when the injection burden is reduced, and SRT might well be a path to getting there.”

The post Robotic radiotherapy could ease treatment for eye disease appeared first on Physics World.

Speedy stars point to intermediate-mass black hole in globular cluster

Par : No Author
Omega Centauri
Hubble zooms in A wide view of Omega Centauri reveals a dense collection of stars (left). The middle image shows a closer view of the central region of the cluster. The image on the right shows the location of the IMBH candidate in the cluster. (Courtesy: ESA/Hubble & NASA, M Häberle (MPIA))

The best evidence yet for an intermediate-mass black hole has been claimed by an international team of astronomers. Maximilian Häberle at the Max Planck Institute for Astronomy in Heidelberg and colleagues saw the gravitational effects of the black hole in long-term observations of the stellar cluster Omega Centauri. They predict that similarly sized black holes could exist at the centres of other large, dense stellar clusters – which could explain why so few of them have been discovered so far.

Black holes are small and extraordinarily dense regions of space with gravitational fields so strong that not even light can escape. Astronomers know of many stellar-mass black holes, which weigh in at under about 100 solar masses. They are also aware of supermassive black holes, which have hundreds of thousands to billions of solar masses and reside at the centres of galaxies.

However, researchers know very little about the existence (or otherwise) of intermediate-mass black holes (IMBHs) in the 100–100,000 solar-mass range. While candidate IMBHs have been spotted, no definite discoveries have been made. This raises questions about how supermassive black holes were able to form early in the history of the universe.

Seeding supermassive growth

“One potential pathway for the formation of these early supermassive black holes is by the merger of intermediate mass ‘seed’ black holes,” explains Häberle. “However, the exact mass and frequency of these seeds is still unknown. If we study IMBHs in the present-day, local universe, we will be able to differentiate between different seeding mechanisms.”

Häberle’s team examined the motions of stars within the globular cluster Omega Centauri, located around 17,000 light-years from Earth. Containing roughly 10 million stars, the cluster is widely believed to be the core of an ancient dwarf galaxy that was swallowed by the Milky Way. This would make it a prime target in the ongoing hunt for an IMBH within our own galaxy.

Häberle’s team analysed a series of images of Omega Centauri taken by the Hubble Space Telescope over a period of 20 years. By comparing the relative positions of the cluster’s stars in successive images, they identified stars that were moving faster than expected. Such anomalously fast motion would be strong evidence that an IMBH is lurking somewhere in the cluster.
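
To see how a positional shift in archival images translates into a stellar speed, the sketch below converts an angular proper motion measured over a 20-year baseline into a transverse velocity at Omega Centauri’s quoted distance of about 17,000 light-years. The proper-motion value used is an illustrative assumption, not one of the team’s measurements.

```python
import math

# Convert an angular proper motion into a transverse velocity.
# The ~17,000 light-year distance is from the text; the proper motion of
# 3 milliarcseconds per year is an illustrative assumption.

LY_IN_KM = 9.461e12
SECONDS_PER_YEAR = 3.156e7

distance_km = 17_000 * LY_IN_KM
proper_motion_mas_per_yr = 3.0                        # assumed example value

# angular rate in radians per year
mu_rad_per_yr = proper_motion_mas_per_yr * 1e-3 / 3600 * math.pi / 180

v_transverse_km_s = distance_km * mu_rad_per_yr / SECONDS_PER_YEAR
shift_over_20_yr_mas = proper_motion_mas_per_yr * 20

print(f"apparent shift over 20 years: {shift_over_20_yr_mas:.0f} mas")
print(f"implied transverse velocity: ~{v_transverse_km_s:.0f} km/s")
```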

“This approach is not new, but we combined improved data reduction techniques with a much larger dataset, containing more than 500 individual images taken with the Hubble Space Telescope,” Häberle explains. “Therefore, our new catalogue is several times larger and more precise than all previous efforts.”

While some previous studies have presented evidence of an IMBH at the centre of Omega Centauri, the gravitational influence of unseen stellar-mass black holes could not be ruled out.

Seven speedy stars

Häberle’s team identified a total of seven stars at the very centre of Omega Centauri that appear to be moving much faster than the cluster’s escape velocity. Without some immense gravitational intervention, the researchers calculated that each of these stars would have left the centre of the cluster in less than 1000 years – a small blip on astronomical timescales – before escaping the cluster entirely.

“The best explanation why these stars are still around in the centre of the cluster is that a massive object is gravitationally pulling on them and preventing their escape,” Häberle claims. “The only object that can be massive enough is an intermediate-mass black hole with at least 8200 solar masses.”
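
The underlying argument is a comparison between the stars’ measured speeds and the escape velocity a central mass would produce, via v_esc = √(2GM/r). Here is a minimal sketch under stated assumptions: the 8200 solar masses is the lower bound quoted above, while the radius at which the stars sit is an illustrative value chosen purely for scale.

```python
import math

# Escape velocity from a central point mass: v_esc = sqrt(2 G M / r).
# 8200 solar masses is the lower bound quoted in the text; the radius is
# an illustrative assumption for the innermost region of the cluster.

G = 6.674e-11                 # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30              # solar mass (kg)
LY = 9.461e15                 # light-year (m)

def escape_velocity_km_s(mass_solar: float, radius_ly: float) -> float:
    m = mass_solar * M_SUN
    r = radius_ly * LY
    return math.sqrt(2 * G * m / r) / 1e3

# A star moving faster than this, at this radius, would leave the centre
# unless a central mass of at least this size were holding it back.
print(f"v_esc = {escape_velocity_km_s(8200, 0.1):.0f} km/s "
      f"for 8200 solar masses at 0.1 light-years (assumed radius)")
```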

The study makes Omega Centauri the best candidate in the Milky Way for hosting an IMBH. If confirmed, the IMBH will be the most massive black hole in the Milky Way after Sagittarius A* – the supermassive black hole residing at our galaxy’s centre.

“To draw further conclusions and gain a statistical sample, we will now need to extend this research to other massive star clusters, where there might be still some hidden black holes,” Häberle says. The astronomers now hope that similar observations could soon be made using instruments including the Multi Unit Spectroscopic Explorer at the Very Large Telescope, and the James Webb Space Telescope’s Near-IR Spectrograph.

The research is described in Nature.

The post Speedy stars point to intermediate-mass black hole in globular cluster appeared first on Physics World.

An evening of landscape astrophotography

Par : No Author
Burrow Farm Engine House on Exmoor
Painted with light Burrow Farm Engine House on Exmoor in Somerset, UK, photographed with the galactic core in the background. (Courtesy: Shaun Davey)

Shaun suggested our initial face-to-face meet should be at around 11 p.m., on a remote track on the flanks of the wild hinterland of Exmoor National Park in Somerset, UK. We were to rendezvous at the rather delightful what3words coordinates of: ///otters.grins.greet near a monument colloquially named “Naked Boy Stone”.

As is usual when I venture onto the moor, I informed my son before setting off. Mobile signals can get a bit iffy above about 300 m, although what I expect him to do from Oxford, more than 150 km away, I’m not altogether sure. In hindsight, I might also have chosen my words more carefully when I told him not to worry about meeting this man because “We’d followed each other on Twitter/X for years”. Perhaps I should have begun by telling him that Shaun Davey is an acclaimed photographer of the night skies.

Under the illumination of our head torches, we set off, traipsing along a branch-strewn trail that followed a section of the disused West Somerset Mineral Line. Built between 1857 and 1864, the rail track was constructed to carry iron ore from the mines of Exmoor down to Watchet Harbour, for onward shipment to Wales and the steel blast furnaces of the Ebbw Valley.

The last trains ran in 1913, and the isolated location was ideal for our purposes, though it did make our journey somewhat perilous – the edge of the unlit path ran along a deep rail cutting. It was amazing to think that it would have been carved out by manual labour alone. However, my first thought was “that’s one hell of a drop if we slip”.

Our destination was a small stone building that had once housed a steam engine, used to raise and lower the miners and to pump water from the tunnels. Illuminated by a handheld LED lamp, it cast a dramatic picture against the stars.

It’s no mean feat for a photographer to capture objects that are both a few metres and a few light-years away in a single image, but this is where Shaun’s vast experience of landscape astrophotography came into play. He set up two tripod-mounted cameras: one to photograph the landscape and the other to capture the sky, each using different exposure times. Correct exposures are governed by three variables: the shutter speed, the lens aperture and the ISO (detector sensitivity). All are adjustable, so picking the right settings is an art.
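
The trade-off among those three variables can be captured with the standard exposure-value relation, EV = log₂(N²/t), where N is the f-number and t the shutter time; raising the ISO then changes how bright a given EV renders. The short sketch below compares a few typical settings – the values are illustrative only, not the settings Shaun actually used.

```python
import math

# Exposure value: EV = log2(N^2 / t), with N the f-number and t the
# shutter time in seconds; a lower EV corresponds to a brighter exposure.
# The settings below are illustrative, not the photographer's actual ones.

def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number ** 2 / shutter_s)

settings = {
    "daylight snapshot (f/8, 1/250 s)": (8.0, 1 / 250),
    "lit foreground (f/2.8, 20 s)": (2.8, 20.0),
    "tracked sky exposure (f/2.8, 120 s)": (2.8, 120.0),
}

for name, (n, t) in settings.items():
    print(f"{name:38s} EV = {exposure_value(n, t):6.2f}")
```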

When it came to photographing the sky, Shaun mounted the camera on a device fitted with the most powerful laser I’ve ever seen outside a laboratory. He calibrated the camera platform by aligning the laser beam with the polar star. I’d never been so aware of the extent of laser collimation until I witnessed the needle-sharp beam projecting deep into the blackness of the heavens. Once the tracking mount has been aligned, the attached camera can precisely follow the motion of celestial bodies as the Earth rotates. Without the tracking tripod, you’d just get blurred slashes caused by unwanted star – or rather Earth – motion.
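
A common rule of thumb for how long you can expose before Earth’s rotation smears the stars – if you have no tracking mount – is the so-called 500 rule: divide 500 by the lens’s full-frame-equivalent focal length to get a rough maximum exposure in seconds. A quick sketch follows; the focal lengths are arbitrary examples, the rule is only an approximation, and it is not something Shaun describes using.

```python
# "500 rule" of thumb: max untracked exposure (s) ~ 500 / focal length
# (mm, full-frame equivalent). Beyond this, stars start to trail visibly.
# Focal lengths below are arbitrary examples.

def max_untracked_exposure_s(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    return 500.0 / (focal_length_mm * crop_factor)

for focal in (14, 24, 50, 200):
    print(f"{focal:4d} mm lens: ~{max_untracked_exposure_s(focal):.0f} s before trailing")
```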

After the photos had been taken, Shaun post-processed them with Photoshop and Lightroom to correct for colour and optical distortions (he uses a wide-angle lens that can make straight lines appear curved). Finally, the images were merged to create one stunning photograph.

Light pollution is, of course, the biggest irritant for astrophotographers, which is why Shaun picked a cloudless night and a location within one of the six designated Dark Sky Reserves of the UK. Nevertheless, light pollution was evident even at 2.30 a.m. Some sources were obvious from the direction of the glow: the industry of Wales (50 km away), Tiverton/Cullompton and/or Exeter (all approximately directly south, and at about 19, 23 and 40 km, respectively), and the Sun. In midsummer, when our expedition took place, it’s never far below the northern horizon.

This adventure kicked off after I posted on X that I’d never seen the Milky Way with my naked eye in my life (I’m 70 years old). Shaun replied with a promise that he could knock that one off my bucket list by showing me the galactic core (the brightest region of the Milky Way at the centre of our galaxy). He kept his promise.

  • Readers are invited to submit their own Lateral Thoughts. Articles should be between 750 and 800 words long, and can be e-mailed to pwld@ioppublishing.org

The post An evening of landscape astrophotography appeared first on Physics World.

Structured electrons have chiral mass and charge

Par : No Author

This article has been updated because the original version incorrectly claimed that the observed electrons follow “chiral paths” and “chiral trajectories”.

Structured electrons with chiral mass and charge have been created by researchers in Germany. The researchers say their work, which is analogous to work done with photons in 2010, achieves chirality in single-electron matter waves without angular momentum. Some other researchers, however, are puzzled by this claim.

In 2010, David Grier of New York University and colleagues created helical optical beams with much greater intensity at the beam edge than at the centre. Then, they showed that objects could be trapped by such beams and, depending on the chirality of the helix, pulled back towards the light source.

“At the time, we speculated that you ought, in principle, to be able to drop matter waves into one of these states,” says Grier. “People have controlled matter waves with light; they’ve created vortices in matter waves, but as far as I know no one has taken the next step and created not just a topological phase but a topological intensity distribution.”

Femtosecond electron pulses

In the new work, Peter Baum and colleagues at the University of Konstanz fired femtosecond electron pulses (almost none of which contained more than one electron) from an ultrafast transmission electron microscope at a 50-nm-thick silicon nitride membrane. They directed optical laser pulses with orbital angular momentum at the same membrane.

The silicon nitride was transparent to electrons, but the laser pulses shifted the electron density in the membrane such that parts of the electron’s wavefunction were accelerated and other parts decelerated. This created single electrons with chiral mass and charge distributions. The researchers characterized these with a second femtosecond laser and silicon nitride membrane.

The team showed that, if they used laser pulses with zero angular momentum, the output could be modelled by electrons with no chirality. If the angular momentum quantum number was 1, the electronic charge and mass wavefunction had the chirality of a single, left-handed coil. If it was −2, the wavefunction was a right-handed double helix. They also scattered these chiral wave packets off chiral nanoparticles, with a left-handed electron showing less chirality when scattered off a left-handed nanoparticle and extra chirality when scattered off a right-handed nanoparticle, and vice versa.

Imprinting chirality

The researchers explain that the optical pulses imprint chirality onto the electron’s wavefunction, converting it into a coil of charge and mass, without actually giving the electron either polarization or orbital angular momentum. “The coil propagates as a whole,” explains Baum; “the centre of mass is on a line.”

The researchers believe that these properties could be useful in a range of applications including electron microscopy, the study of magnetic materials, and construction of subnanometre optical tweezers. It could even, they say, have cosmological implications if such electron coils occur in nature. “Are they all around?” ponders Baum. “We are currently starting to explore these possibilities.”

Grier is both impressed and puzzled by the results. “Light you can control essentially with consumer electronics,” he says. “It’s much harder with matter waves. I consider this [work by Baum’s team] a really interesting implementation in matter waves of what my group demonstrated in 2010 in light waves.” He does note, however, that other groups have previously implemented chiral optical beams and shaped the intensity of electron beams, and that this research was not cited by Baum and colleagues in their paper in Science that describes their research. (Baum’s team accepts this and say they were unaware of the previous work.)

He is perplexed, however, by the researchers’ insistence that the chiral electron beams have no orbital angular momentum. He says that the chiral wavefunctions the researchers have achieved can generally be expressed as superpositions of so-called Bessel modes. “All but a special few of those superpositions carry orbital angular momentum through a net helical structure in the overall phase,” he says. “You have to do something a bit special to create a solenoidal mode with no orbital angular momentum. I don’t see how [Baum’s team] achieved that balancing act, or how they verified it. It seems to me that they just assume it to be so.”

Miles Padgett of the University of Glasgow in Scotland calls the research “lovely” and says he “would happily accept that there is something interesting in generating an electron beam which goes above and beyond generating a solenoid optical beam.” He adds, however, that “that’s not what [their] paper tells me, because this paper doesn’t recognize the generation of a previous optical solenoid beam [Grier’s work is not cited].” He is also puzzled by the claim of chirality without angular momentum, and is curious about whether the chiral electrons generate DC magnetic fields – which would indicate rotation. The researchers have not measured this experimentally.

The post Structured electrons have chiral mass and charge appeared first on Physics World.

Inverse Mpemba effect seen in a trapped-ion qubit

Par : No Author

The inverse Mpemba effect has been observed in a quantum bit (qubit). The research was done at the Weizmann Institute in Israel and suggests that under certain conditions a cooler trapped-ion qubit may heat up faster than a similar warmer qubit. The observation could have important implications for quantum computing because many leading qubit types must be maintained at cryogenic temperatures.

The Mpemba effect is the puzzling observation that hot water sometimes freezes faster than cold water. It was first recorded in antiquity and is named after Erasto Mpemba, who as a teenager in Tanzania in the 1960s sought an explanation for the effect – which he first encountered while making ice cream and then confirmed in a series of experiments. Despite the best efforts of physicists over the past six decades, the effect remains poorly understood.

Researchers have also observed the inverse Mpemba effect whereby a cold system heats up faster than a warm system. Theoretical and experimental studies have revealed a range of systems – magnetic, granular, quantum and more – that exhibit Mpemba effects.

Avoiding decoherence

Quantum Mpemba effects are of particular interest to people developing cryogenic qubits. These must be operated at very low temperatures to reduce noise, which destroys quantum calculations in a process called decoherence.

In a new experiment described in Physical Review Letters, Shahaf Aharony Shapira and colleagues observed an inverse Mpemba effect in a single trapped strontium-88 ion coupled to an external thermal bath. This low-temperature ion acted as a qubit that interacted with the thermal bath, causing a slow decoherence of its quantum state over time.

“Most studies are about the direct Mpemba effect, which is easier to understand if you think classically,” says Aharony Shapira.

She offers an intuitive description of the classical Mpemba effect. Imagine, she says, a double-well potential where one well is a global minimum – the system’s most stable state – and the other is a local minimum – a comparatively less stable state.

Uniform energy distribution

When a system is at a high temperature, its energy distribution is relatively uniform, allowing it to transition between the two wells more freely. At lower temperatures, the system’s energy distribution becomes much narrower, concentrating near the bottom of each of the wells.

If the system starts in the local minimum, a hotter system can hop over the energy barrier between the two wells more readily, allowing it to settle into the global minimum more quickly as it cools down.
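
A rough Arrhenius picture puts numbers on this intuition: the rate of escape from the local minimum scales as exp(−ΔE/kT), where ΔE is the barrier height. The figures in the sketch below are purely illustrative and not drawn from any particular experiment.

```python
import numpy as np

# Toy Arrhenius picture of escape from a local minimum (hypothetical numbers):
# escape rate ~ attempt_rate * exp(-barrier / kT)
barrier = 1.0        # barrier height, arbitrary energy units
attempt_rate = 1.0   # attempts per unit time

for kT in (1.0, 0.2):   # a "hot" and a "cold" system
    rate = attempt_rate * np.exp(-barrier / kT)
    print(f"kT = {kT}: escape rate ≈ {rate:.3f}, typical escape time ≈ {1/rate:.1f}")
```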

“However, the inverse effect that we saw has a different intuition,” says Aharony Shapira.

End-state shortcut

To simulate the thermal bath, the team used laser pulses to induce transitions between the qubit states and the higher energy states of the trapped ion. Eventually, the interaction between the qubit and the thermal photons from the laser caused the qubit to decohere.

The path the system takes as it moves towards its end-state is known as its “relaxation path”. This path is governed by the system’s interactions with the bath and its intrinsic quantum properties, such as coherence and interference effects that can suppress or enhance certain relaxation modes.

Unlike in classical systems, the relaxation rates in quantum systems do not change linearly with temperature. For certain initial conditions, a colder qubit might have a relaxation path that allows it to bypass certain energy barriers more efficiently than a warmer qubit. This shortcut allows it to reach the higher temperature equilibrium state faster than the warmer qubit – which is what the researchers observed.
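
A minimal numerical sketch captures this mode-overlap idea. The decay times and amplitudes below are entirely hypothetical – they are not the Weizmann parameters – but they show how a state that starts further from equilibrium, yet has almost no weight on the slowest relaxation mode, can get there first.

```python
import numpy as np

# Toy relaxation towards the hot bath: the distance from equilibrium decays as
# a sum of two exponential "modes", one slow and one fast (hypothetical numbers)
tau_slow, tau_fast = 10.0, 1.0
t = np.linspace(0, 50, 501)

# A "warm" start that overlaps strongly with the slow mode...
warm = 0.5 * np.exp(-t / tau_slow) + 0.1 * np.exp(-t / tau_fast)
# ...and a "colder" start, initially further from equilibrium,
# but with almost no weight on the slow mode
cold = 0.01 * np.exp(-t / tau_slow) + 0.9 * np.exp(-t / tau_fast)

# When does each curve first get within a small threshold of the bath state?
threshold = 0.02
print("warm start equilibrates at t ≈", t[np.argmax(warm < threshold)])
print("cold start equilibrates at t ≈", t[np.argmax(cold < threshold)])
# the "cold" start arrives much sooner - an inverse-Mpemba-like crossing
```

In this toy picture the slow mode plays the part of the slowest child in the analogy that follows.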

School bus analogy

Team member Yotam Shapira explains the observation using the analogy of a bus driver waiting for schoolchildren to disembark. The bus driver, he said, finishes work when the last child gets off and is therefore limited by the speed of the slowest child.

“What we saw is that we can find conditions where it’s like the slowest child didn’t show up that morning,” he says. “Now the transition is much faster.”

Hisao Hayakawa is a researcher from Kyoto University whose team observed the Mpemba effect in a quantum dot. He says that the mechanisms observed at the Weizmann Institute were similar to those seen in previous experiments. However, he suggests that the research may provide more insights into finer control mechanisms for quantum computing systems.

“These experiments suggest that the speed control to reach a desired state in quantum computers might be possible if we know the physics of the quantum Mpemba effect after a quench,” he said. A quench refers to a sudden change in a quantum system’s conditions, such as its temperature or magnetic field.

The research could influence the design of large-scale, temperature-sensitive qubit systems. “Maybe not cooling the system as much as you can would be best in the future,” said Aharony Shapira. “You need to be sensitive to special modes that, like in our case, can heat up very fast.”

The post Inverse Mpemba effect seen in a trapped-ion qubit appeared first on Physics World.

Could athletes mimic basilisk lizards and turn water-running into an Olympic sport?

Par : No Author

The world’s best runners are gathering in Paris for the 2024 Summer Olympics. Sprinters vying for the title “world’s fastest” hope to chase records set by all-time greats such as Usain Bolt and Florence Griffith-Joyner, competing on a highly engineered athletic track. Across the Atlantic, however, a different type of sprinter is practising its craft daily, not on polymer-laced rubber, but on water.

Basilisk lizards – nicknamed “Jesus Christ lizards” for their ability to run on water – don’t run for accolades or titles; they’re just looking to escape predators. When threatened, these pint-sized powerhouses take a running start on land, then skitter across the water. At 100 grams, basilisk lizards are hardly heavyweights, but they’re much too heavy to be supported by surface tension.

The ability to run on water is one of the most impressive feats in the animal kingdom – a triumph of physics as much as biology. How these lizards manage it is a question that has intrigued researchers for many years, but there’s something else all good physicists will want to know: could humans moving at speed ever run on water?

Slap, stroke, recover

Biologist Tonia Hsieh was first struck by the basilisks’ water-running in her undergraduate class on herpetology – the study of amphibians and reptiles. She couldn’t stop thinking about their ability to seemingly defy the laws of physics, so in 1999 she began a PhD at Harvard University chasing these little reptiles’ superpower.

A few years before, two other Harvard researchers had studied the same problem, developing a mathematical model that was Hsieh’s starting point (Nature 380 340). Tom McMahon and Jim Glasheen had analysed videos of the lizards and showed that each step they take across the water can be broken down into three stages – slap, stroke and recovery (see “On your marks” image).

Sequence of images showing the slap, stroke, recovery cycle of a basilisk lizard on water
On your marks The basilisk lizard is capable of short sprints on water thanks to its slap–stroke–recovery step cycle. (Courtesy: J. Exp. Biol. 206 23)

When the basilisk runs, its foot slaps the water’s surface, just like a human sprinter on a track. With every footfall, the runner drives their shoe into the track, and the track pushes back. That’s Newton’s third law: every action has an equal and opposite reaction.  The same holds for the basilisk. Each time the lizard’s foot hits the water, the liquid exerts an upward force. The larger the lizard’s foot and the faster it hits the water, the more upward force the slap generates.

Unlike a human sprinter, the basilisk is running on a yielding surface. When its foot dips into the water, the basilisk extends its leg like a swimmer’s arm, but it moves so fast that in the milliseconds before the water rushes in, an air-filled cavity forms above the foot. This is the stroke. During this phase the lizard’s foot experiences a lifting force proportional to the amount of water it moves.

In the final phase – recovery – the basilisk quickly pulls its foot up and lifts it for the next slap. Anyone who’s waded through knee-high water knows this isn’t easy. The basilisk’s foot must make it out before the water closes around it, otherwise it will be dragged down.

Glasheen and McMahon showed that thanks to the basilisk’s speed and large feet, the slap–stroke–recover sequence should generate enough upward thrust to support the lizard’s weight. For Hsieh, however, many questions remained. Staying above the water is only the first challenge: the basilisk also needs to move forward, and it needs to do this while balancing on the ever-changing surface of a liquid.

Like running on a mattress

To tackle these questions, Hsieh built a watery track for her runners using large aquarium tanks. A platform on either end gave her subjects solid starting and finishing lines. Hsieh stood by with a high-speed camera, poised to capture each run. She soon discovered, however, the challenges of running a scientific experiment on live, free-willed lizards.

“I had so many videos of them running across the water and then turning and running smack into the window,” she recalls. For useful data, she therefore had to wait until the lizards decided to run. Eventually, though, Hsieh was able to capture the basilisk foot’s speed and orientation in each phase. To calculate the forces, however, she needed to capture the motion of the water as well as the lizards.

Fluid dynamicists use a technique called particle image velocimetry (PIV) to measure the speed and direction (i.e. the velocity) of flow. For an analogy, think about late-afternoon sunlight slanting through a window. If it falls at a particular angle, the light will illuminate the dust particles in the room, revealing air currents that are otherwise invisible. PIV works the same way.

Hsieh filled her tank with 12-micron glass spheres that matched the water’s density. These tiny particles acted as tracers – just like dust – that followed the same path as the water. To illuminate them, she replaced sunlight with a 1 mm-thick laser sheet focused near the basilisk’s foot. By tracking the particles, Hsieh saw exactly how the lizards accomplished their gravity-defying sprint (PNAS 101 16784).
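
The cross-correlation idea at the heart of PIV can be sketched in a few lines. The frames below are synthetic – random specks standing in for the glass tracers, with a displacement imposed by hand – so this is an illustration of the principle rather than Hsieh's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "tracer" image: random bright specks on a dark background
frame1 = (rng.random((64, 64)) < 0.02).astype(float)

# Second frame: the same specks shifted by a known displacement,
# standing in for the flow carrying the tracers between video frames
true_shift = (3, 5)
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

# Cross-correlate the two frames via FFT; the correlation peak gives the shift
f1 = np.fft.fft2(frame1 - frame1.mean())
f2 = np.fft.fft2(frame2 - frame2.mean())
corr = np.fft.ifft2(f1.conj() * f2).real

peak = np.unravel_index(np.argmax(corr), corr.shape)
# indices beyond half the frame size correspond to negative displacements
shift = tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
print("recovered displacement (rows, cols):", shift)   # expect (3, 5)
```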

The lizards were using the water like a squishy starting block – propelling themselves forwards as well as upwards. During the slap, they did this by angling their feet slightly down, so they met the surface at an angle, and during the stroke they were pushing against the wall of the cavity.

But Hsieh’s biggest surprise was the strong side-to-side forces, ranging from 37–79% of the lizard’s body weight. The basilisks were throwing themselves from side-to-side like a wobbling toddler. “It never occurred to me,” says Hsieh, “that if you can produce enough force to stay on top of water, how are you going to maintain your balance? That’s a really major problem.”

Imagine sprinting on a thick foam mattress. With every step, you’ll wobble as you try to stay balanced. That, Hsieh realized, explained the basilisk’s ungainly sideways motion. “They’re basically tripping every single step, and they’re catching themselves. Everything we take for granted about moving around in the world is actually really, really hard.”

Water running is a young basilisk’s game. As Hsieh’s lizards grew, they became bigger and slower, struggling to support their weight – potentially bad news for aspiring human water runners. However, these lightweight critters don’t have the last word in running on water. In fact, nature’s most successful water runners outweigh the basilisk lizard by a factor of 10.

Water dancers

Western grebes and their near-doppelgängers, Clark’s grebes, are unassuming birds. They have slender, swan-like necks and black and white plumage. Their most striking feature is their red eyes. But behind this unremarkable exterior lurks a water-running powerhouse.

Biologist Glenna Clifton first saw the grebes in the early 2010s in a BBC documentary. At the time she was a PhD student at Harvard, and her animal-behaviour class was discussing mating displays. Like many birds, the western and Clark’s grebes begin their displays with mated pairs mirroring one another. A head shake, a riffle of a beak through plumage. Then the birds extend their long necks, lock eyes and rush (see “It takes two” image).

A pair of western grebes running across water
It takes two By rapidly slapping their large feet, western grebes can run across water. Pairs of grebes perform this feat as part of a mating display. (Courtesy: iStock/Wirestock)

As a synchronized pair, the grebes rise out of the water, feet beating furiously, wings held stationary behind them. With heads proudly raised, the birds run up to 20 metres in just a few seconds. Having studied ballet since the age of three, Clifton was “really sparked and captivated” by this dance. But apart from previous work on basilisk lizards, she found no information on the water-running physics of grebes, so she set out to answer the question herself.

Grebes won’t run on water in a laboratory, so Clifton planned a field study to capture wild grebes rushing. In May 2012 she set out for Oregon’s Upper Klamath Lake with two high-speed cameras and two field assistants. They called themselves the Grebe Squad.

Each morning the group pitched their tents and arranged their equipment on a narrow spit of land between the highway barrier and the lake. Their experiment used two synchronized cameras, placed about 40 metres apart, to view the same birds from different angles. By placing a known object – in their case, a T-shaped calibration wand – in the field of view of both cameras, they could work out the sizes, angles and positions of the grebes (J. Exp. Biol. 218 1235).

This sounds straightforward, but it was anything but. The Grebe Squad spent days on the lake, scanning hundreds of birds for signs of an imminent rushing display. “We usually got about three to seven seconds of warning,” Clifton says, “because they would have a certain look in their eye. They would call to each other…with a certain kind of intensity.” With that scant warning, they coordinated over walkie-talkies, pointed both cameras, manually focused and collected 1.7 seconds of high-speed footage. Then they raced to get the calibration wand to the same spot the birds had been in before either camera moved. An errant elbow, a gust of wind or a sinking tripod would ruin the data.

“Grebes are, arguably, way stronger water-runners than basilisk lizards because they start from within the water,” Clifton explains. “Imagine treading water fast enough to get up out of it. It’s just crazy.” Synchronized swimmers and water-polo players would agree. Typically, a swimmer (or bird) is supported by buoyancy. A floating object displaces a water volume equal to the object’s weight, as described by Archimedes’ principle. In turn, the object feels an upward, buoyant force equal to the displaced water’s weight. As the grebe rises, it displaces less and less water, giving it less and less buoyant force.

Without buoyancy holding it up, the grebe counters its weight the same way a basilisk does – by slapping its feet. And the Grebe Squad’s data revealed that grebes slap a lot. During rushing, Clifton says, “[grebes] take up to 20 steps per second, which is a really high stride rate for animals.” An Olympic sprinter, in contrast, takes about five steps per second.

To estimate the forces a grebe produces, Clifton dropped aluminium models of grebe feet into a laboratory water tank and measured the impact force. The grebes’ feet are proportionally bigger than the basilisk’s and they move faster, producing stronger slaps capable of supporting 30–55% of the grebe’s weight, compared with only 18% for the lizard. If a basilisk lizard were scaled up to the mass of a grebe, its feet would still be 25% smaller in area than the grebe’s.

Larger feet push more water with each slap, but they also require more energy to accelerate and they generate more drag. Is there a balance between foot size and energy expended that would let humans run on water?

Could humans run on water?

Although it was built for basilisk lizards, the model developed by Glasheen and McMahon at Harvard also tells us what it takes for a human to run on water. The idea is simple: to run on water, the total impulse from a slap and stroke must be greater than the impulse needed to support the runner’s weight. Impulse is simply a force multiplied by the time over which it is applied – in this case, the time between steps. With a little algebra and some simple assumptions, you can determine the slap velocity needed, given the runner’s mass and foot area, as well as the time between steps and the depth the foot reaches.

A man in a harness running in a pool of water, and a pair of flippers
All in a day’s work In 2012 a group of Italian researchers used bungee harnesses and flippers to test whether humans could run on water in low-gravity conditions. (Courtesy: CC BY 4.0/Minetti et al. 2012 PLOS ONE 7 e37300)

In their original paper, Glasheen and McMahon calculate that an 80 kg human with an average foot size and a world-class sprinter’s stride rate would have to slap the water at a speed of nearly 30 metres per second to support themselves. Unfortunately, the power needed for a stroke at that speed is almost 15 times greater than a human’s maximum sustained output. In other words, no human can run on water – at least, not on Earth.
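
A crude version of that impulse balance is easy to write down. The sketch below treats the slap as accelerating the "added mass" of water in front of a disc-shaped foot and the stroke as quadratic drag acting over an assumed depth; the foot area, step rate, stroke depth and drag coefficient are illustrative guesses rather than Glasheen and McMahon's published values, but the answer lands in the same ballpark as their figure of nearly 30 m/s.

```python
import math

# A rough version of the slap-plus-stroke impulse balance (not Glasheen and
# McMahon's full model). Per step, the slap accelerates the "added mass" of
# water in front of a disc-shaped foot, (8/3)*rho*r^3, and the stroke acts as
# quadratic drag over an assumed depth. All numbers below are illustrative.

def required_slap_speed(mass, g, rho, foot_area, steps_per_s, depth, cd=0.7):
    impulse_needed = mass * g / steps_per_s      # weight x time between steps
    r = math.sqrt(foot_area / math.pi)           # effective foot radius
    slap = (8 / 3) * rho * r**3                  # slap impulse per unit foot speed
    stroke = 0.5 * cd * rho * foot_area * depth  # stroke impulse per unit foot speed
    return impulse_needed / (slap + stroke)

# 80 kg runner on water, ~0.021 m^2 foot, 5 steps/s, stroke reaching ~0.5 m (all assumed)
u = required_slap_speed(80, 9.81, 1000, 0.021, 5, 0.5)
print(f"required slap speed ≈ {u:.0f} m/s")      # ~30 m/s, the same ballpark as quoted
```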

That’s the theory, but does it stack up in reality? To find out, in 2012 a group led by Alberto Minetti, a physiologist at the University of Milan, studied whether reduced gravity conditions would enable humans to run on water (PLOS One 7 e37300). Their volunteers wore a special harness that reduced their effective weight to a fraction of its Earth-normal amount, along with fins that made their feet as proportionately large as a basilisk’s (see “All in a day’s work” images). Then they attempted to run on the spot in a small, inflatable pool. The video footage is spectacular.

The test subjects look more like cyclists than runners. Their thighs pump up and down, churning the water into splashes higher than their heads. Their legs stroke to a depth a little over halfway to their knees, but it’s clear that they manage to support their reduced weight for the seven to eight seconds the researchers deemed a success.

The team found that everyone could water-run at 10% of Earth’s gravity, but as they adjusted the harness so that the effective gravitational force increased, fewer runners could keep up. At 16% of Earth’s gravity – roughly equivalent to the Moon’s gravity – most of the runners could support themselves. At 22% of Earth’s gravity – still less than that on Mars – only one subject could. The Martian edition of the Space Olympics is unlikely to include water-running.

An image of Saturn’s moon Titan
Running on Titan An image of Saturn’s moon Titan taken by NASA’s Cassini mission. The blue areas show Titan’s ethane and methane lakes. Under Titan’s low-gravity conditions, would it be possible for a human to run across these lakes like the basilisk lizard? (Courtesy: NASA/JPL-Caltech/ASI/USGS)

But water isn’t the only liquid found in our solar system. Titan, Saturn’s largest moon, has lakes and seas comparable to ours, and its gravitational acceleration is only 13.8% of Earth’s. (That’s a little less than our Moon’s.) Unlike Earth’s lakes, Titan’s are made of frigid liquid ethane and methane (see “Running on Titan” image).

So, could a human being – like current women’s 100 m world champion Sha’Carri Richardson – run on Titan’s lakes? Ethane – even at Titan’s 94 kelvin – is less dense than water, so it offers less impulse to runners. But Titan’s lighter gravity counters that.

At 45 kg, Richardson, who is representing the US in Paris, is petite but blisteringly fast. In the 2023 World Athletics Championships she won gold with a championship record of 10.65 seconds in the 100 m. Her UK size five shoes provide a good foot area. When sprinting (on land, admittedly), she takes ~4.6 steps a second. I’ll assume that on ethane she sinks about 8 centimetres – a bit less than the Italian water-runners – during each step.

To stay atop Titan’s ethane, Richardson would have to slap the surface at about 9.0 m/s. That slap would provide more than 60% of her necessary vertical impulse and require running at about 8.7 metres per second (31.2 kilometres per hour). Her world-championship run was significantly faster, at an average of 9.3 metres per second.
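
Feeding Titan numbers into the same crude impulse balance as the earlier sketch – with an assumed liquid density of about 550 kg/m³ for the ethane–methane mix, a 0.025 m² shoe sole and the 8 cm sink depth mentioned above – gives much the same answer.

```python
import math

# Same toy slap-plus-stroke impulse balance as before, with assumed Titan numbers
rho = 550.0                      # kg/m^3, assumed density of the ethane-methane liquid
g = 0.138 * 9.81                 # m/s^2, 13.8% of Earth's gravity
mass, area = 45.0, 0.025         # kg; m^2 (assumed shoe-sole area)
steps_per_s, depth, cd = 4.6, 0.08, 0.7

impulse_needed = mass * g / steps_per_s                  # weight x time between steps
r = math.sqrt(area / math.pi)                            # effective foot radius
slap = (8 / 3) * rho * r**3                              # slap impulse per unit speed
stroke = 0.5 * cd * rho * area * depth                   # stroke impulse per unit speed
print(f"required slap speed on Titan ≈ {impulse_needed / (slap + stroke):.1f} m/s")
# prints roughly 9 m/s, consistent with the estimate above
```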

So quick dashes across Titan’s lakes are theoretically possible – at least for humanity’s fastest. Just make sure to dress warmly and maybe hold your breath.

  • For more from Nicole Sharp about the fluid dynamics of animals, listen to the July edition of the Physics World Stories podcast.

The post Could athletes mimic basilisk lizards and turn water-running into an Olympic sport? appeared first on Physics World.

Matter-wave interferometry puts new limits on ‘chameleon particles’

Par : No Author

A matter-wave interferometer has measured the effect of gravity on individual atoms at the highest precision to date. That is according to its creators in the US and Italy, who built their instrument by combining matter-wave interferometry with the spatial control of optical lattices. The research was led by Cris Panda at the University of California, Berkeley and puts new constraints on some theories of dark energy involving “chameleon particles”.

Matter-wave interferometry is a powerful technique for probing fundamental physics. It takes advantage of wave–particle duality in quantum physics, which says that atoms behave as waves as well as particles.

“Lasers are used to split each atom in a quantum spatial superposition, such that each atom is effectively in two places at once,” Panda explains. “The two parts are then recombined and interfere either constructively, if the two parts are in-phase, or destructively, if they are out of phase.”

Search for new physics

While the atom exists in two places, the phase of each component of the matter wave can be affected differently by external forces such as gravity. As a result, matter-wave interferometry can be used to make extremely precise measurements of these forces, and so to search for deviations from the Standard Model of particle physics and Einstein’s general theory of relativity.

One fruitful area of investigation involves probing the gravitational force by placing a matter-wave interferometer next to a large mass. Atoms are split between locations at two different distances from the mass, allowing a comparison of the gravitational attraction between the atom and mass at two different places.

A shortcoming of such experiments, however, is that the atoms quickly fall out of place under Earth’s gravity, so making measurements longer than a few tens of milliseconds is difficult – limiting accuracy.

Atoms on hold

In 2019, the Berkeley team showed that optical lattices offer a solution by using lasers to hold atoms in position in Earth’s gravitational field. Earlier this year Panda and colleagues managed to hold atoms for 70 s in this way. Now, they have integrated a tungsten mass into their instrument.

Their latest experiment began with a gas of neutral caesium atoms that was cooled to near absolute zero in a magneto-optical trap. Some of the atoms were then transferred to a vertical optical lattice located just below a cylindrical tungsten mass that is about 25 mm in diameter and height (see figure).

A laser pulse was used to put the atoms into a superposition of two different micron-scale distances below the mass. There, they were held until a second pulse recombined the superposition so that interference could be observed.

Information accumulation

“During the hold, the part of the atom in each location accumulates information about the local fields, particularly gravity, which can be read out at the end of the interferometer,” Panda explains. “The hold time can be many seconds and up to one minute, much longer than possible when atoms are falling under Earth’s gravity, which makes this device exceedingly sensitive.”
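
The scale of that sensitivity is easy to estimate. The two parts of the superposition differ in gravitational potential energy by mgΔz, so their relative phase grows at a rate mgΔz/ħ. The numbers below – a micron-scale separation and a 10 s hold, chosen for illustration rather than taken from the paper – already correspond to an enormous accumulated phase.

```python
# Rough estimate of the gravitational phase picked up during the hold (illustrative values)
hbar = 1.054571817e-34               # J s
m_cs = 132.905 * 1.66053907e-27      # kg, mass of a caesium-133 atom
g, dz, t_hold = 9.81, 1e-6, 10.0     # m/s^2; m (assumed separation); s (assumed hold time)

phase = m_cs * g * dz * t_hold / hbar
print(f"accumulated phase difference ≈ {phase:.2e} rad")   # of order 10^5 radians
```

Because the accumulated phase is so large, even a minute change in the local field over the micron-scale separation shifts the interference fringes by a measurable amount.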

While the experiment did not reveal any deviations from Newton’s law of universal gravitation, it did allow the team to put new constraints on some theories of dark energy – which is a hypothetical form of energy that is invoked to explain the universe’s accelerating rate of expansion. Specifically, the team put limits on the possible existence of “chameleon particles”, which are dark energy candidates that couple to normal matter via gravity.

The team is also confident that their technique could have exciting implications for a wide range of research. “Our interferometer opens the way for further applications, such as searches for new theories of physics through precise measurements of fundamental constants,” Panda says. “It could also enable compact and practical quantum sensors: such as gravimeters, gyroscopes, gradiometers, or inertial sensors.”

The research is described in Nature.

The post Matter-wave interferometry puts new limits on ‘chameleon particles’ appeared first on Physics World.
