Higher-order brain function revealed by new analysis of fMRI data

An international team of researchers has developed new analytical techniques that consider interactions between three or more regions of the brain – providing a more in-depth understanding of human brain activity than conventional analysis. Led by Andrea Santoro at the Neuro-X Institute in Geneva and Enrico Amico at the UK’s University of Birmingham, the team hopes its results could help neurologists identify a vast array of new patterns in human brain data.

To study the structure and function of the brain, researchers often rely on network models. In these, nodes represent specific groups of neurons in the brain and edges represent the connections between them, inferred from statistical correlations in their activity.

Within these models, brain activity has often been represented as pairwise interactions between two specific regions. Yet as the latest advances in neurology have clearly shown, the real picture is far more complex.

“To better analyse how our brains work, we need to look at how several areas interact at the same time,” Santoro explains. “Just as multiple weather factors – like temperature, humidity, and atmospheric pressure – combine to create complex patterns, looking at how groups of brain regions work together can reveal a richer picture of brain function.”

Higher-order interactions

Yet with the mathematical techniques applied in previous studies, researchers could not confirm whether network models incorporating these higher-order interactions between three or more brain regions are really more accurate than simpler models that account only for pairwise interactions.

To shed new light on this question, Santoro’s team built upon their previous analysis of functional MRI (fMRI) data, which map brain activity by measuring changes in blood flow.

Their approach combined two powerful tools. One is topological data analysis, which identifies patterns within complex datasets like fMRI, where each data point depends on a large number of interconnected variables. The other is time series analysis, which is used to identify patterns in brain activity that emerge over time. Together, these tools allowed the researchers to identify complex patterns of activity occurring across three or more brain regions simultaneously.
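
To make the distinction concrete, here is a minimal toy sketch (in Python, with hypothetical random data) of the difference between a pairwise and a triplet-wise “co-fluctuation” measure. The study’s actual pipeline uses topological data analysis and is considerably more sophisticated than this.

```python
import numpy as np

# Toy contrast between pairwise and higher-order (triplet) co-fluctuations.
# The data here are random stand-ins for fMRI time series; the published
# analysis uses topological data analysis, which goes well beyond this sketch.
rng = np.random.default_rng(0)
n_timepoints, n_regions = 300, 5
bold = rng.standard_normal((n_timepoints, n_regions))  # fake BOLD signals

# z-score each region's time series
z = (bold - bold.mean(axis=0)) / bold.std(axis=0)

# Pairwise interaction: instantaneous product of two regions ("edge" signal)
pair_01 = z[:, 0] * z[:, 1]

# Higher-order interaction: instantaneous product of three regions
triplet_012 = z[:, 0] * z[:, 1] * z[:, 2]

print("mean pairwise co-fluctuation (regions 0,1):  ", pair_01.mean())
print("mean triplet co-fluctuation (regions 0,1,2): ", triplet_012.mean())
```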

To test their approach, the team applied it to fMRI data taken from 100 healthy participants in the Human Connectome Project. “By applying these tools to brain scan data, we were able to detect when multiple regions of the brain were interacting at the same time, rather than only looking at pairs of brain regions,” Santoro explains. “This approach let us uncover patterns that might otherwise stay hidden, giving us a clearer view of how the brain’s complex network operates as a whole.”

Just as they hoped, this analysis of higher-order interactions provided far deeper insights into the participants’ brain activity compared with traditional pairwise methods. “Specifically, we were better able to figure out what type of task a person was performing, and even uniquely identify them based on the patterns of their brain activity,” Santoro continues.

Distinguishing between tasks

With its combination of topological and time series analysis, the team’s method could distinguish between a wide variety of tasks performed by the participants, including their expression of emotion, use of language, and social interactions.

By building further on their approach, Santoro and colleagues are hopeful it could eventually be used to uncover a vast space of as-yet unexplored patterns within human brain data.

If the approach were tailored to the brains of individual patients, it could ultimately enable researchers to draw direct links between brain activity and physical actions.

“Down the road, the same approach might help us detect subtle brain changes that occur in conditions like Alzheimer’s disease – possibly before symptoms become obvious – and could guide better therapies and earlier interventions,” Santoro predicts.

The research is described in Nature Communications.


Start-stop operation and the degradation impact in electrolysis

This webinar will detail recent efforts in proton exchange membrane-based low temperature electrolysis degradation, focused on losses due to simulated start-stop operation and anode catalyst layer redox transitions. Ex situ testing indicated that repeated redox cycling accelerates catalyst dissolution, due to near-surface reduction and the higher dissolution kinetics of metals when cycling to high potentials. Similar results occurred in situ, where a large decrease in cell kinetics was found, along with iridium migrating from the anode catalyst layer into the membrane. Additional processes were observed, however, including changes in catalyst oxidation, the formation of thinner and denser catalyst layers, and platinum migration from the transport layer coating. Complicating factors, including the loss of water flow and temperature control, were also evaluated and were found to produce higher rates of interfacial tearing and delamination. Current efforts are focused on bridging these studies into more field-relevant testing and include evaluating the possible differences in catalyst reduction through an electrochemical process versus hydrogen exposure, either direct or through crossover. These studies seek to identify degradation mechanisms and voltage loss acceleration, and to demonstrate the impact of operational stops on electrolyzer lifetime.

An interactive Q&A session follows the presentation.

Shaun Alia

Shaun Alia has worked in several areas related to electrochemical energy conversion and storage, including proton and anion exchange membrane-based electrolyzers and fuel cells, direct methanol fuel cells, capacitors, and batteries. His current research involves understanding electrochemical and degradation processes, component development, and materials integration and optimization. Within HydroGEN, a part of the U.S. Department of Energy’s Energy Materials Network, Alia has been involved in low temperature electrolysis through NREL capabilities in materials development and ex situ and in situ characterization. He is also active in in situ durability, diagnostics, and accelerated stress test development for H2@Scale and H2NEW.


Quasiparticles become massless – but only when they’re moving in the right direction

Physicists at Penn State and Columbia University in the US say they have seen the “smoking gun” signature of an elusive quasiparticle predicted by theorists 16 years ago. Known as semi-Dirac fermions, the quasiparticles were spotted in a crystal of the topological semimetal ZrSiS and they have a peculiar property: they only behave like they have mass when they’re moving in a certain direction.

“When we shine infrared light on ZrSiS crystals and carefully measure the reflected light, we observed optical transitions that follow a unique power-law scaling, B^(2/3), with B being the magnetic field,” explains Yinming Shao, a physicist at Penn State and lead author of a study in Physical Review X on the quasiparticle. “This special power-law turns out to be the exact prediction from 16 years ago of semi-Dirac fermions.”

The team performed the experiments using the 17.5 Tesla magnet at the US National High Magnetic Field Laboratory in Florida. This high field was crucial to the result, Shao explains, because applying a magnetic field to a material causes its electronic energy levels to become quantized into discrete (Landau) levels. The energy gap between these levels then depends on the electrons’ mass and the strength of the field.

Normally, the energy levels of the electrons should increase by set amounts as the magnetic field increases, but in this case they didn’t. Instead, they followed the B^(2/3) pattern.
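
For context, the scalings being contrasted here are the standard textbook results for Landau levels, quoted for illustration rather than taken from the paper:

```latex
% Landau-level energies as a function of magnetic field B (standard results):
\begin{aligned}
  E_n^{\text{Schr\"odinger}} &\propto \left(n + \tfrac{1}{2}\right) B, \\
  E_n^{\text{Dirac}}         &\propto \sqrt{n B}, \\
  E_n^{\text{semi-Dirac}}    &\propto \left[\left(n + \tfrac{1}{2}\right) B\right]^{2/3}.
\end{aligned}
```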

Realizing semi-Dirac fermions

Previous efforts to create semi-Dirac fermions relied on stretching graphene (a sheet of carbon just one atom thick) until the material’s two so-called Dirac points touch. These points occur in the region where the material’s valence and conduction bands meet. At these points, something special happens: the relationship between the energy and momentum of charge carriers (electrons and holes) in graphene is described by the Dirac equation, rather than the standard Schrödinger equation as is the case for most crystalline materials. The presence of these unusual band structures (known as Dirac cones) enables the charge carriers in graphene to behave like massless particles.

The problem is that making Dirac points touch in graphene turned out to require an unrealistically high level of strain. Shao and colleagues chose to work with ZrSiS instead because it also has Dirac points, but in this case, they exist continuously along a so-called nodal line. The researchers found evidence for semi-Dirac fermions at the crossing points of these nodal lines.
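
The “massless in one direction only” behaviour can be summarized by the generic semi-Dirac dispersion used in theoretical work (a standard textbook form, shown here for illustration rather than taken from the new study): the spectrum is quadratic, and hence massive, along one momentum direction, but linear, and hence massless, along the other.

```latex
% Generic semi-Dirac dispersion: quadratic along k_x, linear along k_y
E(\mathbf{k}) = \pm\sqrt{\left(\frac{\hbar^{2} k_{x}^{2}}{2m}\right)^{2} + \left(\hbar v\, k_{y}\right)^{2}}
```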

Interesting optical responses

The idea for the study stemmed from an earlier project in which researchers investigating a similar compound, ZrSiSe, spotted some interesting optical responses when they applied a magnetic field to the material out-of-plane. “I found that similar band-structure features that make ZrSiSe interesting would require applying a magnetic field in-plane for ZrSiS, so we carried out this measurement and indeed observed many unexpected features,” Shao says.

The greatest challenge, he recalls, was figuring out how to interpret the observations, since real materials like ZrSiS have a much more complicated Fermi surface than the ones that feature in early theoretical models. “We collaborated with many different theorists and eventually singled out the signatures originating from semi-Dirac fermions in this material,” he says.

The team still has much to understand about the material’s behaviour, he tells Physics World. “There are some unexplained fine electronic energy level-splitting in the data that we do not fully understand yet and which may originate from electronic interaction effects.”

As for applications, Shao notes that ZrSiS is a layered material, much like graphite – a form of carbon that is, in effect, made up of many layers of graphene. “This means that once we can figure out how to obtain a single layer cut of this compound, we can harness the power of semi-Dirac fermions and control its properties with the same precision as graphene,” he says.


NASA’s Parker Solar Probe survives its first close-up solar encounter

NASA has confirmed that its Parker Solar Probe has survived its record-breaking closest approach to the solar surface. The flyby took place on 24 December, when the spacecraft passed some 6.1 million kilometres above the surface of the Sun – well within the orbit of Mercury. A “beacon tone” received on 26 December, together with further telemetry taken on 1 January, confirmed that the spacecraft not only survived but also executed the commands that had been pre-programmed into its flight computers before the flyby.

The Parker Solar Probe – named after the physicist Eugene Parker, who was born in 1927, made several breakthroughs in our understanding of the solar wind and explained why the Sun’s corona is hotter than its surface – was launched in 2018 from NASA’s Kennedy Space Center in Florida.

The mission carries four instruments, including magnetometers, an imager and two dedicated particle analysers. To withstand the intense temperatures, which can reach almost 1400°C, the spacecraft and instruments are protected by an 11.4 cm-thick carbon-composite shield.

During the mission’s seven-year lifespan, it will perform 24 orbits around the Sun with the next close solar passes occurring on 22 March and 19 June. Data transmission from the first pass in December will begin later this month when the spacecraft and its most powerful onboard antenna are in better alignment with Earth to transmit at higher data rates.

“Flying this close to the Sun is a historic moment in humanity’s first mission to a star,” notes Nicky Fox, head of the Science Mission Directorate at NASA headquarters in Washington. “By studying the Sun up close, we can better understand its impacts throughout our solar system, including on the technology we use daily on Earth and in space, as well as learn about the workings of stars across the universe to aid in our search for habitable worlds beyond our home planet.”


Humanitarian engineering can improve cancer treatment in low- and middle-income countries

This episode of the Physics World Weekly podcast explores how the concept of humanitarian engineering can be used to provide high quality cancer care to people in low- and middle-income countries (LMICs). This is an important challenge because today only 5% of global radiotherapy resources are located in LMICs, which are home to the majority of the world’s population.

Our guests are two medical physicists at the University of Washington in the US who have contributed to the ebook Humanitarian Engineering for Global Oncology. They are Eric Ford, who edited the ebook, and Afua Yorke, who along with Ford wrote the chapter “Cost-effective radiation treatment delivery systems for low- and middle-income countries”.

They are in conversation with Physics World’s Tami Freeman.


NASA’s Nancy Grace Roman Space Telescope nears completion

Engineers have successfully integrated key parts of NASA’s $4bn Nancy Grace Roman Space Telescope, marking a significant step towards completion. The space agency has announced that the mission’s payload, which includes the telescope, two instruments and the instrument carrier, has been combined with the spacecraft that will deliver the observatory to its place in space at Lagrangian point L2.

The Roman telescope, which was previously named the Wide-Field Infrared Survey Telescope, was given top priority among large space-based missions in the 2010 US National Academy of Sciences decadal survey.

Since then, however, the telescope has had a difficult existence. During Donald Trump’s first term as US president it was twice given zero funding, only for the US Congress to reinstate its budget.

Roman will be the most stable large telescope ever built, at least 10 times more so than NASA’s James Webb Space Telescope.

NASA’s Nancy Grace Roman Space Telescope (courtesy: NASA/Chris Gunn)

The telescope’s primary instrument is the Wide Field Instrument, a 300-megapixel infrared camera that will give it a deep, panoramic view of the universe. This will be used to study exoplanets, stars, galaxies and black holes, with Roman able to image large areas of the sky 1000 times faster than Hubble while delivering the same sharp, sensitive image quality.

The next steps for the telescope involve installing its solar panels, the aperture cover that shields the telescope from unwanted light, and an “outer barrel assembly” that serves as the telescope’s exoskeleton. Assembly of the observatory should be complete next year, with a launch before May 2027.

“With this incredible milestone, Roman remains on track for launch, and we’re a big step closer to unveiling the cosmos as never before,” notes Mark Clampin, acting deputy associate administrator for the Science Mission Directorate at NASA.


NMR technology shows promise in landmine clearance field trials

Novel landmine detectors based on nuclear magnetic resonance (NMR) have passed their first field-trial tests. Built by the Sydney-based company mRead, the devices could speed up the removal of explosives in former war zones. The company tested its prototype detectors in Angola late last year, finding that they could reliably sense explosives buried up to 15 cm underground — the typical depth of a deployed landmine.

Landmines are a problem in many countries recovering from armed conflict. According to NATO, some 110 million landmines are located in 70 countries worldwide, including Cambodia and Bosnia, despite the conflicts in both nations having ended decades ago. Ukraine is currently the world’s most mine-infested country, with vast swathes of its agricultural land potentially unusable for decades.

Such landmines also continue to kill innocent civilians. According to the Landmine and Cluster Munition Monitor, nearly 2000 people died from landmine incidents in 2023 – double the number compared to 2022 – and a further 3660 were injured. Over 80% of the casualties were civilians, with children accounting for 37% of deaths.

Humanitarian “deminers”, who are trying to remove these explosives, currently inspect suspected minefields with hand-held metal detectors. These devices use magnetic induction coils that respond to the metal components present in landmines. Unfortunately, they react to every random piece of metal and shrapnel in the soil, leading to high rates of false positives.

“It’s not unreasonable with a metal detector to see 100 false alarms for every mine that you clear,” says Matthew Abercrombie, research and development officer at the HALO Trust, a de-mining charity. “Each of these false alarms, you still have to investigate as if it were a mine.” But for every mine excavated, about 50 hours is wasted on excavating false positives, meaning that clearing a single minefield could take months or years.

“Landmines make time stand still,” adds HALO Trust research officer Ronan Shenhav. “They can lie silent and invisible in the ground for decades. Once disturbed they kill and maim civilians, as well as valuable livestock, preventing access to schools, roads, and prime agricultural land.”

Hope for the future

One alternative landmine-detection technology is NMR, which is already widely used to look for underground mineral resources and to scan for drugs at airports. In NMR, atomic nuclei emit a weak electromagnetic signal when exposed to a strong constant magnetic field and a weak oscillating field. As the frequency of the signal depends on the molecule’s structure, every chemical compound has a specific electromagnetic fingerprint.

The problem with using it to sniff out landmines is pervasive environmental radio noise: the electromagnetic signal emitted by the excited molecules is 16 orders of magnitude weaker than the one used to trigger the effect. Digital radio transmissions, electricity generators and industrial infrastructure all produce noise at the same frequency as the one the detectors are listening for. Even thunderstorms produce a radio hum that can spread across vast distances.

The handheld detectors developed by mRead emit radio pulses at frequencies between 0.5 and 5 MHz (courtesy: mRead)

“It’s easier to listen to the Big Bang at the edge of the Universe,” says Nick Cutmore, chief technology officer at mRead. “Because the signal is so small, every interference stops you. That stopped a lot of practical applications of this technique in the past.” Cutmore is part of a team that has been trying to cut the effects of noise since the early 2000s, eventually finding a way to filter out this persistent crackle through a proprietary sensor design.

mRead’s handheld detectors emit radio pulses at frequencies between 0.5 and 5 MHz, which are much higher than the kilohertz-range frequencies used by conventional metal detectors. The signal elicits a magnetic resonance response in atoms of sodium, potassium and chlorine, which are commonly found in explosives. A sensor inside the detector “listens out” for the particular fingerprint signal, locating a forgotten mine more precisely than is possible with conventional metal detectors.
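
As a rough illustration of why such a weak fingerprint can be pulled out of noise at all, the sketch below shows generic coherent signal averaging, in which repeated acquisitions improve the signal-to-noise ratio roughly as the square root of their number. All numbers are arbitrary and this is not a model of mRead’s proprietary processing.

```python
import numpy as np

# Generic coherent-averaging illustration: a weak resonance buried in noise
# becomes visible after summing many repeated acquisitions.
# Arbitrary numbers; not a model of mRead's detector or processing.
rng = np.random.default_rng(1)
fs, duration = 50_000.0, 0.02            # 50 kHz sampling, 20 ms records
t = np.arange(0, duration, 1 / fs)
f_signal = 3_400.0                       # hypothetical resonance frequency (Hz)

def acquire():
    """One noisy record containing a very weak resonance."""
    weak_echo = 0.01 * np.sin(2 * np.pi * f_signal * t)
    noise = rng.standard_normal(t.size)  # noise dominates by a factor of ~100
    return weak_echo + noise

n_averages = 10_000
accum = np.zeros_like(t)
for _ in range(n_averages):
    accum += acquire()
averaged = accum / n_averages

# Compare the spectral peak at f_signal with the background noise floor
for label, record in [("single shot", acquire()), ("10,000 averages", averaged)]:
    spectrum = np.abs(np.fft.rfft(record))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    peak = spectrum[np.argmin(np.abs(freqs - f_signal))]
    floor = np.median(spectrum)
    print(f"{label}: peak-to-floor ratio = {peak / floor:.1f}")
```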

Given that the detected signal is so small, it has to be amplified, but amplification adds noise of its own. The company says it has found a way to make sure the electronics in the detector do not exacerbate the problem. “Our current handheld system only consumes 40 to 50 W when operating,” says Cutmore. “Previous systems have sometimes operated at a few kilowatts, making them power-hungry and bulky.”

Having tested the prototype detectors in a simulated minefield in Australia in August 2024, mRead engineers have now deployed them in minefields in Angola in cooperation with the HALO Trust. As the detectors respond directly to the explosive substance, they almost completely eliminated false positives, allowing deminers to double-check locations flagged by metal detectors before time-consuming digging took place.

During the three-week trial, the researchers also detected mines with a low metal content, which are difficult to spot with metal detectors. “Instead of doing 1000 metal detections and finding one mine, we can isolate those detections very quickly before people start digging,” says Cutmore.

Researchers at mRead plan to return to Angola later this year for further tests. They also want to fine-tune their prototypes and begin working on devices that could be produced commercially. “I am tremendously excited by the results of these trials,” says James Cowan, chief executive officer of the HALO Trust. “With over two million landmines laid in Ukraine since 2022, landmine clearance needs to be faster, safer, and smarter.”


Sun-like stars produce ‘superflares’ about once a century

Stars like our own Sun produce “superflares” around once every 100 years, surprising astronomers who had previously estimated that such events occurred only every 3000 to 6000 years. The result, from a team of astronomers in Europe, the US and Japan, could be important not only for fundamental stellar physics but also for forecasting space weather.

The Sun regularly produces solar flares, which are energetic outbursts of electromagnetic radiation. Sometimes, these flares are accompanied by plasma in events known as coronal mass ejections. Both activities can trigger powerful solar storms when they interact with the Earth’s upper atmosphere, posing a danger to spacecraft and satellites as well as electrical grids and radio communications on the ground.

Despite their power, though, these events are much weaker than the “superflares” recently observed by NASA’s Kepler and TESS missions at other Sun-like stars in our galaxy. The most intense superflares release energies of about 10^25 J, which show up as short, sharp peaks in the stars’ visible light spectrum.

Observations from the Kepler space telescope

In the new study, which is detailed in Science, astronomers sought to find out whether our Sun is also capable of producing superflares, and if so, how often they happen. This question can be approached in two different ways, explains study first author Valeriy Vasilyev, a postdoctoral researcher at the Max Planck Institute for Solar System Research, Germany. “One option is to observe the Sun directly and record events, but it would take a very long time to gather enough data,” Vasilyev says. “The other approach is to study a large number of stars with characteristics similar to those of the Sun and extrapolate their flare activity to our Sun.”

The researchers chose the second option. Using a new method they developed, they analysed Kepler space telescope data on the brightness fluctuations of more than 56,000 Sun-like stars between 2009 and 2013. This dataset, which is much larger and more representative than previous ones because it is based on recent advances in our understanding of Sun-like stars, corresponds to around 220,000 years of solar observations.

The new technique can detect superflares and precisely localize them on the telescope images with sub-pixel resolution, Vasilyev says. It also accounts for how light propagates through the telescope’s optics as well as instrumental effects that could “contaminate” the data.

The team, which also includes researchers from the University of Graz, Austria; the University of Oulu, Finland; the National Astronomical Observatory of Japan; the University of Colorado Boulder in the US; and the Commissariat of Atomic and Alternative Energies of Paris-Saclay and the University of Paris-Cité, both in France, carefully analysed the detected flares. They checked for potential sources of error, such as those originating from unresolved binary stars, flaring M- and K-dwarf stars and fast-rotating active stars that might have been wrongly classified. Thanks to these robust statistical evaluations, they identified almost 3000 bright stellar flares in the population they observed – a detection rate that implies superflares occur roughly once per century per star.
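
The headline rate follows from simple bookkeeping with the rounded figures quoted above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the superflare rate, using the rounded
# figures quoted in the article (not the paper's exact numbers).
n_stars = 56_000          # Sun-like stars analysed
years_observed = 4        # Kepler data from 2009 to 2013
n_superflares = 3_000     # bright flares identified

star_years = n_stars * years_observed            # ~224,000 star-years
years_per_flare = star_years / n_superflares
print(f"~1 superflare per star every {years_per_flare:.0f} years")
# -> roughly once per century per Sun-like star
```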

Sun should also be capable of producing superflares

According to Vasilyev, the team’s results also suggest that solar flares and stellar superflares are generated by the same physical mechanisms. This is important because reconstructions of past solar activity, which are based on the concentrations of cosmogenic isotopes in terrestrial archives such as tree rings, tell us that our Sun occasionally experiences periods of higher or lower solar activity lasting several decades.

One example is the Maunder Minimum, a decades-long period during the 17th century when very few sunspots were recorded. At the other extreme, solar activity was comparatively higher during the Modern Maximum that occurred around the mid-20th century. Based on the team’s analysis, Vasilyev says that “so-called grand minima and grand maxima are not regular but tend to cluster in time. This means that centuries could pass by without extreme solar flares followed by several such events occurring over just a few years or decades.”

It is possible, he adds, that a superflare occurred in the past century but went unnoticed. “While we have no evidence of such an event, excluding it with certainty would require continuous and systematic monitoring of the Sun,” he tells Physics World.  The most intense solar flare in recorded history, the so-called “Carrington event” of September 1859, was documented essentially by chance: “By the time he [the English astronomer Richard Carrington] called someone to show them the bright glow he observed (which lasted only a few minutes), the brightness had already faded.”

Between 1996 and 2002, when instruments provided direct measurements of total solar brightness with sufficient accuracy and temporal resolution, 12 flares with Carrington-like energies were detected. Had these flares been aimed at Earth, it is possible that they would have had similar effects, he says.

The researchers now plan to investigate the conditions required to produce superflares. “We will be extending our research by analysing data from next-generation telescopes, such as the European mission PLATO, which I am actively involved in developing,” Vasilyev says. “PLATO’s launch is due for the end of 2026 and will provide valuable information with which we can refine our understanding of stellar activity and even the impact of superflares on exoplanets.”


Vacuum expertise enables physics research

Whether creating a contaminant-free environment for depositing material or minimizing unwanted collisions in spectrometers and accelerators, vacuum environments are a crucial element of many scientific endeavours. Creating and maintaining very low pressures requires a holistic approach to system design that includes material selection, preparation, and optimization of the vacuum chamber and connection volumes. Measurement strategies also need to be considered across the full range of vacuum to ensure consistent performance and deliver the expected outcomes from the experiment or process.

Developing a vacuum system that achieves the optimal low-pressure conditions for each application, while also controlling the cost and footprint of the system, is a complex balancing act that benefits from specialized expertise in vacuum science and engineering. A committed technology partner with extensive experience of working with customers to design vacuum systems, including those for physics research, can help to define the optimum technologies that will produce the best solution for each application.

Over many years, the technology experts at Agilent have assisted countless customers with configuring and enhancing their vacuum processes. “Our best successes come from collaborations where we take the time to understand the customer’s needs, offer them guidance, and work together to create innovative solutions,” comments John Screech, senior applications engineer at Agilent. “We strive to be a trusted partner rather than just a commercial vendor, ensuring our customers not only have the right tools for their needs, but also the information they need to achieve their goals.”

In his role Screech works with customers from the initial design phase all the way through to installation and troubleshooting. “Many of our customers know they need vacuum, but they don’t have the time or resources to really understand the individual components and how they should be put together,” he says. “We are available to provide full support to help customers create a complete system that performs reliably and meets the requirements of their application.”

In one instance, Screech was able to assist a customer who had been using an older technology to create an ultrahigh vacuum environment. “Their system was able to produce the vacuum they needed, but it was unreliable and difficult to operate,” he remembers. By identifying the problem and supporting the migration to a modern, simpler technology, Screech helped his customer achieve the required vacuum conditions, improve uptime and increase throughput.

Agilent collaborates with various systems integrators to create custom vacuum solutions for scientific instruments and processes. Such customized designs must be compact enough to be integrated within the system, while also delivering the required vacuum performance at a cost-effective price point. “Customers trust us to find a practical and reliable solution, and realize that we will be a committed partner over the long term,” says Screech.

Expert partnership yields success

The company also partners with leading space agencies and particle physics laboratories to create customized vacuum solutions for the most demanding applications. For many years, Agilent has supplied high-performance vacuum pumps to CERN, which created the world’s largest vacuum system to prevent unwanted collisions between accelerated particles and residual gas molecules in the Large Hadron Collider.

Physics focus: The Large Hadron Collider (Courtesy: Shutterstock/Ralf Juergen Kraft)

When engineering a vacuum solution that meets the exact specifications of the facility, one key consideration is the physical footprint of the equipment. Another is ensuring that the required pumping performance is achieved without introducing any unwanted effects – such as stray magnetic fields – into the highly controlled environment. Agilent vacuum experts have the experience and knowledge to engineer innovative solutions that meet such a complex set of criteria. “These large organizations already have highly skilled vacuum engineers who understand the unique parameters of their system, but even they can benefit from our expertise to transform their requirements into a workable solution,” says Screech.

Agilent also shares its knowledge and experience through various educational opportunities in vacuum technologies, including online webinars and dedicated training courses. The practical aspects of vacuum can be challenging to learn online, so in-person classes emphasize a hands-on approach that allows participants to assemble and characterize rough- and high-vacuum systems. “In our live sessions everyone has the opportunity to bolt a system together, test which configuration will pump down faster, and gain insights into leak detection,” says Screech. “We have students from industry and academia in the classes, and they are always able to share tips and techniques with one another.” Additionally, the company maintains a vacuum community as an online resource, where questions can be posed to experts, and collaboration among users is encouraged.

Agilent recognizes that vacuum is an enabler for scientific research and that creating the ideal vacuum system can be challenging. “Customers can trust Agilent as a technology partner,” says Screech. “We can share our experience and help them create the optimal vacuum system for their needs.”


Solid-state nuclear clocks brought closer by physical vapour deposition

Solid-state clock Illustration of how thorium atoms are vaporized (bottom) and then deposited in a thin film on the substrate’s surface (middle). This film could form the basis for a nuclear clock (top). (Courtesy: Steven Burrows/Ye group)

Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.

Today, the world’s most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye’s group at JILA in Boulder, Colorado. These are accurate to within a second in the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.
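
For scale, “a second in the age of the universe” corresponds to a fractional accuracy of roughly two parts in 10^18 (a simple order-of-magnitude conversion, not a figure quoted in the article):

```latex
% One second over the age of the universe (~13.8 billion years):
\frac{1\ \text{s}}{13.8\times 10^{9}\ \text{yr} \times 3.16\times 10^{7}\ \text{s/yr}}
  \approx \frac{1\ \text{s}}{4.4\times 10^{17}\ \text{s}}
  \approx 2\times 10^{-18}
```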

Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately-trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.

Sensitive to new physics

Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.

Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
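
The 8.4 eV figure translates directly into the required laser wavelength via the standard conversion hc ≈ 1240 eV nm:

```latex
% Photon wavelength corresponding to the 8.4 eV nuclear transition:
\lambda = \frac{hc}{E} \approx \frac{1240\ \text{eV nm}}{8.4\ \text{eV}} \approx 148\ \text{nm}
\quad \text{(vacuum ultraviolet)}
```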

That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.

In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”

Layers of thorium fluoride

In the new work, the researchers collaborated with colleagues in Eric Hudson’s group at the University of California, Los Angeles and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, a well-established technique in which a material is evaporated from a hot crucible before condensing onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but a comparable number of thorium atoms per unit area.

The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than observed in the crystal. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.

More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.

Flambaum, who was not involved in the research, says that it marks “a very significant technical advance” in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine structure constant. “As a standard of frequency a solid-state clock is not very good because it’s affected by the environment,” he says. “As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet.”

The research is described in Nature.


Metamaterials hit the market: how the UK Metamaterials Network is turning research into reality

Metamaterials are artificial 3D structures that can provide all sorts of properties not available with “normal” materials. Pioneered around a quarter of a century ago by physicists such as John Pendry and David Smith, metamaterials can now be found in a growing number of commercial products.

Claire Dancer and Alastair Hibbins, who are joint leads of the UK Metamaterials Network, recently talked to Matin Durrani about the power and potential of these “meta-atom” structures. Dancer is an associate professor and a 125th anniversary fellow at the University of Birmingham, UK, while Hibbins is a professor and director of the Centre of Metamaterials Research and Innovation at the University of Exeter, UK.

Metamaterial mentors University of Birmingham materials scientist Claire Dancer (left) and University of Exeter physicist Alastair Hibbins are joint leads of the UK Metamaterials Network. (Courtesy: Claire Dancer; Jim Wileman)

Let’s start with the basics: what are metamaterials?

Alastair Hibbins (AH): If you want to describe a metamaterial in just one sentence, it’s all about adding functionality through structure. But it’s not a brand new concept. Take the stained-glass windows in cathedrals, which have essentially got plasmonic metal nanoparticles embedded in them. The colour of the glass is dictated by the size and the shape of those particles, which is what a metamaterial is all about. It’s a material where the properties we see or hear or feel depend on the structure of its building blocks.

Physicists have been at the forefront of much recent work on metamaterials, haven’t they?

AH: Yes, the work was reignited just before the turn of the century – in the late 1990s – when the theoretical physicist John Pendry kind of recrystallized this idea (see box “John Pendry: metamaterial pioneer”). Based at Imperial College, London, he and others were looking at artificial materials, such as metallic meshes, which had properties that were really different from the metal of which they were comprised.

In terms of applications, why are metamaterials so exciting?

Claire Dancer (CD): Materials can do lots of fantastic things, but metamaterials add a new functionality on top. That could be cloaking or it might be mechanically bending and flexing in a way that its constituent materials wouldn’t. You can, for example, have “auxetic metamaterials” with a honeycomb structure that gets wider – not thinner – when stretched. There are also nanoscale photonic metamaterials, which interact with light in unusual ways.
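
For readers unfamiliar with the term, “auxetic” has a compact textbook definition in terms of Poisson’s ratio (added here for illustration, not part of the interview):

```latex
% Poisson's ratio: transverse strain relative to axial strain under stretching.
% Ordinary materials: nu > 0 (they get thinner when stretched).
% Auxetic structures: nu < 0 (they get wider when stretched).
\nu = -\frac{\varepsilon_{\text{transverse}}}{\varepsilon_{\text{axial}}},
\qquad \nu < 0 \ \text{for auxetic metamaterials}
```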

John Pendry: metamaterial pioneer

Deep thinker John Pendry, whose work on negative refraction underpins metamaterials, was awarded the Isaac Newton medal from the Institute of Physics in 2013 and has often been tipped as a potential future Nobel laureate. (Courtesy: Per Henning/NTNU)

Metamaterials are fast becoming commercial reality, but they have their roots in physics – in particular, a landmark paper published in 2000 by theoretical physicist John Pendry at Imperial College, London (Phys. Rev. Lett. 85 3966). In the paper, Pendry described how a metamaterial could be created with a negative index of refraction for microwave radiation, calculating that it could be used to make a “perfect” lens that would focus an image with a resolution not restricted by the wavelength of light (Physics World September 2001 pp47–51).

A metamaterial using copper rings deposited on an electronic circuit board was built the following year by the US physicist David Smith and colleagues at the University of California, San Diego (Science 292 77). Pendry later teamed up with Smith and others to use negative-index metamaterials to create a blueprint for an invisibility cloak – the idea being that the metamaterial would guide light around an object to be hidden (Science 312 1780). While the mathematics describing how electromagnetic radiation interacts with metamaterials can be complicated, Pendry realized that it could be described elegantly by borrowing ideas from Einstein’s general theory of relativity.

Matin Durrani

What sorts of possible applications can metamaterials have?

CD: There are lots, including some exciting innovations in body armour and protective equipment for sport – imagine customized “auxetic helmets” and protective devices for contact sports like rugby. Metamaterials can also be used in communications, exploiting available frequencies in an efficient, discrete and distinct way. In the optical range, we can create “artificial colour”, which is leading to interesting work on different kinds of glitter and decorative substances. There are also loads of applications in acoustics, where metamaterials can absorb some of the incidental noise that plagues our world.

Have any metamaterials reached the commercial market yet?

AH: Yes. The UK firm Sonnobex won a Business Innovation Award from the Institute of Physics (IOP) in 2018 for its metamaterials that can reduce traffic noise or the annoying “buzz” from electrical power transformers. Another British firm – Metasonixx – won an IOP business award last year for its lightweight soundproofing metamaterial panels. They let air pass through so could be great as window blinds – cutting noise and providing ventilation at the same time.

Sonic boom A spin-out firm from the universities of Bristol and Sussex, Metasonixx is turning metamaterials into commercial reality as noise-abatement products. (Courtesy: Metasonixx Sonoblind Air)

High-end audio manufacturers, such as KEF, are using metamaterials as part of the baffle behind the main loudspeaker. There’s also Metahelios, which was spun out from the University of Glasgow in 2022. It’s making on-chip, multi-wavelength pixelated cameras that are also polarization-sensitive and could have applications in defence and aerospace.

The UK has a big presence in metamaterials, but the US is strong too, isn’t it?

AH: Perhaps the most famous metamaterial company is Metalenz, which makes flat conformal lenses for mobile phones – enabling amazing optical performance in a compact device. It was spun off in 2021 from the work of Federico Capasso at Harvard University. You can already find its products in Apple and Samsung phones and they’re coming to Google’s devices too.

Other US companies include Kymeta, which makes metamaterial-based antennas, and Lumotive, which is involved in solid-state LIDAR systems for autonomous vehicles and drones. There’s also Echodyne and Pivotal Commware. Those US firms have all received a huge amount of start-up and venture funding, and are doing really well at showing how metamaterials can make money and sell products.

What are the aims of the UK Metamaterials Network?

CD: One important aim is to capitalize on all the work done in this country, supporting fundamental discovery science but driving commercialization too. We’ve been going since 2021 and have grown to a community of about 900 members – largely UK academics but with industry and overseas researchers too. We want to provide outsiders with a single source of access to the community and – as we move towards commercialization – develop ways to standardize and regulate metamaterials.

As well as providing an official definition of metamaterials (see box “Metamaterials: the official definition”), we also have a focus on talent and skills, trying to get the next generation into the field and show them it’s a good place to work.

How is the UK Metamaterials Network helping get products onto the market?

CD: The network wants to support the beginning of the commercialization process, namely working with start-ups and getting industry engaged, hopefully with government backing. We’ve also got various special-interest groups, focusing on the commercial potential of acoustic, microwave and photonics materials. And we’ve set up four key challenge areas that cut across different areas of metamaterials research: manufacturing; space and aviation; health; and sustainability.

Metamaterials: the official definition

One of the really big things the UK Metamaterials Network has done is to crowdsource the definition of a metamaterial, which has long been a topic of debate. A metamaterial, we have concluded, is “a 3D structure with a response or function due to collective effects of their building blocks (or meta-atoms) that is not possible to achieve conventionally with any individual constituent material”.

A huge amount of work went into this definition. We talked with the community and there was lots of debate about what should be in and what should be out. But I think we’ve emerged with a really nice definition there that’s going to stay in place for many years to come. It might seem a little trivial but it’s one of our great achievements.

Alastair Hibbins

What practical support can you give academics?

CD: The UK Metamaterials Network has been funded by the Engineering and Physical Sciences Research Council to set up a Metamaterials Network Plus programme. It aims to develop more research in these areas so that metamaterials can contribute to national and global priorities by, for example, being sustainable and ensuring we have the infrastructure for testing and manufacturing metamaterials on a large scale. In particular, we now have “pump-prime” funding that we can distribute to academics who want to explore new applications of – and other research into – metamaterials.

What are the challenges of commercializing metamaterials?

CD: Commercializing any new scientific idea is difficult and metamaterials are no exception. But one issue with metamaterials is to ensure industry can manufacture them in big volumes. Currently, a lot of metamaterials are made in research labs by 3D printing or by manually sticking and gluing things together, which is fine if you just want to prove some interesting physics. But to make metamaterials in industry, we need techniques that are scalable – and that, in turn, requires resources, funding, infrastructure and a supply of talented, skilled workers. The intellectual property also needs to be carefully managed as much of the underlying work is done in collaborations with universities. If there are too many barriers, companies will give up and not bother trying.

Looking ahead, where do you think metamaterials will be a decade from now?

AH: If we really want to fulfil their potential, we’d ideally fund metamaterials as a national UK programme, just as we do with quantum technology. Defence has been one of the leaders in funding metamaterials because of their use in communications, but we want industry more widely to adopt metamaterials, embedding them in everyday devices. They offer game-changing control and I can see metamaterials in healthcare, such as for artificial limbs or medical imaging. Metamaterials could also provide alternatives in the energy sector, where we want to reduce the use of rare-earth and other minerals. In space and aerospace, they could function as incredibly lightweight, but really strong, blast-resistant materials for satellites and satellite communications, developing more capacity to send information around the world.

How are you working with the IOP to promote metamaterials?

AH: The IOP has an ongoing programme of “impact projects”, informed by the physics community in the UK and Ireland. Having already covered semiconductors, quantum tech and the green economy through such projects, the IOP is now collaborating with the UK Metamaterials Network on a “pathfinder” impact project. It will examine the commercialization and exploitation of metamaterials in ICT, sustainability, health, defence and security.

Have you been able to interact with the research community?

CD: We’ve so far run three annual industry events showcasing the applications of metamaterials. The first two were at the National Physical Laboratory in Teddington, and in Leeds, with last year’s held at the IOP in December. It included a panel discussion about how to overcome barriers to commercialization along with demonstrations of various technologies, and presentations from academics and industrialists about their innovations. We also discussed the pathfinder project with the IOP as we’ll need the community’s help to exploit the power of metamaterials.

What’s the future of the UK Metamaterials Network?

AH: It’s an exciting year ahead working with the IOP and we want to involve as many new sectors as possible. We’re also likely to hit a thousand members of our network: we’ll have a little celebration when we reach that milestone. We’ll be running a 2025 showcase event as well so there’s a lot to look forward to.

  • This article is an edited version of an interview on the Physics World Weekly podcast of 5 December 2024


Moonstruck: art and science collide in stunning collection of lunar maps and essays

As I write this [and don’t tell the Physics World editors, please] I’m half-watching out of the corner of my eye the quirky French-made, video-game spin-off series Rabbids Invasion. The mad and moronic bunnies (or, in a nod to the original French, Les Lapins Crétins) are presently making another attempt to reach the Moon – a recurring yet never-explained motif in the cartoon – by stacking up a vast pile of junk; charming chaos ensues.

As explained in LUNAR: A History of the Moon in Myths, Maps + Matter – the exquisite new Thames & Hudson book that presents the stunning Apollo-era Lunar Atlas alongside a collection of charming essays – madness has long been associated with the Moon. One suspects there was a good kind of mania behind the drawing up of the Lunar Atlas, a series of geological maps plotting the rock formations on the Moon’s surface that are as much art as they are a visualization of data. And having drooled over LUNAR, truly the crème de la crème of coffee table books, one cannot fail but to become a little mad for the Moon too.

Many faces of the Moon

As well as an exploration of the Moon’s connections (both etymologically and philosophically) to lunacy by science writer Kate Golembiewski, the varied and captivating essays of 20 authors collected in LUNAR cover the gamut from the Moon’s role in ancient times (did you know that the Greeks believed that the souls of the dead gather around the Moon?) through to natural philosophy, eclipses, the space race and the Artemis Programme. My favourite essays were the more off-beat ones: the Moon in silent cinema, for example, or its fascinating influence on “cartes de visite”, the short-lived 19th-century miniature images whose popularity was boosted by Queen Victoria and Prince Albert. (I, for one, am now quite resolved to have my portrait taken with a giant, stylised, crescent moon prop.)

At the heart of LUNAR, however, are the breathtaking reproductions of all 44 of the exquisitely hand-drawn 1:1,000,000 scale maps – or “quadrangles” – that make up the US Geological Survey (USGS)/NASA Lunar Atlas.

Drawn up between 1962 and 1974 by a team of 24 cartographers, illustrators, geographers and geologists, the astonishing Lunar Atlas captures the entirety of the Moon’s near side: every crater, every lava-filled mare (“sea”), every terra (highland) and volcanic dome. The work began as a way to guide the robotic and human exploration of the Moon’s surface and was soon augmented with images and rock samples from the missions themselves.

One would be hard-pushed to sum it up better than the American science writer Dava Sobel, who pens the book’s foreword: “I’ve been to the Moon, of course. Everyone has, at least vicariously, visited its stark landscapes, driven over its unmarked roads. Even so, I’ve never seen the Moon quite the way it appears here – a black-and-white world rendered in a riot of gorgeous colours.”

Many moons ago

Having been trained in geology, the sections of the book covering the history of the Lunar Atlas piqued my particular interest. The Lunar Atlas was not the first attempt to map the surface of the Moon; one of the reproductions in the book shows an earlier effort from 1961 drawn up by USGS geologists Robert Hackman and Eugene Shoemaker.

Hackman and Shoemaker’s map shows the Moon’s Copernicus region, named after its central crater, which in turn honours the Renaissance-era Polish polymath Nicolaus Copernicus. It served as the first demonstration that the geological principles of stratigraphy (the study of rock layers) as developed on the Earth could also be applied to other bodies. The duo started with the law of superposition; this is the principle that when one finds multiple layers of rock, unless they have been substantially deformed, the older layer will be at the bottom and the youngest at the top.

“The chronology of the Moon’s geologic history is one of violent alteration,” explains science historian Matthew Shindell in LUNAR’s second essay. “What [Hackman and Shoemaker] saw around Copernicus were multiple overlapping layers, including the lava plains of the maria […], craters displaying varying degrees of degradations, and materials and features related to the explosive impacts that had created the craters.”

From these the pair developed a basic geological timeline, unpicking the recent history of the Moon one overlapping feature at a time. They identified five eras, with the Copernican, named after the crater and beginning 1.1 billion years ago, being the most recent.

Considering it was based on observations of just one small region of the Moon, their timescale was remarkably accurate, Shindell explains, although subsequent observations have redefined its stratigraphic units – for example by adding the Pre-Nectarian as the earliest era (predating the formation of Nectaris, the oldest basin), whose rocks can still be found broken up and mixed into the lunar highlands.

Accordingly, the different quadrangles of the atlas very much represent an evolving work, developing as lunar exploration progressed. Later maps tended to be more detailed, reflecting a more nuanced understanding of the Moon’s geological history.

New moon

Parts of the Lunar Atlas have recently found new life in the development of the first-ever complete map of the lunar surface, the “Unified Geologic Map of the Moon”. The new digital map combines the Apollo-era data with that from more recent satellite missions, including the Japan Aerospace Exploration Agency (JAXA)’s SELENE orbiter.

As former USGS Director and NASA astronaut Jim Reilly said when the unified map was first published back in 2020: “People have always been fascinated by the Moon and when we might return. So, it’s wonderful to see USGS create a resource that can help NASA with their planning for future missions.”

I might not be planning a Moon mission (whether by rocket or teetering tower of clutter), but I am planning to give the stunning LUNAR pride of place on my coffee table next time I have guests over – that’s how much it’s left me, ahem, “over the Moon”.

  • 2024 Thames and Hudson 256pp £50.00
