Physics World

Physics in Ukraine: scientific endeavour lives on despite the Russian invasion

20 May 2024, 10:00
Photographs of researchers and scientific facilities in Kharkiv, Ukraine, taken in November 2023. Read the article below to hear from photographer Eric Lusito about his experiences documenting the effects of the Russian invasion on Ukraine’s physics community. (All images courtesy: Eric Lusito)

Kharkiv, Ukraine’s second-largest city, has a long history as a world cradle of physics. The first experiments in the Soviet Union on nuclear fission were conducted there in the 1930s, and in 1932 the future Nobel-prize-winning physicist Lev Landau founded an influential school of theoretical physics in the city. Over the years, Kharkiv has been visited by influential scientists including Niels Bohr, Paul Dirac and Paul Ehrenfest.

Kharkiv is still home to many research institutes but, located just 30 km from the Russian border, the city has been heavily damaged and has suffered hundreds of casualties since the Russian invasion began in February 2022. In the past week, advances by Russian forces have put the region under increased pressure, with thousands of civilians fleeing the border area towards Kharkiv.

I first travelled to Kharkiv in November 2021 as part of a photography project documenting Soviet-era scientific facilities. The Russian army was massing at the border, but I was nevertheless stunned when, having returned home to France, I heard the news of the invasion.

I considered abandoning my project, but I felt compelled to document what was happening, and in November 2023 I returned to Kharkiv. Though the Ukrainian counter-offensive had pushed back the Russian forces, the city was plunged into darkness, the streets deserted. The day after I arrived, explosions rang out in the city centre.

For two weeks, I photographed Kharkiv’s scientific facilities, many of which had been destroyed or badly damaged since my previous visit. I also interviewed and photographed scientists who were continuing to perform their research despite the ongoing war. As the situation in Kharkiv becomes increasingly critical, the pictures I took stand as a record of the effects of the conflict on the people of Ukraine.

Department of Solid State Physics and Condensed Matter, Kharkiv Institute of Physics and Technology, November 2021 and November 2023

Even when I was there in 2021, visiting the old cryogenics laboratory of the Kharkiv Institute of Physics and Technology (KIPT) was like stepping back in time (photo 1). Founded in 1928 as the Ukrainian Physics and Technology Institute, KIPT is one of the oldest and largest physical science research institutes in Ukraine. New facilities have been built on the outskirts of the city, but the original buildings are still standing and, because of the historical value of the site, a project is underway to convert them into a museum.

Photo 1 Cryogenics laboratory at the Department of Solid State Physics and Condensed Matter, Kharkiv Institute of Physics and Technology, November 2021. (Courtesy: Eric Lusito)
Photo 2 Alexander Mats, pictured next to an electron microscope, at the Department of Solid State Physics and Condensed Matter, Kharkiv Institute of Physics and Technology, November 2023. (Courtesy: Eric Lusito)

When I returned in 2023, the corridors were silent, the heating was shut off and all the doors were closed. However, a few of the offices were still occupied by researchers in KIPT’s Department of Solid State Physics and Condensed Matter (photos 2, 3). During my visit, I also met students from the Kharkiv National University who were visiting with their teacher, Alexander Gapon (photo 4). Their faculty had been damaged in the early days of the fighting, and they had moved lab classes to KIPT so that teaching could continue.

Photo 3 Vladimir Kloshko, ultrasound specialist at the Department of Solid State Physics and Condensed Matter, Kharkiv Institute of Physics and Technology, November 2023. (Courtesy: Eric Lusito)
Photo 4 Alexander Gapon, Kharkiv National University, November 2023. (Courtesy: Eric Lusito)

Plasma Physics Institute, Kharkiv Institute of Physics and Technology, November 2023

The Plasma Physics Institute houses two huge stellarators for studying nuclear fusion. These machines confine plasma under high magnetic fields (photo 5), creating the hot, dense conditions necessary for nuclei to fuse.

Photo 5 The stellarator Uragan-2M at the Plasma Physics Institute, Kharkiv Institute of Physics and Technology. (Courtesy: Eric Lusito)

I met Igor Garkusha, the director of the institute, who showed me where the roof of the facility had been pierced by projectiles during the early days of the Russian invasion. Debris was still scattered everywhere and, in the room housing the stellarators, he held up a piece of shrapnel (photo 6). The roof played its protective role and the stellarators did not suffer any damage, but with research temporarily halted, Garkusha worried that the institute was at risk of losing skills.

Photo 6 Igor Garkusha holding a piece of shrapnel in the stellarator room of the Plasma Physics Institute, Kharkiv Institute of Physics and Technology. (Courtesy: Eric Lusito)

“I never thought the Russians would wage war on us because I have a lot of family and friends on the other side,” Garkusha said. “I attended a nuclear fusion conference in London organized by the IAEA (International Atomic Energy Agency) in October. There were Russian scientists whom I’ve known for 20 or 30 years, and not one of them spoke to me.”

Photo 7 Damaged dormitories at the Kharkiv Institute of Physics and Technology, in the same neighbourhood as the Plasma Physics Institute. (Courtesy: Eric Lusito)

The Institute for Scintillation Materials, Kharkiv, November 2023

“I can’t complain: we get a lot of orders,” says Borys Grynyov (photo 8), director of the Institute for Scintillation Materials. Scintillation crystals, which emit light when exposed to radiation, are a vital component of particle detectors, and even though some of its buildings have been destroyed, the institute continues to participate in CERN research programmes.

Photo 8 Borys Grynyov, director of the Institute for Scintillation Materials, Kharkiv. (Courtesy: Eric Lusito)

For several months at the beginning of the war, around 50 people – members of staff and their families – lived in the basement of the facility. The teams were eventually able to move the equipment to better-protected rooms, which allowed production of the scintillation crystals to resume (photo 9).

Photo 9 A technician processing alkali-halide scintillation crystals at the Institute for Scintillation Materials, Kharkiv. (Courtesy: Eric Lusito)

Kharkiv Polytechnic Institute, November 2023

At around 5.30 a.m. on 19 August 2022 a missile struck a university building of the Kharkiv Polytechnic Institute (photo 10). From 8 a.m., Kseniia Minakova, head of the optics and photonics laboratory, searched through the rubble for what could be salvaged. Much of the equipment had been destroyed, but she and her colleagues found some microscopes, welding equipment and computers. After a few days, heavier equipment such as vacuum pumps could be evacuated.

Photo 10 The university building of the Kharkiv Polytechnic Institute that was destroyed on 19 August 2022. (Courtesy: Eric Lusito)

I met Minakova at the institute, where she showed me her small temporary laboratory (photo 11). Her research is dedicated to solar energy and improving heat dissipation in photovoltaic systems. To enable Minakova’s team to continue working, Tulane University in the US, where they have collaborators, donated photovoltaic cells as well as a new 3D printer, screens, oscilloscopes and other equipment.

Photo 11 Kseniia Minakova, from the Kharkiv Polytechnic Institute, pictured with equipment recovered from the rubble of the laboratory. (Courtesy: Eric Lusito)

But what she was most grateful for was at the back of the room, where her team was huddled around a solar simulator donated by a Canadian company (photo 12). “I explained to them what I needed and they came up with a technical solution for our laboratory and completely assembled a new installation for us from scratch. This equipment is worth $60,000!” exclaimed Minakova.

Photo 12 Kseniia Minakova and her colleagues around the solar simulator. (Courtesy: Eric Lusito)

In 2023 she was appointed ambassador of the Optica foundation and a finalist for the L’Oréal-UNESCO “For Women in Science” prize. She has received several offers to work abroad. “I turned them down. My place is here, in Ukraine. If everyone leaves, who will take care of rebuilding our country?”

The post Physics in Ukraine: scientific endeavour lives on despite the Russian invasion appeared first on Physics World.


Zurich Instruments launch their SHF+ series platform for quantum computing technologies

17 May 2024, 16:22

In this short video, filmed at the 2024 March Meeting of the American Physical Society earlier this year, Vikrant Mahajan, president of Zurich Instruments USA, outlines the company’s new products to support scientists in quantum computing.

Mahajan explains how Zurich Instruments, which has more than 10 locations around the globe, wants to move quantum computing forward with its products, such as its newly released SHF+ series platform. “Our mission is to help build a practical quantum computer,” he says.

Moritz Kirste, head of business development quantum technologies, then describes the main features of the SHF+ product line, which are the building blocks for its quantum computing control systems. “It provides qubit control and read-out functionality of any chip size from one to hundreds of qubits,” he says.

Next up is Linsey Rodenbach, application scientist quantum technologies, who explains that the SHF+ offers better qubit performance metrics – and therefore higher algorithm fidelities. The new platform has lower noise, which means less measurement-induced dephasing, thereby boosting phase coherence between control pulses, especially for long-lived qubits.

As Rodenbach adds, the SHF+ provides a route both for developing high-quality qubits and for operating large quantum processing units, with Zurich Instruments partnering with some of the world’s leading labs to ensure that the technical specifications of the SHF+ deliver the desired performance benefits.

Get in touch with their team to discuss your specific requirements and collaborate on advancing quantum technology.

The post Zurich Instruments launch their SHF+ series platform for quantum computing technologies appeared first on Physics World.


Venus is losing water much faster than previously thought, study suggests

17 May 2024, 12:20

Venus could be shedding water to space at a much faster rate than previously thought. That is the conclusion of researchers in the US, who have identified a mechanism in the Venusian ionosphere that could be involved in water loss.

How much water Venus had in the past is uncertain, with some planetary scientists suggesting that the planet may have once had oceans that eventually evaporated as the Venusian greenhouse effect began to run away with itself. Today, only 0.002% of the planet’s atmosphere is composed of water vapour. If condensed on Venus’ surface, this water would form a global equivalent layer (GEL) just 3 cm deep – compared to Earth’s GEL of 3 km.
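The scale of this disparity is easy to check from the figures above. A minimal sketch, using only the 3 cm and 3 km GEL values quoted in the article:

```python
# Global equivalent layer (GEL) depths quoted in the article
venus_gel_m = 0.03    # 3 cm: Venus' atmospheric water vapour, condensed on the surface
earth_gel_m = 3000.0  # 3 km: Earth's equivalent layer

# Ratio of water inventories per unit surface area
ratio = venus_gel_m / earth_gel_m
print(f"Venus holds about {ratio:.0e} of Earth's water per unit area")  # → 1e-05
```

In other words, per unit area Venus today retains roughly a hundred-thousandth of Earth's water.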

If Venus began life with a large amount of water, then it has lost almost all of it. Current thinking is that non-thermal hydrogen escape is responsible. This process involves solar radiation splitting water molecules into oxygen and hydrogen. Being lightweight, some of this hydrogen will then escape into space and be swept away by the solar wind.

Lost forever

“Once a hydrogen atom has gone, Venus has lost, in some sense, a water molecule forever,” says Mike Chaffin of the University of Colorado Boulder.

In recent years, Chaffin’s team have explored water loss from planetary atmospheres via a different mechanism involving the formyl cation (HCO+). This ion is a product of molecular recombination in a planetary ionosphere after molecules such as water and carbon dioxide are broken apart by solar radiation. In their model, Chaffin and his colleagues describe the dissociative recombination of HCO+, whereby an electron that has been liberated when an atom or molecule is ionized then collides with the ion, splitting HCO+ apart into carbon monoxide (CO) and an energetic hydrogen atom. This “hot hydrogen” has enough energy to escape the planet’s gravity.

In 2023, Chaffin and colleague Bethan Gregory found that HCO+ dissociative recombination is responsible for 5–50% of the water loss from Mars’ ionosphere.

However, when Chaffin, Gregory and Eryn Cangi applied the same mechanism to Venus’ atmosphere, their models showed that HCO+ dissociative recombination must be the dominant form of water loss on the planet, doubling the previously calculated rate of water loss.

Lack of data

The problem is, HCO+ has yet to be detected on Venus.

“I worried about this a lot when we were preparing our paper [describing our results],” Chaffin told Physics World. “Based on our modelling work, there should be a lot more HCO+ on Venus than we thought previously, but how much can we trust our model?”

Chaffin says that the uncertainty is related to NASA’s Pioneer Venus Orbiter, which has been the only space mission so far with an instrument capable of probing Venus’ ionosphere. Launched in 1978, Pioneer was not specifically designed to detect HCO+.

“This has been a gap in measurements of Venus,” says Chaffin, although he does say that by extrapolating from the Pioneer results, one can infer a water-loss rate that is “in the same ballpark” as that predicted by the HCO+ mechanism. Chaffin takes this as indirect confirmation of the model.

Chaffin’s confidence is backed up by Janet Luhmann at the Space Sciences Laboratory at the University of California, Berkeley. “So long as [Chaffin and colleagues’] assumptions are accurate, there is no reason I can see to dismiss this concept,” she says.

Water sources

If true, the HCO+ dissociative recombination model somewhat changes the history of water on Venus. If Venus did have oceans, then some of its surviving water vapour will originate from those oceans, some will come from outgassing via volcanoes, and the remainder will have arrived through comet and asteroid impacts.

Because the HCO+ mechanism removes water so efficiently, the time needed to lose a given amount of water to space is compressed – meaning that if Venus did once have oceans, they could have survived on the surface for longer before drying out. If Venus never had oceans, then the rate of outgassing, the rate of impacts, or both must have been higher to keep pace with the speed of water loss.

Unfortunately, forthcoming missions to Venus may not be able to confirm the presence of HCO+. Neither Europe’s Envision mission, planned to launch in 2031, nor NASA’s DAVINCI spacecraft, which will blast off in the late 2020s, will study the ionosphere.

However, Sanjay Limaye of the University of Wisconsin, Madison points out that Russia has proposed an instrument for India’s planned Venus orbiter that may be able to detect HCO+ around 2031.

Luhmann, who specializes in studying the interaction between the solar wind and planetary atmospheres, thinks there might be another way. The escaping hot hydrogen from the HCO+ dissociative recombination process is moving fast. Whereas Chaffin believes it will be too fast to detect, Luhmann is not so ready to dismiss it.

“In-situ measurements of hydrogen pickup ions may be useful if they are sensitive enough and can distinguish the characteristic initial energies of the escaping neutrals before they are ionized,” she says, pointing to previous work on Mars that has accomplished the same thing.

The research is described in Nature.

The post Venus is losing water much faster than previously thought, study suggests appeared first on Physics World.


Next-generation quantum sensors detect human biomagnetism

16 May 2024, 17:30

Anna Kowalczyk, an assistant professor in the University of Birmingham’s Centre for Human Brain Health, has set out to develop new tools that could help neuroscientists do their work better.

“Our motivation is to use transcranial magnetic stimulation with a brain imaging technique called magnetoencephalography, and this is a very challenging task, because commercial sensors may not work well in this setup,” Kowalczyk says. “I wanted to start developing quantum sensors for brain imaging, and I basically started from scratch.”

Transcranial magnetic stimulation uses magnetic fields to stimulate neurons in the brain by placing a magnetic coil against the scalp and generating brief magnetic pulses that induce electrical currents in the brain. Magnetoencephalography measures the magnetic fields produced by this resultant activity and can be used to map brain activity and understand the relationships between brain regions and their functions.

Optically pumped magnetometers (OPMs) are emerging as preferred sensors for biomagnetism applications. Commercial OPMs used for magnetoencephalography are “zero-field sensors”, meaning that they can detect very small magnetic fields, often at femtotesla-level sensitivities. Many zero-field sensors are sensitive to environmental magnetic fields and so require extensive shielding systems and noise-cancellation techniques.

Kowalczyk’s research team, led by Harry Cook, a physics graduate student at the University of Birmingham, took a different approach with their sensors. “We decided to make use of slightly different physics to develop a sensor that can work in higher magnetic fields with great precision,” says Kowalczyk.

Their prototype sensor is an optically pumped magnetic gradiometer (OPMG) that operates by nonlinear magneto-optical rotation: linearly polarized light passes through a rubidium vapour, preparing the atoms in the vapour to be magnetically sensitive. When an external magnetic field changes, the frequency of atoms’ precession around the magnetic field changes, which is in turn detected with laser light. In other words, changes in the magnetic field are tracked by measuring changes in light properties.

With one “cell” of rubidium vapour, the researchers can measure an external magnetic field locally. With two cells, they can directly measure the difference, or gradient, in the magnetic field (i.e., their sensor becomes a gradiometer).
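The benefit of the two-cell arrangement can be sketched numerically. In this illustrative toy model (the signal shapes and amplitudes are hypothetical, not taken from the paper), a distant background source appears almost identically in both cells, so subtracting one reading from the other cancels it while preserving a local signal seen by only one cell:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 2000)

# Hypothetical local signal (e.g. a cardiac-like oscillation) seen only by cell 1
local_signal = 1.0 * np.sin(2 * np.pi * 8 * t)

# Much larger common-mode background (e.g. a building lift), seen equally by both cells
background = 50.0 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 0.01, t.size)

cell1 = background + local_signal  # cell close to the source
cell2 = background                 # reference cell

gradient = cell1 - cell2  # differencing rejects the common-mode field

# The gradient channel recovers the local signal despite the 50x larger background
print(np.allclose(gradient, local_signal))  # → True
```

Real gradiometers only approximate this perfect cancellation – the background is never exactly identical at the two cells – but the sketch captures why the Birmingham team’s gradient channel stayed “almost flat” while a single reference sensor recorded every disturbance.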

After designing and prototyping the sensor, the researchers tested its performance by characterizing an auditory evoked response in the human brain, a common benchmark for OPMs. Next, they set out to measure the field from a human heartbeat. During recording, the building lifts were running and the team recorded the resulting magnetic disturbance using commercial sensors.

“[In our experimental setup] we’re inside this metal box, basically, but the [external field] is still many times larger than a human heartbeat, let alone the human brain. So we wanted to see how well our sensor could measure the gradient from the heart while rejecting the unwanted magnetic field,” Cook explains. “It wasn’t a perfect setup, but we could clearly see that there’s a human heartbeat that’s well-characterized from a magnetic perspective…Our gradient method remained almost flat during this measurement, while the reference sensor showed all the variations.”

The researchers concluded that the OPMG could be explored further for biomagnetism applications and are pursuing two directions for research going forward: improving the current sensor; and developing sensors that could measure multiple types of neural activity simultaneously.

“We need to make the sensor more robust…and we need to make more sensors. The main motivation is to make it work with transcranial magnetic stimulation, and we also want to make hybridized quantum sensors that can measure two kinds of neural activities at the same time,” says Kowalczyk, referencing functional near infrared spectroscopy, another optical method for measuring brain activity. “Each method would bring different kinds of information. We’re not saying that our sensor is better than commercial sensors, it’s just different. Our aim is to develop technology that enables new approaches and new capabilities in neuroscience and beyond.”

Initial results from the OPMG prototype are published in Quantum Science and Technology.

The post Next-generation quantum sensors detect human biomagnetism appeared first on Physics World.


Decimal time: life in a world where our days are divided differently

15 May 2024, 12:00

In an era where conspiracy theories run amok, this fictional thriller invites more than just the curiosity of its adolescent target audience. Written by the London-based author Sam Sedgman – a self-styled “nerd and enthusiastic ferroequinologist” – The Clockwork Conspiracy is a story about the potential impact of a new law to decimalize time in the UK. According to Miriam – one of the MPs in the novel – the law will “simplify the way we measure things [and] make the UK a leader in scientific research”.

The plan sounds innocuous enough: a day will be divided into 10 hours, each of 100 minutes, with each minute being 100 seconds. The proposed new law is not entirely in the realms of fiction – after all, decimal time was the legal standard in France during the French Revolution.
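The arithmetic behind the proposal is simple: a decimal day has 10 × 100 × 100 = 100,000 decimal seconds against the standard 86,400, so one decimal second lasts 0.864 standard seconds. A minimal sketch of the conversion (the helper function is our own illustration, not from the book):

```python
def to_decimal_time(hours: int, minutes: int, seconds: int) -> tuple[int, int, int]:
    """Convert a standard clock reading to decimal time (10 h/day, 100 min/h, 100 s/min)."""
    day_fraction = (hours * 3600 + minutes * 60 + seconds) / 86_400
    decimal_seconds = round(day_fraction * 100_000)  # decimal seconds since midnight
    dh, rest = divmod(decimal_seconds, 10_000)  # 100 min x 100 s per decimal hour
    dm, ds = divmod(rest, 100)
    return dh, dm, ds

print(to_decimal_time(12, 0, 0))  # noon → (5, 0, 0)
print(to_decimal_time(18, 0, 0))  # 6 p.m. → (7, 50, 0)
```

Noon lands neatly at 5:00:00, but most familiar times map to awkward values – one reason decimal time never stuck in revolutionary France.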

Before the bill is debated in parliament, however, renowned horologist Diggory disappears from the belfry at Big Ben. His son Isaac begins a quest to find his father, assisted by newly acquainted partner in crime, the rebellious Hattie.

As well as learning about the personalities of the protagonists, readers gain knowledge of topics they might otherwise know little about. Diggory, for example, elucidates the mechanics of clocks, Isaac provides insight into the sciences, and Hattie sheds light on how parliament works as she helps them get to the bottom of Diggory’s disappearance.

Thanks to her penchant for exploration, we are privy to the top-secret meeting of the Timekeepers – the group responsible for monitoring and protecting time – held at none other than the Royal Observatory, home of Greenwich Mean Time and the Prime Meridian. It’s an exciting adventure as our protagonists navigate London to overcome the plot’s obstacles.

To bolster the plausibility of decimal time as a threat to our everyday life, Sedgman refers to various significant historical events, including the suffragette movement, the introduction of the decimal pound in 1971 and the Millennium Bug. Timekeeper Penny explains the threat of the new law to the children: “Time is like water…it can be misused”.

The novel portrays different perspectives surrounding the bill in a nicely balanced way. These include the views of the Timekeepers (who disapprove of the bill), the Machinists (who rebel against it with vandalism) and the parliamentarians (who mysteriously support it). It’s an approach that lets readers appreciate the different views and rationale behind them.

The Clockwork Conspiracy uses fun facts to propel the plot and teach readers about time and the history of timekeeping, without potentially dry information being clumsily shoehorned in. Amid the suspense, the novel highlights the universal importance of friendship, family and personal integrity. Despite being aimed at children, it offers insight into the mechanics of society at large. It’s therefore a book that can be enjoyed by readers of any age.

The post Decimal time: life in a world where our days are divided differently appeared first on Physics World.


Pump–probe microscopy reveals how historical paintings fade

14 May 2024, 17:49
Mellowed yellows Despair is an example of how Edvard Munch used yellows in his paintings. (Edvard Munch 1894. File courtesy: Munchmuseet)

New insights into how a yellow pigment widely used in historical artwork fades over time have been gained by researchers in the US. Using an imaging technique that they had developed to detect skin cancer, Martin Fischer and colleagues at Duke University showed how signs of paint degradation can appear on a microscopic scale, before it is visible to the eye.

As they painted their masterpieces, artists throughout history knew that the colours they used would fade over time. Recently, however, analytical techniques have begun providing insights into the properties of microscopic grains of pigment and why they fade. This allows us to imagine artworks as they looked when they were first painted, and informs how to conserve and restore them.

“Understanding the reason for pigment degradation is extremely important to halt damage that has occurred, prevent damage that has not yet happened, and to get an idea how a degraded masterpiece might have looked originally,” Fischer explains.

One of the most challenging aspects of this task is the deep complexity hidden beneath the surface of a painting. In many paintings, numerous pigments have been mixed together and layered on top of each other, making them difficult to analyse without damaging the artwork.

In their study, Fischer and colleagues overcame this challenge using a method called pump–probe microscopy, which uses pairs of synchronized femtosecond laser pulses. The two different pulsed laser beams are superimposed and then focused onto the sample being imaged, with a controlled delay between the arrival of the pump and probe pulses. The pump pulse comes first and creates excitations within the sample. Then the probe pulse interacts with the sample such that the reflected light contains information about specific excitations and therefore the chemical composition of the sample.
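The principle of the delay scan can be illustrated with a toy model. Assuming, purely for illustration (this is not the team’s analysis code), that the pump creates an excited-state population that decays exponentially, the probe response traced out as a function of pump–probe delay maps that decay:

```python
import numpy as np

TAU = 2.0  # hypothetical excited-state lifetime, in picoseconds

def probe_signal(delay_ps: float) -> float:
    """Toy model: probe response proportional to the surviving excited-state population."""
    if delay_ps < 0:
        return 0.0  # probe arrives before the pump, so there is nothing to measure
    return float(np.exp(-delay_ps / TAU))

# Scanning the controlled delay between the pulses traces out the decay curve
delays = np.linspace(-1, 10, 12)
signal = [probe_signal(d) for d in delays]
```

Fitting such a curve yields characteristic lifetimes, which act as a chemical fingerprint – here, one that distinguishes intact CdS grains from their degraded products.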

Powerful technique

Pump–probe microscopy has become a powerful technique for generating high-contrast images of non-fluorescent samples, especially in living tissues. Indeed, Fischer’s team had already adapted it to examine moles for signs of skin cancer. Now, the group has used the technique to examine the degradation of pigments hidden within complex layers of paint. They focused on pigments containing cadmium sulphide (CdS), which are bright yellow and have played an important role in the history of art.

“CdS was popular with artists like Edvard Munch, Henri Matisse, and Pablo Picasso, but is also very prone to degradation,” Fischer explains. “Despite the importance of CdS, the influence of the environmental conditions and manufacturing methods on the degradation process is not very well understood.”

To investigate the effect, the team started by synthesizing a CdS pigment, using a historical method often used by artists of the past. They then accelerated the aging process by exposing their pigment to high levels of light and humidity. This quickly degraded the CdS grains into hydrated cadmium sulphate, causing the yellow colour to fade.

Variable breakdown

During the degradation process, the researchers used pump–probe microscopy to monitor changes to individual CdS grains. Their experiment revealed that the breakdown process can vary widely, depending on the size and shape of the grains.

“We discovered that degradation tends to happen more strongly for small, rough CdS crystals that are closer to the surface,” Fischer explains. In contrast, “degradation in larger crystals tends to start from the outside in, and from top to bottom.”

The experiment also showed that signs of degradation can start to appear well before they are visible even to the sharp eyes of art conservators. “Having such an early warning signal could be very helpful to adjust storage or display conditions for the artwork or to indicate the need for early intervention,” Fischer adds.

Based on their success, Fischer and colleagues now hope that pump–probe microscopy will help them to gain a better understanding of the degradation processes that cause fading in other types of pigment. In turn, their work could enable conservators to develop new and improved techniques, helping them to protect priceless historical artwork for years to come.

The research is described in JPhys Photonics.

The post Pump–probe microscopy reveals how historical paintings fade appeared first on Physics World.


Effective Science Communication (3rd edition) with Sam Illingworth

14 May 2024, 14:51

Whatever career stage you are at, join us to learn more about communicating science to your colleagues and external partners.

We will seek to empower scientists to effectively share their work, enhancing the impact of their research and facilitating a deeper public understanding of science. Our goal is to bridge the gap between the scientific community and society at large, promoting science as a cornerstone of informed citizenship and social progress.

Sam Illingworth

Sam Illingworth is an associate professor at Edinburgh Napier University, where his work and research focus on using poetry and games as a way of developing dialogues between scientists and other publics. He is also an award-winning science communicator, poet, games designer, Principal Fellow of Advance HE (PFHEA), chief executive editor of Geoscience Communication and the founder of Consilience, the world’s first peer-reviewed science and poetry journal.

About this ebook

Effective Science Communication: A practical guide to surviving as a scientist (3rd edition) is an essential handbook tailored for scientists at any stage of their career, aiming to enhance both their inward-facing and outward-facing communication skills. The book is structured into detailed chapters, each focusing on different aspects of science communication, from publishing in peer-reviewed journals to engaging with the media. It offers a blend of theoretical insights and practical exercises designed to build confidence and competence in communicating scientific research. This guide seeks to empower scientists to effectively share their work, enhancing the impact of their research and facilitating a deeper public understanding of science. Through this, it aims to bridge the gap between the scientific community and society at large, promoting science as a cornerstone of informed citizenship and social progress.

Authors: Sam Illingworth and Grant Allen

The post Effective Science Communication (3rd edition) with Sam Illingworth appeared first on Physics World.


Sucking up crude oil with laser-treated cork

By: No Author
14 May 2024 at 11:37

New research suggests that laser-treated cork could be used to tackle crude oil spills. In a study published in Applied Physics Letters, researchers from China and Israel found that femtosecond laser processing alters the surface structure of cork so that it heats rapidly in sunlight and absorbs oil.

Oil spills are ecological disasters that wreak havoc on marine ecosystems, with devastating, long-lasting effects on marine animals and their habitats. Oil spill cleanup also presents a major technical challenge.

Effective strategies for clearing water contaminated with high-viscosity oil are lacking. Various techniques are employed, such as physically skimming the crude oil off the surface, applying chemical dispersants and even setting the oil alight, but these are often expensive, have low efficacy and can cause secondary pollution. Alternative, environmentally friendly solutions are needed.

The authors of the latest research came across cork as a possible material for cleaning up oil spills by accident.

“In a different laser experiment, we accidentally found that the wettability of the cork processed using a laser changed significantly, gaining superhydrophobic (water-repelling) and superoleophilic (oil-attracting) properties,” says first author Yuchun He, a physicist at Central South University, China, in a press statement.

Following this discovery, the researchers wondered whether, as a porous material, cork could be used to suck up oil. They also found that after femtosecond laser processing, the surface of the cork became very dark, “which made us realize that it might be an excellent material for photothermal conversion,” He explains.

By absorbing energy from sunlight and heating the crude oil, the scientists hypothesized that the black cork would lower the oil’s viscosity, making it easier to absorb.

To characterize the properties of the femtosecond laser-processed cork, the team used scanning electron and confocal laser scanning microscopy. Their analysis showed that while untreated cork had an uneven honeycomb-like surface structure, the laser-processed cork was covered in a regular, striped arrangement of micro-scale grooves, scattered with nanoparticles.

The microscopy studies also showed that the laser-processed cork contained a higher proportion of carbon than untreated cork. This carbonization process occurs when heat from the laser breaks down long-chain cellulose molecules and releases carbon oxides as gas, reducing the cork’s relative oxygen content.

Cork modification: femtosecond laser (FSLA) treatment alters the structure of the cork, which changes the behaviour of light incident on its surface. (Courtesy: Yuchun He)

Tests showed that the laser-treated cork had an extremely high solar absorption rate, absorbing more than 90% of light and reflecting less than 10%. Under simulated sunlight, the processed cork hit temperatures of 63°C after 120 s, while untreated cork rose to only 50°C. The laser-treated cork was also found to heat rapidly, reaching 50°C within 15–30 s.

When the researchers tested how well cork absorbed oil, they found that in the dark, the performance of laser-processed and untreated cork was similar, with crude oil gradually infiltrating over 45 min. Under light exposure, however, samples of laser-treated cork became saturated with oil within 2 min, while 10 min was needed for complete infiltration of unprocessed cork.

The team says that the nanoscale grooves and an increased surface area on the carbonized, laser-treated cork trap light. This causes the cork to rapidly heat in sunlight and heat nearby oil, reducing its high viscosity. The processed cork’s superoleophilic properties, linked to the changed surface structure and carbonization, then come into play, enabling the cork to suck up the more fluid oil while repelling water.

Cork comes from the bark of cork oak trees. As it is a renewable material that the trees can replace after harvesting, the researchers say it would be a sustainable, inexpensive and environmentally friendly material to use for cleaning up oil spills.

They propose that a pump connected to an oil tanker could be used to suck the crude oil out of cork as it is absorbed from the seawater. In laboratory experiments, the team demonstrated that a small-scale version of this setup could successfully extract oil from seawater in a Petri dish.

“Oil recovery is a complex and systematic task, and participating in oil recovery throughout its entire life cycle is our goal,” He says. “The next step is to prepare electrothermal materials using polyurethane foam as the skeleton for oil adsorption, combining photothermal and electrothermal techniques to form an all-weather oil recovery system.”

The post Sucking up crude oil with laser-treated cork appeared first on Physics World.


Implantable and biocompatible battery powered by the body’s own oxygen

By: No Author
13 May 2024 at 15:00

When a medical device such as a pacemaker or neurostimulator is implanted within a person’s body, the immediate question is how long its battery will function before requiring surgical removal and replacement.

Researchers in China have developed an implantable Na–O2 battery with an open cathode structure that runs on oxygen circulating in the body, potentially removing the limit on battery life. In tests on laboratory rats, the team showed that the proof-of-concept design delivers stable power and has excellent biocompatibility.

Metal–O2 batteries have been previously tested for potential implantable use, but have encountered challenges. Their design requires an open cathode architecture to absorb oxygen from body fluids, and any discharge products created must be easily metabolized by the body during battery cycling. In addition, all components must be biocompatible and the battery must be flexible enough to enable stable contact with soft tissues.

The novel battery consists of a nanoporous gold catalytic cathode, an ion-selective membrane that acts as the separator and an anode made from a sodium-based alloy (NaGaSn). Nanoporous gold, which demonstrates excellent biocompatibility, has previously been used as a cathode in metal–air batteries to provide the oxygen reduction reaction. In the Na–O2 battery, oxygen continuously supplied from body fluids is reduced through the catalysis of the nanoporous gold during discharging.

The ion-selective membrane prevents body fluids from reaching the anode, though the team note that the NaGaSn alloy electrode possesses high safety and stability in water. The entire battery is encased within a soft and flexible porous polymer film.

Principal co-designers Yang Lv, Xizheng Liu and Jiucong Liu, of Tianjin University of Technology, initially conducted in vitro experiments, after which they implanted the battery under the skin on the backs of laboratory rats. After 24 h, they observed an unstable discharge voltage plateau. However, after two weeks of implantation, the battery was able to produce stable voltages of between 1.3 and 1.4 V, with a maximum power density of 2.6 µW/cm².

“We were puzzled by the unstable electricity output right after implantation,” explains Xizheng Liu in a press statement. “It turned out that we had to give the wound time to heal, for blood vessels to regenerate around the battery and supply oxygen, before the battery could provide stable electricity. This is a surprising and interesting finding because it means that the battery can help monitor wound healing.”

The rats healed well after battery implantation, with the hair on their backs completely regrown after four weeks. Importantly, blood vessels regenerated well around the cathode, providing a continuous source of oxygen. Quantitative analysis confirmed that the number of capillaries around the battery was the same in rats with implanted batteries and control animals without batteries. In fact, the number of capillaries gradually increased with prolonged implantation time.

The researchers assessed the biocompatibility of the implanted battery through biochemical and immunohistochemical analyses. None of the rats developed inflammation around the batteries. Byproducts created by the chemical reactions of the battery, including sodium ions, hydroxide ions and low levels of hydrogen peroxide, were easily metabolized in the kidneys and liver. Upon completion of the study four weeks later, the rats did not experience any negative physiological effects, suggesting that the implanted battery has potential for practical applications.

While the energy generated by the proof-of-concept battery is not sufficient to power medical devices for human use, the results demonstrate that harnessing oxygen in the body for energy is possible. Xizheng Liu advises that the team’s next plan is to improve the battery’s energy delivery by exploring more efficient electrode materials and optimizing the battery structure and design. “We think that the battery will be easy to scale up in production, and choosing cost-effective materials will further lower the cost to produce.”

The researchers note that, in addition to having a novel architecture with the ability to generate extremely high energy densities, the battery’s oxygen concentration can be controlled precisely. This capability may expand its use to therapeutic applications, such as starving cancerous tumours of oxygen or converting the battery energy to heat to destroy cancer cells. Future research initiatives will include evaluating other uses for this promising implantable battery.

The research is reported in Chem.

The post Implantable and biocompatible battery powered by the body’s own oxygen appeared first on Physics World.


A career in physics: a universe of possibilities

By: No Author
13 May 2024 at 13:26

Demand for physics skills and knowledge is growing. New opportunities can be found in emerging fields such as data science and AI. At the same time, physics graduates are increasingly sought after in established sectors like construction, business and innovation. Learn how to navigate the current jobs market with Physics World Careers 2024, a free-to-read guide packed with tips, interviews and case studies.

The post A career in physics: a universe of possibilities appeared first on Physics World.


The future of 2D materials: grand challenges and opportunities

By: No Author
10 May 2024 at 16:41
Source: Shutterstock, Marco de Benedictis

Graphene, the first 2D material, was isolated by Prof. Andre Geim and Prof. Konstantin Novoselov in 2004. Since then, a variety of 2D materials have been discovered, including transition metal dichalcogenides, phosphorene and MXenes. 2D materials have remarkable characteristics and are making significant contributions to quantum technologies, electronics, medicine, and renewable energy generation and storage, to name but a few fields. However, we are still exploring the full potential of 2D materials, and many challenges must be overcome.

Join us for this panel discussion, hosted by 2D Materials, where leading experts will share their insights and perspectives on the current status, challenges and future directions of 2D materials research. You will have the opportunity to ask questions during the Q&A session.

Have a question for the panel?

We welcome questions in advance of the webinar, so please fill in this form.

Left to right: Stephan Roche, Konstantin Novoselov, Joan Redwing, Yury Gogotsi and Cecilia Mattevi

Chair
Prof. Stephan Roche is an ICREA Research Professor and head of the Theoretical & Computational Nanoscience Group at the Catalan Institute of Nanoscience and Nanotechnology (ICN2). He is a theoretician specializing in quantum transport theory in condensed matter, spin transport physics and device simulation.

Speakers
Prof. Konstantin Novoselov is the Langworthy Professor of Physics and Royal Society Research Professor at The University of Manchester. In 2004, he isolated graphene alongside Andre Geim and was awarded the Nobel Prize in Physics in 2010 for his achievements.

Prof. Joan Redwing is a Distinguished Professor of Materials Science and Engineering at Penn State University where she holds an adjunct appointment in the Department of Electrical and Computer Engineering. Her research focuses on crystal growth and epitaxy of electronic materials, with an emphasis on thin film and nanomaterial synthesis by metalorganic chemical vapour deposition.

Prof. Yury Gogotsi is a Distinguished University Professor and Charles T and Ruth M Bach Endowed Chair in the Department of Materials Science and Engineering at Drexel University. He is the founding director of the A.J. Drexel Nanomaterials Institute.

Prof. Cecilia Mattevi is a Professor of Materials Science in the Department of Materials at Imperial College London. Cecilia’s expertise centres on science and engineering of novel 2D atomically thin materials to enable applications in energy conversion and energy storage.

About this journal

2D Materials is a multidisciplinary, electronic-only journal devoted to publishing fundamental and applied research of the highest quality and impact covering all aspects of graphene and related two-dimensional materials.

Editor-in-chief: Wencai Ren, Shenyang National Laboratory for Materials Science, Chinese Academy of Sciences, China.

The post The future of 2D materials: grand challenges and opportunities appeared first on Physics World.


Data science CDT puts industry collaboration at its heart

By: No Author
10 May 2024 at 15:01

Physics is a constantly evolving field – how do we make sure the next generation of physicists receive training that keeps pace with new developments and continues to support the cutting edge of research?

According to Carsten P Welsch, a distinguished accelerator scientist at the University of Liverpool, in the age of machine learning and AI, PhD students in different physics disciplines have more in common than they might think.

“Research is increasingly data-intensive, so while a particle physicist and a medical physicist might spend their days thinking about very different concepts, the approaches, the algorithms, even the tools that people use, are often either the same or very similar,” says Professor Welsch.

Data science is extremely important for any type of research and will probably outlive any particular research field

Professor Welsch

Welsch is the director of the Liverpool Centre for Doctoral Training (CDT) for Innovation in Data Intensive Science (LIV.INNO). Founded in 2022, the CDT is currently recruiting its third cohort of PhD students. Current students are undertaking research that spans medical, environmental, particle and nuclear physics, but their projects are all underpinned by data science. According to Professor Welsch, “Data science is extremely important for any type of research and will probably outlive any particular research field.”

Next-generation PhD training

Carsten Welsch has a keen interest in improving postgraduate education: he was chair of STFC’s Education, Training and Careers Committee and a member of the UKRI Skills Advisory Group. When it comes to the future of doctoral training, he says: “The big question is ‘where do we want UK researchers to be in a few years, across all of the different research areas?’”

He believes that LIV.INNO holds the solution. The CDT aims to give students with data-intensive PhD projects the skills that will enable them to succeed not only in their research but throughout their careers.

Lauryn Eley is a PhD student in the first LIV.INNO cohort who is researching medical imaging. She became interested in this topic during her undergraduate studies because it applied what she had learned in university to real-world situations. “It’s important that I can see the benefits of my work translated into everyday experiences, which I think medical imaging does quite nicely,” she says.

Miss Eley’s project is partnered with medical technology company Adaptix. The company has developed a mobile X-ray device which, it hopes, will enable doctors to produce a high-quality 3D X-ray image more cheaply and easily than with a traditional CT scanner.

Her task is to build a computational model of the X-ray device and investigate how to optimize the images it produces. To generate high-quality results she must simulate millions of X-rays. She says that the data science training she received at the start of the PhD has been invaluable.

From their first year, students attend lectures on data science topics which cover Monte Carlo simulation, high-performance computing, machine learning and AI, and data analysis. Lauryn Eley has an experimental background, and she says that the lectures enabled her to get to grips with the C++ she needed for her research.

Boosting careers with industry placements

Professor Welsch says that from the start, industry partnership has been at the centre of the LIV.INNO CDT. Students spend six months of their PhD on an industrial placement, and Lauryn Eley says that her work with Adaptix has been eye-opening, enabling her to experience first-hand the fast-paced, goal-driven world of industry, which she found very different to academic research.

While the CDT may particularly appeal to those keen on pursuing a career in industry, Professor Welsch emphasizes the importance of students delivering high-quality research. Indeed, he believes that LIV.INNO’s approach provides students with the best chance of success in their academic endeavours. Students are taught to use project management skills to plan and deliver their projects, which he says puts them “in the driving seat” as researchers. They are also empowered to take initiative, working in partnership with their supervisors rather than waiting for external guidance.

LIV.INNO builds on a previous programme called the Liverpool Big Data Science Centre for Doctoral Training, which ran between 2017 and 2024. Professor Welsch was also the director of that CDT, and he has noticed that when it comes to partnering with student projects, industry attitudes have undergone a shift.

“When we approached the companies for the first time, you could definitely see that there was a lot of scepticism,” he says. “However, with the case studies from the first CDT, they found it much easier to attract industry partners to LIV.INNO.” Professor Welsch thinks that this demonstrates the benefits that industry-academia partnerships bring to both students and companies.

The first cohort from LIV.INNO are only in their second year, but many of the students from the previous CDT secured full-time jobs from the company where they did their placement. But whatever career path students eventually go down, Carsten Welsch is convinced that the cross-sector experience students get with LIV.INNO sets them up for success, saying “They can make a much better informed decision about where they would like to continue their careers.”


The post Data science CDT puts industry collaboration at its heart appeared first on Physics World.


GMT or TMT? Fate of next-generation telescope falls to expert panel set up by US National Science Foundation

By: No Author
10 May 2024 at 14:01

The US National Science Foundation (NSF) is to assemble a panel to help it decide whether to fund the Giant Magellan Telescope (GMT) or the Thirty Meter Telescope (TMT). The agency expects the panel, whose membership has yet to be determined, to report by 30 September, the end of the US government’s financial year.

The NSF first announced in February that it would support the construction of only one of the two next-generation ground-based telescopes due to rising costs. The GMT, priced at $2.54bn, will be located in Chile, while the TMT, which is expected to cost at least $3bn, is set to be built in Hawaii.

A decision on which telescope to fund was initially slated for May. But at a meeting of the National Science Board (NSB) last week, NSF boss Sethuraman Panchanathan revealed the panel would provide further advice to the agency. The decision to look to outsiders followed discussions with the US government and the NSB, which oversees the NSF.

The panel, which will include scientists and engineers, will assess “the readiness of the project from all perspectives” and consider how supporting each telescope would affect the NSF’s overall budget.

It will examine progress made to date, the level of partnerships and resources, and risk management. Complementarity to the European Extremely Large Telescope, opportunities for early-career scientists, and public engagement will be looked at too.

“I want to be very clear that this is not a decision to construct any telescopes,” Panchanathan, who originally trained as a physicist, told the NSB. “This is simply part of a process of gathering critical information to inform my decision-making on advancing either project to the final design stage.”

The post GMT or TMT? Fate of next-generation telescope falls to expert panel set up by US National Science Foundation appeared first on Physics World.


Magnetic islands stabilize fusion plasma, simulations suggest

By: No Author
9 May 2024 at 17:23

By combining two different approaches to plasma stabilization, physicists in the US and Germany have developed a new technique for suppressing instabilities in tokamak fusion reactors. The team, led by Qiming Hu at Princeton Plasma Physics Laboratory, hopes its computer-modelling results could be an important step towards making nuclear fusion a viable source of energy.

Tokamak fusion reactors use intense magnetic fields to confine and heat hydrogen plasma within their doughnut-shaped interiors. At suitably high temperatures, the hydrogen nuclei will gain enough energy to overcome their mutual repulsion and fuse together to form helium nuclei, releasing energy in the process.

If more energy is released in the reaction than is fed into the tokamak, it would provide an abundant source of clean energy. This has been a goal of researchers since fusion was first created in the laboratory in the 1930s.

Stubborn roadblock

One of the most stubborn roadblocks to achieving sustained fusion is the emergence of periodic plasma instabilities called edge-localized modes (ELMs). These originate in the outer regions of the plasma and result in energy leaking into the tokamak’s walls. If left unchecked, this will cause the fusion reaction to fizzle out, and it can even damage the tokamak.

One of the most promising approaches for suppressing ELMs is the use of resonant magnetic perturbations (RMPs). These are controlled ripples in the confining magnetic field that cause closed loops of magnetic field to form inside the plasma.

Dubbed magnetic islands, these loops do not always have a desirable influence. If they are too large, they risk destabilizing the plasma even further. But by carefully engineering RMPs to generate islands with just the right size, it should be possible to redistribute the pressure inside the plasma, suppressing the growth of ELMs.

In their study, Hu’s team introduced an extra step to this process, which would enable them to better control the parameters of RMPs to generate magnetic islands of just the right size.

Spiralling electrons

This involved injecting the plasma with high-frequency microwaves in a method called edge-localized electron cyclotron current drive (ECCD). Inside the plasma, these waves cause energetic electrons to spiral along the direction of the confining magnetic field lines, generating local currents which run parallel to the field lines.

In previous experiments, ECCD microwaves were most often injected into the core of the plasma. But in their simulations, Hu and colleagues instead directed them to the edge.

“Usually, people think applying localized ECCD at the plasma edge is risky because the microwaves may damage in-vessel components,” Hu explains. “We’ve shown that it’s doable, and we’ve demonstrated the flexibility of the approach.”

Tight control

In simulated tokamak reactors, the team found that their new approach can lower the amount of current necessary to generate RMPs, while also providing tight control over the sizes of magnetic islands as they formed in the plasma.

“Our simulation refines our understanding of the interactions in play,” Hu continues. “When the ECCD was added in the same direction as the current in the plasma, the width of the island decreased, and the pedestal pressure increased.”

The pedestal pressure refers to the region close to the edge of the plasma where the pressure peaks, before dropping off steeply towards the plasma boundary. “Applying the ECCD in the opposite direction produced opposite results, with island width increasing and pedestal pressure dropping or facilitating island opening,” explains Hu.

These simulation results could provide important guidance for physicists running tokamaks – including the ITER experiment, which should begin operation in late 2025. If the same results can be replicated in a real plasma, it could bring the long-awaited goal of sustained nuclear fusion a step closer.

The research is described in Nuclear Fusion.

The post Magnetic islands stabilize fusion plasma, simulations suggest appeared first on Physics World.


Astronomy conference travel is on par with Africa’s per-capita carbon footprint

By: No Author
9 May 2024 at 14:15

Travel to more than 350 astronomy meetings in 2019 resulted in the emission of 42 500 tonnes of carbon dioxide. That’s the conclusion of the first-ever study to examine the carbon emissions from travel to meetings by an entire field. The carbon cost amounts to about one tonne of carbon dioxide equivalent (tCO2e) per participant per meeting – roughly Africa’s average per capita carbon footprint in 2019 (1.2 tCO2e) (PNAS Nexus 3 pgae143).

Carried out by a team led by Andrea Gokus at Washington University in St. Louis in the US, the study examined 362 meetings in 2019 that were open to anyone in the astronomical community. These included conferences disseminating scientific findings as well as schools providing lectures and training to students and early-career scientists.

Using data on participants’ home institutions, which were available for 300 of the meetings, the researchers estimated travel-related emissions for each event, assuming delegates went by train or plane. For these meetings, the emissions totalled 38 000 tCO2e, with the total distance travelled equivalent to a trip to the Sun and halfway back.

For the other 62 meetings that did not have details of the participants’ home institutions, the team estimated the emissions using average data from other conferences. Emissions from those events were put at 4500 tCO2e, bringing the total to 42 500 tCO2e.

The meeting with the highest emissions per participant was Great Barriers in Planet Formation, held in Palm Cove, Queensland, Australia, with almost all attendees travelling from outside the country. Travel by the 115 participants resulted in 461 tCO2e, or 4 tCO2e per person on average. The team found that emissions could have been more than halved if the meeting had been held in Europe or the northeastern US.
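The per-participant figures quoted here are simple averages, so they are easy to sanity-check. The short sketch below reproduces the arithmetic using only the numbers reported in the study; it is an illustration of how the averages are obtained, not the study's full travel-emissions methodology.

```python
# Back-of-the-envelope check of the per-participant figures quoted above.
# All input numbers come from the study as reported in the article.

palm_cove_emissions_tco2e = 461   # Great Barriers in Planet Formation meeting
palm_cove_participants = 115

# Average emissions per attendee for the Palm Cove meeting
per_person = palm_cove_emissions_tco2e / palm_cove_participants
print(f"Palm Cove: {per_person:.1f} tCO2e per participant")  # roughly 4 tCO2e

# For comparison, the field-wide average quoted in the article is about
# 1 tCO2e per participant per meeting, close to Africa's 2019 per-capita
# footprint of 1.2 tCO2e.
```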

Hub model

Gokus says that while meetings are important for researchers, “adjustments can be made to reduce their hefty carbon cost”, for example by knowing where participants are based. The researchers found, for example, that emissions from 2019’s biggest astronomical conference – the 223rd American Astronomical Society (AAS) meeting in Seattle – could have been cut by a quarter if it had been held in a more central US location.

The team also explored the impact of switching the 223rd AAS meeting from a single-venue meeting to a hub model, in which simultaneous satellite events are held at different locations. A two-hub model for that conference, with an eastern and western US hub, would have reduced emissions by around 60%, the study finds. Adding a third European hub could have saved 65% of emissions, while a fourth hub in Asia, for instance in Tokyo, would have cut emissions by about 70%.

The researchers claim that such alternative meeting setups, as well as virtual attendance, could have benefits beyond the environment. They point out that finances, complex visa processes, parenting and other caring responsibilities, as well as disabilities, can make travelling to meetings challenging for some.

“By making use of technology to connect virtually, we can foster a more inclusive collaborative approach, which can help us advance our understanding of the Universe further,” says Gokus. “It is important that we work together as a community to achieve this goal, because there is no Planet B.”

The post Astronomy conference travel is on par with Africa’s per-capita carbon footprint appeared first on Physics World.


Tetris-inspired radiation detector uses machine learning

By: No Author
8 May 2024 at 16:19

Inspired by the tetromino shapes in the classic video game Tetris, researchers in the US have designed a simple radiation detector that can monitor radioactive sources both safely and efficiently. Created by Mingda Li and colleagues at the Massachusetts Institute of Technology, the device employs a machine learning algorithm to process data, allowing it to build up accurate maps of sources using just four detector pixels.

Wherever there is a risk of radioactive materials leaking into the environment, it is critical for site managers to map out radiation sources as accurately as possible.

At first glance, there is an obvious solution to maximizing precision, while keeping costs as low as possible, explains Li. “When detecting radiation, the inclination might be to draw nearer to the source to enhance clarity. However, this contradicts the fundamental principles of radiation protection.”

For the people tasked with monitoring radiation, these principles advise that the radiation levels they expose themselves to should be kept as low as reasonably achievable.

Complex and expensive

However, since radiation can interact with intervening objects via a wide array of mechanisms, it is often both complex and expensive to map out radiation sources from reasonably safe distances.

“Thus, the crux of the matter lies in simplifying detector setups without compromising safety by minimizing proximity to radiation sources,” Li explains.

In a typical detector, radiation maps are created by monitoring intensity distribution patterns across a 10×10 array of detector pixels. The main drawback here is that radiation can approach the detector from a variety of directions and distances, making it difficult to extract useful information about the source of that radiation. This is usually done by placing an absorbing mask over the pixels, which provides some directional information, and by doing lots of data processing.

For Li’s team, the first step to reducing the complexity of this process was to minimize redundant information collected by multiple pixels within the array. “By strategically incorporating small [lead] paddings between pixels, we enhance contrast to ensure that each detector receives distinct information, even when the radioactive source is distant,” Li explains.

Machine learning

Next, the team developed machine learning algorithms to extract more accurate information regarding the direction of incoming radiation and the detector’s distance to the source.

Inspiration for the final step of the design would come from an unlikely source. In Tetris, players encounter seven unique tetrominoes, which represent every possible way that four squares can be arranged contiguously to create shapes.

By using these shapes to create detector pixel arrays, the researchers predicted they could achieve similar levels of accuracy to detectors with far larger square arrays. As Li explains, “these shapes offer superior efficiency in utilizing pixels, thereby enhancing accuracy.”
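The claim that there are exactly seven tetrominoes can be checked by brute force. Here is a minimal sketch (not the authors’ code): grow every connected four-cell shape from a single square, then deduplicate under translation and rotation but not reflection, since Tetris treats mirror images such as the S and Z pieces as distinct.

```python
def normalize(cells):
    """Translate cells so the minimum x and y are zero; return a sorted tuple."""
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return tuple(sorted((x - mx, y - my) for x, y in cells))

def canonical(cells):
    """Canonical form under translation and the four rotations (no reflections,
    matching Tetris, where mirror images count as different pieces)."""
    forms = []
    cur = list(cells)
    for _ in range(4):
        cur = [(y, -x) for x, y in cur]  # rotate by 90 degrees
        forms.append(normalize(cur))
    return min(forms)

def tetrominoes():
    """Grow all connected 4-cell shapes from one cell, deduplicating as we go."""
    shapes = {canonical([(0, 0)])}
    for _ in range(3):  # add one cell at a time: 1 cell -> 4 cells
        grown = set()
        for shape in shapes:
            for (x, y) in shape:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    c = (x + dx, y + dy)
                    if c not in shape:
                        grown.add(canonical(list(shape) + [c]))
        shapes = grown
    return shapes

print(len(tetrominoes()))  # 7
```

Dropping the rotations from `canonical` would instead count the 19 “fixed” tetrominoes, while also minimizing over reflections would give the five “free” ones; the Tetris convention sits in between.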

To demonstrate this, the team designed a series of four-pixel radiation detectors, with the pixels arranged in Tetris-inspired tetromino shapes. To build up radiation maps, these arrays were moved in circular paths around the radioactive sources being studied. This allowed the detector’s algorithms to discern accurate information about source positions and directions, based on the counts received by the four pixels.

Successful field test

“Particularly noteworthy was our successful execution of a field-test at Lawrence Berkeley National Laboratory,” Li recalls. “Even when we withheld the precise source location, the machine learning algorithm could effectively localize it within real experimental data.”

Li’s team is now confident that its novel approach to detector design and data processing could be useful for radiation detection. “The adoption of Tetris-like configurations not only enhances accuracy but also minimizes complexity in detector setups,” Li says. “Moreover, our successful field-test underscores the real-world applicability of our approach, paving the way for enhanced safety and efficacy in radiation monitoring.”

Based on their success, the team hopes the detector design could soon be implemented for applications including the routine monitoring of nuclear reactors, the processing of radioactive material, and the safe storage of harmful radioactive waste.

The detector is described in Nature Communications.

The post Tetris-inspired radiation detector uses machine learning appeared first on Physics World.


From pulsars and fast radio bursts to gravitational waves and beyond: a family quest for Maura McLaughlin and Duncan Lorimer

By: No Author
7 May 2024 at 18:41

Most physicists dream of making new discoveries that expand what we know about the universe, but they know that such breakthroughs are extremely rare. It is rarer still for a scientist to make a great discovery with someone who is not just a colleague, but also their life partner. The best-known husband-and-wife couples in physics are Marie and Pierre Curie, and their daughter Irène Joliot-Curie with her husband Frédéric Joliot-Curie. Each couple won a Nobel prize, in 1903 and 1935 respectively, for early work on radioactivity.

Joining the ranks of these pioneering physicists are contemporary married couple Maura McLaughlin and Duncan Lorimer, who last year were two of three laureates awarded the $1.2m Shaw Prize in Astronomy (see box below) for their breakthroughs in radio astronomy. Together with astrophysicist Matthew Bailes, director of the Australian Research Council Centre of Excellence for Gravitational Wave Discovery, McLaughlin and Lorimer won the prize for their 2007 discovery of fast radio bursts (FRBs) – powerful but short-lived pulses of radio waves from distant cosmological sources. Since their discovery, several thousand of these mysterious cosmic flashes, which last for milliseconds, have been spotted.

Over the years, McLaughlin and Lorimer’s journeys – through academia and their personal life – have been inherently entwined and yet distinctly discrete, as the duo developed careers in radio astronomy and astrophysics that began with pulsars, then included FRBs and now envelop gravitational waves. The couple have also advanced science education and grown astronomical research and teaching at their home base, West Virginia University (WVU) in the US. There, McLaughlin is Eberly Family distinguished professor of physics and astronomy, and chair of the Department of Physics and Astronomy, while Lorimer currently serves as associate dean for research in WVU’s Eberly College of Arts and Sciences.

The Shaw Prize

Photo of two people superimposed with artist impression of radio waves
Shaw laureates Astrophysicists Duncan Lorimer and Maura McLaughlin received the Shaw Prize in 2023 for their discovery of fast radio bursts. (Courtesy: WVU Photo/Raymond Thompson Jr)

The 2023 Shaw Prize in Astronomy, awarded jointly to Duncan Lorimer and Maura McLaughlin, and to their colleague Matthew Bailes, is part of the legacy of Sir Run Run Shaw (1907–2014), a successful Hong Kong-based film and television mogul. Known for his philanthropy, he gave away billions in Hong Kong dollars to support schools and universities, hospitals and charities in Hong Kong, China and elsewhere.

In 2002 he established the Shaw Prize to recognize “those persons who have achieved distinguished contributions in academic and scientific research or applications or have conferred the greatest benefit to mankind”. A gold medal and a certificate for each Shaw laureate, and a monetary award of $1.2m shared among the laureates, are awarded yearly in astronomy, life science and medicine, and mathematical sciences. Previous winners of the Shaw Prize in Astronomy include Ronald Drever, Kip Thorne and Rainer Weiss, for the first observation of gravitational waves with LIGO. They are among the 16 of the 106 Shaw laureates since 2004 who have also been awarded Nobel prizes.

Accidental cosmic probe

Radio astronomy, which led to much of McLaughlin and Lorimer’s work, was not initially a formal area of research. Instead, it began rather serendipitously in 1928, when Bell Labs radio engineer Karl Jansky was trying to find the possible sources of static at 20.5 MHz that were disrupting the new transatlantic radio telephone service. Among the types of static that he detected was a constant “hiss” from an unknown source that he finally tracked down to the centre of the Milky Way galaxy, using a steerable antenna 30 m in length. His 1933 paper “Electrical disturbances apparently of extraterrestrial origin” received considerable media attention but little notice from the astronomy establishment of the time (see “Radio astronomy: from amateur roots to worldwide groups” by Emma Chapman).

Radio astronomy truly flourished after the Second World War, with new purpose-built facilities. An early example from 1957 was the steerable 76 m dish antenna built by Bernard Lovell and colleagues at Jodrell Bank in the UK – where McLaughlin and Lorimer would later work. Other researchers who led the way include the Nobel-prize-winning astronomer Sir Martin Ryle, who pioneered radio interferometry and developed aperture synthesis; as well as Australian electrical engineer Bernard Mills, who designed and built radio interferometers.

Extraterrestrial radio signals soon yielded important science. In 1951 researchers detected a predicted emission from neutral hydrogen at 1.4 GHz – a fingerprint of this fundamental atom. In 1964 Arno Penzias and Robert Wilson (also based at Bell Labs) inadvertently found a 4.2 GHz signal across the whole sky, while testing orbiting telecom satellites – thereby discovering the cosmic background radiation. And in 1968 another spectacular discovery shaped McLaughlin and Lorimer’s careers, when University of Cambridge graduate student Jocelyn Bell Burnell and her PhD supervisor Antony Hewish announced the observation of an unusual radio signal from space – a pulse that arrived every 1.3 seconds. That signal was the first to come from what were soon called “pulsars”. Hewish would go on to share the 1974 Nobel Prize for Physics for the discovery – while Bell Burnell was infamously left out, supposedly due to her then student status.

As more pulsars were found with varied periods and in different directions of the sky, it became clear that the signals were not being sent by an alien civilization as some researchers had speculated – after all, the chances of an extraterrestrial civilization sending many signals of varying periods, or of different civilizations sending out different periodic signals, were slim. One clue was that the pulses were short and coherent, so they had to come from sources smaller than the distance light could travel during the pulse’s lifetime – for instance, the source of a 5 ms pulse could be at most 1500 km across.
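The size limit follows directly from the light-travel-time argument: a source cannot vary coherently on timescales shorter than the time light takes to cross it. The figure quoted above is a one-line calculation:

```python
c = 299_792_458        # speed of light, in m/s
pulse_duration = 5e-3  # a 5 ms pulse, as in the example above

# Maximum coherent source size: the distance light travels during the pulse
max_size_km = c * pulse_duration / 1e3
print(round(max_size_km))  # 1499, i.e. roughly 1500 km
```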

As it happened, the signals were our first look at neutron stars – small, extremely dense and rapidly rotating remnants of massive stars after they have gone supernova and had their protons and electrons squeezed into neutrons by gravity’s implacable power. As the star rotates, its strong off-axis magnetic field produces beams of electromagnetic radiation from the magnetic poles. These beams create regular pulses as they sweep past a detector on a direct line of sight. Pulsars are mostly studied at radio frequencies, but they also radiate at other, higher frequencies.

Pulsars to fast bursts

Lorimer and McLaughlin began their careers by studying these exotic stellar objects, but each of them had already been captivated by astronomy and astrophysics as teenagers. Lorimer was born in Darlington, UK. After studying astrophysics as an undergraduate at the University of Wales in Cardiff, he moved to the University of Manchester in 1994, where his PhD research focused on analysing classes of radio pulsars with different periods.

McLaughlin was born in Philadelphia, Pennsylvania, and first studied pulsars as an undergraduate student at Penn State. Her PhD dissertation at Cornell University in 2001 covered pulsars that variously emitted radio waves, X-rays or gamma rays. By 1995 Lorimer was working as a researcher at the Max Planck Institute for Radio Astronomy in Bonn, Germany; he first encountered McLaughlin in 1999, while working at the Arecibo Observatory in Puerto Rico. The pair moved to the UK in 2001 to work at the Jodrell Bank Observatory.

It was an interesting and exciting time in the pulsar research community, with new pulsars found by computerized Fourier transform analysis that detected telltale periodicities in vast amounts of observational data. But radio astronomers also sometimes saw transient signals, and McLaughlin had written computer code designed to find single bright pulses. This led to the 2006 discovery of a new class of pulsars dubbed rotating radio transients (RRATs, an acronym recalling a pet rat McLaughlin once had). These stars could be detected only through their sporadic millisecond-long bursts, unlike most pulsars, which were found through their periodic emissions. The discovery in turn initiated further searches for transient pulses (Nature 439 817).

The following year, Lorimer and McLaughlin, now a married couple, joined WVU’s department of physics and astronomy as assistant professors. To uncover more distant and brighter pulsars, Lorimer gave his graduate student Ash Narkevic the task of looking through archival data that the Parkes radio telescope in Australia had taken of the Large and Small Magellanic Clouds – two small satellite galaxies of our own Milky Way, roughly 200,000 light-years from Earth – of which the Large was already known to host 15 pulsars.

Narkevic examined the data and found a single strong burst – nearly 100 times stronger than the background – at 1.4 GHz with a 5 ms duration. But the burst seemed to come from the Small Magellanic Cloud, where there were only five known pulsars at the time. Even more surprising was the fact that this extremely bright burst did not arrive all at the same time. Known as pulse or frequency dispersion, this effect occurs when radio waves travelling through interstellar space interact with free electrons: higher-frequency waves travel through the free-electron plasma more quickly than lower-frequency ones, and so arrive earlier at our telescopes.

This dispersion depends on the total number of electrons (or the column density) along the path. The further away the source of the burst, the more electrons the waves are likely to encounter on their path to Earth, and so the greater the lag between the high- and low-frequency waves. The pulse Narkevic spotted was so dispersed by the time it reached Earth that it suggested the source was almost three billion light-years away – well beyond our local galactic neighbourhood. This also meant that the source must be significantly smaller than the Sun, more on a par with the proposed size of pulsars, while also somehow being 10¹² times more luminous than a typical pulsar.
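This column density is what radio astronomers call the dispersion measure (DM), quoted in pc cm⁻³. A sketch of the standard cold-plasma delay formula illustrates the size of the effect, using a DM of 375 pc cm⁻³ (the value reported for the Lorimer burst) and an illustrative 1.2–1.5 GHz observing band (the exact band edges here are assumptions for the example):

```python
K = 4.149  # dispersion constant in ms, for DM in pc/cm^3 and frequency in GHz

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay of the low-frequency edge of the band
    relative to the high-frequency edge, in milliseconds."""
    return K * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Lorimer burst: DM ~ 375 pc/cm^3, seen across roughly 1.2-1.5 GHz
print(round(dispersion_delay_ms(375, 1.2, 1.5)))  # 389, i.e. ~0.4 s of sweep
```

A lag of a few hundred milliseconds across the band is what makes the characteristic swept-frequency signature of an FRB so easy to distinguish from local interference.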

1 The first burst

Photo of two men holding a sheaf of paper and a graph of radio data showing a clear black line
Courtesy: Duncan Lorimer; Lorimer et al., NRAO/AUI/NSF

(Top) Duncan Lorimer (left) and Ash Narkevic in 2008 with the paper they published in Science about their observation of a fast radio burst (bottom).

The report of this seemingly new phenomenon – a single, extremely energetic event at an enormous cosmological distance – was published in Science later that year, after initially being rejected (Science 318 777). This first detected fast radio burst came to be known as the “Lorimer burst” (figure 1). After several years and significant further work by Lorimer, McLaughlin, Bailes and others, first four and then tens of similar bursts were found. This established a new class of cosmological phenomena that now includes more than 1000 FRBs, fulfilling the 2007 prediction that they would serve as cosmological probes.

Because FRBs have been found in galaxies beyond our own across the sky, they serve as a probe of the intergalactic medium, allowing astrophysicists to measure the density of the material that lies between Earth and the host galaxy (Nature 581 391). By measuring the distance to the source of the FRB, and then looking at the dispersion of the pulses as a function of wavelength, astronomers can determine the density of the matter the pulse passed through, thereby yielding a value for the baryonic density of our universe. This is otherwise extremely difficult to measure because of how diffuse this matter is in our observable universe. FRBs have also provided an independent measurement of the Hubble constant, the exact value of which has lately come under new scrutiny (MNRAS 511 662).
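As a rough worked example of this logic (illustrative round numbers, not the published analysis): the dispersion measure is an electron column density, so dividing it by the path length gives the mean free-electron density along the line of sight.

```python
dm = 375   # dispersion measure in pc cm^-3 (the Lorimer burst value)
d = 9.2e8  # path length in parsecs: ~3 billion light-years is ~9.2e8 pc

# Mean free-electron density along the line of sight, in cm^-3
n_e = dm / d
print(f"{n_e:.1e}")  # 4.1e-07 electrons per cm^3
```

A density of a few times 10⁻⁷ cm⁻³ is the right order of magnitude for the mean baryon density of the universe, which is why a population of FRBs with known distances can weigh the otherwise invisible intergalactic gas.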

Detecting a gravitational-wave background

While Lorimer is still working on pulsars and FRBs, McLaughlin has now moved into another area of pulsar astronomy. That’s because for almost two decades, she has been a researcher in and co-director of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) Physics Frontier Center, which uses pulsars to detect low-frequency gravitational waves with periods of years to decades. One of its facilities is the steerable 100 m Green Bank Telescope about 150 km south of WVU.

“We are observing an array of pulsars distributed across the sky,” says McLaughlin. “These are 70 millisecond pulsars, so very rapidly rotating. We search for very small deviations in the arrival times of the pulsars that we can’t explain with a timing model that accounts for all the known astrophysical delays.” General relativity predicts that certain deviations in the timing would depend on the relative orientation of pairs of pulsars, so seeing this special angular correlation in the timing would be a clear sign of gravitational waves.
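The special angular correlation predicted by general relativity is known as the Hellings–Downs curve. A minimal sketch of it (an illustration of the textbook formula, not NANOGrav’s analysis code):

```python
import math

def hellings_downs(zeta_deg):
    """Expected correlation between the timing deviations of two pulsars
    separated by an angle zeta on the sky, for an isotropic
    gravitational-wave background in general relativity."""
    x = (1 - math.cos(math.radians(zeta_deg))) / 2
    if x == 0:
        return 0.5  # small-separation limit of the curve
    return 1.5 * x * math.log(x) - x / 4 + 0.5

# The curve dips negative near 82 degrees and recovers towards 180 degrees
print(round(hellings_downs(90), 3))   # -0.145
print(round(hellings_downs(180), 3))  # 0.25
```

It is this distinctive dip-and-recovery shape, rather than any single pulsar’s timing residuals, that distinguishes a gravitational-wave background from clock errors or other common noise.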

2 Gravitational-wave spectrum

Two figures: a globe covered in coloured symbols and a chart
Courtesy: NANOGrav Collaboration

(a) The NANOGrav 15-year data set contains timing observations from 68 pulsars using the Arecibo Observatory, the Green Bank Telescope and the Very Large Array. The map shows pulsar locations in equatorial co-ordinates. (b) The background comes from correlating changes in pulsar arrival times between all possible pairs of the 67 pulsars (2211 distinct pairs in total), and is based on three or more years of timing data. The black line is the expected correlation predicted by general relativity. These calculations assume the gravitational-wave background is from inspiralling supermassive black-hole binaries.

In June 2023 the NANOGrav collaboration published an analysis of 15 years of its data (figure 2), looking at 68 pulsars with millisecond periods, which showed this signature for the first time (ApJL 951 L8). McLaughlin says that it represents not just one source of gravitational waves, but a background arising from all gravitational events such as merging supermassive black holes at the hearts of galaxies. This background may contain information about how galaxies interact and perhaps also the early universe. Five years from now, she predicts, NANOGrav will be detecting individual supermassive black-hole binaries and will tag their locations in specific galaxies, to form a black hole atlas.

Star-crossed astronomers

The connections between McLaughlin and Lorimer that played a role in their academic achievements began rather fittingly with an interaction in 1999, at the Arecibo radio telescope in Puerto Rico (now sadly decommissioned). Lorimer was based there at the time, while McLaughlin was a visiting graduate student, and their contact, though not in person, was definitely not cordial. Lorimer sent what he calls a “little snippy e-mail” to McLaughlin about her use of the computer that blocked his own access, which she also recalls as “pretty grumpy”.

Two photos: a woman stood on a telescope gantry and a man in a control room
Near miss Maura McLaughlin and Duncan Lorimer both worked at Arecibo Observatory in Puerto Rico in 1999, but they didn’t meet in person during that time. (Courtesy: Maura McLaughlin and Duncan Lorimer)

But things improved after they later met in person, and they joined the Jodrell Bank Observatory in the UK. The pair married in 2003 and now have three sons. Over the years, they moved together to the US, set up their own astronomy group at WVU by 2006, and proceeded to work together and alongside each other, publishing many research papers, both joint and separate.

Given all these successes, how do the two researchers balance science and family, especially when they first arrived at WVU with a five-month-old baby to join a department with just one astronomer and no graduate astronomy programme? McLaughlin says it was “Really hard work. Lots of grant writing, developing courses,” but adds that it was also “really fun because we were both building a programme and building a family and moving to a new place”.

Life got even busier in 2007, when another child and the FRB discovery both arrived. The couple says that it was all doable because they fully understood the need to shift scientific or family responsibilities to each other as necessary. According to McLaughlin, this includes equal parenting from her husband, for which she feels “very lucky”. As Lorimer puts it, “We get each other’s mindset.”

However, the fact that they are married may have coloured perceptions of their work and status. “When we first started here at WVU,” Lorimer explains, “a lot of people assumed we were sharing a single position. But the university’s been great. It’s always made it clear from the get-go that we’re obviously on different career trajectories.” And they agree that as they’ve progressed in their individual careers and are known for different things, they’re now unmistakably seen as two distinct scientists.

Three photos of the same couple: their wedding; riding a tandem bike; and posing with a dog
Shared wavelength Maura McLaughlin and Duncan Lorimer married in 2003 (top). They credit their ability to both have successful careers to sharing and shifting family responsibilities as needed, as well as taking their initially similar career paths on different trajectories. (Courtesy: Maura McLaughlin and Duncan Lorimer)

Beyond the Shaw Prize

The Shaw Prize came as a total surprise to the couple. The pair both received e-mails simultaneously one evening, but Lorimer spotted his first. “We almost missed it as it was just about time to go to bed and the announcement was being made in Hong Kong a few hours after that,” says Lorimer. McLaughlin recalls her husband screaming and excitedly running up the stairs to give her the news. “He doesn’t scream much to begin with, maybe only when the dogs do something bad, and I’m wondering ‘Why is he screaming late on a Sunday night?’ He told me to pull up the e-mail and I thought it was a prank. I read it again and realized it was real. That was quite a Sunday night.” Amusingly, the e-mail for their co-winner Matthew Bailes initially went into his spam folder. The trio would later describe their work in a Shaw Prize Lecture in Hong Kong in November 2023.

So what comes next for the stellar pair? Further research into the different types of FRBs that are still being found, using new telescopes and detection schemes. One new project, an extension of Lorimer’s earlier work on pulsar populations, is to locate FRBs in specific galaxies and among groups of both younger and older stars, using the Green Bank Telescope in West Virginia along with other telescopes, to help uncover what causes them. FRBs may come from neutron stars with especially huge magnetic fields – dubbed magnetars – but this remains to be seen.

Data from Green Bank is also used in the Pulsar Science Collaboratory, co-founded by McLaughlin and Lorimer (see box below). Meanwhile, the NANOGrav pulsar observation of the gravitational wave background, where McLaughlin continues her long-time involvement, has been hailed by the LIGO Collaboration for opening up the spectrum in the exciting new era of gravitational-wave astronomy and cosmology.

The Pulsar Science Collaboratory

Photo of two high-schoolers and a woman looking at data on a computer screen
Engaging science Participants in the Pulsar Science Collaboratory, at the Green Bank Telescope control room. (Courtesy: NSF/AUI/GBO)

The Pulsar Science Collaboratory (PSC) was founded in 2007 by Maura McLaughlin, Duncan Lorimer and Sue Ann Heatherly at the Green Bank Observatory, with support from the US National Science Foundation. It is an educational project in which, to date, more than 2000 high-school students have been involved in the search for new pulsars.

Students are trained via a six-week online course and then must pass a certification test to use an online interface to access terabytes of pulsar data from the Green Bank Observatory. They are also invited to a summer workshop at the observatory. McLaughlin and Lorimer proudly note the seven new pulsars that high-school students have so far discovered. Many of these students have continued as college undergraduates or even graduate students working on pulsar and fast-radio-burst science.

At the end of the Shaw Prize Lecture, Lorimer pointed out that there is “still much left to explore”. In an interview for the press, McLaughlin said “We’ve really just started.” Both statements seem fair predictions for anything each one does in their areas of interest in the future – surely with hard work but also with the continuing sense that it’s “really fun”.

The post From pulsars and fast radio bursts to gravitational waves and beyond: a family quest for Maura McLaughlin and Duncan Lorimer appeared first on Physics World.


Australia raises eyebrows by splashing A$1bn into US quantum-computing start-up PsiQuantum

By: No Author
7 May 2024 at 17:16

The Australian government has controversially announced it will provide A$940m (£500m) for the US-based quantum start-up PsiQuantum. The investment, which comes from the country’s National Quantum Strategy budget, makes PsiQuantum the world’s best-funded independent quantum company.

Founded in 2015 by five physicists who were based in the UK, PsiQuantum aims to build a large-scale quantum computer by 2029 using photons as quantum bits (or qubits). As photonic technology is silicon-based, it benefits from advances in large-scale chip-making fabrication and does not need as much cryogenic cooling as other qubit platforms require.

The company has already reported successful on-chip generation and detection of single-photon qubits, but the approach is not all plain sailing. In particular, optical losses still need to be reduced to sufficiently low levels, while detection needs to become more efficient to improve the quality (or fidelity) of the qubits.

Despite these challenges, PsiQuantum has already attracted several supporters. In 2021 private investors gave the firm $665m and in 2022 the US government provided $25m to both GlobalFoundries and PsiQuantum to develop and build photonic components.

The money from the Australian government comes mostly via equity-based investment as well as grants and loans. The amount represents half of the budget that was allocated by the government last year to boost Australia’s quantum industry over a seven-year period until 2030.

The cash comes with some conditions, notably that PsiQuantum should build its regional headquarters in the Queensland capital Brisbane and operate the to-be-developed quantum computer from there. Anthony Albanese, Australia’s prime minister, claims the move will create up to 400 highly skilled jobs, boosting Australia’s tech sector.

A bold declaration

Stephen Bartlett, a quantum physicist from the University of Sydney, welcomes the news. He adds that the scale of the investment “is required to be on par” with companies such as Google, Microsoft, AWS, and IBM that are investing similar amounts into their quantum computer programmes.

Ekaterina Almasque, general partner at the venture capital firm OpenOcean, says that the investment may bring further benefits to Australia. “The [move] is a bold declaration that quantum will be at the heart of Australia’s national tech strategy, firing the starting gun in the next leg of the race for quantum [advantage],” she says. “This will ripple across the venture capital landscape, as government funding provides a major validation of the sector and reduces the risk profile for other investors.”

Open questions

The news, however, did not please everyone. Paul Fletcher, science spokesperson for Australia’s opposition Liberal/National party coalition, criticises the selection process. He says it was “highly questionable” and failed to meet normal standards of transparency and contestability.

“There was no public transparent expression of interest process to call for applications. A small number of companies were invited to participate, but they were required to sign non-disclosure agreements,” says Fletcher. “And the terms made it look like this had all been written so that PsiQuantum was going to be the winner.”

Fletcher adds that it is “particularly troubling” that the Australian government “has chosen to allocate a large amount of funding to a foreign-based quantum-computing company” rather than home-grown firms. “It would be a tragedy if this decision ends up making it more difficult for Australian-based quantum companies to compete for global investment because of a perception that their own government doesn’t believe in them,” he states.

Kees Eijkel, director of business development at the quantum institute QuTech in the Netherlands, adds that it is still an open question which “winning technology” will deliver a full-scale quantum computer, given the “huge potential” scalability of other qubit platforms.

Indeed, quantum physicist Chao-Yang Lu from the University of Science and Technology of China took to X to note that there is “no technologically feasible pathway to the fault-tolerant quantum computers PsiQuantum promised”, adding that there are many “formidable challenges”.

Lu points out that PsiQuantum had originally claimed it would have a working quantum computer by 2020, a date that was later updated to 2025. He says that the date now slipping to 2029 “is [in] itself worrying”.

The post Australia raises eyebrows by splashing A$1bn into US quantum-computing start-up PsiQuantum appeared first on Physics World.


Sound and light waves combine to create advanced optical neural networks

By: No Author
6 May 2024 at 14:00

One of the things that sets humans apart from machines is our ability to process the context of a situation and make intelligent decisions based on internal analysis and learned experiences.

Recent years have seen the development of new “smart” and artificially “intelligent” machine systems. While these systems do have a form of intelligence based on analysing data and predicting outcomes, many intelligent machine networks struggle to contextualize information and tend to produce a general output that may or may not have situational context.

Whether we want to build machines that can make informed contextual decisions like humans can is an ethical debate for another day, but it turns out that neural networks can be equipped with recurrent feedback that allows them to process current inputs based on information from previous inputs. These so-called recurrent neural networks (RNNs) can contextualize, recognise and predict sequences of information (such as time signals and language) and have been used for numerous tasks including language, video and image processing.

There’s now a lot of interest in transferring electronic neural networks into the optical domain, creating optical neural networks that can process large data volumes at high speeds with high energy efficiency. But while there’s been much progress in general optical neural networks, work on recurrent optical neural networks is still limited.

New optoelectronics required

Development of recurrent optical neural networks will require new optoelectronic devices with a short-term memory that’s programmable, computes optical inputs, minimizes noise and is scalable. In a recent study led by Birgit Stiller at the Max Planck Institute for the Science of Light, researchers demonstrated an optoacoustic recurrent operator (OREO) that meets these demands.

optoacoustic recurrent operator concept
OREO concept Information in an optical pulse is partially converted into an initial acoustic wave, which affects the second and third light–sound processing steps. (Courtesy: Stiller Research Group, MPL)

The acoustic waves in the OREO link subsequent optical pulses, capturing the information within them and using it to manipulate the next operations. The OREO is based on stimulated Brillouin–Mandelstam scattering, an interaction between optical waves and travelling sound waves; because the sound waves travel far more slowly than light, they add latency on the timescale of the pulse stream. This enables the OREO to contextualize a time-encoded stream of information using sound waves as a form of short-term memory, which could be used not only to remember previous operations but also as a basis to manipulate the output of the current operation – much like in electronic RNNs.

“I am very enthusiastic about the generation of sound waves by light waves and the manipulation of light by the means of acoustic waves,” says Stiller. “The fact that sound waves can create fabrication-less temporary structures that can be seen by light and can manipulate light in a hair-thin optical fibre is fascinating to me. Building a smart neural network based on this interaction of optical and acoustic waves motivated me to embark on this new research direction.”

Designed to function in any optical waveguide, including on-chip devices, the OREO controls the recurrent operation entirely optically. In contrast to previous approaches, it does not need an artificial reservoir that requires complex manufacturing processes. The all-optical control is performed on a pulse-by-pulse basis and offers a high degree of reconfigurability that can be used to implement a recurrent dropout (a technique used to prevent overfitting in neural networks) and perform pattern recognition of up to 27 different optical pulse patterns.

“We demonstrated for the first time that we can create sound waves via light for the purposes of optical neural networks,” Stiller tells Physics World. “It is a proof of concept of a new physical computation architecture based on the interaction and reciprocal creation of optical and acoustic waves in optical fibres. These sound waves are, for example, able to connect several subsequent photonic computation steps with each other, so they give a current calculation access to past knowledge.”

Looking to the future

The researchers conclude that they have, for the first time, combined the field of travelling acoustic waves with artificial neural networks, creating the first optoacoustic recurrent operator that connects information carried by subsequent optical data pulses.

These developments pave the way towards more intelligent optical neural networks that could be used to build a new range of computing architectures. While this research has brought intelligent context to optical neural networks, it could be developed further to create fundamental building blocks such as nonlinear activation functions and other optoacoustic operators.

“This demonstration is only the first step into a novel type of physical computation architecture based on combining light with travelling sound waves,” says Stiller. “We are looking into upscaling our proof of concepts, working on other light–sound building blocks and aiming to realise a larger optical processing structure mastered by acoustic waves.”

The research is published in Nature Communications.

The post Sound and light waves combine to create advanced optical neural networks appeared first on Physics World.


Bilayer of ultracold atoms has just a 50 nm gap

By: No Author
2 May 2024, 20:00

Two Bose-Einstein condensates (BECs) of magnetic atoms have been created just 50 nm apart from each other – giving physicists the first opportunity to study atomic interactions on this length scale. The work by physicists in the US could lead to studies of several interesting collective phenomena in quantum physics, and could even be useful in quantum computing.

First created in 1995, BECs have become invaluable tools for studying quantum physics. A BEC is a macroscopic entity comprising thousands of atoms that are described by a single quantum wavefunction. They are created by cooling a trapped cloud of bosonic atoms to a temperature so low that a large fraction of the atoms are in the lowest-energy (ground) state of the system.

BECs should be ideal for studying the quantum physics of exotic, strongly interacting systems. However, to prolong the lifetime of a BEC, physicists need to keep it isolated from the outside world to prevent decoherence. This need for isolation makes it difficult to manoeuvre BECs close enough together for the interactions to be studied.

Pancake layers

In the new work, researchers at the Massachusetts Institute of Technology, working in the group of Wolfgang Ketterle (who shared the 2001 Nobel Prize for Physics for creating BECs), tackled this problem by creating a double-layer BEC of dysprosium atoms with the two layers just 50 nm apart. To achieve this, the researchers had to hold two pancake-like condensate layers a constant distance apart using laser light with a wavelength more than ten times larger than that separation – something that would have been almost impossible using separate optical traps.

Instead, the researchers utilized the fact that dysprosium has a very large spin magnetic moment. They lifted the degeneracy of two electronic spin states using an applied magnetic field. Atoms with opposite spins coupled to light with slightly different frequencies and opposite polarizations. The researchers sent light at both frequencies down the same optical fibre onto the same mirror. Both beams formed standing waves in the cavity. “If the frequency of these two standing waves is slightly different, then at the position where we load this bilayer array, these two standing waves are going to slightly walk off,” says Li Du, who is lead author on the paper describing the research. “Therefore by tuning the frequency difference we’re able to tune the interlayer separation,” he adds.
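The walk-off Du describes can be estimated with a back-of-the-envelope calculation. Near a position z from the shared mirror, the node patterns of two standing waves with frequency difference Δf separate by roughly Δz ≈ z·Δf/f. The numbers below (trapping wavelength, distance from the mirror, frequency offset) are illustrative assumptions, not the experiment's actual parameters:

```python
# Sketch: how a small frequency difference between two standing waves
# translates into a nanometre-scale offset between their node patterns.
# All numerical values are illustrative assumptions.

c = 299_792_458.0        # speed of light, m/s

wavelength = 741e-9      # assumed trapping wavelength, m
f = c / wavelength       # corresponding optical frequency, Hz

z = 0.10                 # assumed distance from the shared mirror, m
delta_f = 200e6          # assumed frequency difference between the beams, Hz

# Near position z, the two node patterns walk off by delta_z ≈ z * (delta_f / f)
delta_z = z * delta_f / f

print(f"interlayer separation ≈ {delta_z * 1e9:.0f} nm")
```

With these assumed numbers a frequency offset of a few hundred megahertz – easily set with standard optics – yields a separation of order 50 nm, which illustrates why tuning Δf gives such fine control over the layer spacing.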

As both beams utilize the same optical fibre and the same mirror, they are robust to physical disturbance of these components. “Our scheme guarantees that you have two standing waves that can shake a little – or maybe a lot – but the shaking is a common mode, so the difference between the two layers is always fixed,” says Du.

Ringing atoms

The researchers heated one of the layers by about 2 μK and showed how the heat flowed across the vacuum gap to the other layer through the magnetic coupling of the atomic dipoles. Next they induced oscillations in the position of one layer and showed how these affected the position of the other layer: “We hit one layer with a hammer and we see that the other [layer] also starts to ring,” says Du.

The researchers now hope to use the platform to study how atoms closer together than one photon wavelength interact with light. “If the separation is much smaller than the wavelength of light, then the light can no longer tell [the atoms] apart,” says Du. “That potentially allows us to study a special effect called super-radiance.”

Beyond this, the researchers would like to investigate the work’s potential in quantum computing: “We would really like to implement a magnetic quantum gate purely driven by the magnetic dipole-dipole interaction,” he says. The same platform could also be used with BECs of molecules, which would open up the study of electric dipole–dipole interactions. Indeed, in late 2023, researchers at Columbia University in the US published a preprint that describes how they created a BEC of dipolar molecules. This preprint has yet to be peer reviewed.

Twisted graphene

Experimental atomic physicist Cheng Chin of the University of Chicago in Illinois, who last year collaborated with researchers at Shanxi University in China to produce a double layer of rubidium atoms to model twisted bilayer graphene, says that Ketterle and colleagues’ research is “very, very interesting”.

He adds, “This is the first time we’re able to prepare cold atom systems in two layers with such a small spacing…To control such a 2D system is hard but necessary in order to induce the interaction that’s required in two planes. It’s a very smart choice of atom because dysprosium has a very large dipole-dipole interaction. At a conventional spacing of half a micron, you wouldn’t be able to see any kind of coupling between the two layers, but 50 nm is just enough to show that the atoms in the two planes can really talk to each other.”

He suggests that follow-up work from both teams could focus on simulating new phases of matter, including emergent phenomena such as superconductivity in bilayer graphene.

The research is described in Science.

The post Bilayer of ultracold atoms has just a 50 nm gap appeared first on Physics World.


Quantum Machines’ processor-based approach for quantum control

By: No Author
2 May 2024, 15:13

This short video – filmed at the March Meeting of the American Physical Society in Minneapolis earlier in the year – features Itamar Sivan, chief executive and co-founder of Quantum Machines (QM). In the video, he explains how QM makes the control electronics of quantum computers – that is, the classical hardware that drives quantum processors.

Yonatan Cohen, chief technology officer and fellow co-founder, then outlines the firm’s evolution towards a processor-based approach for quantum control. As a result, QM has a unique processor that generates all the signals sent to the quantum processor, allowing the firm to build more scalable architectures while maintaining high performance.

Cohen explains that the key challenge for QM’s technology is implementing quantum error correction at scale – which is where the firm’s OPX1000 platform comes in. It is a scaled-up system with very high channel density, meaning it can control many qubits with a relatively compact system – making it, the firm says, the most scalable control system on the market.

Cohen also discusses the importance to QM of hiring staff who combine expert knowledge with a passion for the technology, and explains how partnerships help QM maintain a competitive edge in the market. One such tie-up, with NVIDIA, allowed QM to create a link between its control system and NVIDIA’s GPU–CPU platform – bringing more computing power to the heart of the quantum computer.

Sivan believes that within a couple of days of installing QM’s technology, customers can realize all the experiments they had conceived.

The post Quantum Machines’ processor-based approach for quantum control appeared first on Physics World.
