
Tetris-inspired radiation detector uses machine learning

8 May 2024 at 16:19

Inspired by the tetromino shapes in the classic video game Tetris, researchers in the US have designed a simple radiation detector that can monitor radioactive sources both safely and efficiently. Created by Mingda Li and colleagues at the Massachusetts Institute of Technology, the device employs a machine learning algorithm to process data, allowing it to build up accurate maps of sources using just four detector pixels.

Wherever there is a risk of radioactive materials leaking into the environment, it is critical for site managers to map out radiation sources as accurately as possible.

At first glance there is an obvious solution for maximizing precision while keeping costs as low as possible, explains Li. “When detecting radiation, the inclination might be to draw nearer to the source to enhance clarity. However, this contradicts the fundamental principles of radiation protection.”

For the people tasked with monitoring radiation, these principles advise that the radiation levels they expose themselves to should be kept as low as reasonably achievable.

Complex and expensive

However, since radiation can interact with intervening objects via a wide array of mechanisms, it is often both complex and expensive to map out radiation sources from reasonably safe distances.

“Thus, the crux of the matter lies in simplifying detector setups without compromising safety by minimizing proximity to radiation sources,” Li explains.

In a typical detector, radiation maps are created by monitoring intensity distribution patterns across a 10×10 array of detector pixels. The main drawback is that radiation can approach the detector from a variety of directions and distances, making it difficult to extract useful information about the source. Extracting that information usually means placing an absorbing mask over the pixels, which provides some directional information, and then doing lots of data processing.

For Li’s team, the first step to reducing the complexity of this process was to minimize redundant information collected by multiple pixels within the array. “By strategically incorporating small [lead] paddings between pixels, we enhance contrast to ensure that each detector receives distinct information, even when the radioactive source is distant,” Li explains.

Machine learning

Next, the team developed machine learning algorithms to extract more accurate information regarding the direction of incoming radiation and the detector’s distance to the source.

Inspiration for the final step of the design would come from an unlikely source. In Tetris, players encounter seven unique tetrominoes, which represent every possible way that four squares can be arranged contiguously to create shapes.

By using these shapes to create detector pixel arrays, the researchers predicted they could achieve levels of accuracy similar to those of detectors with far larger square arrays. As Li explains, “these shapes offer superior efficiency in utilizing pixels, thereby enhancing accuracy.”

To demonstrate this, the team designed a series of four-pixel radiation detectors, with the pixels arranged in Tetris-inspired tetromino shapes. To build up radiation maps, these arrays were moved in circular paths around the radioactive sources being studied. This allowed the detector’s algorithms to discern accurate information about source positions and directions, based on the counts received by the four pixels.
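As a toy illustration of the idea (not the authors’ actual geometry, count model or machine-learning algorithm), the sketch below simulates Poisson counts in four pixels arranged as an S-tetromino, assumes simple inverse-square intensities, and recovers the source position with a brute-force maximum-likelihood grid search standing in for the learned model:

```python
# Toy sketch: localize a point source from four tetromino-arranged pixels.
# Geometry, source strength and the inverse-square count model are
# illustrative assumptions, not taken from the MIT paper, and a grid
# search stands in for the machine-learning step.
import numpy as np

rng = np.random.default_rng(0)

# Pixel centres of an S-shaped tetromino (arbitrary 1 cm grid units)
pixels = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [2.0, 1.0]])

def expected_counts(source, strength=1.0e4):
    """Inverse-square expected counts in each pixel for a point source."""
    d2 = ((pixels - source) ** 2).sum(axis=1)
    return strength / np.maximum(d2, 1e-6)  # clamp to avoid dividing by zero

# Simulate one noisy measurement from a hidden source position
true_source = np.array([6.0, 3.0])
counts = rng.poisson(expected_counts(true_source))

# Brute-force maximum likelihood over a grid of candidate positions
best, best_ll = None, -np.inf
for x in np.linspace(-10, 10, 201):
    for y in np.linspace(-10, 10, 201):
        mu = expected_counts(np.array([x, y]))
        ll = np.sum(counts * np.log(mu) - mu)  # Poisson log-likelihood (constant dropped)
        if ll > best_ll:
            best, best_ll = (x, y), ll

print("true:", true_source, "estimated:", best)
```

Because the four asymmetric pixel positions are not collinear, the four count rates are enough to pin down a unique source position in this idealized 2D setup; the real detector additionally relies on the lead padding between pixels to keep the four signals distinct.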

Successful field test

“Particularly noteworthy was our successful execution of a field-test at Lawrence Berkeley National Laboratory,” Li recalls. “Even when we withheld the precise source location, the machine learning algorithm could effectively localize it within real experimental data.”

Li’s team is now confident that its novel approach to detector design and data processing could be useful for radiation detection. “The adoption of Tetris-like configurations not only enhances accuracy but also minimizes complexity in detector setups,” Li says. “Moreover, our successful field-test underscores the real-world applicability of our approach, paving the way for enhanced safety and efficacy in radiation monitoring.”

Based on their success, the team hopes the detector design could soon be implemented for applications including the routine monitoring of nuclear reactors, the processing of radioactive material, and the safe storage of harmful radioactive waste.

The detector is described in Nature Communications.

The post Tetris-inspired radiation detector uses machine learning appeared first on Physics World.


What’s hot in particle and nuclear physics? Find out in the latest Physics World Briefing

8 May 2024 at 15:41
Cover of the 2024 Physics World Particle & Nuclear Briefing
Stay tuned: the first Physics World Particle and Nuclear Briefing is out now.

From the Higgs boson at CERN to nuclear reactions inside stars, who doesn’t love particle and nuclear physics?

There’s so much exciting work going on in both fields, which is why we’re bringing you this new Physics World Particle & Nuclear Briefing.

The 30-page, free-to-read digital magazine contains the best of our recent coverage in the two areas, including – of course – plenty on CERN, which is celebrating its 70th anniversary this year.

In addition to former CERN science communicator Achintya Rao looking back at the famous day in 2012 when the lab announced the discovery of the Higgs boson, there’s an interview with Freya Blekman, who talks about the joy of a career in physics as part of the CMS experiment at the Large Hadron Collider.

You can also find out how CERN’s Quantum Technology Initiative is encouraging collaboration between the high-energy physics and quantum tech communities.

But it’s not all about CERN. Over in the US, there are in-depth interviews with Lia Merminga, the physicist who is currently director of the Fermi National Accelerator Laboratory, and with Mike Witherell, head of the Lawrence Berkeley National Laboratory.

Looking to the future, we’ve included an analysis of the influential “P5” report into the future of US particle physics, which recently called for the construction of a muon collider. Physics World also talks to Ambrogio Fasoli – the new head of EUROfusion, who says that Europe must ramp up its efforts to build a demonstration fusion reactor.

And with our pick of the best recent news and research updates, the new Physics World Particle & Nuclear Briefing really is the place for you to start.

If that’s not enough, do keep checking our particle and nuclear channel on the Physics World website for regular updates in the two fields.



Radiation-transparent RF coil designed for MR guidance of particle therapy

By Tami Freeman
8 May 2024 at 10:50

Particle therapy is usually delivered using a large and costly gantry to change the angle of incidence of the therapeutic ion beam relative to the patient. If the patient were rotated instead, a simpler fixed-beam configuration could provide 360° access for the particle beam. During patient rotation, however, the changing direction of the gravitational force will deform and displace the tumour and surrounding organs in an unpredictable way. To ensure precise dose delivery to the tumour, such anatomical changes must be detected and compensated for during irradiation.

“Image guidance is absolutely necessary for particle therapy with patient rotation,” explains Kilian Dietrich from Heidelberg University Hospital and the German Cancer Research Center (DKFZ). “To exploit the main benefit of particle therapy – high dose escalation at the tumour with minimal dose to surrounding healthy tissue – prior knowledge of the tissue composition in the irradiation path is required.”

In conventional photon-based radiotherapy, MRI can be implemented in so-called MR-linacs, which offer the possibility to visualize changes in anatomy or patient position with high soft-tissue contrast. However, combining MRI with particle therapy including patient rotation remains a significant challenge.

Particle beams of protons, carbon ions or helium ions are extremely sensitive to non-homogeneous materials in the irradiation path, placing constraints on the MRI magnet and components. To address these limitations, Dietrich and colleagues are developing a radiation-transparent body coil to enable MR-guided particle therapy in combination with patient rotation, describing their work in Medical Physics.

Radiation transparency

One key obstacle when integrating MRI with particle therapy is the design of the radiofrequency (RF) coils used to flip the magnetization of the tissue and receive the generated MR signals. Conventional imaging coils contain highly attenuating electronic components that, if located in the beam path, will cause ion attenuation and scattering that alter the delivered dose distribution and reduce treatment efficacy.

To prevent such adverse effects, the team designed an RF coil with minimal ion attenuation, based on a cylindrical 16-rung birdcage configuration. This specific birdcage coil only has capacitors on the end rings, thereby avoiding attenuation and scattering in a large window in between. And since the birdcage functions both as a transmit and a receive coil, no additional RF coils are required. The design also allows easy integration into a capsule that enables rotation of the patient and the coil together, providing 360° access for a fixed ion beam source.

The researchers built the RF coil from a 35 µm-thick copper conductor embedded between layers of flexible polyimide and adhesive. The coil has an inner diameter of 53 cm and an axial length of 52 cm – providing a large enough field-of-view for full-body cross section imaging.

Measuring the Bragg peak shift caused by the entire RF coil confirmed its total water equivalent thickness (WET, a measure of ion attenuation) as 420 µm. This includes the polyimide and adhesive layers, which are homogeneous and can be compensated for with a higher particle beam energy. The WET of the copper layer alone, which is inhomogeneous and cannot simply be compensated for, was approximately 210 µm. This is well within the clinical precision required for dose planning, which is of the order of millimetres. As such, the team classifies the RF coil as radiation transparent.
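The water-equivalent-thickness bookkeeping can be sketched numerically with the standard layered approximation WET ≈ Σᵢ tᵢ (ρᵢ/ρ_w)(Sᵢ/S_w). The 35 µm copper thickness comes from the article; every other number below (layer thicknesses, densities, proton mass-stopping-power ratios) is an illustrative assumption, not a measured value from the paper:

```python
# Rough water-equivalent thickness (WET) of a layered RF-coil wall.
# The 35 µm copper thickness is from the article; the other thicknesses,
# densities and proton mass-stopping-power ratios are illustrative
# assumptions, not the paper's measured values.
layers = [
    # (name, thickness in µm, density in g/cm³, mass-stopping-power ratio vs water)
    ("copper",    35.0, 8.96, 0.70),
    ("polyimide", 100.0, 1.42, 0.95),
    ("adhesive",  50.0, 1.10, 1.00),
]

def wet_um(thickness_um, density, sp_ratio, water_density=1.0):
    # Standard approximation: WET = t * (rho / rho_w) * (S/rho)_m / (S/rho)_w
    return thickness_um * (density / water_density) * sp_ratio

total = sum(wet_um(t, rho, r) for _, t, rho, r in layers)
print(f"total WET ≈ {total:.0f} µm")  # sub-millimetre, the same scale as the reported 420 µm
```

With these illustrative numbers the dense copper alone contributes roughly 220 µm despite being the thinnest layer, which is why the inhomogeneous copper pattern dominates the part of the attenuation that cannot simply be compensated by raising the beam energy.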

Effective imaging

To characterize the imaging quality of their RF coil, the researchers imaged a homogeneous tissue-simulating phantom using a 1.5 T MR system. For the three central planes in the phantom, the transmit RF field distributions were homogeneous and resembled those of simulations and the MR system’s internal body coil. The measured transmit power efficiencies (between 0.17 and 0.26 µT/√W) were lower than the simulated values, but exceeded those of the internal body coil.

To examine the impact of coil rotation, they determined the mean transmit power efficiency in a central subvolume of the phantom for a full capsule rotation. Compared with the simulations, the measurements showed a slight dependence on rotation angle, with optimal transmit power efficiency at rotation angles close to 0° and 180°.

The RF coil also exhibited uniform signal acquisition in the three central phantom planes, with similar receive sensitivity profiles as observed in the simulations, both with the phantom in the horizontal position and when rotated by 30°. For a full rotation of the capsule, the measured receive sensitivity varied between 62% and 125%, decreasing at rotation angles between 15° and 120° and at 205°.

The signal-to-noise ratio (SNR) of the RF body coil showed a slight dependence on the rotation angle, ranging between 103 and 150. Overall, an increase of 10%–43% over the SNR of the internal body coil was achieved, indicating reasonable imaging quality for thoracic, abdominal and pelvic MRI.

To estimate the effect of realistic patient loading in the RF coil, the team also simulated a heterogeneous human voxel model, observing high transmit power efficiency and receive sensitivity for all rotation angles. The next step will be to perform in vivo measurements.

“The RF coil has not been tested in vivo yet since further tests are necessary before the whole setup can be tested,” Dietrich tells Physics World. “This includes patient acceptance for the rotation system as well as the time required to rescue the patient in times of emergency.”



From pulsars and fast radio bursts to gravitational waves and beyond: a family quest for Maura McLaughlin and Duncan Lorimer

7 May 2024 at 18:41

Most physicists dream of making new discoveries that expand what we know about the universe, but they know that such breakthroughs are extremely rare. It’s even more surprising for a scientist to make a great discovery with someone who is not just a colleague, but also their life partner. The best-known husband-and-wife couples in physics are Marie and Pierre Curie, and their daughter Irène Joliot-Curie with her husband Frédéric Joliot-Curie. Each couple won a Nobel prize, in 1903 and 1935 respectively, for early work on radioactivity.

Joining the ranks of these pioneering physicists are contemporary married couple Maura McLaughlin and Duncan Lorimer, who last year were two of three laureates awarded the $1.2m Shaw Prize in Astronomy (see box below) for their breakthroughs in radio astronomy. Together with astrophysicist Matthew Bailes, director of the Australian Research Council Centre of Excellence for Gravitational Wave Discovery, McLaughlin and Lorimer won the prize for their 2007 discovery of fast radio bursts (FRBs) – powerful but short-lived pulses of radio waves from distant cosmological sources. Since their discovery, several thousand of these mysterious cosmic flashes, which last for milliseconds, have been spotted.

Over the years, McLaughlin and Lorimer’s journeys – through academia and their personal life – have been inherently entwined and yet distinctly discrete, as the duo developed careers in radio astronomy and astrophysics that began with pulsars, then included FRBs and now envelop gravitational waves. The couple have also advanced science education and grown astronomical research and teaching at their home base, West Virginia University (WVU) in the US. There, McLaughlin is Eberly Family distinguished professor of physics and astronomy, and chair of the Department of Physics and Astronomy, while Lorimer currently serves as associate dean for research in WVU’s Eberly College of Arts and Sciences.

The Shaw Prize

Photo of two people superimposed with artist impression of radio waves
Shaw laureates Astrophysicists Duncan Lorimer and Maura McLaughlin received the Shaw Prize in 2023 for their discovery of fast radio bursts. (Courtesy: WVU Photo/Raymond Thompson Jr)

The 2023 Shaw Prize in Astronomy, awarded jointly to Duncan Lorimer and Maura McLaughlin, and to their colleague Matthew Bailes, is part of the legacy of Sir Run Run Shaw (1907–2014), a successful Hong Kong-based film and television mogul. Known for his philanthropy, he gave away billions in Hong Kong dollars to support schools and universities, hospitals and charities in Hong Kong, China and elsewhere.

In 2002 he established the Shaw Prize to recognize “those persons who have achieved distinguished contributions in academic and scientific research or applications or have conferred the greatest benefit to mankind”. A gold medal and a certificate for each Shaw laureate, and a monetary award of $1.2m shared among the laureates, are given yearly in astronomy, life science and medicine, and mathematical sciences. Previous winners of the Shaw Prize in Astronomy include Ronald Drever, Kip Thorne and Rainer Weiss, for the first observation of gravitational waves with LIGO. They are among the 16 of the 106 Shaw laureates since 2004 who have also been awarded Nobel prizes.

Accidental cosmic probe

Radio astronomy, which led to much of McLaughlin and Lorimer’s work, was not initially a formal area of research. Instead, it began rather serendipitously in 1928, when Bell Labs radio engineer Karl Jansky was trying to find the possible sources of static at 20.5 MHz that were disrupting the new transatlantic radio telephone service. Among the types of static that he detected was a constant “hiss” from an unknown source that he finally tracked down to the centre of the Milky Way galaxy, using a steerable antenna 30 m in length. His 1933 paper “Electrical disturbances apparently of extraterrestrial origin” received considerable media attention but little notice from the astronomy establishment of the time (see “Radio astronomy: from amateur roots to worldwide groups” by Emma Chapman).

Radio astronomy truly flourished after the Second World War, with new purpose-built facilities. An early example from 1957 was the steerable 76 m dish antenna built by Bernard Lovell and colleagues at Jodrell Bank in the UK – where McLaughlin and Lorimer would later work. Other researchers who led the way include the Nobel-prize-winning astronomer Sir Martin Ryle, who pioneered radio interferometry and developed aperture synthesis; as well as Australian electrical engineer Bernard Mills, who designed and built radio interferometers.

Extraterrestrial radio signals soon yielded important science. In 1951 researchers detected a predicted emission from neutral hydrogen at 1.4 GHz – a fingerprint of this fundamental atom. In 1964 Arno Penzias and Robert Wilson (also based at Bell Labs) inadvertently found a 4.2 GHz signal across the whole sky, while testing orbiting telecom satellites – thereby discovering the cosmic background radiation. And in 1968 another spectacular discovery shaped McLaughlin and Lorimer’s careers, when University of Cambridge graduate student Jocelyn Bell Burnell and her PhD supervisor Antony Hewish announced the observation of an unusual radio signal from space – a pulse that arrived every 1.3 seconds. That signal was the first to come from what were soon called “pulsars”. Hewish would go on to share the 1974 Nobel Prize for Physics for the discovery – while Bell Burnell was infamously left out, supposedly due to her then student status.

As more pulsars were found with varied periods and in different directions of the sky, it became clear that the signals were not being sent by an alien civilization, as some researchers had speculated – after all, the chances of an extraterrestrial civilization sending many signals of varying periods, or of different civilizations sending out different periodic signals, were slim. One clue was that the pulses were short and coherent, so they had to come from sources smaller than the distance light could travel during the pulse’s lifetime – for instance, the source of a 5 ms pulse could be at most about 1500 km across.
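The size bound follows directly from light-travel time; the minimal sketch below, with the speed of light as its only input, reproduces the ~1500 km figure quoted for a 5 ms pulse:

```python
# Causality bound on source size: an incoherent emitter much larger than
# c × (pulse duration) could not produce so short and coherent a pulse.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def max_source_size_km(pulse_duration_s):
    return C_KM_PER_S * pulse_duration_s

print(f"{max_source_size_km(5e-3):.0f} km")  # ~1500 km for a 5 ms pulse
```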

As it happened, the signals were our first look at neutron stars – small, extremely dense and rapidly rotating remnants of massive stars after they have gone supernova and had their protons and electrons squeezed into neutrons by gravity’s implacable power. As the star rotates, its strong off-axis magnetic field produces beams of electromagnetic radiation from the magnetic poles. These beams create regular pulses as they sweep past a detector on a direct line of sight. Pulsars are mostly studied at radio frequencies, but they also radiate at other, higher frequencies.

Pulsars to fast bursts

Lorimer and McLaughlin began their careers by studying these exotic stellar objects, but each of them had already been captivated by astronomy and astrophysics as teenagers. Lorimer was born in Darlington, UK. After studying astrophysics as an undergraduate at the University of Wales in Cardiff, he moved to the University of Manchester in 1994, where his PhD research focused on analysing classes of radio pulsars with different periods.

McLaughlin was born in Philadelphia, Pennsylvania, and first studied pulsars as an undergraduate student at Penn State. Her PhD dissertation at Cornell University in 2001 covered pulsars that variously emitted radio waves, X-rays or gamma rays. By 1995 Lorimer was working as a researcher at the Max Planck Institute for Radio Astronomy in Bonn, Germany, before moving on to the Arecibo Observatory in Puerto Rico, where his path first crossed McLaughlin’s in the late 1990s. In the early 2000s both moved to the UK to work at the Jodrell Bank Observatory.

It was an interesting and exciting time in the pulsar research community, with new pulsars being found by computerized Fourier-transform analysis that detected telltale periodicities in vast amounts of observational data. But radio astronomers also sometimes saw transient signals, and McLaughlin had written computer code designed to find single bright pulses. This led to the 2006 discovery of a new class of pulsars dubbed rotating radio transients (RRATs, an acronym recalling a pet rat McLaughlin once had). These stars could be detected only through their sporadic millisecond-long bursts, unlike most pulsars, which were found through their periodic emissions. The discovery in turn initiated further searches for transient pulses (Nature 439 817).

The following year, Lorimer and McLaughlin, now a married couple, joined WVU’s department of physics and astronomy as assistant professors. To uncover more distant and bright pulsars, Lorimer gave his graduate student Ash Narkevic the task of looking through archival observational data that the Parkes radio telescope in Australia had taken of the Large and Small Magellanic Clouds – two small galaxies that are satellites to our very own Milky Way, roughly 200,000 light-years away from Earth – of which the Large was already known to host 14 pulsars.

Narkevic examined the data and found a single strong burst – nearly 100 times stronger than the background – at 1.4 GHz and with a 5 ms duration. But the burst seemed to come from the Small Magellanic Cloud, where there were five known pulsars at that time. Even more surprising was the fact that this extremely bright burst did not arrive at all frequencies at the same time. Known as pulse or frequency dispersion, this occurs because radio waves travelling through interstellar space interact with free electrons: higher-frequency waves travel through the free-electron plasma faster than lower-frequency ones, and so arrive earlier at our telescopes.

This dispersion depends on the total number of electrons (or the column density) along the path. The further away the source of the burst, the more likely it is that the waves will encounter even more electrons on their path to Earth, and so the lag between the high- and low-frequency waves is greater. The pulse Narkevic spotted was so distorted by the time it reached Earth that it suggested the source was almost three billion light-years away – well beyond our local galactic neighbourhood. This also meant that the source must be significantly smaller than the Sun, and more on a par with the proposed size of pulsars, while also somehow being 10¹² times more luminous than a typical pulsar.
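The dispersion delay follows the standard cold-plasma scaling, delay ∝ DM/ν². The sketch below uses the dispersion measure reported for the Lorimer burst (about 375 pc cm⁻³) and an illustrative 1.2–1.5 GHz observing band (the band edges are assumptions, not the exact receiver specification) to show how long the burst takes to sweep across the band:

```python
# Cold-plasma dispersion: lower radio frequencies arrive later, with
# delay(nu) ≈ 4.1488 ms × DM × (nu / GHz)^-2, DM in pc cm^-3.
K_MS = 4.1488  # dispersion constant in ms GHz² per (pc cm⁻³)

def delay_ms(dm, nu_ghz):
    return K_MS * dm * nu_ghz ** -2

dm = 375.0  # dispersion measure reported for the Lorimer burst, pc cm⁻³
# Illustrative 1.2-1.5 GHz band: time for the burst to sweep from top to bottom
sweep = delay_ms(dm, 1.2) - delay_ms(dm, 1.5)
print(f"sweep across band ≈ {sweep:.0f} ms")
```

A burst lasting only milliseconds is thus smeared over roughly 0.4 s of arrival time across the band – exactly the frequency-dependent lag from which the enormous distance to the source was inferred.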

1 The first burst

Photo of two men holding a sheaf of paper and a graph of radio data showing a clear black line
Courtesy: Duncan Lorimer; Lorimer et al., NRAO/AUI/NSF

(Top) Duncan Lorimer (left) and Ash Narkevic in 2008 with the paper they published in Science about their observation of a fast radio burst (bottom).

The report of this seemingly new phenomenon – a single extremely energetic event at an enormous cosmological distance – was published in Science later that year, after being initially rejected (Science 318 777). This first detected fast radio burst came to be known as the “Lorimer burst” (figure 1). After several years and significant further work by Lorimer, McLaughlin, Bailes and others, first four and then tens of similar bursts were found. This launched a new class of cosmological phenomena that now includes more than 1000 FRBs – fulfilling the 2007 prediction that they would serve as cosmological probes.

Because FRBs have been found across the sky in galaxies beyond our own, they serve as a probe of the intergalactic medium, allowing astrophysicists to measure the density of the material that lies between Earth and the host galaxy (Nature 581 391). By measuring the distance to the source of the FRB, and then looking at the dispersion as a function of wavelength of the pulses, astronomers can determine the density of the matter the pulse passed through, thereby yielding a value for the baryonic density of our universe. This is otherwise extremely difficult to measure, because of how diffuse this matter is in our observable universe. FRBs have also provided an independent measurement of the Hubble constant, the exact value of which has lately come under new scrutiny (MNRAS 511 662).

Detecting a gravitational-wave background

While Lorimer is still working on pulsars and FRBs, McLaughlin has now moved into another area of pulsar astronomy. That’s because for almost two decades, she has been a researcher in and co-director of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) Physics Frontier Center, which uses pulsars to detect low-frequency gravitational waves with periods of years to decades. One of its facilities is the steerable 100 m Green Bank Telescope about 150 km south of WVU.

“We are observing an array of pulsars distributed across the sky,” says McLaughlin. “These are 70 millisecond pulsars, so very rapidly rotating. We search for very small deviations in the arrival times of the pulsars that we can’t explain with a timing model that accounts for all the known astrophysical delays.” General relativity predicts that certain deviations in the timing would depend on the relative orientation of pairs of pulsars, so seeing this special angular correlation in the timing would be a clear sign of gravitational waves.
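The special angular correlation McLaughlin refers to is the Hellings–Downs curve. As a sketch, here is its standard closed-form expression, in the common normalization where the correlation at zero separation is 0.5 (pulsar auto-correlation terms excluded):

```python
# Hellings–Downs curve: the correlation, predicted by general relativity,
# between the timing deviations of two pulsars separated by angle theta
# on the sky, normalized so the correlation at zero separation is 0.5.
import numpy as np

def hellings_downs(theta_rad):
    x = (1.0 - np.cos(theta_rad)) / 2.0
    x = np.where(x == 0.0, 1e-300, x)  # x * log(x) -> 0 as x -> 0
    return 1.5 * x * np.log(x) - x / 4.0 + 0.5

angles = np.array([0.0, np.pi / 2, np.pi])
print(hellings_downs(angles))  # ≈ [0.5, -0.14, 0.25]
```

The characteristic dip to negative correlation near 90° separation, rising again to 0.25 for pulsars on opposite sides of the sky, is the quadrupolar signature that distinguishes a gravitational-wave background from, say, clock errors, which would correlate all pulsars equally.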

2 Gravitational-wave spectrum

Two figures: a globe covered in coloured symbols and a chart
Courtesy: NANOGrav Collaboration

(a) The NANOGrav 15-year data set contains timing observations from 68 pulsars using the Arecibo Observatory, the Green Bank Telescope and the Very Large Array. The map shows pulsar locations in equatorial co-ordinates. (b) The background comes from correlating changes in pulsar arrival times between all possible pairs of the 67 pulsars (2211 distinct pairs in total), and is based on three or more years of timing data. The black line is the expected correlation predicted by general relativity. These calculations assume the gravitational-wave background is from inspiralling supermassive black-hole binaries.

In June 2023 the NANOGrav collaboration published an analysis of 15 years of its data (figure 2), looking at 68 pulsars with millisecond periods, which showed this signature for the first time (ApJL 951 L8). McLaughlin says that it represents not just one source of gravitational waves, but a background arising from many events, such as the merging of supermassive black holes at the hearts of galaxies. This background may contain information about how galaxies interact, and perhaps also about the early universe. Five years from now, she predicts, NANOGrav will be detecting individual supermassive black-hole binaries and tagging their locations in specific galaxies, to form a black-hole atlas.

Star-crossed astronomers

The connections between McLaughlin and Lorimer that played a role in their academic achievements began rather fittingly with an interaction in 1999, at the Arecibo radio telescope in Puerto Rico (now sadly decommissioned). Lorimer was based there at the time, while McLaughlin was a visiting graduate student, and their contact, though not in person, was definitely not cordial. Lorimer sent what he calls a “little snippy e-mail” to McLaughlin about her use of the computer that blocked his own access, which she also recalls as “pretty grumpy”.

Two photos: a woman stood on a telescope gantry and a man in a control room
Near miss Maura McLaughlin and Duncan Lorimer both worked at Arecibo Observatory in Puerto Rico in 1999, but they didn’t meet in person during that time. (Courtesy: Maura McLaughlin and Duncan Lorimer)

But things improved after they later met in person, and they joined the Jodrell Bank Observatory in the UK. The pair married in 2003 and now have three sons. Over the years, they moved together to the US, set up their own astronomy group at WVU by 2006, and proceeded to work together and alongside each other, publishing many research papers, both joint and separate.

Given all these successes, how do the two researchers balance science and family, especially when they first arrived at WVU with a five-month-old baby to join a department with just one astronomer and no graduate astronomy programme? McLaughlin says it was “Really hard work. Lots of grant writing, developing courses,” but adds that it was also “really fun because we were both building a programme and building a family and moving to a new place”.

Life got even busier in 2007, when another child and the FRB discovery both arrived. The couple says that it was all doable because they fully understood the need to shift scientific or family responsibilities to each other as necessary. According to McLaughlin, this includes equal parenting from her husband, for which she feels “very lucky”. As Lorimer puts it, “We get each other’s mindset.”

However, the fact that they are married may have coloured perceptions of their work and status. “When we first started here at WVU,” Lorimer explains, “a lot of people assumed we were sharing a single position. But the university’s been great. It’s always made it clear from the get-go that we’re obviously on different career trajectories.” And they agree that as they’ve progressed in their individual careers and are known for different things, they’re now unmistakably seen as two distinct scientists.

Three photos of the same couple: their wedding; riding a tandem bike; and posing with a dog
Shared wavelength Maura McLaughlin and Duncan Lorimer married in 2003 (top). They credit their ability to both have successful careers to sharing and shifting family responsibilities as needed, as well as taking their initially similar career paths on different trajectories. (Courtesy: Maura McLaughlin and Duncan Lorimer)

Beyond the Shaw Prize

The Shaw Prize came as a total surprise to the couple. The pair both received e-mails simultaneously one evening, but Lorimer spotted his first. “We almost missed it as it was just about time to go to bed and the announcement was being made in Hong Kong a few hours after that,” says Lorimer. McLaughlin recalls her husband screaming and excitedly running up the stairs to give her the news. “He doesn’t scream much to begin with, maybe only when the dogs do something bad, and I’m wondering ‘Why is he screaming late on a Sunday night?’ He told me to pull up the e-mail and I thought it was a prank. I read it again and realized it was real. That was quite a Sunday night.” Amusingly, the e-mail for their co-winner Matthew Bailes initially went into his spam folder. The trio would later describe their work in a Shaw Prize Lecture in Hong Kong in November 2023.

So what comes next for the stellar pair? Further research into the different types of FRBs that are still being found, using new telescopes and detection schemes. One new project, an extension of Lorimer’s earlier work in pulsar populations, is to locate FRBs in specific galaxies and among groups of both younger and older stars using the Green Bank telescope in West Virginia, along with others, to help uncover what causes them. FRBs may come from neutron stars with especially huge magnetic fields – dubbed magnetars – but this remains to be seen.

Data from Green Bank is also used in the Pulsar Science Collaboratory, co-founded by McLaughlin and Lorimer (see box below). Meanwhile, the NANOGrav pulsar observation of the gravitational wave background, where McLaughlin continues her long-time involvement, has been hailed by the LIGO Collaboration for opening up the spectrum in the exciting new era of gravitational-wave astronomy and cosmology.

The Pulsar Science Collaboratory

Photo of two high-schoolers and a woman looking at data on a computer screen
Engaging science Participants in the Pulsar Science Collaboratory, at the Green Bank Telescope control room. (Courtesy: NSF/AUI/GBO)

The Pulsar Science Collaboratory (PSC) was founded in 2007 by Maura McLaughlin, Duncan Lorimer and Sue Ann Heatherly at the Green Bank Observatory, with support from the US National Science Foundation. It is an educational project in which, to date, more than 2000 high-school students have been involved in the search for new pulsars.

Students are trained via a six-week online course and then must pass a certification test to use an online interface to access terabytes of pulsar data from the Green Bank Observatory. They are also invited to a summer workshop at the observatory. McLaughlin and Lorimer proudly note the seven new pulsars that high-school students have so far discovered. Many of these students have continued as college undergraduates or even graduate students working on pulsar and fast-radio-burst science.

At the end of the Shaw Prize Lecture, Lorimer pointed out that there is “still much left to explore”. In an interview for the press, McLaughlin said “We’ve really just started.” Both statements seem fair predictions for anything each one does in their areas of interest in the future – surely with hard work but also with the continuing sense that it’s “really fun”.

The post From pulsars and fast radio bursts to gravitational waves and beyond: a family quest for Maura McLaughlin and Duncan Lorimer appeared first on Physics World.

  •  

Australia raises eyebrows by splashing A$1bn into US quantum-computing start-up PsiQuantum

By: No Author
7 May 2024, 17:16

The Australian government has controversially announced it will provide A$940m (£500m) for the US-based quantum-startup PsiQuantum. The investment, which comes from the country’s National Quantum Strategy budget, makes PsiQuantum the world’s most funded independent quantum company.

Founded in 2015 by five physicists who were based in the UK, PsiQuantum aims to build a large-scale quantum computer by 2029 using photons as quantum bits (or qubits). As photonic technology is silicon-based, it benefits from advances in large-scale chip fabrication and does not need as much cryogenic cooling as other qubit platforms.

The company has already reported successful on-chip generation and detection of single-photon qubits, but progress has not been all plain sailing. In particular, optical losses still need to be reduced further, and detection must become more efficient, to improve the quality (or fidelity) of the qubits.

Despite these challenges, PsiQuantum has already attracted several supporters. In 2021 private investors gave the firm $665m and in 2022 the US government provided $25m to both GlobalFoundries and PsiQuantum to develop and build photonic components.

The money from the Australian government comes mostly via equity-based investment as well as grants and loans. The amount represents half of the budget that was allocated by the government last year to boost Australia’s quantum industry over a seven-year period until 2030.

The cash comes with some conditions, notably that PsiQuantum should build its regional headquarters in the Queensland capital Brisbane and operate the to-be-developed quantum computer from there. Anthony Albanese, Australia’s prime minister, claims the move will create up to 400 highly skilled jobs, boosting Australia’s tech sector.

A bold declaration

Stephen Bartlett, a quantum physicist from the University of Sydney, welcomes the news. He adds that the scale of the investment “is required to be on par” with companies such as Google, Microsoft, AWS, and IBM that are investing similar amounts into their quantum computer programmes.

Ekaterina Almasque, general partner at the venture capital firm OpenOcean, says that the investment may bring further benefits to Australia. “The [move] is a bold declaration that quantum will be at the heart of Australia’s national tech strategy, firing the starting gun in the next leg of the race for quantum [advantage],” she says. “This will ripple across the venture capital landscape, as government funding provides a major validation of the sector and reduces the risk profile for other investors.”

Open questions

The news, however, did not please everyone. Paul Fletcher, science spokesperson for Australia’s opposition Liberal/National party coalition, criticises the selection process. He says it was “highly questionable” and failed to meet normal standards of transparency and contestability.

“There was no public transparent expression of interest process to call for applications. A small number of companies were invited to participate, but they were required to sign non-disclosure agreements,” says Fletcher. “And the terms made it look like this had all been written so that PsiQuantum was going to be the winner.”

Fletcher adds that it is “particularly troubling” that the Australian government “has chosen to allocate a large amount of funding to a foreign-based quantum-computing company” rather than home-grown firms. “It would be a tragedy if this decision ends up making it more difficult for Australian-based quantum companies to compete for global investment because of a perception that their own government doesn’t believe in them,” he states.

Kees Eijkel, director of business development at the quantum institute QuTech in the Netherlands, adds that it is still an open question what “winning technology” will result in a full-scale quantum computer due to the “huge potential” in the scalability of other qubit platforms.

Indeed, quantum physicist Chao-Yang Lu from the University of Science and Technology of China took to X to note that there is “no technologically feasible pathway to the fault-tolerant quantum computers PsiQuantum promised”, adding that there are many “formidable” challenges.

Lu points out that PsiQuantum had originally claimed it would have a working quantum computer by 2020, a date that was then pushed back to 2025. He says that the date now slipping to 2029 “is [in] itself worrying”.

The post Australia raises eyebrows by splashing A$1bn into US quantum-computing start-up PsiQuantum appeared first on Physics World.

  •  

Dark-field X-ray imaging reveals potential of nanoparticle-delivered gene therapy

By: Tami Freeman
7 May 2024, 10:30

Cystic fibrosis is a genetic disorder in which defects in the CFTR protein (arising from mutations in the CFTR gene) can cause life-threatening symptoms in multiple organs. In the respiratory system, cystic fibrosis dehydrates the airway and produces sticky mucus in the lungs, leading to breathing problems and increasing the risk of lung infections.

One proposed treatment for cystic fibrosis is gene therapy, in which a viral vector delivers a healthy copy of the CFTR gene into airway cells to produce functional CFTR protein. To transport this vector to target cells and keep it there long enough to interact with them – key challenges for all gene therapies – researchers have coupled the vector to magnetic nanoparticles, which should allow controlled delivery to the airways using an external magnetic field.

Researchers at the University of Adelaide are now tackling another pressing challenge for successful gene therapy – visualizing the magnetic nanoparticles within live airways and manipulating them in vivo. To achieve this, they explored the use of dark-field X-ray imaging to enhance nanoparticle contrast and understand how magnetic nanoparticles move within the airway of a live rat, reporting their findings in Physics in Medicine & Biology.

While conventional X-ray imaging relies on the absorption of X-rays, dark-field X-ray imaging detects small-angle scattering from microstructures within a sample. To perform dark-field imaging, the researchers used a 25.0 keV monochromatic beam at the SPring-8 Synchrotron in Japan. They placed a phase grid into the beam upstream of the sample, creating a pattern of beamlets at the detector. These beamlets diffuse as they scatter through the sample, and the dark-field signal can be extracted from the strength of this blurring at the detector.
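In rough terms, the dark-field signal measures how much the sample washes out the contrast (visibility) of the beamlet pattern. The toy one-dimensional sketch below, with made-up numbers rather than anything from the paper, illustrates the idea: the ratio of the pattern visibility with and without the sample gives the dark-field value.

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 400)
reference = 1.0 + 0.8 * np.sin(x)  # beamlet pattern with no sample (visibility ~0.8)
blurred = 1.0 + 0.3 * np.sin(x)    # same pattern diffused by scattering (~0.3)

def visibility(intensity):
    """Fringe contrast: (Imax - Imin) / (Imax + Imin)."""
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

# Stronger small-angle scattering lowers this ratio towards zero
dark_field = visibility(blurred) / visibility(reference)
print(round(dark_field, 3))  # ≈ 0.375
```

Real retrievals work pixel-by-pixel on two-dimensional grid images, but the visibility-ratio principle is the same.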

University of Adelaide researchers
Research team From left to right: Martin Donnelley, Kaye Morgan, David Parsons, Ronan Smith and Alexandra McCarron during their visit to Japan to use the SPring-8 Synchrotron. (Courtesy: Martin Donnelley)

“My group previously used high-resolution phase-contrast X-ray imaging for imaging nanoparticle delivery, and we were at the synchrotron when we realised the images weren’t showing the full picture,” first author Ronan Smith tells Physics World. “I developed new methods for directional dark-field imaging during my PhD, so we thought we’d see if that could help.”

Imaging nanoparticle delivery

The researchers first examined the delivery of superparamagnetic nanoparticles to an anaesthetized rat, positioned with the synchrotron beam passing through its trachea at 45°. Imaging a living animal inevitably creates background signals from the surrounding anatomy. To suppress this background during nanoparticle delivery, the team employed a novel approach based on analysing the components of the directional dark-field signal.

A suspension of nanoparticles should scatter X-rays isotropically, so the major and minor scattering components of the directional dark-field signal should be equal. Asymmetric structures such as tissue, skin and hair, however, will scatter anisotropically, with most of the signal appearing in the major component. By examining just the minor component, the team could enhance the contrast of the nanoparticle signal above the background.
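The logic can be sketched with a toy one-dimensional "image" (illustrative numbers only, not the team's analysis code): an anisotropic background contributes almost entirely to the major component, so an isotropic nanoparticle signal stands out far more clearly in the minor component.

```python
import numpy as np

# Ten pixels of anisotropic background (e.g. tissue, hair): the scattering
# is concentrated in the major component, with little in the minor one.
major = np.full(10, 0.80)
minor = np.full(10, 0.05)

# An isotropic nanoparticle bolus in the central pixels adds equally to both.
major[4:7] += 0.30
minor[4:7] += 0.30

# Contrast of the nanoparticle region relative to the background
contrast_major = major[5] / major[0]  # barely visible above the background
contrast_minor = minor[5] / minor[0]  # clearly visible
print(round(contrast_major, 3), round(contrast_minor, 3))  # 1.375 7.0
```

The isotropic signal is the same in both components; it is the quiet background of the minor component that makes the difference.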

“The directional dark-field retrieval approach was key in isolating the isotropic dark-field signal, generated by nanoparticles entering the airways, from the overlying directional dark-field signal generated by the surrounding anatomy,” Smith explains. “No one has taken this approach before as far as I know.”

Smith and colleagues delivered the nanoparticles into the rat’s trachea over 25 s, capturing 180 frames during this time, guided by the animal’s breathing. Initially, a diagonal line appeared in both the X-ray transmission and dark-field images, showing the nanoparticles starting to flow from the delivery tube into the trachea. At 22.91 s, the minor dark-field signal revealed a noticeable feature in the lower half of the tube, which became gradually clearer before being pushed out by an air bubble at the end of the delivery. The dark-field signal captured this event with 3.5 times higher signal-to-noise ratio than the transmission signal.

Directional dark-field X-ray imaging
Nanoparticle imaging Transmission (a), directional dark-field (b), and major (c) and minor (d) components of the dark-field images. (Courtesy: CC BY 4.0/Phys. Med. Biol. 10.1088/1361-6560/ad40f5)

Imaging the delivery process revealed that the nanoparticles unexpectedly settled inside the delivery tube, with many only reaching the trachea during the last 10% of the delivery. The researchers note that this could lead to suboptimal cellular uptake of viral vectors being delivered by nanoparticles, adding that this process could not have been observed without dark-field imaging.

Rotating nanoparticle strings

Next, the team exposed the rat to a 1.17 T magnet, which caused the nanoparticles to form into string-like structures, and rotated the magnet around its trachea. With the magnet above the rat, transmission images showed that the strings were aligned vertically. As the magnet moved, the strings remained aligned to the magnetic field, suggesting that dynamic magnetic fields could indeed manipulate nanoparticles in situ.

With the magnet alongside the rat (partially aligning the strings along the beam axis), the strings also produced a directional dark-field signal. However, this signal was not clearly visible when the particles were aligned vertically, likely due to the beam passing through fewer nanoparticles in this position.

Smith says that the biologists in his group are now using these imaging results to enhance their work on airway gene therapy. “It’s a cyclic development process, so we have more synchrotron experiments planned to answer the questions that their results give, using a mixture of phase-contrast and directional dark-field imaging,” he explains. “We are also looking at other respiratory applications of dark-field imaging.”

The post Dark-field X-ray imaging reveals potential of nanoparticle-delivered gene therapy appeared first on Physics World.

  •  

Sound and light waves combine to create advanced optical neural networks

By: No Author
6 May 2024, 14:00

One of the things that sets humans apart from machines is our ability to process the context of a situation and make intelligent decisions based on internal analysis and learned experiences.

Recent years have seen the development of new “smart” and artificially “intelligent” machine systems. While these do have intelligence based on analysing data and predicting outcomes, many intelligent machine networks struggle to contextualize information and tend to just create a general output that may or may not have situational context.

Whether we want to build machines that can make informed contextual decisions like humans can is an ethical debate for another day, but it turns out that neural networks can be equipped with recurrent feedback that allows them to process current inputs based on information from previous inputs. These so-called recurrent neural networks (RNNs) can contextualize, recognise and predict sequences of information (such as time signals and language) and have been used for numerous tasks including language, video and image processing.
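The recurrence at the heart of an RNN can be written in a few lines. This minimal sketch (generic, not tied to any particular network or library) shows how a hidden state carries information from earlier inputs forward, so that the final output depends on the order of the sequence:

```python
import numpy as np

def rnn(sequence, w_in=0.5, w_rec=0.5):
    """Minimal recurrent cell: h mixes the current input with a memory
    of all previous inputs via the recurrent weight w_rec."""
    h = 0.0
    for x in sequence:
        h = np.tanh(w_in * x + w_rec * h)
    return h

h_fwd = rnn([1.0, 2.0])
h_rev = rnn([2.0, 1.0])
# Same inputs, different order, different result: the network is contextual
print(round(h_fwd, 3), round(h_rev, 3))  # 0.843 0.707
```

A feed-forward network with no recurrent term (w_rec = 0) would give the same output for both orderings; the recurrence is what supplies the context.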

There’s now a lot of interest in transferring electronic neural networks into the optical domain, creating optical neural networks that can process large data volumes at high speeds with high energy efficiency. But while there’s been much progress in general optical neural networks, work on recurrent optical neural networks is still limited.

New optoelectronics required

Development of recurrent optical neural networks will require new optoelectronic devices with a short-term memory that’s programmable, computes optical inputs, minimizes noise and is scalable. In a recent study led by Birgit Stiller at the Max Planck Institute for the Science of Light, researchers demonstrated an optoacoustic recurrent operator (OREO) that meets these demands.

optoacoustic recurrent operator concept
OREO concept Information in an optical pulse is partially converted into an initial acoustic wave, which affects the second and third light–sound processing steps. (Courtesy: Stiller Research Group, MPL)

The acoustic waves in the OREO link subsequent optical pulses, capturing the information within them and using it to manipulate the next operations. The OREO is based on stimulated Brillouin–Mandelstam scattering, an interaction between optical waves and travelling sound waves; because sound travels far more slowly than light, the acoustic wave adds the latency needed to act as a short-term memory. This enables the OREO to contextualize a time-encoded stream of information using sound waves as memory – not only remembering previous operations but also manipulating the output of the current operation, much like an electronic RNN.

“I am very enthusiastic about the generation of sound waves by light waves and the manipulation of light by the means of acoustic waves,” says Stiller. “The fact that sound waves can create fabrication-less temporary structures that can be seen by light and can manipulate light in a hair-thin optical fibre is fascinating to me. Building a smart neural network based on this interaction of optical and acoustic waves motivated me to embark on this new research direction.”

Designed to function in any optical waveguide, including on-chip devices, the OREO controls the recurrent operation entirely optically. In contrast to previous approaches, it does not need an artificial reservoir that requires complex manufacturing processes. The all-optical control is performed on a pulse-by-pulse basis and offers a high degree of reconfigurability that can be used to implement a recurrent dropout (a technique used to prevent overfitting in neural networks) and perform pattern recognition of up to 27 different optical pulse patterns.

“We demonstrated for the first time that we can create sound waves via light for the purposes of optical neural networks,” Stiller tells Physics World. “It is a proof of concept of a new physical computation architecture based on the interaction and reciprocal creation of optical and acoustic waves in optical fibres. These sound waves are, for example, able to connect several subsequent photonic computation steps with each other, so they give a current calculation access to past knowledge.”

Looking to the future

The researchers conclude that they have, for the first time, combined the field of travelling acoustic waves with artificial neural networks, creating the first optoacoustic recurrent operator that connects information carried by subsequent optical data pulses.

These developments pave the way towards more intelligent optical neural networks that could be used to build a new range of computing architectures. While this research has brought an intelligent context to the optical neural networks, it could be further developed to create fundamental building blocks such as nonlinear activation functions and other optoacoustic operators.

“This demonstration is only the first step into a novel type of physical computation architecture based on combining light with travelling sound waves,” says Stiller. “We are looking into upscaling our proof of concepts, working on other light–sound building blocks and aiming to realise a larger optical processing structure mastered by acoustic waves.”

The research is published in Nature Communications.

The post Sound and light waves combine to create advanced optical neural networks appeared first on Physics World.

  •  

Ship-based atomic clock passes precision milestone

6 May 2024, 10:30

A new ultra-precise atomic clock outperforms existing microwave clocks in time-keeping and sturdiness under real-world conditions. The clock, made by a team of researchers from the California, US-based engineering firm Vector Atomic, exploits the precise frequencies of atomic transitions in iodine molecules and recently passed a three-week trial aboard a ship sailing around Hawaii.

Atomic clocks are the world’s most precise timekeeping devices, and they are essential to staples of modern life such as global positioning systems, telecommunications and data centres. The most common types of atomic clock used in these real-world applications were developed in the 1960s, and they work by measuring the frequency at which atoms oscillate between two energy states. They are often based on caesium atoms, which absorb and emit radiation at microwave frequencies as they oscillate, and the best of them are precise to within one second in six million years.

Clocks that absorb and emit at higher, visible, frequencies are even more precise, with timing errors of less than 1 second in 30 billion years. These optical atomic clocks are, however, much bulkier than their microwave counterparts, and their sensitivity to disturbances in their surroundings means they only work properly under well-controlled conditions.

Prototypes based on iodine

The Vector Atomic work, which the team describe in Nature, represents a step towards overturning these limitations. Led by Vector Atomic co-founder and study co-author Jamil Abo-Shaeer, the team developed three robust optical clock prototypes based on transitions in iodine molecules (I2). These transitions occur at wavelengths conveniently near those of routinely employed commercial frequency-doubled lasers, and the iodine itself is confined in a vapour cell, doing away with the need to cool atoms to extremely low temperatures or keep them in an ultrahigh vacuum. With a volume of around 30 litres, the clocks are also compact enough to fit on a tabletop.

While the precision of these prototype optical clocks lags behind that of the best lab-based versions, it is still 1000 times better than clocks of a similar size that ships currently use, says Abo-Shaeer. The prototype clocks are also 100 times more precise than existing microwave clocks of the same size.

Sea trials

The researchers tested their clocks aboard a Royal New Zealand Navy ship, HMNZS Aotearoa, during a three-week voyage around Hawaii. They found that the clocks performed almost as well as in the laboratory, despite the completely different conditions. Indeed, two of the larger devices recorded errors of less than 400 picoseconds (10⁻¹² seconds) over 24 hours.
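For scale, the timing figures quoted in this article can be converted into dimensionless fractional errors. This is back-of-envelope arithmetic, not a calculation from the paper, but it shows that 400 ps of drift per day at sea is comparable to the fractional precision of the best caesium microwave clocks:

```python
# Convert the article's quoted timing errors into fractional (dimensionless) errors
SECONDS_PER_YEAR = 365.25 * 24 * 3600

caesium = 1 / (6e6 * SECONDS_PER_YEAR)       # 1 s lost in six million years
optical_lab = 1 / (30e9 * SECONDS_PER_YEAR)  # 1 s in 30 billion years (best lab clocks)
shipboard = 400e-12 / (24 * 3600)            # 400 ps drift over one day at sea

print(f"{caesium:.1e} {optical_lab:.1e} {shipboard:.1e}")  # 5.3e-15 1.1e-18 4.6e-15
```

The shipboard prototypes thus sit a few orders of magnitude above the best laboratory optical clocks, but hold their own against the microwave standards they aim to replace.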

The team describe the prototypes as a “key building block” for upgrading the world’s timekeeping networks from the nanosecond to the picosecond regime. According to team member Jonathan Roslund, the goal is to build the world’s first fully integrated optical atomic clock with the same “form factor” as a microwave clock, and then demonstrate that it outperforms microwave clocks under real-world conditions.

“Iodine optical clocks are certainly not new,” he tells Physics World. “In fact, one of the very first optical clocks utilized iodine, but researchers moved onto more exotic atoms with better timekeeping properties. Iodine does have a number of attractive properties, however, for making a compact and simple portable optical clock.”

The most finicky parts of any atomic-clock system, Roslund explains, are the lasers, but iodine can rely on industrial-grade lasers operating at both 1064 nm and 1550 nm. “The vapour cell architecture we employ also uses no consumables and requires neither laser cooling nor a pre-stabilization cavity,” Roslund adds.

The next generation

After testing their first-generation clocks on HMNZS Aotearoa, the researchers developed a second-generation device that is 2.5 times more precise. With a volume of just 30 litres including the power supply and computer control, the upgraded version is now a commercial product called Evergreen-30. “We are also hard at work on a 5-litre version targeting the same performance, and an ultracompact 1-litre version,” Roslund reveals.

As well as travelling aboard ships, Roslund says these smaller clocks could have applications in airborne and space-based systems. They might also make a scientific impact: “We have just finished an exciting demonstration in collaboration with the University of Arizona, in which our Evergreen-30 clocks served as the timebase for a radio observatory in the Event Horizon Telescope Array, which is imaging distant supermassive black holes.”

The post Ship-based atomic clock passes precision milestone appeared first on Physics World.

  •  

Superfluid helium: the quantum curiosity that enables huge physics experiments

6 May 2024, 10:27
Jianqin Zhang with the beta elliptical cryomodule at the ESS superconducting linear accelerator
European Spallation Source Cryogenics engineer and test leader Jianqin Zhang inspects the first medium beta elliptical cryomodule to be installed at the ESS superconducting linear accelerator. Each cryomodule contains several superconducting radio-frequency cavities. (Courtesy: Ulrika Hammarlund/ESS)

The largest use of helium II is currently in particle accelerators. How is it used at these facilities?

Helium II has two main uses in particle accelerators. One is to cool superconducting electromagnets to temperatures below 2.2 K. These create the large magnetic fields that bend and focus particle beams. The conducting wires in these magnets are usually made from niobium–titanium, which becomes a superconductor below about 9 K. However, further cooling allows the magnets to support higher current densities and higher field strengths. As a result, almost all the magnets on the Large Hadron Collider (LHC) at CERN are cooled by helium II.

The second main use of helium II at accelerators is to cool superconducting radio-frequency (SRF) cavities, which are used to accelerate particles. These are made from niobium, which is a superconductor at temperatures below about 9 K. Again, these cavities perform much better at superfluid temperatures, where they use less energy to achieve the same acceleration.

An important benefit of using helium II to cool magnets and SRFs is the superfluid’s very high effective thermal conductivity. As well as making it very efficient at removing heat, the high effective conductivity means that helium does not boil in the bulk – unlike normal liquid helium. This confers a great advantage in cooling, particularly when it comes to SRF cavities, because the cavities are resonant devices and can be detuned by mechanical vibrations caused by boiling.

While CERN is currently the biggest user of helium II, it is also used at other accelerators worldwide. How will it be used at your institute, the European Spallation Source (ESS), which will be up and running next year?

Like existing spallation sources in the UK, US, Switzerland and Japan, the ESS will accelerate protons to very high energies in a linear accelerator. These protons will then strike a tungsten target, where neutrons will be created by the spallation (fragmentation) of the target nuclei. These neutrons will then be slowed down so that their de Broglie wavelengths are on par with the separations of atoms in solids and molecules. Such neutrons are ideal for experiments that explore the properties of matter.
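A quick back-of-envelope check (standard physics, not a figure from the interview) shows why slowed neutrons suit such experiments: a neutron thermalized to room temperature has a de Broglie wavelength of roughly an ångström, comparable to interatomic spacings in solids.

```python
import math

# de Broglie wavelength of a thermal neutron: lambda = h / sqrt(2 m E), E ~ kT
h = 6.626e-34   # Planck constant, J s
m = 1.675e-27   # neutron mass, kg
k = 1.381e-23   # Boltzmann constant, J/K
E = k * 293     # ~25 meV at room temperature

lam = h / math.sqrt(2 * m * E)
print(f"{lam * 1e10:.1f} angstroms")  # 1.8 angstroms
```

Higher-energy neutrons straight from the spallation target have wavelengths far too short to diffract usefully from atomic structures, hence the need for moderation.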

The ESS accelerator is about 400 m in length and 90% of the acceleration will be done by SRF cavities operating at 2 K. The superfluid is created by a helium refrigerator providing up to 3 kW of cooling at 2 K.

Other accelerator facilities that use superfluid cooling include the Thomas Jefferson Laboratory in the US and the European X-ray Free Electron Laser in Germany. A future International Linear Collider – a possible successor to the LHC – would also employ superfluid-cooled SRFs.

While superfluid-cooled magnets are used in particle accelerators, that was not their first application.

That’s right. They were first designed for use in the Tore Supra tokamak, which began operation in 1988 in France. It has since been upgraded and renamed WEST, which operates today. Tore Supra, like other tokamaks, used magnetic fields to confine a hot hydrogen plasma. The ultimate goal of researchers working on tokamaks is to develop a practical way to harness nuclear fusion as a source of energy.

John Weisend
John Weisend Accelerator engineer and author of a book that outlines the history of how helium II has revolutionized science. (Courtesy: ESS)

Tore Supra’s designers wanted to create longer-lasting plasma pulses and realized that this would not be possible using conventional magnets. They saw superfluid-cooled superconducting magnets as the way forward. The Tore Supra team worked out how to handle liquid helium, and they also developed a piece of technology called a cold compressor that would allow them to efficiently and reliably get down to 2 K. These two developments showed that it was possible to operate superfluid-cooled magnets.

Helium II has also been used in space. What was the first mission to be superfluid cooled?

The first real use of helium II in space was to cool a space telescope called the Infrared Astronomical Satellite (IRAS). This mission was launched in 1983 by the US, the Netherlands and the UK and it surveyed the entire sky at infrared wavelengths. The atmosphere absorbs infrared light, which is why the telescope was launched into space. Once in orbit, its sensors must be kept as cold as possible to detect low levels of infrared light.

This cooling was done using helium II, and mission designers had to overcome significant challenges such as how to vent helium vapour when it is mixed in with blobs of liquid in a low-gravity environment.

IRAS was a watershed mission in astronomy because nobody had so extensively observed the universe in these infrared wavelengths before. Astronomers could peer through dust clouds and see objects that had been invisible to other telescopes.

IRAS observed the universe for 300 days before its superfluid ran out, but a decade later NASA was able to transfer liquid helium in space. How was that done?

Yes, that was a project called Superfluid Helium On-Orbit Transfer (SHOOT), which carried superfluid helium onboard a Space Shuttle. The demonstration involved transferring superfluid from a full dewar to an empty dewar in microgravity. This was done using a pump that made use of the “fountain effect” in helium II.

How does the fountain effect work?

The effect can be understood in terms of the two fluid model, which describes helium II as having a superfluid component and a normal fluid component. These aren’t real physical phases within helium II, but rather provide a convenient way of understanding many of its mechanical and thermal properties.

The effect occurs when two regions of helium II are separated by a porous plug with micron-sized channels. If the helium II in one region is heated and the other region is cold, the superfluid component will move through the porous media towards the heater. This is possible because the superfluid component has zero viscosity and can move without resistance through the tiny channels – something that the normal fluid component cannot do.

Large Hadron Collider at CERN
Superfluid superuser The Large Hadron Collider at CERN is the world’s largest user of helium II. (Courtesy: Maximilien Brice/CERN)

In the heated region, some of the superfluid component will become normal. However, the normal component is viscous and cannot exit the warm region via the porous plug, so pressure builds up. This pressure can be used to pump helium II without the need for mechanical components.
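The pressure head described here can be made quantitative with London's thermomechanical relation, a standard result for helium II (not quoted in the interview):

```latex
% London's fountain-pressure relation: a temperature difference across
% the porous plug sustains a pressure difference.
\Delta p = \rho \, s \, \Delta T
```

where $\rho$ is the density of helium II, $s$ its specific entropy and $\Delta T$ the temperature difference across the plug. Because the superfluid component carries no entropy, heat flow and pressure gradients are locked together in this way.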

SHOOT was an important demonstration of how helium II could be transferred in space. However, researchers realized that it is more cost efficient to launch experiments with larger dewars and lower heat loads than to refill a dewar during a mission.

Helium II also has the ability to flow up the wall of a dewar, but despite its exotic properties a superfluid is relatively easy to handle in bulk. Why is that?

Research done in the 1970s and 80s showed that bulk helium II has essentially the same fluid mechanical properties as a conventional fluid – something that can also be explained by the two fluid model. When helium II flows, quantized vortices in the superfluid component interact with the viscosity of the normal fluid component. The result is that the bulk properties are the same as a conventional fluid.

This is tremendously helpful to engineers like me; I suppose we can be thankful that sometimes the universe is kind. The standard engineering rules that are used to design fluid-handling systems also apply to helium II – rules that help us choose components such as pipes, pumps and valves for a given system. The only instances when we need to consider the special properties of helium II are when we are transferring heat, using porous media or creating thin films of the superfluid.

There are several Nobel Prizes for Physics that were made possible by helium II cooling. Do you have a favourite?

For me it’s the 1996 prize, which went to David Lee, Douglas Osheroff and Robert Richardson for their discovery of superfluidity in helium-3. The superfluid that we have been talking about so far in this interview is helium-4, which is by far the most abundant isotope of the element. Helium-4 is a boson and bosonic atoms are able to condense into the lowest quantum energy state of the system, creating a superfluid.

Helium-3 atoms are not bosons, but fermions. These atoms cannot undergo Bose–Einstein condensation directly to create a superfluid. However, in the early 1970s Lee, Osheroff and Richardson showed that helium-3 can condense into a superfluid at the much lower temperature of 2.7 mK. The physical mechanism is similar to what occurs in superconductors, where at low temperatures fermionic electrons pair up. These “Cooper pairs” are bosons, so they can condense to create a superconductor in which the electrons flow without resistance.

Because of its magnetic properties, superfluid helium-3 is a much more complicated substance than superfluid helium-4. It has three different superfluid phases, rather than the one phase of helium-4.

What I like about this discovery is that the trio weren’t searching for superfluidity in their experiment. Instead, they were studying the properties of solid helium-3 at very low temperatures and high pressure. I really like the fact that they were looking for one thing and found something entirely different. Often, the most exciting scientific discoveries are made this way.


The post Superfluid helium: the quantum curiosity that enables huge physics experiments appeared first on Physics World.


Modified pulse tube refrigerator cuts cryogenic cooling times in half

5 mai 2024 à 14:29
NIST refrigerator animation
How it works: the bottom animation shows how the addition of an adjustable needle valve between the refrigerator and helium reservoir prevents the relief valve from being used. (Courtesy: S. Kelley/NIST)

A simple modification to a popular type of cryogenic cooler could save $30 million in global electricity consumption and enough cooling water to fill 5000 Olympic swimming pools. That is the claim of researchers at the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder who describe their energy-efficient design in Nature Communications.

Ryan Snodgrass and colleagues in the US have designed a new way to operate pulse tube refrigerators (PTRs), which compress and expand helium gas in a cooling cycle similar to that used in a household refrigerator. Developed in the 1980s, PTRs can now reach temperatures of just a few kelvin – below 4.2 K, the temperature at which helium becomes a liquid.

While PTRs are reliable and used widely in research and industry, they are very power hungry. When Snodgrass and team looked at why commercial PTRs consume so much energy, they found that the devices were designed to be efficient at their final operating temperature of about 4 K. At higher temperatures, the PTRs are much less efficient – and this is a problem because the cooling process begins at room temperature.

Easier repairs

As well as using lots of electricity to cool down, this inefficiency means that it can take a very long time to cool objects. For example, the Cryogenic Underground Observatory for Rare Events (CUORE) – which is looking for neutrinoless double beta decay deep under a mountain in Italy – is cooled to a preliminary 4 K by five PTRs in a process that takes 20 days. Reducing such long cooling times would make it easier and less costly to modify or repair cryogenic systems.

A careful study of the room-temperature operation of PTRs revealed that the helium gas is compressed to a very high pressure. This causes a relief valve to open, sending some of the helium back to the compressor. Less helium is therefore used for cooling, reducing the efficiency of the PTR.

Snodgrass and colleagues solved this problem by replacing the manufacturer-supplied needle valves in a PTR with customized needle valves that can be adjusted constantly. These needle valves control the flow of gas between the refrigerator and its helium reservoirs. They are normally set to optimize the operation of the PTR at cryogenic temperatures.

In the new operating protocol developed at NIST, the needle valves are open at room temperature. This allows gas to flow in and out of the reservoir, which moderates the pressure in the refrigerator. As the temperature drops, the valves are slowly closed – keeping the system at an ideal pressure throughout its operation.

The team found that the modification can boost the cooling rate of PTRs by 1.7–3.5 times. As well as making cooling quicker and more energy efficient, the new design could also be used to reduce the size or number of PTRs needed for specific applications. This could be very important for applications in space, where PTRs are already used to cool infrared instruments such as MIRI on the James Webb Space Telescope.


The post Modified pulse tube refrigerator cuts cryogenic cooling times in half appeared first on Physics World.


In real-world social networks, your enemy’s enemy is indeed your friend, say physicists

3 mai 2024 à 19:01

If you’ve ever tried to remain friends with both halves of a couple going through a nasty divorce, or hung out with a crowd of mutuals that also includes someone you can’t stand, you’ll know what an unbalanced social network feels like.

You’ll probably also sympathize with the 20th-century social psychologist Fritz Heider, who theorized that humans strive to avoid such awkward, unbalanced situations, and instead favour “balanced” networks that obey rules like “the friend of my friend is also my friend” and “the enemy of my enemy is my friend”.

But striving and favouring aren’t the same thing as achieving, and the question of whether real-world social networks exhibit balance has proved surprisingly hard to answer. Some studies suggest that they do. Others say they don’t. And annoyingly, some “null models” – that is, models used to assess the statistical significance of patterns observed in real networks – fail to identify balance even in artificial networks expressly designed to have it.

Two physicists at Northwestern University in the US now report that they’ve cracked this problem – and it turns out that Heider was right. Using data collected from two Bitcoin trading platforms, the tech news site Slashdot, a product review site called Epinions, and interactions between members of the US House of Representatives, István Kovács and Bingjie Hao showed that most social networks do indeed demonstrate strong balance. Their result, they say, could be a first step towards “understanding and potentially reducing polarization in social media” and might also have applications in brain connectivity and protein-protein interactions.

Positive and negative signs

Mathematically speaking, social networks look like groups of nodes (representing people) connected by lines or edges (representing the relationships between them). If two people have an unfriendly or distrustful relationship, the edge connecting their nodes carries a negative sign. Friendly or trustful relationships get a positive sign.

Under this system, the micro-network described by the statement “the enemy of my enemy is my friend” looks like a triangle made up of one negative edge connecting you to your enemy, another negative edge connecting your enemy to their enemy, and one positive edge connecting you to your enemy’s enemy. The total number of negative edges is even, so the network is balanced.
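This parity rule is easy to check in code: a signed triangle is balanced exactly when the product of its three edge signs is positive, i.e. when it contains an even number of negative edges. A minimal Python illustration of the rule (not code from the study itself):

```python
def is_balanced(edge_signs):
    """A signed triangle is balanced when the product of its three
    edge signs (+1 = friend, -1 = enemy) is positive, i.e. when it
    contains an even number of negative edges."""
    product = 1
    for sign in edge_signs:
        product *= sign
    return product > 0

# "The enemy of my enemy is my friend": two negative edges, one positive.
print(is_balanced([-1, -1, +1]))   # True: balanced

# One negative edge ("my friend's friend is my enemy"): unbalanced.
print(is_balanced([+1, +1, -1]))   # False: unbalanced
```

The same product-of-signs test extends to longer cycles in a signed network, which is how balance is counted in larger mini-networks.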

Complicating factors

While the same mathematical framework can be applied to networks of any size and complexity, real-world social networks contain a few wrinkles that are hard to capture in null models. One such wrinkle is that not everyone knows each other. If the enemy of your enemy lives overseas, for example, you might not even know they exist, never mind whether to count them as a friend. Another complicating factor is that some people are friendlier than others, so they will have more positive connections.

In their study, which they describe in Science Advances, Kovács and Hao created a new null model that preserves both the topology (that is, the structure of the connections) and the “signed node degree” (that is, the “friendliness” or otherwise of individual nodes) that characterize real-world networks. By comparing this model to three- and four-node mini-networks in their chosen datasets, they showed that real-world networks are indeed more balanced than would be expected based on the more accurate null model.

So the next time you have to choose between two squabbling friends, or decide whether to trust someone who dislikes the same people as you, take heart: you’re performing a simple mathematical operation, and the most likely outcome will be a social network with more balance. Problem solved!

The post In real-world social networks, your enemy’s enemy is indeed your friend, say physicists appeared first on Physics World.


Protecting phone screens with non-Newtonian fluids

3 mai 2024 à 14:21

New research shows that phones could be strengthened by adding a layer of material to the screen that fluidizes during an impact. In a paper published in PNAS, a team from the University of Edinburgh and Corning, a US-based materials company, developed a mathematical model of an object hitting a phone screen. Using modelling and experiments, the researchers identified the optimal fluid properties for this application. Their results show that fluids that become runnier during impact are most effective at protecting the screen.

Despite the development of toughened glass, a smashed phone screen is a commonplace annoyance. James Richards, a postdoc in Edinburgh who led the research, explains that the aim was to design a fluid-based alternative that would sit under the glass and absorb impacts.

The suspension of a car uses a piston moving through hydraulic fluid to absorb bumps in the road. The resistance of the fluid increases the faster the piston moves, which allows the system to adapt to large and small shocks.

In this project, instead of mechanical components, the screen would be protected by a layer of fluid, like a mattress sitting below the glass. To build a system that would adapt to different impacts, the researchers turned to a class of materials called non-Newtonian fluids, whose viscosity changes depending on the force applied. A mixture of cornflour and water is an example of a shear-thickening fluid because it becomes more viscous the harder it is hit. It is also possible to have shear-thinning fluids that become runnier under impact – an example of this is paint.
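The shear-rate dependence described above is commonly captured by the empirical power-law (Ostwald–de Waele) model, in which the apparent viscosity varies with shear rate as K·γ̇^(n−1): n > 1 gives shear thickening, n < 1 shear thinning, and n = 1 a Newtonian fluid. A short illustrative sketch (the parameter values are assumptions for demonstration, not from the paper):

```python
def apparent_viscosity(shear_rate, K, n):
    """Power-law (Ostwald-de Waele) model: mu = K * shear_rate**(n - 1).
    n > 1: shear-thickening (e.g. cornflour in water)
    n < 1: shear-thinning (e.g. paint)
    n = 1: Newtonian (viscosity independent of shear rate)."""
    return K * shear_rate ** (n - 1)

# Illustrative (assumed) parameters: raise the shear rate and compare.
for label, n in [("thickening", 1.5), ("Newtonian", 1.0), ("thinning", 0.5)]:
    low = apparent_viscosity(1.0, K=1.0, n=n)
    high = apparent_viscosity(4.0, K=1.0, n=n)
    print(f"{label}: viscosity goes from {low:.2f} to {high:.2f}")
```

With n = 0.5 the viscosity halves as the shear rate quadruples, which is the qualitative behaviour that turns out to protect the screen.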

Soaking Kevlar vests in shear-thickening fluid can make them more resistant to projectiles because the fabric can absorb the impact whilst remaining flexible when worn. As a result, Richards and colleagues suspected that a shear-thickening fluid could also be used to protect phone screen glass.

When an object exerts a force on a screen, the fluid resists the deformation, but the force on the glass itself depends on how much the screen has deformed. This feedback loop makes it difficult to predict how a given fluid will respond, particularly if the fluid is non-Newtonian. “The challenge here is we didn’t know where we were in a design space,” says Richards. “So we needed something much, much more general.”

The researchers wanted to perform an optimization that would test their theory that shear-thickening fluids are best at protecting the screen. This is challenging because the height of the bending screen varies continuously, so there are effectively an infinite number of variables to be optimized.

Simplified phone screen for design optimization

The team looked for a way to simplify the system whilst still capturing the essential physics. They identified that the problem would be a lot easier to solve if the screen was flat – meaning the height during impact would be the same everywhere. The quantity that determines whether the screen breaks would then just be the bending moment – the product of the diameter of the plate and the force on it.

The researchers argue that close to the impact, there will be some area of the plate that is effectively flat, with the size of this flat part becoming smaller the more the screen bends. By solving the equations of motion of the fluid under the plate, the researchers were able to reduce the problem of the flexible plate to a single flat plate whose diameter changes as it squeezes down.

With this simplified system, the team was able to factor in shear-thickening or shear-thinning fluid behaviour, allowing them to identify the fluid that minimized the bending moment. They were surprised to find that the optimal fluid was not shear-thickening but shear-thinning. “It turns out our initial thoughts were entirely wrong,” says Richards.

A tight squeeze causes an unexpected fluid response

They attribute this unexpected behaviour to the geometry of the system. During impact, the deformation of the screen squeezes the fluid through a smaller and smaller gap. It’s harder to push a shear-thickening fluid through a narrower space, so whilst it stops the impact, the glass experiences a large force. By contrast, if the fluid is shear-thinning, it will get easier to squeeze as the screen bends. This means the impact spreads out over a longer time, and provided the fluid never gets too runny, it is still possible to absorb the force whilst protecting the screen.

As proof of concept, the researchers tested transparent shear-thickening and shear-thinning fluids in an experiment that mimicked a phone screen. The fluid was sandwiched between a solid base and a sheet of glass, and the force on the glass was measured as a solid wedge pushed down on it. Their result confirms that the force on the glass increases more gradually during impact with the shear-thinning fluid, indicating that this class of fluids would be most effective as screen protectors.

The researchers say that one of their main motivations was to develop a shock absorber that could be used to build flexible phone screens. Their work establishes a framework for optimizing the squeezing of non-Newtonian fluids, and they believe it could have applications ranging from car windows to the study of how skin creams are applied.

The post Protecting phone screens with non-Newtonian fluids appeared first on Physics World.


China launches Chang’e-6 mission to return samples from the Moon’s far side

3 mai 2024 à 11:32

China has successfully launched a mission to bring back samples from the far side of the Moon – the first attempt to do so. Chang’e-6 was launched at 17:27 local time today by a Long March 5 rocket from the Wenchang Satellite Launch Center on Hainan Island. If the landing is successful, the craft is expected to collect and return to Earth up to two kilograms of soil from an area not previously sampled.

China has made considerable progress in lunar exploration in recent years, which began in 2007 with the launch of the lunar orbiter Chang’e-1.

Since then it has carried out four further uncrewed missions, including Chang’e-4, which in 2019 became the first mission to touch down on the far side of the Moon. That craft landed in the Von Kármán crater in the South Pole–Aitken Basin – one of the oldest known impact craters in the Solar System and one of the Moon’s most scientifically rich regions.

China’s previous lunar mission was Chang’e-5, which launched in November 2020, and successfully brought back 1.7 kg of samples from the near side of the Moon a month later, the first recovery of lunar samples in 45 years.

Most of the returned samples are stored at the National Astronomical Observatories of China, Chinese Academy of Sciences, in Beijing, with possible access by foreign scientists through collaboration with Chinese colleagues.

To the dark side

Chang’e-6 was built as a back-up for Chang’e-5, but following the success of that mission it was repurposed for its own assignment.

Weighing 8.2 tonnes, Chang’e-6 consists of four parts: an ascender, lander, returner and orbiter. Upon entering orbit around the Moon, the ascender and lander will separate and touch down in the southern part of the Apollo crater, which lies on the northeastern side of the South Pole–Aitken Basin.

The lander will use a panoramic camera, spectrometer and ground-penetrating radar, among other payloads, to document the landing site. Chang’e-6 will also carry payloads from France, Italy, Sweden and Pakistan, including an instrument to measure surface levels of radon.

Within 48 hours of touching down, Chang’e-6 will use a robotic arm to scoop up small rocks from the surface and drill up to 2 m into the ground, with the aim of collecting about 2 kg of material.

The ascender will lift off from the top of the lander and dock with the returner–orbiter in orbit. The sample container will then be transferred to the returner, which will head back to Earth.

A relay satellite – Queqiao-2 – was launched in March to support communications between Chang’e-6 and ground stations on Earth.

It is hoped that the returned samples will shed light on the early evolution of the Moon, given that the far side is not as extensively covered by ancient lava flows as the near side, which helps to preserve materials from the Moon’s early formation. It is also hoped that results from the mission will provide clues to why the two sides are so different.

China plans two further lunar missions with Chang’e-7 in 2026 that will explore the lunar south pole for water followed by Chang’e-8 in 2028 that will build a rudimentary outpost on the Moon in collaboration with Russia.

China then aims to put astronauts on the Moon by 2030, some four years after the US Artemis crewed mission to the Moon, which is currently planned for September 2026.

The post China launches Chang’e-6 mission to return samples from the Moon’s far side appeared first on Physics World.


Bilayer of ultracold atoms has just a 50 nm gap

Par : No Author
2 mai 2024 à 20:00

Two Bose-Einstein condensates (BECs) of magnetic atoms have been created just 50 nm apart from each other – giving physicists the first opportunity to study atomic interactions on this length scale. The work by physicists in the US could lead to studies of several interesting collective phenomena in quantum physics, and could even be useful in quantum computing.

First created in 1995, BECs have become invaluable tools for studying quantum physics. A BEC is a macroscopic entity comprising thousands of atoms that are described by a single quantum wavefunction. They are created by cooling a trapped cloud of bosonic atoms to a temperature so low that a large fraction of the atoms are in the lowest energy (ground) state of the system.

BECs should be ideal for studying the quantum physics of exotic, strongly interacting systems. However, to prolong the lifetime of a BEC, physicists need to keep it isolated from the outside world to prevent decoherence. This need for isolation makes it difficult to manoeuvre BECs close enough together for the interactions to be studied.

Pancake layers

In the new work, researchers at the Massachusetts Institute of Technology in the group of Wolfgang Ketterle (who shared the 2001 Nobel Prize for Physics for creating BECs) tackled this problem by creating a double-layer BEC of dysprosium atoms, with the two layers just 50 nm apart. To achieve this, the researchers had to keep two pancake-like condensate layers a constant distance apart using lasers with wavelengths more than ten times their separation. This would have been almost impossible using separate optical traps.

Instead, the researchers utilized the fact that dysprosium has a very large spin magnetic moment. They lifted the degeneracy of two electronic spin states using an applied magnetic field. Atoms with opposite spins coupled to light with slightly different frequencies and opposite polarizations. The researchers sent light at both frequencies down the same optical fibre onto the same mirror. Both beams formed standing waves in the cavity. “If the frequency of these two standing waves is slightly different, then at the position where we load this bilayer array, these two standing waves are going to slightly walk off,” says Li Du, who is lead author on the paper describing the research. “Therefore by tuning the frequency difference we’re able to tune the interlayer separation,” he adds.
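The walk-off Du describes can be estimated with a back-of-the-envelope formula: the nodes of a standing wave sit at multiples of λ/2 from the mirror, so at a distance x from the mirror the nodes of the two waves are offset by roughly x·Δf/f. A rough Python sketch (the 741 nm wavelength, 15 cm path and 135 MHz offset are illustrative assumptions, not figures from the paper):

```python
C = 299_792_458.0          # speed of light in vacuum, m/s

def layer_separation(x, f, df):
    """Approximate walk-off between the nodes of two standing waves
    at distance x (m) from the shared mirror, for carrier frequency
    f (Hz) and frequency difference df (Hz)."""
    return x * df / f

# Illustrative (assumed) numbers: a 741 nm trapping wavelength and
# a bilayer loaded 15 cm from the mirror.
f = C / 741e-9              # optical frequency, ~4.05e14 Hz
df = 135e6                  # 135 MHz offset between the two beams
print(layer_separation(0.15, f, df))   # ~5e-8 m, i.e. ~50 nm
```

The key point the sketch captures is that the spacing is set by a frequency difference rather than by mechanical positioning, which is why it can be tuned far below the optical wavelength.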

As both beams utilize the same optical fibre and the same mirror, they are robust to physical disturbance of these components. “Our scheme guarantees that you have two standing waves that can shake a little – or maybe a lot – but the shaking is a common mode, so the difference between the two layers is always fixed,” says Du.

Ringing atoms

The researchers heated one of the layers by about 2 μK and showed how the heat flowed across the vacuum gap to the other layer through the magnetic coupling of the atomic dipoles. Next they induced oscillations in the position of one layer and showed how these affected the position of the other layer: “We hit one layer with a hammer and we see that the other [layer] also starts to ring,” says Du.

The researchers now hope to use the platform to study how atoms closer together than one photon wavelength interact with light. “If the separation is much smaller than the wavelength of light, then the light can no longer tell [the atoms] apart,” says Du. “That potentially allows us to study a special effect called super-radiance.”

Beyond this, the researchers would like to investigate the work’s potential in quantum computing: “We would really like to implement a magnetic quantum gate purely driven by the magnetic dipole-dipole interaction,” he says. The same platform could also be used with BECs of molecules, which would open up the study of electric dipole–dipole interactions. Indeed, in late 2023, researchers at Columbia University in the US published a preprint that describes how they created a BEC of dipolar molecules. This preprint has yet to be peer reviewed.

Twisted graphene

Experimental atomic physicist Cheng Chin of the University of Chicago in Illinois, who last year collaborated with researchers at Shanxi University in China to produce a double layer of rubidium atoms to model twisted bilayer graphene, says that Ketterle and colleagues’ research is “very, very interesting”.

He adds, “This is the first time we’re able to prepare cold atom systems in two layers with such a small spacing…To control such a 2D system is hard but necessary in order to induce the interaction that’s required in two planes. It’s a very smart choice of atom because dysprosium has a very large dipole-dipole interaction. At a conventional spacing of half a micron, you wouldn’t be able to see any kind of coupling between the two layers, but 50 nm is just enough to show that the atoms in the two planes can really talk to each other.”

He suggests that follow-up work from both teams’ research could focus on simulating new and emergent phases of matter, such as superconducting bilayer graphene.

The research is described in Science.

The post Bilayer of ultracold atoms has just a 50 nm gap appeared first on Physics World.


Social media: making it work for physics-related businesses

2 mai 2024 à 15:55

Many physicists work for small-to-medium-sized companies that provide scientific instrumentation and services – and some have founded companies of their own. Such businesses can have limited resources for marketing and customer service, so using social media can be an efficient way to connect with existing users and attract new customers.

In this episode of the Physics World Weekly podcast, Alex Peroff and Neil Spinner of Pine Research Instrumentation explain how they use social media – including podcasts, videos, webinars and live chats – to get their message out.

From their base in Durham, North Carolina, the duo also share their top tips for getting the most out of social media.

Thyracont logo

This podcast is sponsored by Thyracont Vacuum Instruments, which provides all types of vacuum metrology for a broad variety of applications ranging from laboratory research to coating and the semiconductor industry. Explore their sensors, handheld vacuum meters, digital and analogue transducers as well as vacuum accessories and components at thyracont-vacuum.com.

The post Social media: making it work for physics-related businesses appeared first on Physics World.


Quantum Machines’ processor-based approach for quantum control

Par : No Author
2 mai 2024 à 15:13

This short video – filmed at the March Meeting of the American Physical Society in Minneapolis earlier in the year – features Itamar Sivan, chief executive and co-founder of Quantum Machines (QM). In the video, he explains how QM makes the control electronics of quantum computers – that is, the classical hardware that drives quantum processors.

Yonatan Cohen, chief technology officer and fellow co-founder, then outlines the firm’s evolution towards a processor-based approach for quantum control. As a result, QM has a unique processor that generates all the signals that communicate with the quantum processor, allowing the firm to build more scalable architectures while maintaining high performance.

Cohen explains that the key challenge for QM’s technology is to implement quantum error correction at scale – which is where the firm’s OPX1000 platform comes in. It is a scaled-up system with very high channel density, meaning it can control many qubits with a relatively small system – making it, the firm says, the most scalable control system on the market.

Cohen also discusses the importance to QM of hiring staff who combine expert knowledge with a passion for the technology, and explains how partnerships help QM maintain a competitive edge in the market. One such tie-up, with NVIDIA, allowed QM to create a link between its control system and the NVIDIA GPU–CPU platform – bringing more computing power to the heart of the quantum computer.

Sivan believes that within a couple of days of installing QM’s technology, customers can realize all the experiments they have conceived.

The post Quantum Machines’ processor-based approach for quantum control appeared first on Physics World.


Semiconductor substrate behaves ‘like the tail wagging the dog’, say scientists

2 mai 2024 à 14:00

The substrates on which semiconductor chips are grown are usually ignored, but they may be more important than we think. This is the finding of researchers in the US and Germany, who used high-energy X-rays to study titanium dioxide – a common substrate for insulator-to-metal semiconductors. The discovery that this material is far more than a passive platform could help scientists develop next-generation electronics.

Materials that switch from metal-like to insulating very quickly offer a promising route for developing super-fast electronic transistors. To this end, a team led by materials scientist and physicist Venkatraman Gopalan of Pennsylvania State University, US, began studying a leading candidate for such devices, vanadium dioxide (VO2). Vanadium dioxide is unusual in that its electrons are strongly correlated. This means that, unlike in silicon-based electronics, the repulsion between electrons cannot be ignored.

Crucially, though, the researchers did not look at the VO2 layer on its own. They also analysed how it interacts with the titanium dioxide (TiO2) substrate upon which it is grown. To their surprise, they found that the substrate contains an active layer that behaves just like the semiconductor when the VO2 switches between an insulating state and a metallic one.

Timed X-ray pulse

Gopalan and colleagues obtained their results by growing a very thin film of VO2 atop a thick TiO2 single crystal substrate. They then fabricated a device channel on the ensemble across which they could apply the voltage pulses that switch the semiconductor from insulating to conducting. During this switching, they applied high-energy X-ray pulses from the Advanced Photon Source (APS) at Argonne National Laboratory to the channel and observed the lattice planes of the semiconducting film and the substrate.

“The X-ray pulse was timed so that it could arrive before, at and after the electrical pulse so that we see what happens with time,” Gopalan explains. “It was also raster scanned across the channel to map what happens to the entire channel when the material switches from being an insulator to a metal.”

This technique, known as spatio-temporal X-ray diffraction microscopy, is good at revealing the behaviour of materials at the atomic level. In this case, it showed the researchers that the VO2 film bulges as it changes to a metal. This was unexpected: according to Gopalan, the material was supposed to shrink. “What is more, the substrate, which is usually thought to be electrically and mechanically passive, also bulges along with the VO2 film,” he says. “It is like the tail wagging the dog, and shows that a mechanism that was missed before is at play.”

Native oxygen vacancies are responsible

According to the researchers’ theoretical calculations and modelling, this mechanism involves atomic sites in the material lattice that are missing oxygen atoms. These native oxygen vacancies, as they are known, are present in both the semiconductor and substrate and they ionize and deionize in concert with the applied electric field.

“Neutral oxygen vacancies hold a charge of two electrons, which they can release when the material switches from an insulator to a metal,” Gopalan explains. “The oxygen vacancy left behind is now charged and swells up, leading to the observed swelling in the device. This can also happen in the substrate.”

The experiment itself was very challenging, Gopalan says. One of the X-ray beamlines at the APS had to be specially rigged and it took the team several years to complete the set-up. Then, he adds, “The results were so intriguing and unexpected that it took us several more years to analyse the data and come up with a theory to understand the results.”

According to Gopalan, there is tremendous interest in next-generation electronics based on correlated electronic materials such as VO2 that exhibit a fast insulator-to-metal transition. “While previous studies have analysed this material using various techniques, including X-rays, ours is the first to study a functioning device geometry under realistic conditions, while mapping its response in space and time,” he tells Physics World. “This study is unique in that respect, and it paid off in what it revealed.”

The researchers are now trying to understand the mechanisms behind the substrate’s surprising response, and they plan to revisit their experiment to this end. “We are thinking, for example, of intentionally adding ionizing defects that release electrons and trigger a metal-to-insulator transition when a voltage is applied,” Gopalan reveals.

The present study – which also involved collaborators at Cornell University and Georgia Tech in the US, and the Paul Drude Institute in Germany – is detailed in Advanced Materials.

The post Semiconductor substrate behaves ‘like the tail wagging the dog’, say scientists appeared first on Physics World.


Wigner crystal appears in bilayer graphene

2 mai 2024 à 10:30

Researchers at Princeton University in the US say they have made the first direct observation of a Wigner crystal – a structure consisting solely of electrons arranged in a lattice-like configuration. The finding, made by using scanning tunnelling microscopy to examine a material known as Bernal-stacked graphene, confirms a nearly century-old theory that electrons can assemble into a closely-packed lattice without having to orbit around an atom. The work could help scientists discover other phases of exotic matter in which electrons behave collectively.

Although electrons repel each other, at room temperature their kinetic energy is high enough to overcome this repulsion, so they flow together as electric currents. At ultralow temperatures, however, repulsive forces dominate, and electrons spontaneously crystallize into an ordered quantum phase of matter. This, at least, is what the physicist Eugene Wigner predicted 90 years ago. But while scientists have seen evidence of this type of crystalline lattice forming before (for example, in a one-dimensional carbon nanotube and in a quantum wire), it had never been observed directly.

A pristine sample of graphene

In the new work, which is detailed in Nature, researchers led by Princeton’s Ali Yazdani used a scanning tunnelling microscope (STM) to study electrons in a pristine sample of graphene (a sheet of carbon one atom thick). To keep the material as pure as possible, and so avoid the possibility of electron crystals forming in lattice defects or imperfections, they placed one sheet of graphene atop another in a configuration known as a bilayer Bernal stack.

Next, they cooled the sample down to just above absolute zero, which reduced the kinetic energy of the electrons. They also applied a magnetic field perpendicular to the sample’s layers, which suppresses kinetic energy still further by restricting the electrons’ possible orbits. The result was a two-dimensional gas of electrons located between the graphene layers, with a density the researchers could tune by applying a voltage across the sample.

Scanning tunnelling microscopy involves scanning a sharp metallic tip across a sample. When the tip passes over an electron, the particle tunnels through the gap between the sample surface and the tip, thereby creating an electric current. By measuring this current, researchers can determine the local density of electrons. Yazdani and colleagues found that when they increased this density, they observed a phase transition during which the electrons spontaneously assembled into an ordered triangular lattice structure – just as Wigner predicted.
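The extreme height sensitivity that makes STM work comes from the exponential dependence of the tunnelling current on the tip-sample gap. A minimal sketch of that dependence, using the simplest one-dimensional rectangular-barrier model (an illustration only – the function names and the 4.5 eV work function are assumptions, not values from the Princeton study):

```python
import math

# Physical constants (CODATA values)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # electron volt, J


def decay_constant(phi_ev: float) -> float:
    """Inverse decay length kappa (1/m) of the evanescent wavefunction
    inside a rectangular barrier of height phi_ev (in eV)."""
    return math.sqrt(2 * M_E * phi_ev * EV) / HBAR


def relative_current(d_m: float, phi_ev: float = 4.5) -> float:
    """Tunnelling current at gap d_m (metres), relative to zero gap:
    I(d)/I(0) = exp(-2 * kappa * d) in the 1D barrier model."""
    return math.exp(-2 * decay_constant(phi_ev) * d_m)


# Widening the gap by a single angstrom (1e-10 m) cuts the current
# by roughly an order of magnitude for a typical ~4.5 eV work function.
ratio = relative_current(1e-10) / relative_current(2e-10)
```

With these numbers `ratio` comes out near 9, i.e. each ångström of extra gap costs close to a factor of ten in current – the basis of STM’s ability to resolve individual electron sites.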

Forcing a lattice to form

The team explains that this spontaneous assembly is the natural outcome of a “battle” between the electrons’ increased density (which pushes them closer together) and their mutual repulsion (which pushes them apart). An organized lattice configuration – a Wigner crystal – is, in effect, a compromise that lets electrons maintain a degree of distance from each other even when their density is relatively high. If the density increases still further, this crystalline phase melts, producing a phase known as a fractional quantum Hall electron liquid as well as an anisotropic quantum fluid in which the electrons organize themselves into stripes.

By analysing the size of each electron site in the Wigner crystal, the researchers also found evidence for the crystal’s “zero-point” motion. This motion, which comes about because of the Heisenberg uncertainty principle, occupies a “remarkable” 30% of the lattice constant of a crystal site, Yazdani explains, and highlights the crystal’s quantum nature.

The Princeton team now aims to use this same STM technique to image a Wigner crystal made of “holes”, which are regions of positive charge where electrons are absent. “We also plan to image other types of electron solid phases, so-called skyrme crystals and ‘bubble phases’,” Yazdani says. “In addition to even more exotic phases such as quasiparticle Wigner crystals made of fractional charges, there is also the possibility to study how these quantum crystals would change in the presence of a net electrical current.”

The post Wigner crystal appears in bilayer graphene appeared first on Physics World.


Cryptic quantum-physics word search: the solution

Par : No Author
1 mai 2024 à 18:08

Word search answers
Answers

Wave range? A plum tide at sea (9) [AMPLITUDE]

Sequence of hobo sonata carries force (5) [BOSON]

Yell out “circle” immediately? Overpowered freezer (8) [CRYOSTAT]

Nineties served up “greatest physicist” candidate (8) [EINSTEIN]

Devotion to closeness (8) [FIDELITY]                                                                       

Hubbub after unwanted disturbances (5) [NOISE]                                                               

Line, we heard, for bishop with computers? A quantum of quantum (5) [QUBIT]

Inky creature is sensitive about fields (5) [SQUID]

Ill gent, nun in a bad way? It’s barrier breaking (10)  [TUNNELLING]

Single cat is smallest matter (4) [ATOM]

Odd chart reveals quantum victim, potentially (3) [CAT]                                                     

Policeman finds right account for electron equation chap (5) [DIRAC]

Power to get-up-and-go (6) [ENERGY]

Physicist namesake for fictional meth lord? Cooking begins here (10) [HEISENBERG]

Nobel winner recited isometric exercise (6) [PLANCK]

Australian quantum physicist brings short model to space mountain (7) [SIMMONS]

Situation report, clearly (5) [STATE]

Danish physicist sounds like a pig (4) [BOHR]

Run out after firm hand? Result of quantum measurement (8) [COLLAPSE]

Gee, it’s neat! Uncertain, when some things are predictable (10) [EIGENSTATE]

Iron soldier charged? Obeys exclusion principle (7) [FERMION]

Mr Munster gets a bearing on often overlooked German quantum pioneer (7) [HERMANN]

Take his head! Queen killed dispatcher for sending secret messages (3) [QKD]

Rushes backwards in pirouette (4) [SPIN]

Creamy pus? Doctor to have the upper hand (9) [SUPREMACY]

And the hidden phrase: “This year is the centenary of the first prediction of Bose-Einstein condensates”

The post Cryptic quantum-physics word search: the solution appeared first on Physics World.


Schrödinger’s cat makes a better qubit in critical regime

1 mai 2024 à 14:00

An English proverb states, “A cat has nine lives. For three he plays, for three he strays, and for the last three he stays.” In the quantum world, however, objects can be in a superposition of states simultaneously. Therefore, a quantum cat could exist in a superposition of playing, straying, and staying all at once.

Though the literal quantum cat might sound like science fiction, so-called “cat states” – semi-classical states that exhibit properties of quantum superpositions – are real. What is more, they could be central to the development of quantum computers, which are machines that leverage the power of quantum mechanics to solve problems. The challenge is finding ways to control them, and researchers at the EPFL in Switzerland recently made a breakthrough in this field. By optimizing a particular control parameter, they identified a way to make quantum bits (qubits) based on cat states much more resilient to certain types of errors.

A famous thought experiment

The concept of cat states was inspired by Erwin Schrödinger’s famous thought experiment in which a cat is both alive and dead until it is observed. The paradoxical picture represents a bridge between the microscopic quantum world of atoms and molecules and the macroscopic classical world.

Moving computing across that bridge would represent a paradigm shift in our computational capabilities, as large-scale quantum computers could solve problems beyond the reach of classical machines. Yet today’s quantum devices grapple with significant scalability challenges.

Central to the journey toward fully functional and scalable quantum hardware is the battle against noise, which compromises the reliability of quantum computations and necessitates sophisticated strategies for correcting noise-induced errors.

Schrödinger cat states: quantum superpositions as information carriers 

Schrödinger cat states stand out as a particularly promising way of combatting these errors. In many platforms, these cat states are created by superimposing a coherent (mostly classical) state of light that has a defined phase (say, the “alive” state) with another state of opposite phase (the “dead” state). While the coherent states encode the equivalent of 0 and 1 in classical logic, the power of quantum computing lies in the possibility of accessing any superposition of the two – that is, the cat states, which are the states of a quantum harmonic oscillator.
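In standard notation (a textbook definition, not spelled out in the article), the even and odd cat states are the normalized symmetric and antisymmetric superpositions of the two opposite-phase coherent states $|\alpha\rangle$ and $|-\alpha\rangle$:

```latex
\[
  |\mathcal{C}^{\pm}_{\alpha}\rangle
  = \frac{|\alpha\rangle \pm |-\alpha\rangle}
         {\sqrt{2\left(1 \pm e^{-2|\alpha|^{2}}\right)}} ,
\]
```

where the normalization accounts for the overlap $\langle\alpha|-\alpha\rangle = e^{-2|\alpha|^{2}}$, which vanishes rapidly as the “cat” grows – making the two logical states increasingly distinguishable.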

Diagram of a Bloch sphere
A Bloch sphere representation of a cat state operating in a metastable regime where the detuning of the driving field and non-linearity due to the dissipative phase transition compete. The competition produces a critical slowing down of photon injection, enhancing the system’s error suppression capabilities. (Courtesy: Luca Gravina et al., “Critical Schrödinger Cat Qubit”, PRX Quantum 4 020337 https://doi.org/10.1103/PRXQuantum.4.020337)

One factor that distinguishes qubits based on cat states from other proposals to encode quantum information is their intrinsic resilience to so-called bit-flip errors, which occur when the system passes randomly between the logical 0 and 1 states. Such passage can be envisioned as a pendulum, where the state 0 is to the left of the equilibrium point and state 1 is to the right. Implementing “cat codes” that exploit this resilience, however, poses several challenges, as it is difficult to generate cat states while maintaining compatibility with the operations needed to perform quantum computations (such as quantum gates and readout measurements).

Innovations in error suppression

In the recent work, Luca Gravina, Fabrizio Minganti and Vincenzo Savona of the EPFL’s Laboratory of Theoretical Physics of Nanosystems identified an additional and largely overlooked control parameter: the detuning, or difference in frequencies, between the force driving the “pendulum” and the resonant frequency of the quantum harmonic oscillator. This parameter has a drastic influence on cat qubit properties, and getting it right enhances the qubit’s resilience to bit-flip errors by several orders of magnitude.

The EPFL researchers thoroughly investigated the nature of this improvement in all operational regimes of the qubit. In particular, they tied it to the presence of dissipative criticality in the form of a first-order dissipative phase transition. In cat qubits, this criticality emerges in a scheme that combines a two-photon drive with a two-photon loss to stabilize the qubit’s operation.
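Schematically, this scheme corresponds to a driven-dissipative Kerr resonator (a standard model in this field, written here in the frame rotating at half the drive frequency; sign and factor conventions vary between papers):

```latex
\[
  \hat{H} = -\Delta\, \hat{a}^{\dagger}\hat{a}
            + \frac{K}{2}\, \hat{a}^{\dagger 2}\hat{a}^{2}
            + G\left(\hat{a}^{\dagger 2} + \hat{a}^{2}\right),
  \qquad
  \dot{\hat{\rho}} = -i\left[\hat{H},\hat{\rho}\right]
                     + \eta\, \mathcal{D}[\hat{a}^{2}]\hat{\rho},
\]
```

where $\Delta$ is the detuning highlighted by the EPFL team, $K$ the Kerr non-linearity, $G$ the two-photon drive amplitude, $\eta$ the two-photon loss rate and $\mathcal{D}$ the standard Lindblad dissipator.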

The researchers demonstrated that it is possible to access a peculiarly favourable regime of operation in these driven-dissipative non-linear resonators operated near the phase transition. The metastable nature of this encoding also makes it possible to draw parallels between quantum information and dissipative criticality, connecting the concepts of noise suppression in cat states and the spectral theory of Liouvillians, which is normally used to describe critical phenomena.

The EPFL team’s findings underscore that carefully tuning the various parameters (non-linearity, two-photon dissipation and detuning) that characterize the devices where cat states are generated can enhance the performance of cat codes beyond current levels. This, in turn, would pave the way towards the realization of scalable quantum devices.

In the future, Gravina and colleagues aspire to simulate how dissipation and higher-order non-linearities can serve as resources. As well as applying their findings to a variety of physical models, they expect to investigate the role of dissipative criticality and use the results to explore how different codes can achieve an analogous enhancement in performance.

The study is published in PRX Quantum.

The post Schrödinger’s cat makes a better qubit in critical regime appeared first on Physics World.


What lies beneath: unearthing the secret interior lives of planets

Par : No Author
1 mai 2024 à 12:00

Humanity has a remarkable drive for exploration. We have sent astronauts 384,400 kilometres out into space to walk on the Moon; delivered rovers and helicopters roughly 225 million kilometres away to survey Mars; and sent probes a whopping 24.3 billion kilometres out to the furthest reaches of our solar system. It is remarkable, then, that when it comes to our own home, we have literally only scratched the surface – the deepest hole ever dug reached less than 1% of the distance to the centre of the Earth.

The question of how we get to grips with the other 99% of what lies under our feet – not to mention beneath the surface of other worlds – is the subject of this sparkling new book, “What’s Hidden Inside Planets?” by Sabine Stanley, a physicist at Johns Hopkins University.

Starting with an imagined journey down to the centre of the Earth in a hi-tech travel capsule, Stanley explains how, even though we have only ever drilled about a third of the way through the crust, phenomena on the surface can be used to infer the structure of the rest of the planet. The seismic waves that follow an earthquake change speed and direction as they pass through the Earth, which tells us that the interior has distinct layers – the mantle, the liquid outer core and the solid inner core. In addition, diamonds found on the Earth’s surface can tell us about the hot, high-pressure conditions below the surface where they were formed.

Stanley’s focus soon sweeps out to explore the rest of the solar system. Though we can’t send probes to the centres of other planets, clues to their interior composition sometimes fall at our feet in the form of meteorites. These are remnants of the early solar system that tell us about the conditions in which the planets formed.

The book also explains why Venus is at least one planetary scientist’s bête noire given that it resists all the techniques used to investigate planetary interiors. The planet has an atmosphere that is opaque to remote optical observations and the extreme conditions on the surface make it incredibly challenging to operate seismometers.

Stanley also includes a spin through upcoming planetary science missions and what they might tell us – from the Mars Sample Return Mission, which could shine more light on the red planet’s geology, to the Jupiter Icy Moons Explorer, to various missions to study the surface and interior of Venus. She finishes with a reflection on the importance of looking after the Earth as our home.

The chapter I most enjoyed was “Curious planetary elements”, which explores the weird-and-wacky phenomena believed to occur on and within other worlds, from helium rain and metal volcanoes to exotic phases of water and diamond icebergs.

I was intrigued to encounter for the first time the term “precovery”, which is when fresh information on astronomical objects is found in archive data and images that predate the actual discovery. As Stanley notes, for example, “Pluto was officially discovered in 1930, but astronomers digging through archives since then have found evidence of its discovery going farther back, at least to 1914, and possibly to 1909.”

Stanley also takes the reader through one of my favourite episodes in the history of science, and the reason we have reached that aforementioned 1% down into the Earth. This was the space race’s geological counterpart, the contest to drill the deepest possible hole into the Earth. The US broke ground (both literally and metaphorically) in 1961 with “Project Mohole”, which aimed to collect samples from the Mohorovičić (Moho) discontinuity, the boundary between the crust and mantle identified some 50 years previously via its impact on the velocity of seismic waves. Beset by mismanagement, the endeavour was abandoned after its first phase, reaching just 183 metres beneath the ocean floor. In 1979 the Soviet Union picked up the gauntlet to bore, within a decade, to a depth of more than 12.2 kilometres; this is about a third of the way through the crust at the site on north-west Russia’s Kola Peninsula.

The strength of Stanley’s work lies in her engaging, conversational, almost conspiratorial writing style, which – amid a slew of running jokes, anecdotes and charming food-based metaphors – makes light work of considerable scientific ground that, in less deft hands, could easily have become a painful slog.

However, I feel the preface has far too much of the author’s personality and life history. Some of the introduction sets up later preoccupations – a family background in the restaurant business, for example, fits the conceit of comparing planets to soup, cake, pudding and fruit. However, other details venture too far into “Dear Diary” territory. Details of childhood friends, teachers, fictional idols and university mentors, for example, do little to advance the book’s theme and might have been better gently edited into the acknowledgements section instead.

My only other real criticism is that while the journey is engaging, the destination of the book isn’t entirely clear. The final chapter touches on how our home is unique, how there is no Earth 2.0 to retreat to amid the growing chaos of anthropogenic climate change. This is an important take-home message, but not one that the rest of the book feels like it was working towards. I cannot help but feel that a stronger through-line could have set up this conclusion to a more satisfying effect.

  • 2023 Johns Hopkins University Press 272pp £14/$16.95 pb

The post What lies beneath: unearthing the secret interior lives of planets appeared first on Physics World.
