
Ultrasensitive microfluidic chip helps predict whether lung cancer treatment is working

5 March 2024 at 10:30

Lung cancer treatments are complex, lengthy and can cause debilitating side effects. And because a treatment may not be effective for a particular patient, it is important to learn early on whether or not it is working, so that alternative cancer-fighting tools can be employed where appropriate. Analysis of blood samples using liquid biopsy can monitor cancer cell destruction as early as four weeks into treatment, but current approaches are ineffective for non-small cell lung cancer (NSCLC).

This may change with the development of a novel ultrasensitive graphene oxide (GO) microfluidic chip by researchers at the University of Michigan. The team’s “GO chip” can isolate and capture enough circulating tumour cells (CTCs) in the blood of NSCLC patients for laboratory analysis. CTCs are early signs of metastasis and strong predictors of future progression, but their low concentration in the blood – 10 to 100 CTCs/ml, compared with a background of 10⁸ white blood cells – makes their detection and analysis challenging.
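To put that rarity in perspective, a rough back-of-the-envelope ratio using the figures quoted above (an illustration, not a number taken from the paper) is

```latex
\frac{N_{\mathrm{CTC}}}{N_{\mathrm{WBC}}} \approx \frac{10\text{--}100}{10^{8}} \approx 10^{-7}\text{--}10^{-6},
```

i.e. roughly one tumour cell for every million to ten million white blood cells in each millilitre of blood, which is why an enrichment step such as the GO chip is needed before any laboratory analysis.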

Current tests for detecting CTCs in blood capture a single type of protein on the cell surface that’s infrequent in lung cancers. Instead, the GO chip is designed to capture a wider range of surface proteins specific to lung tumours. The chip uses immunoaffinity – antibodies raised against these CTC surface proteins – to capture and immobilize the cells on functionalized gold nanoposts as blood is pushed through the chip’s channels during testing.

Principal investigators Sunitha Nagrath and Shruti Jolly and their colleagues investigated the GO chip’s effectiveness at measuring CTCs in 26 patients with stage III NSCLC. They recorded measurements throughout the patients’ treatment cycles – which comprised chemoradiotherapy followed by 12 months of anti-PD-L1 immunotherapy – following up the patients for an average of 21 months. They also performed molecular characterization to develop CTC signatures that predict which patients have a shorter or longer progression-free survival.

For the study, the researchers analysed blood samples at six time points: pre-treatment, during and after chemoradiotherapy (weeks 1, 4 and 10) and during immunotherapy (weeks 18 and 30). They used two GO chips for each patient blood sample, one for CTC analysis and the other for RNA extraction of captured CTCs.

Writing in Cell Reports, Nagrath and Jolly report that CTCs decreased significantly during treatment for all patients. They observed a significant decrease in CTC counts from baseline following the first chemoradiation treatment, as well as at week 10 with the commencement of immunotherapy and at week 30. However, there was no correlation between the gross tumour volume and absolute CTC quantity throughout the duration of the blood testing.

Patients with a decrease in CTCs of less than 75% between baseline and week 4 had a shorter progression-free survival, with a mean time before progression of seven months. Patients with a decrease of 75% or greater were progression-free throughout the 21-month long study. This finding demonstrates the potential for using CTCs as an early predictor of progression.
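As a minimal illustration of how that cut-off could be applied to a pair of counts (a hypothetical helper for readers, not code from the study, whose statistical analysis is more involved):

```python
def fractional_ctc_decrease(baseline: float, week4: float) -> float:
    """Fractional decrease in CTC count between baseline and week 4."""
    if baseline <= 0:
        raise ValueError("baseline CTC count must be positive")
    return (baseline - week4) / baseline

# Example: a drop from 40 to 8 CTCs/ml is an 80% decrease, which clears the
# 75% threshold that the study associates with longer progression-free survival.
drop = fractional_ctc_decrease(baseline=40, week4=8)
print(f"{drop:.0%} decrease; meets 75% threshold: {drop >= 0.75}")
```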

The researchers also conducted mRNA analysis using microarrays to compare gene expression. Gene expression changes over time indicated that CTCs shed by tumours that survive chemoradiation may be more metastatic and more aggressive.

“We have now shown that CTCs are potential biomarkers to monitor and predict patient outcomes in patients with stage III NSCLC,” says Jolly in a press statement. “Currently, there is typically a wait of weeks to months before we can fully assess the effectiveness of cancer treatment. However, with this chip, we may be able to sidestep prolonged, ineffective therapy and quickly pivot to alternatives, thus saving patients from needless side effects. This technique has the potential to shift cancer diagnostics, moving from a delayed single assessment to a more continuous surveillance, and facilitating the delivery of personalized cancer treatment.”

Nagrath tells Physics World that the study results could also be relevant to other solid tumours. “We are currently testing the utility of GO chip in other solid tumours, such as bladder cancer, to identify the more aggressive disease,” she says. “We are also planning to conduct a larger cohort study in the future to confirm our initial findings.”

The post Ultrasensitive microfluidic chip helps predict whether lung cancer treatment is working appeared first on Physics World.

  •  

Radio astronomy: from amateur roots to worldwide groups

5 March 2024 at 12:00

I’ve been thinking a lot about my identity recently. When someone asks me what I do, I describe myself as a radio astronomer, or a cosmologist, or an astrophysicist – depending on my mood and who I’m talking to. But I’ve never really felt I fully belonged in any of these options. It seemed to me that my pursuit of the first stars using radio data did not quite fit with cosmologists’ tense discussions of the inflationary paradigms and dark energy. Similarly, when visiting radio telescopes, the jargon of “receivers” and “gains” flowed over my head.

“Radio astronomer” is a curious phrase, as one rarely hears scientists attach themselves so closely to any other wavelength. I’ve never heard the phrase “gamma-ray astronomer”, for example. But having visited groups of amateur radio astronomers over the last year, I realized that I do not yet have the skills to call myself a true “radio astronomer”. The label is a badge of honour one cannot earn simply by using data taken by radio telescopes.

Row of large radio telescopes at sunset
The modern face The Karl G Jansky Very Large Array (VLA) in New Mexico, US, was built between 1973 and 1981. Its 28 radio telescopes, each with a 25 m dish, are arranged in a Y-shaped interferometer. (Courtesy: Bettymaya Foott, NRAO/AUI/NSF)

I am an active member of the Square Kilometre Array Observatory (SKAO), an international radio telescope that is currently under construction in South Africa and Western Australia. Although the project’s headquarters are at Jodrell Bank Observatory in the UK, the SKAO is a global project with partnerships stretching from Australia, China, Italy and the Netherlands to Portugal, South Africa, Spain, Switzerland and the UK.

Astronomer versus engineer

According to astrophysicist Philip Diamond, director-general of the SKAO, the project’s calls and meetings often span some 20 time zones. With such a global and populous observatory, it is unsurprising that many of the people who project manage the SKA are from business backgrounds. Diamond half-jokingly quipped once that some will never have even touched a telescope. But that’s not a bad thing – they are not there for their love of the stars. They are there because they know how to keep complex companies thriving, so that the end user (like myself) has great quality data flowing to them on time.

Diamond has undoubtedly earned the badge of “radio astronomer” – indeed, his PhD is in the subject, and his career has found him working at most of the major radio facilities in the world. Talking to him, it is clear he loves the bare bones of the instruments as much as the science they enable. Lower down the hierarchy, not everyone is as broadly situated. There is an explicit split between astronomers and engineers, with only a few exceptions.

The two consortia, engineering and science, even have separate conferences, though I don’t think anyone would test your soldering skills to grant you entry to the engineering meet-up. While I did attend one engineering conference many years ago, I sit firmly in the science camp and I can tell you: sometimes that split feels more like a chasm. The engineers bemoan the scientists who ask for too much and who do not understand the limits of the technology. Meanwhile, at the science conferences, the scientists loudly despair of any antenna changes that diminish their own science goals, complaining that the engineers do not understand the science potential going down the drain.

These conversations are not unique to the SKA, but they are pronounced because the size of the collaboration is so big. The vast majority of researchers involved are based at their universities and companies across the globe, not in one location where they might have the chance to meet and lessen the tribalism.

In many ways, we are seeing radio astronomy returning to its roots, which began with an uneasy marriage between astronomy and electrical engineering. It took time for scientists in those two fields to learn to cohabit and teach their academic offspring – but eventually universities produced ready-rolled radio astronomers who created the great radio facilities of the 1960s and beyond.

Recreational roots

Radio astronomy was pioneered by Bell Labs engineer Karl Jansky and the British scientists James Stanley Hey and Bernard Lovell (see boxes below). Their first discoveries were possible only thanks to electrical engineers, astronomers and amateurs working together. But with large-scale radio astronomy increasingly becoming a collaboration between two stark specialisms – engineers on the one hand and scientists on the other – what about the jack-of-all-trades amateurs? Is there still room for the group who played such a vital role in the genesis of the field?

Karl Jansky: the engineer

Two black and white photos: a man in an office and a large metal structure on wheels
The engineer Karl Jansky (pictured left in the 1930s) built a rotating antenna (right) to get all-sky coverage at a frequency of 20.5 MHz. With “Jansky’s Merry-go-round” he picked up thunderstorms and a strange hiss that moved throughout the day. (Courtesy: NRAO/AUI/NSF)

In 1928 Karl Jansky was an engineer with Bell Labs in the US, where his job was to reduce the annoying crackles on the new transatlantic radio telephone service that cost $25 a minute ($400 today). Most of the noise he found was due to local disturbances – such as lightning storms – but there was a lesser, continuous hiss in his headphones that he couldn’t place. Putting his engineering skills to good use, Jansky built his “Merry-go-Round”, a 30-m-wide arrangement of rectangular loops of wire that together acted as an antenna, all placed on repurposed Ford Model T wheels. This was during the Great Depression, after all, and money was scarce.

There followed a frustrating year in which Jansky chased the hiss across the sky, at first convinced it was coming from the Sun. But by 1932 he eventually realized that the true source was the centre of our galaxy. Jansky didn’t come to this conclusion alone. Realization dawned only when an astronomer colleague suggested plotting data from the whole year together, and a daily shift of four minutes resolved itself: exactly the shift expected for sources that keep sidereal time (time determined by the apparent daily motions of the stars), as objects outside the solar system do. Unfortunately, as Bell Labs was not involved in radio astronomy, Jansky did not pursue this discovery – but his research was taken forward by amateur astronomer Grote Reber.
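That four-minute drift is easy to check: the Earth makes one extra rotation relative to the stars over the course of a year, so a sidereal day is shorter than the 24-hour solar day by roughly

```latex
\Delta t \approx \frac{24\ \mathrm{h}}{366.25} \approx 3.9\ \mathrm{min},
```

which is the daily shift that emerged when the year’s worth of data were plotted together.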

Grote Reber: the first radio astronomer

Black and white photo of a man stood in front of a radio telescope
The first radio astronomer Grote Reber’s self-built telescope is widely considered to be the world’s first radio telescope. It was originally constructed in 1938 in the backyard of his home in Wheaton, Illinois. When he went to work for the National Radio Astronomy Observatory in the 1960s he moved his telescope and its receiver tower to Green Bank in West Virginia. (Courtesy: NRAO/AUI/NSF)

For a number of years after Jansky’s 1932 discovery, there was one radio astronomer in the entire world, and he was an amateur with a reputation for the eccentric. Grote Reber, a young US engineer who worked for a radio equipment manufacturer in Chicago, had devoured Jansky’s pre-war literature and contacted various academic departments asking when they would act on this clearly important discovery. He repeatedly got the brush-off and eventually, bored with the disrespect from professional astronomers, in 1936 he decided to build a radio telescope in his mother’s back garden.

Using his radio engineer skills, Reber worked out the best shape for the dish (a parabola that would act as the blueprint to most future radio dishes). He then took a summer off work and a year’s worth of salary out of the bank and built a 9.6 m dish. The neighbours feared it could change the weather, pilots rerouted to avoid it, and school children used it as a climbing frame when he wasn’t looking.

Reber, undeterred, first confirmed Jansky’s experiments and then mapped the whole radio sky in the early 1940s, discovering the first radio galaxy, Cygnus A. He also made some of the first solar radio measurements, while professional astronomers were still just waking up to the potential of radio astronomy following the declassification of documents after the Second World War. As Reber’s results (and later those of James Stanley Hey and Bernard Lovell) became more widely known, there was a rush to observe the radio sky.

Those with a physics background could make the equipment but didn’t have a clue what they were detecting. Meanwhile, astronomers knew what they wanted to look at, but couldn’t understand the electrical engineering. In these first years, academics could offer only half the skills for a true radio astronomer: they could understand the experiment or they could understand the results. Reber seemed to be the only person who could do both. Alone, in his mother’s garden, Reber was the first radio astronomer, amateur or professional, and remained so for over a decade.

James Stanley Hey: the teacher

Black and white photograph of a man in a suit outside a large house
The teacher James Stanley Hey in 1958 at Meudon House in England. (Photograph by Leo Goldberg, courtesy AIP Emilio Segrè Visual Archives)

In 1942 the radar defence network of Britain’s Royal Air Force (RAF) failed for a nail-biting two days. Physicist James Stanley Hey found himself in charge of working out why the failure occurred. He had been drafted out of teaching physics at Burnley Grammar School in Lancashire at the start of the Second World War, when he joined the Army Operations Research Group. Hey had been given a cursory radio-engineering briefing and put in charge of a team responsible for improving the radar for anti-aircraft guns. By cross-referencing the timing and extent to which each radar station suffered a blackout, Hey worked out that the source of the radar failure was the Sun.

Had he been an astronomer, Hey would have been perplexed, as most astronomers at the time knew that attempts to detect solar radio waves had only ever failed. Even Thomas Edison had not succeeded. As a physics teacher, however, Hey had no such preconceptions, and readily admitted his own ignorance. He even went as far as calling the Royal Greenwich Observatory to ask if anything was wrong with the Sun. It turned out there was, as the astronomers at Greenwich confirmed. Indeed, Hey found out that during the exact window when the radar stations found themselves overwhelmed with noise, a monstrous sunspot had bloomed across the surface of the Sun.

At the time, the RAF must have been pleased the source wasn’t a new German jamming technology and thankful that there had not been a raid while the defences were blind. After the war, with his work declassified, Hey began to give talks, but the astronomy community was not kind. Who was this man, a teacher no less, to tell them the Sun emitted radio waves? Ridiculous!

Luckily, his vindication came quickly when, in 1946, another mammoth sunspot crossed the solar disc and produced the same interference. At this point, radio astronomy was established as a serious profession across the world, and Hey and other physicists (including Bernard Lovell) scavenged disused wartime radar equipment and constructed their own listening devices. This time, though, they were pointed not at enemy planes, but at the stars.

Bernard Lovell: the physicist

Two black and white photos: a man in a suit and a large telescope under construction
The physicist Bernard Lovell used remote fields owned by the University of Manchester at Jodrell Bank to set up radar equipment leftover from wartime. He later chose this site for construction of the Mark I Telescope, now renamed the Lovell Telescope. (Courtesy: Jodrell Bank Centre for Astrophysics, University of Manchester)

When the Second World War began in 1939, Bernard Lovell was a researcher at the University of Manchester, UK, where he was visualizing the tracks of ionizing particles through vapour in a cloud chamber. Lovell had been drafted in to develop portable radar units, but they were suffering from a pesky source of interference. Eventually the false signals were attributed to showers of particles interacting with the ionosphere and emitting radio waves – a fortuitous discovery for Lovell. Having struggled with table-top cloud chambers, he realized he could rely on the Earth’s atmosphere as both particle accelerator and cloud chamber.

After the war, Lovell and others – including his wartime colleague James Hey – “rescued” some disused radar equipment and set it up in the fields of a small outpost of the University of Manchester, at Jodrell Bank. The quiet location should have meant he heard the ping of the radar picking up the trails of a particle shower once an hour. But, to his surprise, he heard a cacophony. Hey suggested that Lovell’s signals could instead be due to the entry of a space rock into the Earth’s atmosphere. The ionized trails left behind by these meteors would reflect radio signals, giving away their position.

Lovell, not in any way qualified to think about meteors, quickly found out that professional astronomers had neither the time nor the inclination to use their precious telescopes to study them either. They left that business to the amateurs. And so it was that Lovell convinced Manning Prentice – solicitor by day, amateur astronomer by night – to join him at Jodrell Bank during the next big meteor shower. Prentice would lie back in his deckchair and shout when and where he saw a meteor. Each time, Lovell would turn the radar equipment in that direction and shout if there were pings on the radar screen.

It became quickly apparent that Lovell had indeed been recording meteor showers. Cloud chambers and particle physics now forgotten, Lovell began raising money to build the Mark I Telescope at Jodrell Bank (later renamed the Lovell Telescope) and began down the path to becoming one of the greatest radio astronomers of the 20th century. All it took was lessons from an amateur.

The word “amateur” has two common meanings: “one who engages in a pursuit, study, science or sport as a pastime rather than as a profession” and “one lacking in experience and competence in an art or science”. From gardening to DIY, there are many skills at which I am both unpaid and incompetent, and so it must go deeper than that. Indeed, the Latin root of the word is amator, meaning “lover”. Literally, to be an amateur in a pursuit is to love it, to have a passion for it.

It turns out I had been unfairly judging those who engage in amateur hobbies, not least in the field I thought I knew better than anyone: radio astronomy. Amateur astronomers might not be paid, or produce high-profile academic papers, but the pings of a meteor and the hiss of the Milky Way in their headphones make them beam with joy.

Upon searching for a modern equivalent of the pioneering US amateur radio astronomer Grote Reber (see box above), I came across numerous associations of amateur radio-astronomy clubs observing everything from the galactic spiral arms to, astonishingly, pulsars. Upon speaking with a few – including the British Amateur Astronomy Radio Astronomy Group, the Lincoln Amateur Astronomy Club, and the Sutton and Mansfield Amateur Astronomy Club – I realized that nowhere do I feel more like an amateur than in an amateur astronomy club.

Row of several telescope dishes on a lawn in front of a low stone building
Passion project Amateur radio astronomer Laurence Newell is currently building an observatory he calls “Area Fifty One and Three Quarters” in Suffolk, UK, as a retirement project. The observatory comprises several donated dishes in various states of construction. They include two fully steerable 4 m dishes (which can, with effort, be used for pulsar reception) and two 3 m dishes that act as an interferometer at 1420 MHz. Newell is also developing a receiver for the Schumann resonances. (Courtesy: Dr Laurence Newell)

Indeed, when I meet such groups, I must seem such a disappointment to the members; not that I am ever made to feel that way by them. The optical astronomers resident at these clubs usually do well to recover from their shocked pauses after I admit I don’t know what planet, constellation or star they are pointing at, while the radio amateurs politely try to get past my lack of experience in building or maintaining radio telescopes.

Dishes adorn roofs, lines of wire stretch across posts and antennas of all shapes point towards the sky. The technology is so simple and familiar looking that it is easy to assume those in the sheds are just trying to tap into a free radio or TV service. I, though, jump in excitement when I see antennas shaped to pick up the storms of Jupiter or measure incoming solar flares.

The people who voluntarily maintain these telescopes are most often retired men who used to work in fields such as electrical engineering or radar science. They are experts at terrestrial radio technology who, after retirement, turned their devices to look up – either for the pure challenge or, truly, because their doctors told them they should no longer be carrying their hulking optical tubes along dark, icy fields.

There are still plenty of professional radio astronomers with knowledge of their antennas bordering on the level of horse-whispering – but I have met them mostly at the older, smaller telescopes and less frequently among my generation of academics. In large collaborations, radio astronomers like this are rare these days, due to a necessity of scale. In my view, that’s a loss. It was in the amateur groups’ cold, run-down sheds that I rediscovered the spirit of radio astronomy. Here were the true radio astronomers, amateur or not.

Two photos: a radio beacon and a man sat at a desk looking at several monitors of data
Citizen science The UK Meteor Beacon is a citizen science project to build a system to study meteors and the ionosphere. It comprises a beacon near Nottingham (left) and four receivers across the UK. In a collaboration between amateur radio and radio astronomy, both the Radio Society of Great Britain and the British Astronomical Association have contributed to costs, while volunteers run the project. Nigel “Sparky” Cunnington (right) is able to look at the traces of detected meteors at the Radio Astronomy Centre at Sherwood Observatory, UK, where he is radio astronomy co-ordinator. (CC BY Phil Randall, with additional information from Brian Coleman).

History looms large over the SKA headquarters at the Jodrell Bank Observatory, sitting as it does in the shadow of the iconic Lovell telescope. This 76 m dish was the largest steerable radio dish in the world when it was constructed in 1957, and the phenomenal feat of its construction means only two telescopes have surpassed it since (at Effelsberg in Germany and the Green Bank Telescope in West Virginia, US).

Large-scale radio-telescope arrays, such as the SKA, are the vital next step to gather light over larger areas. Indeed, the SKA is an interferometer, one part of which comprises 130,000 antennas in the Western Australian desert, linked so that incoming long-wavelength radio waves “see” a giant collecting area that circumvents the mechanical-engineering constraints of a physical dish.
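The back-of-the-envelope relation behind this approach (standard textbook optics rather than anything specific to the SKA design documents) is the diffraction-limited angular resolution

```latex
\theta \approx \frac{\lambda}{D},
```

where λ is the observing wavelength and D is the dish diameter for a single antenna, or the longest baseline for an interferometer. Linking antennas across kilometres therefore buys angular resolution that no mechanically steerable dish could match, while the collecting area is simply the sum of the individual elements.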

A singular dish is easy to anthropomorphize and love; I suspect that an array of 130,000 antennas is less likely to induce as much love and loyalty. Perhaps one will develop a liking for antenna 118,456, which always seems to go cheekily offline on a Tuesday, but it will be the data engineer who chuckles. The astronomer will probably never know.

Large area of desert with several circular groups of hundreds of small antennas
Future astronomy Artist’s impression of planned SKA-Low stations in Murchison, Western Australia. This array will comprise 131,072 low-frequency antennas, each 2 m high, grouped into 512 stations. The components will be built all over the world. (Copyright: DISR)

Rogue radio astronomer

This lack of consolidated knowledge is a cause for concern for some radio astronomers, who know how important it is to understand how data are collected. I found one such astronomer in the physics department at the University of California, Berkeley, US. As the director of its Radio Astronomy Laboratory, Aaron Parsons has made major contributions to my research field of the first stars, heading a collaboration of scientists in their quest for radio signals from the early universe. For me, touring his lab was a magical experience. I darted around, lifting sheets of metal and admiring different antennas while listening in rapture as Parsons spoke about each piece, as though he were a passionate curator of art.

Aaron Parsons is now what I like to think of as a rogue radio astronomer, turning his back on the evolution of the field towards global collaboration

Parsons freely expresses his concern – bordering on cynicism – regarding large collaborations, because of the natural split in expertise that efficiency dictates. Indeed, he is now what I like to think of as a rogue radio astronomer, turning his back on the evolution of the field towards global collaboration. He even spends his holidays camping alone or with his son in isolated parts of the US, looking for the perfect canyon across which to hang his newest, handmade antenna.

The ingenuity of his solo collaboration is overtly reminiscent of Reber and Lovell. Parsons builds his own instruments, always keeping in mind how he expects the data will look. He tells me he would struggle to trust any other scientist’s analysis, unless they have built the antennas themselves. One must know the instrument to know its effect on the data, more so than ever when the tiniest cosmological signal can be washed out by modelling an antenna effect incorrectly.

As we now enter an era of immense interferometry, we risk unpicking the tight marriage between electrical engineering and astronomy. Indeed, the knowledge required to show expertise in any one aspect is now too great for one person, or even one PhD training programme. The happiness of any ongoing relationship relies on spending time together and communicating openly. Large observatories such as the SKA will thrive only with the scientists and engineers exchanging knowledge and respecting each other’s expertise and love for their craft. One without the other is as good as nothing at all.

In some ways, true radio astronomers are a dying breed. They are found mainly at smaller telescopes or in amateur clubs; it’s potter for pleasure, not publish or perish. I understand why large collaborations need a clear split between engineers and astronomers, but both sides need to learn a little of the other’s language so that the essential marriage of minds does not falter. Your local amateur astronomy club could very well be the best place to do just that.

The post Radio astronomy: from amateur roots to worldwide groups appeared first on Physics World.

  •  

The physics behind ‘fractal painting’ revealed

5 March 2024 at 14:17

Two researchers at the Okinawa Institute of Science and Technology (OIST) in Japan have examined the physics behind dendritic painting, which involves mixing colourful inks with alcohol and applying the droplets to a surface coated with a layer of acrylic paint.

The process first involves diluting one part of acrylic paint to two or three parts of water and applying it to a non-absorbent surface with a brush.

The next step is to mix the alcohol and acrylic ink and apply a droplet of the mix to the surface layer while the acrylic paint is still wet.

The result is an intricate set of fractal patterns that can resemble snowflakes, thunderbolts or neurons.

OIST’s Chan San To and Eliot Fried examined the fluid dynamics at play when the liquids create these patterns (PNAS Nexus 3 pgae059).

The duo found that the surface tension as the droplet dries and the non-Newtonian nature of the fluids play an important role.

As the ink–alcohol droplet expands, it shears the still-wet surface layer and changes its viscosity; this shearing is what creates the fractal patterns. The researchers found that a surface layer less than half a millimetre thick was best for creating the fractal patterns.
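One generic way to describe this kind of shear-dependent behaviour is the power-law model for a shear-thinning fluid (offered purely as an illustration; the OIST paper may use a different rheological description):

```latex
\eta(\dot{\gamma}) = K\,\dot{\gamma}^{\,n-1}, \qquad n < 1,
```

where η is the effective viscosity, the shear rate is that imposed by the spreading droplet and K is a consistency index; for n < 1 the paint layer flows more easily where it is sheared hardest.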

They discovered that the physics of dendritic painting is similar to how a liquid travels in a porous medium such as soil.

“If you were to look at the mix of acrylic paint under the microscope, you would see a network of microscopic structures made of polymer molecules and pigments,” notes Fried. “The ink droplet tends to find its way through this underlying network, travelling through paths of least resistance, that leads to the dendritic pattern.”

For a video of the process, see here.

The post The physics behind ‘fractal painting’ revealed appeared first on Physics World.

  •  

Cat qubits reach a new level of stability

5 March 2024 at 16:02

Quantum computers could surpass conventional computing in essential tasks, but they are prone to errors that ultimately lead to the loss of quantum information, limiting today’s quantum devices. Therefore, to achieve large-scale quantum information processors, scientists need to develop and implement strategies for correcting quantum errors.

Researchers at the Paris-based quantum computing firm Alice & Bob, together with colleagues at France’s ENS–PSL and ENS de Lyon, have now made significant strides towards a solution by enhancing the stability and control of so-called cat qubits. Named after Erwin Schrödinger’s famous thought experiment, these quantum bits use coherent states of a quantum resonator as their logical states. Cat qubits are promising for quantum error correction because they are constructed from coherent states, which make them intrinsically robust against certain types of errors from the environment.

A new measurement protocol

Quantum bits suffer from two types of errors: phase flips and bit flips. In quantum computing, a bit flip is an error that changes the state of a qubit from |0⟩ to |1⟩ or vice versa, analogous to flipping a classical bit from 0 to 1. A phase flip, on the other hand, is an error that alters the relative phase between the |0⟩ and |1⟩ components of a qubit’s superposition state. Cat qubits can be stabilized against bit-flip errors by coupling the qubit to an environment that preferentially exchanges pairs of photons with the system. This autonomously counteracts the effects of some errors that generate bit-flips and ensures that the quantum state remains within the desired error-corrected subspace. However, the challenge of quantum error correction is not just about stabilizing qubits. It is also about controlling them without breaking the mechanisms that keep them stable.
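In operator language, the two error types correspond to the Pauli X and Z operators acting on a general qubit state:

```latex
X\,(\alpha|0\rangle + \beta|1\rangle) = \alpha|1\rangle + \beta|0\rangle \quad \text{(bit flip)}, \qquad
Z\,(\alpha|0\rangle + \beta|1\rangle) = \alpha|0\rangle - \beta|1\rangle \quad \text{(phase flip)}.
```

Cat-qubit stabilization suppresses the first type, leaving the second to be dealt with by a conventional error-correcting code.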

Photograph of the circuit design
Cat coupling: Photograph of the circuit design described in the first paper. A superconducting resonator hosting the cat qubit mode (blue) couples to a lossy auxiliary/buffer mode (red). Pairs of photons from the memory are converted to buffer photons by pumping the nonlinear element (white) through two lines (yellow). (Courtesy: Réglade, Bocquet et al., Quantum control of a cat-qubit with bit-flip times exceeding ten seconds, https://arxiv.org/abs/2307.06617)

In the first of a pair of studies posted on the arXiv preprint server, and not yet peer-reviewed, researchers at Alice & Bob, ENS-PSL and ENS de Lyon found a way of increasing the bit-flip time to more than 10 seconds – four orders of magnitude longer than previous cat-qubit implementations – while still fully controlling the cat qubit. They achieved this by introducing a readout protocol that does not compromise bit-flip protection in their cat qubit, which consists of a quantum superposition of two quasi-classical (coherent) states trapped in a superconducting quantum resonator on a chip. Crucially, the new measurement scheme they devised for reading out and controlling these qubit states does not rely on additional physical control elements, which previously limited the achievable bit-flip times.

Previous experiment designs used a superconducting transmon – a two-level quantum element – to control and read out the state of the cat qubit. Here, the researchers devised a new readout and control scheme that uses the same auxiliary resonator that provides the two-photon stabilization mechanism for the cat qubit. As part of this scheme, they implemented a so-called holonomic gate that transforms the parity of the quantum state to the number of photons in the resonator. The photon number parity is a characteristic property of the cat qubit: an equal superposition of the two coherent states contains only superpositions of even photon numbers, whereas the same superposition but with a minus sign contains only superpositions of odd photon numbers. The parity therefore provides information about what state the quantum system is in.
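Written out, the two logical states are even- and odd-parity superpositions of the coherent states |α⟩ and |−α⟩, which is why a parity measurement reveals which superposition the resonator holds:

```latex
|\mathcal{C}^{\pm}_{\alpha}\rangle = \mathcal{N}_{\pm}\,\bigl(|\alpha\rangle \pm |{-}\alpha\rangle\bigr),
```

where the plus combination contains only even photon numbers, the minus combination only odd ones, and N± are normalization factors.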

Redesigning the stabilization of cat qubits

The Alice & Bob team prepared and imaged quantum superposition states while also controlling the phase of these superpositions and maintaining a bit-flip time of over 10 seconds and a phase-flip time longer than 490 ns. Fully realizing a large-scale error-corrected quantum computer based on cat qubits will, however, require not only good control and fast readout, but also a means of ensuring the cat qubit remains stable for long enough to perform computations. Researchers from Alice & Bob and ENS de Lyon addressed this important and challenging task in the second study.

To realize a stabilized cat qubit, the system can be driven by a two-photon process that injects pairs of photons while dissipating photons only two at a time. This is usually done by coupling the cat qubit to an auxiliary resonator and pumping an element called an asymmetrically threaded SQUID (ATS) with precisely tuned microwave pulses. This approach, however, poses significant drawbacks, such as heat buildup, activation of unwanted processes, and the necessity of bulky microwave electronics.

Diagram of circuit design
Extending bit flip times: Circuit design from the second paper. Instead of the traditional four-wave mixing element (a) pumped to convert pairs of photons of the cat qubit resonator (green) to buffer photons (blue), here the Alice and Bob/ENS de Lyon team realize a coupling with a three-wave mixing element (b). The schematics of the circuit, including the three-wave mixing element, are shown in (c), and a photograph of the chip in (d). (Courtesy: Marquet et al., Autoparametric resonance extending the bit-flip time of a cat qubit up to 0.3 s, https://arxiv.org/abs/2307.06761)

To mitigate these problems, the researchers redesigned the two-photon dissipation mechanism so that it does not require such an additional pump. Instead of an ATS, they implemented the cat qubit in a superconducting oscillator mode coupled to a lossy auxiliary mode via a nonlinear element consisting of multiple Josephson junctions. The Josephson element serves as a “mixer” that makes it possible to exactly match the energy of two cat qubit photons to that of one photon in the auxiliary resonator. As a result, in this so-called autoparametric process, pairs of photons in the cat qubit resonator are transformed into a single photon of the buffer mode without the need for any additional microwave pump.
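The frequency matching at the heart of this autoparametric scheme can be written as a simple condition (a schematic statement of the design goal rather than a formula quoted from the paper):

```latex
2\,\omega_{\mathrm{cat}} = \omega_{\mathrm{buffer}},
```

so that two photons leaving the cat-qubit resonator carry exactly the energy of one photon entering the lossy buffer mode, and no external pump is needed to make up the difference.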

Photo of Alice and Bob's chip, held with tweezers in a person's gloved hand against a black background
Robust against bit-flip errors: Another view of the Alice and Bob chip. (Courtesy: Alice & Bob/Nil Hoppenot)

By designing a superconducting circuit with a symmetric structure, the team was able to couple a high-quality resonator with a low-quality one through the same Josephson element. They thereby increased the two-photon dissipation rate by a factor of 10 compared to previous results, with a bit-flip time approaching one second – in this case limited by the transmon. A high two-photon dissipation rate is needed for fast qubit manipulation and short error correction cycles. These are crucial for correcting the remaining phase-flip errors in a repetition code of cat qubits.

Future applications with cat qubits

Gerhard Kirchmair, a physicist at the Institute of Quantum Optics and Quantum Information in Innsbruck, Austria, who was not involved in either study, says that both works describe important steps towards realizing a fully error-corrected qubit. “These are the next steps towards full-fledged error correction,” Kirchmair says. “They clearly demonstrate that it is possible to achieve exponential protection against bit flips in these systems, which demonstrates that this approach is viable to realize full quantum error correction.”

The researchers acknowledge that significant obstacles remain. Because the accuracy of readout using the holonomic gate protocol was rather limited, they want to find ways to improve it. Demonstrating gates involving multiple cat qubits and checking whether the inherent bit-flip protection remains will be another important step. Furthermore, with the new autoparametric device set up to exchange pairs of photons, Alice & Bob co-founder Raphaël Lescanne anticipates being able to stabilize a cat qubit using four different coherent states instead of only two. “Our goal is to use the unprecedented nonlinear coupling strength to stabilize a four-component cat-qubit, which would offer in situ phase-flip error protection along with bit-flip error protection,” Lescanne says.

Kirchmair believes these results pave the way for more elaborate error correction schemes relying on these heavily noise-biased qubits, where the bit-flip rate is much lower than the remaining phase flip rate. “Next steps will be scaling this system to also correct for phase flips thus realizing a fully error-corrected qubit,” Kirchmair tells Physics World. “One could even imagine combining both approaches in one system to make the best of both results and improve the bit flip times even further.”

The post Cat qubits reach a new level of stability appeared first on Physics World.

  •  

Intelligent solutions streamline radiotherapy treatment planning

5 March 2024 at 16:51

Intelligent software solutions have become a crucial tool for stretched clinical teams to provide the best possible care to cancer patients, particularly those that require more complex treatments using higher radiation doses. Software systems with built-in artificial intelligence can automate repetitive tasks, enhance the information that can be extracted from CT simulators, and ensure consistency of care across an increasing number of cases.

At Castle Hill Hospital in Cottingham, UK, which treats several hundred patients every month with its six linear accelerators, intelligent software has been deployed across the entire treatment planning process. “We try to make use of every tool at our disposal, whether it’s simple decision trees or commercial software that makes our work easier and more efficient,” says Carl Horsfield, principal physicist at Hull University Teaching Hospitals NHS Trust. “Like many treatment centres we are short of staff when compared to national models, and we use software to help us deliver high-quality care.”

Right at the start of the process, automated software on the CT simulators – the SOMATOM go.Open Pro from Siemens Healthineers – maintains the sensitivity of the images by modulating the radiation dose to match the size of the patient. The scanners are also equipped with a smart algorithm, called Direct i4D, that improves the quality of time-resolved images that are used to capture the breathing motion of patients with lung cancer. Normally these 4D CT scans only produce accurate images when regular breaths are taken during the acquisition time, typically around two minutes, but that is rarely the case for patients with lung conditions.

“Lung patients are often complex and problematic at CT, and I have spent a great deal of time attending scans to assess whether the images for 4D lung patients are clinically suitable,” says Horsfield. “With this smart algorithm the scan parameters adapt to the patient’s breathing in real time, which makes the radiographers much more confident in the acquisition when the breathing pattern is irregular.”

Even more significant time savings can be achieved by using an AI-powered solution embedded in the CT scanner, called DirectORGANS, that combines the image data with a deep-learning algorithm to automatically contour the patient’s critical organs. Such automatic contours are generated for every radical patient who is treated at Castle Hill, avoiding the need for a clinician to draw every structure by hand. In congested treatment sites, like the head-and-neck, that can reduce the time taken by an hour or more. “Saving time for our clinicians is paramount, and autocontouring is a fantastic way to ensure they are not repeating simple tasks for multiple patients,” comments Horsfield.

Importantly, the accuracy of the automatic contours – and therefore the amount of time that can be saved – depends on the quality of the input data. DirectORGANS offers a key advantage here, since it captures a bespoke dataset from the CT scan that has been optimized to generate the best results from the deep-learning algorithm. “Many autocontouring tools are hosted in the cloud, which means that they only have access to the scan that has been configured for the needs of the clinical team,” explains Horsfield. “One of the reasons we like DirectORGANS is that it makes its own reconstruction, setting the parameters on the acquiring scanner to match the way the organs should be made.”

The software generates accurate contours for many common organs-at-risk, including the lung, prostate, bladder and spinal canal. Once created, the patient’s clinician at Castle Hill always reviews the structures, edits them as needed, and manually delineates the tumour. Crucially, the clinician must also approve the final set of contours before they are used for treatment planning. “A clinician still needs to make sure that the contours produced by the algorithms are fit for purpose,” says Horsfield. “We also prompt them to provide feedback on the quality of the organs, which provides us with some internal quality assurance.”

While the initial version of the software included 30 or 40 pre-loaded structures, the latest release has improved the coverage and accuracy still further. One key advance, for example, is the ability to automatically contour the lymph node chains, normally a manual and painstaking task. “For prostate patients where there is risk of nodal infiltration, the clinicians need to work their way all the way from the prostate across the sacrum to the end of the local lymph-node chain,” explains Horsfield. “Having automated contouring for those kinds of structures will be a massive saving for them, even on the occasion when some editing is required.”

RapidPlan
Knowledge-based planning RapidPlan exploits model data from previous cases to generate a personalized treatment plan for each new patient. (Courtesy: Siemens Healthineers)

Meanwhile, a number of automated tools are also built into the team’s treatment planning system, Varian’s Eclipse. One that has proved particularly useful for the Castle Hill team is RapidPlan, a knowledge-based solution that uses a model created from previous cases to generate a personalized treatment plan for a new patient. “It’s a tool that helps us to determine what is achievable for each patient, particularly for more complicated cases where the location of the organs-at-risk might compromise the coverage of the target,” says Horsfield. “We have class solutions for our treatment plans as starting points, but it’s smarter than that because it is specific to the anatomy of each patient.”

This knowledge-based approach has proved particularly beneficial for new staff members, and has also improved the consistency and quality of the plans produced across the whole team. “Someone who has been with us for six months might not create a plan of the same standard as one of our more experienced team members,” says Horsfield. “Augmenting their knowledge with these intelligent tools allows them to access that experience and standardizes the quality of the plans we produce.”

Carl Horsfield and team
Software as a solution Carl Horsfield (centre) and the team at Castle Hill have deployed a series of intelligent tools to streamline the treatment planning process. (Courtesy: Siemens Healthineers)

As with any machine-learning approach, the quality of the predictions depends on the training data used to create the model. At Castle Hill the team has used its own cases to develop models for four treatment sites – the lung, head-and-neck, oesophagus and prostate – with several others now being developed to realize further time savings for the planning team. “One of the big difficulties with treatment planning is knowing when to stop,” says Horsfield. “RapidPlan provides the reassurance that you have found an optimal solution for that patient, and that there’s less benefit to spending additional time questioning your choices.”

The Eclipse treatment planning system also provides an interface for adding bespoke tools to the planning process. As an example, the team at Castle Hill has created an automated tool for creating optimization structures, which constrain the solutions produced by the treatment planning system by defining particular areas that should not be targeted with radiation. “We’ve made about 15 different protocols to create these avoidance and optimization structures,” says Horsfield. “They are all simple operations, but we realized that they were being done manually for nearly every treatment plan. It’s been really empowering to be able to create our own tools for making our processes more efficient.”

Such efficiency savings are particularly critical at a time when treatment centres like Castle Hill are dealing with the fallout from the COVID-19 pandemic. With a huge influx of patients and a shortage of healthcare professionals, intelligent tools that can automate at least some of the treatment planning process are helping ongoing efforts to work through the backlog. “Our capacity before COVID was to produce 40 plans per week, and now the whole team is making a big push to increase that to 50,” says Horsfield. “Every efficiency we can achieve by automating our processes is helping us to make headway against our recovery plan, while also ensuring that we continue to produce high-quality plans for every patient we treat.”

The post Intelligent solutions streamline radiotherapy treatment planning appeared first on Physics World.

  •  

A suspenseful story of life and death in the universe

6 March 2024 at 12:00

In a saturated market, it’s hard to make an astronomy book stand out. For me, the blurb and introduction of C Renée James’s Things That Go Bump in the Universe didn’t immediately differentiate it from the thousands of other similar popular-science titles.

This is a shame because James – a researcher at Sam Houston State University in the US – is an engaging writer who employs all the careful plotting and pacing of a detective novelist. Once it gets into its groove, the book covers the explosive lives and deaths of stars, which, as James reminds us, are but a short-lived blip in the story of a universe that is unstoppably cooling and expanding.

The book starts slowly, beginning with the first hints found by early astronomers that we inhabit only a tiny, dim corner of a vast and cacophonous cosmos. The smoking gun is the discovery that the universe extends far beyond our Milky Way, making the stars far brighter and more energetic than astronomers had thought possible. The rest of the book details a century of efforts to solve the mystery of how to align our understanding of physics with the wild growing pains of our stellar neighbours.

The characters we meet along the way – from binary stars to black holes – are painted with rich and lively prose

The characters we meet along the way – from binary stars to black holes – are painted with rich and lively prose, with James detailing the range of clues astronomers use to study these strange objects, including neutrinos, gamma-ray bursts and even tree rings. The chapters are short and punchy, and while they do build to a coherent story, they are self-contained enough that keeping up with James never feels like studying for an exam.

To the non-expert reader, the scale of the universe can sometimes be so cartoonishly large that it fails to make an impression. But James cleverly never takes her feet off the Earth. The faltering, human stories of the people who have tried to peer through the intergalactic noise – from the first Aboriginal Australians to modern astronomers – emphasize the awesome size of the events she describes.

Though the broadness of the book’s topic might have put me off picking it off the shelf, James manages to bring it all together in the final pages. I finished the book on my evening commute, and I found myself racing to reach the end before my train pulled into the station.

  • 2023 Johns Hopkins University Press 304pp hb $29.95

The post A suspenseful story of life and death in the universe appeared first on Physics World.

  •  

Spectacular scans of thousands of vertebrate specimens released

6 March 2024 at 14:00

A freely available repository of thousands of natural history specimens has been created by researchers in the US.

The openVertebrate (oVert) project, funded by the National Science Foundation, is a five-year initiative between 18 US institutions to create 3D reconstructions of vertebrate specimens using computed tomography (CT) scans (BioScience 10.1093/biosci/biad120).

The collection includes amphibians, reptiles, fish and mammals, with many of the fluid-preserved specimens held by museums and not only those on public display.

The specimens were scanned between 2017 and 2023, allowing scientists to examine them without having to dissect them or sample their tissues.

The study has already thrown up some surprises such as the finding that frogs have lost and regained teeth more than 20 times throughout their evolutionary history.

“When people first collected these specimens, they had no idea what the future would hold for them,” notes Edward Stanley from Florida Museum of Natural History, who is oVert’s co-principal investigator.

The post Spectacular scans of thousands of vertebrate specimens released appeared first on Physics World.

  •  

Evidence for ‘quark coalescence’ found in LHC collisions

6 March 2024 at 16:31

Physicists working on the LHCb experiment have seen evidence that “quark coalescence” plays a role in the evolution of quarks into hadrons following proton collisions at the Large Hadron Collider (LHC). In this mechanism, which was originally proposed in the 1980s, existing quarks with overlapping wavefunctions combine, rather than new quarks being created. It is most pronounced at low transverse momenta, and gradually turns off as quarks escape rapidly from the collision point.

Quarks are the particles that make up the protons and neutrons inside atomic nuclei and numerous other hadrons (heavy particles) that feel the strong interaction. One of their strangest features is that they can never be observed in isolation. The main reason is that, unlike gravity, electromagnetism and the weak interaction, all of which drop in strength with distance, the effect of the strong interaction grows as bound quarks move further apart. If quarks are sufficiently far apart, the gluon field that mediates the strong interaction contains enough energy to create particle-antiparticle pairs. These bind to the original quarks, creating new bound particles that can either be mesons (combinations of one quark and one antiquark) or baryons (comprising three quarks). This process is called fragmentation.
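This growth of the interaction with separation is often illustrated with a phenomenological “Cornell-type” potential between a quark and an antiquark (a textbook sketch, not something used in the LHCb analysis):

```latex
V(r) \approx -\frac{4}{3}\,\frac{\alpha_s}{r} + \kappa\, r,
```

where the linear term keeps growing with the separation r, so pulling the quarks apart eventually stores enough energy in the field to create a new quark-antiquark pair, at which point fragmentation occurs.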

Experiments involving heavy ion collisions have suggested that this is not the whole story, however. Physicists believe that quarks can also combine in the dense quark–gluon plasma formed by smashing these large particles together in a process called coalescence.

“You have a collision, you make a bunch of quark–antiquark pairs that start moving away from each other, and because of wave-particle duality each particle has a wavelength that sort of tells you how big it is,” explains Matt Durham of Los Alamos National Laboratory in the US, who is a member of the LHCb collaboration.

Existing quarks combine

“If you have three quarks that overlap each other, you freeze them together into a baryon; if you have two quarks that overlap, you freeze them together into a meson; if you have a quark that doesn’t overlap with any other ones it has to fragment,” Durham explains. “So coalescence takes quarks that are produced in the collision and sticks them together; fragmentation requires you to make new quarks out of the vacuum.”

Coalescence in heavy ion collisions has been “generally accepted”, says Durham, because it is otherwise difficult to explain the ratios of protons to pions produced in experiments. Heavy ion collisions are messy, however, and theoretical predictions are inevitably imprecise. In the new research, the LHCb team studied the production of b quarks in proton-proton collisions.  Sometimes called the bottom or beauty quark, the b quark is the second most massive quark in the Standard Model of particle physics.

The production of a b quark is almost certain to result in either a lambda-b baryon or a B0 meson, both of which contain a b quark. The production ratio between these two has been extensively studied in experiments in which the b quark is produced by electron-positron collisions – a process that can lead only to fragmentation. “If you only have fragmentation, this ratio should be universal,” says Durham.
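Schematically, the observable being compared is the ratio of baryon to meson yields (the LHCb measurement involves efficiency-corrected counts of specific decay channels, so this is only the idea in outline):

```latex
R \equiv \frac{N(\Lambda_b^0)}{N(B^0)},
```

which should be the same everywhere if fragmentation is the only hadronization mechanism, but which the LHCb data show rising at low transverse momentum and high event multiplicity, exactly where coalescence has partner quarks to work with.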

The LHCb team combed through several years’ data on proton–proton collisions and studied the decay products from collisions that had produced b quarks. For collisions with high transverse momenta relative to the colliding beams and few other outgoing particles detected at the same time, the baryon-to-meson ratio was approximately equal to the ratio in electron-positron experiments.

More baryons

However, as the transverse momenta dropped and as the number of other particles detected simultaneously grew, the proportion of baryons gradually increased relative to the proportion of mesons. This, the researchers concluded, was clear evidence that another process, one more likely to produce baryons, was at work in these collisions. In this scenario the b quark is surrounded by other quarks with which it can coalesce – a process that becomes increasingly disfavoured as the b quark is more separated from the other particles. “You really require coalescence to explain that,” says Durham, who adds, “I think we’ve shown it quite definitively here”.

“I definitely find the data convincing,” says theorist Ralf Rapp of Texas A&M University; “There used to be a disconnect between very small systems – the extreme being electron–positron, where you only have one quark–antiquark pair – and the heavy ion systems where you have thousands of quarks. The way they really make their point is to show systematically how the effect goes away and recovers the electron-positron limit as a function of how many hadrons are observed, which is an observable measuring how many quarks and antiquarks there are to coalesce with.”

Experimentalist Anselm Vossen of Duke University in North Carolina agrees that the work is “very nice”, but notes that the underlying assumptions used to calculate the fragmentation fractions involve the quarks being isolated, so it is perhaps unsurprising that they give incorrect results at low transverse momenta when this is not the case. “All these are models,” he says. “It’s very suggestive that if you use something in the coalescence model it works, but that doesn’t mean it’s ‘the truth’.”

The research is described in Physical Review Letters.

The post Evidence for ‘quark coalescence’ found in LHC collisions appeared first on Physics World.


Battle for the skies: US insists GMT and TMT telescopes must vie for funding

By: No Author
6 March 2024 at 18:30

The US National Science Foundation (NSF) has announced it will only support the construction of the Giant Magellan Telescope (GMT) or the Thirty Meter Telescope (TMT) – but not both facilities. The decision to pick just one next-generation, ground-based instrument came as the National Science Board (NSB), which oversees the NSF, set a limit of $1.6bn for its Extremely Large Telescope programme (US-ELTP). The board says it will discuss the NSF’s “plan to select which of its two candidate telescopes to continue to support” at its meeting in May.

Both the GMT and the TMT are seen as the future of US ground-based astronomy and stem from advances in mirror technology. The GMT will rely on seven primary mirror segments and seven secondary mirrors to give it a light-collecting surface 25.4 m across. Building is already under way at Chile’s Las Campanas peak.

The TMT, meanwhile, will use a segmented primary mirror consisting of 492 elements of zero-expansion glass for a 30 m-diameter primary mirror. The team has chosen Hawaii’s Mauna Kea peak as its location.

However, protests by indigenous Hawaiians, who regard the site as sacred, have delayed the start of construction. The issue might even force a change in the TMT’s location, with officials having identified La Palma, in Spain’s Canary Islands, as an alternative site in 2019.

Going it alone

In 2018 the two telescope teams joined forces to create US-ELTP to give US researchers access to giant telescopes in both the Northern and Southern hemispheres. A further boost came in 2020 when the two telescopes emerged from Astro2020, the most recent decadal survey of US astronomy and astrophysics, as a main priority for the community.

The decision for the NSF to now only focus on one design comes after a reduction in funds for science-based government agencies in the current financial year, which began on 1 October 2023. Contentious negotiations among the Republican-majority House of Representatives, the Democratic-majority Senate, and Democratic US President Joe Biden have resulted in a cut of 8.3% in the NSF’s proposed budget. At $9.06bn, it falls about $820m short of the financial year 2023 amount.

At a meeting in late February, the NSB pointed out that any figure greater than $1.6bn for US-ELTP would impoverish other major projects supported by NSF. Yet the decision has not surprised insiders. Michael Turner, a cosmologist from the University of Chicago, wrote in Science last November that it was “simply not possible for NSF to join both projects at the level needed to make each successful”.

The GMT and TMT, both of which have support from various US universities, are not, however, entirely American projects. The GMT works with institutions in Asia, Australia and South America, while TMT’s partners include research organizations in Canada, India and Japan.

Indeed, in a letter published in Science on 16 February, days before the NSB’s decision, the executive director of the TMT and the president of the GMT – along with the director of NOIRLab, the US national center for ground-based observatories – wrote that overseas and US partners “are on a path to contribute a substantial portion of the $3bn required for each of our telescopes”, adding that they “advocate US government funding for both telescopes”.

The GMT is expected to cost $2.54bn, of which $850m has already been committed. Officials at the TMT have not released a final construction cost for the telescope, but it is expected to be at least $3bn, of which $2bn has been committed.

Any further delays to the US programme, however, would give Europe the edge when it comes to making discoveries with a next-generation ground-based telescope. The European Southern Observatory’s Extremely Large Telescope, with a 39 m primary mirror, is already under construction on Cerro Armazones in Chile, with first light planned for 2028.

The post Battle for the skies: US insists GMT and TMT telescopes must vie for funding appeared first on Physics World.


Neural prosthetic aims to boost memory

By: No Author
7 March 2024 at 10:30

An electronic prosthetic system could help people with impaired memory – due to Alzheimer’s disease, traumatic brain injury or epilepsy – remember specific information. The new technology, being developed by researchers at Wake Forest University School of Medicine and the University of Southern California, works on the hippocampus, a part of the brain involved in making new memories.

Brain–computer interfaces – used, for example, to control robotic limbs – establish communication between the brain and an external device. The hippocampus (humans actually have two hippocampi, one in each hemisphere of the brain) can, to some degree, grow new neurons. But scientists haven’t found a way to repair hippocampal damage. The neural prosthetic developed by the researchers uses models derived from hippocampal electrical activity to stimulate recall.

“Most brain control interfaces have relied on the brain figuring out how to deal with input from things. We’re working on how to figure out how to match what the brain is doing,” says Brent Roeder, a research fellow at Wake Forest who has been working on the project for nearly a decade. “We’re figuring out, what are the possible ways to enhance memory function, and which ways work best for what people and what type of conditions?”

Encoding and decoding memory

In a study published in 2018 in the Journal of Neural Engineering, the team stimulated neurons in the hippocampus in real time using a multi-input multi-output nonlinear mathematical model. “[In that study, the model] didn’t care what you were trying to remember…it was just trying to help your hippocampus work better,” Roeder explains.

In their most recent work, reported in Frontiers in Computational Neuroscience, the researchers isolated electrical activity to specific neurons and then used that information to stimulate the hippocampus to see if that could help people remember specific images better.

The study included 14 adults – all of whom had a diagnosis of epilepsy and were participating in a diagnostic brain-mapping procedure in which electrodes were placed in at least one hippocampus. The participants were shown different categories of images (animal, building, plant, tool or vehicle) in a visual delayed match-to-sample memory task. The researchers identified common neural activity in the hippocampus for each image category and used this information to derive a mathematically calculated, fixed firing pattern. This firing pattern was then used to stimulate the hippocampus during a visual recognition memory task.
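
The derivation of those category-specific firing patterns can be pictured as an averaging step over trials of the same category. The sketch below is a toy illustration only – the array shapes, the Poisson-distributed activity and the plain mean are assumptions for demonstration, not the model the Wake Forest–USC team actually used.

import numpy as np

# Toy sketch: derive a per-category "firing pattern" template by averaging
# recorded hippocampal activity over all trials of that image category.
rng = np.random.default_rng(0)
n_trials, n_channels, n_time = 40, 16, 200                    # hypothetical recording sizes
recordings = rng.poisson(2.0, (n_trials, n_channels, n_time)).astype(float)
categories = rng.choice(["animal", "building", "plant", "tool", "vehicle"], size=n_trials)

firing_patterns = {
    cat: recordings[categories == cat].mean(axis=0)           # channels x time template
    for cat in np.unique(categories)
}
# In the study, a fixed pattern of this kind (derived mathematically, not by a raw
# mean) was played back as stimulation during the recognition task.
print({cat: pattern.shape for cat, pattern in firing_patterns.items()})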

“We were really testing two things in this study. The first is, can you stimulate for specific information? And the second was, how good are we at stimulating for the information we want to stimulate?” says Roeder. “So the answer to the first question is yes, you can stimulate for specific information. The answer to the second question is, well, there’s a lot of room for improvement.”

The researchers observed both increases and decreases in memory performance. In approximately 22% of cases, there was a difference in how well participants remembered images they had been shown before. When stimulation was delivered on both sides of the brain, almost 40% of participants with impaired memory function showed changes in memory performance.

“The example I give is, you’ve seen a waiter carry a tray on their fingers. They’re not supporting the whole tray, they’re supporting part of the tray. But because those parts of the tray are connected to the rest of the tray, they lift the whole tray,” Roeder explains. “Our memory is associative. We’re not trying to support all of the memory – we’re trying to support part of the neural activity to boost all of the memory.”

The researchers conclude that there may have been more overlap between image categories than they had anticipated (for example, animals are often found near plants). Making image categories more distinct, by showing colours or directions instead of images, for instance, could help improve the performance of the electronic prosthetic.

“Now that we know it’s possible…it’s just a matter of getting better at it,” Roeder says.

The post Neural prosthetic aims to boost memory appeared first on Physics World.


Lake Shore introduces the MeasureReady system for material and device research

By: No Author
7 March 2024 at 11:09

In this video, Lake Shore summarizes the capabilities of the MeasureReady™ M81-SSM synchronous source measure system, which provides ultra-low noise DC and AC source and measure performance.

The M81-SSM has been designed to simplify the complexity often experienced with typical multiple function-specific instrumentation set-ups. It combines the convenience of DC and AC sourcing with DC and AC measurement – including both voltage and current lock-in measurement capabilities – for a wide range of low-level device characterization applications. It also enables switching between AC, DC and lock-in measurements at the press of a button, without having to change any cables or instruments.

Unique MeasureSync™ signal synchronization technology

An extremely low-noise simultaneous source and measure system, the M81-SSM uses MeasureSync™ technology to ensure inherently synchronized measurements across one to three source channels and one to three measure channels per half-rack instrument.

Amplitude and frequency signals are transmitted to and from the remote amplifier modules using a proprietary real-time analogue method that minimizes noise and ground errors while ensuring tight time and phase synchronization between all modules. Because the M81-SSM sources and measures channels synchronously, multiple devices can be tested under identical conditions so that time-correlated data can be easily obtained.

Connect up to three source modules and up to three measure modules at once

The M81-SSM system provides DC to 100 kHz precision electrical source and measure capabilities, with 375 kHz (2.67 μs) source/measure digitization rates across up to three source and three measurement front-end modules.

There is a choice of differential voltage measure (VM-10) and balanced current source (BCS-10) modules, and single-ended current measure (CM-10) and voltage source (VS-10) modules. All modules use 100% linear amplifiers and are powered by highly isolated linear power supplies for the lowest possible voltage/current noise performance, which Lake Shore says rivals the most sensitive lock-in amplifiers and research lab-grade source and measure instruments. Embedded calibration data on the modules enables flexible measurement reconfiguration between experimental set-ups.

On the VS-10 module, dual AC and DC range sourcing allows for precise full control of DC and AC amplitude signals with a single module and sample/device connection. On the VM-10 module, seamless range-change measuring significantly reduces or eliminates the typical range change-induced measurement offsets/discontinuities in signal sweeping applications that require numerous range changes.

For further details, visit the M81-SSM webpage at www.lakeshore.com/M81.

The post Lake Shore introduces the MeasureReady system for material and device research appeared first on Physics World.


Novel superconducting cavity qubit pushes the limits of quantum coherence

7 March 2024 at 13:00

Over the history of quantum computing, the coherence time of superconducting qubits – that is, the time during which they retain their quantum information – has improved drastically. One major improvement comes from placing superconducting qubits inside three-dimensional microwave resonator cavities, which preserve the qubit’s state by encoding it in photons stored in the cavity.

In a recent study, researchers from Israel’s Weizmann Institute of Science pushed the boundaries of this method by demonstrating a novel three-dimensional cavity qubit setup with a single-photon coherence time of 34 milliseconds (ms). Long coherence time is key to achieving low-error qubit operations (thereby reducing the hardware required in fault tolerance), and the new coherence time shatters the previous record by more than an order of magnitude.

Qubits are highly sensitive to their environments, and readily lose information due to noise. To preserve qubit states for longer, researchers turned to microwave resonator cavities as a form of storage device. As their name implies, these cavities are three-dimensional structures comprising a hollow space designed to accommodate a superconducting transmon qubit chip and the microwave photons that interact with it. Through an encoding process involving the application of specific microwave pulses, the qubit state is transferred to the cavity state and stored there. Once the desired period has passed, the state is retrieved by encoding it back into the transmon. The cavity thus plays a crucial role in controlling and measuring the qubit placed inside it.

For practical applications in quantum information processing, the cavity must be capable of storing the quantum state for extended periods. However, achieving this is not straightforward. Single photons – the quanta of the electromagnetic field – are easily lost and hard to confine. Disturbances in the qubit chip placed inside the cavity are significant sources of photon damping and decoherence, and the formation of an unwanted oxide layer on the cavity’s surface further diminishes the photon lifetime.

Engineering a novel cavity design

Led by Serge Rosenblum, Fabien Lafont, Ofir Milul, Barkay Guttel, Uri Goldblatt and Nitzan Kahn, the Weizmann team overcame these challenges by designing a low-loss superconducting niobium cavity that supports a long-lived single-photon qubit. They used highly pure niobium to fabricate two separate parts of the cavity, and later welded the parts together to prevent photons from leaking out. They also removed oxide and surface contaminants by chemically polishing the cavity.

The resulting structure looks a little like an open umbrella, with a half-elliptical geometry that evolves into a narrow waveguide where the umbrella’s handle would be. Like a satellite dish antenna, which has a curved surface that reflects radio waves towards its focal point, the elliptical structure of the cavity concentrates the electromagnetic field at the centre of the flat surface of the other half of the cavity (see image).

Cavity set-up Left: diagram of the team’s transmon chip inserted inside the narrow waveguide and partially protruding into the half-elliptical superconducting cavity. Right: A photo of the cavity’s two halves before assembly. (Courtesy: Milul et al., “Superconducting Cavity Qubit with Tens of Milliseconds Single-Photon Coherence Time”, PRX Quantum 4 030336 https://doi.org/10.1103/PRXQuantum.4.030336; Serge Rosenblum)

Once the team had prepared the cavity, “the biggest challenge was to integrate a superconducting transmon qubit into a cavity without diminishing the cavity’s photon lifetime”, Rosenblum says. “This takes us back to the infamous balancing act in quantum systems between controllability on one side and isolation on the other.”

The researchers achieved this balance by placing only about 1 millimetre of the transmon chip inside the elliptical cavity, while the rest is housed inside the waveguide. This configuration minimizes chip-induced losses. The cavity’s limited exposure to the chip does, however, weaken the cavity–transmon interaction, so the researchers compensated for this by applying strong microwave pulses to encode the qubit state in the cavity.

Leveraging a cavity for quantum memory and quantum error correction

Thanks to this innovative cavity design, the researchers achieved a single-photon lifetime of 25 ms and a coherence time of 34 ms. This is a significant improvement over the previous state-of-the-art cavity, which had a coherence time of about 2 ms.

Rosenblum and colleagues also demonstrated an error-correction method known as bosonic quantum error correction, whereby the qubit’s information is redundantly stored in multiple photons occupying the cavity (so-called Schrödinger cat states). This preserves the fragile qubit state by storing it in many cavity photons, not just a few. The drawback is that as the number of stored photons increases, so does the photon loss rate. Despite this constraint, the Weizmann team achieved Schrödinger cat states with a size of 1024 photons. This corresponds to an average number of 256 photons, which is 10 times larger than previous demonstrations – a remarkable advancement that could improve the performance of bosonic quantum error correction.
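
The relation between the two numbers quoted above follows from the usual textbook definition of a cat state’s size; a quick check (the definition is standard convention, not spelled out in the article):

# For a two-component cat state built from coherent states |alpha> and |-alpha>,
# the "size" is conventionally the squared phase-space separation,
# |alpha - (-alpha)|^2 = 4*|alpha|^2, while the mean photon number is |alpha|^2.
size = 1024                 # cat-state size reported by the Weizmann team
mean_photons = size / 4     # |alpha|^2
print(mean_photons)         # 256.0 -- the quoted average of 256 photons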

With a photon lifetime four orders of magnitude greater than the time required for gate operations, this breakthrough provides ample time for controlling the qubit before it loses information. Looking ahead, Rosenblum says the team’s aim is to realize quantum operations on these cavities with unprecedented fidelity, or probability of success. Notably, he mentions that after the study was published in PRX Quantum, the team has more than doubled the single-photon lifetime to 60 ms, indicating significant potential for further advancements.

The post Novel superconducting cavity qubit pushes the limits of quantum coherence appeared first on Physics World.


Tackling climate change while improving human wellbeing

7 March 2024 at 16:46

Environmental challenges like climate change are forcing us to rethink how we live in cities. This provides humanity with an important opportunity to develop new policies that also improve the overall wellbeing of urban dwellers.

Our guest in this episode of Physics World Weekly podcast is Radhika Khosla – who is an urban climatologist based at the Oxford Smith School of Enterprise and the Environment at the UK’s University of Oxford. She points out that extreme heat is proving to be the most deadly consequence of climate change and talks about the need to develop and implement cooling technologies that do not boost greenhouse gas emissions.

Khosla explains why the rapid urbanization of India offers opportunities to develop environmental policies that improve people’s lives. She also talks about her plans for the journal Environmental Research Letters, where she has recently become editor-in-chief.

 

The post Tackling climate change while improving human wellbeing appeared first on Physics World.


Ask me anything: Anne Pawsey – ‘I really enjoy working with a huge community of physicists’

8 March 2024 at 10:27

What skills do you use every day in your job?

Communication skills in all their forms are vital, whether it’s giving a presentation, writing a news article, discussing matters with one of our boards, or working with the team at the secretariat of the European Physical Society (EPS) in Mulhouse.

I also use a lot of project-management skills. The EPS runs several international conferences, is part of European Union projects, and facilitates the work of our volunteer members to promote and support physics and physicists – so there are often a lot of plates to keep spinning at the same time.

I’m grateful for the broad knowledge of physics I acquired during my degree. My PhD was in soft-matter physics on the behaviour of colloids in liquid crystals, but I also occasionally find that specialist knowledge I picked up in areas of science beyond my thesis topic comes in handy for understanding matters under discussion.

What do you like best and least about your job?

I really enjoy working with a huge community of physicists and getting to hear them talk with enthusiasm about their research. I particularly enjoy interacting with the EPS’s Young Minds Sections and hearing about the outreach and engagement activities that they organize with EPS support.

My job is also really varied, and no two days are the same. I might be travelling for an editorial meeting, working on administration in Mulhouse, or participating in a planning meeting for a conference – all in the same week. The downside is that I occasionally miss the focused quiet of a meticulous laboratory experiment and I rarely get the luxury of spending an uninterrupted block of time on something.

What do you know today that you wish you’d known when you were starting out in your career?

I wish I’d known how vital language skills would be. Of course, most science is formally communicated in English and as a native speaker I have an advantage. But everyday life around the world happens in each country’s native language. The EPS is based in Mulhouse, France, very close to the Swiss and German borders, so I use French every day and have to converse in German at least once a week. I’m really grateful for the Erasmus year I spent in Grenoble during my degree for giving me a decent proficiency in French and the confidence to speak a foreign language.

The post Ask me anything: Anne Pawsey – ‘I really enjoy working with a huge community of physicists’ appeared first on Physics World.


Space-borne atoms herald new tests of Einstein’s equivalence principle

By: Ali Lezeik
8 March 2024 at 10:30

The motion of freely-falling bodies is independent of their composition. This is one of the foundations of Einstein’s Equivalence Principle (EEP), which underpins our modern understanding of gravity. This principle, however, is under constant scrutiny. Any violations of it would give us hints in our search for dark energy and dark matter, while also guiding our understanding of black holes and other systems where gravity and quantum mechanics meet.

Scientists from the US, France and Germany have now created a new system for testing the EEP: a mixture of two ultracold quantum gases that orbits the Earth aboard the International Space Station (ISS). They also demonstrated the first dual-species atom interferometer in space, which they describe as an “important step” towards testing the EEP. The question they aim to answer with this experiment is simple: do two atoms of different masses fall at the same rate?

Cold atoms on the ISS

The ISS is home to the Cold Atom Laboratory (CAL), which is a “playground” for atoms in space. Launched in 2018, the CAL created the first space-borne Bose–Einstein condensate (BEC) in 2020 – a special state of matter achieved after cooling atoms to temperatures just above absolute zero. This first quantum gas consisted of ultracold rubidium atoms, and following an upgrade in 2021 the CAL now also hosts a microwave source for making quantum gases of potassium atoms.

In the latest work, which is described in Nature, the CAL scientists generated a quantum mixture of both species on the ISS. “Generating this quantum mixture in space is an important step towards developing high precision measurements for testing Einstein’s equivalence principle,” says Gabriel Müller, a PhD student at Leibniz University in Hannover, Germany who is involved in the experiment.

To achieve this mixture, the team confined rubidium atoms in a magnetic trap and allowed the most energetic “hot” atoms to evaporate out of the trap, leaving the “cold” atoms behind. This eventually leads to a phase transition into a quantum gas once the atoms drop below a certain critical temperature.
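
Evaporative cooling can be caricatured in a few lines of code: repeatedly discard the most energetic atoms and let the remainder rethermalize at a lower temperature. The sketch below is purely illustrative – a one-dimensional exponential energy distribution and an arbitrary cut-off – and is not a simulation of the CAL trap.

import numpy as np

# Toy illustration of evaporative cooling: the hottest atoms escape over a cut-off,
# and the remaining atoms rethermalize at a lower mean energy (here in units where
# the initial k_B*T = 1). Real evaporation also involves the trap geometry,
# collision rates and three dimensions.
rng = np.random.default_rng(7)
energies = rng.exponential(scale=1.0, size=100_000)   # initial thermal distribution

for step in range(5):
    cutoff = 2.0 * energies.mean()           # "open" the trap just above the mean energy
    energies = energies[energies < cutoff]   # the most energetic atoms evaporate
    # after rethermalization the survivors share the (lower) mean energy
    energies = rng.exponential(scale=energies.mean(), size=energies.size)

print(energies.mean(), energies.size)        # colder, but with fewer atoms left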

While this process also works for potassium atoms, simultaneously evaporating both species in the same trap is not straightforward. Because the internal energy structure of rubidium and potassium atoms is different, their initial temperatures in the trap differ, as do the optimum trap conditions and the evaporation time needed to reach the critical temperature. As a result, the scientists had to turn to a different solution. “The potassium quantum gas is not generated via evaporative cooling, but rather cooled ‘sympathetically’ via direct thermal contact with the evaporated ultracold rubidium gas,” explains Müller.

Generating this quantum gas in space has its merits, he adds. “On Earth, there’s a gravitational sag, meaning that two atoms of different masses will not be at the same position in the trap. In space, on the other hand, the gravitational interaction is weak, and the two species are overlapped.” This aspect of working in microgravity is essential for performing experiments aimed at observing interactions between the two species that would otherwise be hijacked by the effects of gravity on Earth.

The crucial role of quantum state engineering

Producing a quantum mixture of rubidium and potassium atoms brings the CAL team a step closer to testing the EEP, but other elements of the experiment still need to be tamed. For example, although the two species overlap in the trap, when they are released from it, their initial positions are slightly different. Müller explains that this is partly due to the properties of each atom species leading to different dynamics, but it is also due to the trap release not being instantaneous, meaning that one of the species experiences a residual magnetic force relative to the other. Such systematic effects could easily present themselves as a violation of the EEP if not taken care of properly.

For this reason, the scientists have turned their attention towards characterizing the systematics of their trap and reducing unwanted noise. “This is work that is actively being done in Hannover, to create well-engineered input states of both species, which will be crucial as you need similar initial conditions before you start the interferometer,” says Müller. One solution to the initial position problem, he adds, would be to slowly transport both species to a single position before switching off the magnetic trap. While this can be done with high precision, it comes at the expense of heating up the atoms and losing some of them. The scientists therefore hope to use machine learning to optimize the transport mechanism and thereby achieve similar control of the atomic dynamics, but much faster.

Cooling atoms in space: A schematic of NASA’s Cold Atom Laboratory (CAL) showing the arrangement of the laser beams and the atom chip. (Courtesy: NASA/JPL)

Dual-species atom interferometer in space

Once these problems are resolved, the next step would be to perform an EEP test using dual-species atom interferometry. This involves using light pulses to create a coherent superposition of the two ultracold atom clouds, then recombining them and letting them interfere after a certain free evolution time. The interference pattern contains valuable information about the mixture’s acceleration, from which the scientists can extract whether both species experienced the same gravitational acceleration.
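
For a sense of scale, the phase accumulated by a light-pulse (Mach–Zehnder) atom interferometer under a uniform acceleration a is usually written as Δφ = k_eff a T², where k_eff is the effective two-photon wavevector and T the free-evolution time. The numbers below are illustrative assumptions – a rubidium-like laser wavelength and a 100 ms interrogation time – not parameters from the CAL experiment.

import math

# Back-of-the-envelope phase for a light-pulse atom interferometer, using the
# standard textbook relation delta_phi = k_eff * a * T**2.
wavelength = 780e-9                      # m, rubidium-like transition (assumption)
k_eff = 2 * (2 * math.pi / wavelength)   # effective two-photon wavevector, 1/m
T = 0.1                                  # s, free-evolution time (assumption)
delta_a = 1e-17                          # m/s^2, STE-QUEST-level differential acceleration
delta_phi = k_eff * delta_a * T**2
print(delta_phi)                         # ~1.6e-12 rad -- why such tests are so demanding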

A limiting factor in this technique is how well the positions of the laser beam and the atomic sample overlap. “This is the most tricky part,” Müller stresses. One problem is that vibrations on the ISS cause the laser system to vibrate, introducing phase noise into the system. Another issue is that the different mass and atomic energy level structure of both species leads them to respond differently to the vibrational noise, producing a dephasing between the two atom interferometers.

In the latest work, the scientists demonstrated simultaneous atom interferometry of the mixture and measured a relative phase between the interference patterns of the rubidium and the potassium atoms. However, they are well aware that such a phase is likely due to the noise sources they are tackling, rather than a violation of the EEP.

Future missions

A new science module was launched to the ISS with the goal of increasing the atom number, improving the laser sources and implementing new algorithms in the experimental sequence. Fundamentally, though, the CAL scientists are striving to demonstrate precision inertial measurement beyond the current state of the art. “Such realizations are important milestones towards future satellite missions testing the universality of free fall to unprecedented levels,” says Hannover’s Naceur Gaaloul, a co-author of the recent paper.

One example Gaaloul mentions is the STE-QUEST (Space-Time Explorer and Quantum Equivalence Principle Space Test) proposal, which would be sensitive to differences in acceleration of as little as 10−17 m/s2. This precision is equivalent to dropping an apple and an orange and measuring, after one second, the difference in their position to within the radius of a proton. Space is, famously, hard, but atom interferometry in space is even harder.

The post Space-borne atoms herald new tests of Einstein’s equivalence principle appeared first on Physics World.


Mystery of why inkjet-printed paper curls finally solved

9 March 2024 at 11:00

You may have noticed that a sheet of paper printed on one side using an inkjet printer curls up at the edges after a few hours or days, even if the paper was perfectly flat after printing.

The effect had remained a mystery until now – but it has finally been explained by researchers at Graz University of Technology.

They sprayed standard A4 printer paper on one side with an ink consisting of water and glycerol.

The duo then used a laser scanner to observe the curvature of the sheets over time, finding that, once printed, solvents in the ink begin to slowly migrate through the paper towards the unprinted side (Materials & Design doi:10.1016/j.matdes.2023.112593).

This causes the cellulose fibres on the unprinted side to swell, and so the paper starts to curl.

“To solve the problem, glycerol could be replaced by other solvents,” says Graz material scientist Ulrich Hirn. “However, this is not so easy because glycerol gives the inkjet ink important properties that make it suitable for inkjet printing in the first place”.

Another solution is to print on both sides, which is better for the environment as well.

The post Mystery of why inkjet-printed paper curls finally solved appeared first on Physics World.


Seismic signal that pointed to alien technology was actually a passing truck

10 March 2024 at 16:28

In January 2014 a meteor streaked across the sky above the Western Pacific Ocean. The event was initially linked to a seismic signal that was detected on Papua New Guinea’s Manus Island. This information was used by Harvard University’s Amir Siraj and Avi Loeb to determine where the object likely fell into the ocean. Loeb then led an expedition that recovered spherical objects called spherules from the ocean bottom, which the team claimed to be from the meteor.

Because of the spherules’ unusual elemental composition, the team has suggested that the objects may have come from outside the solar system. What is more, they hinted that the spherules may have an “extraterrestrial technological origin” – that they may have been created by an alien civilization.

Now, however, a study led by scientists at Johns Hopkins University has cast doubt on the connection between the spherules and the 2014 meteor event. They have proposed a very different source for the seismic signal that led Loeb and colleagues to the spherules.

Road rumble

“The signal changed directions over time, exactly matching a road that runs past the seismometer,” says Benjamin Fernando, a planetary seismologist at Johns Hopkins who led this latest research.

“It’s really difficult to take a signal and confirm it is not from something,” explains Fernando. “But what we can do is show that there are lots of signals like this, and show they have all the characteristics we’d expect from a truck and none of the characteristics we’d expect from a meteor.”

That’s right, it was a truck driving past the seismometer, not a meteor.

Discounting the Manus Island seismic data, Fernando and colleagues then used observations from underwater microphones in Australia and Palau to work out where the meteor crashed into the sea. Their location is more than 160 km from where Loeb’s team recovered their samples.

“Whatever was found on the sea floor is totally unrelated to this meteor, regardless of whether it was a natural space rock or a piece of alien spacecraft—even though we strongly suspect that it wasn’t aliens,” Fernando concludes.

He and his colleagues will report their findings next week at the Lunar and Planetary Science Conference in Houston, Texas.

The post Seismic signal that pointed to alien technology was actually a passing truck appeared first on Physics World.


Can a classical computer tell if a quantum computer is telling the truth?

By: No Author
11 March 2024 at 10:30

Quantum computers can solve problems that would be impossible for classical machines, but this ability comes with a caveat: if a quantum computer gives you an answer, how do you know it’s correct? This is particularly pressing if you do not have direct access to the quantum computer (as in cloud computing), or you don’t trust the person running it. You could, of course, verify the solution with your own quantum processor, but not everyone has one to hand.

So, is there a way for a classical computer to verify the outcome of a quantum computation? Researchers in Austria say the answer is yes. Working at the University of Innsbruck, the Austrian Academy of Sciences and Alpine Quantum Technologies GmbH, the team experimentally executed a process termed Mahadev’s protocol, which is based on so-called post-quantum secure functions. These functions involve calculations that are too complex for even a quantum computer to crack, but with a “trapdoor” that allows a classical machine with the correct key to solve them easily. The team say these trapdoor calculations could verify the trustworthiness of a quantum computation using only a classical machine.
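
To get a feel for what a “trapdoor” function is, here is a classical toy example based on squaring modulo N = p·q (Rabin-style). It is emphatically not the lattice-based, post-quantum secure construction used in Mahadev’s protocol – a quantum computer could factor N – but it illustrates the asymmetry: anyone can evaluate the function, while inverting it is easy only with the secret factors.

# Toy trapdoor function: f(x) = x^2 mod N. Recovering a square root of a given
# value is as hard as factoring N -- unless you hold the trapdoor (p, q).
p, q = 10007, 10079              # secret primes (the trapdoor), both = 3 mod 4
N = p * q

def f(x):
    return (x * x) % N

y = f(424242)                    # publicly computable challenge value

# Trapdoor inversion: square roots mod p and mod q (valid because p, q = 3 mod 4),
# recombined with the Chinese remainder theorem.
rp = pow(y, (p + 1) // 4, p)
rq = pow(y, (q + 1) // 4, q)
inv_p_mod_q = pow(p, -1, q)
root = (rp + p * ((rq - rp) * inv_p_mod_q % q)) % N
assert f(root) == y              # the trapdoor holder recovers a valid preimage
print(root)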

Honest Bob?

To understand how the protocol works, assume we have two parties. One of them, traditionally known as Alice, has the trapdoor information and wants to verify that a quantum computation is correct. The other, known as Bob, does not have the trapdoor information, and needs to prove that the calculations on his quantum computer can be trusted.

As a first step, Alice prepares a specific task for Bob to handle. Bob then reports the outcome to Alice. Alice could verify this outcome herself with a quantum computer, but if she wants to use a classical one, she needs to give Bob further information. Bob uses this information to entangle several of his main quantum bits (or qubits) with additional ones. If Bob performs a measurement on some of the qubits, this determines the state of the remaining qubits. While Bob does not know the state of the qubits in advance of the measurements, Alice, thanks to her trapdoor calculations, does. This means Alice can ask Bob to verify the qubits’ state and decide, based on his answer, whether his quantum computer is trustworthy.

Relieved Alice

The team ran this protocol on a quantum processor that uses eight trapped 40Ca+ ions as qubits. The measurements Bob makes relate to the energy of the qubits’ quantum states. To obtain a signal above background noise, the researchers ran the protocol 2000 times for each data point, ultimately proving that Bob’s answers could be trusted.

The researchers call their demonstration a proof of concept and acknowledge that more work is needed to make it practical. Additionally, a full, secure verification would require more than 100 qubits, which is out of scope for most of today’s processors. According to Barbara Kraus, one of the team’s leaders and now a quantum algorithms expert at the Technical University of Munich, Germany, even the simplified version of the protocol was challenging to implement. This is because verifying the output of a quantum computation is experimentally much more demanding than doing the computation, as it requires entangling more qubits.

Nonetheless, the demonstrated protocol contains all the steps required for a complete verification, and the researchers plan to develop it further. “An important task concerning the verification of quantum computations and simulations is to develop practical verification protocols with a high security level,” Kraus tells Physics World.

Andru Gheorghiu, a quantum computing expert from the Chalmers University of Technology in Sweden who was not involved in the research, calls it an important first step towards being able to verify general quantum computations. However, he notes that it currently only works for verifying a simple, one-qubit computation that could be reproduced with an ordinary laptop. Still, he says it offers insights into the challenges of trying to scale up to larger computations.

The research appears in Quantum Science and Technology.

The post Can a classical computer tell if a quantum computer is telling the truth? appeared first on Physics World.


How a technique for recycling rare-earth permanent magnets could transform the green economy

11 March 2024 at 12:00
Growth prospects Rare-earth permanent magnets are vital for the “green economy”, but with more than 99% scrapped, the potential market for HyProMag’s recycled magnets stretches from wind turbines and computer hard drives to motors in electric cars. (Courtesy (from left): Shutterstock/pedrosala; iStock/madsci; iStock/Aranga87)

I recently went on a trade mission to Canada funded by Innovate UK, where I met Allan Walton – a materials scientist who co-founded a company called HyProMag. Spun off from the University of Birmingham in 2018, HyProMag has developed a technique for recycling rare-earth magnets, which are widely used in wind turbines, electric-vehicle (EV) motors and other parts of the “green economy”.

Having been invited to tour HyProMag’s prototype recycling facility on the Birmingham campus, I saw that the technology was shaping up to be a great UK success story. So when Physics World sent me a press release announcing that the company is due to start commercial production at Tyseley Energy Park in Birmingham by mid-2024, I knew my instincts were well founded.

Rare-earth permanent magnets – as I described in my column a few months ago – are alloys of elements such as neodymium, samarium and cerium. With the transition to a “clean-energy” economy now in full swing, demand for rare earths is high. Estimates suggest that the market will grow by as much as a factor of seven between 2021 and 2040.

Trouble is, some 80–90% of the world’s neodymium is currently produced – or controlled – by Chinese companies. That’s prompted some nations, such as the US, to revamp their own production of permanent magnets. But another way to secure supplies of rare earths is to recycle materials. That’s why the imminent start-up of HyProMag’s facility is so interesting, especially as its process is so energy efficient.

Extracting elements

There are lots of possible methods to extract rare-earth elements from waste materials or from products that have reached the end of their lives. Most of the work has so far focussed on getting the individual elements by first dissolving the magnets and then recovering the rare earths from liquid-waste streams that re-enter the supply chain early in the magnet-making process.

This approach is often called “long-loop” recycling as everything is broken down using various techniques and recovered as rare-earth oxides. These oxides then have to be converted into metals before being cast into alloys and broken down into a fine alloy powder to make the magnets. Long-loop recycling is an important but energy-intensive and expensive process.

The Tyseley plant takes a different approach, based as it is on the University of Birmingham’s patented Hydrogen Processing of Magnet Scrap (HPMS) technique. It uses hydrogen as a processing gas to separate magnets from waste streams as a magnet alloy powder, which can be compacted into “sintered” rare-earth magnets. Not requiring heat, it’s a relatively quick process dubbed “short-loop” recycling.


When I looked around the company’s prototype line last year, I noticed that it can recycle the hard disk drives (HDDs) found in computers. Each disk can have as much as 16g of magnetic material, about a quarter of which are rare-earth elements. That’s only a small fraction of the disk’s overall mass but, as you’ll recall me pointing out, a staggering 259 million HDDs were shipped in 2021, so the market is huge.

HyProMag’s production method involves a robot with magnetic-field sensors first identifying the location of the HDD’s motor, which contains the all-important rare-earth permanent magnet. The section with the motor is then chopped off, with the rest of the disk sent for conventional recycling. The motor section is finally exposed to hydrogen at atmospheric pressure and room temperature via the HPMS technique.

Amazingly, the rare-earth magnets – typically alloys of neodymium, iron and boron (NdFeB) – just break apart to form a powder. I’ve seen videos of the process and it’s like watching something turn to rust. Crucially, the powder becomes demagnetized so any coatings on the magnet peel away from the surface of the magnets and can be easily separated.

The extracted NdFeB powder is then sieved to remove impurities before being re-processed into new magnetic materials or rare-earth alloys. HyProMag reckons that the process requires 88% less energy than that needed to make rare-earth magnets from primary sources, which is impressive. It has already produced more than 3000 new rare-earth magnets at its pilot plant for project partners and potential customers, with the magnets tested in a wide range of applications in the automotive, aerospace and electronics sectors.

Production promises

But the company wants to get past the trial phase and become a volume supplier of magnets. That’s why the Tyseley scale-up plant is so important. The company reckons it will initially be able to process up to 20 tonnes of rare-earth magnets and alloys a year – and eventually five times that amount. HyProMag is also planning further facilities in Germany and the US.

The technology is promising because so many products contain rare-earth magnets, but when they’re scrapped the magnets get shredded and break apart. The resulting powder remains magnetic, sticking to the ferrous scrap and plant components, but less than 1% of the magnets get recycled. HyProMag can, however, efficiently remove this material before it’s shredded and is already eyeing up a diverse range of economically viable sources of scrap.

“It is difficult to see large-scale recycling of rare-earth magnets taking off without an efficient separation process such as HPMS,” Walton says. “The current pilot line allows us to process up to two tonnes of scrap applications in a single run, with the commercial plant scaled to allow much larger batch sizes.” Loading to powder removal can be done, the company claims, in as little as four hours.

As the demand for rare earths increases and the amount of second-hand magnetic material available also rises, recycling such magnets is becoming an ever-bigger opportunity and an ever-more viable process. Just look at the growth of the EV sector: a typical electric motor has 2–5 kg of magnetic material and worldwide sales of EVs are expected to rise to 65 million per year by 2030, according to market-research firm IHS Markit.

Another huge source of rare earths are wind turbines, many of which are reaching the end of their lives after decades of use. Their generators contain up to 650 kg of rare earths per megawatt of generator capacity. Given that the UK aims to have up to 75 GW of off-shore wind capacity by 2050, it will have nearly 50,000 tonnes of rare-earth magnets in the years to come, according to Martyn Cherrington from Innovate UK, who runs its Circular Critical Materials Supply Chain (CLIMATES) programme.
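
The headline tonnages in the last few paragraphs follow from straightforward multiplication of the figures quoted in the article; a quick sanity check (all inputs are the article’s own numbers, not independent estimates):

hdd_shipped = 259e6                 # hard disk drives shipped in 2021
magnet_per_hdd_kg = 0.016           # up to 16 g of magnetic material per drive
print(hdd_shipped * magnet_per_hdd_kg / 1000)        # ~4,100 tonnes of magnet material

ev_sales_2030 = 65e6                # expected annual EV sales by 2030
magnet_per_ev_kg = (2, 5)           # 2-5 kg of magnetic material per motor
print([ev_sales_2030 * m / 1000 for m in magnet_per_ev_kg])   # ~130,000-325,000 tonnes/year

uk_offshore_wind_mw = 75_000        # up to 75 GW of UK offshore wind by 2050
rare_earth_per_mw_kg = 650          # up to 650 kg of rare earths per MW
print(uk_offshore_wind_mw * rare_earth_per_mw_kg / 1000)      # 48,750 tonnes -- "nearly 50,000"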

Such long-term opportunities often need government support – and the recycling of rare-earth permanent magnets has been no exception. Indeed, the fundamental research behind HyProMag’s work began many years before it was spun off. The company has also benefited from financial support from a range of sources, including UK Research and Innovation’s Driving the Electric Revolution programme, the European Union and private investors.

In 2023 HyProMag Ltd was bought by the Canadian firm Maginito, which is part of Mkango Resources – a mineral-exploration and development company listed on the UK and Canadian stock exchanges. Mkango clearly saw the potential of HyProMag’s recycling and magnet-manufacturing technology. It’s a great UK success story, which could have huge long-term global potential for the circular economy.

The post How a technique for recycling rare-earth permanent magnets could transform the green economy appeared first on Physics World.


Photonic metastructure does vector–matrix multiplication

By: No Author
11 March 2024 at 15:03

A new silicon photonics platform that can do mathematical operations far more efficiently than previous designs has been unveiled by Nader Engheta and colleagues at the University of Pennsylvania. The US-based team hopes that its system will accelerate progress in optical computing.

Analogue optical computers can do certain calculations more efficiently than conventional digital computers. They work by encoding information into light signals and then sending the signals through optical components that process the information. Applications include optical imaging, signal processing and equation solving.

Some of these components can be made from photonic metamaterials, which contain arrays of structures with sizes that are on par, or smaller, than the wavelength of light. By carefully controlling the size and distribution of these structures, various information-processing components can be made.

Unlike the bulky lenses and filters that were used to create the first analogue optical computers, devices based on photonic metamaterials are smaller and easier to integrate into compact circuits.

Mathematical operations

Over the past decade, Engheta’s team have made several important contributions to the development of such components. Starting in 2014, they showed that photonic metamaterials can be used to perform mathematical operations on light signals.

They have since expanded on this research. “In 2019, we introduced the idea of metamaterials that can solve equations,” Engheta says. “Then in 2021, we extended this idea to structures that can solve more than one equation at the same time.” In 2023, the team developed a new approach for fabricating ultrathin optical metagratings.

Engheta and colleagues have now set their sights on vector–matrix multiplication, which is a vital operation for the artificial neural networks used in some artificial intelligence systems. The team has created the first photonic nanostructure capable of doing vector–matrix multiplication. The material was made using a silicon photonics (SiPh) platform that integrates optical components onto a silicon substrate.
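
In its ordinary digital form, the operation in question is simply y = Wx, the workhorse of every neural-network layer. The sketch below shows that digital version for comparison (the sizes are arbitrary); in the photonic platform the input vector is carried by light fields while the matrix is effectively encoded in the designed silicon structure, so the product emerges as the light propagates rather than in a processor.

import numpy as np

# Vector-matrix multiplication, the operation the metastructure performs optically,
# shown in its familiar digital form.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 16))     # layer weights (the "matrix")
x = rng.normal(size=16)          # input activations (the "vector")
y = W @ x                        # vector-matrix multiplication: 8 outputs
print(y.shape)                   # (8,)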

Inverse design

The researchers also used an inverse design procedure. Instead of taking a known nanostructure and determining if it has the correct optical properties, inverse design begins with a set of desired optical properties. Then, a photonic structure is reverse-engineered to have those properties. Using this approach, the team designed a highly compact material that is suited to doing vector–matrix multiplications with light.

“By combining the inverse design method with the SiPh platform, we could design structures with sizes on the order of 10-30 micron, with a silicon thickness ranging between 150–220 nm,” Engheta explains.

The team says that its new photonic platform can do vector–matrix multiplication far more efficiently than existing technologies. Engheta also points out that the platform is more secure than existing systems. “Since this vector-matrix multiplication computation is done optically and simultaneously, one does not need to store the intermediate-stage information. Therefore, the results and processes are less vulnerable to hacking.”

The team anticipates that their approach will have important implications for how artificial intelligence is implemented.

The research is described in Nature Photonics.

The post Photonic metastructure does vector–matrix multiplication appeared first on Physics World.


Modelling lung cells could help personalize radiotherapy

By: No Author
12 March 2024 at 10:30

A new type of computer model that can reveal radiation damage at the cellular level could improve radiotherapy outcomes for lung cancer patients.

Roman Bauer, a computational neuroscientist at the University of Surrey in the UK, in collaboration with Marco Durante and Nicolò Cogno from GSI Helmholtzzentrum für Schwerionenforschung in Germany, created the model, which simulates how radiation interacts with the lungs on a cell-by-cell basis.

Over half of all patients with lung cancer are treated using radiotherapy. Although this approach is effective, it leaves up to 30% of recipients with radiation-induced injuries. These can trigger serious conditions that affect breathing, such as fibrosis – in which the lining of the alveoli (air sacs) in the lungs is thickened and stiffened – and pneumonitis – when the walls of the alveoli become inflamed.

In order to limit radiation damage to healthy tissue while still killing cancer cells, radiotherapy is delivered in several separate “fractions”. This allows a higher – and therefore more effective – dose to be administered overall because some of the damaged healthy cells can repair themselves in between each fraction.

Currently, radiotherapy fractionation schemes are chosen based on past experience and generalized statistical models, so they are not optimized for individual patients. In contrast, the new model could enable personalized medicine: as Durante, director of the Biophysics Department at GSI, explains, it looks at “toxicity in tissues starting from the basic cellular reactions and [is] therefore able to predict what happens to any patient” when different fractionation schemes are chosen.

The team developed an “agent-based” model (ABM) consisting of separate interacting units or agents – which in this case mimic lung cells – coupled with a Monte Carlo simulator. The ABM, described in Communications Medicine, builds a representation of an alveolar segment consisting of 18 alveoli, each 260 µm in diameter. Next, Monte Carlo simulations of the irradiation of these alveoli are carried out at the microscopic and nanoscopic scales, and information about the radiation dose delivered to each cell and its distribution is fed back into the ABM.

The ABM uses this information to work out whether each cell would live or die, and outputs the final results in the form of a 3D picture. Crucially, the coupled model can simulate the passage of time and thus show the severity of radiation damage – and the progression of the medical conditions it may cause – hours, days, months or even years after treatment.
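
As a flavour of what an agent-based treatment simulation looks like, here is a deliberately stripped-down sketch in which each cell survives a 2 Gy fraction with a probability given by the standard linear-quadratic model of radiobiology. The α/β values, the cell count and the absence of repair or repopulation are illustrative assumptions; they are not the parameters or the mechanics of the published model.

import numpy as np

# Each "agent" is a cell; per fraction it survives with probability
# S = exp(-(alpha*D + beta*D^2)), the linear-quadratic model.
rng = np.random.default_rng(42)
alpha, beta = 0.15, 0.05           # Gy^-1, Gy^-2 -- hypothetical tissue parameters
dose = 2.0                         # Gy per fraction, a conventional fractionation
n_cells = 10_000                   # cells in one simulated alveolar segment

alive = np.ones(n_cells, dtype=bool)
survival_per_fraction = np.exp(-(alpha * dose + beta * dose**2))
for fraction in range(30):         # a 30-fraction course
    alive &= rng.random(n_cells) < survival_per_fraction
    # (the real model also tracks what happens between fractions -- repair,
    #  inflammation and tissue-level change -- which drives its predictions of
    #  fibrosis and pneumonitis over months or years)

print(alive.sum(), "of", n_cells, "cells survive the full course")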

“What I found very exciting is that these computational simulations actually delivered results that matched with various experimental observations from different groups, labs and hospitals. So our computational approach could in principle be used within a clinical setting,” says Bauer, the spokesperson for the international BioDynaMo collaboration, which aims to bring new computational methods into healthcare via the software suite used to build this model.

Bauer began working on computational cancer models after a close friend died from the disease aged just 34. “Every cancer is different and every person is different, with different shaped organs, genetic predispositions and lifestyles,” he explains. His hope is that information from scans, biopsies and other tests could be fed into the new model to provide a picture of each individual. An AI-assisted therapy protocol could then be created that would output a closely tailored treatment plan that improves the patient’s chances of survival.

Bauer is currently seeking collaborators from other disciplines, including physics, to help move towards a clinical trial following lung cancer patients over several years. Meanwhile, the team intends to expand the model’s use into other areas of medicine.

Durante, for instance, is hoping to study viral infection with this lung model as it “may predict the pneumonitis induced by the COVID-19 infection”. Meanwhile, Bauer has begun simulating the development of circuits in the brains of premature babies, with the goal of better understanding “at what time point to intervene and how”.

The post Modelling lung cells could help personalize radiotherapy appeared first on Physics World.
