Physics World — 24 June 2024

Why optics is critical to meeting society’s grand challenges

24 June 2024 at 12:00

Over the last century, optics and photonics have transformed the world. A staggering 97% of all intercontinental communications traffic travels down optical fibres, enabling around $10 trillion of business transactions daily across the globe. Young people especially are at the heart of some of the most dramatic changes in optical technologies the world has ever witnessed.

Whether it’s a growing demand for higher data rates, larger cloud storage and cleaner energy supplies – or simply questions around content and self-censorship – communications networks, based on optics and photonics, are a crucial aspect of modern life. Even our knowledge of the impact of climate change comes mostly from complex optical instruments that are carried by satellites including spectrometers, narrow linewidth lasers and sophisticated detectors. They provide information that can be used to model key aspects of the Earth’s atmosphere, landforms and oceans.

Optics and photonics can also help us to monitor the behaviour of earthquakes and volcanoes – both terrestrial and underwater – and the risk and impact of tsunamis on coastal populations. The latter requires effective modelling together with satellite and ground-based observations.

Recent developments in optical quantum technologies are also beginning to bear fruit in areas such as high-resolution gravimetry, which detects tiny changes in subsurface mass distributions by measuring spatial variations in gravity. This makes it possible to track the movement of magma and predict volcanic activity.
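To get a feel for the scale of the signal, the anomaly from a subsurface mass change can be modelled crudely as a point source, Δg = GΔm/r². The sketch below is a back-of-the-envelope illustration only; the numbers are hypothetical, not drawn from any real survey or instrument.

```python
# Toy point-mass model of a gravimetric anomaly: delta_g = G * delta_m / r^2.
# All numbers are purely illustrative.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_anomaly(delta_mass_kg, depth_m):
    """Vertical gravity change (m/s^2) directly above a point-mass anomaly."""
    return G * delta_mass_kg / depth_m**2

# e.g. 1e10 kg of magma arriving 2 km below a gravimeter station
dg = gravity_anomaly(1e10, 2000.0)
print(f"{dg:.2e} m/s^2 (~{dg * 1e8:.0f} microgal)")
```

Signals of tens of microgals are what quantum gravimeters aim to resolve, which is why such small mass movements become detectable at all.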

The challenge ahead

The UK-based Photonics Leadership Group (PLG) estimates that by 2035 more than 60% of the UK economy will directly depend on photonics to keep it competitive, making photonics one of the top three UK economic sectors. The PLG projects that the UK photonics industry will grow from £14.5bn today to £50bn over that period. The next 25 years are likely to see further significant advances in photonic integrated circuits, far-infrared detectors, free-space optical communication and quantum optical technologies.

There are likely to be breakthroughs in bandgap engineering in compound-semiconductor alloy technologies that will let us easily make and operate room-temperature very-long-wavelength infrared detectors and imaging devices. This could boost diagnostic medical imaging for management of pain, cancer detection and neurodiagnostics.

The joint effort between photonics and compound-semiconductor materials science will become a significant capability in a sustainable 21st century and beyond. Defence and security are also likely to benefit from long-range spectroscopic identification of trace molecules. Optics and photonics will dominate space, with quantum technologies coming into service for communications and environmental monitoring, even if the proliferation of low-Earth-orbit space objects is likely to cause congestion and hamper direct line-of-sight communications and monitoring.

Such developments, however, do not come without challenges, especially in adapting to the pace of change. Optics has a long history in the UK, and the challenge posed by the evolving nature of the subject is similar to that faced over a century ago by the Optical Society, the Physical Society and the Institute of Physics (see box below).

Education will be key, as will making undergraduate courses attractive and striking a good balance between optics, photonics and fundamental physics in the curriculum. Ensuring that students gain experience in photonics engineering labs that reflect practical on-the-job tasks will be crucial, as will close partnerships with the photonics industry and professional societies to align course content with the industry's needs.

Postgraduate photonics research in the UK remains strong, but we cannot rest on our laurels and it must be improved further, if not expanded.

Another challenge will be tackling the gap in optics and photonics between low-income and high-income nations. Bridging it will require broader access to optics and photonics education, research collaborations and mentoring, as well as equipping developing nations with the expertise to tackle global issues such as desertification, climate change and the supply of potable water.

Desertification exacerbates economic, environmental and social issues and is entwined with poverty. According to the United Nations Convention to Combat Desertification, 3.2 billion people worldwide are negatively affected by spreading deserts. The International Commission for Optics is working with the International Science Council to tackle this by offering educational development, improving access to optical technologies and international collaborations with an emphasis on low-income countries.

If I had a crystal ball, I would say that over the next 25 years global economies will depend even more on optics and photonics for their survival, underpinning tighter social, economic and physical networks driven by artificial intelligence and quantum-communication technologies. Optical societies as professional bodies must play a leading role in addressing and communicating these issues head on. After all, only they can pull together like-minded professionals and speak with one voice to the needs and challenges of society.

Why the Optical Group of the Institute of Physics is the UK’s optical society

The Optical Group of the Institute of Physics, which is celebrating its 125th anniversary this year, can trace its roots back to 1899 when the Optical Society of London was formed by a group of enthusiastic optical physicists, led by Charles Parsons and Frank Twyman. Until 1931 it published a journal – Transactions of the Optical Society – which attracted several high-profile physicists including George Paget Thomson and Chandrasekhara Raman.

Many activities of the Optical Society overlapped with those of the Physical Society of London and they held several joint annual exhibitions at Imperial College London. When the two organizations formally merged in 1932, the Optical Group of the Physical Society became the de facto national optical society of the UK and Ireland.

In 1947 the Physical Society – via the Optical Group – became a signatory to the formation of the International Commission for Optics, which is now made up of more than 60 countries and provides policy recommendations and co-ordinates international activities in optics. The Optical Group is also a member of the European Optical Society.

In 1960 the Physical Society merged with the Institute of Physics (IOP), and today, the Optical Group of the IOP, of which I am currently chair, has a membership above 2100. The group represents UK and Irish optics, organizes conferences, funds public engagement projects and supports early-career researchers.

The post Why optics is critical to meeting society’s grand challenges appeared first on Physics World.


Liquid crystals generate entangled photon pairs

24 June 2024 at 10:00
Highly adaptable entanglement: The new technique makes it possible to alter both the flux and the polarization state of the photon pairs simply by changing the orientation of the molecules in the liquid crystal. This can be done either by engineering the sample geometry or applying an electric field. (Courtesy: Adapted from Sultanov, V., Kavčič, A., Kokkinakis, E. et al. Tunable entangled photon-pair generation in a liquid crystal. Nature (2024). https://doi.org/10.1038/s41586-024-07543-5)

Researchers in Germany and Slovenia have found a new, more adaptable way of generating entangled photons for quantum physics applications. The technique, which relies on liquid crystals rather than solid ones, is much more tunable and reconfigurable than today’s methods, and could prove useful in applications such as quantum sensing.

The usual way of generating entangled photon pairs is in a crystal, such as lithium niobate, that exhibits a nonlinear polarization response to an applied electric field. When a laser beam enters such a crystal, most of the photons pass straight through. A small fraction, however, are converted into pairs of entangled photons via a process known as spontaneous parametric down-conversion (SPDC). Because energy and momentum are conserved, the combined energy and momenta of each entangled pair must equal those of the pump photon that created it.
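Expressed in wavelengths, the energy-conservation constraint reads 1/λ_pump = 1/λ_signal + 1/λ_idler. A minimal Python sketch of this bookkeeping (illustrative, not from the paper, though the 685 nm pump wavelength is the one used in the experiment):

```python
# Energy conservation in SPDC: one pump photon splits into a signal and an
# idler photon, so 1/lambda_pump = 1/lambda_signal + 1/lambda_idler.

def idler_wavelength(pump_nm, signal_nm):
    """Idler wavelength (nm) fixed by energy conservation."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Degenerate down-conversion: both photons emerge at twice the pump wavelength.
print(round(idler_wavelength(685.0, 1370.0), 6))  # -> 1370.0
```

Fixing the signal wavelength anywhere else in the band pins down the idler, which is why the two frequencies are anti-correlated even when the overall emission is broadband.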

This method is both cumbersome and inflexible, explains team leader Maria Chekhova. “First they grow a crystal, then they cut it in a certain way, and after it’s cut it can only be used in one way,” says Chekhova, an optical physicist at the Friedrich-Alexander Universität Erlangen-Nürnberg and the Max-Planck Institute for the Science of Light, both in Germany. “You cannot generate pairs at one wavelength with one sort of entanglement and then use it in a different way to generate pairs at a different wavelength with a different polarization entanglement. It’s just one rigid source.”

In the new work, Chekhova, Matjaž Humar of the Jožef Stefan Institute in Slovenia and colleagues developed an SPDC technique that instead uses liquid crystals. These self-assembling, elongated molecules are easy to reconfigure with electric fields (as evidenced by their widespread use in optical displays) and some types exhibit highly nonlinear optical effects. For this reason, Noel Clark of the University of Colorado at Boulder, US, observes that “liquid crystals have been in the nonlinear optics business for quite a long time, mostly doing things like second harmonic generation and four-wave mixing”.

Generating and modifying entanglement

Nobody, however, had used them to generate entanglement before. For this, Chekhova, Humar and colleagues turned to the recently developed ferroelectric nematic type of liquid crystals. After preparing multiple 7–8 μm-thick layers of these crystals, they placed them between two electrodes with a predefined twist of 0°, 90° or 180° between the molecules at either end.

When they irradiated these layers with laser light at 685 nm, the photons underwent SPDC with an efficiency almost as high as that of the most commonly used solid crystals of the same thickness. What is more, although individual photons in a pair are always entangled in the time/frequency domain – meaning that their frequencies must be anti-correlated to ensure conservation of energy – the technique produces photons with a broad range of frequencies overall. The team believes this widens its applications: “There are ways to concentrate the emission around a narrow bandwidth,” Chekhova says. “It’s more difficult to create a broadband source.”

The researchers also demonstrated that they could modify the nature of the entanglement between the photons. Although the photons’ polarizations are not normally entangled, applying a voltage across the liquid crystal is enough to make them so. By varying the voltage on the electrodes and the twist on the molecules’ orientations, the researchers could even control the extent of this entanglement — something they confirmed by measuring the degree of entanglement at one voltage and twist setting and noting that it was in line with theoretical predictions.

Potential extensions

The researchers are now exploring several extensions to the work. According to their calculations, it should be possible to use liquid crystals to produce non-classical “squeezed” states of light, in which the uncertainty in one variable drops below the standard quantum limit at the expense of the other.  “We just need higher efficiency,” Chekhova says.

Another possibility would be to manipulate the chirality within the crystal layers with an applied voltage. The team also seeks to develop practical devices: “Pixellated devices could produce photon pairs in which each part of the beam had its own polarization,” Chekhova says. “You could then produce structured light and encode quantum information into the structure of the beam.” This could be useful in sensing, she adds.

“This liquid crystal device has a lot of potential flexibility that would never have been available in crystalline materials,” says Clark, who was not involved in the research. “If you want to change something in a [solid] crystal, then you tweak something, you have to re-grow the crystal and evaluate what you have. But in this liquid crystal, you can mix things in, you can put electric field on and change the orientation.”

The research is published in Nature.



From the lab to Ukraine’s front line

21 June 2024 at 12:00

When Ukraine was invaded by Russia in February 2022, life for the country’s citizens was turned upside down, with scientists no exception. Efforts to help have come from many quarters both within Ukraine and the wider international community. Michael Banks caught up with Holly Tann, who has a PhD in nuclear physics from the University of Liverpool and the University of Jyväskylä, Finland, and Adam McQuire, who is completing a PhD in archaeology, focusing on contemporary conflict analysis, also at Liverpool.

Why did you set up Casus Pax?

Casus Pax, which is a registered and regulated non-profit organization, was formed in the first week of the Russian invasion in February 2022. Adam went to the Polish–Ukrainian border a few days later, initially on a fact-finding mission to find ways to help civilians escaping the war. He soon began assisting with the construction of an impromptu field hospital, inside a truck stop, to help refugees approaching the border. This field hospital treated more than 300,000 patients in the first month of the war and required a constant supply of equipment and consumables.

Did you have any links with Ukraine?

We have always had a connection with Eastern Europe. Holly has her family roots in western Ukraine while Adam has family who live close to the Ukrainian border in Poland. When the Russian invasion began, it didn’t feel like a distant conflict in some faraway land because Holly was working in Finland at the time.

What kind of support did you have to set up Casus Pax?

At the beginning it was simply the two of us and our van, but friends and family helped us to raise funds to buy aid and transport it over. Slowly we were able to gather a group of motivated, highly skilled volunteers who have been instrumental in developing Casus Pax into what it is today.

Are you now full time?

We work very long hours running Casus Pax, alongside which Adam is still finishing his PhD. Holly left academic research last year after finishing her doctorate.

How many people are involved with the organization?

Casus Pax is run by us on a day-to-day basis. We also have volunteers doing outreach, procurement and fundraising and who travel to Ukraine with us. Daniel Bromley, for example, is a physics postdoc at Imperial College London who volunteers when he can. Jessica Wade – another Imperial physicist – recently became a patron. In Ukraine we have Yuriy Polyezhayev of University of Zaporizhzhia Polytechnic as an education co-ordinator.

Your focus initially was on medical supplies – what has that involved?

Since our operations began we have delivered over a million pieces of lifesaving equipment directly to civilian practitioners across every frontline region. We tend to operate between 1 and 50 km from the front lines. These locations feature the most unstable conditions and therefore need medical resources designed to deal with severe wounds and catastrophic injuries. This equipment, sadly, is often used soon after delivery and it is unlikely that we will run out of places to deliver for quite some time.

Casus Pax in Kherson
The Casus Pax team delivering medical equipment to Kherson in October 2023 (courtesy: Adam McQuire)

Where do you get the supplies?

We buy the majority of the supplies we provide and most is sourced as surplus from the UK National Health Service, UK Ministry of Defence and private hospitals. The rest we receive as donations from the private sector.

Are you solely focused on consumables?

No. We have also provided a fleet of eight ambulances and rescue vehicles to the Ukrainian emergency services. Again, these vehicles are unlikely to survive long enough to be discarded due to age-related wear and will invariably need to be replaced fairly quickly.

What were some of the challenges in the early months of the war?

The first challenge was the language barrier. We are by no means fluent, but knowing some of the language gives you a better understanding of Ukrainian administration and bureaucracy. The second was developing a reputation for efficiency and honesty, which took some time. The longest-running challenge, however, is the constant process of staying safe: learning how to use different intelligence options and security protocols, and ensuring that we are suitably trained and equipped for emergencies.

How often do you return to Ukraine?

We go back every 6–8 weeks. Sometimes we meet people who are truly desperate with nothing left – a family killed and a house destroyed, clinging to the land they grew up on accompanied only by a pet or two. Ukrainians are tough, resilient and stoic but they are not bulletproof nor are they free from the effects of prolonged psychological duress. On one visit to the front-line town of Beryslav in Kherson Oblast in 2023, which was left without reliable clean water after the Kakhovka Dam had failed, we brought medical supplies, water and water-filtration equipment and a couple of tonnes of food. Despite the situation, we were greeted warmly and given boiled potatoes and kompot. One elderly woman said to Holly in Ukrainian: “I don’t know who you are and I don’t care that you can’t understand me, I love you.”

You were also asked by the Ukrainian police to help preparations for a nuclear event; what did that entail?

In July 2023 we met leaders of the Zaporizhzhia Regional Administration to discuss the risk posed by the ongoing Russian occupation of the Zaporizhzhia Nuclear Power Plant (ZNPP). The local authorities were deeply troubled and were struggling to generate international co-operation to prepare for the worst. We then received a letter from the Office of the President, recognizing the risk of an accident at the ZNPP and endorsing Casus Pax as an organization positioned to assist.

From the president’s office itself?

Yes. Two weeks later we received a call from the National Police of Ukraine HQ in Kyiv while we were in Scotland picking up an aid vehicle to take to Ukraine. They requested that we help them prepare for a major chemical, biological, radiological and nuclear incident. Three weeks later we made our first delivery of hundreds of thousands of pieces of equipment to the national police based in Zaporizhzhia.

Did your technical backgrounds help with this request?

With the rise in concern over the occupation of the nuclear power plant, it became clear that we were in a niche position to relay information from technical sources to humanitarian organizations. Through Zaporizhzhia we also developed a number of relationships with academic and educational leaders in the city and were asked to work with universities and schools during our deployments.

Can you say more about these educational initiatives?

It was a natural progression for us to move towards supporting schools and universities, after using our academic networks to good effect in Zaporizhzhia. We met Yuriy, now our education co-ordinator, through his translation work with the National Police. Yuriy works in several institutions, having taken on extra teaching roles to compensate for colleagues who have left or been called up to serve. We discussed the challenges faced by the universities and schools in the frontline regions and we visited some institutions to see how learning was continuing.

Casus Pax team
The Casus Pax team at the Zaporizhzhia Polytechnic National University in April 2024 (Courtesy: Zaporizhzhia Polytechnic National University)

What were some of the challenges?

Many school classes had moved into universities without an opportunity to bring their classroom equipment, while older school children and university students were using apparatus in buildings regularly hit by airstrikes. After discussions with officials at the University of Zaporizhzhia Polytechnic, we developed a long-term plan for how Casus Pax might assist. They were keen on developing connections with researchers and institutions to facilitate outreach and cultural exchange. International collaboration collapsed after the invasion, and without it academic aspiration has weakened.

What did this involve?

Our first delivery of aid to the university included medical equipment to bolster the university’s bomb shelter reserves and response capacity. But we also supplied 20 Micro:bit kits, which were donated to us by BBC Micro:bit. The University of Zaporizhzhia Polytechnic had previously been using Micro:bits in limited numbers to develop robotics and computing skills. But the university did not have enough to efficiently roll out courses. Now they do and we want to connect schools in the UK with those in Ukraine so that they can run Micro:bit classes in tandem. This has educational benefits but is also morale boosting.

How challenging is it for students?

It’s very hard for a young person to try and compartmentalize war and focus on studies and career ambitions. But the students are remarkable. Thousands of schools have suffered significant damage during the war and have had to close. Tens of thousands of internally displaced students also attend schools in safer areas, often under the auspices of universities. This is not unusual; it is the norm. Education is continuing because of the commitment that a reduced number of staff have to the future of their students.

What future plans do you have for education?

In the coming months we will be hosting a platform where researchers can guest lecture remotely about their work or interests at the University of Zaporizhzhia Polytechnic. We want to expand this model to cover the whole of Ukraine, and in time take it international. We hope to have thousands of online resources available by 2025 that can be used anywhere.


What else do you hope for Casus Pax in the future?

We would like to keep our team relatively small, which means it is easier to ensure that every penny is accounted for. At the same time, we need to expand the reach of our operations. Every day we have desperate requests for urgent and often complex medical support – not only from Ukraine but elsewhere too – and sadly the only limiting factor is funding.

What kind of support do you need to achieve this?

Above all else we need donations and sponsorship. The equipment we supply saves lives and we need to continue with this work alongside the new education initiatives. Without financial support the whole operation will stop. We are also asking anyone who is interested to come forward and help, especially those keen to do outreach work.



Hawaiian volcano erupted ‘like a stomp rocket’

20 June 2024 at 17:00

A series of eruptions at the Hawaiian volcano Kilauea in 2018 may have been driven by a hitherto undescribed mechanism that resembles the “stomp-rocket” toys popular in science demonstrations. While these eruptions are the first in which scientists have identified such a mechanism, researchers at the University of Oregon, US, and the US Geological Survey say it may also occur in other so-called caldera collapse eruptions.

Volcanic eruptions usually fall into one of two main categories. The first is magmatic eruptions, which (as their name implies) are driven by rising magma. The second is phreatic eruptions, which are prompted by groundwater flash-evaporating into steam. But the sequence of 12 closely timed Kilauea eruptions didn’t match either of these categories. According to geophysicist Joshua Crozier, who led a recent study of the eruptions, they instead appear to have been triggered by a collapse of Kilauea’s subsurface magma reservoir, which contained a pocket of gas and debris as well as molten rock.

When this kilometre-thick chunk of rock dropped, Crozier explains, the pressure of the gas in the pocket suddenly increased. And just as stamping on the gas-filled cavity of a stomp rocket causes the little plastic rocket to shoot upwards, the increase in gas pressure within Kilauea blasted plumes of rock fragments and hot gas as high as eight kilometres into the air, leaving behind a collapsed region of crustal rock known as a caldera.
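The pressurization step can be caricatured with textbook gas physics: squeezing a gas pocket quickly (adiabatically) raises its pressure as P₂ = P₁(V₁/V₂)^γ. This is a toy model, not the team's simulation, and the value of γ used below is illustrative.

```python
# Toy "stomp rocket" pressurization: the falling rock column shrinks the gas
# pocket, raising its pressure. For a fast (adiabatic) squeeze,
# P2 = P1 * (V1 / V2) ** gamma. The value of gamma is illustrative only.

def compressed_pressure(p1, v1, v2, gamma=1.3):
    """Gas pressure after adiabatic compression from volume v1 to v2."""
    return p1 * (v1 / v2) ** gamma

# Halving the pocket volume roughly multiplies the pressure by two and a half:
print(round(compressed_pressure(1.0, 2.0, 1.0), 2))  # -> 2.46
```

Even modest volume changes from a kilometre-thick falling block therefore translate into a substantial pressure kick on the gas and debris above.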

A common occurrence?

Caldera collapses are fairly common, with multiple occurrences around the world in the past few decades, Crozier says. This means the stomp-rocket mechanism might be behind other volcanic eruptions, too. Indeed, previous studies had hinted at this possibility. “Several key factors led us to speculate along the line of the stomp-rocket, one being that the material erupted during the Kilauea events was largely lithic clasts [broken bits of crustal rock or cooled lava] rather than ‘fresh’ molten magma as occurs in typical magmatic eruptions,” Crozier tells Physics World.

This lack of fresh magma might imply phreatic activity, as was invoked for previous explosive eruptions at Kilauea in 1924. However, in 2018, USGS scientists Paul Hsieh and Steve Ingebritsen used groundwater simulations to show that the rocks around Kilauea’s summit vent should have been too hot for liquid groundwater to flow in at the time the explosions occurred. Seismic, geodetic, and infrasound data also all suggested that the summit region was experiencing early stages of caldera collapse during this time.

First test of the stomp-rocket idea

The new work is based on three-dimensional simulations of how plumes containing different types of matter rise through a conduit and enter the atmosphere. Crozier and colleagues compared these simulations with seismic and infrasound data from previously-published papers, and with plume heights measured by radar. They then connected the plume simulations with seismic inversions they conducted themselves.

The resulting model shows Kilauea’s magma reservoir overlain by a pocket of accumulated high-temperature magmatic gas and lithic clasts. When the reservoir collapsed, the gas and the lithic clasts were driven up through a conduit around 600 m long to erupt particles at a rate of roughly 3000 m³/s.

As well as outlining a new mechanism that could contribute to hazards during caldera collapse eruptions, Crozier and colleagues used subsurface and atmospheric data to constrain Kilauea’s eruption mechanics in more detail than is typically possible. They were able to do this, Crozier says, because Kilauea is unusually well-monitored, being covered with instruments such as ground sensors to detect seismic activity and spectrometers to analyze the gases released.

“Our work provides a valuable opportunity to validate next-generation transient eruptive plume simulations, which could ultimately help improve both ash hazard forecasts and interpretations of the existing geologic eruption record,” says Crozier, who is now a postdoctoral researcher at Stanford University in the US. “For example, I am currently looking into the fault mechanics involved in the sequence of caldera collapse earthquakes that produced these explosions. In most tectonic settings we haven’t been able to observe complete earthquake cycles since they occur over long timescales, so caldera collapses provide valuable opportunities to understand fault mechanics.”

The study is detailed in Nature Geoscience.



Linking silicon T centres with light offers a route to fault-tolerant quantum computing

20 June 2024 at 15:55

Today’s noisy quantum processors are prone to errors that can quickly knock a quantum calculation off course. As a result, quantum error correction schemes are used to make some nascent quantum computers more tolerant to such faults.

This involves using a large number of qubits – called “physical” qubits – to create one fault-tolerant “logical” qubit. A useful fault-tolerant quantum computer would have thousands of logical qubits and this would require the integration of millions of physical qubits, which remains a formidable challenge.
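The redundancy idea behind logical qubits has a classical ancestor that is easy to demonstrate: the three-bit repetition code, which corrects any single bit flip by majority vote. The sketch below is that classical analogue only; genuine quantum codes (such as the surface code) are far more involved, but the principle of spending redundancy to buy reliability is the same.

```python
# Classical analogue of the physical-vs-logical qubit idea: a three-bit
# repetition code corrects any single bit flip by majority vote.

def encode(bit):
    """One logical bit -> three physical copies."""
    return [bit, bit, bit]

def decode(physical):
    """Majority vote recovers the logical bit despite a single error."""
    return 1 if sum(physical) >= 2 else 0

word = encode(1)
word[0] ^= 1             # flip one physical bit (a single "error")
print(decode(word))      # -> 1: the logical bit survives
```

Scaling this idea to quantum hardware, where errors are continuous and measurement disturbs the state, is what drives the large physical-to-logical qubit overhead mentioned above.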

In this episode of the Physics World Weekly podcast, I am in conversation with Stephanie Simmons, who is founder and chief quantum officer at Photonic Inc. The Vancouver-based company is developing optically-linked silicon spin qubits – and it has recently announced that it has distributed quantum entanglement between two of its modules.

I spoke with Simmons earlier this month in London at Commercialising Quantum Global 2024, which was organized by Economist Impact. She explains how the company’s qubits – based on T-centre spins in silicon – are connected using telecoms-band photons. Simmons makes the case that the technology can be integrated and scaled to create fault-tolerant computers. We also chat about the company’s manufacturing programme and career opportunities for physicists at the firm.



Speed of sound in quark–gluon plasma is measured at CERN

20 June 2024 at 13:00

The speed of sound in a quark–gluon plasma has been measured by observing high-energy collisions between lead nuclei at CERN’s Large Hadron Collider. The work, by the CMS Collaboration, provides a highly precise test of lattice quantum chromodynamics (QCD), and could potentially inform neutron star physics.

The strong interaction – which binds quarks together inside hadrons – is the strongest force in the universe. Unlike the other forces, which become weaker as particles become further apart, its strength grows with increasing separation. What is more, when quarks gain enough energy to move apart, the space between them is filled with quark–antiquark pairs, making the physics ever-more complex as energies rise.

In the interior of a proton or neutron, the quarks and gluons (the particles that mediate the strong interaction) are very close together and effectively neutralize one another’s colour charge, leaving just a small perturbation that accounts for the residual strong interaction between protons and neutrons. At very high energies, however, the particles become deconfined, forming a hot, dense and yet almost viscosity-free fluid of quarks and gluons, all strongly interacting with one another. Calculations of this quark–gluon plasma cannot be carried out perturbatively, so other techniques are needed. The standard approach is lattice QCD.
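Why perturbation theory fails at long distances can be seen from the one-loop running of the strong coupling, which grows as the momentum scale Q falls towards Λ_QCD. The sketch below uses the standard one-loop formula with illustrative parameter values (n_f, Λ), not a precision determination.

```python
import math

# One-loop running of the strong coupling alpha_s(Q): the coupling grows as
# the momentum scale falls towards Lambda_QCD, which is why long-distance
# QCD is non-perturbative. n_f and Lambda values are illustrative.

def alpha_s(Q_GeV, n_f=3, Lambda_GeV=0.2):
    """One-loop strong coupling at momentum scale Q (GeV)."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q_GeV**2 / Lambda_GeV**2))

# Small at collider scales, large near 1 GeV:
print(alpha_s(91.0) < alpha_s(1.0))  # -> True
```

Once α_s approaches unity, expanding in powers of the coupling no longer converges, and a non-perturbative tool like lattice QCD takes over.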

Speed of sound is key

To check whether the predictions of lattice QCD are correct, the speed of sound is key. “The specific properties of quark–gluon plasma correspond to a specific value of how fast sound will propagate,” says CMS member Wei Li of Rice University in Texas. He says indirect measurements have provided constraints in the past, but the value has never been measured directly.

In the new work, the CMS researchers collided heavy ions of lead instead of protons because – like cannonballs compared with bullets – these are easier to accelerate to high energies and momenta. The CMS detector monitored the particles emitted in the collisions, using a two-stage detection system to determine what type of collisions had occurred and what particles had been produced in the collisions.

“We pick the collisions that were almost exactly head-on,” explains Li. “Those types of collisions are rare.” The energy is deposited into the plasma, heating it and leading to the creation of particles. The researchers monitored the energies and momenta of the particles emitted from different collisions to reconstruct the energy density of the plasma immediately after each collision. “We look at the variations between the different groups of events,” he explains. “The temperature of the plasma is tracked based on the energies of the particles that are coming out, because it’s a thermal source that emits particles.”

In this way, the researchers were able to measure the speed at which heat – and therefore energy density – flowed through the plasma. Under these extreme conditions, this is identical to the speed of sound, i.e. the rate at which pressure disturbances travel. “In relativity, particle number is not conserved,” says Li. “You can turn particles into energy and energy into particles. But energy is conserved, so we always talk about total energy density.”
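The quantity at stake can be stated compactly. In a relativistic fluid the squared speed of sound is the derivative of pressure with respect to energy density (a standard textbook relation, not a result from the paper); for an ultrarelativistic ideal gas it takes the conformal limit of 1/3 in natural units:

```latex
c_s^{2} = \left(\frac{\partial P}{\partial \varepsilon}\right)_{s/n},
\qquad
P_{\text{ideal}} = \frac{\varepsilon}{3}
\;\Rightarrow\;
c_s^{2} = \frac{1}{3} \quad (c = 1)
```

A strongly interacting quark–gluon plasma is expected to fall somewhat below this conformal value near the crossover temperature, which is exactly what comparisons with lattice QCD probe.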

Even more stringent tests

The team’s findings matched the predictions of lattice QCD, and the researchers would now like to conduct even more stringent tests. “We have extracted the speed of sound at one specific temperature,” says Li, “whereas lattice QCD has predicted how the speed of sound goes with temperature as a continuous function. In principle, a more convincing case would be to measure at multiple temperatures and have them come out all agreeing with the lattice QCD prediction.” One remarkable prediction of lattice QCD is that, as the quark–gluon plasma cools, the sound speed reaches a minimum before increasing again as the temperature drops further and the quarks become bound into hadrons. “It would be remarkable if we could observe that,” he says.

The research is described in a paper in Reports on Progress in Physics.

“I think it’s a good paper,” says nuclear theorist Larry McLerran of the University of Washington in Seattle – who is not a CMS member. He believes its most interesting aspect, however, is not what it shows about the theory being tested but what it demonstrates about the techniques used to test it. “The issue of sound velocity is interesting,” he says. “They have a way of calculating it – actually two ways of calculating it, one of which is kind of hand waving, but then it’s backed up with detailed simulation – and it agrees with lattice gauge theory calculations.”

McLerran is also interested in the potential to study heavy-ion collisions at low energies, and hopes these might give clues about the cold, dense matter in neutron stars. “In heavy ion collisions, you can calculate the sound velocity squared as a function of density using numerical methods, whereas these numerical methods don’t work at high density and low temperature, which is the limiting case for neutron stars. So being able to measure a simple bulk property of the matter and do it well is important.”

The post Speed of sound in quark–gluon plasma is measured at CERN appeared first on Physics World.


Scientists uncover hidden properties of rare-earth element promethium

By: No Author
19 June 2024 at 16:50

For the first time, researchers have experimentally examined the chemistry of the lanthanide element promethium. The investigation was carried out by Alex Ivanov and colleagues at Oak Ridge National Laboratory in the US – the same facility at which the element was first discovered almost 80 years ago.

Found on the sixth row of the periodic table, the lanthanide rare-earth metals possess an unusually diverse range of magnetic, optical and electrical properties, which are now exploited in many modern technologies. Yet despite their widespread use, researchers still know very little about the chemistry of promethium, a lanthanide with an atomic number of 61 that was first identified in 1945 by researchers on the Manhattan Project.

“As the world slowly recovered from a devastating war, a group of national laboratory scientists from the closed town of Oak Ridge, Tennessee, isolated an unknown radioactive element,” Ivanov describes. “This last rare-earth lanthanide was subsequently named promethium, derived from the Greek mythology hero Prometheus, who stole fire from heaven for the use of mankind.”

Despite its relatively low atomic number compared with the other lanthanides, promethium’s chemical properties have remained elusive in the decades following its discovery. Part of the reason for this is that promethium is the only lanthanide with no stable isotopes. Only small quantities of synthetic promethium (mostly promethium-147, with a half-life of 2.62 years) are available, extracted from nuclear reactors through tedious and energy-intensive purification processes.

Ultimately, this limited availability means that researchers are still in the dark about even the most basic aspects of promethium’s chemistry, including the distance between its atoms when bonded together and the number of atoms a central promethium atom will bond to when forming a molecule or crystal lattice.

Ivanov’s team revisited this problem in their study, taking advantage of the latest advances in isotope separation technology. In a careful, multi-step process, they harvested atoms of promethium-147 from an aqueous solution of plutonium waste and bonded them to a group of specially selected organic molecules. “By doing this, we could study how promethium interacts with other atoms in a solution environment, providing insights that were previously unknown,” Ivanov explains.

Using synchrotron X-ray absorption spectroscopy to study these interactions, the researchers observed the very first appearance of a promethium-based chemical complex: a molecular structure whose central promethium atom is bonded to several neighbouring organic molecules.

Altogether, they observed nine promethium-binding oxygen atoms in the complex, which allowed them to probe several of the metal’s fundamental chemical properties for the first time. “We discovered how promethium bonds with oxygen atoms, measured the lengths of these bonds, and compared them to other lanthanides,” Ivanov describes.

Based on these results, the researchers then studied a complete set of comparable chemical complexes spanning all lanthanide elements. This enabled them to experimentally observe the phenomenon of “lanthanide contraction” across the whole lanthanide series for the first time.

Lanthanide contraction describes the decrease in the atomic radii of lanthanide elements as their atomic number increases, due to increasingly poor shielding from nuclear charge by inner-shell electrons. The effect causes the lanthanide–oxygen bond length to shrink. Ivanov’s team observed that this shortening accelerated early in the lanthanide series, before slowing down as the atomic number increased.

The team’s discoveries have filled a glaring gap in our understanding of promethium’s chemistry. By building on their results, the researchers hope that future studies could pave the way for a wide range of important applications for the element.

“This new knowledge could improve the methods used to separate promethium and other lanthanides from one another, which is crucial for advancing sustainable energy systems,” Ivanov describes. “By understanding how promethium bonds in a solution, we can better explore its potential use in advanced technologies like pacemakers, spacecraft power sources and radiopharmaceuticals.”

The researchers report their findings in Nature.

The post Scientists uncover hidden properties of rare-earth element promethium appeared first on Physics World.


Physics and sport: flying balls, perfecting technique, and wellbeing in academia

19 June 2024 at 13:28

For sports fans, the next few weeks will bring excitement and drama. The Euro 2024 football (soccer) tournament is under way in Germany and the Copa América is about to kick off in the US. Then at the end of July, the Olympics starts in Paris as athletes from across the world compete to run, jump, sail, cycle and dance themselves into the history books. In this episode of Physics World Stories, you will hear from two US physicists with a profound connection with sport.

The first guest is John Eric Goff of the University of Lynchburg, author of Gold Medal Physics: the Science of Sports. After training as a condensed-matter theorist, Goff has focused his research career on the physics of sport. In a wide-ranging conversation with podcast host Andrew Glester, Goff discusses everything from the flight of balls to the biodynamics of martial arts. He also considers how data and AI in sport are changing the practice and the spectacle of sport.

Our second guest is Harvard University’s Jenny Hoffman, who recently set the record for the fastest woman to run across the US. In November 2023 Hoffman completed the 3000 mile (5000 km) journey in just 47 days, 12 hours and 35 minutes, running from San Francisco to New York City. Hoffman, who studies the electronic properties of exotic materials, speaks about the benefits of having hobbies and passions outside of work. For her, running plays an essential role in wellbeing during her successful career in academia.

The post Physics and sport: flying balls, perfecting technique, and wellbeing in academia appeared first on Physics World.


SNMMI ‘Image of the Year’ visualizes the brain as never before

By: Tami Freeman
18 June 2024 at 10:30
SNMMI Image of the Year
Ultrahigh-resolution images A series of PET images recorded by the NeuroEXPLORER brain PET scanner was chosen by the Society of Nuclear Medicine and Molecular Imaging as its Image of the Year. (Courtesy: Richard E Carson et al., Yale University, New Haven, CT)

A series of ultrahigh-resolution brain PET images has been selected as the SNMMI Image of the Year. At each of its annual meetings, the Society of Nuclear Medicine and Molecular Imaging chooses an image that represents the most promising advances in the field, with this year’s winner picked from more than 1500 submitted abstracts.

The winning images, recorded by the ultrahigh-performance NeuroEXPLORER human brain PET scanner, highlight targeted tracer uptake in specific brain nuclei (clusters of neurons), providing detailed information on neuronal and functional activity. The technology could dramatically expand the scope of brain PET studies, with potential to advance the treatment of brain diseases.

NeuroEXPLORER was built by a collaboration of researchers from Yale University, the University of California, Davis and United Imaging Healthcare. They designed the scanner to provide ultrahigh sensitivity and ultrahigh spatial resolution, as well as to perform continuous correction for head motion.

“If we are going to get high resolution, we have to deal with patient motion,” Richard Carson from Yale University explained at the recent SNMMI Annual Meeting in Toronto. “But the challenge has always been sensitivity, even with a high-resolution system. In a late frame for a carbon-11 study, you’re really scanning on fumes, it’s hard to generate the quality of images you’d like to really use the resolution that’s possible.”

To achieve the highest possible resolution, ideally better than 2 mm, the team designed a detector micro-block comprising a 4 × 2 array of 1.56 × 3.07 × 20 mm LYSO crystals, read out by four silicon photomultipliers. The 1.5 mm transaxial pitch provides extremely high resolution, while the larger axial pitch is used to read out depth-of-interaction (DOI) data.

The NeuroEXPLORER assembles 20 detector modules, each containing five or six detector blocks (12 × 12 microblocks), in a cylindrical ring with a diameter of 52.4 cm. The scanner’s high sensitivity is enabled by having a long (49.5 cm) axial field-of-view, building on pioneering work from UC Davis on the EXPLORER system. Motion correction is performed using a Vicra tracking system with a built-in camera that collects data from the face for markerless motion tracking.

In performance tests, the NeuroEXPLORER demonstrated a sensitivity of 46 kcps/MBq at the centre, a transverse spatial resolution of less than 1.4 mm with OSEM (ordered-subset expectation-maximization) reconstruction, and a DOI resolution of less than 4 mm. The scanner also performs time-of-flight measurements with an average resolution of 236 ps. The team provide detailed system characteristics in the Journal of Nuclear Medicine.
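To put the 236 ps time-of-flight resolution in context, a back-of-the-envelope calculation (ours, not the team’s) converts the timing resolution into the uncertainty in locating each annihilation along its line of response:

```latex
\Delta x = \frac{c\,\Delta t}{2}
\approx \frac{(3.0\times 10^{8}\ \mathrm{m\,s^{-1}})(236\times 10^{-12}\ \mathrm{s})}{2}
\approx 3.5\ \mathrm{cm}
```

This localization does not set the image resolution directly, but constraining events to a few centimetres along each line of response substantially improves the signal-to-noise ratio of the reconstruction.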

Exceptional images

In their latest study, presented by Carson at the SNMMI Annual Meeting, the researchers compared human brain images from the NeuroEXPLORER with images from the current state-of-the-art scanner, the High Resolution Research Tomograph (HRRT).

The team scanned healthy volunteers, using both PET systems on different days, after administration of five different radiotracers. The tracers target various features in the brain, including synaptic density (imaged with 18F-SynVesT-1), dopamine receptors (11C-PHNO) and transporters (18F-FE-PE2I), and glucose metabolism (18F-FDG). Carson also shared some late-breaking data using the tracer 18F-Flutabine.

Overall, the team observed a dramatic improvement in the resolution and quality of NeuroEXPLORER images compared with those recorded by the HRRT. Carson presented a series of examples to the SNMMI audience.

NeuroEXPLORER images using 18F-FDG, for instance, visualized extremely small substructures with much higher contrast than corresponding HRRT scans. Time–activity curves in the caudate and thalamus showed that the NeuroEXPLORER delivered higher standardized uptake values (SUVs) than the HRRT. Likewise, late images of synaptic density, recorded 60–90 min post-injection, showed higher contrast and greater SUVs with the NeuroEXPLORER than the HRRT scanner.

Carson also presented binding potential images using the dopamine receptor tracer 11C-PHNO. While noise is higher with an 11C-based tracer, NeuroEXPLORER scans showed higher values than the HRRT images. “This was one of the really good days,” he said. “We saw two hotspots in the anterior thalamus, we’re not sure exactly which subnucleus within there, but we’re seeing beautiful hotspots consistent with high dopaminergic innervation in those regions. If you look at the HRRT images, it’s there, but not something you could commit to.”

Carson noted that the NeuroEXPLORER’s long axial field-of-view enables the scanner to also image the carotid arteries in the neck. In addition, the team is using 18F-FDG to image head-and-neck tumours, for example to visualize lymph nodes in the neck. “This is an exciting opportunity in terms of being able to see much smaller tumours,” he said.

The team’s ongoing projects include optimization of the reconstruction parameters, further improving the camera for motion correction, and fully characterizing the resolution of the PET images using brain phantoms with variable contrast.

As well as imaging of small brain structures with targeted pharmaceuticals, future studies will also include more imaging in the spinal cord, and looking at the dynamics of neurotransmitter release. “One exciting part to think about doing is, because of the sensitivity, we can drop the radiation dose and begin to scan adolescents, for example with autism or schizophrenia,” said Carson.

“The dramatic improvement in resolution and overall quality of the NeuroEXPLORER images compared to the HRRT images is clear,” says Heather Jacene, chair of SNMMI’s Scientific Program Committee, in a press statement. “The NeuroEXPLORER has the potential to be a gamechanger in research for conditions such as Alzheimer’s disease, Parkinson’s disease, epilepsy and mental illnesses.”

The post SNMMI ‘Image of the Year’ visualizes the brain as never before appeared first on Physics World.


Birds save up to 25% of their energy when they follow a leader

17 June 2024 at 20:01

A painstaking study of sensor-laden European starlings has confirmed what scientists have long suspected: birds use significantly less energy when they fly behind a leader. The study, which was carried out by an interdisciplinary team of researchers in the US, is the first to measure birds’ in-flight energy expenditure directly, and team co-leader Ty Hedrick thinks the findings could have implications for other bird species, too.

“There’s nothing particularly ‘special’ about a trio or pair of starlings that would lead us to believe that we’d find the effect in them but not in (for example) small shorebirds,” says Hedrick, a biologist at the University of North Carolina, Chapel Hill, who conducted the work alongside postdoctoral researcher Sonja Friman and colleagues in physics and engineering departments at the University of Massachusetts Amherst, Brown University, Howard University, the Rochester Institute of Technology and the University of Southern California, Los Angeles. According to Hedrick, the energy savings he and his colleagues identified may even apply to birds that fly in dynamic flocks, not just those that adopt the familiar, fixed V-formation used by geese and other large migratory species.

First, catch some starlings

To measure how a starling’s position in a flock affects its energy use, Hedrick, Friman and colleagues needed three things. The first was a controlled flying environment big enough to accommodate up to three starlings without compromising their safety or impeding their natural flight. The team found this at Brown, which operates a purpose-built, mesh-enclosed animal flight wind tunnel with an active volume 1.2 m wide, 1.2 m tall and 2.8 m long.

The second thing the researchers needed was a way to monitor the starlings’ movements in flight. They achieved this by fitting the birds with miniature backpacks containing inertial measurement units (IMUs) and different-coloured LEDs. The IMUs recorded three-dimensional data on the birds’ linear accelerations and angular velocities, while the LEDs helped the team distinguish the positions of individual birds on video recordings of test flights.

A photo of two starlings in flight in a wind tunnel, one wearing a bluish light and the other a yellow one
Two’s company: A view from below of two European starlings flying together in a wind tunnel while wearing inertial measurement units with a coloured LED marker to help researchers distinguish them in video footage. (Courtesy: Siyang Hao)

The final ingredient was a means of measuring the starlings’ energy use. For this, the researchers injected the birds with a dose of sodium bicarbonate that contained carbon-13. This non-radioactive isotope of carbon is often used as a label in metabolic testing because biological processes preferentially take up carbon-12 atoms, which are lighter. By placing the starlings in a metabolic chamber and using a spectrometer to monitor the 13C/12C ratio in their exhaled breath before and after flights, the team could therefore calculate how much energy the birds used.
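The kinetics behind this measurement can be sketched numerically. In the labelled-bicarbonate technique, breath 13C enrichment above baseline decays roughly exponentially, at a rate set by how fast the animal produces CO2: a faster decay implies a higher metabolic rate. The sketch below uses entirely hypothetical numbers (dose, rate constant, sampling times) to show how an elimination rate constant could be recovered from noisy breath samples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers throughout: after a 13C-bicarbonate dose, breath
# 13C enrichment above natural abundance decays roughly exponentially,
# with a rate constant set by how fast the bird produces CO2
k_true = 0.12                              # elimination rate (per minute)
t = np.arange(0.0, 30.0, 2.0)              # breath-sampling times (minutes)
enrichment = 80.0 * np.exp(-k_true * t)    # excess delta-13C (per mil)
enrichment = enrichment * (1 + 0.02 * rng.normal(size=t.size))  # 2% noise

# Recover the elimination rate with a log-linear least-squares fit
slope, intercept = np.polyfit(t, np.log(enrichment), 1)
k_fit = -slope
```

Converting such a rate constant into absolute energy expenditure additionally requires the size of the bird’s bicarbonate pool and an energetic equivalent of CO2, both of which must be calibrated experimentally.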

Together, these three elements enabled the team to go beyond previous bird-flight studies that measured biomechanical and physiological correlates of energy use, but not energy itself, Hedrick says. “The effect of formation flight on flight costs has been investigated for so long and in so many ways, but we realized that the existence of a large animal flight wind tunnel and the new ‘turn-key’ equipment for doing the metabolic measurements would let us go after the question experimentally in a way that had not been done before,” he tells Physics World.

“A lot of effort”

Even with the latest equipment, things did not always go to plan. “The most challenging part was getting everything to work together at once,” Hedrick says. “We needed two (or three) birds to fly well in the wind tunnel, all of the inertial measurement unit backpacks to function at least well enough to provide the bird ID light colours, and the 13C sodium bicarbonate metabolic measurement to work as well.”

As for what happened when things went wrong, the materials and methods section of the team’s paper makes illuminating reading. Data from several test flights had to be discarded after birds “repeatedly veered toward the floor, attempted to land on the floor or cameras, or clung to the front or rear mesh” of the wind tunnel. Noise from whirring feathers masked the indicator tones meant to help synchronize IMU data with video images. And of course, the researchers needed to catch the birds cleanly and transfer them to the metabolic chamber quickly to avoid messing up the post-flight 13C/12C readings. “All of these things are pretty likely to [work] on their own, but getting them to all work at once took a lot of effort from a skilled and dedicated research team,” Hedrick says.

The reward for this effort was twofold. First, the researchers found that when a starling spent most of a test flight in a “follower” position, it expended up to 25% less energy than it did when flying solo. Second, they noticed that the most energy-efficient solo flyers were far more likely to adopt the “leader” role when flying with other birds. The researchers say that this difference is likely related to the birds’ wing-flapping frequency, which was generally lower for “leader” birds. A more complete explanation, however, will require additional experiments.

“The next thing we’d really like to do is directly visualize the wake interaction between a lead bird and a follower using digital particle image velocimetry to get a better understanding of the fluid dynamic mechanism that produces the energy savings,” Hedrick says.

The research is published in PNAS.

The post Birds save up to 25% of their energy when they follow a leader appeared first on Physics World.


‘I was always interested in the structure of things’: particle physicist Çiğdem İşsever on the importance of thinking about physics early

By: No Author
17 June 2024 at 14:46
Çiğdem İşsever
Çiğdem İşsever “My main focus is to shed light, experimentally, on the so-called Higgs mechanism.” (Courtesy: DESY/Çiğdem İşsever)

The 2012 discovery of the Higgs boson at CERN’s Large Hadron Collider (LHC) was a momentous achievement. As well as completing the so-called Standard Model of particle physics, the discovery of this particle opened up the search for physics beyond the Standard Model and the elements of nature that assist the Higgs boson in granting all other matter particles their mass. One researcher who is taking a deeper look at the Higgs boson is the experimental particle physicist Çiğdem İşsever – lead scientist in the particle physics group at Deutsches Elektronen-Synchrotron (DESY) in Hamburg, and the experimental high-energy physics group at Humboldt University of Berlin.

After obtaining her degree in physics and completing a PhD in natural sciences at the University of Dortmund in Germany by 2001, İşsever was a postdoc at DESY and at the University of California, Santa Barbara in the US. From 2004 to 2019, she was based at the University of Oxford, where from 2014 she held a professorship in elementary particle physics. She then became head of physics and from 2015 taught at Lincoln College, Oxford, before moving back to DESY in 2019.

As a member of the ATLAS collaboration at CERN since 2004, İşsever’s research has focused on how the Higgs boson defines our reality. “My main focus is to shed light, experimentally, on the so-called Higgs mechanism, which explains how elementary particles and gauge bosons acquire mass in nature,” explains İşsever.

A key goal for particle physicists is to experimentally constrain the properties of the Higgs boson, to gain important insight about the shape of the so-called Higgs potential. Determining if the Higgs potential is exactly as predicted by our theories or “if nature has chosen a different shape for it influences the very physics that determines the shape of our universe and even its eventual fate,” she explains.

What lies within

İşsever was fascinated by the inner workings of nature from a very young age. “I was always interested in how things are made, or why something is the way it is,” she says. “My father is not a physicist, but when I was in the first or second year of primary school, we would talk like adults about physics. He would discuss with me how nuclear reactors split the atom and if it was possible to bring it back together.”

As a child of six or seven, İşsever recalls industriously dissecting the vegetables on her plate, to reveal their inner structure. “This might sound really weird, but I wouldn’t just eat vegetables and fruit… I would really carefully cut them open. Look where the seeds are, see how many chambers a tomato has.”

This early fascination with the natural world on small scales deeply influenced İşsever and led to her interest in science communication. Keen to inspire young minds and help children engage with science from an early age, İşsever developed the ATLAScraft project together with her husband and fellow DESY physicist Steven Worm, and physicist Becky Parker from Queen Mary University of London. The project was a collaboration between the University of Birmingham, the University of Oxford, the Institute for Research in Schools and the Abingdon Science Partnership, with technical expertise from CERN.

ATLAScraft provides users with a map of CERN, the ATLAS detector and the LHC, all of which have been recreated in the hugely popular computer game Minecraft. The idea behind the project was to bring the LHC and its scientific endeavours to a whole new generation, but it was also about breaking cultural stereotypes, especially getting more women into physics.

“Children decide quite early in their life, as early as primary school, if science is for them or not,” İşsever explains. She and Worm decided to visit pupils aged between five and 11, and “talk to them before they buy into science-related stereotypes of the male scientist and his female assistant,” says İşsever, adding: “When we went to schools in the UK to talk about our physics, I would be the main presenter of the physics concept, and Steve would be my sidekick. This was something we did deliberately to challenge these stereotypes.” Thanks to ATLAScraft, you can now take a virtual tour of ATLAS via a 3D interactive map complete with the buildings, beamline tunnels and the actual ATLAS detector, all within Minecraft.

Pairing up

This year İşsever will also be involved in CERN’s 70th anniversary celebrations. She sees these as further opportunities to communicate CERN’s discoveries to a wider audience. However, İşsever’s research is still her prevailing passion. She is currently excited about her work to discover Higgs “pair production” at the LHC. Experimentally detecting these pairs of Higgs bosons is a crucial step in understanding how the Higgs boson may interact with itself, as this will determine the shape of the potential of the Higgs field.

“This hasn’t yet happened. When it does, if we collect enough data, we should be able to constrain the Higgs coupling as a parameter,” says İşsever. She adds that this search could also lead to the discovery of physics beyond the Standard Model. “To me, this represents the true thrill of discovering something new, which would be amazing.”

When it comes to the future of CERN and particle physics in general, the proposed successor to the LHC – the Future Circular Collider (FCC) – is an interesting prospect. With a circumference of more than 90 km – more than three times that of the LHC – the FCC would allow for a significant upgrade in collision energies.

One of the LHCf detectors
One of the Large Hadron Collider forward (LHCf) detectors. (Courtesy: CERN)

While she acknowledges how useful the FCC would be, İşsever believes that a less energetic electron–positron collider could be vital as a next step. Such an instrument could lead to a deeper understanding of the Higgs boson and its associated phenomena, as well as allowing particle physicists to “infer the energy scale we should investigate with future machines,” she adds.

Looking ahead

Beyond the Higgs boson, İşsever is also involved with the Large Hadron Collider forward (LHCf) experiment, which captures and measures forward-travelling particles that escape “standard” detectors like ATLAS. LHCf could help build a better understanding of the cosmic rays that bombard the Earth’s atmosphere from space.

İşsever also acknowledges the importance of non-collider experiments, even though they are unlikely to end the collider-dominated era of particle physics. “Collider experiments are much more general-purpose experiments. If you think of, for example, the ATLAS experiment, it’s not just one experiment. At any time, there are something like 200 or more analyses going on in parallel. You can think of each of them as an individual experiment. So, it is a very efficient way to perform experimental physics.”

The post ‘I was always interested in the structure of things’: particle physicist Çiğdem İşsever on the importance of thinking about physics early appeared first on Physics World.


Researchers build 0.05 T MRI scanner that produces diagnostic quality images

By: No Author
17 June 2024 at 11:00

Magnetic resonance imaging (MRI) is an essential tool used by radiologists to visualize tissues and diagnose disease, particularly for brain, cardiac, cancer and orthopaedic conditions. However, the high cost of an MRI scanner and dedicated MR imaging suite, combined with the scanner’s operational complexity, has severely limited its use in low- and middle-income countries, as well as in rural healthcare facilities.

Among member countries of the Organisation for Economic Co-operation and Development (OECD), the number of MRI scanners (in 2021) ranged from just 0.24 per million people in Colombia to 55 per million in Japan. This significant disparity negatively impacts the quality of healthcare for the global population.

Aiming to close this gap in MRI availability, researchers at the University of Hong Kong’s Laboratory of Biomedical Imaging and Signal Processing are developing a whole-body, ultralow-field 0.05 T MR scanner that operates from a standard wall power outlet and does not require radiofrequency (RF) or magnetic shielding cages.

Targeted as both an alternative and a supplement to conventional 1.5 T and 3 T MRI systems, the novel scanner incorporates a compact permanent magnet and employs data-driven deep learning for image formation. This simplified design not only makes the MRI scanner easier to operate, but should significantly lower its acquisition and maintenance costs compared with current clinical MRI systems.

Writing in Science, principal investigator Ed X Wu and colleagues describe how they used the 0.05 T scanner, along with deep-learning reconstruction methods developed by the team, to obtain anatomical images of the brain, spine, abdomen and knee with comparable image quality to that of a 3 T system. In one example, they acquired spine MRI scans showing details of intervertebral disks, the spinal cord and cerebrospinal fluid in 8 min or less.

Whole-body scanner design

Central to the scanner’s hardware design is a permanent neodymium iron boron (NdFeB) magnet with a double-plate structure. Permanent magnets are safer to operate than superconducting magnets as they generate less heat and acoustic noise during imaging. Ultralow-field MRI also benefits from low sensitivity to metallic implants, fewer image susceptibility artefacts at air–tissue interfaces and an extremely low RF specific absorption rate.

The magnet features two NdFeB plates connected by four vertical pillars, chosen to optimize openness and patient comfort. Its key components – including yokes, magnet plates, pole pieces, anti-eddy current plates and shimming rings – were designed to create a uniform field suitable for whole-body imaging while maintaining shoulder and chest accessibility. The final magnetic field was 0.048 T at room temperature (corresponding to a 2.045 MHz proton resonance frequency).
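The quoted resonance frequency follows directly from the Larmor relation (our arithmetic, using the standard proton gyromagnetic ratio):

```latex
f = \frac{\gamma}{2\pi}\,B_0
\approx (42.58\ \mathrm{MHz\,T^{-1}})(0.048\ \mathrm{T})
\approx 2.04\ \mathrm{MHz}
```

Operating at a few megahertz, rather than the 64–128 MHz of 1.5 T and 3 T systems, is part of what allows the electronics to be simple enough to run from a wall outlet.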

Prototype ultralow-field MRI scanner
Low-cost, low-power, shielding-free The prototype MRI scanner has a compact footprint of roughly 1.3 m2 and requires neither magnetic nor RF shielding cages. (Courtesy: Ed X Wu)

The magnet assembly has exterior dimensions of 114.0 x 102.6 x 69.9 cm, with a 40 x 92 cm gap for patient entry, and weighs approximately 1300 kg, making it potentially portable for point-of-care imaging.

In the absence of RF shielding, the researchers used deep learning to eliminate electromagnetic interference (EMI). Specifically, they positioned 10 small EMI sensing coils around the scanner and inside the electronics cabinet to acquire EMI signals. During scanning, the EMI sensing coils and the MRI receive coil simultaneously sample data within two windows: one for MR signal acquisition, the other for EMI signal characterization. The team then used a deep-learning direct signal prediction (Deep-DSP) model to predict EMI-free MR signals from the acquired data.
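The team's Deep-DSP model is a trained deep network, but the underlying idea — characterize the interference seen by the sensing coils, then remove its contribution from the receive coil — can be illustrated with a much simpler linear stand-in. The sketch below uses toy simulated data and ordinary least squares in place of the deep model; all signals and coupling values are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_coils = 2000, 10                      # samples per window, EMI sensing coils

mix = rng.normal(size=n_coils)             # EMI coupling into each sensing coil
gain = 0.7                                 # EMI coupling into the MRI receive coil

def window(mr):
    """Simulate one sampling window: sensing coils see EMI only,
    the receive coil sees the MR signal plus coupled EMI."""
    emi = rng.normal(size=n)
    sensors = np.outer(emi, mix) + 0.01 * rng.normal(size=(n, n_coils))
    receive = mr + gain * emi
    return sensors, receive

# EMI-characterization window: no MR signal present, so the receive
# coil's data can be regressed directly on the sensing-coil data.
s_cal, r_cal = window(mr=0.0)
w, *_ = np.linalg.lstsq(s_cal, r_cal, rcond=None)   # linear EMI model

# Acquisition window: subtract the predicted EMI from the receive-coil data.
mr_true = np.sin(np.linspace(0, 20, n))
s_acq, r_acq = window(mr=mr_true)
cleaned = r_acq - s_acq @ w

print(np.std(r_acq - mr_true), np.std(cleaned - mr_true))  # residual EMI before vs after
```

In the real system the interference is broadband and nonlinear in ways a single least-squares fit cannot capture, which is what motivates the deep-learning approach.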

For the study, 30 healthy volunteers were scanned with the 0.05 T system, using standard protocols and optimized contrasts for the various anatomical structures. To overcome the weak MR signal at 0.05 T, the team also designed a data-driven deep-learning image formation method – the partial Fourier super-resolution (PF-SR) model – that integrates image reconstruction and 3D multiscale super-resolution. They validated the model by comparing 0.055 T brain scans with 3 T images from the same subjects (as described in Science Advances). The PF-SR reconstruction improved the 0.05 T image quality by suppressing artefacts and noise and increasing spatial resolution.

The researchers are currently optimizing the scanner design and algorithms. They plan to perform experimental assessment and optimization of ultralow-field data acquisition and deep-learning image reconstruction, to yield the optimal trade-offs between image fidelity, resolution, contrast, scan time and cost for each specific application. They are also evaluating clinical applications of the 0.05 T scanner in depth.

“We shall continue to refine our data-driven approaches in order to minimize the hardware requirements while advancing imaging quality and speed,” Wu tells Physics World. “We are starting to plan our research of the PF-SR in detecting various pathologies, and are currently training the PF-SR models with datasets from both normal and abnormal subjects.”

The post Researchers build 0.05 T MRI scanner that produces diagnostic quality images appeared first on Physics World.


Huge fault-tolerant quantum computers on the agenda at Commercialising Quantum 2024 conference

15 June 2024 at 13:23

On 5–6 June I was in the City of London for Economist Impact’s Commercialising Quantum 2024 conference. Held in a shiny new skyscraper in Bishopsgate, the technology being discussed felt as new and exciting as the venue itself. As someone who was taught quantum mechanics well before the second quantum revolution, I felt more akin to the ancient churches that I walked past to get there – a vestige of a different era.

I was at the conference last year, so I was keen to see how the commercialization of quantum was proceeding. Was it rapidly rising to the heavens like a modern skyscraper, or was construction proceeding at a more leisurely pace like a medieval cathedral?

One company that has taken great strides since June 2023 is US-based PsiQuantum, which earlier this year received a staggering A$940m ($620m) from Australian federal and state governments to build “the world’s first utility-scale quantum computer” in Brisbane.

The company’s founder and CEO Jeremy O’Brien spoke at the event and I was very keen to find out how the company was scaling-up the production and integration of its photonic quantum chips. The Brisbane facility will need about one million physical qubits so that quantum error correction can be used to create the thousand or so logical qubits required for a fault-tolerant computer. The ultimate goal is to use this system to do certain calculations that are beyond the means of even the most powerful supercomputers.

Mass production

O’Brien, who is Australian, said that PsiQuantum is already mass producing its chips using technology first developed by the semiconductor industry. He said that integration will be key to the company’s success, and that connecting qubits with light via photonics is the way forward. He described the system as a hybrid of photonic and electrical components and said that it will have to be cooled to cryogenic temperatures because of its superconducting photon detectors.

He suggested that the system will be up and running in a little over three years – which seems very soon to me! Incidentally, it took about three years to build the skyscraper (22 Bishopsgate) in which O’Brien spoke.

Echoing O’Brien’s assertion that millions of physical qubits are needed to create the thousands of logical qubits required to do useful calculations was Oded Melamed, who is CEO of Quantum Source. The Israel-based company is designing a quantum computing system that integrates atomic qubits – which Melamed said are easily entangled – with photonic qubits, which are much easier to connect up. The atomic qubits comprise rubidium atoms, which are stored in small vacuum cells and coaxed out individually to interact with photons in optical resonators.

Quantum Source is a much newer company than PsiQuantum, which was founded in 2015. The Israeli company came into being in 2021 with $27 million in seed funding.

Quantum factories

Melamed referred to large, integrated implementations of millions of qubits as “quantum factories”. They would be huge – possibly the size of a football pitch. Connect millions of logical elements – quantum or otherwise – and it becomes possible to do serious calculations. However, a debate I can see looming on the horizon is whether calculations done at such facilities are actually quantum rather than classical.

Error correction is required because the quantum nature of most types of qubit is very quickly degraded and destroyed in a process called decoherence. A big problem with current error-correction schemes is that they use a very large number of physical (real) qubits to create one logical (virtual and useful) qubit. Today, that ratio is greater than 1000 to one. When you consider that useful quantum calculations (those that far outperform conventional computers) will require thousands of logical qubits, a quantum computer that uses error correction would need millions of physical qubits – as O’Brien and Melamed pointed out.
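The arithmetic behind those numbers is worth spelling out. Taking the greater-than-1000-to-one ratio quoted above at face value (a back-of-envelope sketch, not a claim about any particular architecture):

```python
def physical_qubits(logical, overhead=1000):
    """Physical-qubit count implied by a ~1000:1 error-correction overhead."""
    return logical * overhead

# Thousands of logical qubits imply millions of physical ones.
print(physical_qubits(1000))   # 1000000
```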

But what if the number of physical qubits could be reduced significantly? That was hinted at by the physicist Peter Knight, who advises the UK government on all matters quantum. He told the conference that great strides have been taken in the development of error-correction schemes that are less demanding in terms of physical qubits. However, he also pointed out that it is possible that the effects of some types of noise – the absorption of cosmic rays, for example – may always plague quantum computers. This is one reason why the UK government is looking at expanding the Boulby Underground Laboratory – to provide a low cosmic-ray background facility for developing quantum devices, and maybe even running a quantum computer more than 1 km under the North Sea.

Keeping an eye on quantum

Many of the delegates that I spoke to at the conference did not work for quantum-technology companies, but rather for large firms such as Nestlé and Johnson & Johnson. These people are charged with understanding what quantum computers could do for their companies in the future.

Indeed, many of these companies have gone beyond the observing phase and are actively engaging with quantum technologies. I attended a fascinating talk by the physicist Lene Oddershede of Denmark’s Novo Nordisk Foundation – which (amongst other things) is the majority voting shareholder in the pharmaceutical giant Novo Nordisk. She said that the foundation is three years into its “quantum mission” to support the development of quantum simulations of natural systems and quantum sensors for biomedical applications. To this end, the foundation has created a company called Quantum Foundry Copenhagen with the sole function of protecting the intellectual property developed during the quantum mission. She said that because of Danish law, universities would not be able to protect this IP.

What bad actors could do with quantum technologies is a concern of Matija Matokovic, who is deputy head of innovation at NATO. He described quantum technologies as disruptive and emerging technologies with the potential to change the nature of security. He also pointed out the strategic importance for countries to develop and nurture quantum technology companies that are developing strategically important technologies.

Few physicists

Oddershede, Knight and O’Brien are all physicists – and that put them in a distinct minority at the conference. I spent some time at the Institute of Physics’ exhibition stand chatting with delegates (the IOP publishes Physics World). The first question I normally asked was “are you a physicist?” and the answer was almost always “no”. That came as no surprise because the focus of the conference was commercialization.

Many of the talks that I attended were not given by physicists (or other technical experts), but rather by start-up CEOs, who are often serial tech entrepreneurs. While I cannot claim to be an expert in quantum computing, I did find it much easier to follow talks at the conference that were given by technical experts. And, the presentations that I struggled with the most tended to be discussions about the future benefits of quantum computing.

One puzzling example – and I will give a general description because I don’t want to cast aspersions on the speakers, whom I believe were sincere – was a fireside chat between a representative of a quantum computing company and one of its customers. The customer was talking about the success they had in showing that a quantum algorithm could be used to solve a business-related problem. But it wasn’t clear whether this quantum algorithm was run on a quantum processor or on a conventional computer that was simulating a quantum computer. To me, this is an important distinction that may have been lost on non-technical audience members.

While most of the focus of the conference was on quantum computing, there was much said about quantum sensors – which are often much more mature technologies than quantum processors. I watched a fascinating presentation by Margot Taylor, who is director of functional neuroimaging at Toronto’s Hospital for Sick Children, and David Woolger, who is CEO of UK-based Cerca Magnetics. Cerca makes a brain scanner that uses quantum sensors and Taylor described how she is using it to study cognitive development in children.

Avoiding hype

While the science, engineering and business strategies of the future discussed at the conference were fascinating, it was easy to get swept away by the hype – something that speakers including Peter Knight warned against.

However, there were also some real success stories. My favourite comes from Cerca’s David Woolger, who pointed out that his company was formed in 2020, and has been in profit ever since.

The post Huge fault-tolerant quantum computers on the agenda at Commercialising Quantum 2024 conference appeared first on Physics World.


Could the answer to the Antikythera astronomical device emerge from a Manhattan basement?

14 June 2024 at 10:40

“You can’t understand it unless you build it yourself,” says Michael Dubno, a scientist, inventor and explorer. He’s talking to me in the richly equipped basement workshop of his Manhattan townhouse, standing in front of a long table, crammed with partly built mechanical devices to replicate the relative positions of astronomical objects.

By “it”, Dubno is referring to the Antikythera, an ancient device whose fragments were found by sponge divers in 1901 in a sunken ship near the Greek island of that name. Over the next few decades, archaeologists figured out that the object was a sophisticated ancient Greek instrument to predict the movement of astronomical bodies – and that it dates from around the 2nd or 3rd century BCE.

The device radically changes what historians thought of the astronomical, mathematical and engineering capabilities of ancient Greece

Historians of technology are still shocked by the Antikythera. Though encrusted, degraded and with only 30 of its supposed 60 gears discovered, nothing remotely as complicated has ever been found dating from anywhere near that time. The device radically changes what historians thought of the astronomical, mathematical and engineering capabilities of ancient Greece.

Other mysteries include how the Antikythera works. Its gears evidently drove a pointer around a moveable calendar ring, allowing ancient Greeks more than two millennia ago to predict the motions of the Sun, Moon and five planets. But it’s unclear what the remaining gears were or how they worked.

Such mysteries have enveloped the Antikythera with mystique. In fact, it appears as the MacGuffin – the central but meaningless plot device – of the 2023 adventure film Indiana Jones and the Dial of Destiny. The movie’s villains are looking for the Antikythera because its supposed time-travelling properties would give them unlimited powers.

The movie’s hero is an archaeologist (played by Harrison Ford) who, reluctantly, joins a team of people seeking to keep the Antikythera from the villains. He’s initially dismissive, calling it “an ancient hunk of gears”. But he recovers the missing parts of the device, is kidnapped and taken through a time fissure back to Archimedes’ time, defeats the villains, and, reluctantly, returns.

Hunk of gears

That hunk of gears attracted Dubno.

A Brooklyn-born New Yorker, he attended the Bronx High School of Science, which is famous for producing seven Nobel-prize winners. Dubno entered Rensselaer Polytechnic Institute but dropped out to start a software company. He then built a robot, advanced for the time, that navigated around with GPS-like and sonar sensors.

Michael Dubno in his workshop.
Trial and error Michael Dubno is learning more about the Antikythera by trying to reproduce it in his workshop. (Courtesy: Robert P Crease)

Dubno also pioneered risk programming and financial analysis for the investment banking company Goldman Sachs, becoming its chief technology officer, and later worked for Bank of America. After quitting the finance world, Dubno led scientific expeditions to the North Pole, into the Mariana Trench and inside the Masaya Volcano.

But on my visit, he mainly wants to talk about the Antikythera, and the orreries that he is working on based on its gearing. Dubno is not alone in his fascination with this strange object, which numerous scientists and hobbyists have studied for over a century, developing theories and collecting data about its operation.

Dubno, however, came at it indirectly. About 15 years ago he bought a laser cutter/etcher and wondered what novel things he could do with it. “For the first time I was able to make gears quickly”, he says. “And I thought, ‘How hard could it be to make a model of the Antikythera?’”

He joined some like-minded people: Chris Budiselic, a machinist from Australia; Andrew Thoeni, a professor of business and marketing at the University of North Florida; and Andrew Ramsey, an X-ray imaging scientist from Michigan. They soon ran into unexpected difficulties with their model.

One had to do with drag on the mechanism due to galling. “The Antikythera mechanism with the 30 known gears”, Dubno says, “already has enough drag in it that by the time you start adding the other gears to drive the planets there’s a good chance you can barely squeak out the performance.”

Oil doesn’t help; it increases surface tension. Jewelled bearings might do, but the Greeks almost certainly didn’t have them. Graphite seems to work, but it is unclear what the Greeks used. “You might have a theory of how it works, but it probably won’t work the way you think – you must build it.”

Another problem had to do with the Antikythera’s gears. They used triangular teeth, but such teeth quickly wear into a different shape, and Dubno had to figure out how to cut the gears to replicate this.

Still another issue concerned the Antikythera’s moveable calendar ring, which used marks for days and a series of holes for adjustment. Researchers had assumed the lines and holes laid out an Egyptian solar calendar of 365 days. But after extensive calculations, Dubno and his team determined that the calendar ring was most likely based on 354 days, representing 12 lunar cycles.
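The 354-day figure is simply what twelve mean lunar months add up to. A one-line check (using the standard mean synodic-month length, not a value from the team's paper):

```python
SYNODIC_MONTH_DAYS = 29.53059          # mean length of a lunar (synodic) month, days

lunar_year = 12 * SYNODIC_MONTH_DAYS   # twelve lunar cycles
print(round(lunar_year, 1))            # 354.4 days: close to the 354-day ring, far from the 365-day Egyptian year
```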

They presented their findings in a research paper in the Horological Journal (2020) that has recently been confirmed using different mathematical techniques (arXiv:2403.00040). This discovery suggests that historians may need to rethink which calendar the device was designed to track.

Dubno is still shocked by the confidence of the Antikythera’s makers. “We don’t see any corrections”, he confides. “That means whoever built it knew what they wanted to do and didn’t see the need to change it when finished. That means there must have been previous prototypes. Where are they?”

The critical point

Historians can make surprising discoveries by doing things the way the ancients say they did. An example concerns Galileo’s observation that, when two bodies of different masses are dropped, the light one first moves ahead before the heavy one catches up. Some historians therefore concluded that Galileo was a poor observer, because “everyone knows” that all bodies fall at the same rate.

But in the 1980s the late science historian Thomas Settle repeated Galileo’s experiments exactly as he had written – and was startled to observe exactly what Galileo said. Further investigation found that the hand holding the heavy object becomes slightly fatigued, making the release slightly slower, though the heavy body soon catches up due to air resistance.

The finding about the Antikythera’s calendar would not have come to light without rebuilding the device. As Dubno points out in his team’s paper about their work, doing so demonstrates the value of rebuilding other ancient mechanisms and instruments with the original skills. This would not only help reveal true and false interpretations, but also “assist in properly illuminating the reality embodied in an ancient device”.

As for the full purpose of the Antikythera and its makers, the mystery continues.

The post Could the answer to the Antikythera astronomical device emerge from a Manhattan basement? appeared first on Physics World.


The Kavli Prize in Astrophysics: meet the 2024 laureates David Charbonneau and Sara Seager

13 June 2024 at 17:30

This episode features a wide-ranging interview with Sara Seager and David Charbonneau, who share the 2024 Kavli Prize in Astrophysics. Charbonneau is at Harvard University and Seager is at the Massachusetts Institute of Technology, and they won the prize for their discoveries of exoplanets and the characterization of their atmospheres.

Exoplanets are planets that orbit stars other than the Sun. Astronomers have confirmed the existence of more than 5000 exoplanets, and that number keeps increasing.

In this podcast, the two laureates talk about the astonishing range of exoplanets that have been observed and explain how astronomers study the atmospheres of these faint and distant objects. Seager and Charbonneau also talk about the search for biosignatures of life on distant exoplanets and look to the future of exoplanet astronomy.

This podcast is sponsored by The Kavli Prize.

The post The Kavli Prize in Astrophysics: meet the 2024 laureates David Charbonneau and Sara Seager appeared first on Physics World.


Extreme impacts make metals stronger when heated

13 June 2024 at 17:00

Heating metals usually makes them softer, but new micro-ballistic impact testing experiments show that if they are deformed extremely quickly during heating, they actually become harder. This unexpected discovery, made by researchers in the department of materials science and engineering at the Massachusetts Institute of Technology (MIT) in the US, could be important for developing materials for use in extreme environments.

In the new work, Christopher Schuh and his PhD student Ian Dowding used laser beams to propel tiny particles of sapphire (an extremely hard material) towards flat sheets of pure copper, titanium and gold at velocities as high as a few hundred metres per second. Using high-speed cameras to observe the particles as they hit and bounce off the sheets, they calculated the difference between the particles’ incoming and outgoing velocities. From this, they determined the amount of energy deposited into the target sample, which in turn indicates its surface strength, or hardness.
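The energy balance behind this measurement is straightforward: the energy deposited in the target is the kinetic energy the particle loses between impact and rebound. A sketch with purely illustrative particle parameters (not values from the MIT study):

```python
import math

def energy_deposited(mass_kg, v_in, v_out):
    """Kinetic energy (J) lost to the target when a particle rebounds."""
    return 0.5 * mass_kg * (v_in**2 - v_out**2)

# Illustrative numbers only: a 10-micron-diameter sapphire sphere
# striking at 300 m/s and rebounding at 150 m/s.
radius = 5e-6                              # m
density = 3980.0                           # kg/m^3, sapphire
mass = density * (4.0 / 3.0) * math.pi * radius**3

e = energy_deposited(mass, v_in=300.0, v_out=150.0)
print(e)   # a fraction of a microjoule deposited into the target surface
```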

Schuh and Dowding found that increasing the temperature by 157 °C boosted the strength of the copper sample by about 30%. At 177 °C, the sample’s hardness increased still further, to more than 300 MPa, making it nearly as hard as steel (304 MPa) at this temperature. This result is counterintuitive since pure copper is a soft metal at low strain rates and would normally be expected to soften further at high temperatures. The pair observed similar effects for their titanium and gold samples.

Drag strengthening

Schuh and Dowding say that the anomalous thermal strengthening effect appears to stem from the way the metals’ ordered crystal lattice deforms when the sapphire microparticles strike it. This effect is known as drag strengthening, and it occurs because higher temperatures increase the activity of phonons – vibrations of the crystal lattice – within the metal. These phonons limit the deformation of defects, like dislocations, in the metal, preventing them from slipping as they would do at lower temperatures.

“The effect increases with increased impact speed and temperature,” Dowding explains, “so that the faster the impact, the less the dislocations are able to respond.”

Unlike in previous high-velocity impact experiments, which used centimetre-scale particles, the small particles in this study do not create significant pressure shock waves when they hit the sample. “In our work, the impacted region is smaller than around 100 micrometres,” Schuh tells Physics World. “This small scale lets us perform high-rate deformations without having a large shock wave or high-pressure conditions to contend with. The result is a clean and quantitative set of data that clearly reveals the counterintuitive ‘hotter-is-stronger’ effect.”

The behaviour the MIT researchers uncovered could come in handy when designing materials for use in extreme conditions. Possible examples include shields that protect spacecraft from fast-moving meteorites and equipment for high-speed machining processes such as sandblasting. Metal additive manufacturing processes such as cold spraying could also benefit.

The researchers, who report their work in Nature, say they would now like to explore the range of conditions and materials in which this “hotter-is-stronger” behaviour occurs.

The post Extreme impacts make metals stronger when heated appeared first on Physics World.


Leading-edge facilities and cross-disciplinary collaboration underpin AWE’s nuclear remit

By: No Author
13 June 2024 at 14:48

AWE is no ordinary physics-based business. With a specialist workforce of around 7000 employees, AWE supports the UK government’s nuclear defence strategy and Continuous At-Sea Deterrent (nuclear-armed submarines), whilst also providing innovative technologies and know-how to support international initiatives in counter-terrorism and nuclear-threat reduction. Tracy Hart, physics business operations manager at AWE, talked to Physics World about the opportunities for theoretical and experimental physicists within the company’s core production, science, engineering and technology divisions.

Why should a talented physics graduate consider AWE as a long-term career choice?

AWE provides the focal point for research, development and support of the UK’s nuclear-weapons stockpile. Our teams work at the cutting edge of science, technology and engineering across the lifecycle of the warhead – from initial concept and design to final decommissioning and disposal. The goal: to deter the most extreme threats our nation might face, now and in the future. Within that context, we offer unique professional opportunities across a range of technical and leadership roles suitable for bright, dynamic and innovative graduates in physics, mathematics, engineering and high-performance computing.

What can early-career scientists at AWE expect in terms of training and development?

For starters, the early-career training programme is accredited by 10 professional bodies, including the Institute of Physics (IOP) and the Institute of Mathematics and its Applications (IMA). That’s because we want AWE scientists and engineers to be the best of the best, with heads of profession within the management team prioritizing development of their technical staff on an individualized basis. There are lots of opportunities for self-guided learning along the way, with our technical training modules covering an extensive programme of courses in areas like machine learning, advanced programming (e.g. Python, Java, C++) and Monte Carlo modelling.

More specifically, our physicists have their IOP membership paid for by AWE, while a structured mentoring programme provides guidance along the path to CPhys chartership (a highly regarded professional validation scheme overseen by the IOP). We also prioritize external collaboration and work closely with the UK academic community – notably, the University of Oxford and Imperial College London – sponsoring PhD studentships and establishing centres of excellence for joint research.

How about long-term career progression?

There’s a can-do culture at AWE, with a lot of talented scientists and engineers more than ready – and willing – to take on additional responsibility after just a few months in situ. Fast-track development pathways are supported through fluid grading and a promotion process that enables staff to advance by developing their technical knowledge in a given specialism and/or their leadership competencies in wider management roles. It’s all about opportunity: we take a lot of time – and care – recruiting talented people, so it’s important to ensure they can access diverse career pathways across the business.

What research and technical infrastructures are available to scientists at AWE?

Our experts work with advanced experimental and modelling capabilities to keep the nation safe and secure. A case in point is the Orion Laser Facility, a critical component of AWE’s working partnerships with academia (with around 15% of its usage ring-fenced for such collaborations).

The size of a football stadium, Orion enables our teams to replicate the conditions found at the heart of a nuclear explosion – ensuring the safety, reliability and performance of warheads throughout their lifecycle. This high-energy-density plasma physics capability underpins not only our weapons research, but also yields fundamental scientific insights for astrophysicists studying star formation and researchers working on nuclear fusion.

There is also AWE’s high-performance computing (HPC) programme and a unique scientific computing platform on a scale that only a few companies across the UK can match. Our latest Damson supercomputer, for example, is one of the most advanced of its kind and performs 4.3 trillion calculations every second – essential for 3D modelling and simulation capabilities to support our research into the performance and reliability of nuclear warheads.

Does AWE work on nuclear non-proliferation activities?

We are home to the Comprehensive Test Ban Treaty Organization (CTBTO) National Data Centre for Seismology and Infrasound. Through the collection and analysis of data from monitoring systems all over the world, the centre works with the UK Ministry of Defence (MOD) to identify potential nuclear explosions conducted by other countries. Further, the team supports the MOD and international partners in underpinning the CTBT, providing expertise on arms control verification, development of forensic monitoring techniques, as well as the capability to analyse and advise on nuclear tests.

How important is cross-disciplinary collaboration to AWE’s mission?

The multidisciplinary nature of our programme means there’s a place for domain experts – technical leaders in their specialist niche – as well as “big-picture” scientists, engineers and managers who might be equally at ease when working across a range of scientific disciplines. Ultimately, collaboration informs everything we do. A case study in this regard is The Hub, a new purpose-built facility that will, when completed, consolidate many ageing laboratories and workshops into a central campus that integrates engineering, science, learning and administrative functions.

What sorts of projects do physicists get to work on at AWE?

The physics department at AWE recruits a broad range of skillsets spanning systems assessment, design physics, radiation science and detection, material physics and enabling technologies. Among our priorities right now is to scale the talent pipeline for ongoing studies in the criticality safety group. Roles in this area are multidisciplinary, combining strong technical understanding of the nuclear physics of criticality alongside the operational know-how of writing safety assessments.

Put simply, nuclear physics domain knowledge is applied to derive safe working limits and restrictions for a wide variety of operations that use fissile material across the nuclear material and facility lifecycles. These derivations regularly involve the use of nuclear data from real-world experiments and Monte Carlo computer codes. What’s more, the production of safety assessments requires an understanding of hazard identification methods and various fault analysis techniques to determine how a criticality could occur and what safety systems are required to manage that risk.
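Production criticality codes are vastly more sophisticated, but the core Monte Carlo idea can be sketched in a few lines: simulate many neutron histories and count how many next-generation neutrons each produces. The toy below is a one-group, infinite-medium analog estimate of k-infinity; every parameter is illustrative, not real nuclear data:

```python
import random

def k_inf_estimate(p_fission, nu, n_histories=100_000, seed=1):
    """Analog Monte Carlo estimate of k-infinity for a one-group, infinite medium.

    Each absorbed neutron causes fission with probability p_fission,
    releasing nu new neutrons on average; k is the mean number of
    next-generation neutrons per starting neutron.
    """
    rng = random.Random(seed)
    offspring = sum(nu if rng.random() < p_fission else 0 for _ in range(n_histories))
    return offspring / n_histories

k = k_inf_estimate(p_fission=0.4, nu=2.5)
print(round(k, 2))   # close to the analytic value 0.4 * 2.5 = 1.0 (exactly critical)
```

A safety assessment works the other way round: limits on mass, geometry and moderation are chosen so that k stays comfortably below 1 under all credible fault conditions.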

What about other recruitment priorities at AWE?

Current areas of emphasis for the HR team include the HPC programme – where we’re looking for systems administrators and applied computational scientists – and design physics – where we need candidates with a really strong physics and mathematics background, plus the versatility to apply that knowledge to our unique requirements. Our design physics team uses state-of-the-art multiphysics codes to model hydrodynamics, radiation transport and nuclear processes, plus a range of experimental data to benchmark their predictions. Operationally, that means understanding the complex physical processes associated with nuclear-weapons function, while applying those insights to current systems as well as next-generation weapons design.

The key take-away: if you’re looking for a role with excitement, intrigue and something that really makes a difference, then now is the time to join AWE.

 

The post Leading-edge facilities and cross-disciplinary collaboration underpin AWE’s nuclear remit appeared first on Physics World.


Deep transfer learning detects six different cancers on PET/CT scans

By: Tami Freeman
13 June 2024 at 10:30
Deep transfer learning-predicted tumour segmentations
Six in one Example tumour segmentations predicted by the deep transfer learning approach, for prostate cancer, lung cancer, melanoma, lymphoma, head-and-neck cancer and breast cancer (showing pre- and post-therapy scans). (Courtesy: K H Leung et al. Johns Hopkins University, Baltimore, MD)

Whole-body positron emission tomography/computed tomography (PET/CT) is a diagnostic imaging technique that can detect the spread of cancer or monitor a tumour’s response to treatment. But manual delineation of the multiple lesions often observed in whole-body images is a time-consuming task that’s subject to inter-reader variability. For clinical use, what’s really needed is an efficient, fully automated method for detection and characterization of cancer from PET/CT images, enabling rapid diagnosis and treatment.

Deep learning can be used to automatically extract key features from image data, but the models need to be trained on large annotated datasets, which may not be readily available. To remove this reliance on labelled data, a team headed up by Kevin Leung from Johns Hopkins University School of Medicine is using deep transfer learning – a technique that employs knowledge from a pre-existing model to address a new task – to detect six different types of cancer, imaged with two different radiotracers, on whole-body PET/CT scans.

“Deep learning models are also often developed for specific radiotracers,” explained Leung, who presented the findings at this week’s 2024 SNMMI Annual Meeting in Toronto. “And given the wide range of radiotracers available for nuclear medicine, there’s a need to develop generalizable approaches for automated PET/CT tumour quantification in order to optimize early detection and treatment.”

The new approach uses deep transfer learning to jointly optimize a 3D nnU-Net backbone (a deep learning-based segmentation method) across two PET/CT datasets, in order to learn to generalize the tumour segmentation task. The model then automatically extracts radiomic features and quantitative imaging measures from the predicted segmentations, and uses these extracted features to assess patient risk, estimate survival and predict treatment response.

“In addition to performing cancer prognosis, the approach provides a framework that will help improve patient outcomes and survival by identifying robust predictive biomarkers, characterizing tumour subtypes, and enabling the early detection and treatment of cancer,” says Leung in a press statement.

For their study, Leung and colleagues examined data from a total of 1019 patients. For training and cross validation, they used PET scans (with limited tumour annotation) from 270 patients with prostate cancer imaged with the PSMA-based tracer 18F-DCFPyL, as well as scans (with complete annotations) from 501 patients with lung cancer, melanoma or lymphoma imaged with the metabolic tracer 18F-FDG.

For external testing, they used PSMA PET scans of 138 patients with prostate cancer and FDG PET scans from 74 patients with head-and-neck cancer and 36 with breast cancer (none containing annotations). The automated segmentation approach yielded median true positive rates ranging from 0.75 to 0.87, and median Dice similarity coefficients ranging from 0.73 to 0.83, indicating fairly accurate segmentation performance.
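The two metrics quoted here have standard definitions for binary segmentation masks. A minimal sketch on toy 1D masks (not the study’s code, where the masks would be 3D voxel arrays):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * overlap / total if total else 1.0

def true_positive_rate(pred, truth):
    """Fraction of true lesion voxels recovered by the prediction."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    positives = truth.sum()
    return np.logical_and(pred, truth).sum() / positives if positives else 1.0

# Toy "masks": the prediction recovers 3 of 4 lesion voxels plus 1 false positive
truth = np.array([0, 1, 1, 1, 1, 0, 0, 0])
pred = np.array([0, 1, 1, 1, 0, 1, 0, 0])
print(dice_coefficient(pred, truth))    # 2*3/(4+4) = 0.75
print(true_positive_rate(pred, truth))  # 3/4 = 0.75
```

A Dice score of 1.0 means perfect overlap, so the study’s median values of 0.73–0.83 indicate that most of each lesion volume was correctly delineated.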

Leung shared the findings of the three prognostic models that the researchers developed. For risk stratification of prostate cancer, they used whole-body imaging measures extracted from the tumour predictions to build a model that classifies patients as low, intermediate or high risk. Comparison with classifications based on initial prostate-specific antigen (PSA) levels showed that the risk model had an overall accuracy of 0.83.

“Patients predicted as being high risk also had significantly higher follow-up PSA levels and shorter PSA doubling times compared to low- and intermediate-risk patients, indicating further disease progression for those patients,” Leung explained.

In a similar manner, the researchers used imaging measures from FDG PET scans of patients with head-and-neck cancer to calculate risk scores. They found that the risk score was significantly associated with overall survival, with patients assigned a higher risk score having the shortest median overall survival.

Lastly, the team created models to predict the response of patients with breast cancer to neoadjuvant chemotherapy, using imaging measures extracted from pre- and post-therapy FDG PET scans. A classifier using only pre-therapy imaging measures predicted pathological complete response with an accuracy of 0.72, while models using both pre- and post-therapy measures exhibited an accuracy of 0.84, highlighting the feasibility of using the model for response prediction.

“The approach may be able to help reduce physician workload by providing automated whole-body tumour segmentations, as well as automatically quantifying prognostic biomarkers,” Leung concluded, noting that the AI tool could also play a role in tracking changes in tumour volume in response to therapy.



An investigation into battery thermal runaway initiation and propagation

By No Author
12 June 2024 at 15:58
An investigation into battery thermal runaway initiation and propagation

Abuse testing and failure recreation of thermal runaway in lithium-ion battery packs at Exponent’s London laboratory has shown how battery fires can initiate and propagate. This webinar discusses how even small amounts of moisture ingress into a battery pack can lead to thermal runaway of the cells within it. Specific conditions and behaviours of saltwater-ingress-driven circuit-board faults were investigated, with localized temperature increases of more than 400 °C demonstrated even at relatively low voltages and fault currents – showing the potential for such faults to trigger cell thermal runaway events. The extent and severity of e-mobility battery fires resulting from a single-cell thermal runaway failure was also explored, along with various suppression techniques a user might attempt if a battery fire occurs in a household environment. Tests were run with water flows typical of a household garden hose, as well as with fire blankets deployed both before the forced thermal runaway event and after initiation. In addition, various design approaches, such as added thermal insulation between cells, were shown to help prevent cell-to-cell propagation and reduce the severity of a battery pack failure.

An interactive Q&A session follows the presentation.

Daniel Torelli

Daniel Torelli specializes in lithium-ion battery failure analysis and manufacturing quality control. As a certified fire and explosion investigator (CFEI), Daniel has been involved in a number of fire investigations related to lithium-ion batteries, consumer electronics, power and energy, and electricity transmission and distribution. During his time at Exponent, he has worked with lithium-ion batteries for a variety of applications including consumer electronics, electric vehicles, residential and grid-scale energy-storage systems, and micromobility. His work focuses on root-cause analysis of battery failures such as thermal runaway, performance-related issues and pack-design issues. Daniel completed his PhD in 2018 at the California Institute of Technology, where he held a National Science Foundation graduate research fellowship from 2014 to 2016. He also received a 2013 American Chemical Society undergraduate award in inorganic chemistry and was inducted into the Gamma Sigma Epsilon Chemistry Honors Society in 2012.



David Charbonneau and Sara Seager bag the 2024 Kavli Prize in Astrophysics

12 June 2024 at 15:00

Exoplanet researchers David Charbonneau from Harvard University and Sara Seager from the Massachusetts Institute of Technology have won the 2024 Kavli Prize in Astrophysics. The laureates, who will share the $1m prize, have been recognized by the Kavli Foundation and the Norwegian Academy of Science and Letters for their work characterizing the atmospheres of exoplanets. The foundation was set up by the Norwegian-American physicist and philanthropist Fred Kavli in 2000.

Since the first planets around other stars were spotted in the 1990s, new techniques and telescopes have helped astronomers discover more than 5000 exoplanets to date. Spectroscopic measurements of the chemical compositions of some exoplanet atmospheres have also been made, offering clues as to whether those worlds could be suitable for life. Such methods can detect atomic and molecular species in the atmospheres of both giant and rocky planets.

The characterization of exoplanet atmospheres is still a developing field, but it is one that both Charbonneau and Seager have pioneered. In 1999 Charbonneau led the team that used the transit method – a technique that measures the tiny amount of light blocked by a planet as it passes in front of its host star – to detect the giant exoplanet HD 209458b.
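The transit signal is essentially geometric: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. A minimal sketch, using approximate literature radii for HD 209458b and its host star (these values are illustrative assumptions, not figures from the prize citation):

```python
R_SUN_KM = 695_700.0  # nominal solar radius
R_JUP_KM = 71_492.0   # Jupiter's equatorial radius

def transit_depth(r_planet_km, r_star_km):
    """Fractional drop in stellar flux when the planet crosses the disc."""
    return (r_planet_km / r_star_km) ** 2

# Approximate literature values: planet ~1.38 R_Jup, star ~1.16 R_Sun
depth = transit_depth(1.38 * R_JUP_KM, 1.16 * R_SUN_KM)
print(f"{depth:.1%}")  # roughly 1.5% — the size of dip such a hot Jupiter produces
```

A dip of order one per cent, recurring with the planet’s orbital period, is comfortably within reach of ground-based photometry, which is why the transit of this hot Jupiter could be detected in 1999.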

In the early 2000s, he then pioneered the use of space-based observatories, such as the Hubble Space Telescope, to perform the first studies of the atmosphere of giant extrasolar planets. This involved taking molecular spectra using both filtered starlight and infrared emission from the planets themselves.

Seager, meanwhile, pioneered the theoretical study of planetary atmospheres and predicted the presence of atomic and molecular species that should be detectable by transit spectroscopy, notably the alkali gases. She improved our understanding of planets with masses below that of Neptune, finding that their higher-mass counterparts are dominated by hydrogen and helium. Seager also provided new concepts for our understanding of the habitable zone, where liquid water can exist, and established the importance of a variety of biomarkers such as oxygen, ozone and carbon dioxide.

Exciting opportunities ahead

Seager told Physics World she is “absolutely thrilled” to receive the prize with Charbonneau. “It’s a significant milestone that exoplanets are recognized with this award for the first time,” she says. “With the James Webb Space Telescope operational and continuous discoveries about exoplanet atmospheres, our field is thriving. I am most looking forward to the future when we can identify a true Earth twin, an Earth-like planet orbiting a Sun-like star.”

Charbonneau, meanwhile, told Physics World that receiving the news was “wonderful” not just for him but the “entire community of exoplanet explorers”. “Research in exoplanets has been such an adventure,” he says. “We are constantly finding new worlds and developing new tools to learn about these worlds.”

He adds that the next “exciting opportunity” in the field is to study terrestrial worlds. “Recently some very nearby examples have been discovered orbiting small stars, which makes them accessible to our current telescopes,” he notes. “So the question is, are these indeed Earth-like, with atmospheres and oceans and even a moon, or not? And we can hope to learn the answers in only a few years.”



Sun’s magnetic field may have a surprisingly shallow origin

12 June 2024 at 13:00

A new mathematical model indicates that the Sun’s magnetic field originates just 20 000 miles below its surface, contradicting previous theories that point to much deeper origins. The model, developed by researchers at Northwestern University in the US and the University of Edinburgh, UK, could help explain the origins of the magnetic field, and might lead to more accurate forecasts for solar storms, which can damage electronics in space and even on the ground if they are powerful enough.

The physical processes that generate the Sun’s magnetic field – the magnetic dynamo – follow a very specific pattern. Every 11 years, a propagating region of sunspots appears at a solar latitude of around 30°, and vanishes near the equator. Around the same time, longitudinal flows of gas and plasma within the Sun, known as torsional oscillations, closely follow the motion of the sunspots.

These two phenomena might be related – they might, in other words, be different manifestations of the same underlying physical process – but researchers still do not know where they come from. Recent helioseismology measurements point to a relatively shallow origin, limited to the near-surface “shear layer” located in the outer 5-10% of the star. However, that contradicts previous theoretical explanations that rely on effects arising more than 130 000 miles below the Sun’s surface.

Magnetorotational instability at the surface

Researchers led by Geoffrey Vasil at Edinburgh may have found a resolution to this conflict. According to their model, the Sun’s magnetic field does indeed stem from a near-surface effect: a fluid-dynamic instability known as the magnetorotational instability.

This is promising, Vasil notes, because such instabilities also occur in astrophysical systems such as black holes and young planetary systems, and we have understood them in that context since the 1950s thanks to pioneering work by the Nobel Prize-winning physicist Subrahmanyan Chandrasekhar. More exciting still, he tells Physics World, is that the new model better matches observations of the Sun, successfully reproducing physical properties seen in sub-surface torsional oscillations and magnetic field amplitudes. A final advantage is that unlike theories that invoke deeper effects, the new model describes how sunspots follow the Sun’s magnetic activity.

Several difficulties with current theories

Vasil says he first stumbled across the idea that near-surface instability could be responsible while he was a PhD student at the University of Colorado in the US. “I remember the ‘huh, that’s funny’ insight while flipping through an astrophysics textbook,” he recalls. The previous leading hypothesis, he explains, held that the Sun’s magnetic field originated at the bottom of a 130 000-mile-deep “ocean”. Two things happen down in this so-called tachocline region: “The first is that the rolling, overturning turbulence of gas and plasma stops and gives way to a calmer interior,” he says. “There is also a jump in the solar windspeed that can ‘stretch’ magnetic fields.”

While these ideas hold some appeal, they suffer from several difficulties, he adds. For one, even if the magnetic field did originate deep inside the Sun, it would still have to get to the surface. Calculations show that this would not be easy.

“Overall, it makes a lot of sense if things happen near the surface and don’t have to go anywhere,” he says. “While that’s not the only reason supporting our surface-origin hypothesis, it is a big part of it.”

A better understanding of sunspot formation

If magnetic fields do originate right below the surface, they ought to be easy to measure. Such measurements could, in turn, lead to a better understanding of sunspot formation and improved warnings of sunspot eruptions – which would help us protect sensitive electronics.

The researchers need much more data to continue with their investigations. “Our current work mostly concerns the shallow region near the Sun’s equator, but we know for sure that the polar regions are also extremely important, including deeper down from the poles,” says Vasil. “The difference is that we don’t have any specific physical hypotheses of what might be happening in these zones.

“We hope to obtain such data from planned satellite missions (from both NASA and the European Space Agency, ESA) to observe the solar poles in much more detail. Unfortunately, these projects have recently been put on hold, but I hope that our work will encourage others to pursue these again.”

For now, the researchers plan to concentrate on building open-source tools to help analyse the wealth of data they already have. Their present study is detailed in Nature.



The route to ‘net zero’: how the manufacturing industry can help

By No Author
11 June 2024 at 16:00

The manufacturing industry is one of the largest emitters of carbon dioxide and other greenhouse gases worldwide. Manufacturing inherently consumes large amounts of energy and raw materials, and while the sector still relies mainly on fossil fuels, it generates emissions that directly contribute to climate change and environmental pollution. To combat global warming and its potentially devastating impact upon our planet, there’s an urgent need for the manufacturing industry to move towards net zero operation.

Cranfield University, a specialist postgraduate university in the UK, is working to help the industry achieve this task. Teams at the university’s science, technology and engineering centres are devising ways to accelerate the journey towards more sustainable manufacturing – whether by introducing manufacturing processes that use less energy and raw materials; investigating renewable and low-carbon energy sources; creating new materials with enhanced recyclability; or implementing smart functions that extend the life of existing assets.

Greener manufacturing

One way to lower the carbon footprint of manufacturing is to move to 3D printing, an additive fabrication technique that inherently reduces waste.

“The machining techniques used in conventional manufacturing require a lot of power and a lot of raw material, which itself requires energy to create,” explains Muhammad Khan, acting head of Cranfield’s Centre for Life-cycle Engineering and Management and reader in damage mechanics. “In 3D printing, however, the amount of power required to generate the same complex part is far less, which impacts the overall carbon footprint.”

Materials used for 3D printing, particularly polymeric or other organic materials, are generally recyclable and easier to reuse, further reducing emissions. “Within our centre, we are working on polymeric materials to replace existing metallic materials in areas such as aerospace and automotive applications,” says Khan.

3D printing also enables manufacturers to rapidly tailor the design and properties of a product to meet changing requirements.

David Ayre: “It’s important that everyone makes a move towards net zero, because we’re not going to make any impact unless the whole world is on board.” (Courtesy: Cranfield University)

“We’ve seen this a lot in Formula One,” says David Ayre, a senior lecturer in composites and polymers in Cranfield University’s Composites and Advanced Materials Centre. “They’ll 3D print prototyping materials to quickly push out the structures they need on their cars. Twenty years ago, the resins used for this were brittle and only suitable for prototyping. But now we have developed more robust resins that can actually be used on working structures.”

Another benefit of 3D printing is that it can be performed on a smaller scale, enabling manufacturing sites to be installed locally – either next to the resource the printer will use or next to the consumers who will use its products, in either case reducing transportation costs. While the cost case for this “end of the street” model hasn’t yet won through, the pressure to reduce CO2 emissions “might be the driver that starts to change the way we look at manufacturing”, Ayre notes.

Recycling opportunities

The introduction of novel advanced materials can also help increase sustainability. Thermal barrier coatings developed at Cranfield, for example, enable jet engines to work at higher temperatures, increasing efficiency and reducing fuel consumption. “There’s a huge role for engineers to play,” says Ayre.

Designing materials that can be recycled and reused is another important task for Ayre’s team. Producing raw material requires vast amounts of energy, a step that can be eliminated by recycling. Aluminium, for instance, is easy to process, highly recyclable and used to create a vast spectrum of products. But there are still some challenges to address, says Ayre.

“The aerospace industry likes to machine parts. They’ll take a one tonne billet of aluminium and end up with a 100 kg part,” he explains. “I worked with a student last year looking at how to recycle the swarf that comes from that machining. Unfortunately, aluminium is quite reactive and the swarf oxidizes back to the ore state, where it’s not really easy to recycle. These are the sorts of issues that we need to get around.”
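The billet-to-part figures Ayre quotes translate directly into what aerospace engineers call a “buy-to-fly” ratio. A quick sketch using the numbers from the text (the function name is illustrative, not an industry-standard tool):

```python
def machining_waste(billet_kg, part_kg):
    """Return the buy-to-fly ratio and the fraction of the
    billet machined away as swarf."""
    return billet_kg / part_kg, 1.0 - part_kg / billet_kg

# The example from the text: a one-tonne billet machined down to a 100 kg part
ratio, waste = machining_waste(1000.0, 100.0)
print(ratio)           # 10.0 — a 10:1 buy-to-fly ratio
print(f"{waste:.0%}")  # 90% of the aluminium ends up as swarf
```

With 90% of the billet ending up as swarf, recovering that material matters: producing primary aluminium is far more energy-intensive than remelting scrap, which is why the oxidation of fine swarf back towards the ore state is such a costly problem.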

The centre also focuses on composite materials, such as those used to manufacture wind turbine blades. Ayre notes that turbine blades built in the 1970s are now reaching the end of their usable life – and the composites they’re made from are difficult to recycle. The team is working to find ways to recycle these materials, though Ayre points out that it was such composites that enabled growth in the wind turbine market and the resulting source of renewable energy.

Alongside this work, the researchers are developing recyclable composite materials, such as bioresins and fibres produced from natural products, although the work is still at an early stage. “These materials don’t have the same properties as petroleum-derived resins and ceramic, carbon and glass fibres,” Ayre says. “I don’t think we’re close yet to being able to replace our super-lightweight, super-stiff carbon fibre composite structures that motorsport and aerospace are utilizing.”

Smart materials

Meanwhile, Khan’s team at Cranfield is developing materials with smart functionalities, such as self-healing, self-cleaning or integrated sensing. One project involves replacing domestic pipelines used for wastewater distribution with 3D-printed self-cleaning structures. This will reduce water requirements compared with conventional pipelines, reducing the overall carbon footprint.

Muhammad Khan: “If you can extend device life by utilizing smart mechanisms… This can positively contribute to the net zero agenda.” (Courtesy: Cranfield University)

With a focus on maintaining existing assets, rather than creating new ones, the researchers are also developing self-healing structures that can repair themselves after any damage. “If you can extend device life twice or thrice by utilizing these smart mechanisms, you can reduce the amount of raw material used and the emissions generated during manufacturing of replacement parts,” says Khan. “This can positively contribute to the net zero agenda.”

Another project involves developing structures with integrated sensing functionality. Such devices, which monitor their own health by providing information such as displacement or vibration responses, eliminate the need to employ external sensors that require energy to construct and operate. The diagnostic data could provide users with an early warning of signs of damage or help determine the remaining useful life of a structure.

“Life estimation is challenging, but is something we are looking to incorporate in the future – how we can utilize the raw data from embedded sensing elements to model the remaining useful life,” says Khan. “That prediction could allow users to plan maintenance and replacement routines, and save a system from catastrophic failure.”
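A minimal illustration of the kind of remaining-useful-life estimate Khan describes: fit a trend to a damage indicator from an embedded sensor (vibration amplitude, say) and extrapolate to a failure threshold. This toy version assumes linear degradation and entirely hypothetical data; real models are far more sophisticated:

```python
import numpy as np

def remaining_useful_life(times, damage, failure_threshold=1.0):
    """Fit a linear trend to a damage indicator and extrapolate to the
    time at which it crosses the failure threshold."""
    slope, intercept = np.polyfit(times, damage, 1)
    if slope <= 0:
        return np.inf  # no degradation trend detected
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - times[-1], 0.0)

# Hypothetical sensor readings: damage grows by 0.05 per cycle from 0.1
t = np.arange(10.0)
d = 0.05 * t + 0.1
print(remaining_useful_life(t, d))  # ~9 cycles until damage reaches 1.0
```

An estimate like this is what would let users schedule maintenance before the predicted crossing point rather than after a catastrophic failure.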

Building for the future

Cranfield University also aims to embed this sense of sustainability in its students – the engineers of the future – with a focus on net zero integral to all its engineering and related courses.

“The majority of our manufacturing and materials students will go on to an engineering career and need to appreciate their role in sourcing sustainable materials for any parts they’re designing and investigating manufacturing routes with a low CO2 footprint,” Ayre explains. Students also learn about asset management – choosing the right product in the initial stages to minimize maintenance costs and extend a component’s life.

Elsewhere, Khan is working to ensure that standards agencies keep sustainability in mind. His centre is part of a consortium aiming to bring the goal of achieving net zero into standards. The team recently demonstrated how the existing asset management standard – ISO 55000 – can be modified to incorporate net zero elements. The next step is to convince ISO and other agencies to accept these concepts, allowing people to manage their assets in a more environmentally friendly way without compromising availability or performance.

Ultimately, says Ayre, alongside “trying to encourage humanity not to want more and more and more”, lowering global emissions could rely on engineers getting creative and finding innovative ways to produce products that people want, but at reduced cost to the environment. It’s also vital that customers take on these ideas. “There’s no point us coming up with new-fangled manufacturing processes and new materials if nobody has the experience or the confidence to take them anywhere,” he points out.

“It’s important that everyone makes a move towards net zero, because we’re not going to make any impact unless the whole world is on board,” says Ayre.

