Today — 2 July 2024 — Physics World

How to get the errors out of quantum computing

2 July 2024 at 14:26

All of today’s quantum computers are prone to errors. These errors may be due to imperfect hardware and control systems, or they may arise from the inherent fragility of the quantum bits, or qubits, used to perform quantum operations. But whatever their source, they are a real problem for anyone seeking to develop commercial applications for quantum computing. Although noisy, intermediate-scale quantum (NISQ) machines are valuable for scientific discovery, no-one has yet identified a commercial NISQ application that brings value beyond what is possible with classical hardware. Worse, there is no immediate theoretical argument that any such applications exist.

It might sound like a downbeat way of opening a scientific talk, but when Christopher Eichler made these comments at last week’s Quantum 2.0 conference in Rotterdam, the Netherlands, he was merely reflecting what has become accepted wisdom within the quantum computing community. According to this view, the only way forward is to develop fault-tolerant computers with built-in quantum error correction, using many flawed physical qubits to encode each perfect (or perfect-enough) logical qubit.

That isn’t going to be easy, acknowledged Eichler, a physicist at FAU Erlangen, Germany. “We do have to face a number of engineering challenges,” he told the audience. In his view, the requirements of a practical, error-corrected quantum computer include:

  • High-fidelity gates that are fast enough to perform logical operations in a manageable amount of time
  • More and better physical qubits with which to build the error-corrected logical qubits
  • Fast mid-circuit measurements for “syndromes”, which are the set of eigenvalues that make it possible to infer (using classical decoding algorithms) which errors have happened in the middle of a computation, rather than waiting until the end (a toy example of syndrome decoding follows this list).
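
To make syndrome measurement concrete, here is a minimal, illustrative sketch of the simplest quantum error-correcting code, the three-qubit bit-flip repetition code. Because only bit-flip errors and parity checks are involved, it can be simulated classically; it is a pedagogical toy, not a model of any hardware platform Eichler discussed.

```python
# Toy classical simulation of the 3-qubit bit-flip repetition code.
# A logical bit b is encoded as (b, b, b). The two parity checks -- the
# "syndrome" -- locate a single flipped bit without revealing the encoded
# value itself, which is the essence of syndrome measurement.

import random

def encode(bit):
    return [bit, bit, bit]

def apply_noise(codeword, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def measure_syndrome(codeword):
    """Parities of pairs (1,2) and (2,3): the syndrome."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

# Each syndrome points to the most likely single-bit error (or none).
DECODER = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(codeword):
    flip = DECODER[measure_syndrome(codeword)]
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

# Monte Carlo estimate of the logical error rate for logical 0.
p, trials, failures = 0.05, 100_000, 0
for _ in range(trials):
    decoded = correct(apply_noise(encode(0), p))
    failures += sum(decoded) >= 2          # majority vote came out wrong

print(f"physical error rate {p}, logical error rate ~ {failures / trials:.4f}")
# Expect roughly 3p^2 - 2p^3 ~ 0.007: two or more simultaneous flips are
# needed to fool the decoder, so errors are suppressed when p is small.
```

The same principle, with far larger codes and real-time decoders, underlies the fault-tolerant architectures discussed in the talk.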

The good news, Eichler continued, is that several of today’s qubit platforms are already well on their way to meeting these requirements. Trapped ions offer high-fidelity, fault-tolerant qubit operations. Devices that use arrays of neutral atoms as qubits are easy to scale up. And qubits based on superconducting circuits are good at fast, repeatable error correction.

The bad news is that none of these qubit platforms ticks all of those boxes at once. This means that no out-and-out leader has emerged, though Eichler, whose own research focuses on superconducting qubits, naturally thinks they have the most promise.

In the final section of his talk, Eichler suggested a few ways of improving superconducting qubits. One possibility would be to discard the current most common type of superconducting qubit, which is known as a transmon, in favour of other options. Fluxonium qubits, for example, offer better gate fidelities, with 2-qubit gate fidelities of up to 99.9% recently demonstrated. Another alternative superconducting qubit, known as a cat qubit, exhibits lifetimes of up to 10 seconds before it loses its quantum nature. However, in Eichler’s view, it’s not clear how either of these qubits might be scaled up to multi-qubit processors.
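
For a rough sense of why fidelities such as 99.9% matter, a widely used surface-code scaling heuristic puts the logical error rate per round at p_L ≈ A(p/p_th)^((d+1)/2), where p is the physical error rate, p_th the code’s threshold and d the code distance. The sketch below evaluates that formula with ballpark values (A ≈ 0.1, p_th ≈ 1%); these constants are illustrative assumptions, not figures from Eichler’s talk.

```python
# Illustrative surface-code scaling: logical error rate per round,
# p_L ~ A * (p / p_th) ** ((d + 1) / 2), versus code distance d.
# A ~ 0.1 and p_th ~ 1% are ballpark literature values used only for
# orientation; real devices and codes differ.

A, P_TH = 0.1, 1e-2

def logical_error_rate(p_phys, distance):
    return A * (p_phys / P_TH) ** ((distance + 1) / 2)

for fidelity in (0.999, 0.9999):            # two-qubit gate fidelities
    p = 1 - fidelity
    print(f"physical error rate {p:.0e}:")
    for d in (3, 7, 11, 15):
        n_phys = 2 * d * d - 1              # rough qubit count for a distance-d surface code
        print(f"  d={d:2d} (~{n_phys:3d} qubits): p_L ~ {logical_error_rate(p, d):.1e}")
```

On these assumptions, improving gate fidelity by a factor of 10 sharply cuts the code distance, and hence the number of physical qubits, needed for a target logical error rate – which is why gate fidelity and qubit count appear together in the requirements above.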

Another promising strategy (not unique to superconducting qubits) Eichler mentioned is to convert dominant types of errors into events that involve a qubit being erased instead of changing state. This type of error should be easier (though still not trivial) to detect. And many researchers are working to develop new error correction codes that operate in a more hardware-efficient way.

Ultimately, though, the jury is still out on how to overcome the problem of error-prone qubits. “Moving forward, one should very broadly study all these platforms,” Eichler concluded. “One can only learn from one another.”


Oculomics: a window to the health of the body

2 July 2024 at 12:00

More than 13 million eye tests are carried out in the UK each year, making it one of the most common medical examinations in the country. But what if eye tests could tell us about more than just the health of the eye? What if these tests could help us spot some of humanity’s greatest healthcare challenges, including diabetes, Alzheimer’s or heart disease?

It’s said that the eye is the “window to the soul”. Just as our eyes tell us lots about the world around us, so they can tell us lots about ourselves. Researchers working in what’s known as “oculomics” are seeking ways to look at the health of the body via the eye. In particular, they’re exploring the link between certain ocular biomarkers (changes or abnormalities in the eye) and systemic health and disease. Simply put, the aim is to unlock the valuable health data that the eye holds on the body (Ophthalmol. Ther. 13 1427).

Oculomics is particularly relevant when it comes to chronic conditions, such as dementia, diabetes and cardiovascular disease. They make up most of the “burden of disease” (a factor that is calculated by looking at the sum of the mortality and morbidity of a population) and account for around 80% of deaths in industrialized nations. We can reduce how many people die or get ill from such diseases through screening programmes. Unfortunately, most diseases don’t get screened for and – even when they do – there’s limited or incomplete uptake.

Cervical-cancer screening, for example, is estimated to have saved the lives of one in 65 of all British-born women since 1950 (Lancet 364 249), but nearly a third of eligible women in the UK do not attend regular cervical screening appointments. This highlights the need for new and improved screening methods that are as non-intimidating, accessible and patient-friendly as a trip to a local high-street optometrist.

Seeing the light: the physics and biology of the eye

In a biological sense, the eye is fantastically complex. It can adapt from reading this article directly in front of you to looking at stars that are light-years away. The human eye is a dynamic living tissue that can operate across six orders of magnitude in brightness, from the brightest summer days to the darkest cloudy nights.

The eye has several key structures that enable this (figure 1). At the front, the cornea is the eye’s strongest optical component, refracting light as it enters the eye to form an image at the back of the eye. The iris allows the eye to adapt to different light levels, as it changes size to control how much light enters the eye. The crystalline lens provides depth-dynamic range, changing size and shape to focus on objects nearby or far away from the eye. The aqueous humour (a water-like fluid in front of the lens) and the vitreous humour (a gel-like liquid between the lens and the retina) give the eye its shape, and provide the crucial separation over which the refraction of light takes place. Finally, light reaches the retina, where the “pixels” of the eye – the photoreceptors – detect the light.
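
The cornea’s claim to being the eye’s strongest optical component can be checked with the single-surface refraction formula P = (n₂ − n₁)/R. Below is a back-of-the-envelope sketch using typical textbook values; the numbers are standard approximations, not measurements from this article.

```python
# Back-of-the-envelope optics: power of a single refracting surface,
# P = (n2 - n1) / R, in dioptres when R is in metres. Typical textbook
# values are assumed here.

def surface_power(n1, n2, radius_m):
    return (n2 - n1) / radius_m

# Air (n ~ 1.000) to cornea (n ~ 1.376), front radius of curvature ~ 7.8 mm:
cornea_front = surface_power(1.000, 1.376, 7.8e-3)
print(f"corneal front-surface power ~ {cornea_front:.0f} dioptres")
# ~48 D of the eye's total ~60 D, which is why the cornea dominates the
# eye's refraction while the crystalline lens supplies the adjustable part.
```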

1 Look within

Diagram of the eye with labels including iris, cornea and vitreous humour
(Courtesy: Occuity)

The anatomy of the human eye, highlighting the key structures including the iris, cornea, the lens and the retina.

The tissues and the fluids in the eye have optical characteristics that stem from their biological properties, making optical methods ideally suited to study the eye. It’s vital, for example, that the aqueous humour is transparent – if it were opaque, our vision would be obscured by our own eyes. The aqueous humour also needs to fulfil other biological properties, such as providing nutrition to the cornea and lens.

To do all these things, our bodies produce the aqueous humour as an ultrafiltered blood plasma. This plasma contains water, amino acids, electrolytes and more, but crucially no red blood cells or opaque materials. The molecules in the aqueous humour reflect the molecules in the blood, meaning that measurements on the aqueous humour can reveal insights into blood composition. This link between optical and biological properties is true for every part of the eye, with each structure potentially revealing insights into our health.

Chronic disease insights and AI

Currently, almost all measurements we take of the eye are to discern the eye’s health only. So how can these measurements tell us about chronic diseases that affect other parts of the body? The answer lies in both the incredible properties of the eye, and data from the sheer number of eye examinations that have taken place.

Chronic diseases can affect many different parts of the body, and the eye is no exception (figure 2). For example, cardiovascular disease can change artery and vein sizes. This is also true in the retina and choroid (a thin layer of tissue that lies between the retina and the white of the eye) – in patients with high blood pressure, veins can become dilated, offering optometrists and ophthalmologists insight into this aspect of a patient’s health.

British optometrist and dispensing optician Jason Higginbotham points out that throughout his career, “Many eye examinations have yielded information about the general health of patients – and not just their vision and eye health. For example, in some patients, the way the arteries cross over veins can ‘squash’ or press on the veins, leading to a sign called ‘arterio-venous nipping’. This is a possible indicator of hypertension and hardening of the arteries.”

Higginbotham, who is also the managing editor of Myopia Focus, adds that “Occasionally, one may spot signs of blood-vessel leakage and swelling of the retinal layers, which is indicative of active diabetes. For me, a more subtle sign was finding the optic nerves of one patient appearing very pale, almost white, with them also complaining of a lack of energy, becoming ‘clumsier’ in their words and finding their vision changing, especially when in a hot bath. This turned out to be due to multiple sclerosis.”

2 Interconnected features

Diagram of the eye with labels explaining detectable changes that occur
(Courtesy: Occuity)

Imaging the eye may reveal ocular biomarkers of systemic disease, thanks to key links between the optical and biological properties of the eye. With the emergence of oculomics, it may be possible – through a standard eye test – to detect cardiovascular diseases; cancer; neurodegenerative disease such as Alzheimer’s, dementia and Parkinson’s disease; and even metabolic diseases such as diabetes.

However, precisely because there are so many things that can affect the eye, it can be difficult to attribute changes to a specific disease. If there is something abnormal in the retina, could this be an indicator of cardiovascular disease, or could it be diabetes? Perhaps it is a by-product of smoking – how can an optometrist tell?

This is where the sheer number of measurements becomes important. The NHS has been performing eye tests for more than 60 years, giving rise to databases containing millions of images, complete with patient records about long-term health outcomes. These datasets have been fed into artificial intelligence (AI) deep-learning models to identify signatures of disease, particularly cardiovascular disease (British Journal of Ophthalmology 103 67; J. Clin. Med. 10.3390/jcm12010152). Models can now predict cardiovascular risk factors with accuracy that is comparable to the current state-of-the-art. Also, new image-analysis methods are under constant development, allowing further signatures of cardiovascular disease, diabetes and even dementia to be spotted in the eye.

But bias is a big issue when it comes to AI-driven oculomics. When algorithms are developed using existing databases, groups or communities with historically worse healthcare provision will be under-represented in these databases. Consequently, the algorithms may perform worse for them, which risks embedding past and present inequalities into future methods. We have to be careful not to let such biases propagate through the healthcare system – for example, by drawing on multiple databases from different countries to reduce sensitivities to country-specific bias.
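
One concrete safeguard implied by this is to audit a model’s performance separately on each demographic subgroup, rather than reporting only an aggregate figure. Below is a minimal sketch of such an audit on synthetic stand-in data; no real oculomics model or dataset is involved.

```python
# Minimal fairness audit: report accuracy per subgroup, not just overall.
# The groups, labels and model behaviour below are synthetic stand-ins.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["cohort_A", "cohort_B"], size=n, p=[0.9, 0.1])
labels = rng.integers(0, 2, size=n)

# Simulate a classifier that is less reliable on the under-represented cohort.
per_sample_acc = np.where(group == "cohort_A", 0.90, 0.75)
preds = np.where(rng.random(n) < per_sample_acc, labels, 1 - labels)

print(f"overall accuracy: {np.mean(preds == labels):.3f}")
for g in ("cohort_A", "cohort_B"):
    mask = group == g
    print(f"  {g}: accuracy {np.mean(preds[mask] == labels[mask]):.3f} (n={mask.sum()})")
# The aggregate number hides the gap; per-group reporting exposes it, and
# pooling databases from different countries helps to close it.
```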

Although AI oculomics methods have not yet moved beyond clinical research, it is only a matter of time. Ophthalmology companies such as Carl Zeiss Meditec (Ophthalmology Retina 7 1042) and data companies such as Google are developing AI methods to spot diabetic retinopathy and other ophthalmic diseases. Regulators are also engaging more and more with AI, with the FDA having reviewed at least 600 medical devices that incorporate AI or machine learning across medical disciplines, including nine in the ophthalmology space, by October 2023.

Eye on the prize

So how far can oculomics go? What other diseases could be detected by analysing hundreds of thousands of images? And, more importantly, what can be detected with only one image or measurement of the eye?

Ultimately, the answer lies in matching the measurement technique to the disease: if we want to detect more diseases, we need more measurement techniques.

At Occuity, a UK-based medical technology company, we are developing solutions to some of humanity’s greatest health challenges through optical diagnostic technologies. Our aim is to develop pain-free, non-contact screening and monitoring of chronic health conditions, such as glaucoma, myopia, diabetes and Alzheimer’s disease (Front. Aging Neurosci. 13 720167). We believe that the best way that we can improve health is by developing instruments that can spot specific signatures of disease. This would allow doctors to start treatments earlier, give researchers a better understanding of the earliest stages of disease, and ultimately, help people live healthier, happier lives.

Currently, we are developing a range of instruments that target different diseases by scanning a beam of light through the different parts of the eye and measuring the light that comes back. Our first instruments measure properties such as the thickness of the cornea (needed for accurate glaucoma diagnosis) and the length of the eyeball, which is key to screening and monitoring the myopia epidemic that is expected to affect half of the world’s population by 2050. As we advance these technologies, we open up opportunities for new measurements to advance scientific research and clinical diagnostics.
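
A detail worth noting about such scanning instruments is that an optical scan directly measures an optical path length, which must be divided by the tissue’s refractive index to recover a physical thickness. Here is a hedged sketch of that conversion; the index and path length are typical illustrative values, not Occuity’s calibration.

```python
# Converting an optical path measurement to a physical thickness:
# optical path length = physical thickness * refractive index, so
# thickness = OPL / n. A typical corneal index is assumed below.

N_CORNEA = 1.376                  # typical refractive index of corneal tissue

def physical_thickness_um(optical_path_um, n=N_CORNEA):
    return optical_path_um / n

opl = 750.0                       # illustrative optical path, micrometres
print(f"optical path      : {opl:.0f} um")
print(f"physical thickness: {physical_thickness_um(opl):.0f} um")
# ~545 um is in the normal range for central corneal thickness, a quantity
# needed to interpret intraocular-pressure readings in glaucoma screening.
```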

Looking into the past

The ocular lens provides a remarkable record of our molecular history because, unlike many other ocular tissues, the cells within the lens do not get replaced as people age. This is particularly important for a family of molecules dubbed “advanced glycation end-products”, or AGEs. These molecules are waste products that build up when glucose levels are too high. While present in everybody, they occur in much higher concentrations in people with diabetes and pre-diabetes (those with higher blood-glucose levels who are at high risk of developing diabetes, but largely without symptoms). Measurements of a person’s lens AGE concentration may therefore indicate their diabetic state.

Fortunately, these AGEs have a very important optical property – they fluoresce. Fluorescence is a process where an atom or molecule absorbs light at one colour and then re-emits light at another colour – it’s why rubies glow under ultraviolet light. The lens is the perfect place to look for these AGEs, as it is very easy to shine light into the lens. Luckily, a lot of this fluorescence makes it back out of the lens, where it can be measured (figure 3).

3 AGEs and fluorescence

Graph with x axis labelled fluorescence and y axis labelled age. The data are spread out but roughly follow a line that is gently rising from left to right
(Courtesy: Occuity)

Fluorescence, a measure of advanced glycation end-products (AGE) concentration, rises as people get older. However, it increases faster in diabetes as higher blood-glucose levels accelerate the formation of AGEs, potentially making lens fluorescence a powerful tool for detecting diabetes and pre-diabetes. This chart shows rising fluorescence as a function of both age and diabetic status, taken as part of an internal Occuity trial on 21 people using a prototype instrument; people with diabetes are shown by orange points and people without diabetes are shown by blue points. Error bars are the standard deviation of three measurements. These measurements are non-invasive, non-contact and take just seconds to perform.

Occuity has developed optical technologies that measure fluorescence from the lens as a potential diabetes and pre-diabetes screening tool, building on our optometry instruments. Although they are still in the early stages of development, the first results taken earlier this year are promising, with fluorescence clearly increasing with age, and strong preliminary evidence that the two people with diabetes in the dataset have higher lens fluorescence than those without diabetes. If these results are replicated in larger studies, they would establish lens-fluorescence measurement as a way of screening for diabetes and pre-diabetes rapidly and non-invasively, in easily accessible locations such as high-street optometrists and pharmacies.
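
The screening logic can be sketched in a few lines: fit the age trend of lens fluorescence in a reference group, then flag readings that sit well above the fit. Everything below, data included, is synthetic and illustrates only the general approach, not Occuity’s algorithm.

```python
# Toy version of the screening idea in figure 3: fit fluorescence vs age in
# a reference (non-diabetic) group, then flag readings far above the trend.
# All numbers are synthetic illustrations, not trial data.

import numpy as np

rng = np.random.default_rng(1)
ages = rng.uniform(20, 80, size=200)
fluor = 0.5 + 0.01 * ages + rng.normal(0, 0.05, size=200)   # reference group

slope, intercept = np.polyfit(ages, fluor, 1)
residual_sd = np.std(fluor - (slope * ages + intercept))

def flag_elevated(age, reading, k=2.0):
    """Flag a reading more than k standard deviations above the age trend."""
    return reading > slope * age + intercept + k * residual_sd

# A 55-year-old with fluorescence well above the age-expected value:
print(flag_elevated(55, 1.30))   # True  -> suggest a blood-glucose test
print(flag_elevated(55, 1.05))   # False -> consistent with the age trend
```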

Such a tool would be revolutionary. Almost five million people in the UK have diabetes, including over a million with undiagnosed type 2 diabetes whose condition goes completely unmonitored. There are also over 13 million people with pre-diabetes. If they can be warned before they move from pre-diabetes to diabetes, early-stage intervention could reverse this pre-diabetic state, preventing progression to full diabetes and drastically reducing the massive impact (and cost) of the illness.

Living in the present

Typical diabetes management is invasive and unpleasant, as it requires finger pricks or implants to continuously monitor blood glucose levels. This can result in infections, as well as reduce the effectiveness of diabetes management, leading to further complications. Better, non-invasive glucose-measurement techniques could transform how patients can manage this life-long disease.

As the aqueous humour is an ultra-filtered blood plasma, its glucose concentration mimics that of the blood. This glucose also affects the optical properties of the eye, increasing the refractive index that gives the eye its focusing power (figure 4).

4 Measuring blood glucose level

Graph with x axis labelled refractive index and y axis labelled glucose concentration. The data points show a gradually rising line from left to right
(Courtesy: Occuity)

The relationship between blood glucose and optical measurements on the eye has been probed theoretically and experimentally at Occuity. The goal is to create a non-invasive, non-contact measure of blood-glucose concentration for people with diabetes. Occuity has shown that changes in glucose concentration comparable to those observed in blood have a measurable effect on refractive index in cuvettes, and is moving towards equivalent measurements in the anterior chamber.

As it happens, the same techniques that we at Occuity use to measure lens and eyeball thickness can be used to measure the refractive index of the aqueous humour, which correlates with glucose concentration. Preliminary cuvette-based tests are close to being precise enough to measure glucose concentrations to the accuracy needed for diabetes management – non-invasively, without even touching the eye. This technique could transform the management of blood-glucose levels for people with diabetes, replacing the need for repetitive and painful finger pricks and implants with a simple scan of the eye.
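
Underlying such a measurement is a simple linear calibration: for dilute solutions, refractive index rises approximately linearly with solute concentration. The baseline index and sensitivity below are order-of-magnitude placeholder values for aqueous glucose solutions, assumed for illustration rather than taken from Occuity’s data.

```python
# Toy calibration from refractive index to glucose concentration, assuming
# n rises linearly with concentration. Both constants are order-of-magnitude
# placeholders, not measured calibration values.

N_BASELINE = 1.3335          # assumed index of aqueous humour with no glucose
DN_PER_MMOL_L = 2.5e-5       # assumed index change per mmol/L of glucose

def glucose_mmol_l(n_measured):
    return (n_measured - N_BASELINE) / DN_PER_MMOL_L

for n in (1.33364, 1.33370, 1.33385):
    print(f"n = {n:.5f}  ->  glucose ~ {glucose_mmol_l(n):5.1f} mmol/L")
# The whole physiological range (~4-20 mmol/L) shifts n only in the fifth
# decimal place, which is why the optical measurement must be so precise.
```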

Eye on the future

As Occuity’s instruments become widely available, the data that they generate will grow, and with AI-powered real-time data analysis, their predictive power and the range of diseases that can be detected will expand too. By making these data open-source and available to researchers, we can continuously expand the breadth of oculomics.

Oculomics has massive potential to transform disease-screening and diagnosis through a combination of AI and advanced instruments. However, there are still substantial challenges to overcome, including regulatory hurdles, issues with bias in AI, adoption into current healthcare pathways, and the cost of developing new medical instruments.

Despite these hurdles, the rewards of oculomics are too great to pass up. Opportunities such as diabetes screening and management, cardiovascular risk profiling and early detection of dementia offer massive health, social and economic benefits. Additionally, the ease with which ocular screening can take place removes major barriers to the uptake of screening.

With more than 35,000 eye exams being carried out in the UK almost every day, each one offers opportunities to catch and reverse pre-diabetes, to spot cardiovascular risk factors and propose lifestyle changes, or to identify and potentially slow the onset of neurodegenerative conditions. As oculomics grows, the window to health is getting brighter.

Yesterday — 1 July 2024 — Physics World

Satellites burning up in the atmosphere may deplete Earth’s ozone layer

1 July 2024 at 10:00

The increasing deployment of extensive space-based infrastructure is predicted to triple the number of objects in low-Earth orbit over the next century. But at the end of their service life, decommissioned satellites burn up as they re-enter the atmosphere, triggering chemical reactions that deplete the Earth’s ozone layer.

Through new simulations, Joseph Wang and colleagues at the University of Southern California have shown how nanoparticles created by satellite pollution can catalyse chemical reactions between ozone and chlorine. If the problem isn’t addressed, they predict that the level of ozone depletion could grow significantly in the coming decades.

From weather forecasting to navigation, satellites are a vital element of many of the systems we’ve come to depend on. As demand for these services continues to grow, swarms of small satellites are being rolled out in mega-constellations such as Starlink. As a result, low-Earth orbit is becoming increasingly cluttered with manmade objects.

Once a satellite reaches the end of its operational lifetime, international guidelines suggest that it should re-enter the atmosphere within 25 years to minimize the risk of collisions with other satellites. Yet according to Wang’s team, re-entries from a growing number of satellites are a concerning source of pollution – and one that has rarely been considered so far.

As they burn up on re-entry, satellites can lose between 51% and 95% of their mass – and much of the vaporized material they leave behind will remain in the upper atmosphere for decades.

One particularly concerning component of this pollution is aluminium, which makes up close to a third of the mass of a typical satellite. When left in the upper atmosphere, aluminium will react with the surrounding oxygen, creating nanoparticles of aluminium oxide (Al₂O₃). Although this compound isn’t reactive itself, its nanoparticles have large surface areas and excellent thermal stability, making them extremely effective at catalysing reactions between ozone and chlorine.

For this ozone–chlorine reaction to occur, chlorine-containing compounds must first be converted into reactive species – which can’t happen without a catalyst. Typically, catalysts come in the form of tiny, solid particles found in stratospheric clouds, which provide surfaces for the chlorine activation reaction to occur. But with higher concentrations of Al₂O₃ nanoparticles in the upper atmosphere, the chlorine activation reaction can occur more readily – depleting the vital layer that protects Earth’s surface from damaging UV radiation.

Backwards progress

The ozone layer has gradually started to recover since the signing in 1987 of the Montreal Protocol – in which all UN member states agreed to phase out production of the substances primarily responsible for ozone depletion. With this new threat, however, Wang’s team predict that much of this progress could be reversed if the problem isn’t addressed soon.

In their study, reported in Geophysical Research Letters, the researchers assessed the potential impact of satellite-based pollution through molecular dynamics simulations, which allowed them to calculate the mass of ozone-depleting nanoparticles produced during satellite re-entry.

They discovered that a small 250 kg satellite can generate around 30 kg of Al₂O₃ nanoparticles. By extrapolating this figure, they estimated that in 2022 alone, around 17 metric tons of Al₂O₃ compounds were generated by satellites re-entering the atmosphere. They also found that the nanoparticles may take up to 30 years to drift down from the mesosphere into the stratospheric ozone layer, introducing a noticeable delay between satellite decommissioning and eventual ozone depletion in the stratosphere.

Extrapolating their findings further, Wang’s team then considered the potential impact of future mega-constellation projects currently being planned. Altogether, they estimate that some 360 metric tons of Al₂O₃ nanoparticles could enter the upper atmosphere each year if these plans come to fruition.
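
The arithmetic connecting these figures is simple per-satellite bookkeeping, as the sketch below shows. The 30 kg yield per 250 kg satellite comes from the study; the re-entry counts are round numbers chosen here to reproduce the quoted totals, not the authors’ actual inventory.

```python
# Back-of-the-envelope bookkeeping behind the alumina estimates. The
# per-satellite yield is from the study; re-entry counts are illustrative.

ALUMINA_PER_SAT_KG = 30.0      # simulated yield from a 250 kg satellite
SAT_MASS_KG = 250.0

def annual_alumina_tonnes(reentries_per_year, mean_sat_mass_kg=SAT_MASS_KG):
    # Scale the per-satellite yield linearly with mean satellite mass.
    yield_kg = ALUMINA_PER_SAT_KG * (mean_sat_mass_kg / SAT_MASS_KG)
    return reentries_per_year * yield_kg / 1000.0

print(f"~550 re-entries/year    -> {annual_alumina_tonnes(550):6.1f} tonnes")
print(f"~12,000 re-entries/year -> {annual_alumina_tonnes(12_000):6.1f} tonnes")
# 550 * 30 kg ~ 17 tonnes matches the 2022 estimate; roughly 12,000 similar
# re-entries per year would approach the projected 360 tonnes.
```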

Although these estimates are still highly uncertain, the researchers’ discoveries clearly highlight the severity of the threat that decommissioned satellites pose for the ozone layer. If their warning is taken seriously, they hope that new strategies and international guidelines could eventually be established to minimize the impact of these ozone-depleting nanoparticles, ensuring that the ozone layer can continue to recover in the coming decades.

From the day before yesterday — Physics World

Ask me anything: Catherine Phipps – ‘Seeing an aircraft take off and knowing you contributed to the engine design is an amazing feeling’

28 June 2024 at 12:00
Catherine Phipps
Talented mind As an engineer at Rolls-Royce in Derby, UK, Catherine Phipps tests how aircraft-engine components behave in extreme conditions. (Courtesy: Catherine Phipps)

What skills do you use every day in your job?

I originally joined Rolls-Royce to use my physics skills in an engineering environment and see them applied in the real world. My plan was to work in the materials department, thinking that would align with my degree. But after completing the graduate training scheme, I chose to join the mechanical-integrity team working on demonstrator engines. A few years later, I moved to Berlin to focus on small engines for civil aerospace before returning to Derby in the UK, where I’m now a mechanical integrity engineer working on large civil engines.

A large part of my job involves understanding how materials behave in extreme conditions, such as high temperature or extreme stress. I might, for example, run simulations to see how long a new component will last or if it will corrode.

I’ll also design programmes to test how components behave when the engine runs in a particular way. The results of these tests are then fed back into the models to validate predictions and improve the simulations. Statistical analysis skills are vital too, as is the ability to make rapid judgements. Above all, I need to consider and understand any safety implications and consider what might happen if the component fails.

It’s a team role, working alongside people from numerous other disciplines such as aerodynamics, fluid mechanics and materials, and everyone brings their own skills. We need to make sure our designs are cost-effective, meet weight targets, and can be manufactured consistently and to the right standard. It’s immensely challenging work, which means I need to collaborate, communicate and – where acceptable – compromise.

What do you like least and best about your job?

Best has to be the people. It’s inspiring and motivating to work day in, day out in an international environment with talented, innovative and dedicated colleagues from varied backgrounds and with different life experiences. Sharing knowledge and coaching younger members of the team is also rewarding. Plus, seeing an aircraft take off and knowing you contributed to the engine design is an amazing feeling.

I did have a seven-year career break to have children, after which I was shocked at how much my colleagues had progressed. I felt in awe and inadequate. It was challenging to return, but everyone assured me the laws of physics hadn’t changed and I soon got back up to speed. The hardest time for me, though, was working from home during COVID-19. Meetings continued online, but I missed the chance conversations with colleagues where we’d run ideas past each other and I’d learn useful information. I felt siloed and it was hard to share knowledge. The line between work and home was blurred and it was always tempting to leave the laptop on and “just finish something” after dinner.

What do you know today you wish you knew when you were starting your career?

First, don’t think you always have to know the answer and don’t be afraid to ask questions. You won’t look stupid and you’ll learn from the responses. When you start working, it’s easy to think you should know everything, but I’m still learning and questioning all these years later. New ideas and perspectives are always valuable, so stay curious and keep wondering “Why?” and “What if?”. You may unlock something new. Second, just because you start on one route, don’t think you can’t do something different. Your career will probably span several decades so when new opportunities arise, don’t be afraid to take them.


Mitigating tokamak plasma disruption bags Plasma Physics and Controlled Fusion Outstanding Paper Prize

28 June 2024 at 11:00

Vinodh Bandaru from the Indian Institute of Technology in Guwahati, India, and colleagues have been awarded the 2024 Plasma Physics and Controlled Fusion (PPCF) Outstanding Paper Prize for their research on “relativistic runaway electron beam termination” at the Joint European Torus (JET) fusion experiment in Oxfordshire.

The work examines the termination of relativistic electron beam events that occurred during experiments on JET, which was operated at the Culham Centre for Fusion Energy until earlier this year. A better understanding of such dynamics could help the successful mitigation of plasma disruptions, which lead to energy losses in the plasma. The work could also be useful for experiments that will take place on the ITER experimental fusion tokamak, which is currently under construction in Cadarache, France.

Awarded each year, the PPCF prize aims to highlight work of the highest quality and impact published in the journal. The award was judged on originality, scientific quality and impact, as well as on community nominations and publication metrics. The prize will be presented at the 50th European Physical Society Conference on Plasma Physics in Salamanca, Spain, on 8–12 July.

Jonathan Graves from the University of York, UK, who is PPCF editor-in-chief, calls the work “outstanding”. “[It] explores state of the art simulations with coupled runaway electron physics, presented together with convincing comparison against disrupting JET tokamak plasmas,” he says. “The development is critically important for the safe operation of future reactor devices.”

Below, Bandaru talks to Physics World about the prize, his research and what advice he has for early-career researchers.

What does winning the 2024 PPCF Outstanding Paper Prize mean to you?

The award means a lot to me, as a recognition of the hard work that went into the research. I would like to thank my co-authors for their valuable contributions and PPCF for considering the paper.

How important is it that researchers receive recognition for their work?

Receiving recognition is encouraging for researchers and can give an extra boost and motivation in their scientific pursuits. This is more so given the nature and dynamics of contemporary research work. This new initiative from PPCF is very welcome and commendable.

What advice would you give to early-career researchers looking to pursue a career in plasma physics?

Having worked in a few different fields over the years, I can say that plasma physics is one area that entails significant complexity due to the sheer range of length and timescales of the physical processes involved. This not only offers interesting and challenging problems, but also allows researchers to choose from a variety of problems over the course of their careers.

How so?

Fusion science has now reached an inflection point with enormous ongoing activity involving research labs, universities and start-ups all over the world. With several big and important projects underway such as ITER and the planned Spherical Tokamak for Energy Production in the UK, plasma researchers can not only make important, concrete and impactful contributions, but can also have a relatively visible long-term career path. I would say these are really exciting times to be in plasma physics.


Shrinivas Kulkarni: 2024 Shaw Prize in Astronomy winner talks about his fascination with variable and transient objects

27 June 2024 at 15:55

This episode features an in-depth conversation with Shrinivas Kulkarni, who won the 2024 Shaw Prize in Astronomy “for his ground-breaking discoveries about millisecond pulsars, gamma-ray bursts, supernovae, and other variable or transient astronomical objects”. Based at Caltech in the US, he is also cited for his “leadership of the Palomar Transient Factory and its successor, the Zwicky Transient Facility, which have revolutionized our understanding of the time-variable optical sky”.

Kulkarni talks about his fascination with astronomical objects that change over time and he reveals the principles that have guided his varied and successful career. He also offers advice to students and early-career researchers about how to thrive in astronomy.

This podcast also features an interview with Scott Tremaine, who is chair of the selection committee for the 2024 Shaw Prize in Astronomy. Based at the Institute for Advanced Study in Princeton, New Jersey, he talks about Kulkarni’s many contributions to astronomy, including his work to make astronomical data more accessible to researchers not affiliated with major telescopes.

This podcast is sponsored by The Shaw Prize Foundation


Bringing the second quantum revolution to the rest of the world

27 June 2024 at 15:02

Quantum technologies have enormous potential, but achieving that potential is not going to be cheap. The US, China and the EU have already invested more than $50 billion between them in quantum computing, quantum communications, quantum sensing and other areas that make up the so-called “second quantum revolution”. Other high-income countries, notably Australia, Canada and the UK, have also made significant investments. But what about the rest of the world? How can people in other countries participate in (and benefit from) this quantum revolution?

In a panel discussion at Optica’s Quantum 2.0 conference, which took place this week in Rotterdam in the Netherlands, five scientists from low- and middle-income countries took turns addressing this question. The first, Tatevik Chalyan, drew sympathetic nods from her fellow panellists and moderator Imrana Ashraf when she described herself as “part of the generation forced to leave Armenia to get an education”. Since then, she said, the Armenian government has become more supportive, building on a strong tradition of research in quantum theory. Chalyan, however, is an experimentalist, and she and many of her former classmates are still living abroad – in her case, as a postdoctoral researcher in silicon photonics at the Vrije Universiteit Brussel, Belgium.

Another panellist, Vatshal Srivastav, followed a similar path, studying at the Indian Institute of Technology (IIT) in Kanpur before moving to the UK’s Heriot-Watt University to do his PhD and postdoc on higher-dimensional quantum circuits. He, too, thinks things are improving back home, with the quality of research in the IIT network becoming high enough that many of his friends chose to remain there. Countries that want to improve their research base, he said, should find ways to “keep good people within your system”.

For panellist Taofiq Paraiso, who says he was “brought up in several African countries” before moving to EPFL in Switzerland for his master’s and PhD, the starting point is simple. “It’s about transferring skills and knowledge,” said Paraiso, who now leads a team developing chip-based hardware for quantum cryptography at Toshiba Europe’s Cambridge Research Laboratory in the UK. People who return to their home countries after being educated abroad have an important role to play in that, he added.

Returning is not always easy, though. The remaining two panellists, Roger Alfredo Kögler and Rodrigo Benevides, are both from Brazil, and Kögler, who did his PhD at Brazil’s Instituto Nacional de Ciência e Tecnologia de Informação Quântica, said that Brazilians who want to become professors in their home country are strongly urged to go abroad for their postdoctoral research. But now that he has seen the resources available to him as a postdoc in nano-optics at the Humboldt University of Berlin, Germany, Kögler admitted that he is “rethinking whether I want to go back” even though he worries that staying in Europe would make him “part of the problem”.


Benevides, whose PhD was split between Brazil’s University of Campinas and the Netherlands’ TU Delft, elaborated on the reasons for this dilemma. In Brazil, he and his colleagues “used to see all these papers in Nature or Science” while they were “in the lab just trying to make our laser work”. That kind of atmosphere, he said, “leads to a lack of self-confidence” because people begin to suspect that they, and not the system, are the problem. Now, as a postdoc working on hybrid quantum systems at ETH Zurich in Switzerland, Benevides wryly observed that “it’s much easier to freely have ideas if you have a lot of money”.

As for how to remedy these challenges, Benevides argued that the solutions will be diverse and tailored to local circumstances. As an example, Paraiso highlighted the work of an outreach organization, Photonics Ghana, that motivates students to engage with quantum science. He also suggested that cloud-based quantum computing and freely-available software packages such as IBM’s Qiskit will help organizations that lack the resources to build a quantum computer of their own. Chalyan, for her part, pointed out that a lack of resources sometimes has a silver lining. Coming up with creative work-arounds, she said, “is what we are famous for [as] people from developing countries”.

Finally, several panellists emphasized the need to focus on quantum technologies that will make a difference locally. Though Kögler warned that it is hard to predict what will turn out to be “useful”, a few answers are already emerging. “Maybe we don’t need quantum error correction, but we do need a quantum sensor that brings better agriculture,” Benevides suggested. Paraiso noted that information security is important in African countries as well as European ones, and added that quantum key distribution is one of the more mature quantum technologies. Whatever the specifics, though, Srivastav recommended identifying the problems your society is facing and figuring out how they overlap with your current research. “As scientists, it is our job to make things better,” he concluded.


Shapeshifting organism uses ‘cellular origami’ to extend to 30 times its body length

27 June 2024 at 10:00

For the first time, two researchers in the US have observed the intricate folding and unfolding of “cellular origami”. Through detailed observations, Eliott Flaum and Manu Prakash at Stanford University discovered helical pleats in the membrane of a single-celled protist, which enable the organism to reversibly extend to over 30 times its own body length. The duo now hopes that the mechanism could inspire a new generation of advanced micro-robots.

A key principle in biology is that a species’ ability to survive is intrinsically linked with the physical structure of its body. One group of organisms where this link is still poorly understood are protists: single-celled organisms that have evolved to thrive in almost every ecological niche on the planet.

Although this extreme adaptability is known to stem from the staggering variety of shapes, sizes and structures found in protist cells, researchers are still uncertain as to how these structures have contributed to their evolutionary success.

In their study, reported in Science, Flaum and Prakash investigated a particularly striking feature found in a protist named Lacrymaria olor. Measuring 40 µm in length, this shapeshifting organism hunts its prey by launching a neck-like feeding apparatus up to 1200 µm in less than 30 s. Afterwards, the protrusion retracts just as quickly: an action that can be repeated over 20,000 times throughout the cell’s lifetime.

Through a combination of high-resolution fluorescence and electron microscopy techniques, the duo found that this extension occurs through the folding and unfolding of an intricate helical structure in L. olor’s cytoskeleton membrane. These folds occur along bands of microtubule filaments embedded in the membrane, which group together to form accordion-like pleats.

Altogether, Flaum and Prakash found 15 of these pleats in L. olor’s membrane, which wrap around the cell in elegant helical ribs. The structure closely resembles “curved crease origami”, a subset of traditional origami in which folds follow complex curved paths instead of straight ones.
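
A toy geometric model shows why helical pleating stores so much membrane: material folded along a helix packs an arc length of 1/sin θ per unit of axial length, where θ is the pitch angle, so a shallow helix hides a great deal of surface in a short cell. The numbers below are idealized round figures, not measured parameters of L. olor.

```python
# Toy helix geometry: membrane length stored per unit of axial length.
# Idealized round numbers, not the organism's measured geometry.

import math

def stored_length_ratio(radius_um, rise_per_turn_um):
    """Arc length of one helical turn divided by its axial rise."""
    arc = math.hypot(2 * math.pi * radius_um, rise_per_turn_um)
    return arc / rise_per_turn_um

# The shallower the helix (smaller rise per turn), the more it stores:
for rise in (100.0, 20.0, 5.0):
    ratio = stored_length_ratio(radius_um=20.0, rise_per_turn_um=rise)
    print(f"rise {rise:5.1f} um/turn -> stores {ratio:4.1f}x the axial length")
# A ~30x extension, as in L. olor, needs only a modestly shallow pitch angle.
```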

“When you store pleats on the helical angle in this way, you can store an infinite amount of material,” says Flaum in a press statement. “Biology has figured this out.”

“It is incredibly complex behaviour,” adds Prakash. “This is the first example of cellular origami. We’re thinking of calling it lacrygami.”

Perfection in projection

A further striking feature of L. olor’s folding mechanism is that the transition between its folded and unfolded states can happen thousands of times without making a single error: a feat that would be incredibly difficult to reproduce in any manmade mechanism with a similar level of intricacy.

To explore the transition in more detail, Flaum and Prakash investigated points of concentrated stress within the cell’s cytoskeleton. Named “topological singularities”, the positions of these points are intrinsically linked to the membrane’s helical geometry.

The duo discovered that L. olor’s transition is controlled by two types of singularity. The first of these is called a d-cone: a point where the cell’s surface develops a sharp, conical point due to the membrane bending and folding without stretching. Crucially, a d-cone can travel across the membrane in a neat line, and then return to its original position along the exact same path as the membrane folds and unfolds.

The second type of topological singularity is called a twist singularity, and occurs in the membrane’s microtubule filaments through their rotational deformation. Just like the d-cone, this singularity will travel along the filaments, then return to its original position as the cell folds and unfolds.

As Prakash explains, both singularities are key to understanding how L. olor’s transition is so consistent. “L. olor is bound by its geometry to fold and unfold in this particular way,” he says. “It unfolds and folds at this singularity every time, acting as a controller. This is the first time a geometric controller of behaviour has been described in a living cell.”

The researchers hope that their remarkable discovery could provide new inspiration for our own technology. By replicating L. olor’s cellular origami, it may be possible to design micro-scale machines whose movements are encoded into patterns of pleats and folds in their artificial membranes. If achieved, such structures could be suitable for a diverse range of applications: from miniature surgical robots to deployable habitats in space.


When the world went wild for uranium: tales from the history of a controversial element

26 June 2024 at 12:00
A 1950s children's board game called Uranium Rush
Radioactive fun! This 1955 children’s board game was inspired by the US government’s drive to encourage the domestic discovery and mining of uranium. (Courtesy: Oak Ridge Associated Universities Museum of Radiation and Radioactivity)

The uranium craze that hit America in the 1950s was surely one of history’s strangest fads. Jars of make-up lined with uranium ore were sold as “Revigorette” and advertised as infusing “beautifying radioactivity [into] every face cream”. A cosmetics firm applied radioactive soil to volunteers’ skin and used Geiger counters to check whether its soap could wash it away. Most astonishing of all, a uranium mine in the US state of Montana developed a sideline as a health spa, inviting visitors to inhale “a constant supply of radon gas” for the then-substantial sum of $10.

The story of this craze, and much else besides, is entertainingly told in Lucy Jane Santos’ new book Chain Reactions: A Hopeful History of Uranium. Santos is an expert in the history of 20th-century leisure, health and beauty rather than physics, but she is nevertheless well-acquainted with radioactive materials. Her previous book, Half Lives, focused on radium, which had an equally jaw-dropping consumer heyday earlier in the 20th century.

The shift to uranium gives Santos the licence to explore several new topics. For physicists, the most interesting of these is nuclear power. Before we get there, though, we must first pass through uranium’s story from prehistoric times up to the end of the Second World War. From the uranium-bearing silver mines of medieval Jáchymov, Czechia, to the uranium enrichment facilities founded in Oak Ridge, Tennessee, as part of the Manhattan Project, Santos tells this story in a breezy, anecdote-driven style. The fact that many of her chosen anecdotes also appear in other books on the histories of quantum mechanics, nuclear power or atomic weapons is hardly her fault. This is well-trodden territory for historians and publishers alike, and there are only so many quirky stories to go around.

The most novel factor that Santos brings to this crowded party is her regular references to people whose role in uranium’s history is often neglected. This includes not only female scientists like Lise Meitner (co-discoverer of nuclear fission) and Leona Woods (maker of the boron trifluoride counter used in the first nuclear-reactor experiment), but also the “Calutron Girls”, who put in 10-hour shifts six days a week at the Oak Ridge plant and were not allowed to know that they were enriching uranium for the first atomic bomb. Other “hidden figures” include the Allied prisoners who worked the Jáchymov mines for the Nazis; the political “undesirables” who replaced them after the Soviets took over; and the African labourers who, though legally free, experienced harsh conditions while mining uranium ore at Shinkolobwe (now in the Democratic Republic of the Congo) for the Belgians and, later, the Americans.

Most welcome of all, though, are the book’s references to the roles of Indigenous peoples. When Robert Oppenheimer’s Manhattan Project needed a facility for transmuting uranium into plutonium, Santos notes that members of the Wanapum Nation in eastern Washington state were given “a mere 90 days to pack up and abandon their homes…mostly with little compensation”. The 167 residents of Bikini island in the Pacific were even less fortunate, being “temporarily” relocated before the US Army tested an atomic bomb on their piece of paradise. Santos quotes the American comedian Bob Hope – nobody’s idea of a woke radical – in summing up the result of this callous act: “As soon as the war ended, we located the one spot on Earth that hadn’t been touched by war and blew it to hell.”

These injustices, together with the radiation-linked illnesses experienced by the (chiefly Native American) residents of the Trinity and Nevada test sites, are not the focus of Chain Reactions. It could hardly be “a hopeful history” if they were. But while mentioning them is a low bar, it’s a low bar that the three-hour-long Oscar-winning biopic Oppenheimer didn’t manage to clear. If Santos can do it in a book not even 300 pages long, no-one else has any excuse.

Chain Reactions is not a science-focused book, and in places it feels a little thin. For example, while Santos correctly notes that the “gun” design of the first uranium bomb wouldn’t work for a plutonium weapon, she doesn’t say why. Later, she states that “making a nuclear reactor safe enough and small enough for use in a car proved impossible”, but she leaves out the scientific and engineering reasons for this. The book’s most eyebrow-raising scientific statement, though, is that “nuclear is one of the safest forms of electricity produced – only beaten by solar”. This claim is neither explained nor footnoted, and it left me wondering, first, what “safest” means in this context, and second what makes wind, geothermal and tidal electricity less “safe” than nuclear or solar?

Despite this, there is much to enjoy in Santos’ breezy and – yes – hopeful history. Although she is blunt when discussing the risks of nuclear energy, she also points out that when countries stop using it, they mostly replace nuclear power plants with fossil-fuel ones. This, she argues, is little short of disastrous. Quite apart from the climate impact, ash from coal-fired power plants carries radiation from uranium and thorium into the environment “at a much larger rate than any from a nuclear power plant”. Thus, while the 2011 meltdown of Japan’s Fukushima reactors killed no-one directly, Japan and Germany’s subsequent phase-out of nuclear power contributed to an estimated 28,000 deaths from air pollution. Might a revival of nuclear power be better? Santos certainly thinks so, and she concludes her book with a slogan that will have many physicists nodding along: “Nuclear power? Yes please.”

  • 2024 Icon Books 288pp £20hb


Classical models of gravitational field show flaws close to the Earth

26 June 2024 at 10:00

If the Earth were a perfect sphere or ellipsoid, modelling its gravitational field would be easy. But it isn’t, so geoscientists instead use an approximate model based on a so-called Brillouin sphere. This is the smallest geocentric sphere that the entire planet fits inside, and it touches the Earth at a single point: the summit of Mount Chimborazo in Ecuador, near the crest of the planet’s equatorial bulge.

For points outside this Brillouin sphere, traditional methods based on spherical harmonic (SH) expansions produce a good approximation of the real Earth’s gravitational field. But for points inside it – that is, for everywhere on or near the Earth’s surface below the peak of Mount Chimborazo – these same SH expansions generate erroneous predictions.

A team of physicists and mathematicians from Ohio State University and the University of Connecticut in the US has now probed the difference between the model’s predictions and the actual field. Led by Ohio State geophysicist Michael Bevis, the team showed that the SH expansion equations diverge below the Brillouin sphere, leading to errors. They also quantified the scale of these errors.

Divergence is a genuine problem

Bevis explains that the initial motivation for the study was to demonstrate through explicit examples that a mathematical theory proposed by Ohio State’s Ovidiu Costin and colleagues in 2022 was correct. This landmark paper was the first to show that SH expansions of the gravitational potential always diverge below the Brillouin sphere, but “at the time, many geodesists and geophysicists found the paper somewhat abstract”, Bevis observes. “We wanted to convince the physics community that divergence is a genuine problem, not just a formal mathematical result. We also wanted to show how this intrinsic divergence produces model prediction errors.”

In the new study, the researchers demonstrated that divergence-driven prediction error increases exponentially with depth beneath the Brillouin sphere. “Furthermore, at a given point in free space beneath the sphere, we found that prediction error decreases as truncation degree N increases towards its optimal value, N_opt,” explains Bevis. Beyond this point, however, “further increasing N will cause the predictions of the model to degrade [and] when N >> N_opt, prediction error will grow exponentially with increasing N.”
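
The behaviour Bevis describes – error falling with N down to an optimum and then growing exponentially – is characteristic of summing a series beyond its region of convergence, and it can be reproduced with a one-dimensional toy. The sketch below uses the classic Stieltjes asymptotic series as a stand-in, purely as an analogy for the divergence mechanism, not as a gravity-field computation.

```python
# Optimal-truncation toy: the Stieltjes function
#   F(x) = integral_0^inf exp(-t) / (1 + x t) dt
# has the divergent asymptotic series sum_n (-1)^n n! x^n. Partial sums
# approach F(x), reach best accuracy near N_opt ~ 1/x, then blow up --
# qualitatively like SH models evaluated below the Brillouin sphere.

import math

def stieltjes_exact(x, steps=200_000, t_max=40.0):
    """Crude midpoint-rule quadrature of the defining integral."""
    dt = t_max / steps
    return sum(math.exp(-(i + 0.5) * dt) / (1 + x * (i + 0.5) * dt) * dt
               for i in range(steps))

def partial_sum(x, n_terms):
    total, term = 0.0, 1.0                 # term = (-1)^n n! x^n
    for n in range(n_terms):
        total += term
        term *= -(n + 1) * x
    return total

x = 0.1                                    # expect N_opt ~ 1/x = 10
exact = stieltjes_exact(x)
for n in (2, 5, 10, 15, 20, 25):
    print(f"N = {n:2d}: |error| = {abs(partial_sum(x, n) - exact):.2e}")
# The error shrinks until N ~ 10, then grows exponentially: past N_opt,
# adding terms makes the 'model' worse, just as described above.
```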

The most important practical consequence of the analysis, he tells Physics World, was that it meant they could quantify the effect of this mathematical result on the prediction accuracy of any gravitational model formed from a so-called truncated SH expansion – or SH polynomial – anywhere on or near the surface of the Earth.

Synthetic planetary models

The researchers obtained this result by taking a classic theory developed by Robert Werner of the University of Texas at Austin in 1994 and using it to write code that simulates the gravitational field created by a polyhedron of constant density. “This code uses arbitrary precision arithmetic,” explains Bevis, “so it can compute the gravitational potential and gravitational acceleration g anywhere exterior to a synthetic planet composed of hundreds or thousands of faces with a triangular shape.

“The analysis is precise to many hundreds of significant digits, both above and below the Brillouin sphere, which allowed us to test and validate the asymptotic expression derived by Costin et al. for the upper limit on SH model prediction error beneath the Brillouin sphere.”

The new work, which is described in Reports on Progress in Physics, shows that traditional SH models of the gravitational field are fundamentally flawed when they are applied anywhere near the surface of the planet. This is because they are attempting to represent a definite physical quantity with a series that is actually locally diverging. “Our calculations emphasize the importance of finding a new approach to representing the external gravitational field beneath the Brillouin sphere,” says Bevis. “Such an approach will have to avoid directly evaluating SH polynomials.”

Ultimately, generalizations of the new g simulator will help researchers formulate and validate the next generation of global gravity models, he adds. This has important implications for inertial navigation and perhaps even the astrophysics of exoplanets.

The team is now working to improve the accuracy of its gravity simulator so that it can better model planets with variable internal density and more complex topography. They are also examining analytical alternatives to using SH polynomials to model the gravitational field beneath the Brillouin sphere.

The post Classical models of gravitational field show flaws close to the Earth appeared first on Physics World.


Battery boss: physicist Martin Freer will run UK’s Faraday Institution

25 June 2024 at 15:00

The nuclear physicist Martin Freer is to be the next chief executive of the Faraday Institution – the UK’s independent institute for electrochemical energy-storage research. Freer, who is currently based at the University of Birmingham, will take up the role on 2 September. He replaces the condensed-matter physicist Pam Thomas, who stepped down in April after almost four years as boss.

The Faraday Institution was set up in 2017 to help research scientists and industry experts to reduce the cost and weight of batteries and improve their performance and reliability. From its base at the Harwell Science and Innovation Campus in Oxfordshire, it carries out research, training, market analysis and early-stage commercialization, with the research programme currently involving 27 UK universities and 50 businesses.

With a PhD in nuclear physics from Birmingham, Freer has held a number of high-profile roles in the energy sector, including director of the Birmingham Centre for Nuclear Education and Research, which he established in 2010. Five years later he became director of the university’s Birmingham Energy Institute.

Freer also steered activity on the influential Physics Powering the Green Economy report released last year by the Institute of Physics, which publishes Physics World. The report set out the role that physics and physicists can play in fostering the green economy.

Freer told Physics World that joining the Faraday Institution is a “tremendous opportunity”, especially when it comes to the transition to electric vehicles and ensuring that UK battery innovation plays an integral part.

“Energy storage is going to be needed to manage our future energy system from domestic to grid scale and there is a crucial role for the Faraday Institution to play,” says Freer. “This is a globally competitive sector, and the UK needs to ensure it does not lose the advantage it has created for itself through the Faraday Battery Challenge.”

Theoretical physicist Steven Cowley, who is chair elect of the Faraday Institution, notes that Freer is a “proven leader” and is a “terrific fit” for the institution.

“[Freer] knows first-hand what it takes to work with industry and policy makers to translate research into future energy technologies on the ground,” notes Cowley, who is director of the Princeton Plasma Physics Laboratory in the US. “[He] will help to accelerate its mission as it further establishes itself in the UK’s research ecosystem.”

The post Battery boss: physicist Martin Freer will run UK’s Faraday Institution appeared first on Physics World.

  •  

Dark matter’s secret identity: WIMPs or axions?

25 June 2024 at 12:00

A former South Dakota gold mine is the last place you might think to look to solve one of the universe’s biggest mysteries. Yet what lies buried in the Sanford Underground Research Facility, 1.47 km beneath the surface, could be our best chance of detecting the ghost of the galaxy: dark matter.

Deep within those old mine tunnels, accessible only by a shaft from the surface, is seven tonnes of liquid xenon, sitting perfectly still (figure 1).

This is the LUX-ZEPLIN (LZ) experiment. It’s looking for the tiny signatures that dark matter is predicted to leave in its wake as it passes through the Earth. To have any chance of success, LZ needs to be one of the most sensitive experiments on the planet.

“The centre of LZ, in terms of things happening, is the quietest place on Earth,” says Chamkaur Ghag, a physicist from University College London in the UK, and spokesperson for the LZ collaboration. “It is the environment in which to look for the rarest of interactions.”

For more than 50 years astronomers have puzzled over the nature of the extra gravitation first observed in galaxies by Vera Rubin, assisted by Kent Ford, who noticed stars orbiting galaxies under the influence of more gravity than could be accounted for by visible matter. (In the 1930s Fritz Zwicky had noticed a similar phenomenon in the movement of galaxies in the Coma Cluster.)

Most (though not all – see part one of this series “Cosmic combat: delving into the battle between dark matter and modified gravity“) scientists believe this extra mass to be dark matter. “We see these unusual gravitational effects, and the simplest explanation for that, and one that seems self-consistent so far, is that it’s dark matter,” says Richard Massey, an astrophysicist from Durham University in the UK.

The standard model of cosmology tells us that about 27% of all the matter and energy in the universe is dark matter, but no-one knows what it actually is. One possibility is a hypothetical breed of particle called a weakly interacting massive particle (WIMP), and it is these particles that LZ is hoping to find. WIMPs are massive enough to produce a substantial gravitational field, but they otherwise only gently interact with normal matter via the weak force.


“The easiest explanation to solve dark matter would be a fundamental particle that interacts like a WIMP,” says Ghag. Should LZ fail in its mission, however, there are other competing hypotheses. One in particular that is lurking in the wings is a lightweight competitor called the axion.

Experiments are under way to pin down this vast, elusive portion of the cosmos. With more questions than answers, the search for dark matter is heading for a showdown.

Going deep underground

According to theory, as our solar system cruises through space we’re moving through a thin fog of dark matter. Most of the dark-matter particles, being weakly interacting, would pass through Earth, but now and then a WIMP might interact with a regular atom.

This is what LZ is hoping to detect, and the seven tonnes of liquid xenon are designed to be a perfect WIMP trap. The challenge the experiment faces is that even if a WIMP does interact with a xenon atom, its signal has to be differentiated from those of other particles and radiation, such as gamma rays, that can enter the liquid.

1 Buried treasure

The LUX-ZEPLIN (LZ) experiment
(Courtesy: Matthew Kapust, Sanford Underground Research Facility)

The seven-tonne tank of liquid xenon that comprises the LZ detector. The experiment is located almost a mile beneath the Earth to reduce background effects, which astronomers hope will enable them to identify weakly interacting massive particles (WIMPs).

Both a gamma ray and a WIMP can create a cloud of ionized free electrons inside the detector, and in both cases, when the ionized electrons recombine with the xenon atoms, they emit flashes of light. But the two mechanisms differ subtly, and LZ is designed to detect the unique signature of a WIMP interaction.

When a gamma ray enters the detector it can interact with an electron in the xenon, which flies off and causes a chain of ionizations by interacting with other neighbouring electrons. The heavy WIMP, however, collides with the xenon nucleus, sending it recoiling through the liquid, bumping into other nuclei and indirectly ionizing a few atoms along the way.

To differentiate these two events, an electric field – created by applying a few tens of kilovolts across the xenon tank – draws some of the ionized electrons toward the top of the tank before they can recombine. When these electrons reach the top, they enter a thin layer of gas and produce a second burst of light.

When a gamma ray enters the tank, the second flash is brighter than the first – the recoil electron flies off like a bullet, and most of the electrons it liberates are pulled up by the detector before they recombine.

A nucleus is much heavier than an electron, so when a WIMP interacts with the xenon, the path of the recoil is shorter. The cloud of electrons generated by the interaction is therefore localized to a smaller area and more of the electrons find a “partner” ion to recombine with before the electric field can pull them away. This means that for a WIMP, the first flash is brighter than the second.

In practice, there is a range of brightnesses depending upon the energies of the particles, but statistically an excess of brighter first flashes above a certain background level would be a strong signature of WIMPs.
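As a toy illustration of this two-flash logic (all numbers below are invented, and the real analysis fits detailed background models rather than applying a simple cut), the discrimination could be mocked up like this:

import random

random.seed(42)

def fake_event(kind):
    """Return made-up (first flash, second flash) brightnesses."""
    if kind == "gamma":  # electron recoil: second flash typically brighter
        return random.gauss(10, 2), random.gauss(30, 5)
    return random.gauss(25, 5), random.gauss(12, 3)  # nuclear recoil: first flash brighter

def wimp_like(s1, s2):
    # Crude cut on relative flash brightness; LZ's statistics are far subtler
    return s1 > s2

events = [fake_event("gamma") for _ in range(1000)] + [fake_event("wimp") for _ in range(5)]
candidates = sum(wimp_like(s1, s2) for s1, s2 in events)
print(f"{candidates} WIMP-like candidates out of {len(events)} events")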

“Looking for dark matter experimentally is about understanding your backgrounds perfectly,” explains Ghag. “Any excess or hint of a signal above our expected background model – that’s what we’re going to use to ascribe statistical significance.”

LZ has been up and running since late 2021 and has completed about 5% of its search. Before it could begin its hunt, the project had to endure a five-year process of screening every component of the detector, to make sure that the background effects of every nut, bolt and washer had been accounted for.

WIMPs in crisis?

How many WIMPs are detected – if any – will inform physicists about the interaction cross-section of the dark-matter particle, meaning how likely it is to interact with any normal matter that it comes into proximity with.

The timing couldn’t be more crucial. Some of the more popular WIMP candidates are predicted by a theory called “supersymmetry”, which posits that every particle in the Standard Model has a more massive “superpartner” whose quantum spin differs by half a unit. Some of these superpartners were candidates for WIMPs, but the Large Hadron Collider (LHC) has failed to detect them, throwing the field – and the hypothetical WIMPs associated with them – into crisis.

Francesca Chadha-Day, a physicist who works at Durham University and who studies dark-matter candidates based on astrophysical observations, thinks time may be up for supersymmetry. “The standard supersymmetric paradigm hasn’t materialized, and I think it might be in trouble,” she says.


She does, however, stress that supersymmetry is “only one source of WIMPs”. Supersymmetry was proposed to explain certain problems in physics, such as why gravity is more feeble than the weak force. Even if supersymmetry is a dead end, there are alternative theories to solve these problems that also predict the existence of particles that could be WIMPs.

“It’s way too early to give up on WIMPs,” adds Ghag. LZ needs to run for at least 1000 days to reach its full sensitivity and he says that ruling out WIMPs now would be “like building the LHC but stopping before turning it on”.

The axion universe

With question marks nevertheless hanging over WIMPs, an alternative type of dark-matter particle has been making waves.

Chadha-Day describes these particles, dubbed axions, as “dark matter for free”, because they were developed to solve an entirely different problem.

“There’s this big mystery in particle physics that we call the Strong CP Problem,” says Chadha-Day. C refers to charge and P to parity. CP symmetry is the statement that if you swap a particle for its oppositely charged antiparticle and reflect it in a spatial mirror image, the laws of physics still function in the same way.

The Standard Model predicts that the strong force, which glues quarks together inside protons and neutrons, should actually violate CP symmetry. Yet in practice, it plays ball with the conservation of charge and parity. Something is intervening and interacting with the strong force to maintain symmetry. This something is proposed to be the axion.

“The axion is by far the most popular way of solving the Strong CP Problem because it is the simplest,” says Chadha-Day. “And then when you look at the properties of the axion you also find that it can act as dark matter.”


These properties include rarely interacting with other particles and sometimes being non-relativistic, meaning that some axions would move slowly enough to clump into haloes around galaxies and galaxy clusters, which would account for their additional mass. Like WIMPs, however, axions have yet to be detected.

Supersymmetry’s difficulties have seen a recent boom in support for axions as dark matter. “There are strong motivations for axions,” says Ghag, “because they could exist even if they are not dark matter.”

Lensing patterns

Axions are predicted to be lighter than WIMPs and to interact with matter via the electromagnetic force (and gravity) rather than the weak force. Experiments to directly detect axions use magnetic fields, because in their presence an axion can transform into a photon. However, because axions might exist even if they aren’t dark matter, to test them against WIMPs, physicists have to take a different approach.
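The conversion in question relies on the axion–photon coupling, which in standard textbook notation (not notation from the article) enters the Lagrangian as

\mathcal{L}_{a\gamma} = g_{a\gamma}\, a\, \vec{E} \cdot \vec{B}

where a is the axion field and g_{a\gamma} the coupling strength: a strong static magnetic field \vec{B} supplies one factor of the product, allowing an incoming axion to convert into a photon – the principle behind axion “haloscope” experiments.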

The extra mass from dark matter around galaxies and galaxy clusters can bend the path of light coming from more distant objects, magnifying them and warping their appearance, sometimes even producing multiple images (figure 2). The shape and degree of this effect, called “gravitational lensing”, is impacted by the distribution of dark matter in the lensing galaxies. WIMPs and axions are predicted to distribute themselves slightly differently, so gravitational lensing can put the competing theories to the test.

2 Seeing quadruple

Lensing effects around six astronomical objects
(Courtesy: NASA, ESA, A Nierenberg (JPL) and T Treu (UCLA))

Galaxies and galaxy clusters can bend the light coming from bright background objects such as quasars, creating magnified images. If the lensing effect is strong, as in these images, we may even observe multiple images of a single quasar. The top right image shows quasar HS 0810+2554 (see figure 4).

If dark matter is WIMPs, then they will form a dense clump at the centre of a galaxy, smoothly dispersing with increasing distance. Axions, however, operate differently. “Because axions are so light, quantum effects become more important,” says Chadha-Day.

These effects should show up on large scales – the axion halo around a galaxy is predicted to exhibit long-range quantum interference patterns, with the density fluctuating in peaks and troughs thousands of light-years across.
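The characteristic size of these fluctuations is set by the axion’s de Broglie wavelength. Taking the often-quoted “fuzzy dark matter” mass of about 10⁻²² eV and a typical galactic velocity as illustrative assumptions (neither figure comes from the article):

\lambda_{\rm dB} = \frac{2\pi\hbar}{m_a v} \approx 0.6\ \mathrm{kpc}\ \left(\frac{10^{-22}\ \mathrm{eV}}{m_a}\right)\left(\frac{200\ \mathrm{km\,s^{-1}}}{v}\right)

which is roughly two thousand light-years – consistent with interference features thousands of light-years across.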

Gravitational lensing could potentially be used to reveal these patterns, using something called the “critical curve”. Think of a gravitational lens as a series of lines where space has been warped by matter, like on a map where the contour lines indicate height. The critical curve is where the contours bunch up the most (figure 3).

3 Cosmic cartography

Gravitational lensing around the Abell 1689 galaxy cluster
(CC BY Astron. Astrophys. Rev. 19 47)

Gravitational lensing around the Abell 1689 galaxy cluster. Red lines indicate the critical curve where magnification is infinite and yellow contours indicate the regions of the sky where objects are magnified by more than a factor of 10.

Critical curves “are lines of sight in the universe where you get enormous magnification in gravitational lensing, and they have different patterns depending on whether dark matter is WIMPs or axions”, says Massey. With axions, the quantum interference pattern can render the critical curve wavy.

In 2023 a team led by Alfred Amruth of the University of Hong Kong found some evidence of wavy effects in the critical curve. They studied the quasar HS 0810+2554 – the incredibly luminous core of a distant galaxy that is being gravitationally lensed (we can see four images of it from Earth) by a foreground object. They found that the lensing pattern could be better explained by axions than WIMPs (figure 4), though because they only studied one system, this is far from a slam dunk for axions.

Dark-matter interactions

Massey prefers not to tie himself to any one particular model of dark matter, instead opting to take a phenomenological approach. “I look to test whether dark-matter particles can interact with other dark-matter particles,” he says. Measuring how much dark matter interacts with itself (another kind of cross section) can be used to narrow down its properties.

4 Making waves

Comparison of the shapes of gravitational lenses from four models of dark matter
(First published in Amruth et al. 2023 Nature Astron. 7 736. Reprinted with permission from Springer Nature.)

The shape of a gravitational lens would change depending on whether dark matter is WIMPs or axions. Alfred Amruth and colleagues developed a model of the gravitational lensing of quasar HS 0810+2554 (see figure 2). Light from the quasar is bent around a foreground galaxy, and the shape of the gravitational lensing depends on the properties of the dark matter in the galaxy. The researchers tested models of both WIMP-like and axion-like dark matter.

The colours indicate the amount of magnification, with the light blue lines representing the critical curves of high magnification. Part a shows a model of WIMP-like dark matter, whereas b, c and d show different models of axionic dark matter. Whereas the WIMP-like critical curve is smooth, the interference between the wavelike axion particles makes the critical curve wavy.

The best natural laboratories in which to study dark matter interacting with itself are galaxy cluster collisions, where vast quantities of matter and, theoretically, dark matter collide. If dark-matter halos are interacting with each other in cluster collisions, then they will slow down, but how do you measure this when the objects in question are invisible?

“This is where the bits of ordinary matter are actually useful,” says Massey. Cluster collisions contain both galaxies and clouds of intra-cluster hydrogen. Using gravitational lensing, scientists can work out where the dark matter is in relation to these other cosmic objects, which can be used to work out how much it is interacting.

The galaxies in clusters are so widely spaced that they sail past each other during the collision. By contrast, intra-cluster hydrogen gas clouds are so vast that they can’t avoid each other, and so they don’t move very far. If the dark matter doesn’t interact with itself, it should end up with the galaxies. If the interaction is strong, however, it will be located with the hydrogen clouds. If it interacts just a bit, then the dark matter will be somewhere in between. Its location can therefore be used to estimate the interaction cross-section, and this value can be handed to theorists to test which dark-matter model best fits the bill.

High-altitude astronomy

The problem is that cluster collisions can take a hundred million years to run their course. What’s needed is to see galaxy cluster collisions at all stages, with different velocities, from different angles.

Enter SuperBIT – the Super-pressure Balloon-borne Imaging Telescope, on which Massey is the UK principal investigator. Reaching 40 km into the atmosphere while swinging beneath a super-pressure balloon provided by NASA, SuperBIT was a half-metre aperture telescope designed to map dark matter in as many galaxy-cluster collisions as possible to piece together the stages of such a collision.

SuperBIT flew five times, embarking on its first test flight in September 2015 (figure 5). “We would bring it back down, tinker with it, improve it and send it back up again, and by the time of the final flight it was working really well,” says Massey.

5 Far from home

Photo of the Earth taken from the SuperBIT telescope
(CC BY-SA Javier Romualdez/SuperBIT)

The SuperBIT telescope took gravitational lensing measurements of cluster collisions to narrow down the properties of dark matter. This photo of the Earth was taken from SuperBIT during one of its five flights.

That final flight took place during April and May 2023, launching from New Zealand and journeying around the Earth five and a half times. The telescope parachuted to its landing site in Argentina, but while it touched down well enough, the release mechanism had frozen in the stratosphere and the parachute did not detach. Instead, the wind caught it and dragged SuperBIT across the landscape.

“It went from being aligned to within microns to being aligned within kilometres! The whole thing was just a big pile of mirrors and metal, gyroscopes and hard drives strewn across Argentina, and it was heart-breaking,” says Massey, who laughs about it now. Fortunately, the telescope had worked brilliantly and all the data had been downloaded to a remote drive before catastrophe struck.


The SuperBIT team is working through that data now. If there is any evidence that dark-matter particles have collided, the resulting estimate of the interaction cross-section will point to specific theoretical models and rule out others.

Astronomical observations can guide us, but only a positive detection of a dark-matter particle in an experiment such as LZ will settle the matter. As long as a detection remains elusive, the identity of dark matter will continue to be a sore point for astronomers and physicists. It also keeps the door ajar for alternative theories, and proponents of modified Newtonian dynamics (MOND) are already trying to exploit those cracks, as we shall see in the third and final part of this series.

  • In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales

The post Dark matter’s secret identity: WIMPs or axions? appeared first on Physics World.


Waffle-shaped solar evaporator delivers durable desalination

25 June 2024 at 10:00

Water is a vital resource to society and is one of the main focus areas for the United Nations Sustainable Development Goals. However, around two thirds of the world’s population still doesn’t have regular access to freshwater, facing water scarcity for at least a month each year.

Meanwhile, a child dies every two minutes from water-, sanitation- and hygiene-related diseases, and freshwater sources are becoming ever more polluted, placing further stress on water supplies. With many water-related challenges around the world, new ways of producing freshwater are being sought. In particular, solar steam-based desalination methods are seen as a green way of producing potable water from seawater.

Solar steam generation: a promising approach

There are various water treatment technologies available today, but one that has gathered a lot of attention lately is solar steam generation. Interfacial solar absorbers convert solar energy into heat that evaporates seawater, leaving the salt behind and producing freshwater. By localizing the absorbed energy at the surface, interfacial solar absorbers reduce heat loss to the bulk water.

Importantly, solar absorbers can be used off-grid and in remote regions, where potable water access is the most unreliable. However, many of these technologies cannot yet be made at scale because of salt crystallization on the solar absorber, which reduces both the light absorption and the surface area of the interface. Over time, the solar absorption capabilities become reduced and the supply of water becomes obstructed.

Quasi-waffle design could prevent crystallization

To combat the salt crystallization challenge, researchers in China have developed a waffle-shaped solar evaporator (WSE). The WSE is made of a graphene-like porous monolith, fabricated via a zinc-assisted pyrolysis route using biomass and recyclable zinc as the precursor materials.

First authors Yanjun Wang and Tianqi Wei from Nanjing University and their colleagues designed the WSE with a basin and ribs, plus extra sidewalls (that conventional plane-shaped solar evaporators don’t have) to drive the Marangoni effect in the device. The Marangoni effect is the flow of fluid from regions of low surface tension to those of high surface tension. Marangoni flows can be induced by gradients in either solute concentration or temperature – and the WSE’s extra sidewalls trigger both effects.

WSE schematic: brine evaporation in a conventional plane evaporator (A) and a WSE (B); white spots denote salt concentration; blue arrows indicate salt transport. The extra sidewalls in the WSE induce Marangoni flows (red and brown arrows). (C) The interfacial solar steam generation device. (Courtesy: Y Wang et al. Sci. Adv. 10.1126/sciadv.adk1113)

When the saltwater evaporates, the faster evaporation and more efficient heat consumption on the plateaus than in the basins create gradients in solute concentration and temperature. Based on these gradients, the sidewalls then generate a surface-tension gradient, which induces solute- and temperature-driven Marangoni flows in the same direction.

The two Marangoni effects increase the convection of fluid in the device, accelerating the transport of salt ions and diluting the maximum salinity of the system below the critical saturation value – therefore preventing salt crystallization from occurring. This leads to continuous salt rejection with reduced fouling at the interface.
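In textbook form (standard interfacial fluid mechanics, not notation from the paper), the surface flow comes from the tangential stress balance at the air–water interface:

\mu \left.\frac{\partial u}{\partial z}\right|_{\mathrm{surface}} = \frac{\partial \gamma}{\partial x} = \frac{\partial \gamma}{\partial T}\frac{\partial T}{\partial x} + \frac{\partial \gamma}{\partial c}\frac{\partial c}{\partial x}

where μ is the liquid viscosity, u the surface-parallel velocity, γ the surface tension, T the temperature and c the salt concentration. The two right-hand terms are the temperature- and solute-driven Marangoni contributions that the WSE’s sidewalls arrange to act in the same direction.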

The WSE delivers a solar absorption of 98.5% and high evaporation rates of 1.43 kg/m²/h in pure water and 1.40 kg/m²/h in seawater. In an outdoor experiment using a prototype WSE to treat a brine solution, the device produced freshwater at up to 2.81 l/m² per day and exhibited continuous operation for 60 days without requiring cleaning.

The WSE’s ability to alleviate the salt crystallization issues, combined with its cost-efficiency, means that the device could theoretically be commercially scalable in the future.

Overall, the WSE overcomes the three main obstacles faced when designing solar desalination devices: efficient water evaporation and condensation, and preventing salt fouling. While the device achieved a high desalination stability (evident from the long cleaning cycles), the evaporation rate is currently restricted by the upper limits of a single-stage evaporator. The researchers point out that introducing a multistage evaporator to the system could help improve the solar-to-water efficiency and the freshwater yield of the device. They are now designing such a multistage evaporator to further their current research.

The findings are reported in Science Advances.

The post Waffle-shaped solar evaporator delivers durable desalination appeared first on Physics World.


Why optics is critical to meeting society’s grand challenges

24 June 2024 at 12:00

Over the last century, optics and photonics have transformed the world. A staggering 97% of all intercontinental communications traffic travels down optical fibres, enabling around $10 trillion of business transactions daily across the globe. Young people especially are at the heart of some of the most dramatic changes in optical technologies the world has ever witnessed.

Whether it’s a growing demand for higher data rates, larger cloud storage and cleaner energy supplies – or simply questions around content and self-censorship – communications networks, based on optics and photonics, are a crucial aspect of modern life. Even our knowledge of the impact of climate change comes mostly from complex optical instruments that are carried by satellites including spectrometers, narrow linewidth lasers and sophisticated detectors. They provide information that can be used to model key aspects of the Earth’s atmosphere, landforms and oceans.

Optics and photonics can also help us to monitor the behaviour of earthquakes and volcanoes – both terrestrial and underwater – and the risk and impact of tsunamis on coastal populations. The latter requires effective modelling together with satellite and ground-based observations.

Recent developments in optical quantum technologies are also beginning to bear fruit in areas such as high-resolution gravimetry, which allows tiny changes in subsurface mass distributions to be detected by measuring spatial variations in gravity – and with them the movement of magma, enabling volcanic activity to be predicted.

The challenge ahead

The UK-based Photonics Leadership Group (PLG) estimates that by 2035 more than 60% of the UK economy will directly depend on photonics to keep it competitive, becoming one of the top three UK economic sectors. PLG projects that the UK photonics industry will increase from £14.5bn today to £50bn over that period. The next 25 years are likely to see further significant advances in photonics, integrated circuits, far-infrared detector breakthroughs, free-space optical communication and quantum optical technologies.

There are likely to be breakthroughs in bandgap engineering in compound-semiconductor alloy technologies that will let us easily make and operate room-temperature very-long-wavelength infrared detectors and imaging devices. This could boost diagnostic medical imaging for management of pain, cancer detection and neurodiagnostics.

The joint effort between photonics and compound semiconductor materials science will become a significant capability in a sustainable 21st century and beyond. Defence and security are also likely to benefit from long-range spectroscopic identification of trace molecules. Optics and photonics will dominate space, with quantum technologies coming into service for communications and environmental monitoring, even if the proliferation of low-Earth-orbiting space objects is likely to cause congestion and hamper direct line-of-sight communications and monitoring.

Such developments, however, don’t come without their challenges, especially when adapting to the pace of change. Optics has a long history in the UK and the evolving nature of the subject is similar to that faced over a century ago by the Optical Society, Physical Society and the Institute of Physics (see box below).

Education will be key, as will making undergraduate courses attractive and striking a good balance of optics, photonics and fundamental physics in the curriculum. Making sure that students get experience in photonics engineering labs that reflects practical on-the-job tasks will be crucial, as will close partnerships with the photonics industry and professional societies to align course content with the industry’s needs.

Postgraduate photonics research in the UK remains strong, but we cannot rest on our laurels and it must be improved further, if not expanded.

Another challenge will be tackling the gap in optics and photonics advances between low-income nations and those that are high-income. These include access to optics and photonics education, research collaborations and mentoring as well as the need to equip developing nations with optics and photonics expertise to tackle global issues like desertification, climate change and the supply of potable water.

Desertification exacerbates economic, environmental and social issues and is entwined with poverty. According to the United Nations Convention to Combat Desertification, 3.2 billion people worldwide are negatively affected by spreading deserts. The International Commission for Optics is working with the International Science Council to tackle this by offering educational development, improving access to optical technologies and international collaborations with an emphasis on low-income countries.

If I had a crystal ball, I would say that over the next 25 years global economies will depend even more on optics and photonics for their survival, underpinning tighter social, economic and physical networks driven by artificial intelligence and quantum-communication technologies. Optical societies as professional bodies must play a leading role in addressing and communicating these issues head on. After all, only they can pull together like-minded professionals and speak with one voice to the needs and challenges of society.

Why the Optical Group of the Institute of Physics is the UK’s optical society

The Optical Group of the Institute of Physics, which is celebrating its 125th anniversary this year, can trace its roots back to 1899 when the Optical Society of London was formed by a group of enthusiastic optical physicists, led by Charles Parsons and Frank Twyman. Until 1931 it published a journal – Transactions of the Optical Society – which attracted several high-profile physicists including George Paget Thomson and Chandrasekhara Raman.

Many activities of the Optical Society overlapped with those of the Physical Society of London and they held several joint annual exhibitions at Imperial College London. When the two organizations formally merged in 1932, the Optical Group of the Physical Society became the de facto national optical society of the UK and Ireland.

In 1947 the Physical Society – via the Optical Group – became a signatory to the formation of the International Commission for Optics, which is now made up of more than 60 countries and provides policy recommendations and co-ordinates international activities in optics. The Optical Group is also a member of the European Optical Society.

In 1960 the Physical Society merged with the Institute of Physics (IOP), and today, the Optical Group of the IOP, of which I am currently chair, has a membership above 2100. The group represents UK and Irish optics, organizes conferences, funds public engagement projects and supports early-career researchers.

The post Why optics is critical to meeting society’s grand challenges appeared first on Physics World.


Liquid crystals generate entangled photon pairs

24 June 2024 at 10:00
Highly adaptable entanglement: The new technique makes it possible to alter both the flux and the polarization state of the photon pairs simply by changing the orientation of the molecules in the liquid crystal. This can be done either by engineering the sample geometry or applying an electric field. (Courtesy: Adapted from Sultanov, V., Kavčič, A., Kokkinakis, E. et al. Tunable entangled photon-pair generation in a liquid crystal. Nature (2024). https://doi.org/10.1038/s41586-024-07543-5)

Researchers in Germany and Slovenia have found a new, more adaptable way of generating entangled photons for quantum physics applications. The technique, which relies on liquid crystals rather than solid ones, is much more tunable and reconfigurable than today’s methods, and could prove useful in applications such as quantum sensing.

The usual way of generating entangled photon pairs is in a crystal such as lithium niobate that exhibits a nonlinear polarization response to an applied electric field. When a laser beam enters such a crystal, most of the photons pass straight through. A small fraction, however, are converted into pairs of entangled photons via a process known as spontaneous parametric down-conversion (SPDC). Because energy and momentum are conserved, the combined energy and momentum of each entangled pair must equal those of the single pump photon that created it.
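For a pump photon splitting into a “signal” and an “idler” photon, these conservation (phase-matching) conditions read, in standard notation:

\omega_p = \omega_s + \omega_i, \qquad \vec{k}_p = \vec{k}_s + \vec{k}_i

and it is precisely these constraints that correlate – and entangle – the frequencies and directions of the two photons.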

This method is both cumbersome and inflexible, explains team leader Maria Chekhova. “First they grow a crystal, then they cut it in a certain way, and after it’s cut it can only be used in one way,” says Chekhova, an optical physicist at the Friedrich-Alexander Universität Erlangen-Nürnberg and the Max-Planck Institute for the Science of Light, both in Germany. “You cannot generate pairs at one wavelength with one sort of entanglement and then use it in a different way to generate pairs at a different wavelength with a different polarization entanglement. It’s just one rigid source.”

In the new work, Chekhova, Matjaž Humar of the Jožef Stefan Institute in Slovenia and colleagues developed an SPDC technique that instead uses liquid crystals. These self-assembling, elongated molecules are easy to reconfigure with electric fields (as evidenced by their widespread use in optical displays) and some types exhibit highly nonlinear optical effects. For this reason, Noel Clark of the University of Colorado at Boulder, US, observes that “liquid crystals have been in the nonlinear optics business for quite a long time, mostly doing things like second harmonic generation and four-wave mixing”.

Generating and modifying entanglement

Nobody, however, had used them to generate entanglement before. For this, Chekhova, Humar and colleagues turned to the recently developed ferroelectric nematic type of liquid crystals. After preparing multiple 7-8 μm-thick layers of these crystals, they placed them between two electrodes with a predefined twist of either zero, 90° or 180° between the molecules at either end.

When they irradiated these layers with laser light at 685 nm, the photons underwent SPDC with an efficiency almost as high as that of the most commonly used solid crystals of the same thickness. What is more, although individual photons in a pair are always entangled in the time/frequency domain – meaning that their frequencies must be anti-correlated to ensure conservation of energy – the technique produces photons with a broad range of frequencies overall. The team believes this widens its applications: “There are ways to concentrate the emission around a narrow bandwidth,” Chekhova says. “It’s more difficult to create a broadband source.”

The researchers also demonstrated that they could modify the nature of the entanglement between the photons. Although the photons’ polarizations are not normally entangled, applying a voltage across the liquid crystal is enough to make them so. By varying the voltage on the electrodes and the twist on the molecules’ orientations, the researchers could even control the extent of this entanglement — something they confirmed by measuring the degree of entanglement at one voltage and twist setting and noting that it was in line with theoretical predictions.

Potential extensions

The researchers are now exploring several extensions to the work. According to their calculations, it should be possible to use liquid crystals to produce non-classical “squeezed” states of light, in which the uncertainty in one variable drops below the standard quantum limit at the expense of the other.  “We just need higher efficiency,” Chekhova says.

Another possibility would be to manipulate the chirality within the crystal layers with an applied voltage. The team also seeks to develop practical devices: “Pixellated devices could produce photon pairs in which each part of the beam had its own polarization,” Chekhova says. “You could then produce structured light and encode quantum information into the structure of the beam.” This could be useful in sensing, she adds.

“This liquid crystal device has a lot of potential flexibility that would never have been available in crystalline materials,” says Clark, who was not involved in the research. “If you want to change something in a [solid] crystal, then you tweak something, you have to re-grow the crystal and evaluate what you have. But in this liquid crystal, you can mix things in, you can put electric field on and change the orientation.”

The research is published in Nature.

The post Liquid crystals generate entangled photon pairs appeared first on Physics World.


From the lab to Ukraine’s front line

21 June 2024 at 12:00

When Ukraine was invaded by Russia in February 2022, life for the country’s citizens was turned upside down, with scientists no exception. Efforts to help have come from many quarters both within Ukraine and the wider international community. Michael Banks caught up with Holly Tann, who has a PhD in nuclear physics from the University of Liverpool and the University of Jyväskylä, Finland, and Adam McQuire, who is completing a PhD in archaeology, focusing on contemporary conflict analysis, also at Liverpool.

Why did you set up Casus Pax?

Casus Pax, which is a registered and regulated non-profit organization, was formed in the first week of the Russian invasion in February 2022. Adam went to the Polish–Ukrainian Border a few days later, initially on a fact-finding mission to find a way to help civilians escaping the war. He soon began assisting with the construction of an impromptu field hospital, inside a truck stop, to help refugees approaching the border. This field hospital treated more than 300,000 patients in the first month of the war and required a constant supply of equipment and consumables.

Did you have any links with Ukraine?

We have always had a connection with Eastern Europe. Holly has her family roots in western Ukraine while Adam has family who live close to the Ukrainian border in Poland. When the Russian invasion began, it didn’t feel like a distant conflict in some faraway land because Holly was working in Finland at the time.

What kind of support did you have to set up Casus Pax?

At the beginning it was simply the two of us and our van, but friends and family helped us to raise funds to buy aid and transport it over. Slowly we were able to gather a group of motivated, highly skilled volunteers who have been instrumental in developing Casus Pax into what it is today.

Are you now full time?

We work very long hours running Casus Pax, alongside which Adam is still finishing his PhD. Holly left academic research last year after finishing her doctorate.

How many people are involved with the organization?

Casus Pax is run by us on a day-to-day basis. We also have volunteers doing outreach, procurement and fundraising and who travel to Ukraine with us. Daniel Bromley, for example, is a physics postdoc at Imperial College London who volunteers when he can. Jessica Wade – another Imperial physicist – recently became a patron. In Ukraine we have Yuriy Polyezhayev of University of Zaporizhzhia Polytechnic as an education co-ordinator.

Your focus initially was on medical supplies – what has that involved?

Since our operations began we have delivered over a million pieces of lifesaving equipment directly to civilian practitioners across every frontline region. We tend to operate between 1 and 50 km from the front lines. These locations feature the most unstable conditions and therefore need medical resources designed to deal with severe wounds and catastrophic injuries. This equipment, sadly, is often used soon after delivery and it is unlikely that we will run out of places to deliver for quite some time.

Casus Pax in Kherson
The Casus Pax team delivering medical equipment to Kherson in October 2023 (courtesy: Adam McQuire)

Where do you get the supplies?

We buy the majority of the supplies we provide and most is sourced as surplus from the UK National Health Service, UK Ministry of Defence and private hospitals. The rest we receive as donations from the private sector.

Are you solely focused on consumables?

No. We have also provided a fleet of eight ambulances and rescue vehicles to the Ukrainian emergency services. Again, these vehicles are unlikely to survive long enough to be discarded due to age-related wear and will invariably need to be replaced fairly quickly.

What were some of the challenges in the early months of the war?

The first challenge was the language barrier. We are by no means fluent, but knowing some of the language gives you a better understanding of Ukrainian administration and bureaucracy. The second was developing a reputation for efficiency and honesty, which took some time. The longest-running challenge, however, is the constant process of staying safe: learning how to use different intelligence options and security protocols, and ensuring that we are suitably trained and equipped for emergencies.

How often do you return to Ukraine?

We go back every 6–8 weeks. Sometimes we meet people who are truly desperate with nothing left – a family killed and a house destroyed, clinging to the land they grew up on accompanied only by a pet or two. Ukrainians are tough, resilient and stoic but they are not bulletproof nor are they free from the effects of prolonged psychological duress. On one visit to the front-line town of Beryslav in Kherson Oblast in 2023, which was left without reliable clean water after the Kakhovka Dam had failed, we brought medical supplies, water and water filtration equipment and a couple of tonnes of food. Despite the situation, we were greeted warmly and given boiled potatoes and kompot. One elderly woman said to Holly in Ukrainian: “I don’t know who you are and I don’t care that you can’t understand me, I love you.”

You were also asked by the Ukrainian police to help preparations for a nuclear event; what did that entail?

In July 2023 we met leaders of the Zaporizhzhia Regional Administration to discuss the risk posed by the ongoing Russian occupation of the Zaporizhzhia Nuclear Power Plant (ZNPP). The local authorities were deeply troubled and were struggling to generate international co-operation to prepare for the worst. We then received a letter from the Office of the President, recognizing the risk of an accident at the ZNPP and endorsing Casus Pax as an organization positioned to assist.

From the president’s office itself?

Yes. Two weeks later we received a call from the National Police of Ukraine HQ in Kyiv while we were in Scotland picking up an aid vehicle to take to Ukraine. They requested that we help them prepare for a major chemical, biological, radiological and nuclear incident. Three weeks later we made our first delivery of hundreds of thousands of pieces of equipment to the national police based in Zaporizhzhia.

Did your technical backgrounds help with this request?

With the rise in concern over the occupation of the nuclear power plant, it became clear that we were in a niche position to relay information from technical sources to humanitarian organizations. Through Zaporizhzhia we also developed a number of relationships with academic and educational leaders in the city and were asked to work with universities and schools during our deployments.

Can you say more about these educational initiatives?

It was a natural progression for us to move towards supporting schools and universities, after using our academic networks to good effect in Zaporizhzhia. We met Yuriy, now our education co-ordinator, through his translation work with the National Police. Yuriy works in several institutions, having taken on extra teaching roles to compensate for those who have left or been called up to serve. We discussed the challenges faced by the universities and schools in the frontline regions and we visited some institutions to see how learning was continuing.

Casus Pax team
The Casus Pax team at the Zaporizhzhia Polytechnic National University in April 2024 (Courtesy: Zaporizhzhia Polytechnic National University)

What were some of the challenges?

Many school classes had moved into universities without an opportunity to bring their classroom equipment, while older school children and university students were using apparatus in buildings regularly hit by airstrikes. After discussions with officials at the University of Zaporizhzhia Polytechnic, we developed a long-term plan for how Casus Pax might assist. They were keen on developing connections with researchers and institutions to facilitate outreach and cultural exchange. International collaboration collapsed post invasion and without it, academic aspiration has weakened.

What did this involve?

Our first delivery of aid to the university included medical equipment to bolster the university’s bomb shelter reserves and response capacity. But we also supplied 20 Micro:bit kits, which were donated to us by BBC Micro:bit. The University of Zaporizhzhia Polytechnic had previously been using Micro:bits in limited numbers to develop robotics and computing skills. But the university did not have enough to efficiently roll out courses. Now they do and we want to connect schools in the UK with those in Ukraine so that they can run Micro:bit classes in tandem. This has educational benefits but is also morale boosting.

How challenging is it for students?

It’s very hard for a young person to try and compartmentalize war and focus on studies and career ambitions. But the students are remarkable. Thousands of schools have suffered significant damage during the war and have had to close. Tens of thousands of internally displaced students also attend schools in safer areas often under the auspices of universities. This is not unusual, it is the norm. Education is continuing because of the commitment that a reduced number of staff have to the future of their students.

What future plans do you have for education?

In the coming months we will be hosting a platform where researchers can guest lecture remotely about their work or interests at the University of Zaporizhzhia Polytechnic. We want to expand this model to cover the whole of Ukraine, and in time take it international. We hope to have thousands of online resources available by 2025 that can be used anywhere.


What do you also hope for Casus Pax in the future?

We would like to keep our team relatively small, which means it is easier to ensure that every penny is accounted for. At the same time, we need to expand the reach of our operations. Every day we have desperate requests for urgent and often complex medical support – not only from Ukraine but elsewhere too – and sadly the only limiting factor is funding.

What kind of support do you need to achieve this?

Above all else we need donations and sponsorship. The equipment we supply saves lives and we need to continue with this work alongside the new education initiatives. Without financial support the whole operation will stop. We are also asking anyone who is interested to come forward and help, especially those keen to do outreach work.

The post From the lab to Ukraine’s front line appeared first on Physics World.

  •  

Hawaiian volcano erupted ‘like a stomp rocket’

20 June 2024 at 17:00

A series of eruptions at the Hawaiian volcano Kilauea in 2018 may have been driven by a hitherto undescribed mechanism that resembles the “stomp-rocket” toys popular in science demonstrations. While these eruptions are the first in which scientists have identified such a mechanism, researchers at the University of Oregon, US, and the US Geological Survey say it may also occur in other so-called caldera collapse eruptions.

Volcanic eruptions usually fall into one of two main categories. The first is magmatic eruptions, which (as their name implies) are driven by rising magma. The second is phreatic eruptions, which are prompted by ground water flash-evaporating into steam. But the sequence of 12 closely-timed Kilauea eruptions didn’t match either of these categories. According to geophysicist Joshua Crozier, who led a recent study of the eruptions, these eruptions instead appear to have been triggered by a collapse of Kilauea’s subsurface magma reservoir, which contained a pocket of gas and debris as well as molten rock.

When this kilometre-thick chunk of rock dropped, Crozier explains, the pressure of the gas in the pocket suddenly increased. And just as stamping on the gas-filled cavity of a stomp rocket sends a little plastic rocket shooting upwards, the increase in gas pressure within Kilauea blasted plumes of rock fragments and hot gas eight kilometres into the air, leaving behind a collapsed region of crustal rock known as a caldera.
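A back-of-the-envelope picture of the pressure kick (a simplification for illustration, not the study’s three-dimensional simulations): if the falling rock piston rapidly squeezes the gas pocket from volume V_1 down to V_2, adiabatic compression gives

P_2 = P_1 \left(\frac{V_1}{V_2}\right)^{\gamma}

where γ is the heat-capacity ratio of the gas, so even a modest fractional volume change produces a sharp pressure rise – enough, in the stomp-rocket picture, to drive gas and debris up the conduit.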

A common occurrence?

Caldera collapses are fairly common, with multiple occurrences around the world in the past few decades, Crozier says. This means the stomp-rocket mechanism might be behind other volcanic eruptions, too. Indeed, previous studies had hinted at this possibility. “Several key factors led us to speculate along the line of the stomp-rocket, one being that the material erupted during the Kilauea events was largely lithic clasts [broken bits of crustal rock or cooled lava] rather than ‘fresh’ molten magma as occurs in typical magmatic eruptions,” Crozier tells Physics World.

This lack of fresh magma might imply phreatic activity, as was invoked for previous explosive eruptions at Kilauea in 1924. However, in 2018, USGS scientists Paul Hsieh and Steve Ingebritsen used groundwater simulations to show that the rocks around Kilauea’s summit vent should have been too hot for liquid groundwater to flow in at the time the explosions occurred. Seismic, geodetic, and infrasound data also all suggested that the summit region was experiencing early stages of caldera collapse during this time.

First test of the stomp-rocket idea

The new work is based on three-dimensional simulations of how plumes containing different types of matter rise through a conduit and enter the atmosphere. Crozier and colleagues compared these simulations with seismic and infrasound data from previously-published papers, and with plume heights measured by radar. They then connected the plume simulations with seismic inversions they conducted themselves.

The resulting model shows Kilauea’s magma reservoir overlain by a pocket of accumulated high-temperature magmatic gas and lithic clasts. When the reservoir collapsed, the gas and the lithic clasts were driven up through a conduit around 600 m long to erupt particles at a rate of roughly 3000 m³/s.

As well as outlining a new mechanism that could contribute to hazards during caldera collapse eruptions, Crozier and colleagues used subsurface and atmospheric data to constrain Kilauea’s eruption mechanics in more detail than is typically possible. They were able to do this, Crozier says, because Kilauea is unusually well-monitored, being covered with instruments such as ground sensors to detect seismic activity and spectrometers to analyze the gases released.

“Our work provides a valuable opportunity to validate next-generation transient eruptive plume simulations, which could ultimately help improve both ash hazard forecasts and interpretations of the existing geologic eruption record,” says Crozier, who is now a postdoctoral researcher at Stanford University in the US. “For example, I am currently looking into the fault mechanics involved in the sequence of caldera collapse earthquakes that produced these explosions. In most tectonic settings we haven’t been able to observe complete earthquake cycles since they occur over long timescales, so caldera collapses provide valuable opportunities to understand fault mechanics.”

The study is detailed in Nature Geoscience.

The post Hawaiian volcano erupted ‘like a stomp rocket’ appeared first on Physics World.


Linking silicon T centres with light offers a route to fault-tolerant quantum computing

20 June 2024 at 15:55

Today’s noisy quantum processors are prone to errors that can quickly knock a quantum calculation off course. As a result, quantum error correction schemes are used to make some nascent quantum computers more tolerant to such faults.

This involves using a large number of qubits – called “physical” qubits – to create one fault-tolerant “logical” qubit. A useful fault-tolerant quantum computer would have thousands of logical qubits and this would require the integration of millions of physical qubits, which remains a formidable challenge.

In this episode of the Physics World Weekly podcast, I am in conversation with Stephanie Simmons, who is founder and chief quantum officer at Photonic Inc. The Vancouver-based company is developing optically-linked silicon spin qubits – and it has recently announced that it has distributed quantum entanglement between two of its modules.

I spoke with Simmons earlier this month in London at Commercialising Quantum Global 2024, which was organized by Economist Impact. She explains how the company’s qubits – based on T-centre spins in silicon – are connected using telecoms-band photons. Simmons makes the case that the technology can be integrated and scaled to create fault-tolerant computers. We also chat about the company’s manufacturing programme and career opportunities for physicists at the firm.

The post Linking silicon T centres with light offers a route to fault-tolerant quantum computing appeared first on Physics World.


Speed of sound in quark–gluon plasma is measured at CERN

20 June 2024 at 13:00

The speed of sound in a quark–gluon plasma has been measured by observing high-energy collisions between lead nuclei at CERN’s Large Hadron Collider. The work, by the CMS Collaboration, provides a highly precise test of lattice quantum chromodynamics (QCD), and could potentially inform neutron star physics.

The strong interaction – which binds quarks together inside hadrons – is the strongest force in the universe. Unlike the other forces, which become weaker as particles become further apart, its strength grows with increasing separation. What is more, when quarks gain enough energy to move apart, the space between them is filled with quark–antiquark pairs, making the physics ever-more complex as energies rise.

In the interior of a proton or neutron, the quarks and gluons (the particles that mediate the strong interaction) are very close together and effectively neutralize one another's colour charge, leaving just a small perturbation that accounts for the residual strong interaction between protons and neutrons. At very high energies, however, the particles become deconfined, forming a hot, dense and yet almost viscosity-free fluid of quarks and gluons, all strongly interacting with one another. Perturbative calculations fail for this quark–gluon plasma, so other techniques are needed. The standard approach is lattice QCD.

Speed of sound is key

The speed of sound provides a key test of whether the predictions of lattice QCD are correct. “The specific properties of quark–gluon plasma correspond to a specific value of how fast sound will propagate,” says CMS member Wei Li of Rice University in Texas. He says indirect measurements have provided constraints in the past, but the value has never been measured directly.

In the new work, the CMS researchers collided heavy lead ions rather than protons because – like cannonballs compared with bullets – they deliver far more total energy and momentum to each collision. The CMS detector monitored the particles emitted in the collisions, using a two-stage detection system to determine what type of collision had occurred and what particles had been produced.

“We pick the collisions that were almost exactly head-on,” explains Li. “Those types of collisions are rare.” The energy is deposited into the plasma, heating it and leading to the creation of particles. The researchers monitored the energies and momenta of the particles emitted from different collisions to reconstruct the energy density of the plasma immediately after each collision. “We look at the variations between the different groups of events,” he explains. “The temperature of the plasma is tracked based on the energies of the particles that are coming out, because it’s a thermal source that emits particles.”

In this way, the researchers were able to measure the speed at which heat – and therefore energy density – flowed through the plasma. Under these extreme conditions, this is identical to the speed of sound, i.e. the rate at which a pressure disturbance propagates. “In relativity, particle number is not conserved,” says Li. “You can turn particles into energy and energy into particles. But energy is conserved, so we always talk about total energy density.”
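In thermodynamic terms, the quantity being extracted is set by the plasma's equation of state. A minimal sketch of the standard relation, assuming negligible baryon chemical potential (the proxies named in the final comment are typical of such analyses rather than a statement of the exact CMS estimators):

```latex
% Speed of sound from the equation of state, with P the pressure and
% \varepsilon the energy density:
%   c_s^2 = \partial P / \partial\varepsilon .
% At zero baryon chemical potential \varepsilon + P = Ts, so dP = s\,dT
% and d\varepsilon = T\,ds, giving
c_s^2 \;=\; \frac{\partial P}{\partial \varepsilon}
      \;=\; \frac{s\,\mathrm{d}T}{T\,\mathrm{d}s}
      \;=\; \frac{\mathrm{d}\ln T}{\mathrm{d}\ln s}
% so c_s^2 follows from how a temperature proxy (such as the mean transverse
% momentum) varies with an entropy proxy (such as the particle multiplicity).
```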

Even more stringent tests

The team’s findings matched the predictions of lattice QCD, and the researchers would now like to conduct even more stringent tests. “We have extracted the speed of sound at one specific temperature,” says Li, “whereas lattice QCD predicts how the speed of sound varies with temperature as a continuous function. In principle, a more convincing case would be to measure at multiple temperatures and have them all agree with the lattice QCD prediction.” One remarkable prediction of lattice QCD is that, as the quark–gluon plasma cools towards the transition to ordinary hadronic matter, the speed of sound reaches a minimum before increasing again as the temperature drops further and the quarks become bound into hadrons. “It would be remarkable if we could observe that,” he says.

The research is described in a paper in Reports on Progress in Physics.

“I think it’s a good paper,” says nuclear theorist Larry McLerran of the University of Washington in Seattle – who is not a CMS member. He believes its most interesting aspect, however, is not what it shows about the theory being tested but what it demonstrates about the techniques used to test it. “The issue of sound velocity is interesting,” he says. “They have a way of calculating it – actually two ways of calculating it, one of which is kind of hand waving, but then it’s backed up with detailed simulation – and it agrees with lattice gauge theory calculations.”

McLerran is also interested in the potential to study heavy-ion collisions at low energies, and hopes these might give clues about the cold, dense matter in neutron stars. “In heavy ion collisions, you can calculate the sound velocity squared as a function of density using numerical methods, whereas these numerical methods don’t work at high density and low temperature, which is the limiting case for neutron stars. So being able to measure a simple bulk property of the matter and do it well is important.”

The post Speed of sound in quark–gluon plasma is measured at CERN appeared first on Physics World.


Scientists uncover hidden properties of rare-earth element promethium

By: No Author
19 June 2024, 16:50

For the first time, researchers have experimentally examined the chemistry of the lanthanide element promethium. The investigation was carried out by Alex Ivanov and colleagues at Oak Ridge National Laboratory in the US – the same facility at which the element was first discovered almost 80 years ago.

Found in the sixth row of the periodic table, the lanthanide rare-earth metals possess an unusually diverse range of magnetic, optical and electrical properties, which are now exploited in many modern technologies. Yet despite their widespread use, researchers still know very little about the chemistry of promethium, a lanthanide with an atomic number of 61 that was first identified in 1945 by researchers on the Manhattan Project.

“As the world slowly recovered from a devastating war, a group of national laboratory scientists from the closed town of Oak Ridge, Tennessee, isolated an unknown radioactive element,” Ivanov describes. “This last rare-earth lanthanide was subsequently named promethium, derived from the Greek mythology hero Prometheus, who stole fire from heaven for the use of mankind.”

Despite its relatively low atomic number compared with the other lanthanides, promethium's chemical properties have remained elusive in the decades since its discovery. Part of the reason is that promethium is the only lanthanide with no stable isotopes. Only small quantities of synthetic promethium (mostly promethium-147, with a half-life of 2.62 years) are available; these are extracted from nuclear reactors through tedious and energy-intensive purification processes.
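That short half-life is why even tiny quantities of promethium-147 are intensely radioactive, and why handling and purification are so demanding. A back-of-the-envelope sketch of the standard decay arithmetic (my own illustration, not a calculation from the paper):

```python
# Specific activity of promethium-147 from its half-life: A = lambda * N.
import math

N_A = 6.022e23                        # Avogadro's number (atoms per mole)
t_half_s = 2.62 * 365.25 * 24 * 3600  # 2.62-year half-life in seconds
molar_mass = 147.0                    # g/mol, approximate for Pm-147

decay_constant = math.log(2) / t_half_s     # decays per second per atom
atoms_per_gram = N_A / molar_mass
activity = decay_constant * atoms_per_gram  # becquerels per gram

print(f"specific activity: {activity:.2e} Bq/g")
# ~3.4e13 Bq/g, i.e. roughly 900 curies per gram (1 Ci = 3.7e10 Bq)
```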

Ultimately, this limited availability means that researchers have remained in the dark about even the most basic aspects of promethium's chemistry, including the length of the bonds it forms and the number of neighbouring atoms a central promethium atom will bond to in a molecule or crystal lattice.

Ivanov’s team revisited this problem, taking advantage of the latest advances in isotope separation technology. In a careful multi-step process, they harvested atoms of promethium-147 from an aqueous solution of plutonium waste and bonded them to a group of specially selected organic molecules. “By doing this, we could study how promethium interacts with other atoms in a solution environment, providing insights that were previously unknown,” Ivanov explains.

Using synchrotron X-ray absorption spectroscopy to study these interactions, the researchers made the first direct observation of a promethium chemical complex: a molecular structure in which a central promethium atom is bonded to several neighbouring organic molecules.

Altogether, they observed nine promethium-binding oxygen atoms in the complex, which allowed them to probe several of the metal’s fundamental chemical properties for the first time. “We discovered how promethium bonds with oxygen atoms, measured the lengths of these bonds, and compared them to other lanthanides,” Ivanov describes.

Based on these results, the researchers then studied a complete set of comparable chemical complexes spanning all lanthanide elements. This enabled them to experimentally observe the phenomenon of “lanthanide contraction” across the whole lanthanide series for the first time.

Lanthanide contraction describes the decrease in the atomic radii of the lanthanide elements as their atomic number increases, caused by the poor shielding of the growing nuclear charge by the inner 4f electrons. The effect causes the lanthanide–oxygen bond length to shrink across the series. Ivanov's team observed that this shortening was fastest early in the series before slowing as the atomic number increased.
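A rough way to see why the contraction happens is the hydrogen-like picture of an outer electron orbiting a screened nucleus – a qualitative estimate, not a quantitative model of the lanthanides:

```latex
% Effective nuclear charge seen by an outer electron, with screening sigma:
Z_{\mathrm{eff}} \;=\; Z - \sigma
% Hydrogen-like estimate of the orbital radius for principal quantum number n:
r \;\approx\; a_0 \, \frac{n^2}{Z_{\mathrm{eff}}}
% Each step along the series adds one proton, but the added 4f electron
% screens by less than one unit of charge, so Z_eff rises and r shrinks.
```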

The team’s discoveries have filled a glaring gap in our understanding of promethium’s chemistry. By building on their results, the researchers hope that future studies could pave the way for a wide range of important applications for the element.

“This new knowledge could improve the methods used to separate promethium and other lanthanides from one another, which is crucial for advancing sustainable energy systems,” Ivanov describes. “By understanding how promethium bonds in a solution, we can better explore its potential use in advanced technologies like pacemakers, spacecraft power sources and radiopharmaceuticals.”

The researchers report their findings in Nature.

The post Scientists uncover hidden properties of rare-earth element promethium appeared first on Physics World.


Physics and sport: flying balls, perfecting technique, and wellbeing in academia

19 June 2024, 13:28

For sports fans, the next few weeks will bring excitement and drama. The Euro 2024 football (soccer) tournament is under way in Germany and the Copa América is about to kick off in the US. Then at the end of July, the Olympics starts in Paris as athletes from across the world compete to run, jump, sail, cycle and dance themselves into the history books. In this episode of Physics World Stories, you will hear from two US physicists with a profound connection with sport.

The first guest is John Eric Goff of the University of Lynchburg, author of Gold Medal Physics: the Science of Sports. After training as a condensed-matter theorist, Goff has focused his research career on the physics of sport. In a wide-ranging conversation with podcast host Andrew Glester, he discusses everything from the flight of balls to the biodynamics of martial arts. He also considers how data and AI are changing both the practice and the spectacle of sport.

Our second guest is Harvard University's Jenny Hoffman, who recently became the fastest woman to run across the US. In November 2023 Hoffman completed the 3000-mile (roughly 4800 km) journey from San Francisco to New York City in just 47 days, 12 hours and 35 minutes. Hoffman, who studies the electronic properties of exotic materials, speaks about the benefits of having hobbies and passions outside of work. For her, running plays an essential role in maintaining wellbeing during her successful career in academia.

The post Physics and sport: flying balls, perfecting technique, and wellbeing in academia appeared first on Physics World.


SNMMI ‘Image of the Year’ visualizes the brain as never before

By: Tami Freeman
18 June 2024, 10:30
Ultrahigh-resolution images: a series of PET images recorded by the NeuroEXPLORER brain PET scanner was chosen by the Society of Nuclear Medicine and Molecular Imaging as its Image of the Year. (Courtesy: Richard E Carson et al., Yale University, New Haven, CT)

A series of ultrahigh-resolution brain PET images has been selected as the SNMMI Image of the Year. At each of its annual meetings, the Society of Nuclear Medicine and Molecular Imaging chooses an image that represents the most promising advances in the field, with this year’s winner picked from more than 1500 submitted abstracts.

The winning images, recorded by the ultrahigh-performance NeuroEXPLORER human brain PET scanner, highlight targeted tracer uptake in specific brain nuclei (clusters of neurons), providing detailed information on neuronal and functional activity. The technology could dramatically expand the scope of brain PET studies, with potential to advance the treatment of brain diseases.

NeuroEXPLORER was built by a collaboration of researchers from Yale University, the University of California, Davis and United Imaging Healthcare. They designed the scanner to provide ultrahigh sensitivity and ultrahigh spatial resolution, as well as to perform continuous correction for head motion.

“If we are going to get high resolution, we have to deal with patient motion,” Richard Carson from Yale University explained at the recent SNMMI Annual Meeting in Toronto. “But the challenge has always been sensitivity, even with a high-resolution system. In a late frame for a carbon-11 study, you’re really scanning on fumes – it’s hard to generate the quality of images you’d like in order to really use the resolution that’s possible.”

To achieve the highest possible resolution, ideally better than 2 mm, the team designed a detector micro-block comprising a 4 × 2 array of 1.56 × 3.07 × 20 mm LYSO crystals, read out by four silicon photomultipliers. The 1.5 mm transaxial pitch provides extremely high resolution, while the larger axial pitch is used to read out depth-of-interaction (DOI) data.

The NeuroEXPLORER assembles 20 detector modules, each containing five or six detector blocks (12 × 12 microblocks), in a cylindrical ring with a diameter of 52.4 cm. The scanner’s high sensitivity is enabled by having a long (49.5 cm) axial field-of-view, building on pioneering work from UC Davis on the EXPLORER system. Motion correction is performed using a Vicra tracking system with a built-in camera that collects data from the face for markerless motion tracking.
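Putting the stated numbers together gives a sense of the system's scale. A rough tally in Python, keeping the article's "five or six" blocks per module as a range and ignoring inter-crystal gaps (both my assumptions):

```python
# Rough component tally for the NeuroEXPLORER geometry described above.
import math

crystals_per_microblock = 4 * 2   # each micro-block is a 4 x 2 LYSO array
microblocks_per_block = 12 * 12   # each detector block is 12 x 12 micro-blocks
blocks_per_module = (5, 6)        # "five or six detector blocks" per module
n_modules = 20

crystals_per_block = crystals_per_microblock * microblocks_per_block  # 1152
low, high = (n_modules * b * crystals_per_block for b in blocks_per_module)
print(f"total LYSO crystals: {low:,} to {high:,}")   # 115,200 to 138,240

# Transaxial space available to each module on the 52.4 cm diameter ring:
print(f"per-module arc length: {math.pi * 52.4 / n_modules:.1f} cm")  # ~8.2 cm
```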

In performance tests, the NeuroEXPLORER demonstrated a sensitivity of 46 kcps/MBq at the centre, a transverse spatial resolution of less than 1.4 mm with OSEM (ordered-subset expectation-maximization) reconstruction, and a DOI resolution of less than 4 mm. The scanner also performs time-of-flight measurements with an average resolution of 236 ps. The team provide detailed system characteristics in the Journal of Nuclear Medicine.
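For context, the quoted time-of-flight resolution converts directly into a localization length along each line of response – a standard back-of-the-envelope conversion, not a figure from the paper:

```latex
% Positional uncertainty from a coincidence timing resolution \Delta t:
\Delta x \;=\; \frac{c \, \Delta t}{2}
         \;=\; \frac{(3\times 10^{8}\,\mathrm{m\,s^{-1}})\,(236\times 10^{-12}\,\mathrm{s})}{2}
         \;\approx\; 3.5\ \mathrm{cm}
% i.e. each detected event is pinned down to a few centimetres along its
% line of response before image reconstruction even begins.
```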

Exceptional images

In their latest study, presented by Carson at the SNMMI Annual Meeting, the researchers compared human brain images from the NeuroEXPLORER with images from the current state-of-the-art scanner, the High Resolution Research Tomograph (HRRT).

The team scanned healthy volunteers, using both PET systems on different days, after administration of five different radiotracers. The tracers target various features in the brain, including synaptic density (imaged with 18F-SynVesT-1), dopamine receptors (11C-PHNO) and transporters (18F-FE-PE2I), and glucose metabolism (18F-FDG). Carson also shared some late-breaking data using the tracer 18F-Flutabine.

Overall, the team observed a dramatic improvement in the resolution and quality of NeuroEXPLORER images compared with those recorded by the HRRT. Carson presented a series of examples to the SNMMI audience.

NeuroEXPLORER images using 18F-FDG, for instance, visualized extremely small substructures with much higher contrast than the corresponding HRRT scans. Time–activity curves in the caudate and thalamus showed that the NeuroEXPLORER recorded higher standardized uptake values (SUVs) than the HRRT. Likewise, late images of synaptic density, recorded 60–90 min post-injection, showed higher contrast and greater SUVs with the NeuroEXPLORER than with the HRRT scanner.

Carson also presented binding potential images using the dopamine receptor tracer 11C-PHNO. While noise is higher with a 11C-based tracer, the NeuroEXPLORER scans showed higher values than the HRRT images. “This was one of the really good days,” he said. “We saw two hotspots in the anterior thalamus – we’re not sure exactly which subnucleus within there – but we’re seeing beautiful hotspots consistent with high dopaminergic innervation in those regions. If you look at the HRRT images, it’s there, but not something you could commit to.”

Carson noted that the NeuroEXPLORER’s long axial field-of-view enables the scanner to also image the carotid arteries in the neck. In addition, the team is using 18F-FDG to image head-and-neck tumours, for example to visualize lymph nodes in the neck. “This is an exciting opportunity in terms of being able to see much smaller tumours,” he said.

The team’s ongoing projects include optimization of the reconstruction parameters, further improving the camera for motion correction, and fully characterizing the resolution of the PET images using brain phantoms with variable contrast.

As well as imaging of small brain structures with targeted radiotracers, future studies will also include more imaging of the spinal cord and investigations of the dynamics of neurotransmitter release. “One exciting part to think about is that, because of the sensitivity, we can drop the radiation dose and begin to scan adolescents, for example with autism or schizophrenia,” said Carson.

“The dramatic improvement in resolution and overall quality of the NeuroEXPLORER images compared to the HRRT images is clear,” says Heather Jacene, chair of SNMMI’s Scientific Program Committee, in a press statement. “The NeuroEXPLORER has the potential to be a gamechanger in research for conditions such as Alzheimer’s disease, Parkinson’s disease, epilepsy and mental illnesses.”

The post SNMMI ‘Image of the Year’ visualizes the brain as never before appeared first on Physics World.
