
Hybrid irradiation could facilitate clinical translation of FLASH radiotherapy

Dosimetric comparisons: Treatment plans for a prostate cancer patient, showing the dose distributions for a conventional dose rate (CDR) plan, a hybrid ultrahigh-dose rate (UHDR)–conventional (HUC) plan, and the UHDR part of the HUC plan, delivered by a 250 MeV electron field. (Courtesy: CC BY 4.0/Radiother. Oncol. 10.1016/j.radonc.2024.110576)

FLASH radiotherapy is an emerging cancer treatment that delivers radiation at extremely high dose rates within a fraction of a second. This innovative radiation delivery technique, dramatically faster than conventional radiotherapy, reduces radiation injury to surrounding healthy tissues while effectively targeting malignant tumour cells.

Preclinical studies in laboratory animals have demonstrated that FLASH radiotherapy is at least equivalent to conventional radiotherapy, and may produce better anti-tumour effects in some types of cancer. The biological “FLASH effect”, observed for ultrahigh-dose rate (UHDR) irradiations, spares normal tissue compared with conventional dose rate (CDR) irradiations while retaining toxicity to the tumour.

By widening the therapeutic window, FLASH radiotherapy has the potential to benefit many patients requiring radiotherapy. As such, efforts are underway worldwide to overcome the clinical challenges to the safe adoption of FLASH into clinical practice. Because the FLASH effect has mostly been investigated using broad UHDR electron beams, which have limited range and are best suited to treating superficial lesions, one important challenge is to find a way to treat deep-seated tumours effectively.

In a proof-of-concept treatment planning study, researchers in Switzerland demonstrated that a hybrid approach combining UHDR electron and CDR photon radiotherapy may achieve equivalent dosimetric effectiveness and quality to conventional radiotherapy, for the treatment of glioblastoma, pancreatic cancer and localized prostate cancer. The team, at Lausanne University Hospital and the University of Lausanne, report the findings in Radiotherapy and Oncology.

Combined device

This hybrid treatment could be facilitated using a linear accelerator (linac) with the capability to generate both UHDR electron beams and CDR photon beams. Such a radiotherapy device could eliminate concerns relating to the purchase, operational and maintenance costs of other proposed FLASH treatment devices. It would also overcome the logistical hurdles of needing to move patients between two separate radiotherapy treatment rooms and immobilize them identically twice.

For their study, the Lausanne team presumed that such a dual-use clinically approved linac exists. This linac would deliver a bulk radiation dose by a UHDR electron beam in a less conformal manner to achieve the FLASH effect, and then deliver conventional intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT) to enhance dosimetric target coverage and conformity.

Principal investigator Till Böhlen and colleagues created a machine model that simulates 3D-conformal broad electron beams with a homogeneous parallel fluence. They developed treatments that deliver a single broad UHDR electron beam with case-dependent energy of between 20 and 250 MeV for every treatment fraction, together with a CDR VMAT to produce a conformal dose delivery to the planning target volume (PTV).

The tumours for each of the three cancer cases required simple, mostly round PTVs that could be covered by a single electron beam. Each plan’s goal was to deliver the majority of the dose per treatment with the UHDR electron beam, while achieving acceptable PTV coverage, homogeneity and sparing of critical organs-at-risk.

Plan comparisons

The researchers assessed plan quality based on absorbed dose distributions, dose–volume histograms and dose metric comparisons with the CDR reference plans used for clinical treatments. In all cases, the hybrid plans exhibited comparable dosimetric quality to the clinical plans. The team also evaluated dose metrics for the portions of the dose delivered by the UHDR electron beam and by the CDR VMAT, observing that the hybrid plans delivered the majority of the PTV dose, and much of the dose to surrounding tissues, at UHDR.

“This study demonstrates that hybrid treatments combining an UHDR electron field with a CDR VMAT may provide dosimetrically conformal treatments for tumours with simple target shapes in various body sites and depths in the patient, while delivering the majority of the prescribed dose per fraction at UHDR without delivery pauses,” the researchers write.

In another part of the study, the researchers estimated the potential FLASH sparing effect achievable with their hybrid technique, using the glioblastoma case as an example. They assumed one normal-tissue sparing scenario with an onset of FLASH sparing at a threshold dose of 11 Gy/fraction, and a more favourable scenario with sparing onset at 3 Gy/fraction. The treatment comprised a single-fraction 15 Gy UHDR electron boost, supplemented with 26 fractions of CDR VMAT. The two scenarios showed a FLASH sparing magnitude of 10% for the first and a more substantial 32% sparing of brain tissue for the second.
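
To see how the sparing-onset threshold changes the picture, here is a minimal toy sketch in Python. It is not the authors’ planning code: the dose-modifying factor and the handful of voxel doses are invented for illustration, and the point is simply that dose delivered at UHDR above the threshold is partly discounted, so a lower onset threshold spares more normal tissue.

```python
# Toy model of threshold-based FLASH sparing (illustrative only; not the study's model).
# A hypothetical dose-modifying factor (dmf) is applied only to the part of the
# per-fraction UHDR dose that exceeds the sparing-onset threshold.
import numpy as np

def effective_dose(uhdr_dose_gy, threshold_gy, dmf=0.7):
    """Biologically effective normal-tissue dose under the toy sparing model."""
    d = np.asarray(uhdr_dose_gy, dtype=float)
    above = np.clip(d - threshold_gy, 0.0, None)       # dose above the onset threshold
    return np.minimum(d, threshold_gy) + dmf * above   # only that part is "spared"

# Hypothetical normal-tissue voxel doses (Gy) near a single 15 Gy UHDR electron boost
voxel_doses = np.array([2.0, 5.0, 8.0, 12.0, 15.0])

for threshold in (11.0, 3.0):   # the two sparing-onset scenarios discussed above
    eff = effective_dose(voxel_doses, threshold)
    sparing = 100.0 * (1.0 - eff.sum() / voxel_doses.sum())
    print(f"onset at {threshold:4.1f} Gy/fraction -> overall sparing ~ {sparing:.0f}%")
```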

“Following up on this pilot study focusing on feasibility, the team is currently working on improving the joint optimization of the UHDR and CDR dose components to further enhance plan quality, flexibility and UHDR proportion of the delivered dose using the [hybrid] treatment approach,” Böhlen tells Physics World. “Additional work focuses on quantifying its biological benefits and advancing its technical realization.”


Trailblazer: astronaut Eileen Collins reflects on space, adventure, and the power of lifelong learning

In this episode of Physics World Stories, astronaut Eileen Collins shares her extraordinary journey as the first woman to pilot and command a spacecraft. Collins broke barriers in space exploration, inspiring generations with her courage and commitment to discovery. Reflecting on her career, she discusses not only her time in space but also her lifelong sense of adventure and her recent passion for reading history books. Today, Collins frequently shares her experiences with audiences around the world, encouraging curiosity and inspiring others to pursue their dreams.

Joining the conversation is Hannah Berryman, director of the new documentary SPACEWOMAN, which is based on Collins’ memoir Through the Glass Ceiling to the Stars, co-written with Jonathan H Ward. The British filmmaker describes what attracted her to Collins’ story and the universal messages it reveals. Hosted by science communicator Andrew Glester, this episode offers a glimpse into the life of a true explorer – one whose spirit of adventure knows no bounds.

SPACEWOMAN has its world premiere on 16 November 2024 at DOC NYC. Keep an eye on the documentary’s website for details of how you can watch the film wherever you are.



Venkat Srinivasan: ‘Batteries are largely bipartisan’

Which battery technologies are you focusing on at Argonne?

We work on everything. We work on lead-acid batteries, a technology that’s 100 years old, because the research community is saying, “If only we could solve this problem with cycle life in lead-acid batteries, we could use them for energy storage to add resilience to the electrical grid.” That’s an attractive prospect because lead-acid batteries are extremely cheap, and you can recycle them easily.

We work a lot on lithium-ion batteries, which is what you find in your electric car and your cell phone. The big challenge there is that lithium-ion batteries use nickel and cobalt, and while you can get nickel from a few places, most of the cobalt comes from the Democratic Republic of Congo, where there are safety and environmental concerns about exactly how that cobalt is being mined, and who is doing the mining. Then there’s lithium itself. The supply chain for lithium is concentrated in China, and we saw during COVID the problems that can cause. You have one disruption somewhere and the whole supply chain collapses.

We’re also looking at technologies beyond lithium-ion batteries. If you want to start using batteries for aviation, you need batteries with a long range, and for that you have to increase energy density. So we work on things like solid-state batteries.

Finally, we are working on what I would consider really “out there” technologies, where it might be 20 years before we see them used. Examples might be lithium-oxygen or lithium-sulphur batteries, but there’s also a move to go beyond lithium because of the supply chain issues I mentioned. One alternative might be to switch to sodium-based batteries. There’s a big supply of soda ash in the US, which is the raw material for sodium, and sodium batteries would allow us to eliminate cobalt while using very little nickel. If we can do that, the US can be completely reliant on its own domestic minerals and materials for batteries.

What are the challenges associated with these different technologies?

Frankly, every chemistry has its challenges, but I can give you an example.

If you look at the periodic table, the most electropositive element is lithium, while the most electronegative is fluorine. So you might think the ultimate battery would be lithium-fluorine. But in practice, nobody should be using fluorine – it’s super dangerous. The next best option is lithium-oxygen, which is nice because you can get oxygen from the air, although you have to purify it first. The energy density of a lithium-oxygen battery is comparable to that of gasoline, and that is why people have been trying to make solid-state lithium-metal batteries since before I was born.

Building batteries: Venkat Srinivasan (right) discusses battery research with materials scientist Arturo Gutierrez in one of the energy storage discovery labs at Argonne National Laboratory. (Courtesy: Argonne National Laboratory)

The problem is that when you charge a battery with a lithium-metal anode, the lithium deposits on the metal surface, and unfortunately it doesn’t form a thin, planar layer. Instead, it forms needle-like structures called dendrites that can grow through the battery’s separator and short the cell. Battery shorting is never a good thing.

Now, if you put a mechanically hard material next to the lithium metal, you can stop the dendrites from growing through. It’s like putting in a concrete wall next to the roots of a tree to stop the roots growing into the other side. But if you have a crack in your concrete wall, the roots will find a way – they will actually crack the concrete – and exactly the same thing happens with dendrites.

So the question becomes, “Can we make a defect-free electrolyte that will stop the dendrites?” Companies have taken a shot at this, and on the small scale, things look great: if you’re making one or two devices, you can have incredible control. But in a large-format manufacturing setup where you’re trying to make hundreds of devices per second, even a single defect can come back to bite you. Going from the lab scale to the manufacturing scale is such a challenge.

What are the major goals in battery research right now?

It depends on the application. For electric cars, we still have to get the cost down, and my sense is that we’ll ultimately need batteries that charge in five minutes because that’s how long it takes to refuel a gasoline-powered car. I worry about safety, too, and of course there’s the supply-chain issue I mentioned.

But if you forget about supply chains for a second, I think if we can get fast charging with incredibly safe batteries while reducing the cost by a factor of two, we are golden. We’ll be able to do all sorts of things.

Charging up: Developing better batteries for electric vehicles is a major goal of research in Argonne’s ACCESS collaboration. (Courtesy: Argonne National Laboratory)

For aviation, it’s a different story. We think the targets are anywhere from increasing energy density by a factor of two for the air taxi market, all the way to a factor of six if you want an electric 737 that can fly from Chicago to Washington, DC with 75 passengers. That’s kind of hard. It may be impossible. You can go for a hybrid design, in which case you will not need as much energy density, but you need a lot of power density because even when you’re landing, you still have to defy gravity. That means you need power even when the vehicle is in its lowest state of charge.

The political landscape in the US is shifting as the Biden administration, which has been very focused on clean energy, makes way for a second presidential term for Donald Trump, who is not interested in reducing carbon emissions. How do you see that impacting battery research?

If you look at this question historically, ReCell, which is Argonne’s R&D centre for battery recycling, got established during the first Trump administration. Around the same time, we got the Federal Consortium for Advanced Batteries, which brought together the Department of Energy, the Department of Defense, the intelligence community, the State Department and the Department of Commerce. The reason all those groups were interested in batteries is that there’s a growing feeling that we need to have energy independence in the US when it comes to supply chains for batteries. It’s an important technology, there’s lots of innovations, and we need to find a way to move them to market.

So that came about during the Trump administration, and then the Biden administration doubled down on it. What that tells me is that batteries are largely bipartisan, and I think that’s at least partly because you can have different motivations for buying them. Many of my neighbours aren’t particularly thinking about carbon emissions when they buy an electric vehicle (EV). They just want to go from zero to 60 in three seconds. They love the experience. Similarly, people love to be off-grid, because they feel like they’re controlling their own stuff. I suspect that because of this, there will continue to be largely bipartisan support for EVs. I remain hopeful that that’s what will happen.

  • Venkat Srinivasan will appear alongside William Mustain and Martin Freer at a Physics World Live panel discussion on battery technologies on 21 November 2024. Sign up here.


UK plans £22bn splurge on carbon capture and storage

Further details have emerged over the UK government’s pledge to spend almost £22bn on carbon capture and storage (CCS) in the next 25 years. While some climate scientists feel the money is vital to decarbonise heavy industry, others have raised concerns about the technology itself, including its feasibility at scale and potential to extend fossil fuel use rather than expanding renewable energy and other low-carbon technologies.

In 2023 the UK emitted about 380 million tonnes of carbon dioxide equivalent and the government claims that CCS could remove more than 8.5 million tonnes each year as part of its effort to be net-zero by 2050. Although there are currently no commercial CCS facilities in the UK, last year the previous Conservative government announced funding for two industrial clusters: HyNet in Merseyside and the East Coast Cluster in Teesside.

Projects at both clusters will capture carbon dioxide from various industrial sites, including hydrogen plants, a waste incinerator, a gas-fired power station and a cement works. The gas will then be transported down pipes to offshore storage sites, such as depleted oil and gas fields. According to the new Labour government, the plans will create 4000 jobs, with the wider CCS industry potentially supporting 50,000 roles.

Government ministers claim the strategy will make the UK a global leader in CCS and hydrogen production and is expected to attract £8bn in private investment. Rachel Reeves, the chancellor, said in September that CCS is a “game-changing technology” that will “ignite growth”. The Conservatives’ strategy also included plans to set up two other clusters, but no progress has been made on these yet.

The new investment in CCS comes after advice from the independent Climate Change Committee, which said it is necessary for decarbonising the UK’s heavy industry and for the UK to reach its net-zero target. The International Energy Agency (IEA) and the Intergovernmental Panel on Climate Change have also endorsed CCS as critical for decarbonisation, particularly in heavy industry.

“The world is going to generate more carbon dioxide from burning fossil fuels than we can afford to dump into the atmosphere,” says Myles Allen, a climatologist at the University of Oxford. “It is utterly unrealistic to pretend otherwise. So, we need to scale up a massive global carbon dioxide disposal industry.” Allen adds, however, that discussions are needed about how CCS is funded. “It doesn’t make sense for private companies to make massive profits selling fossil fuels while taxpayers pay to clean up the mess.”

Out of options

Globally there are around 45 commercial facilities that capture about 50 million tonnes of carbon annually, roughly 0.14% of global emissions. According to the IEA, up to 435 million tonnes of carbon could be captured every year by 2030, depending on the progress of more than 700 announced CCS projects.
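
As a rough consistency check on the figures quoted here and in the opening section (simple arithmetic with the article’s rounded numbers, nothing more), the quoted capture rates can be set against total emissions:

```python
# Back-of-the-envelope arithmetic using rounded figures quoted in the article
uk_emissions_mt = 380        # UK emissions in 2023, Mt CO2-equivalent
uk_ccs_removal_mt = 8.5      # government's claimed annual CCS removal
print(f"UK CCS claim ~ {100 * uk_ccs_removal_mt / uk_emissions_mt:.1f}% of 2023 emissions")

global_capture_mt = 50       # current global commercial CCS capture, Mt/year
global_share = 0.0014        # stated share of global emissions (0.14%)
print(f"implied global emissions ~ {global_capture_mt / global_share / 1000:.0f} Gt CO2/year")
```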

One key part of the UK government’s plans is to use CCS to produce so-called “blue” hydrogen. Most hydrogen is currently made by reacting methane from natural gas with steam over a catalyst, producing carbon monoxide and carbon dioxide as by-products. Blue hydrogen involves capturing and storing those by-products, thereby cutting carbon emissions.

But critics warn that blue hydrogen continues our reliance on fossil fuels and risks leaks along the natural gas supply chain. There are also concerns about its commercial feasibility. The Norwegian energy firm Equinor, which is set to build several UK-based hydrogen plants, has recently abandoned plans to pipe blue hydrogen to Germany, citing cost and lack of demand.

“The hydrogen pipeline hasn’t proved to be viable,” Equinor spokesperson Magnus Frantzen Eidsvold told Reuters, adding that its plans to produce hydrogen had been “put aside”. Shell has also scrapped plans for a blue hydrogen plant in Norway, saying that the market for the fuel had failed to materialise.


According to the Institute for Energy Economics and Financial Analysis (IEEFA), CCS “is costly, complex and risky with a history of underperformance and delays”. It believes that money earmarked for CCS would be better spent on proven decarbonisation technologies such as building insulation, renewable power, heat pumps and electric vehicles. It says the UK’s plans will make it “more reliant on fossil gas imports” and send “the wrong signal internationally about the need to stop expanding fossil fuel infrastructure”.

After delays to several CCS projects in the EU, there are also questions about progress towards its target of storing 50 million tonnes of carbon by 2030. Press reports have recently revealed, for example, that a pipeline connecting Germany’s Rhine-Ruhr industrial heartland to a Dutch undersea carbon storage project will not come online until at least 2032.

Jessica Jewell, an energy expert at Chalmers University in Sweden, and colleagues have also found that CCS plants have a failure rate of about 90% largely because of poor investment prospects (Nature Climate Change 14 1047). “If we want CCS to expand and be taken more seriously, we have to make projects more profitable and make the financial picture work for investors,” Jewell told Physics World.

Subsidies like the UK plan could do so, she says, pointing out that wind power, for example, initially benefited from government support to bring costs down. Jewell’s research suggests that by cutting failure rates and enabling CCS to grow at the pace wind power did in the 2000s, it could capture a “not insignificant” 600 gigatonnes of carbon dioxide by 2100, which could help decarbonise heavy industry.

That view is echoed by Marcelle McManus, director of the Centre for Sustainable Energy Systems at the University of Bath, who says that decarbonising major industries such as cement, steel and chemicals is challenging and will benefit from CCS. “We are in a crisis and need all of the options available,” she says. “We don’t currently have enough renewable electricity to meet our needs, and some industrial processes are very hard to electrify.”

Although McManus admits that we need “some storage of carbon”, she says it is vital to “create the pathways and technologies for a defossilised future”. CCS alone is not the answer, and that, says Jewell, means also rapidly expanding low-carbon technologies such as wind, solar and electric vehicles. “To meet our climate targets, we do face difficult choices. There is no easy way to get there.”


From melanoma to malaria: photoacoustic device detects disease without taking a single drop of blood

Malaria remains a serious health concern, with annual deaths rising each year since 2019 and almost half of the world’s population at risk of infection. Existing diagnostic tests are less than optimal and all rely on obtaining an invasive blood sample. Now, a research collaboration from the USA and Cameroon has demonstrated a device that can non-invasively detect this potentially deadly infection without requiring a single drop of blood.

Currently, malaria is diagnosed using optical microscopy or antigen-based rapid diagnostic tests, but both methods have low sensitivity. Polymerase chain reaction (PCR) tests are more sensitive, but still require blood sampling. The new platform – Cytophone – uses photoacoustic flow cytometry (PAFC) to rapidly identify malaria-infected red blood cells via a small probe placed on the back of the hand.

PAFC works by delivering low-energy laser pulses through the skin into a blood vessel and recording the thermoacoustic signals generated by absorbers in the circulating blood. Cytophone, invented by Vladimir Zharov from the University of Arkansas for Medical Sciences, was originally developed as a universal diagnostic platform and first tested clinically for the detection of cancerous melanoma cells.

“We selected melanoma because of the possibility of performing label-free detection of circulating cells using melanin as an endogenous biomarker,” explains Zharov. “This avoids the need for in vivo labelling by injecting contrast agents into blood.” For malaria diagnosis, Cytophone detects haemozoin, an iron crystal that accumulates in red blood cells infected with malaria parasites. These haemozoin biocrystals have unique magnetic and optical properties, making them a potential diagnostic target.

Photoacoustic detection: Schematic of the focused ultrasound transducer array assessing a blood network. (Courtesy: Nat. Commun. 10.1038/s41467-024-53243-z)

“The similarity between melanin and haemozoin biomarkers, especially the high photoacoustic contrast above the blood background, motivated us to bring a label-free malaria test with no blood drawing to malaria-endemic areas,” Zharov tells Physics World. “To build a clinical prototype for the Cameroon study we used a similar platform and just selected a smaller laser to make the device more portable.”

The Cytophone prototype uses a 1064 nm laser with a linear beam shape and a high pulse rate to interrogate fast moving blood cells within blood vessels. Haemozoin nanocrystals in infected red blood cells absorb this light (more strongly than haemoglobin in normal red blood cells), heat up and expand, generating acoustic waves. These signals are detected by an array of 16 tiny ultrasound transducers in acoustic contact with the skin. The transducers have focal volumes oriented in a line across the vessel, which increases sensitivity and resolution, and simplifies probe navigation.

In vivo testing

Zharov and collaborators – also from the Yale School of Public Health and the University of Yaoundé I – tested the Cytophone in 30 Cameroonian adults diagnosed with uncomplicated malaria. They used data from 10 patients to optimize device performance and assess safety. They then performed a longitudinal study in the other 20 patients, who attended four or five visits over up to 37 days following antimalarial therapy, contributing 94 visits in total.

Photoacoustic waveforms and traces from infected blood cells have a particular shape and duration, and a different time delay to that of background skin signals. The team used these features to optimize signal processing algorithms with appropriate averaging, filtration and gating to identify true signals arising from infected red blood cells. As the study subjects all had dark skin with high melanin content, this time-resolved detection also helped to avoid interference from skin melanin.
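
As an illustration of that time-resolved gating, the sketch below accepts only peaks whose acoustic arrival time falls in a window expected for the vessel, after the earlier skin-melanin response, and counts those above a noise threshold. The gate, threshold, sample spacing and synthetic data are all hypothetical; this is not the team’s processing pipeline.

```python
# Minimal sketch of time-gated photoacoustic peak counting (illustrative values only)
import numpy as np

def count_vessel_peaks(traces, dt_us, gate_us=(1.0, 3.0), threshold=5.0):
    """Count traces with an above-threshold peak inside the vessel time gate.

    traces:  2D array, one photoacoustic waveform per laser pulse
    dt_us:   sample spacing in microseconds
    gate_us: accepted acoustic arrival times (skin signals arrive earlier)
    """
    t = np.arange(traces.shape[1]) * dt_us
    in_gate = (t >= gate_us[0]) & (t <= gate_us[1])
    gated = np.where(in_gate, np.abs(traces), 0.0)
    return int(np.sum(gated.max(axis=1) > threshold))

# Synthetic example: background noise plus occasional strong, delayed "infected cell" peaks
rng = np.random.default_rng(0)
traces = rng.normal(0.0, 1.0, size=(600, 200))   # 600 laser pulses, 200 samples each
traces[::100, 120] += 12.0                       # a few large peaks arriving at ~2.4 us
print(count_vessel_peaks(traces, dt_us=0.02), "candidate infected-cell events")
```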

On visit 1 (the day of diagnosis), 19/20 patients had detectable photoacoustic signals. Following treatment, these signals consistently decreased with each visit. Cytophone-positive samples exhibited median photoacoustic peak rates of 1.73, 1.63, 1.18 and 0.74 peaks/min on visits 1–4, respectively. One participant had a positive signal on visit 5 (day 30). The results confirm that Cytophone is sensitive enough to detect low levels of parasites in infected blood.

The researchers note that Cytophone detected the most common and deadliest species of malaria parasite, as well as one infection by a less common species and two mixed infections. “That was a really exciting proof-of-concept with the first generation of this platform,” says co-lead author Sunil Parikh in a press statement. “I think one key part of the next phase is going to involve demonstrating whether or not the device can detect and distinguish between species.”

Team work: The researchers from the USA and Cameroon are using photoacoustic flow cytometry to rapidly detect malaria infection. (Courtesy: Sunil Parikh)

Performance comparison

Compared with invasive microscopy-based detection, Cytophone demonstrated 95% sensitivity at the first visit and 90% sensitivity during the follow-up period, with 69% specificity and an area under the ROC curve of 0.84, suggesting excellent diagnostic performance. Cytophone also approached the diagnostic performance of standard PCR tests, with scope for further improvement.

Staff required just 4–6 h of training to operate Cytophone, plus a few days’ experience to achieve optimal probe placement. And with minimal consumables required and the increasing affordability of lasers, the researchers estimate that the cost per malaria diagnosis will be low. The study also confirmed the safety of the Cytophone device. “Cytophone has the potential to be a breakthrough device allowing for non-invasive, rapid, label-free and safe in vivo diagnosis of malaria,” they conclude.

The researchers are now performing further malaria-related clinical studies focusing on asymptomatic individuals and children (for whom the needle-free aspect is particularly important). Simultaneously, they are continuing melanoma trials to detect early-stage disease and investigating the use of Cytophone to detect circulating blood clots in stroke patients.

“We are integrating multiple innovations to further enhance Cytophone’s sensitivity and specificity,” says Zharov. “We are also developing a cost-effective wearable Cytophone for continuous monitoring of disease progression and early warning of the risk of deadly disease.”

The study is described in Nature Communications.


Quantized vortices seen in a supersolid for the first time

Quantized vortices – one of the defining features of superfluidity – have been seen in a supersolid for the first time. Observed by researchers in Austria, these vortices provide further confirmation that supersolids can be modelled as superfluids with a crystalline structure. This model could have a variety of other applications in quantum many-body physics, and the Austrian team is now using it to study pulsars, which are rotating, magnetized neutron stars.

A superfluid is a curious state of matter that can flow without any friction. Superfluid systems that have been studied in the lab include helium-4; type-II superconductors; and Bose–Einstein condensates (BECs) – all of which exist at very low temperatures.

More than five decades ago, physicists suggested that some systems could exhibit crystalline order and superfluidity simultaneously in a unique state of matter called a supersolid. In such a state, the atoms would be described by the same wavefunction and are therefore delocalized across the entire crystal lattice. The order of the supersolid would therefore be defined by the nodes and antinodes of this wavefunction.

In 2004, Moses Chan of Pennsylvania State University in the US and his PhD student Eun-Seong Kim reported observing a supersolid phase in solid helium-4. However, Chan and others have not been able to reproduce this result. Subsequently, researchers including Giovanni Modugno at Italy’s University of Pisa and Francesca Ferlaino at the University of Innsbruck in Austria have demonstrated evidence of supersolidity in BECs of magnetic atoms.

Irrotational behaviour

But until now, no-one had observed an important aspect of superfluidity in a supersolid: a superfluid never carries bulk angular momentum. If a superfluid is placed in a container and the container is rotated at moderate angular velocity, the fluid simply slips frictionlessly along the walls rather than being dragged into rotation. As the angular momentum of the container increases, however, it becomes energetically costly to maintain the decoupling between the container and the superfluid. “Still, globally, the system is irrotational,” says Ferlaino; “So there’s really a necessity for the superfluid to heal itself from rotation.”

In a normal superfluid, this “healing” occurs by the formation of small, quantized vortices that dissipate the angular momentum, allowing the system to remain globally irrotational. “In an ordinary superfluid that’s not modulated in space [the vortices] form a kind of triangular structure called an Abrikosov lattice, because that’s the structure that minimizes their energy,” explains Ferlaino. It was unclear how the vortices might sit inside a supersolid lattice.
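
For context, the “quantized” part refers to the standard circulation condition for a superfluid described by a single wavefunction – textbook background rather than a result of this study:

$$\oint \mathbf{v}\cdot\mathrm{d}\boldsymbol{\ell} = n\,\frac{h}{m}, \qquad n = 0,\ \pm 1,\ \pm 2,\ \dots$$

where h is Planck’s constant and m is the particle mass, so each singly quantized vortex in a dysprosium-164 condensate carries a circulation of h/m ≈ 2.4 × 10⁻⁹ m² s⁻¹.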

In the new work, Ferlaino and colleagues at the University of Innsbruck utilized a technique called magnetostirring to rotate a BEC of magnetic dysprosium-164 atoms. They caused the atoms to rotate simply by rotating the magnetic field. “That’s the beauty: it’s so simple but nobody had thought about this before,” says Ferlaino.

As the group increased the field’s rotation rate, they observed vortices forming in the condensate and migrating to the density minima. “Vortices are zeroes of density, so there it costs less energy to drill a hole than in a density peak,” says Ferlaino; “The order that the vortices assume is largely imparted by the crystalline structure – although their distance is dependent on the repulsion between vortices.”

Unexpected applications

The researchers believe the findings could be applicable in some unexpected areas of physics. Ferlaino tells of hearing a talk about the interior composition of neutron stars by the theoretical astrophysicist Massimo Mannarelli of Gran Sasso Laboratory in Italy. “During the coffee break I went to speak to him and we’ve started to work together.”

“A large part of the astrophysical community is convinced that the core of a neutron star is a superfluid,” Ferlaino says; “The crust is a solid, the core is a superfluid, and a layer called the inner crust has both properties together.” Pulsars are neutron stars that emit radiation in a narrow beam, giving them a well-defined pulse rate that depends on their rotation. As they lose energy through radiation emission, they gradually slow down.

Occasionally, however, their rotation rates suddenly speed up again in events called glitches. The researchers’ theoretical models suggest that the glitches could be caused by vortices unpinning from the supersolid and crashing into the solid exterior, imparting extra angular momentum. “When we impose a rotation on our supersolid that slows down, then at some point the vortices unpin and we see the glitches in the rotational frequency,” Ferlaino says. “This is a new direction – I don’t know where it will bring us, but for sure experimentally observing vortices was the first step.”

Theorist Blair Blakie of the University of Otago in New Zealand is excited by the research. “Vortices in supersolids were a bit of a curiosity in early theories, and sometimes you’re not sure whether theorists are just being a bit crazy considering things, but now they’re here,” he says. “It opens this new landscape for studying things from non-equilibrium dynamics to turbulence – all sorts of things where you’ve got this exotic material with topological defects in it. It’s very hard to predict what the killer application will be, but in these fields people love new systems with new properties.”

The research is described in Nature.


Sceptical space settlers, Einstein in England, trials of the JWST, tackling quantum fundamentals: micro reviews of the best recent books

A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?
By Kelly and Zach Weinersmith

Husband-and-wife writing team Kelly and Zach Weinersmith were excited about human settlements in space when they started research for their new book A City on Mars. But the more they learned, the more sceptical they became. From technology, practicalities and ethics, to politics and the legal framework, they uncovered profound problems at every step. With humorous panache and plenty of small cartoons by Zach, who also does the webcomic Saturday Morning Breakfast Cereal, the book is a highly entertaining guide that will dent the enthusiasm of most proponents of settling space. Kate Gardner

  • 2024 Particular Books

Einstein in Oxford
By Andrew Robinson

“England has always produced the best physicists,” Albert Einstein once said in Berlin in 1925. His high regard for British physics led him to pay three visits to the University of Oxford in the early 1930s, which are described by Andrew Robinson in his charming short book Einstein in Oxford. Sadly, the visits were not hugely productive for Einstein, who disliked the formality of Oxford life. His time there is best remembered for the famous blackboard – saved for posterity – on which he’d written while giving a public lecture. Matin Durrani

  • 2024 Bodleian Library Publishing

Pillars of Creation: How the James Webb Telescope Unlocked the Secrets of the Cosmos
By Richard Panek

The history of science is “a combination of two tales” says Richard Panek in his new book charting the story of the James Webb Space Telescope (JWST). “One is a tale of curiosity. The other is a tale of tools.” He has chosen an excellent case study for this statement. Pillars of Creation combines the story of the technological and political hurdles that nearly sank the JWST before it launched with a detailed account of its key scientific contributions. Panek’s style is also multi-faceted, mixing technical explanations with the personal stories of scientists fighting to push the frontiers of astronomy.  Katherine Skipper

  • 2024 Little, Brown

Quanta and Fields: the Biggest Ideas in the Universe
By Sean Carroll

With 2025 being the International Year of Quantum Science and Technology, the second book in prolific science writer Sean Carroll’s “Biggest Ideas” trilogy – Quanta and Fields – might make for a prudent read. Following the first volume on “space, time and motion”, it tackles the key scientific principles that govern quantum mechanics, from wave functions to effective field theory. But beware: this book is packed with equations, formulae and technical concepts. It’s essentially a popular-science textbook, in which Carroll does things like examine each term in the Schrödinger equation and delve into the framework for group theory. Great for physicists but not, perhaps, for the more casual reader. Tushna Commissariat

  • 2024 Penguin Random House


Four-wave mixing could boost optical communications in space

A new and practical approach to the low-noise amplification of weakened optical signals has been unveiled by researchers in Sweden. Drawing from the principles of four-wave mixing, Rasmus Larsson and colleagues at Chalmers University of Technology believe their approach could have promising implications for laser-based communication systems in space.

Until recently, space-based communication systems have largely relied on radio waves to transmit signals. Increasingly, however, these systems are being replaced with optical laser beams. The shorter wavelengths of these signals offer numerous advantages over radio waves. These include higher data transmission rates; lower power requirements; and lower risks of interception.

However, when transmitted across the vast distances of space, even a tightly focused laser beam will spread out significantly by the time its light reaches its destination, severely weakening the signal.

To deal with this loss, receivers must be extremely sensitive to incoming signals. This involves the preamplification of the signal above the level of electronic noise in the receiver. But conventional optical amplifiers are far too noisy to achieve practical space-based communications.

Phase-sensitive amplification

In a 2021 study, Larsson’s team showed how these weak signals can, in theory, be amplified with zero noise using a phase-sensitive optical parametric amplifier (PSA). However, this approach did not solve the problem entirely.

“The PSA should be the ideal preamplifier for optical receivers,” Larsson explains. “However, we don’t see them in practice due to their complex implementation requirements, where several synchronized optical waves of different frequencies are needed to facilitate the amplification.” These cumbersome requirements place significant demands on both transmitter and receiver, which limits their use in space-based communications.

To simplify preamplification, Larsson’s team used four-wave mixing. Here, the interaction between light at three different wavelengths within a nonlinear medium produces light at a fourth wavelength.

In this case, a weakened transmitted signal is mixed with two strong “pump” waves that are generated within the receiver. When the phases of the signal and pump are synchronized inside a doped optical fibre, light at the fourth wavelength interferes constructively with the signal. This boosts the amplitude of the signal without sacrificing low-noise performance.
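
The phase dependence at the heart of this scheme can be illustrated with the textbook expression for degenerate phase-sensitive parametric gain. This is a generic sketch, not the Chalmers group’s receiver model, and the gain parameter g below is arbitrary:

```python
# Textbook model of phase-sensitive parametric gain (illustrative only)
import numpy as np

def psa_gain(phase_rad, g=2.0):
    """Gain of a degenerate phase-sensitive amplifier versus signal-pump phase.

    Output field: A_out = mu*A_in + nu*conj(A_in), with mu = cosh(g), nu = sinh(g),
    so |mu|^2 - |nu|^2 = 1 and the gain depends on the input phase.
    """
    mu, nu = np.cosh(g), np.sinh(g)
    return np.abs(mu + nu * np.exp(-2j * phase_rad)) ** 2

for phi in np.linspace(0.0, np.pi / 2, 4):
    print(f"phase {phi:4.2f} rad -> gain {10 * np.log10(psa_gain(phi)):6.1f} dB")
# One quadrature is amplified while the orthogonal one is deamplified, which is
# why synchronizing the pump and signal phases in the fibre matters.
```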

Auxiliary waves

“This allows us to generate all required auxiliary waves in the receiver, with the transmitter only having to generate the signal wave,” Larsson describes. “This is contrary to the case before where most, if not all waves were generated in the transmitter. The synchronization of the waves further uses the same specific lossless approach we demonstrated in 2021.”

The team says that this new approach offers a practical route to noiseless amplification within an optical receiver. “After optimizing the system, we were able to demonstrate the low-noise performance and a receiver sensitivity of 0.9 photons per bit,” Larsson explains. This amount of light is the minimum needed to reliably decode each bit of data and Larsson adds, “This is the lowest sensitivity achieved to date for any coherent modulation format.”
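
To put 0.9 photons per bit into more familiar units, here is a quick estimate of the corresponding received optical power. The wavelength and bit rate below are assumptions chosen for illustration; the article does not quote them.

```python
# Convert a photons-per-bit sensitivity into an optical power (assumed wavelength/bit rate)
from math import log10

h, c = 6.626e-34, 2.998e8        # Planck constant (J s), speed of light (m/s)
wavelength_m = 1550e-9           # assumed telecom wavelength
bit_rate = 10e9                  # assumed data rate: 10 Gbit/s
photons_per_bit = 0.9            # reported receiver sensitivity

photon_energy_j = h * c / wavelength_m                  # ~1.3e-19 J per photon
power_w = photons_per_bit * photon_energy_j * bit_rate  # required received power
print(f"~{power_w * 1e9:.1f} nW  ({10 * log10(power_w / 1e-3):.0f} dBm)")
```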

This unprecedented sensitivity enabled the team to establish optical communication links between a PSA-amplified receiver and a conventional, single-wave transmitter. With a clear route to noiseless preamplification through some further improvements, the researchers are now hopeful that their approach could open up new possibilities across a wide array of applications – especially for laser-based communications in space.

“In this rapidly emerging topic, the PSA we have demonstrated can facilitate much higher data rates than the bandwidth-limited single photon detection technology currently considered.”

This ability would make the team’s PSA ideally suited for communication links between space-based transmitters and ground-based receivers. In turn, astronomers could finally break the notorious “science return bottleneck”. This would remove many current restrictions on the speed and quantity of data that can be transmitted by satellites, probes, and telescopes scattered across the solar system.

The research is described in Optica.


The Arecibo Observatory’s ‘powerful radiation environment’ led to its collapse, claims report

The Arecibo Observatory’s “uniquely powerful electromagnetic radiation environment” is the most likely initial cause of its destruction and collapse in December 2020. That’s according to a new report by the National Academies of Sciences, Engineering, and Medicine, which states that failure of zinc in the cables that held the telescope’s main platform led to it falling onto the huge 305 m reflector dish – causing catastrophic damage.

While previous studies of the iconic telescope’s collapse had identified the deformation of zinc inside the cable sockets, other reasons were also put forward. They included poor workmanship and the effects of Hurricane Maria, which hit the area in 2017 and subjected the telescope’s cables to the highest structural stress they had endured since the instrument opened in 1963.

Inspections after the hurricane showed some evidence of cable slippage. Yet these investigations, the report says, failed to note several failure patterns and did not provide plausible explanations for most of them. In addition, photos taken in 2019 gave “a clear indication of major socket deterioration”, but no further investigation followed.

The eight-strong committee that wrote the report, chaired by Roger McCarthy of the US firm McCarthy Engineering, found that lack of follow-up surprising. “The lack of documented concern from the contracted engineers about the inconsequentiality of cable pullouts or the safety factors between Hurricane Maria in 2017 and the failure is alarming,” they say.

Further research

The report concludes that the root cause of the catastrophe was linked to the zinc sockets, which suffered “unprecedented and accelerated long-term creep-induced failure”. Metallic creep – the slow, permanent deformation of a metal – is caused by stress and exacerbated by heat, eventually causing metal components to fail. “Each failure involved both the rupture of some of the cable’s wires and a deformation of the socket’s zinc, and is therefore the failure of a cable-socket assembly,” the report notes.

As to the cause of the creep, the committee sees the telescope’s radiation environment as “the only hypothesis that…provides a plausible but unprovable answer”. The committee proposes that the telescope’s powerful transmitters induced electrical currents in the cables and sockets, potentially causing “long-term, low-current electroplasticity” in the zinc. The increased induced plasticity accelerated the natural ongoing creep in the zinc.

The report adds that the collapse of the platform is the first documented zinc-induced creep failure, despite the metal being used in such a way for over a century. The committee now recommends that the National Science Foundation (NSF), which oversees Arecibo, offer the remaining socket and cable sections to the research community for further analysis on the “large-diameter wire connections, the long-term creep behavior of zinc spelter connections, and [the] materials science”.

  • Meanwhile, the NSF had planned to reopen the telescope site as an educational centre later this month, but that has now been delayed until next year to coincide with the NSF’s 75th anniversary.


Top-cited author Vaidehi Paliya discusses the importance of citations and awards

More than 50 papers from India have been recognized with a top-cited paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors who have papers published in both IOP Publishing and its partners’ journals from 2021 to 2023 that are in the top 1% of the most cited papers.

The winners include astrophysicist Vaidehi Paliya from Inter-University Centre for Astronomy and Astrophysics (IUCAA) and colleagues. Their work involved studying the properties of the “central engines” of blazars, a type of active galactic nucleus.

Highly cited: Vaidehi Paliya

“Knowing that the astronomy community has appreciated the published research is excellent,” says Vaidehi. “It has been postulated for a long time that the physics of relativistic jets is governed by the central supermassive black hole and accretion disk, also known as the central engine of an active galaxy. Our work is probably the first to quantify their physical properties, such as the black hole mass and the accretion disk luminosity, for a large sample of active galaxies hosting powerful relativistic jets called blazars.”

Vaidehi explains that getting many citations for the work, which was published in Astrophysical Journal Supplement Series, indicates that the published results “have been helpful to other researchers” and that this broad visibility also increases the chance that other groups will come across the work. “[Citations] are important because they can therefore trigger innovative ideas and follow-up research critical to advancing scientific knowledge,” adds Vaidehi.

Vaidehi says that he often turns to highly cited research “to appreciate the genuine ideas put forward by scientists”, with two recent examples being what inspired him to work on the central engine problem.

Indeed, Vaidehi says that prizes such as IOP’s highly cited paper award are essential for researchers, especially students. “Highly cited work is crucial not only to win awards but also for the career growth of a researcher. Awards play a significant role in further motivating fellow researchers to achieve even higher goals and highlight the importance of innovation,” he says. “Such awards are definitely a highlight in getting a career promotion. The news of the award may also lead to opportunities. For instance, to be invited to join other researchers working in similar areas, which will provide an ideal platform for future collaboration and research exploration.”

Vaidehi adds that results that are meaningful to broader research areas will likely result in higher citations. “Bringing innovation to the work is the key to success,” he says. “Prestigious awards, high citation counts, and other forms of success and recognition will automatically follow. You will be remembered by the community only for your contribution to its advancement and growth, so be genuine.”

  • For the full list of top-cited papers from India for 2024, see here.


How to boost the sustainability of solar cells

In this episode of the Physics World Weekly podcast I explore routes to more sustainable solar energy. My guests are four researchers at the UK’s University of Oxford who have co-authored the “Roadmap on established and emerging photovoltaics for sustainable energy conversion”.

They are the chemist Robert Hoye; the physicists Nakita Noel and Pascal Kaienburg; and the materials scientist Sebastian Bonilla. We define what sustainability means in the context of photovoltaics and we look at the challenges and opportunities for making sustainable solar cells using silicon, perovskites, organic semiconductors and other materials.

This podcast is supported by Pfeiffer Vacuum+Fab Solutions.

Pfeiffer is part of the Busch Group, one of the world’s largest manufacturers of vacuum pumps, vacuum systems, blowers, compressors and gas abatement systems. Explore its products at the Pfeiffer website.

 


Lightning sets off bursts of high-energy electrons in Earth’s inner radiation belt

A supposedly stable belt of radiation 7000 km above the Earth’s surface may in fact be producing damaging bursts of high-energy electrons. According to scientists at the University of Colorado Boulder, US, the bursts appear to be triggered by lightning, and understanding them could help determine the safest “windows” for launching spacecraft – especially those with a human cargo.

The Earth is surrounded by two doughnut-shaped radiation belts that lie within our planet’s magnetosphere. While both belts contain high concentrations of energetic electrons, the electrons in the outer belt (which starts about 4 Earth radii above the Earth’s surface and extends to about 9–10 Earth radii) typically have energies in the MeV range. In contrast, electrons in the inner belt, which is located between about 1.1 and 2 Earth radii, have energies between 10 and a few hundred kilo-electronvolts (keV).

At the higher end of this energy scale, these electrons easily penetrate the walls of spacecraft and can damage sensitive electronics inside. They also pose risks to astronauts who leave the protective environment of their spacecraft to perform extravehicular activities.

The size of the radiation belts, as well as the energy and number of electrons they contain, varies considerably over time. One cause of these variations is sub-second bursts of energetic electrons that drop out of the magnetosphere into the atmosphere below. These rapid microbursts are most commonly seen in the outer radiation belt, where they are the result of interactions with phenomena called whistler mode chorus radio waves. However, they can also be observed in the inner belt, where they are generated by whistlers produced by lightning storms. Such lightning-induced precipitation, as it is known, typically occurs at lower energies of tens to 100 keV.

Outer-belt energies in inner-belt electrons

In the new study, researchers led by CU Boulder aerospace engineering student Max Feinland observed clumps of electrons with MeV energies in the inner belt for the first time. This serendipitous discovery came while Feinland was analysing data from a now-decommissioned NASA satellite called the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX). He originally intended to focus on outer-belt electrons, but “after stumbling across these events in the inner belt, we thought they were interesting and decided to investigate further,” he tells Physics World.

After careful analysis, Feinland, who was working as an undergraduate research assistant in Lauren Blum’s team at CU Boulder’s Laboratory for Atmospheric and Space Physics at the time, identified 45 bursts of high-energy electrons in the inner belt in data from 1996 to 2006. At first, he and his colleagues weren’t sure what could be causing them, since the chorus waves known to produce such high-energy bursts are generally an outer-belt phenomenon. “We actually hypothesized a number of processes that could explain our observations,” he says. “We even thought that they might be due to Very Low Frequency (VLF) transmitters used for naval communications.”

The lightbulb moment, however, came when Feinland compared the bursts with records of lightning strikes in North America. Intriguingly, he found that several of the peaks in the electron bursts occurred less than a second after lightning strikes.

A lightning trigger

The researchers’ explanation for this is that radio waves produced after a lightning strike interact with electrons in the inner belt. These electrons then begin to oscillate between the Earth’s northern and southern hemispheres with a period of just 0.2 seconds. With each oscillation, some electrons drop out of the inner belt and into the atmosphere. This last finding was unexpected: while researchers knew that high-energy electrons can fall into the atmosphere from the outer radiation belt, this is the first time that they have observed them coming from the inner belt.
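
That 0.2 second figure is consistent with a back-of-the-envelope bounce-period estimate for a near-light-speed electron on an inner-belt field line. The L-shell, pitch-angle factor and dipole-field approximation below are assumptions for illustration, not the researchers’ calculation.

```python
# Order-of-magnitude check of the ~0.2 s bounce period (rough dipole-field estimate)
R_E = 6.371e6      # Earth radius, m
c = 2.998e8        # speed of light, m/s (a ~1 MeV electron moves at ~0.94c)
L = 2.1            # approximate L-shell for ~7000 km altitude over the equator

# Common dipole approximation: T_b ~ (4*L*R_E/v) * (1.30 - 0.56*sin(alpha_eq)),
# where alpha_eq is the equatorial pitch angle; take a small pitch angle here.
pitch_angle_factor = 1.30
print(f"estimated bounce period ~ {4 * L * R_E / c * pitch_angle_factor:.2f} s")
```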

Feinland says the team’s discovery could help space-launch firms and national agencies decide when to launch their most sensitive payloads. With further studies, he adds, it might even be possible to determine how long these high-energy electrons remain in the inner belt after geomagnetic storms. “If we can quantify these lifetimes, we could determine when it is safest to launch spacecraft,” he says.

The researchers are now seeking to calculate the exact energies of the electrons. “Some of them may be even more energetic than 1 MeV,” Feinland says.

The present work is detailed in Nature Communications.


First human retinal image brings sight-saving portable OCT a step closer

First in human: Image of a human retina taken with the Akepa photonic chip, showing the retinal layers and key clinical features used for disease diagnosis and monitoring. (Courtesy: Siloton)

UK health technology start-up Siloton is developing a portable optical coherence tomography (OCT) system that uses photonic integrated circuits to miniaturize a tabletop’s worth of expensive and fragile optical components onto a single coin-sized chip. In a first demonstration by a commercial organization, Siloton has now used its photonic chip technology to capture a sub-surface image of a human retina.

OCT is a non-invasive imaging technique employed as the clinical gold standard for diagnosing retinal disease. Current systems, however, are bulky and expensive and only available at hospital clinics or opticians. Siloton aims to apply its photonic chip – the optical equivalent of an electronic chip – to create a rugged, portable OCT system that patients could use to monitor disease progression in their own homes.

Compact device: Siloton’s photonic chip Akepa replaces approximately 70% of the optics found in traditional OCT systems. (Courtesy: Siloton)

The image obtained using Siloton’s first-generation OCT chip, called Akepa, reveals the fine layered structure of the retina in a healthy human eye. It clearly shows layers such as the outer photoreceptor segment and the retinal pigment epithelium, which are key clinical features for diagnosing and monitoring eye diseases.

“The system imaged the part of the retina that’s responsible for all of your central vision, most of your colour vision and the fine detail that you see,” explains Alasdair Price, Siloton’s CEO. “This is the part of the eye that you really care about looking at to detect disease biomarkers for conditions like age-related macular degeneration [AMD] or various diabetic eye conditions.”

Faster and clearer

Since Siloton first demonstrated that Akepa could acquire OCT images of a retinal phantom, the company has deployed some major software enhancements. For example, while the system previously took 5 min to image the phantom – an impractical length of time for human imaging – the imaging speed is now less than a second. The team is also exploring ways to improve image quality using artificial intelligence techniques.

Price explains that the latest image was recorded using the photonic chip in a benchtop set-up, noting that the company is about halfway through the process of miniaturizing all of the optics and electronics into a handheld binocular device.

“The electronics is all off-the-shelf, so we’re not going to focus too heavily on miniaturizing that until right at the end,” he says. “The innovative part is in miniaturizing the optics. We are very close to having it in that binocular headset now, the aim being that by early next year we will have that fully miniaturized.”

As such, the company plans to start deploying some research-only systems commercially next year. These will be handheld binocular-style devices that users hold up to their faces, complete with a base station for charging and communications. After speaking with more than 100 patients in focus groups, Siloton confirmed that users prefer this binocular design over the traditional chin rest employed in full-size OCT systems.

“We were worried about that because we thought we may not be able to get the level of stability required,” says Price. “But we did further tests on the stability of the binocular system compared with the chin rest and actually found that the binoculars showed greater stability. Right now we’re still using a chin rest, so we’re hopeful that the binocular system will further improve our ability to record high-quality images.”

The Siloton founding team
Siloton founding team Left to right: Alasdair Price, Euan Allen and Ben Hunt. (Courtesy: Siloton)

Expanding applications

The principal aim of Siloton’s portable OCT system is to make the diagnosis and monitoring of eye diseases – such as diabetic macular oedema, retinal vein occlusion and AMD, the leading cause of sight loss in the developed world – more affordable and accessible.

Neovascular or “wet” AMD, for example, can be treated with regular eye injections, but this requires regular OCT scans at hospital appointments, which may not be available frequently enough for effective monitoring. With an OCT system in their own homes, patients can scan themselves every few days, enabling timely treatments as soon as disease progression is detected – as well as saving hospitals substantial amounts of money.

Ongoing improvements in the “quality versus cost” of the Akepa chip have also enabled Siloton to expand its target applications beyond ophthalmology. The ability to image structures such as the optic nerve, for example, enables the use of OCT to screen for optic neuritis, a common early symptom in patients with multiple sclerosis.

The company is also working with the European Space Agency (ESA) on a project investigating spaceflight-associated neuro-ocular syndrome (SANS), a condition that affects about 70% of astronauts and requires regular monitoring.

“At the moment, there is an OCT system on the International Space Station. But for longer-distance space missions, things like Gateway, there won’t be room for such a large system,” Price tells Physics World. “So we’re working with ESA to look at getting our chip technology onto future space missions.”

The post First human retinal image brings sight-saving portable OCT a step closer appeared first on Physics World.

‘Buddy star’ could explain Betelgeuse’s varying brightness

An unseen low-mass companion star may be responsible for the recently observed “Great Dimming” of the red supergiant star Betelgeuse. According to this hypothesis, which was put forward by researchers in the US and Hungary, the star’s apparent brightness varies when an orbiting companion – dubbed α Ori B or, less formally, “Betelbuddy” – displaces light-blocking dust, thereby changing how much of Betelgeuse’s light reaches the Earth.

Located about 548 light-years away, in the constellation Orion, Betelgeuse is the 10th brightest star in the night sky. Usually, its brightness varies over a period of 416 days, but in 2019–2020, its output dropped to the lowest level ever recorded.

At the time, some astrophysicists speculated that this “Great Dimming” might mean that the star was reaching the end of its life and would soon explode as a supernova. Over the next three years, however, Betelgeuse’s brightness recovered, and alternative hypotheses gained favour. One such suggestion is that a cooler spot formed on the star and began ejecting material and dust, causing its light to dim as seen from Earth.

Pulsation periods

The latest hypothesis was inspired, in part, by the fact that Betelgeuse experiences another cycle in addition to its fundamental 416-day pulsation period. This second cycle, known as the long secondary period (LSP), lasts 2170 days, and the Great Dimming occurred after its minimum brightness coincided with a minimum in the 416-day cycle.

While astrophysicists are not entirely sure what causes LSPs, one leading theory suggests that they stem from a companion star. As this companion orbits its parent star, it displaces the cosmic dust that the star produces and expels, which in turn changes the amount of starlight that reaches us.

Lots of observational data

To understand whether this might be happening with Betelgeuse, a team led by Jared Goldberg at the Flatiron Institute’s Center for Computational Astrophysics, together with Meridith Joyce at the University of Wyoming and László Molnár of the Konkoly Observatory, HUN-REN CSFK, in Budapest, analysed a wealth of observational data from the American Association of Variable Star Observers. “This association has been collecting data from both professional and amateur astronomers, so we had access to decades worth of data,” explains Molnár. “We also looked at data from the space-based SMEI instrument and spectroscopic observations collected by the STELLA robotic telescope.”

The researchers combined these direct-observation data with advanced computer models that simulate Betelgeuse’s activity. When they studied how the star’s brightness and its velocity varied relative to each other, they realized that the brightest phase must correspond to a companion being in front of it. “This is the opposite of what others have proposed,” Molnár notes. “For example, one popular hypothesis postulates that companions are enveloped in dense dust clouds, obscuring the giant star when they pass in front of them. But in this case, the companion must remove dust from its vicinity.”

As for how the companion does this, Molnár says they are not sure whether it evaporates the dust away or shepherds it to the opposite side of Betelgeuse with its gravitational pull. Both are possible, and Goldberg adds that other processes may also contribute. “Our new hypothesis complements the previous one involving the formation of a cooler spot on the star that ejects material and dust,” he says. “The dust ejection could occur because the companion star was out of the way, behind Betelgeuse rather than along the line of sight.”

The least absurd of all hypotheses?

The prospect of a connection between an LSP and the activity of a companion star is a longstanding one, Goldberg tells Physics World. “We know that Betelgeuse has an LSP, and if an LSP exists, that means a ‘buddy’ for Betelgeuse,” he says.

The researchers weren’t always so confident, though. Indeed, they initially thought the idea of a companion star for Betelgeuse was absurd, so the hardest part of their work was to prove to themselves that this was, in fact, the least absurd of all hypotheses for what was causing the LSP.

“We’ve been interested in Betelgeuse for a while now, and in a previous paper, led by Meridith, we already provided new size, distance and mass estimates for the star based on our models,” says Molnár. “Our new data started to point in one direction, but first we had to convince ourselves that we were right and that our claims are novel.”

The findings could have more far-reaching implications, he adds. While around one third of all red giants and supergiants have LSPs, the relationships between LSPs and brightness vary. “There are therefore a host of targets out there and potentially a need for more detailed models on how companions and dust clouds may interact,” Molnár says.

The researchers are now applying for observing time on space telescopes in hopes of finding direct evidence that the companion exists. One challenge they face is that because Betelgeuse is so bright – indeed, too bright for many sensitive instruments – a “Betelbuddy”, as Goldberg has nicknamed it, may be simpler to explain than it is to observe. “We’re throwing everything we can at it to actually find it,” Molnár says. “We have some ideas on how to detect its radiation in a way that can be separated from the absolute deluge of light Betelgeuse is producing, but we have to collect and analyse our data first.”

The study has been accepted for publication in The Astrophysical Journal. A pre-print is available on the arXiv.

The post ‘Buddy star’ could explain Betelgeuse’s varying brightness appeared first on Physics World.

Black hole in rare triple system sheds light on natal kicks

For the first time, astronomers have observed a black hole in a triple system with two other stars. The system is called V404 Cygni and was previously thought to be a close-knit binary comprising a black hole and a star. Now, Kevin Burdge and colleagues at the Massachusetts Institute of Technology (MIT) have shown that the pair is orbited by a more distant tertiary star.

The observation supports the idea that some black holes do not experience a “natal kick” in momentum when they form. This is expected if a black hole is created from the sudden implosion of a star, rather than in a supernova explosion.

When black holes and neutron stars are born, they can gain momentum through mechanisms that are not well understood. These natal kicks can accelerate some neutron stars to speeds of hundreds of kilometres per second. For black holes, the kick is expected to be less pronounced — and in some scenarios, astronomers believe that these kicks must be very small.

Information about natal kicks can be gleaned by studying the behaviour of X-ray binaries, which usually pair a main sequence star with a black hole or neutron star companion. As these two objects orbit each other, material from the star is transferred to its companion, releasing vast amounts of gravitational potential energy as X-rays and other electromagnetic radiation.

Wobbling objects

In such binaries, any natal kick the black hole may have received during its formation can be deduced by studying how the black hole and its companion star orbit each other. This can be done using the radial velocity (or wobble) technique, which measures the Doppler shift of light from the orbiting objects as they accelerate towards and then away from an observer on Earth.
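As a minimal sketch of the wobble method, the snippet below applies the non-relativistic Doppler relation, in which the radial velocity is the speed of light multiplied by the fractional wavelength shift. The wavelength values are invented for the example and are not V404 Cygni measurements.

```python
# Minimal radial-velocity (wobble) sketch: the non-relativistic Doppler relation
# v_r ~ c * (lambda_obs - lambda_rest) / lambda_rest.
# The wavelengths below are illustrative, not V404 Cygni data.
C_KM_S = 2.998e5  # speed of light (km/s)

def radial_velocity(lambda_obs_nm, lambda_rest_nm):
    return C_KM_S * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# A spectral line with rest wavelength 656.28 nm (H-alpha) observed at 656.50 nm
# implies the source is receding at about 100 km/s.
print(f"{radial_velocity(656.50, 656.28):.0f} km/s")
```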

In their study, Burdge’s team scrutinized archival observations of V404 Cygni that were made using a number of different optical telescopes. A bright blob of light thought to be the black hole and its close-knit companion star is prominent in these images. But the team noticed something else: a second blob of light that could be a star orbiting the close-knit binary.

“We immediately noticed that there was another star next to the binary system, moving together with it,” Burdge explains. “It was almost like a happy accident, but was a direct product of an optical and an X-ray astronomer working together.”

As Burdge describes, the study came about by combining his own work in optical astronomy with the expertise of MIT’s Erin Kara, who studies black holes using X-ray astronomy. Burdge adds, “We were thinking about whether it might be interesting to take high speed movies of black holes. While thinking about this, we went and looked at a picture of V404 Cygni, taken in visible light.”

Hierarchical triple

The observation provided the team with clear evidence that V404 Cygni is part of a “hierarchical triple” – an observational first. “In the system, a black hole is eating a star which orbits it every 6.5 days. But there is another star way out there that takes 70,000 years to complete its orbit around the inner system,” Burdge explains. Indeed, the third star is about 3500 au (3500 times the distance from the Earth to the Sun) from the black hole.
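Those two numbers are roughly consistent with Kepler’s third law, which for a distance in astronomical units and a mass in solar masses gives the period in years. The sketch below assumes a total system mass of about 10 solar masses, an illustrative figure (the V404 Cygni black hole is usually quoted at around nine solar masses) rather than a value from the paper.

```python
# Consistency check with Kepler's third law: P[yr] = sqrt(a[au]**3 / M[solar masses]).
# The ~10 solar-mass total is an illustrative assumption, not a figure from the paper.
import math

def orbital_period_years(a_au, m_total_solar):
    return math.sqrt(a_au**3 / m_total_solar)

print(f"{orbital_period_years(3500, 10):.0f} years")  # ~65,000 yr, of the same order as the quoted 70,000 yr
```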

By studying these orbits, the team gleaned important information about the black hole’s formation. If it had undergone a natal kick when its progenitor star collapsed, the tertiary system would have become more chaotic – causing the more distant star to unbind from the inner binary pair.

The team also determined that the outer star is in the later stages of its main-sequence evolution. This suggests that V404 Cygni’s black hole must have formed between 3 and 5 billion years ago. When the black hole formed, the researchers believe it would have removed at least half of the mass from its binary companion. But since the black hole still has a relatively low mass, this means that its progenitor star must have lost very little mass as it collapsed.

“The black hole must have formed through a gentle process, without getting a big kick like one might expect from a supernova,” Burdge explains. “One possibility is that the black hole formed from the implosion of a star.”

If this were the case, the star would have collapsed into a black hole directly, without large amounts of matter being ejected in a supernova explosion. Whether or not this is correct, the team’s observations suggest that at least some black holes can form with no natal kick – providing deeper insights into the later stages of stellar evolution.

The research is described in Nature.

The post Black hole in rare triple system sheds light on natal kicks appeared first on Physics World.

UK particle physicist Mark Thomson selected as next CERN boss

The UK particle physicist Mark Thomson has been selected as the 17th director-general of the CERN particle-physics laboratory. Thomson, 58, was chosen today at a meeting of the CERN Council. He will take up the position on 1 January 2026 for a five-year period succeeding the current CERN boss Fabiola Gianotti, who will finish her second term next year.

Three candidates were shortlisted for the job after being put forward by a search committee. Physics World understands that one of them was the Dutch theoretical physicist and former Dutch science minister Robbert Dijkgraaf, while the other was reported to be the Greek particle physicist Paris Sphicas.

With a PhD in physics from the University of Oxford, Thomson is currently executive chair of the Science and Technology Facilities Council (STFC), one of the main funding agencies in the UK. He spent a significant part of his career at CERN, working in the 1990s on precise measurements of the W and Z bosons as part of the OPAL experiment at CERN’s Large Electron-Positron Collider.

In 2000 he moved back to the UK to take up a position in experimental particle physics at the University of Cambridge. He was then a member of the ATLAS collaboration at CERN’s Large Hadron Collider (LHC) and between 2015 and 2018 served as co-spokesperson for the US Deep Underground Neutrino Experiment. Since 2018 he has served as the UK delegate to CERN’s Council.

Thomson was selected for his managerial credentials in science and connection to CERN. “Thomson is a talented physicist with great managerial experience,” notes Gianotti. “I have had the opportunity to collaborate with him in several contexts over the past years and I am confident he will make an excellent director-general. I am pleased to hand over this important role to him at the end of 2025.”

“Thomson’s election is great news – he has the scientific credentials, experience, and vision to ensure that CERN’s future is just as bright as its past, and it remains at the absolute cutting edge of research,” notes Peter Kyle, UK secretary of state for science, innovation and technology. “Work that is happening at CERN right now will be critical to scientific endeavour for decades to come, and for how we tackle some of the biggest challenges facing humanity.”

‘The right person’

Dirk Ryckbosch, a particle physicist at Ghent University and a delegate for Belgium in the CERN Council, told Physics World that Thomson is a “perfect match” for CERN. “As a former employee and a current member of the council, Thomson knows the ins and outs of CERN and he has the experience needed to lead a large research organization,” adds Ryckbosch.

The last UK director-general of CERN was Chris Llewellyn Smith, who held the position between 1994 and 1998. Ryckbosch acknowledges, though, that within CERN Brexit has never clouded the relationship between the UK and the EU member states. “The UK has always remained a strong and loyal partner,” he says.

Thomson will have two big tasks when he becomes CERN boss in 2026: ensuring that operations with the upgraded LHC, known as the High-Luminosity LHC (HL-LHC), start by 2030, and securing plans for the LHC’s successor.

CERN has put its weight behind the Future Circular Collider (FCC), which is expected to cost about £12bn and, with a 91 km circumference, would be more than three times the size of the 27 km LHC. The FCC would first be built as an electron-positron collider with the aim of studying the Higgs boson in unprecedented detail. It could later be upgraded to a hadron collider, known as the FCC-hh.

The construction of the FCC will, however, require additional funding from CERN member states. Earlier this year Germany, which is a main contributor to CERN’s annual budget, publicly objected to the FCC’s high cost. Garnering support for the FCC, if CERN selects it as its next project, will be a delicate balancing act for Thomson. “With his international network and his diplomatic skills, Mark is the right person for this,” concludes Ryckbosch.

That view is backed by particle theorist John Ellis from King’s College London, who told Physics World that Thomson has the “ideal profile for guiding CERN during the selection and initiation of its next major accelerator project”. Ellis adds that Thomson “brings to the role a strong record of research in collider physics as well as studies of electron-positron colliders and leadership in the DUNE neutrino experiment and also extensive managerial experience”.

The post UK particle physicist Mark Thomson selected as next CERN boss appeared first on Physics World.

Timber! Japan launches world’s first wooden satellite into space

Researchers in Japan have launched the world’s first wooden satellite to test the feasibility of using timber in space. Dubbed LignoSat2, the small “cubesat” was developed by Kyoto University and the logging firm Sumitomo Forestry. It was launched on 4 November to the International Space Station (ISS) from the Kennedy Space Center in Florida by a SpaceX Falcon 9 rocket.

Given the lack of water and oxygen in space, wood is potentially more durable in orbit than it is on Earth, where it can rot or burn. This makes it an attractive and sustainable alternative to metals such as aluminium, which can create aluminium oxide particles during re-entry into the Earth’s atmosphere.

Work began on LignoSat in 2020. In 2022 scientists at Kyoto sent samples of cherry, birch and magnolia wood to the ISS where the materials were exposed to the harsh environment of space for 240 days to test their durability.

While each specimen performed well with no clear deformation, the researchers settled on building LignoSat from magnolia – or Hoonoki in Japanese. This type of wood has traditionally been used for sword sheaths and is known for its strength and stability.

LignoSat2 is made without screws or glue and is equipped with external solar panels and encased in an aluminium frame. Next month the satellite is expected to be deployed in orbit around the Earth for about six months to measure how the wood withstands the environment and how well it protects the chips inside the satellite from cosmic radiation.

Data will be collected on the wood’s expansion and contraction, the internal temperature and the performance of the electronic components inside.

Researchers are hopeful that if LignoSat is successful it could pave the way for satellites to be made from wood. This would be more environmentally friendly given that each satellite would simply burn up when it re-enters the atmosphere at the end of its lifetime.

“With timber, a material we can produce by ourselves, we will be able to build houses, live and work in space forever,” astronaut Takao Doi who studies human space activities at Kyoto University told Reuters.

The post Timber! Japan launches world’s first wooden satellite into space appeared first on Physics World.

Physicists propose new solution to the neutron lifetime puzzle

Neutrons inside the atomic nucleus are incredibly stable, but free neutrons decay within 15 minutes – give or take a few seconds. The reason we don’t know this figure more precisely is that the two main techniques used to measure it produce conflicting results. This so-called neutron lifetime problem has perplexed scientists for decades, but now physicists at TU Wien in Austria have come up with a possible explanation. The difference in lifetimes, they say, could stem from the neutron being in not-yet-discovered excited states that have different lifetimes as well as different energies.

According to the Standard Model of particle physics, free neutrons undergo a process called beta decay that transforms a neutron into a proton, an electron and an antineutrino. To measure the neutrons’ average lifetime, physicists employ two techniques. The first, known as the bottle technique, involves housing neutrons within a container and then counting how many of them remain after a certain amount of time. The second approach, known as the beam technique, is to fire a neutron beam with a known intensity through an electromagnetic trap and measure how many protons exit the trap within a fixed interval.

Researchers have been performing these experiments for nearly 30 years but they always encounter the same problem: the bottle technique yields an average neutron survival time of 880 s, while the beam method produces a lifetime of 888 s. Importantly, this eight-second difference is larger than the uncertainties of the measurements, meaning that known sources of error cannot explain it.

A mix of different neutron states?

A team led by Benjamin Koch and Felix Hummel of TU Wien’s Institute of Theoretical Physics is now suggesting that the discrepancy could be caused by nuclear decay producing free neutrons in a mix of different states. Some neutrons might be in the ground state, for example, while others could be in a higher-energy excited state. This would alter the neutrons’ lifetimes, they say, because elements in the so-called transition matrix that describes how neutrons decay into protons would be different for neutrons in excited states and neutrons in ground states.

As for how this would translate into different beam and bottle lifetime measurements, the team say that neutron beams would naturally contain several different neutron states. Neutrons in a bottle, in contrast, would almost all be in the ground state – simply because they would have had time to cool down before being measured in the container.

Towards experimental tests

Could these different states be detected? The researchers say it’s possible, but they caution that experiments will be needed to prove it. They also note that theirs is not the first hypothesis put forward to explain the neutron lifetime discrepancy. Perhaps the simplest explanation is that the gap stems from unknown systematic errors in either the beam experiment, the bottle experiment, or both. Other, more theoretical approaches have also been proposed, but Koch says they do not align with existing experimental data.

“Personally, I find hypotheses that require fewer and smaller new assumptions – and that are experimentally testable – more appealing,” Koch says. As an example, he cites a 2020 study showing that a phenomenon called the inverse quantum Zeno effect could speed up the decay of bottle-confined neutrons, calling it “an interesting idea”. Another possible explanation of the puzzle, which he says he finds “very intriguing”, has just been published; it describes the admixture of novel bound electron-proton states, known as “Second Flavor Hydrogen Atoms”, in the final state of a weak decay.

As someone with a background in quantum gravity and theoretical physics beyond the Standard Model, Koch is no stranger to predictions that are hard (and sometimes impossible, at least in the near term) to test. “Contributing to the understanding of a longstanding problem in physics with a hypothesis that could be experimentally tested soon is therefore particularly exciting for me,” he tells Physics World. “If our hypothesis of excited neutron states is confirmed by future experiments, it would shed a completely new light on the structure of neutral nuclear matter.”

The researchers now plan to collaborate with colleagues from the Institute for Atomic and Subatomic Physics at TU Wien to re-evaluate existing experimental data and explore various theoretical models. “We’re also hopeful about designing experiments specifically aimed at testing our hypothesis,” Koch reveals.

The present study is detailed in Physical Review D.

The post Physicists propose new solution to the neutron lifetime puzzle appeared first on Physics World.

Women and physics: navigating history, careers, and the path forward

Join us for an insightful webinar based on Women and Physics (Second Edition), where we will explore the historical journey, challenges, and achievements of women in the field of physics, with a focus on English-speaking countries. The session will dive into various topics such as the historical role of women in physics, the current statistics on female representation in education and careers, navigating family life and career, and the critical role men play in fostering a supportive environment. The webinar aims to provide a roadmap for women looking to thrive in physics.

Laura McCullough

Laura McCullough is a professor of physics at the University of Wisconsin-Stout. Her PhD from the University of Minnesota was in science education with a focus on physics education research. She is the recipient of multiple awards, including her university system’s highest teaching award, her university’s outstanding research award, and her professional society’s service award. She is a fellow of the American Association of Physics Teachers. Her primary research area is gender and science and surrounding issues. She has also done significant work on women in leadership, and on students with disabilities.

About this ebook

Women and Physics is the second edition of a volume that brings together research on a wide variety of topics relating to gender and physics, cataloguing the extant literature to provide a readable and concise grounding for the reader. While there are many biographies and collections of essays in the area of women and physics, no other book is as research focused. Starting with the current numbers of women in physics in English-speaking countries, it explores the different issues relating to gender and physics at different educational levels and career stages. From the effects of family and schooling to the barriers faced in the workplace and at home, this volume is an exhaustive overview of the many studies focused specifically on women and physics. This edition contains updated references and new chapters covering the underlying structures of the research and more detailed breakdowns of career issues.

The post Women and physics: navigating history, careers, and the path forward appeared first on Physics World.

Why AI is a force for good in science communication

In August 2024 the influential Australian popular-science magazine Cosmos found itself not just reporting the news – it had become the news. Owned by CSIRO Publishing – part of Australia’s national science agency – Cosmos had posted a series of “explainer” articles on its website that had been written by generative artificial intelligence (AI) as part of an experiment funded by Australia’s Walkley Foundation. Covering topics such as black holes and carbon sinks, the text had been fact-checked against the magazine’s archive of more than 15,000 past articles to guard against misinformation, but at least one of the new articles contained inaccuracies.

Critics, such as the science writer Jackson Ryan, were quick to condemn the magazine’s experiment as undermining and devaluing high-quality science journalism. As Ryan wrote on his Substack blog, AI not only makes things up and trains itself on copyrighted material, but “for the most part, provides corpse-cold, boring-ass prose”. Contributors and former staff also complained to Australia’s ABC News that they’d been unaware of the experiment, which took place just a few months after the magazine had made five of its eight staff redundant.

It’s all too easy for AI to get things wrong and contribute to the deluge of online misinformation

The Cosmos incident is a reminder that we’re in the early days of using generative AI in science journalism. It’s all too easy for AI to get things wrong and contribute to the deluge of online misinformation, potentially damaging modern society in which science and technology shape so many aspects of our lives. Accurate, high-quality science communication is vital, especially if we are to pique the public’s interest in physics and encourage more people into the subject.

Kanta Dihal, a lecturer at Imperial College London who researches the public’s understanding of AI, warns that the impacts of recent advances in generative AI on science communication are “in many ways more concerning than exciting”. Sure, AI can level the playing field by, for example, enabling students to learn video editing skills without expensive tools and helping people with disabilities to access course material in accessible formats. “[But there is also] the immediate large-scale misuse and misinformation,” Dihal says.

We do need to take these concerns seriously, but AI could benefit science communication in ways you might not realize. Simply put, AI is here to stay – in fact, the science behind it led to the physicist John Hopfield and computer scientist Geoffrey Hinton winning the 2024 Nobel Prize for Physics. So how can we marshal AI to best effect not just to do science but to tell the world about science?

Dangerous game

Generative AI is a step up from “machine learning”, where a computer predicts how a system will behave based on data it’s analysed. Machine learning is used in high-energy physics, for example, to model particle interactions and detector performance. It does this by learning to recognize patterns in existing data, before making predictions and then validating that those predictions match the original data. Machine learning saves researchers from having to manually sift through terabytes of data from experiments such as those at CERN’s Large Hadron Collider.

Generative AI, on the other hand, doesn’t just recognize and predict patterns – it can create new ones too. When it comes to the written word, a generative AI could, for example, invent a story from a few lines of input. It is exactly this language-generating capability that caused such a furore at Cosmos and led some journalists to worry that AI might one day make their jobs obsolete. But how does a generative AI produce replies that feel like a real conversation?

Claude Shannon holding a wooden mouse
Child’s play Claude Shannon was an electrical engineer and mathematician who is considered the “father of information theory”. He is pictured here in 1952 with an early example of machine learning – a wheeled toy mouse called Theseus that was designed to navigate its way through a maze. (Courtesy: Yale Joel/The LIFE Picture Collection/Shutterstock)

Perhaps the best known generative AI is ChatGPT (where GPT stands for generative pre-trained transformer), which is an example of a Large Language Model (LLM). Language modelling dates back to the 1950s, when the US mathematician Claude Shannon applied information theory – the branch of maths that deals with quantifying, storing and transmitting information – to human language. Shannon measured how well language models could predict the next word in a sentence by assigning probabilities to each word based on patterns in the data the model is trained on.

Such methods of statistical language modelling are now fundamental to a range of natural language processing tasks, from building spell-checking software to translating between languages and even recognizing speech. Recent advances in these models have significantly extended the capabilities of generative AI tools, with the “chatbot” functionality of ChatGPT making it especially easy to use.

ChatGPT racked up a million users within five days of its launch in November 2022 and since then other companies have unveiled similar tools, notably Google’s Gemini and Perplexity. With more than 600 million users per month as of September 2024, ChatGPT is trained on a range of sources, including books, Wikipedia articles and chat logs (although the precise list is not explicitly described anywhere). The AI spots patterns in the training texts and builds sentences by predicting the most likely word that comes next.

ChatGPT operates a bit like a slot machine, with probabilities assigned to each possible next word in the sentence. In fact, the term AI is a little misleading, being more “statistically informed guessing” than real intelligence, which explains why ChatGPT has a tendency to make basic errors or “hallucinate”. Cade Metz, a technology reporter from the New York Times, reckons that chatbots invent information as much as 27% of the time.
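A toy example makes that “statistically informed guessing” concrete: given the previous couple of words, sample the next one from a probability table. The tiny table below is invented purely for illustration; real large language models condition on far longer contexts using billions of learned parameters rather than a hand-written lookup.

```python
# Toy next-word sampler to illustrate "statistically informed guessing".
# The probability table is invented for illustration; it is not how ChatGPT is built.
import random

next_word_probs = {
    ("the", "black"): {"hole": 0.7, "cat": 0.2, "box": 0.1},
    ("black", "hole"): {"is": 0.5, "merger": 0.3, "evaporates": 0.2},
}

def sample_next(context):
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next(("the", "black")))  # most often "hole", occasionally "cat" or "box"
```

The occasional low-probability pick is the toy analogue of a hallucination: the model is not checking facts, only following the learned statistics.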

One notable hallucination occurred in February 2023 when Bard – Google’s forerunner to Gemini – declared in its first public demonstration that the James Webb Space Telescope (JWST) had taken “the very first picture of a planet outside our solar system”. As Grant Tremblay from the US Center for Astrophysics pointed out, this feat had been accomplished in 2004, some 17 years before the JWST was launched, by the European Southern Observatory’s Very Large Telescope in Chile.

AI-generated image of a rat with significant errors
Badly wrong This AI-generated image of a rat originally appeared in the journal Frontiers in Cell Biology (11 1339390). The use of AI in the image, which bears little resemblance to the anatomy of a rat, was not originally disclosed and the article was subsequently retracted. (CC BY Xinyu Guo, Liang Dong and Dingjun Hao)

Another embarrassing incident was the comically anatomically incorrect picture of a rat created by the AI image generator Midjourney, which appeared in a journal paper that was subsequently retracted. Some hallucinations are more serious. Amateur mushroom pickers, for example, have been warned to steer clear of online foraging guides, likely written by AI, that contain information running counter to safe foraging practices. Many edible wild mushrooms look deceptively similar to their toxic counterparts, making careful identification critical.

By using AI to write online content, we’re in danger of triggering a vicious circle of increasingly misleading statements, polluting the Internet with unverified output. What’s more, AI can perpetuate existing biases in society. Google, for example, was forced to publish an embarrassing apology, saying it would “pause” the ability to generate images with Gemini after the service was used to create images of racially diverse Nazi soldiers.

More seriously, women and some minority groups are under-represented in healthcare data, biasing the training set and potentially skewing the recommendations of predictive AI algorithms. One study led by Laleh Seyyed-Kalantari from the University of Toronto (Nature Medicine 27 2176) found that computer-aided diagnoses of chest X-rays are less accurate for Black patients than for white patients.

Generative AI could even increase inequalities if it becomes too commercial. “Right now there’s a lot of free generative AI available, but I can also see that getting more unequal in the very near future,” Dihal warns. People who can afford to pay for ChatGPT subscriptions, for example, have access to versions of the AI based on more up-to-date training data. They therefore get better responses than users restricted to the “free” version.

Clear communication

But generative AI tools can do much more than churn out uninspired articles and create problems. One beauty of ChatGPT is that users interact with it conversationally, just like you’d talk to a human communicator at a science museum or science festival. You could start by typing something simple (such as “What is quantum entanglement?”) before delving into the details (e.g. “What kind of physical systems are used to create it?”). You’ll get answers that meet your needs better than any standard textbook.

Teenage girl using laptop at home
Opening up AI could help students, particularly those who face barriers to education, to explore scientific topics that interest them. (Courtesy: iStock/Imgorthand)

Generative AI could also boost access to physics by providing an interactive way to engage with groups – such as girls, people of colour or students from low-income backgrounds – who might face barriers to accessing educational resources in more traditional formats. That’s the idea behind online tuition platforms such as Khan Academy, which has integrated a customized version of ChatGPT into its tuition services.

Instead of presenting fully formed answers to questions, its generative AI is programmed to prompt users to work out the solution themselves. If a student types, say, “I want to understand gravity” into Khan’s generative AI-powered tutoring program, the AI will first ask what the student already knows about the subject. The “conversation” between the student and the chatbot will then evolve in the light of the student’s response.

As someone with cerebral palsy, AI has transformed how I work by enabling me to turn my speech into text in an instant

AI can also remove barriers that some people face in communicating science, allowing a wider range of voices to be heard and thereby boosting the public’s trust in science. As someone with cerebral palsy, AI has transformed how I work by enabling me to turn my speech into text in an instant (see box below).

It’s also helped Duncan Yellowlees, a dyslexic research developer who trains researchers to communicate. “I find writing long text really annoying, so I speak it into OtterAI, which converts the speech into text,” he says. The text is sent to ChatGPT, which converts it into a blog. “So it’s my thoughts, but I haven’t had to write them down.”

Then there’s Matthew Tosh, a physicist-turned-science presenter specializing in pyrotechnics. He has a progressive disease, which meant he faced an increasing struggle to write in a concise way. ChatGPT, however, lets him create draft social-media posts, which he then rewrites in his own words. As a result, he can maintain that all-important social-media presence while managing his disability at the same time.

Despite the occasional mistake made by generative AI bots, misinformation is nothing new. “That’s part of human behaviour, unfortunately,” Tosh admits. In fact, he thinks errors can – perversely – be a positive. Students who wrongly think a kilo of cannonballs will fall faster than a kilo of feathers create the perfect chance for teachers to discuss Newtonian mechanics. “In some respects,” says Tosh, “a little bit of misinformation can start the conversation.”

AI as a voice-to-text tool

Claire Malone at her desk
Reaping the benefits Claire Malone uses AI-powered speech-to-text software, which helps her work as a science communicator. (Courtesy: Claire Malone)

As a science journalist – and previously as a researcher hunting for new particles in data from the ATLAS experiment at CERN – I’ve longed to use speech-to-text programs to complete assignments. That’s because I have a disability – cerebral palsy – that makes typing impractical. For a long time this meant I had to dictate my work to a team of academic assistants for many hours a week. But in 2023 I started using Voiceitt, an AI-powered app optimized for speech recognition for people with non-standard speech like mine.

You train the app by first reading out a couple of hundred short training phrases. It then deploys AI to apply thousands of hours of other non-standard speaker models in its database to optimize its training. As Voiceitt is used, it continues refining the AI model, improving speech recognition over time. The app also has a generative AI model to correct any grammatical errors created during transcription. Each week, I find myself correcting the app’s transcriptions less and less, which is a bonus when facing journalistic deadlines, such as the one for this article.

The perfect AI assistant?

One of the first news organizations to experiment with AI tools was Associated Press (AP), which in 2014 began automating routine financial stories about corporate earnings. AP now also uses AI to create transcripts of videos, write summaries of sports events, and spot trends in large stock-market data sets. Other news outlets use AI tools to speed up “back-office” tasks such as transcribing interviews, analysing information or converting data files. Tools such as Midjourney can even help journalists to brief professional illustrators to create images.

However, there is a fine line between using AI to speed up your workflow and letting it make content without human input. Many news outlets and writers’ associations have issued statements guaranteeing not to use generative AI as a replacement for human writers and editors. Physics World, for example, has pledged not to publish fresh content generated purely by AI, though the magazine does use AI to assist with transcribing and summarizing interviews.

So how can generative AI be incorporated into the effective and trustworthy communication of science? First, it’s vital to ask the right question – in fact, composing a prompt can take several attempts to get the desired output. When summarizing a document, for example, a good prompt should include the maximum word length, an indication of whether the summary should be in paragraphs or bullet points, and information about the target audience and required style or tone.

Generative AI is here to stay – and science communicators and journalists are still working out how best to use it to communicate science

Second, information obtained from AI needs to be fact-checked. It can easily hallucinate, which makes a chatbot rather like an unreliable (but occasionally brilliant) colleague who can get the wrong end of the stick. “Don’t assume that whatever the tool is, that it is correct,” says Phil Robinson, editor of Chemistry World. “Use it like you’d use a peer or colleague who says ‘Have you tried this?’ or ‘Have you thought of that?’”

Finally, science communicators must be transparent in explaining how they used AI. Generative AI is here to stay – and science communicators and journalists are still working out how best to use it to communicate science. But if we are to maintain the quality of science journalism – so vital for the public’s trust in science – we must continuously evaluate and manage how AI is incorporated into the scientific information ecosystem.

Generative AI can help you say what you want to say. But as Dihal concludes: “It’s no substitute for having something to say.”

The post Why AI is a force for good in science communication appeared first on Physics World.

Space-based solar power: ‘We have nothing to lose and everything to gain’

The most important and pressing issue of our times is the transition to clean energy while meeting rising global demand. Cheap, abundant and reliable energy underpins the quality of life for all – and one potentially exciting way to do this is space-based solar power (SBSP). It would involve capturing sunlight in space and beaming it as microwaves down to Earth, where it would be converted into electricity to power the grid.

For proponents of SBSP such as myself, it’s a hugely promising technology. Others, though, are more sceptical. Earlier this year, for example, NASA published a report from its Office of Technology, Policy and Strategy that questioned the cost and practicality of SBSP. Henri Barde, a retired engineer who used to work for the European Space Agency (ESA) in Noordwijk, the Netherlands, has also examined the technical challenges in a report for the IEEE.

Some of these sceptical positions on SBSP were addressed in a recent Physics World article by James McKenzie. Conventional solar power is cheap, he argued, so why bother putting large solar power satellites in space? After all, the biggest barriers to building more solar plants here on Earth aren’t technical, but mostly come in the form of belligerent planning officials and local residents who don’t want their views ruined.

However, in my view we need to take a whole-energy-system perspective to see why innovation is essential for the energy transition. Wind, solar and batteries are “low-density” renewables, requiring many tonnes of minerals to be mined and refined for each megawatt-hour of energy. How can this be sustainable and give us energy security, especially when so much of our supply of these minerals depends on production in China?

Low-density renewables also require a Herculean expansion in electricity grid transmission pylons and cables to connect them to users. Other drawbacks of wind and solar are that they depend on the weather and require suitable storage – which currently does not exist at the capacity or cost needed. These forms of energy also need duplicated back-up, which is expensive, and other sources of baseload power for times when it’s cloudy or there’s no wind.

Look to the skies

With no night or weather in space, however, a solar panel in orbit generates 13 times as much energy as the same panel on Earth. SBSP, if built, would generate power continuously, transmitted as microwaves through the atmosphere with almost no loss. It could therefore deliver baseload power 24 hours a day, irrespective of local weather conditions on Earth.
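As a rough consistency check on that factor of 13, one can compare the solar irradiance above the atmosphere, available almost continuously in a suitable orbit, with the time-averaged output of a panel on the ground. The capacity factor and availability figures below are illustrative assumptions for the sketch, not numbers taken from this article.

```python
# Back-of-envelope check of the "13 times" figure.
# The ground capacity factor and in-orbit availability are assumed values for illustration.
SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
GROUND_PEAK = 1000.0      # W/m^2 typical peak insolation at the surface
ground_capacity_factor = 0.105  # night, weather and latitude losses (assumed)
orbit_availability = 0.99       # near-continuous sunlight in a suitable orbit (assumed)

ratio = (SOLAR_CONSTANT * orbit_availability) / (GROUND_PEAK * ground_capacity_factor)
print(f"{ratio:.0f}x")  # roughly 13x
```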

SBSP could easily produce more or less power as needed, effectively smoothing out the unpredictable and varying output from wind and solar

Another advantage of SBSP is that it could easily produce more or less power as needed, effectively smoothing out the unpredictable and varying output from wind and solar. We currently do this using fossil-fuel-powered gas-fired “peaker” plants, which could therefore be put out to pasture. SBSP is also scalable, allowing the energy it produces to be easily exported to other nations without expensive cables, giving it a truly global impact.

A recent whole-energy-system study by researchers at Imperial College London concluded that introducing just 8 GW of SBSP into the UK’s energy mix would deliver system savings of over £4bn every year. In my view, which is shared by others too, the utility of SBSP is likely to be even greater when considering whole continents or global alliances. It can give us affordable and reliable clean energy.

My firm, Space Solar, has designed a solar-power satellite called CASSIOPeiA, which – based on the key metric of power per unit mass – is more than twice as powerful as ESA’s design. So far, we have built and successfully demonstrated our power beaming technology, and following £5m of engineering design work, we have arguably the most technically mature design in the world.

If all goes to plan, we’ll have our first commercial product by 2029. Offering 30 MW of power, it could be launched by a single Starship rocket, and scale to gigawatt systems from there. Sure, there are engineering challenges, but these are mostly based on ensuring that the economics remain competitive. Space Solar is also lucky in having world-class experts working in spacecraft engineering, advanced photovoltaics, power beaming and in-space robotics.

Brighter and better

But why then was NASA’s study so sceptical of SBSP? I think it was because the report made absurdly conservative assumptions about the economics. NASA assumed an operating life of only 10 years, so to run for 30 years the whole solar power satellite would have to be built and launched three times. Yet satellites today generally last for more than 25 years, with most baselined for a minimum 15-year life.

The NASA report also assumed that Starship launch costs would remain at around $1500/kg. However, other independent analyses, such as “Space: the dawn of a new age” produced in 2022 by Citi Group, have forecast that the figure will be an order of magnitude lower – just $100/kg – by 2040. I could go on, as there are plenty more examples of risk-averse thinking in the NASA report.

Buried in the report, however, is an analysis of more reasonable scenarios than the “baseline”, which concluded that “these conditions would make SBSP systems highly competitive with any assessed terrestrial renewable electricity production technology’s 2050 cost projections”. Curiously, these findings did not make it into the executive summary.

The NASA study has been widely criticized, including by former NASA physicist John Mankins, who invented another approach to space solar dubbed SPS Alpha. Speaking on a recent episode of the DownLink podcast, he suspected NASA’s gloomy stance may in part be because it focuses on space tech and space exploration rather than energy for Earth. NASA bosses might fear that if they were directed by Congress to pursue SBSP, money for other priorities might be at risk.

I also question Barde’s sceptical opinion of the technology of SBSP, which he expressed in an article for IEEE Spectrum. Barde appeared not to understand many of the design features that make SBSP technically feasible. He wrote, for example, about “gigawatts of power coursing through microwave systems” of the solar panels on the satellite, which sounds ominous and challenging to achieve.

In reality, the gigawatts of sunlight are reflected onto a large area of photovoltaics containing a billion or so solar cells. Each cell, which includes an antenna and electronic components to convert the sunlight into microwaves, is arranged in a sandwich module just a few millimetres thick handling just 2 W of power. So although the satellite delivers gigawatts overall, the figure is much lower at the component level. What’s more, each cell can be made using tried and tested radio-frequency components.
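The headline arithmetic is easy to check: a billion modules each handling about 2 W do indeed add up to gigawatt-scale output. A trivial sketch, using the rough numbers quoted above rather than any detailed design parameters:

```python
# Back-of-envelope check of the per-cell power figure quoted in the text.
# The cell count is the rough "billion or so" figure, not a detailed design parameter.
n_cells = 1e9           # ~billion solar-cell/antenna sandwich modules
power_per_cell_w = 2.0  # watts handled by each module
total_gw = n_cells * power_per_cell_w / 1e9
print(f"{total_gw:.0f} GW")  # ~2 GW at the satellite level
```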

As for Barde’s fears about thermal management – in other words, how we can stop the satellite from overheating – that has already been analysed in detail. The plan is to use passive radiative cooling without active systems. Barde also warns of temperature swings as the satellites pass through eclipse during the spring and autumn equinox. But this problem is common to all satellites and has, in any case, been analysed as part of our engineering work. In essence, Barde’s claim of “insurmountable technical difficulties” is simply his opinion.

Until the first solar power satellite is commissioned, there will always be sceptics [but] that was also true of reusable rockets and cubesats, both of which are now mainstream technology

Until the first solar power satellite is commissioned, there will always be sceptics of what we are doing. However, that was also true of reusable rockets and cubesats, both of which are now mainstream technology. SBSP is a “no-regrets” investment that will see huge environmental and economic benefits, with spin-off technologies in wireless power beaming, in-space assembly and photovoltaics.

It is the ultimate blend of space technology and societal benefit, which will inspire the next generation of students into physics and engineering. Currently, the UK has a leadership position in SBSP, and if we have the vision and ambition, there is nothing to lose and everything to gain from backing this. We just need to get on with the job.

The post Space-based solar power: ‘We have nothing to lose and everything to gain’ appeared first on Physics World.

Axion clouds around neutron stars could reveal dark matter origins

Hypothetical particles called axions could form dense clouds around neutron stars – and if they do, they will give off signals that radio telescopes can detect, say researchers in the Netherlands, the UK and the US. Since axions are a possible candidate for the mysterious substance known as dark matter, this finding could bring us closer to understanding it.

Around 85% of the universe’s mass consists of matter that appears “dark” to us. We can observe its gravitational effect on structures such as galaxies, but we cannot observe it directly. This is because dark matter hardly interacts with anything as far as we know, making it very difficult to detect. So far, searches for dark matter on Earth and in space have found no evidence for any of the various dark matter candidates.

The new research raises hopes that axions could be different. These neutral, bosonic particles are extremely light and hardly interact with ordinary matter. They get their name from a brand of soap, having been first proposed in the 1970s as a way of “cleaning up” a problem in quantum chromodynamics (QCD). More recently, astronomers have suggested they could clean up cosmology, too, by playing a role in the formation of galaxies in the early universe. They would also be a clean start for particle physics, providing evidence for new physics beyond the Standard Model.

Signature signals

But how can we detect axions if they are almost invisible to us? In the latest work, researchers at the University of Amsterdam, Princeton University and the University of Oxford showed that axions, if they exist, will be produced in large quantities at the polar regions of neutron stars. (Axions may also be components of dark matter “halos” believed to be present in the universe, but this study investigated axions produced by neutron stars themselves.) While many axions produced in this way will escape, some will be captured by the stars’ strong gravitational field. Over millions of years, axions will therefore accumulate around neutron stars, forming a cloud dense enough to give off detectable signals.

To reach these conclusions, the researchers examined various axion cloud interaction mechanisms, including self-interaction, absorption by neutron star nuclei and electromagnetic interactions. They concluded that for most axion masses, it is the last mechanism – specifically, a process called resonant axion-photon mixing – that dominates. Notably, this mechanism should produce a stream of low-energy photons in the radiofrequency range.

The team also found that these radio emissions would be connected to four distinct phases of axion cloud evolution. These are a growth phase after the neutron star forms; a saturation phase during normal life; a magnetorotational decay phase towards the later stages of the star’s existence; and finally a large burst of radio waves when the neutron star dies.

Turn on the radio

The researchers say that several large radio telescopes around the globe could play a role in detecting these radiofrequency signatures. Examples include the Low-Frequency Array (LOFAR) in the Netherlands; the Murchison Widefield Array in Australia; and the Green Bank Telescope in the US. To optimize the chances of picking up an axion signal, the collaboration recommends specific observation times, bandwidths and signal-to-noise ratios that these radio telescopes should adhere to. By following these guidelines, they say, the LOFAR setup alone could detect up to four events per year.

Dion Noordhuis, a PhD student at Amsterdam and first author of a Physical Review X paper on the research, acknowledges that there could be other observational signals beyond those explored in the paper. These will require further investigation, and he suggests that a full understanding will require complementary efforts from multiple branches of physics, including particle (astro)physics, plasma physics and observational radioastronomy. “This work thereby opens up a new, cross-disciplinary field with lots of opportunities for future research,” he tells Physics World.

Sankarshana Srinivasan, an astrophysicist from the Ludwig Maximilian University in Munich, Germany, who was not involved in the research, agrees that the QCD axion is a well-motivated candidate for dark matter. The Amsterdam-Princeton-Oxford team’s biggest achievement, he says, is to realize how axion clouds could enhance the signal, while the team’s “state-of-the-art” modelling makes the work stand out. However, he also urges caution because all theories of axion-photon mixing around neutron stars make assumptions about the stars’ magnetospheres, which are still poorly understood.

The post Axion clouds around neutron stars could reveal dark matter origins appeared first on Physics World.
