
From melanoma to malaria: photoacoustic device detects disease without taking a single drop of blood

By Tami Freeman
12 November 2024, 10:30

Malaria remains a serious health concern, with annual deaths rising every year since 2019 and almost half of the world’s population at risk of infection. Existing diagnostic tests are less than optimal, and all rely on obtaining an invasive blood sample. Now, a research collaboration from the USA and Cameroon has demonstrated a device that can non-invasively detect this potentially deadly infection without requiring a single drop of blood.

Currently, malaria is diagnosed using optical microscopy or antigen-based rapid diagnostic tests, but both methods have low sensitivity. Polymerase chain reaction (PCR) tests are more sensitive, but still require blood sampling. The new platform – Cytophone – uses photoacoustic flow cytometry (PAFC) to rapidly identify malaria-infected red blood cells via a small probe placed on the back of the hand.

PAFC works by delivering low-energy laser pulses through the skin into a blood vessel and recording the thermoacoustic signals generated by absorbers in circulating blood. Cytophone, invented by Vladimir Zharov from the University of Arkansas for Medical Sciences, was originally developed as a universal diagnostic platform and first tested clinically for detection of cancerous melanoma cells.

“We selected melanoma because of the possibility of performing label-free detection of circulating cells using melanin as an endogenous biomarker,” explains Zharov. “This avoids the need for in vivo labelling by injecting contrast agents into blood.” For malaria diagnosis, Cytophone detects haemozoin, an iron crystal that accumulates in red blood cells infected with malaria parasites. These haemozoin biocrystals have unique magnetic and optical properties, making them a potential diagnostic target.

Photoacoustic detection Schematic of the focused ultrasound transducer array assessing a blood network. (Courtesy: Nat. Commun. 10.1038/s41467-024-53243-z)

“The similarity between melanin and haemozoin biomarkers, especially the high photoacoustic contrast above the blood background, motivated us to bring a label-free malaria test with no blood drawing to malaria-endemic areas,” Zharov tells Physics World. “To build a clinical prototype for the Cameroon study we used a similar platform and just selected a smaller laser to make the device more portable.”

The Cytophone prototype uses a 1064 nm laser with a linear beam shape and a high pulse rate to interrogate fast moving blood cells within blood vessels. Haemozoin nanocrystals in infected red blood cells absorb this light (more strongly than haemoglobin in normal red blood cells), heat up and expand, generating acoustic waves. These signals are detected by an array of 16 tiny ultrasound transducers in acoustic contact with the skin. The transducers have focal volumes oriented in a line across the vessel, which increases sensitivity and resolution, and simplifies probe navigation.
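
In photoacoustics, the initial pressure launched by an optical absorber follows a standard textbook relation – quoted here as background, not as a formula from the paper:

    $$ p_0 = \Gamma \, \mu_a \, F $$

where Γ is the Grüneisen parameter (which converts absorbed optical energy into pressure), μ_a is the optical absorption coefficient at the 1064 nm laser wavelength and F is the local fluence. Because the μ_a of haemozoin exceeds that of haemoglobin at this wavelength, infected red blood cells stand out against the blood background.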

In vivo testing

Zharov and collaborators – also from Yale School of Public Health and the University of Yaoundé I – tested the Cytophone in 30 Cameroonian adults diagnosed with uncomplicated malaria. They used data from 10 patients to optimize device performance and assess safety. They then performed a longitudinal study in the other 20 patients, who attended four or five visits over a period of up to 37 days following antimalarial therapy, contributing 94 visits in total.

Photoacoustic waveforms and traces from infected blood cells have a particular shape and duration, and a different time delay to that of background skin signals. The team used these features to optimize signal processing algorithms with appropriate averaging, filtration and gating to identify true signals arising from infected red blood cells. As the study subjects all had dark skin with high melanin content, this time-resolved detection also helped to avoid interference from skin melanin.
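
As a rough illustration of this kind of processing – not the team’s actual pipeline; the sampling rate, filter band, gate window and threshold below are invented for the sketch – a few lines of Python show how averaged traces could be band-pass filtered and time-gated before counting peaks:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, find_peaks

    FS = 50e6            # sampling rate in Hz (hypothetical)
    GATE = (2e-6, 4e-6)  # accept echoes arriving 2-4 microseconds after the laser
                         # pulse, i.e. from vessel depth rather than the skin surface

    def infected_cell_peak_times(traces):
        """traces: (n_pulses, n_samples) array, one photoacoustic record per laser shot."""
        avg = traces.mean(axis=0)                    # averaging suppresses random noise
        sos = butter(4, [1e6, 10e6], btype="band", fs=FS, output="sos")
        filt = sosfiltfilt(sos, avg)                 # keep only the ultrasound band
        t = np.arange(filt.size) / FS
        gated = np.where((t > GATE[0]) & (t < GATE[1]), filt, 0.0)  # time gate rejects
                                                                    # skin-melanin signal
        peaks, _ = find_peaks(np.abs(gated), height=5 * np.std(filt))
        return t[peaks]

Counting such peaks over many laser shots yields a rate comparable to the peaks-per-minute statistic reported below.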

On visit 1 (the day of diagnosis), 19/20 patients had detectable photoacoustic signals. Following treatment, these signals consistently decreased with each visit. Cytophone-positive samples exhibited median photoacoustic peak rates of 1.73, 1.63, 1.18 and 0.74 peaks/min on visits 1–4, respectively. One participant had a positive signal on visit 5 (day 30). The results confirm that Cytophone is sensitive enough to detect low levels of parasites in infected blood.

The researchers note that Cytophone detected the most common and deadliest species of malaria parasite, as well as one infection by a less common species and two mixed infections. “That was a really exciting proof-of-concept with the first generation of this platform,” says co-lead author Sunil Parikh in a press statement. “I think one key part of the next phase is going to involve demonstrating whether or not the device can detect and distinguish between species.”

Team work The researchers from the USA and Cameroon are using photoacoustic flow cytometry to rapidly detect malaria infection. (Courtesy: Sunil Parikh)

Performance comparison

Compared with invasive microscopy-based detection, Cytophone demonstrated 95% sensitivity at the first visit and 90% sensitivity during the follow-up period, with 69% specificity and an area under the ROC curve of 0.84, suggesting excellent diagnostic performance. Cytophone also approached the diagnostic performance of standard PCR tests, with scope for further improvement.
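
These figures follow from the standard definitions; taking microscopy as the reference test, the visit-1 sensitivity is simply the 19-out-of-20 detection count quoted above:

    $$ \text{sensitivity} = \frac{TP}{TP + FN} = \frac{19}{19 + 1} = 95\%, \qquad \text{specificity} = \frac{TN}{TN + FP} = 69\% $$

so roughly three in ten parasite-free measurements still produced a positive Cytophone signal.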

Staff required just 4–6 h of training to operate Cytophone, plus a few days’ experience to achieve optimal probe placement. And with minimal consumables required and the increasing affordability of lasers, the researchers estimate that the cost per malaria diagnosis will be low. The study also confirmed the safety of the Cytophone device. “Cytophone has the potential to be a breakthrough device allowing for non-invasive, rapid, label-free and safe in vivo diagnosis of malaria,” they conclude.

The researchers are now performing further malaria-related clinical studies focusing on asymptomatic individuals and children (for whom the needle-free aspect is particularly important). Simultaneously, they are continuing melanoma trials to detect early-stage disease and investigating the use of Cytophone to detect circulating blood clots in stroke patients.

“We are integrating multiple innovations to further enhance Cytophone’s sensitivity and specificity,” says Zharov. “We are also developing a cost-effective wearable Cytophone for continuous monitoring of disease progression and early warning of the risk of deadly disease.”

The study is described in Nature Communications.


Quantized vortices seen in a supersolid for the first time

11 November 2024, 15:27

Quantized vortices – one of the defining features of superfluidity – have been seen in a supersolid for the first time. Observed by researchers in Austria, these vortices provide further confirmation that supersolids can be modelled as superfluids with a crystalline structure. This model could have a variety of other applications in quantum many-body physics, and the Austrian team is now using it to study pulsars, which are rotating, magnetized neutron stars.

A superfluid is a curious state of matter that can flow without any friction. Superfluid systems that have been studied in the lab include helium-4; type-II superconductors; and Bose–Einstein condensates (BECs) – all of which exist at very low temperatures.

More than five decades ago, physicists suggested that some systems could exhibit crystalline order and superfluidity simultaneously in a unique state of matter called a supersolid. In such a state, the atoms would be described by the same wavefunction and are therefore delocalized across the entire crystal lattice. The order of the supersolid would therefore be defined by the nodes and antinodes of this wavefunction.

In 2004, Moses Chan of the Pennsylvania State University in the US and his PhD student Eun-Seong Kim reported observing a supersolid phase in superfluid helium-4. However, Chan and others have not been able to reproduce this result. Subsequently, researchers including Giovanni Modugno at Italy’s University of Pisa and Francesca Ferlaino at the University of Innsbruck in Austria have demonstrated evidence of supersolidity in BECs of magnetic atoms.

Irrotational behaviour

But until now, no-one had observed an important aspect of superfluidity in a supersolid: that a superfluid never carries bulk angular momentum. If a superfluid is placed in a container and the container is rotated at moderate angular velocity, it simply flows freely against the edges. As the angular momentum of the container increases, however, it becomes energetically costly to maintain the decoupling between the container and the superfluid. “Still, globally, the system is irrotational,” says Ferlaino; “So there’s really a necessity for the superfluid to heal itself from rotation.”

In a normal superfluid, this “healing” occurs by the formation of small, quantized vortices that dissipate the angular momentum, allowing the system to remain globally irrotational. “In an ordinary superfluid that’s not modulated in space [the vortices] form a kind of triangular structure called an Abrikosov lattice, because that’s the structure that minimizes their energy,” explains Ferlaino. It was unclear how the vortices might sit inside a supersolid lattice.
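
“Quantized” here has its usual superfluid meaning: because the condensate wavefunction must be single-valued, the circulation of the superfluid velocity around any vortex core is restricted to integer multiples of h/m – standard background, with m in this experiment the mass of a dysprosium-164 atom:

    $$ \oint \mathbf{v}_s \cdot \mathrm{d}\boldsymbol{\ell} = n \, \frac{h}{m}, \qquad n = 0, \pm 1, \pm 2, \ldots $$

Since a vortex’s energy grows roughly as n², a rotating superfluid prefers many singly quantized vortices over one multiply charged vortex – hence the lattices described above.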

In the new work, Ferlaino and colleagues at the University of Innsbruck utilized a technique called magnetostirring to rotate a BEC of magnetic dysprosium-164 atoms. They caused the atoms to rotate simply by rotating the magnetic field. “That’s the beauty: it’s so simple but nobody had thought about this before,” says Ferlaino.

As the group increased the field’s rotation rate, they observed vortices forming in the condensate and migrating to the density minima. “Vortices are zeroes of density, so there it costs less energy to drill a hole than in a density peak,” says Ferlaino; “The order that the vortices assume is largely imparted by the crystalline structure – although their distance is dependent on the repulsion between vortices.”

Unexpected applications

The researchers believe the findings could be applicable in some unexpected areas of physics. Ferlaino tells of hearing a talk about the interior composition of neutron stars by the theoretical astrophysicist Massimo Mannarelli of Gran Sasso Laboratory in Italy. “During the coffee break I went to speak to him and we’ve started to work together.”

“A large part of the astrophysical community is convinced that the core of a neutron star is a superfluid,” Ferlaino says; “The crust is a solid, the core is a superfluid, and a layer called the inner crust has both properties together.” Pulsars are neutron stars that emit radiation in a narrow beam, giving them a well-defined pulse rate that depends on their rotation. As they lose energy through radiation emission, they gradually slow down.

Occasionally, however, their rotation rates suddenly speed up again in events called glitches. The researchers’ theoretical models suggest that the glitches could be caused by vortices unpinning from the supersolid and crashing into the solid exterior, imparting extra angular momentum. “When we impose a rotation on our supersolid that slows down, then at some point the vortices unpin and we see the glitches in the rotational frequency,” Ferlaino says. “This is a new direction – I don’t know where it will bring us, but for sure experimentally observing vortices was the first step.”

Theorist Blair Blakie of the University of Otago in New Zealand is excited by the research. “Vortices in supersolids were a bit of a curiosity in early theories, and sometimes you’re not sure whether theorists are just being a bit crazy considering things, but now they’re here,” he says. “It opens this new landscape for studying things from non-equilibrium dynamics to turbulence – all sorts of things where you’ve got this exotic material with topological defects in it. It’s very hard to predict what the killer application will be, but in these fields people love new systems with new properties.”

The research is described in Nature.


Sceptical space settlers, Einstein in England, trials of the JWST, tackling quantum fundamentals: micro reviews of the best recent books

11 November 2024, 12:00

A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?
By Kelly and Zach Weinersmith

Husband-and-wife writing team Kelly and Zach Weinersmith were excited about human settlements in space when they started research for their new book A City on Mars. But the more they learned, the more sceptical they became. From technology, practicalities and ethics, to politics and the legal framework, they uncovered profound problems at every step. With humorous panache and plenty of small cartoons by Zach, who also does the webcomic Saturday Morning Breakfast Cereal, the book is a highly entertaining guide that will dent the enthusiasm of most proponents of settling space. Kate Gardner

  • 2024 Particular Books

Einstein in Oxford
By Andrew Robinson

“England has always produced the best physicists,” Albert Einstein once said in Berlin in 1925. His high regard for British physics led him to pay three visits to the University of Oxford in the early 1930s, which are described by Andrew Robinson in his charming short book Einstein in Oxford. Sadly, the visits were not hugely productive for Einstein, who disliked the formality of Oxford life. His time there is best remembered for the famous blackboard – saved for posterity – on which he’d written while giving a public lecture. Matin Durrani

  • 2024 Bodleian Library Publishing

Pillars of Creation: How the James Webb Telescope Unlocked the Secrets of the Cosmos
By Richard Panek

The history of science is “a combination of two tales” says Richard Panek in his new book charting the story of the James Webb Space Telescope (JWST). “One is a tale of curiosity. The other is a tale of tools.” He has chosen an excellent case study for this statement. Pillars of Creation combines the story of the technological and political hurdles that nearly sank the JWST before it launched with a detailed account of its key scientific contributions. Panek’s style is also multi-faceted, mixing technical explanations with the personal stories of scientists fighting to push the frontiers of astronomy.  Katherine Skipper

  • 2024 Little, Brown

Quanta and Fields: the Biggest Ideas in the Universe
By Sean Carroll

With 2025 being the International Year of Quantum Science and Technology, the second book in prolific science writer Sean Carroll’s “Biggest Ideas” trilogy – Quanta and Fields – might make for a prudent read. Following the first volume on “space, time and motion”, it tackles the key scientific principles that govern quantum mechanics, from wave functions to effective field theory. But beware: this book is packed with equations, formulae and technical concepts. It’s essentially a popular-science textbook, in which Carroll does things like examine each term in the Schrödinger equation and delve into the framework for group theory. Great for physicists but not, perhaps, for the more casual reader. Tushna Commissariat

  • 2024 Penguin Random House


Four-wave mixing could boost optical communications in space

9 November 2024, 16:02

A new and practical approach to the low-noise amplification of weakened optical signals has been unveiled by researchers in Sweden. Drawing from the principles of four-wave mixing, Rasmus Larsson and colleagues at Chalmers University of Technology believe their approach could have promising implications for laser-based communication systems in space.

Until recently, space-based communication systems have largely relied on radio waves to transmit signals. Increasingly, however, these systems are being replaced with optical laser beams. The shorter wavelengths of these signals offer numerous advantages over radio waves. These include higher data transmission rates; lower power requirements; and lower risks of interception.

However, when transmitted across the vast distances of space, even a tightly focused laser beam will spread out significantly by the time it reaches its destination, severely weakening the signal.
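
A back-of-the-envelope diffraction estimate shows the scale of the problem (the numbers are illustrative, not taken from the study). A Gaussian beam of wavelength λ and waist w₀ diverges with half-angle θ ≈ λ/(πw₀), so for a 1550 nm telecom laser launched from a 5 cm waist:

    $$ \theta \approx \frac{\lambda}{\pi w_0} = \frac{1.55 \times 10^{-6}\,\mathrm{m}}{\pi \times 0.05\,\mathrm{m}} \approx 10\,\mu\mathrm{rad} $$

After one astronomical unit (1.5 × 10¹¹ m) the spot radius is θL ≈ 1500 km, and a 1 m receiver telescope then intercepts only about (0.5 m / 1.5 × 10⁶ m)² ≈ 10⁻¹³ of the transmitted power.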

To deal with this loss, receivers must be extremely sensitive to incoming signals. This involves the preamplification of the signal above the level of electronic noise in the receiver. But conventional optical amplifiers are far too noisy to achieve practical space-based communications.

Phase-sensitive amplification

In a 2021 study, Larsson’s team showed how these weak signals can, in theory, be amplified with zero noise using a phase-sensitive optical parametric amplifier (PSA). However, this approach did not solve the problem entirely.

“The PSA should be the ideal preamplifier for optical receivers,” Larsson explains. “However, we don’t see them in practice due to their complex implementation requirements, where several synchronized optical waves of different frequencies are needed to facilitate the amplification.” These cumbersome requirements place significant demands on both transmitter and receiver, which limits their use in space-based communications.

To simplify preamplification, Larsson’s team used four-wave mixing. Here, the interaction between light at three different wavelengths within a nonlinear medium produces light at a fourth wavelength.

In this case, a weakened transmitted signal is mixed with two strong “pump” waves that are generated within the receiver. When the phases of the signal and pump are synchronized inside a doped optical fibre, light at the fourth wavelength interferes constructively with the signal. This boosts the amplitude of the signal without sacrificing low-noise performance.
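
Four-wave mixing conserves photon energy, which fixes the frequency of the generated wave; phase-sensitive operation additionally requires a locked phase relationship between the waves (both are textbook relations rather than details specific to this receiver):

    $$ \omega_4 = \omega_{p1} + \omega_{p2} - \omega_s, \qquad \phi_{p1} + \phi_{p2} - \phi_s - \phi_4 = \text{constant} $$

The phase condition is what makes the amplifier phase-sensitive: with the right locked phase the generated light adds constructively to the signal quadrature, while the orthogonal quadrature is deamplified – allowing, in the ideal case, amplification with no added noise.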

Auxiliary waves

“This allows us to generate all required auxiliary waves in the receiver, with the transmitter only having to generate the signal wave,” Larsson describes. “This is contrary to the case before where most, if not all waves were generated in the transmitter. The synchronization of the waves further uses the same specific lossless approach we demonstrated in 2021.”

The team says that this new approach offers a practical route to noiseless amplification within an optical receiver. “After optimizing the system, we were able to demonstrate the low-noise performance and a receiver sensitivity of 0.9 photons per bit,” Larsson explains. This amount of light is the minimum needed to reliably decode each bit of data and Larsson adds, “This is the lowest sensitivity achieved to date for any coherent modulation format.”
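
To put 0.9 photons per bit in perspective, the corresponding optical power is the photon energy times the bit rate. Assuming a 1550 nm carrier and a 10 Gbit/s link (illustrative values, not from the study):

    $$ P = n_{\mathrm{ph}} \, \frac{hc}{\lambda} \, R \approx 0.9 \times 1.28 \times 10^{-19}\,\mathrm{J} \times 10^{10}\,\mathrm{s^{-1}} \approx 1.2\,\mathrm{nW} $$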

This unprecedented sensitivity enabled the team to establish optical communication links between a PSA-amplified receiver and a conventional, single-wave transmitter. With a clear route to noiseless preamplification through some further improvements, the researchers are now hopeful that their approach could open up new possibilities across a wide array of applications – especially for laser-based communications in space.

“In this rapidly emerging topic, the PSA we have demonstrated can facilitate much higher data rates than the bandwidth-limited single photon detection technology currently considered.”

This ability would make the team’s PSA ideally suited for communication links between space-based transmitters and ground-based receivers. In turn, astronomers could finally break the notorious “science return bottleneck”. This would remove many current restrictions on the speed and quantity of data that can be transmitted by satellites, probes, and telescopes scattered across the solar system.

The research is described in Optica.


The Arecibo Observatory’s ‘powerful radiation environment’ led to its collapse, claims report

8 November 2024, 14:40

The Arecibo Observatory’s “uniquely powerful electromagnetic radiation environment” is the most likely initial cause of its destruction and collapse in December 2020. That’s according to a new report by the National Academies of Sciences, Engineering, and Medicine, which states that failure of zinc in the cables that held the telescope’s main platform led to it falling onto the huge 305 m reflector dish – causing catastrophic damage.

While previous studies of the iconic telescope’s collapse had identified the deformation of zinc inside the cable sockets, other reasons were also put forward. They included poor workmanship and the effects of Hurricane Maria, which hit the area in 2017 and subjected the telescope’s cables to the highest structural stress they had endured since the instrument opened in 1963.

Inspections after the hurricane showed some evidence of cable slippage. Yet these investigations, the report says, failed to note several failure patterns and did not provide plausible explanations for most of them. In addition, photos taken in 2019 gave “a clear indication of major socket deterioration”, but no further investigation followed.

The eight-strong committee that wrote the report, chaired by Roger McCarthy of the US firm McCarthy Engineering, found this lack of follow-up surprising. “The lack of documented concern from the contracted engineers about the inconsequentiality of cable pullouts or the safety factors between Hurricane Maria in 2017 and the failure is alarming,” they say.

Further research

The report concludes that the root cause of the catastrophe was linked to the zinc sockets, which suffered “unprecedented and accelerated long-term creep-induced failure”. Metallic creep – the slow, permanent deformation of a metal – is caused by stress and exacerbated by heat, and can eventually cause metal components to fail. “Each failure involved both the rupture of some of the cable’s wires and a deformation of the socket’s zinc, and is therefore the failure of a cable-socket assembly,” the report notes.

As to the cause of the creep, the committee sees the telescope’s radiation environment as “the only hypothesis that…provides a plausible but unprovable answer”. The committee proposes that the telescope’s powerful transmitters induced electrical currents in the cables and sockets, potentially causing “long-term, low-current electroplasticity” in the zinc. The increased induced plasticity accelerated the natural ongoing creep in the zinc.

The report adds that the collapse of the platform is the first documented zinc-induced creep failure, despite the metal being used in such a way for over a century. The committee now recommends that the National Science Foundation (NSF), which oversees Arecibo, offer the remaining socket and cable sections to the research community for further analysis on the “large-diameter wire connections, the long-term creep behavior of zinc spelter connections, and [the] materials science”.

  • Meanwhile, the NSF had planned to reopen the telescope site as an educational center later this month, but that has now been delayed until next year to coincide with the NSF’s 75th anniversary.


Top-cited author Vaidehi Paliya discusses the importance of citations and awards

8 November 2024, 10:21

More than 50 papers from India have been recognized with a top-cited paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors of papers, published between 2021 and 2023 in IOP Publishing and its partners’ journals, that rank in the top 1% of the most-cited papers.

The winners include astrophysicist Vaidehi Paliya from Inter-University Centre for Astronomy and Astrophysics (IUCAA) and colleagues. Their work involved studying the properties of the “central engines” of blazars, a type of active galactic nucleus.

Highly cited: Vaidehi Paliya

“Knowing that the astronomy community has appreciated the published research is excellent,” says Vaidehi. “It has been postulated for a long time that the physics of relativistic jets is governed by the central supermassive black hole and accretion disk, also known as the central engine of an active galaxy. Our work is probably the first to quantify their physical properties, such as the black hole mass and the accretion disk luminosity, for a large sample of active galaxies hosting powerful relativistic jets called blazars.”

Vaidehi explains that getting many citations for the work, which was published in Astrophysical Journal Supplement Series, indicates that the published results “have been helpful to other researchers” and that this broad visibility also increases the chance that other groups will come across the work. “[Citations] are important because they can therefore trigger innovative ideas and follow-up research critical to advancing scientific knowledge,” adds Vaidehi.

Vaidehi says that he often turns to highly cited research “to appreciate the genuine ideas put forward by scientists”, with two recent examples being what inspired him to work on the central engine problem.

Indeed, Vaidehi says that prizes such as IOP’s highly cited paper award are essential for researchers, especially students. “Highly cited work is crucial not only to win awards but also for the career growth of a researcher. Awards play a significant role in further motivating fellow researchers to achieve even higher goals and highlight the importance of innovation,” he says. “Such awards are definitely a highlight in getting a career promotion. The news of the award may also lead to opportunities. For instance, to be invited to join other researchers working in similar areas, which will provide an ideal platform for future collaboration and research exploration.”

Vaidehi adds that results that are meaningful to broader research areas will likely result in higher citations. “Bringing innovation to the work is the key to success,” he says. “Prestigious awards, high citation counts, and other forms of success and recognition will automatically follow. You will be remembered by the community only for your contribution to its advancement and growth, so be genuine.”

  • For the full list of top-cited papers from India for 2024, see here.


How to boost the sustainability of solar cells

7 November 2024, 16:00

In this episode of the Physics World Weekly podcast I explore routes to more sustainable solar energy. My guests are four researchers at the UK’s University of Oxford who have co-authored the “Roadmap on established and emerging photovoltaics for sustainable energy conversion”.

They are the chemist Robert Hoye; the physicists Nakita Noel and Pascal Kaienburg; and the materials scientist Sebastian Bonilla. We define what sustainability means in the context of photovoltaics and we look at the challenges and opportunities for making sustainable solar cells using silicon, perovskites, organic semiconductors and other materials.

This podcast is supported by Pfeiffer Vacuum+Fab Solutions.

Pfeiffer is part of the Busch Group, one of the world’s largest manufacturers of vacuum pumps, vacuum systems, blowers, compressors and gas abatement systems. Explore its products at the Pfeiffer website.

 


Lightning sets off bursts of high-energy electrons in Earth’s inner radiation belt

7 November 2024, 14:00

A supposedly stable belt of radiation 7000 km above the Earth’s surface may in fact be producing damaging bursts of high-energy electrons. According to scientists at the University of Colorado Boulder, US, the bursts appear to be triggered by lightning, and understanding them could help determine the safest “windows” for launching spacecraft – especially those with a human cargo.

The Earth is surrounded by two doughnut-shaped radiation belts that lie within our planet’s magnetosphere. While both belts contain high concentrations of energetic electrons, the electrons in the outer belt (which starts from about 4 Earth radii above the Earth’s surface and extends to about 9–10 Earth radii) typically have energies in the MeV range. In contrast, electrons in the inner belt, which is located between about 1.1 and 2 Earth radii, have energies between 10 and a few hundred kiloelectronvolts (keV).

At the higher end of this energy scale, these electrons easily penetrate the walls of spacecraft and can damage sensitive electronics inside. They also pose risks to astronauts who leave the protective environment of their spacecraft to perform extravehicular activities.

The size of the radiation belts, as well as the energy and number of electrons they contain, varies considerably over time. One cause of these variations is sub-second bursts of energetic electrons that enter the atmosphere from the surrounding magnetosphere. These rapid microbursts are most commonly seen in the outer radiation belt, where they are the result of interactions with phenomena called whistler-mode chorus radio waves. However, they can also be observed in the inner belt, where they are generated by whistlers produced by lightning storms. Such lightning-induced precipitation, as it is known, typically occurs at low energies of tens to hundreds of keV.

Outer-belt energies in inner-belt electrons

In the new study, researchers led by CU Boulder aerospace engineering student Max Feinland observed clumps of electrons with MeV energies in the inner belt for the first time. This serendipitous discovery came while Feinland was analysing data from a now-decommissioned NASA satellite called the Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX). He originally intended to focus on outer-belt electrons, but “after stumbling across these events in the inner belt, we thought they were interesting and decided to investigate further,” he tells Physics World.

After careful analysis, Feinland, who was working as an undergraduate research assistant in Lauren Blum’s team at CU Boulder’s Laboratory for Atmospheric and Space Physics at the time, identified 45 bursts of high-energy electrons in the inner belt in data from 1996 to 2006. At first, he and his colleagues weren’t sure what could be causing them, since the chorus waves known to produce such high-energy bursts are generally an outer-belt phenomenon. “We actually hypothesized a number of processes that could explain our observations,” he says. “We even thought that they might be due to Very Low Frequency (VLF) transmitters used for naval communications.”

The lightbulb moment, however, came when Feinland compared the bursts to records of lightning strikes in North America. Intriguingly, he found that several of the peaks in the electron bursts seemed to occur less than a second after the lightning strikes.

A lightning trigger

The researchers’ explanation for this is that radio waves produced after a lightning strike interact with electrons in the inner belt. These electrons then begin to oscillate between the Earth’s northern and southern hemispheres with a period of just 0.2 seconds. With each oscillation, some electrons drop out of the inner belt and into the atmosphere. This last finding was unexpected: while researchers knew that high-energy electrons can fall into the atmosphere from the outer radiation belt, this is the first time that they have observed them coming from the inner belt.
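
The 0.2 s figure is consistent with the standard dipole-field estimate of the bounce period for a relativistic electron (a textbook approximation; L ≈ 1.5, i.e. an inner-belt field line, is chosen here for illustration):

    $$ \tau_b \approx \frac{4 L R_E}{v} \, T(\alpha_{\mathrm{eq}}) \approx \frac{4 \times 1.5 \times 6.4 \times 10^{6}\,\mathrm{m}}{3 \times 10^{8}\,\mathrm{m\,s^{-1}}} \times 1.3 \approx 0.17\,\mathrm{s} $$

where T(α_eq) ≈ 1.3 is the usual pitch-angle factor for particles mirroring at low altitude.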

Feinland says the team’s discovery could help space-launch firms and national agencies decide when to launch their most sensitive payloads. With further studies, he adds, it might even be possible to determine how long these high-energy electrons remain in the inner belt after geomagnetic storms. “If we can quantify these lifetimes, we could determine when it is safest to launch spacecraft,” he says.

The researchers are now seeking to calculate the exact energies of the electrons. “Some of them may be even more energetic than 1 MeV,” Feinland says.

The present work is detailed in Nature Communications.


First human retinal image brings sight-saving portable OCT a step closer

By Tami Freeman
7 November 2024, 10:40
First in human Image of a human retina taken with the Akepa photonic chip, showing the retinal layers and key clinical features used for disease diagnosis and monitoring. (Courtesy: Siloton)

UK health technology start-up Siloton is developing a portable optical coherence tomography (OCT) system that uses photonic integrated circuits to miniaturize a tabletop’s worth of expensive and fragile optical components onto a single coin-sized chip. In a first demonstration by a commercial organization, Siloton has now used its photonic chip technology to capture a sub-surface image of a human retina.

OCT is a non-invasive imaging technique employed as the clinical gold standard for diagnosing retinal disease. Current systems, however, are bulky and expensive and only available at hospital clinics or opticians. Siloton aims to apply its photonic chip – the optical equivalent of an electronic chip – to create a rugged, portable OCT system that patients could use to monitor disease progression in their own homes.

Compact device Siloton’s photonic chip Akepa replaces approximately 70% of the optics found in traditional OCT systems. (Courtesy: Siloton)

The image obtained using Siloton’s first-generation OCT chip, called Akepa, reveals the fine layered structure of the retina in a healthy human eye. It clearly shows layers such as the outer photoreceptor segment and the retinal pigment epithelium, which are key clinical features for diagnosing and monitoring eye diseases.

“The system imaged the part of the retina that’s responsible for all of your central vision, most of your colour vision and the fine detail that you see,” explains Alasdair Price, Siloton’s CEO. “This is the part of the eye that you really care about looking at to detect disease biomarkers for conditions like age-related macular degeneration [AMD] or various diabetic eye conditions.”

Faster and clearer

Since Siloton first demonstrated that Akepa could acquire OCT images of a retinal phantom, the company has deployed some major software enhancements. For example, while the system previously took 5 min to image the phantom – an impractical length of time for human imaging – the imaging speed is now less than a second. The team is also exploring ways to improve image quality using artificial intelligence techniques.

Price explains that the latest image was recorded using the photonic chip in a benchtop set-up, noting that the company is about halfway through the process of miniaturizing all of the optics and electronics into a handheld binocular device.

“The electronics is all off-the-shelf, so we’re not going to focus too heavily on miniaturizing that until right at the end,” he says. “The innovative part is in miniaturizing the optics. We are very close to having it in that binocular headset now, the aim being that by early next year we will have that fully miniaturized.”

As such, the company plans to start deploying some research-only systems commercially next year. These will be handheld binocular-style devices that users hold up to their faces, complete with a base station for charging and communications. After speaking with over 100 patients in focus groups, Siloton confirmed that patients prefer this binocular design over the traditional chin rest employed in full-size OCT systems.

“We were worried about that because we thought we may not be able to get the level of stability required,” says Price. “But we did further tests on the stability of the binocular system compared with the chin rest and actually found that the binoculars showed greater stability. Right now we’re still using a chin rest, so we’re hopeful that the binocular system will further improve our ability to record high-quality images.”

Siloton founding team Left to right: Alasdair Price, Euan Allen and Ben Hunt. (Courtesy: Siloton)

Expanding applications

The principal aim of Siloton’s portable OCT system is to make the diagnosis and monitoring of eye diseases – such as diabetic macular oedema, retinal vein occlusion and AMD, the leading cause of sight loss in the developed world – more affordable and accessible.

Neovascular or “wet” AMD, for example, can be treated with regular eye injections, but this requires regular OCT scans at hospital appointments, which may not be available frequently enough for effective monitoring. With an OCT system in their own homes, patients can scan themselves every few days, enabling timely treatments as soon as disease progression is detected – as well as saving hospitals substantial amounts of money.

Ongoing improvements in the “quality versus cost” of the Akepa chip have also enabled Siloton to expand its target applications beyond ophthalmology. The ability to image structures such as the optic nerve, for example, enables the use of OCT to screen for optic neuritis, a common early symptom in patients with multiple sclerosis.

The company is also working with the European Space Agency (ESA) on a project investigating spaceflight-associated neuro-ocular syndrome (SANS), a condition suffered by about 70% of astronauts and which requires regular monitoring.

“At the moment, there is an OCT system on the International Space Station. But for longer-distance space missions, things like Gateway, there won’t be room for such a large system,” Price tells Physics World. “So we’re working with ESA to look at getting our chip technology onto future space missions.”


‘Buddy star’ could explain Betelgeuse’s varying brightness

6 November 2024, 16:00

An unseen low-mass companion star may be responsible for the recently observed “Great Dimming” of the red supergiant star Betelgeuse. According to this hypothesis, which was put forward by researchers in the US and Hungary, the star’s apparent brightness varies when an orbiting companion – dubbed α Ori B or, less formally, “Betelbuddy” – displaces light-blocking dust, thereby changing how much of Betelgeuse’s light reaches the Earth.

Located about 548 light-years away, in the constellation Orion, Betelgeuse is the 10th brightest star in the night sky. Usually, its brightness varies over a period of 416 days, but in 2019–2020, its output dropped to the lowest level ever recorded.

At the time, some astrophysicists speculated that this “Great Dimming” might mean that the star was reaching the end of its life and would soon explode as a supernova. Over the next three years, however, Betelgeuse’s brightness recovered, and alternative hypotheses gained favour. One such suggestion is that a cooler spot formed on the star and began ejecting material and dust, causing its light to dim as seen from Earth.

Pulsation periods

The latest hypothesis was inspired, in part, by the fact that Betelgeuse experiences another cycle in addition to its fundamental 416-day pulsation period. This second cycle, known as the long secondary period (LSP), lasts 2170 days, and the Great Dimming occurred after its minimum brightness coincided with a minimum in the 416-day cycle.
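
A toy superposition of the two cycles shows why this phase alignment matters. The sketch below is deliberately crude – equal amplitudes and pure sinusoids are assumptions made for illustration, not a fit to Betelgeuse’s light curve:

    import numpy as np

    P1, P2 = 416.0, 2170.0            # pulsation period and LSP, in days
    t = np.arange(0.0, 20000.0, 1.0)  # about 55 years, sampled daily

    # Two equal-amplitude sinusoids, phased so that both reach minimum at t = 0
    brightness = (np.cos(2 * np.pi * t / P1 + np.pi)
                  + np.cos(2 * np.pi * t / P2 + np.pi))

    print(f"deepest combined dip: {brightness.min():.2f} at day {t[np.argmin(brightness)]:.0f}")
    print("either cycle alone never dips below -1.00")

Dips approaching the combined depth occur only on the rare occasions when the two minima line up – the kind of coincidence that preceded the Great Dimming.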

While astrophysicists are not entirely sure what causes LSPs, one leading theory suggests that they stem from a companion star. As this companion orbits its parent star, it displaces the cosmic dust the star produces and expels, which in turn changes the amount of starlight that reaches us.

Lots of observational data

To understand whether this might be happening with Betelgeuse, a team led by Jared Goldberg at the Flatiron Institute’s Center for Computational Astrophysics; Meridith Joyce at the University of Wyoming; and László Molnár of the Konkoly Observatory, HUN-REN CSFK, Budapest; analysed a wealth of observational data from the American Association of Variable Star Observers. “This association has been collecting data from both professional and amateur astronomers, so we had access to decades worth of data,” explains Molnár. “We also looked at data from the space-based SMEI instrument and spectroscopic observations collected by the STELLA robotic telescope.”

The researchers combined these direct-observation data with advanced computer models that simulate Betelgeuse’s activity. When they studied how the star’s brightness and its velocity varied relative to each other, they realized that the brightest phase must correspond to a companion being in front of it. “This is the opposite of what others have proposed,” Molnár notes. “For example, one popular hypothesis postulates that companions are enveloped in dense dust clouds, obscuring the giant star when they pass in front of them. But in this case, the companion must remove dust from its vicinity.”

As for how the companion does this, Molnár says they are not sure whether it evaporates the dust away or shepherds it to the opposite side of Betelgeuse with its gravitational pull. Both are possible, and Goldberg adds that other processes may also contribute. “Our new hypothesis complements the previous one involving the formation of a cooler spot on the star that ejects material and dust,” he says. “The dust ejection could occur because the companion star was out of the way, behind Betelgeuse rather than along the line of sight.”

The least absurd of all hypotheses?

The prospect of a connection between an LSP and the activity of a companion star is a longstanding one, Goldberg tells Physics World. “We know Betelgeuse has an LSP, and if an LSP exists, that means a ‘buddy’ for Betelgeuse,” he says.

The researchers weren’t always so confident, though. Indeed, they initially thought the idea of a companion star for Betelgeuse was absurd, so the hardest part of their work was to prove to themselves that this was, in fact, the least absurd of all hypotheses for what was causing the LSP.

“We’ve been interested in Betelgeuse for a while now, and in a previous paper, led by Meridith, we already provided new size, distance and mass estimates for the star based on our models,” says Molnár. “Our new data started to point in one direction, but first we had to convince ourselves that we were right and that our claims are novel.”

The findings could have more far-reaching implications, he adds. While around one third of all red giants and supergiants have LSPs, the relationships between LSPs and brightness vary. “There are therefore a host of targets out there and potentially a need for more detailed models on how companions and dust clouds may interact,” Molnár says.

The researchers are now applying for observing time on space telescopes in hopes of finding direct evidence that the companion exists. One challenge they face is that because Betelgeuse is so bright – indeed, too bright for many sensitive instruments – a “Betelbuddy”, as Goldberg has nicknamed it, may be simpler to explain than it is to observe. “We’re throwing everything we can at it to actually find it,” Molnár says. “We have some ideas on how to detect its radiation in a way that can be separated from the absolute deluge of light Betelgeuse is producing, but we have to collect and analyse our data first.”

The study has been accepted for publication in The Astrophysical Journal. A pre-print is available on the arXiv.


Black hole in rare triple system sheds light on natal kicks

6 November 2024, 13:42

For the first time, astronomers have observed a black hole in a triple system with two other stars. The system is called V404 Cygni and was previously thought to be a close-knit binary comprising a black hole and a star. Now, Kevin Burdge and colleagues at the Massachusetts Institute of Technology (MIT) have shown that the pair is orbited by a more distant tertiary star.

The observation supports the idea that some black holes do not experience a “natal kick” in momentum when they form. This is expected if a black hole is created from the sudden implosion of a star, rather than in a supernova explosion.

When black holes and neutron stars are born, they can gain momentum through mechanisms that are not well understood. These natal kicks can accelerate some neutron stars to speeds of hundreds of kilometres per second. For black holes, the kick is expected to be less pronounced — and in some scenarios, astronomers believe that these kicks must be very small.

Information about natal kicks can be gleaned by studying the behaviour of X-ray binaries, which usually pair a main sequence star with a black hole or neutron star companion. As these two objects orbit each other, material from the star is transferred to its companion, releasing vast amounts of gravitational potential energy as X-rays and other electromagnetic radiation.

Wobbling objects

In such binaries, any natal kick the black hole may have received during its formation can be deduced by studying how the black hole and its companion star orbit each other. This can be done using the radial velocity (or wobble) technique, which measures the Doppler shift of light from the orbiting objects as they accelerate towards and then away from an observer on Earth.

In their study, Burdge’s team scrutinized archival observations of V404 Cygni that were made using a number of different optical telescopes. A bright blob of light thought to be the black hole and its close-knit companion star is prominent in these images. But the team noticed something else, a second blob of light that could be a star orbiting the close-knit binary.

“We immediately noticed that there was another star next to the binary system, moving together with it,” Burdge explains. “It was almost like a happy accident, but was a direct product of an optical and an X-ray astronomer working together.”

As Burdge describes, the study came as a result of integrating his own work in optical astronomy with the expertise of MIT’s Erin Kara, who does X-ray astronomy on black holes. Burdge adds, “We were thinking about whether it might be interesting to take high speed movies of black holes. While thinking about this, we went and looked at a picture of V404 Cygni, taken in visible light.”

Hierarchical triple

The observation provided the team with clear evidence that V404 Cygni is part of a “hierarchical triple” – an observational first. “In the system, a black hole is eating a star which orbits it every 6.5 days. But there is another star way out there that takes 70,000 years to complete its orbit around the inner system,” Burdge explains. Indeed, the third star is about 3500 au (3500 times the distance from the Earth to the Sun) from the black hole.
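
Kepler’s third law offers a quick consistency check on those two numbers: in units of astronomical units, years and solar masses, the total enclosed mass is roughly a³/T², so

    $$ M_{\mathrm{tot}} \approx \frac{a^3}{T^2} = \frac{3500^3}{70\,000^2} \, M_\odot \approx 9 \, M_\odot $$

which is the right order of magnitude for a stellar-mass black hole plus two roughly solar-mass stars.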

By studying these orbits, the team gleaned important information about the black hole’s formation. If it had undergone a natal kick when its progenitor star collapsed, the tertiary system would have become more chaotic – causing the more distant star to unbind from the inner binary pair.

The team also determined that the outer star is in the later stages of its main-sequence evolution. This suggests that V404 Cygni’s black hole must have formed between 3 and 5 billion years ago. When the black hole formed, the researchers believe it would have removed at least half of the mass from its binary companion. But since the black hole still has a relatively low mass, this means that its progenitor star must have lost very little mass as it collapsed.

“The black hole must have formed through a gentle process, without getting a big kick like one might expect from a supernova,” Burdge explains. “One possibility is that the black hole formed from the implosion of a star.”

If this were the case, the star would have collapsed into a black hole directly, without large amounts of matter being ejected in a supernova explosion. Whether or not this is correct, the team’s observations suggest that at least some black holes can form with no natal kick – providing deeper insights into the later stages of stellar evolution.

The research is described in Nature.


UK particle physicist Mark Thomson selected as next CERN boss

6 November 2024, 13:29

The UK particle physicist Mark Thomson has been selected as the 17th director-general of the CERN particle-physics laboratory. Thomson, 58, was chosen today at a meeting of the CERN Council. He will take up the position on 1 January 2026 for a five-year period succeeding the current CERN boss Fabiola Gianotti, who will finish her second term next year.

Three candidates were shortlisted for the job after being put forward by a search committee. Physics World understands that the Dutch theoretical physicist and former Dutch science minister Robbert Dijkgraaf was also considered for the position. The third is reported to have been the Greek particle physicist Paris Sphicas.

With a PhD in physics from the University of Oxford, Thomson is currently executive chair of the Science and Technology Facilities Council (STFC), one of the main funding agencies in the UK. He spent a significant part of his career at CERN, working on precise measurements of the W and Z bosons in the 1990s as part of the OPAL experiment at CERN’s Large Electron-Positron Collider.

In 2000 he moved back to the UK to take up a position in experimental particle physics at the University of Cambridge. He was then a member of the ATLAS collaboration at CERN’s Large Hadron Collider (LHC) and between 2015 and 2018 served as co-spokesperson for the US Deep Underground Neutrino Experiment. Since 2018 he has served as the UK delegate to CERN’s Council.

Thomson was selected for his managerial credentials in science and connection to CERN. “Thomson is a talented physicist with great managerial experience,” notes Gianotti. “I have had the opportunity to collaborate with him in several contexts over the past years and I am confident he will make an excellent director-general. I am pleased to hand over this important role to him at the end of 2025.”

“Thomson’s election is great news – he has the scientific credentials, experience, and vision to ensure that CERN’s future is just as bright as its past, and it remains at the absolute cutting edge of research,” notes Peter Kyle, UK secretary of state for science, innovation and technology. “Work that is happening at CERN right now will be critical to scientific endeavour for decades to come, and for how we tackle some of the biggest challenges facing humanity.”

‘The right person’

Dirk Ryckbosch, a particle physicist at Ghent University and a delegate for Belgium in the CERN Council, told Physics World that Thomson is a “perfect match” for CERN. “As a former employee and a current member of the council, Thomson knows the ins and outs of CERN and he has the experience needed to lead a large research organization,” adds Ryckbosch.

The last UK director-general of CERN was Chris Llewellyn Smith who held the position between 1994 and 1998. Yet Ryckbosch acknowledges that within CERN, Brexit has never clouded the relationship between the UK and EU member states. “The UK has always remained a strong and loyal partner,” he says.

Thomson will have two big tasks when he becomes CERN boss in 2026: ensuring the start of operations with the upgraded LHC, known as the High-Luminosity LHC (HL-LHC) by 2030, and securing plans for the LHC’s successor.

CERN has currently put its weight behind the Future Circular Collider (FCC), which will cost about £12bn and be four times as large as the LHC with a 91 km circumference. The FCC would first be built as an electron-positron collider with the aim of studying the Higgs boson in unprecedented detail. It could later be upgraded as a hadron collider, known as the FCC-hh.

The construction of the FCC will, however, require additional funding from CERN member states. Earlier this year Germany, which is a main contributor to CERN’s annual budget, publicly objected to the FCC’s high cost. Garnering support for the FCC, if CERN selects it as its next project, will be a delicate balancing act for Thomson. “With his international network and his diplomatic skills, Mark is the right person for this,” concludes Ryckbosch.

That view is backed by particle theorist John Ellis from King’s College London, who told Physics World that Thomson has the “ideal profile for guiding CERN during the selection and initiation of its next major accelerator project”. Ellis adds that Thomson “brings to the role a strong record of research in collider physics as well as studies of electron-positron colliders and leadership in the DUNE neutrino experiment and also extensive managerial experience”.


Timber! Japan launches world’s first wooden satellite into space

5 November 2024, 18:31

Researchers in Japan have launched the world’s first wooden satellite to test the feasibility of using timber in space. Dubbed LignoSat, the small “cubesat” was developed by Kyoto University and the logging firm Sumitomo Forestry. It was launched on 4 November to the International Space Station (ISS) from the Kennedy Space Center in Florida on a SpaceX Falcon 9 rocket.

Given the lack of water and oxygen in space, wood is potentially more durable in orbit than it is on Earth where it can rot or burn. This makes it an attractive and sustainable alternative to metals such as aluminium that can create aluminium oxide particles during re-entry into the Earth’s atmosphere.

Work began on LignoSat in 2020. In 2022 scientists at Kyoto sent samples of cherry, birch and magnolia wood to the ISS where the materials were exposed to the harsh environment of space for 240 days to test their durability.

While each specimen performed well with no clear deformation, the researchers settled on building LignoSat from magnolia – or Hoonoki in Japanese. This type of wood has traditionally been used for sword sheaths and is known for its strength and stability.

LignoSat is made without screws or glue, and is equipped with external solar panels and encased in an aluminium frame. Next month the satellite is expected to be deployed in orbit around the Earth for about six months to measure how the wood withstands the environment and how well it protects the chips inside the satellite from cosmic radiation.

Data will be collected on the wood’s expansion and contraction, the internal temperature and the performance of the electronic components inside.

Researchers are hopeful that if LignoSat is successful it could pave the way for satellites to be made from wood. This would be more environmentally friendly given that each satellite would simply burn up when it re-enters the atmosphere at the end of its lifetime.

“With timber, a material we can produce by ourselves, we will be able to build houses, live and work in space forever,” astronaut Takao Doi who studies human space activities at Kyoto University told Reuters.


Physicists propose new solution to the neutron lifetime puzzle

5 November 2024, 15:00

Neutrons inside the atomic nucleus are incredibly stable, but free neutrons decay within 15 minutes – give or take a few seconds. The reason we don’t know this figure more precisely is that the two main techniques used to measure it produce conflicting results. This so-called neutron lifetime problem has perplexed scientists for decades, but now physicists at TU Wien in Austria have come up with a possible explanation. The difference in lifetimes, they say, could stem from the neutron being in not-yet-discovered excited states that have different lifetimes as well as different energies.

According to the Standard Model of particle physics, free neutrons undergo a process called beta decay that transforms a neutron into a proton, an electron and an antineutrino. To measure the neutrons’ average lifetime, physicists employ two techniques. The first, known as the bottle technique, involves housing neutrons within a container and then counting how many of them remain after a certain amount of time. The second approach, known as the beam technique, is to fire a neutron beam with a known intensity through an electromagnetic trap and measure how many protons exit the trap within a fixed interval.

Researchers have been performing these experiments for nearly 30 years but they always encounter the same problem: the bottle technique yields an average neutron survival time of 880 s, while the beam method produces a lifetime of 888 s. Importantly, this eight-second difference is larger than the uncertainties of the measurements, meaning that known sources of error cannot explain it.

A mix of different neutron states?

A team led by Benjamin Koch and Felix Hummel of TU Wien’s Institute of Theoretical Physics is now suggesting that the discrepancy could be caused by nuclear decay producing free neutrons in a mix of different states. Some neutrons might be in the ground state, for example, while others could be in a higher-energy excited state. This would alter the neutrons’ lifetimes, they say, because elements in the so-called transition matrix that describes how neutrons decay into protons would be different for neutrons in excited states and neutrons in ground states.

As for how this would translate into different beam and bottle lifetime measurements, the team say that neutron beams would naturally contain several different neutron states. Neutrons in a bottle, in contrast, would almost all be in the ground state – simply because they would have had time to cool down before being measured in the container.
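To see how a small admixture could shift the beam result, consider a toy calculation. This is my own illustration of the mechanism, not the TU Wien analysis: the fraction and lifetime below are invented numbers chosen only to reproduce the eight-second gap.

```python
# Toy model (not the TU Wien calculation): a beam containing a small
# fraction f of neutrons in a longer-lived excited state would yield
# an apparent lifetime longer than the ground-state (bottle) value.
tau_bottle = 880.0   # ground-state lifetime from bottle experiments (s)
tau_beam   = 888.0   # apparent lifetime from beam experiments (s)

f = 0.10  # assumed excited-state fraction in the beam (illustrative)

# Beam experiments infer a lifetime from the proton production rate,
# which for a mixed population is a weighted sum of decay rates:
#   1/tau_beam = (1 - f)/tau_ground + f/tau_excited
# Solve for the excited-state lifetime that reproduces the 8 s gap:
rate_excited = 1.0/tau_beam - (1.0 - f)/tau_bottle
tau_excited = f / rate_excited

print(f"Excited-state lifetime needed: {tau_excited:.0f} s")
# -> roughly 970 s for a 10% admixture
```

In this simple picture, a modest population of slightly longer-lived neutrons in the beam is enough to stretch the inferred average without affecting the bottle result.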

Towards experimental tests

Could these different states be detected? The researchers say it’s possible, but they caution that experiments will be needed to prove it. They also note that theirs is not the first hypothesis put forward to explain the neutron lifetime discrepancy. Perhaps the simplest explanation is that the gap stems from unknown systematic errors in either the beam experiment, the bottle experiment, or both. Other, more theoretical approaches have also been proposed, but Koch says they do not align with existing experimental data.

“Personally, I find hypotheses that require fewer and smaller new assumptions – and that are experimentally testable – more appealing,” Koch says. As an example, he cites a 2020 study showing that a phenomenon called the inverse quantum Zeno effect could speed up the decay of bottle-confined neutrons, calling it “an interesting idea”. Another possible explanation of the puzzle, which he says he finds “very intriguing”, has just been published: it describes the admixture of novel bound electron-proton states, known as “Second Flavor Hydrogen Atoms”, in the final state of a weak decay.

As someone with a background in quantum gravity and theoretical physics beyond the Standard Model, Koch is no stranger to predictions that are hard (and sometimes impossible, at least in the near term) to test. “Contributing to the understanding of a longstanding problem in physics with a hypothesis that could be experimentally tested soon is therefore particularly exciting for me,” he tells Physics World. “If our hypothesis of excited neutron states is confirmed by future experiments, it would shed a completely new light on the structure of neutral nuclear matter.”

The researchers now plan to collaborate with colleagues from the Institute for Atomic and Subatomic Physics at TU Wien to re-evaluate existing experimental data and explore various theoretical models. “We’re also hopeful about designing experiments specifically aimed at testing our hypothesis,” Koch reveals.

The present study is detailed in Physical Review D.

The post Physicists propose new solution to the neutron lifetime puzzle appeared first on Physics World.


Women and physics: navigating history, careers, and the path forward

Par : No Author
5 novembre 2024 à 12:31

Join us for an insightful webinar based on Women and Physics (Second Edition), where we will explore the historical journey, challenges, and achievements of women in the field of physics, with a focus on English-speaking countries. The session will dive into various topics such as the historical role of women in physics, the current statistics on female representation in education and careers, navigating family life and career, and the critical role men play in fostering a supportive environment. The webinar aims to provide a roadmap for women looking to thrive in physics.

Laura McCullough

Laura McCullough is a professor of physics at the University of Wisconsin-Stout. Her PhD from the University of Minnesota was in science education with a focus on physics education research. She is the recipient of multiple awards, including her university system’s highest teaching award, her university’s outstanding research award, and her professional society’s service award. She is a fellow of the American Association of Physics Teachers. Her primary research area is gender and science and surrounding issues. She has also done significant work on women in leadership, and on students with disabilities.

About this ebook

Women and Physics is the second edition of a volume that brings together research on a wide variety of topics relating to gender and physics, cataloguing the extant literature to provide a readable and concise grounding for the reader. While there are many biographies and collections of essays in the area of women and physics, no other book is as research focused. Starting with the current numbers of women in physics in English-speaking countries, it explores the different issues relating to gender and physics at different educational levels and career stages. From the effects of family and schooling to the barriers faced in the workplace and at home, this volume is an exhaustive overview of the many studies focused specifically on women and physics. This edition contains updated references and new chapters covering the underlying structures of the research and more detailed breakdowns of career issues.

The post Women and physics: navigating history, careers, and the path forward appeared first on Physics World.


Why AI is a force for good in science communication

5 novembre 2024 à 11:00

In August 2024 the influential Australian popular-science magazine Cosmos found itself not just reporting the news – it had become the news. Owned by CSIRO Publishing – part of Australia’s national science agency – Cosmos had posted a series of “explainer” articles on its website that had been written by generative artificial intelligence (AI) as part of an experiment funded by Australia’s Walkley Foundation. Covering topics such as black holes and carbon sinks, the text had been fact-checked against the magazine’s archive of more than 15,000 past articles to guard against misinformation, but at least one of the new articles contained inaccuracies.

Critics, such as the science writer Jackson Ryan, were quick to condemn the magazine’s experiment as undermining and devaluing high-quality science journalism. As Ryan wrote on his Substack blog, AI not only makes things up and trains itself on copyrighted material, but “for the most part, provides corpse-cold, boring-ass prose”. Contributors and former staff also complained to Australia’s ABC News that they’d been unaware of the experiment, which took place just a few months after the magazine had made five of its eight staff redundant.

The Cosmos incident is a reminder that we’re in the early days of using generative AI in science journalism. It’s all too easy for AI to get things wrong and contribute to the deluge of online misinformation, potentially damaging modern society in which science and technology shape so many aspects of our lives. Accurate, high-quality science communication is vital, especially if we are to pique the public’s interest in physics and encourage more people into the subject.

Kanta Dihal, a lecturer at Imperial College London who researches the public’s understanding of AI, warns that the impacts of recent advances in generative AI on science communication are “in many ways more concerning than exciting”. Sure, AI can level the playing field by, for example, enabling students to learn video editing skills without expensive tools and helping people with disabilities to access course material in accessible formats. “[But there is also] the immediate large-scale misuse and misinformation,” Dihal says.

We do need to take these concerns seriously, but AI could benefit science communication in ways you might not realize. Simply put, AI is here to stay – in fact, the science behind it led to the physicist John Hopfield and computer scientist Geoffrey Hinton winning the 2024 Nobel Prize for Physics. So how can we marshal AI to best effect not just to do science but to tell the world about science?

Dangerous game

Generative AI is a step up from “machine learning”, where a computer predicts how a system will behave based on data it’s analysed. Machine learning is used in high-energy physics, for example, to model particle interactions and detector performance. It does this by learning to recognize patterns in existing data, before making predictions and then validating that those predictions match the original data. Machine learning saves researchers from having to manually sift through terabytes of data from experiments such as those at CERN’s Large Hadron Collider.

Generative AI, on the other hand, doesn’t just recognize and predict patterns – it can create new ones too. When it comes to the written word, a generative AI could, for example, invent a story from a few lines of input. It is exactly this language-generating capability that caused such a furore at Cosmos and led some journalists to worry that AI might one day make their jobs obsolete. But how does a generative AI produce replies that feel like a real conversation?

Claude Shannon holding a wooden mouse
Child’s play Claude Shannon was an electrical engineer and mathematician who is considered the “father of information theory”. He is pictured here in 1952 with an early example of machine learning – a wheeled toy mouse called Theseus that was designed to navigate its way through a maze. (Courtesy: Yale Joel/The LIFE Picture Collection/Shutterstock)

Perhaps the best known generative AI is ChatGPT (where GPT stands for generative pre-trained transformer), which is an example of a Large Language Model (LLM). Language modelling dates back to the 1950s, when the US mathematician Claude Shannon applied information theory – the branch of maths that deals with quantifying, storing and transmitting information – to human language. Shannon measured how well language models could predict the next word in a sentence by assigning probabilities to each word based on patterns in the data the model is trained on.
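To get a feel for what Shannon-style statistical language modelling means in practice, here is a deliberately tiny “bigram” model in Python. This is my own toy sketch of the general idea – counting which word follows which, then sampling – and is orders of magnitude simpler than anything inside ChatGPT.

```python
import random
from collections import Counter, defaultdict

# A toy Shannon-style bigram model: estimate P(next word | current word)
# by counting word pairs in a (very small) training text.
text = "the cat sat on the mat and the cat slept on the sofa".split()

counts = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    counts[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = counts[word]
    if not options:              # dead end: word only appeared at the end
        return random.choice(text)
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" by repeatedly predicting the next word
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Scale the training text up to a sizeable chunk of the internet, replace word pairs with a neural network that weighs up whole passages of context, and you have the essence of a modern LLM.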

Such methods of statistical language modelling are now fundamental to a range of natural language processing tasks, from building spell-checking software to translating between languages and even recognizing speech. Recent advances in these models have significantly extended the capabilities of generative AI tools, with the “chatbot” functionality of ChatGPT making it especially easy to use.

ChatGPT racked up a million users within five days of its launch in November 2022 and since then other companies have unveiled similar tools, notably Google’s Gemini and Perplexity. With more than 600 million users per month as of September 2024, ChatGPT is trained on a range of sources, including books, Wikipedia articles and chat logs (although the precise list is not explicitly described anywhere). The AI spots patterns in the training texts and builds sentences by predicting the most likely word that comes next.

ChatGPT operates a bit like a slot machine, with probabilities assigned to each possible next word in the sentence. In fact, the term AI is a little misleading, being more “statistically informed guessing” than real intelligence, which explains why ChatGPT has a tendency to make basic errors or “hallucinate”. Cade Metz, a technology reporter from the New York Times, reckons that chatbots invent information as much as 27% of the time.

One notable hallucination occurred in February 2023 when Bard – Google’s forerunner to Gemini – declared in its first public demonstration that the James Webb Space Telescope (JWST) had taken “the very first picture of a planet outside our solar system”. As Grant Tremblay from the US Center for Astrophysics pointed out, this feat had been accomplished in 2004, some 17 years before the JWST was launched, by the European Southern Observatory’s Very Large Telescope in Chile.

AI-generated image of a rat with significant errors
Badly wrong This AI-generated image of a rat originally appeared in the journal Frontiers in Cell and Developmental Biology (11 1339390). The use of AI in the image, which bears little resemblance to the anatomy of a rat, was not originally disclosed and the article was subsequently retracted. (CC BY Xinyu Guo, Liang Dong and Dingjun Hao)

Another embarrassing incident was the anatomically absurd picture of a rat created by the AI image generator Midjourney, which appeared in a journal paper that was subsequently retracted. Some hallucinations are more serious. Amateur mushroom pickers, for example, have been warned to steer clear of online foraging guides, likely written by AI, that contain information running counter to safe foraging practices. Many edible wild mushrooms look deceptively similar to their toxic counterparts, making careful identification critical.

By using AI to write online content, we’re in danger of triggering a vicious circle of increasingly misleading statements, polluting the Internet with unverified output. What’s more, AI can perpetuate existing biases in society. Google, for example, was forced to publish an embarrassing apology, saying it would “pause” the ability to generate images with Gemini after the service was used to create images of racially diverse Nazi soldiers.

More seriously, women and some minority groups are under-represented in healthcare data, biasing the training set and potentially skewing the recommendations of predictive AI algorithms. One study led by Laleh Seyyed-Kalantari from the University of Toronto (Nature Medicine 27 2176) found that computer-aided diagnoses of chest X-rays are less accurate for Black patients than for white patients.

Generative AI could even increase inequalities if it becomes too commercial. “Right now there’s a lot of free generative AI available, but I can also see that getting more unequal in the very near future,” Dihal warns. People who can afford to pay for ChatGPT subscriptions, for example, have access to versions of the AI based on more up-to-date training data. They therefore get better responses than users restricted to the “free” version.

Clear communication

But generative AI tools can do much more than churn out uninspired articles and create problems. One beauty of ChatGPT is that users interact with it conversationally, just like you’d talk to a human communicator at a science museum or science festival. You could start by typing something simple (such as “What is quantum entanglement?”) before delving into the details (e.g. “What kind of physical systems are used to create it?”). You’ll get answers that meet your needs better than any standard textbook.

Teenage girl using laptop at home
Opening up AI could help students, particularly those who face barriers to education, to explore scientific topics that interest them. (Courtesy: iStock/Imgorthand)

Generative AI could also boost access to physics by providing an interactive way to engage with groups – such as girls, people of colour or students from low-income backgrounds – who might face barriers to accessing educational resources in more traditional formats. That’s the idea behind online tuition platforms such as Khan Academy, which has integrated a customized version of ChatGPT into its tuition services.

Instead of presenting fully formed answers to questions, its generative AI is programmed to prompt users to work out the solution themselves. If a student types, say, “I want to understand gravity” into Khan’s generative AI-powered tutoring program, the AI will first ask what the student already knows about the subject. The “conversation” between the student and the chatbot will then evolve in the light of the student’s response.

AI can also remove barriers that some people face in communicating science, allowing a wider range of voices to be heard and thereby boosting the public’s trust in science. As someone with cerebral palsy, AI has transformed how I work by enabling me to turn my speech into text in an instant (see box below).

It’s also helped Duncan Yellowlees, a dyslexic research developer who trains researchers to communicate. “I find writing long text really annoying, so I speak it into OtterAI, which converts the speech into text,” he says. The text is sent to ChatGPT, which converts it into a blog. “So it’s my thoughts, but I haven’t had to write them down.”

Then there’s Matthew Tosh, a physicist-turned-science presenter specializing in pyrotechnics. He has a progressive disease, which meant he faced an increasing struggle to write in a concise way. ChatGPT, however, lets him create draft social-media posts, which he then rewrites in his own words. As a result, he can maintain that all-important social-media presence while managing his disability at the same time.

Despite the occasional mistake made by generative AI bots, misinformation is nothing new. “That’s part of human behaviour, unfortunately,” Tosh admits. In fact, he thinks errors can – perversely – be a positive. Students who wrongly think a kilo of cannonballs will fall faster than a kilo of feathers create the perfect chance for teachers to discuss Newtonian mechanics. “In some respects,” says Tosh, “a little bit of misinformation can start the conversation.”

AI as a voice-to-text tool

Claire Malone at her desk
Reaping the benefits Claire Malone uses AI-powered speech-to-text software, which helps her work as a science communicator. (Courtesy: Claire Malone)

As a science journalist – and previously as a researcher hunting for new particles in data from the ATLAS experiment at CERN – I’ve longed to use speech-to-text programs to complete assignments. That’s because I have a disability – cerebral palsy – that makes typing impractical. For a long time this meant I had to dictate my work to a team of academic assistants for many hours a week. But in 2023 I started using Voiceitt, an AI-powered app optimized for speech recognition for people with non-standard speech like mine.

You train the app by first reading out a couple of hundred short training phrases. It then uses AI models built from thousands of hours of speech by other non-standard speakers in its database to optimize recognition. As Voiceitt is used, it continues refining the AI model, improving speech recognition over time. The app also has a generative AI model to correct any grammatical errors created during transcription. Each week, I find myself correcting the app’s transcriptions less and less, which is a bonus when facing journalistic deadlines, such as the one for this article.

The perfect AI assistant?

One of the first news organizations to experiment with AI tools was Associated Press (AP), which in 2014 began automating routine financial stories about corporate earnings. AP now also uses AI to create transcripts of videos, write summaries of sports events, and spot trends in large stock-market data sets. Other news outlets use AI tools to speed up “back-office” tasks such as transcribing interviews, analysing information or converting data files. Tools such as Midjourney can even help journalists to brief professional illustrators to create images.

However, there is a fine line between using AI to speed up your workflow and letting it make content without human input. Many news outlets and writers’ associations have issued statements guaranteeing not to use generative AI as a replacement for human writers and editors. Physics World, for example, has pledged not to publish fresh content generated purely by AI, though the magazine does use AI to assist with transcribing and summarizing interviews.

So how can generative AI be incorporated into the effective and trustworthy communication of science? First, it’s vital to ask the right question – in fact, composing a prompt can take several attempts to get the desired output. When summarizing a document, for example, a good prompt should include the maximum word length, an indication of whether the summary should be in paragraphs or bullet points, and information about the target audience and required style or tone.
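To give a flavour (an invented example, not one drawn from any particular style guide): a prompt along the lines of “Summarize the attached paper in no more than 150 words, in two paragraphs, for a physics-literate general audience, in a neutral journalistic tone” packs the length, structure, audience and tone into a single request.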

Second, information obtained from AI needs to be fact-checked. It can easily hallucinate, which makes a chatbot rather like an unreliable (but occasionally brilliant) colleague who can get the wrong end of the stick. “Don’t assume that whatever the tool is, that it is correct,” says Phil Robinson, editor of Chemistry World. “Use it like you’d use a peer or colleague who says ‘Have you tried this?’ or ‘Have you thought of that?’”

Finally, science communicators must be transparent in explaining how they used AI. Generative AI is here to stay – and science communicators and journalists are still working out how best to use it to communicate science. But if we are to maintain the quality of science journalism – so vital for the public’s trust in science – we must continuously evaluate and manage how AI is incorporated into the scientific information ecosystem.

Generative AI can help you say what you want to say. But as Dihal concludes: “It’s no substitute for having something to say.”

The post Why AI is a force for good in science communication appeared first on Physics World.


Space-based solar power: ‘We have nothing to lose and everything to gain’

Par : No Author
4 novembre 2024 à 18:00

The most important and pressing issue of our times is the transition to clean energy while meeting rising global demand. Cheap, abundant and reliable energy underpins the quality of life for all – and one potentially exciting way to do this is space-based solar power (SBSP). It would involve capturing sunlight in space and beaming it as microwaves down to Earth, where it would be converted into electricity to power the grid.

For proponents of SBSP such as myself, it’s a hugely promising technology. Others, though, are more sceptical. Earlier this year, for example, NASA published a report from its Office of Technology, Policy and Strategy that questioned the cost and practicality of SBSP. Henri Barde, a retired engineer who used to work for the European Space Agency (ESA) in Noordwijk, the Netherlands, has also examined the technical challenges in a report for the IEEE.

Some of these sceptical positions on SBSP were addressed in a recent Physics World article by James McKenzie. Conventional solar power is cheap, he argued, so why bother putting large solar power satellites in space? After all, the biggest barriers to building more solar plants here on Earth aren’t technical, but mostly come in the form of belligerent planning officials and local residents who don’t want their views ruined.

However, in my view we need to take a whole-energy-system perspective to see why innovation is essential for the energy transition. Wind, solar and batteries are “low-density” renewables, requiring many tonnes of minerals to be mined and refined for each megawatt-hour of energy. How can this be sustainable and give us energy security, especially when so much of our supply of these minerals depends on production in China?

Low-density renewables also require a Herculean expansion in electricity grid transmission pylons and cables to connect them to users. Other drawbacks of wind and solar are that they depend on the weather and require suitable storage – which currently does not exist at the capacity or cost needed. These forms of energy also need duplicated back-up, which is expensive, and other sources of baseload power for times when it’s cloudy or there’s no wind.

Look to the skies

With no night or weather in space, however, a solar panel in space generates 13 times as much energy as the same panel on Earth. SBSP, if built, would generate power continuously, transmitted as microwaves through the atmosphere with almost no loss. It could therefore deliver baseload power 24 hours a day, irrespective of local weather conditions on Earth.
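One plausible way to arrive at a factor of about 13 is to compare average delivered power per square metre. The numbers below are my own rough arithmetic with illustrative values – in particular the UK-like capacity factor is an assumption, not a figure from any specific SBSP study.

```python
# Rough arithmetic behind a "factor of about 13" (my own illustration,
# not taken from any specific SBSP study).
solar_constant_space = 1361    # W/m^2 above the atmosphere, ~24 h/day
peak_irradiance_ground = 1000  # W/m^2, clear-sky noon at the surface
capacity_factor = 0.11         # assumed UK-like solar capacity factor

avg_power_space = solar_constant_space                # continuous sunlight
avg_power_ground = peak_irradiance_ground * capacity_factor

print(f"Space/ground ratio ~ {avg_power_space / avg_power_ground:.0f}")
# -> about 12, the same ballpark as the factor of 13 quoted above
```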

Another advantage of SBSP is that it could easily produce more or less power as needed, effectively smoothing out the unpredictable and varying output from wind and solar. We currently do this using gas-fired “peaker” plants, which could therefore be put out to pasture. SBSP is also scalable, allowing the energy it produces to be easily exported to other nations without expensive cables, giving it a truly global impact.

A recent whole-energy-system study by researchers at Imperial College London concluded that introducing just 8 GW of SBSP into the UK’s energy mix would deliver system savings of over £4bn every year. In my view, which is shared by others too, the utility of SBSP is likely to be even greater when considering whole continents or global alliances. It can give us affordable and reliable clean energy.

My firm, Space Solar, has designed a solar-power satellite called CASSIOPeiA, which – on the key metric of power per unit mass – is more than twice as powerful as ESA’s design. So far, we have built and successfully demonstrated our power-beaming technology, and following £5m of engineering design work, we have arguably the most technically mature design in the world.

If all goes to plan, we’ll have our first commercial product by 2029. Offering 30 MW of power, it could be launched by a single Starship rocket, and scale to gigawatt systems from there. Sure, there are engineering challenges, but these are mostly based on ensuring that the economics remain competitive. Space Solar is also lucky in having world-class experts working in spacecraft engineering, advanced photovoltaics, power beaming and in-space robotics.

Brighter and better

But why then was NASA’s study so sceptical of SBSP? I think it was because the report made absurdly conservative assumptions about the economics. NASA assumed an operating life of only 10 years, so to run for 30 years the whole solar power satellite would have to be built and launched three times. Yet satellites today generally last for more than 25 years, with most baselined for a minimum 15-year life.

The NASA report also assumed that launch costs on a Starship would remain at around $1500/kg. However, other independent analyses, such as “Space: the dawn of a new age” produced in 2022 by Citi Group, have forecast that it will be an order of magnitude less – just $100/kg – by 2040. I could go on as there are plenty more examples of risk-averse thinking in the NASA report.

Buried in the report, however, the study also looked at more reasonable scenarios than the “baseline” and concluded that “these conditions would make SBSP systems highly competitive with any assessed terrestrial renewable electricity production technology’s 2050 cost projections”. Curiously, these findings did not make it into the executive summary.

The NASA study has been widely criticized, including by former NASA physicist John Mankins, who invented another approach to space solar dubbed SPS Alpha. Speaking on a recent episode of the DownLink podcast, he suspected NASA’s gloomy stance may in part be because it focuses on space tech and space exploration rather than energy for Earth. NASA bosses might fear that if they were directed by Congress to pursue SBSP, money for other priorities might be at risk.

I also question Barde’s sceptical opinion of the technology of SBSP, which he expressed in an article for IEEE Spectrum. Barde appeared not to understand many of the design features that make SBSP technically feasible. He wrote, for example, about “gigawatts of power coursing through microwave systems” of the solar panels on the satellite, which sounds ominous and challenging to achieve.

In reality, the gigawatts of sunlight are reflected onto a large area of photovoltaics containing a billion or so solar cells. Each cell, which includes an antenna and electronic components to convert the sunlight into microwaves, is arranged in a sandwich module just a few millimetres thick handling just 2 W of power. So although the satellite delivers gigawatts overall, the figure is much lower at the component level. What’s more, each cell can be made using tried and tested radio-frequency components.

As for Barde’s fears about thermal management – in other words, how we can stop the satellite from overheating – that has already been analysed in detail. The plan is to use passive radiative cooling without active systems. Barde also warns of temperature swings as the satellites pass through eclipse during the spring and autumn equinox. But this problem is common to all satellites and has, in any case, been analysed as part of our engineering work. In essence, Barde’s claim of “insurmountable technical difficulties” is simply his opinion.

Until the first solar power satellite is commissioned, there will always be sceptics of what we are doing. However, that was also true of reusable rockets and cubesats, both of which are now mainstream technology. SBSP is a “no-regrets” investment that will see huge environmental and economic benefits, with spin-off technologies in wireless power beaming, in-space assembly and photovoltaics.

It is the ultimate blend of space technology and societal benefit, which will inspire the next generation of students into physics and engineering. Currently, the UK has a leadership position in SBSP, and if we have the vision and ambition, there is nothing to lose and everything to gain from backing this. We just need to get on with the job.

The post Space-based solar power: ‘We have nothing to lose and everything to gain’ appeared first on Physics World.


Axion clouds around neutron stars could reveal dark matter origins

Par : No Author
4 novembre 2024 à 10:00

Hypothetical particles called axions could form dense clouds around neutron stars – and if they do, they will give off signals that radio telescopes can detect, say researchers in the Netherlands, the UK and the US. Since axions are a possible candidate for the mysterious substance known as dark matter, this finding could bring us closer to understanding it.

Around 85% of the universe’s mass consists of matter that appears “dark” to us. We can observe its gravitational effect on structures such as galaxies, but we cannot observe it directly. This is because dark matter hardly interacts with anything as far as we know, making it very difficult to detect. So far, searches for dark matter on Earth and in space have found no evidence for any of the various dark matter candidates.

The new research raises hopes that axions could be different. These neutral, bosonic particles are extremely light and hardly interact with ordinary matter. They get their name from a brand of soap, having been first proposed in the 1970s as a way of “cleaning up” a problem in quantum chromodynamics (QCD). More recently, astronomers have suggested they could clean up cosmology, too, by playing a role in the formation of galaxies in the early universe. They would also be a clean start for particle physics, providing evidence for new physics beyond the Standard Model.

Signature signals

But how can we detect axions if they are almost invisible to us? In the latest work, researchers at the University of Amsterdam, Princeton University and the University of Oxford showed that axions, if they exist, will be produced in large quantities at the polar regions of neutron stars. (Axions may also be components of dark matter “halos” believed to be present in the universe, but this study investigated axions produced by neutron stars themselves.) While many axions produced in this way will escape, some will be captured by the stars’ strong gravitational field. Over millions of years, axions will therefore accumulate around neutron stars, forming a cloud dense enough to give off detectable signals.

To reach these conclusions, the researchers examined various axion cloud interaction mechanisms, including self-interaction, absorption by neutron star nuclei and electromagnetic interactions. They concluded that for most axion masses, it is the last mechanism – specifically, a process called resonant axion-photon mixing – that dominates. Notably, this mechanism should produce a stream of low-energy photons in the radiofrequency range.
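For readers who want the underlying physics: the electromagnetic channel relies on the standard axion-photon coupling of axion models – a textbook expression, not a result specific to this paper – which adds an interaction term to the Lagrangian

$$\mathcal{L}_{a\gamma} = -\tfrac{1}{4}\, g_{a\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu} = g_{a\gamma}\, a\, \mathbf{E}\cdot\mathbf{B},$$

where $a$ is the axion field, $F_{\mu\nu}$ is the electromagnetic field tensor (with dual $\tilde{F}^{\mu\nu}$) and $g_{a\gamma}$ is the axion-photon coupling constant. The $\mathbf{E}\cdot\mathbf{B}$ form makes clear why a neutron star is such a promising place to look: its enormous magnetic field supplies the $\mathbf{B}$ that lets axions convert into photons.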

The team also found that these radio emissions would be connected to four distinct phases of axion cloud evolution. These are a growth phase after the neutron star forms; a saturation phase during normal life; a magnetorotational decay phase towards the later stages of the star’s existence; and finally a large burst of radio waves when the neutron star dies.

Turn on the radio

The researchers say that several large radio telescopes around the globe could play a role in detecting these radiofrequency signatures. Examples include the Low-Frequency Array (LOFAR) in the Netherlands; the Murchison Widefield Array in Australia; and the Green Bank Telescope in the US. To optimize the chances of picking up an axion signal, the collaboration recommends specific observation times, bandwidths and signal-to-noise ratios that these radio telescopes should adhere to. By following these guidelines, they say, the LOFAR setup alone could detect up to four events per year.

Dion Noordhuis, a PhD student at Amsterdam and first author of a Physical Review X paper on the research, acknowledges that there could be other observational signals beyond those explored in the paper. These will require further investigation, and he suggests that a full understanding will require complementary efforts from multiple branches of physics, including particle (astro)physics, plasma physics and observational radioastronomy. “This work thereby opens up a new, cross-disciplinary field with lots of opportunities for future research,” he tells Physics World.

Sankarshana Srinivasan, an astrophysicist from the Ludwig Maximilian University in Munich, Germany, who was not involved in the research, agrees that the QCD axion is a well-motivated candidate for dark matter. The Amsterdam-Princeton-Oxford team’s biggest achievement, he says, is to realize how axion clouds could enhance the signal, while the team’s “state-of-the-art” modelling makes the work stand out. However, he also urges caution because all theories of axion-photon mixing around neutron stars make assumptions about the stars’ magnetospheres, which are still poorly understood.

The post Axion clouds around neutron stars could reveal dark matter origins appeared first on Physics World.


Universe’s lifespan too short for monkeys to type out Shakespeare’s works, finds study

1 novembre 2024 à 16:00

According to the well-known thought experiment, the infinite monkeys theorem, a monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of William Shakespeare purely by chance.

Yet a new analysis by two mathematicians in Australia finds that even a troop might not have the time to do so within the supposed timeframe of the universe.

To find out, the duo created a model that includes 30 keys – all the letters in the English language plus punctuation marks. They assumed a constant chimpanzee population of 200,000 could be enlisted for the task, each typing at one key per second until the end of the universe in about 10^100 years.

“We decided to look at the probability of a given string of letters being typed by a finite number of monkeys within a finite time period consistent with estimates for the lifespan of our universe,” notes mathematician Stephen Woodcock from the University of Technology Sydney.

The mathematicians found that there is only a 5% chance a single monkey would type “bananas” within its own lifetime of just over 30 years. Yet even with all the chimps feverishly typing away, they would not be able to produce Shakespeare’s entire works (coming in at over 850,000 words) before the universe ends. They would, however, be able to type “I chimp, therefore I am”.
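The “bananas” figure is easy to sanity-check. The sketch below is a rough reproduction under simplifying assumptions (overlapping windows, a Poisson approximation and an assumed 31-year typing lifetime), not the authors’ exact calculation.

```python
import math

# Rough check of the 5% "bananas" figure (simplified; not the
# authors' exact calculation in the published paper).
n_keys = 30                       # letters plus punctuation, as in the model
word_len = len("bananas")         # 7 keystrokes
p_hit = (1 / n_keys) ** word_len  # chance a given 7-key window matches

seconds_per_year = 365.25 * 24 * 3600
keystrokes = 31 * seconds_per_year  # ~31-year lifetime at 1 key/s (assumed)

# Expected number of matches across all (overlapping) windows,
# then a Poisson estimate of the chance of at least one:
expected = keystrokes * p_hit
p_at_least_one = 1 - math.exp(-expected)
print(f"P(types 'bananas' in a lifetime) ~ {p_at_least_one:.1%}")
# -> a few per cent, consistent with the 5% quoted above
```

Shakespeare’s complete works, by contrast, run to millions of characters, and the matching probability shrinks exponentially with the length of the target string – which is why even 200,000 chimps and 10^100 years are nowhere near enough.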

“It is not plausible that, even with improved typing speeds or an increase in chimpanzee populations, monkey labour will ever be a viable tool for developing non-trivial written works,” the authors conclude, adding that while the infinite monkeys theorem is true, it is also “somewhat misleading”, or rather it’s “not to be” in reality.

The post Universe’s lifespan too short for monkeys to type out Shakespeare’s works, finds study appeared first on Physics World.


Nanocrystal shape affects molecular binding

1 novembre 2024 à 14:00

Molecules known as ligands attach more densely to flatter, platelet-shaped semiconductor nanocrystals than they do to spherical ones – a counterintuitive result that could lead to improvements in LEDs and solar cells as well as applications in biomedicine. While spherical nanoparticles are more curved than platelets, and were therefore expected to have the highest density of ligands on their surfaces, Guohua Jia and colleagues at Australia’s Curtin University say they observed the exact opposite.

“We found that the density of a commonly employed ligand, oleylamine (OLA), on the surface of zinc sulphide (ZnS) nanoparticles is highest for nanoplatelets, followed by nanorods and finally nanospheres,” Jia says.

Colloidal semiconductor nanocrystals show promise for a host of technologies, including field-effect transistors, chemical catalysis and fluorescent biomedical imaging as well as LEDs and photovoltaic cells. Because nanocrystals have a large surface area relative to their volume, their surfaces play an important role in many physical and chemical processes.

Notably, these surfaces can be modified and functionalized with ligands, which are typically smaller molecules such as long-chain amines, thiols, phosphines and phosphonates. The presence of these ligands changes the nanocrystals’ behaviour and properties. For example, they can make the nanocrystals hydrophilic or hydrophobic, and they can change the speed at which charge carriers travel through them. This flexibility allows nanocrystals to be designed and engineered for specific catalytic, optoelectronic or biomedical applications.

Quantifying ligand density

Previous research showed that the size of nanocrystals affects how many surface ligands can attach to them. The curvature of the crystals can also have an effect. The new work adds to this body of research by exploring the role of nanocrystal shape in more detail.

In their experiments, Jia and colleagues measured the density of OLA ligands on ZnS nanocrystals using three techniques: thermogravimetric analysis-differential scanning calorimetry; ¹H nuclear magnetic resonance spectroscopy; and inductively coupled plasma-optical emission spectrometry. They combined these measurements with semi-empirical molecular dynamics simulations.
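As a flavour of how such numbers are extracted, here is a generic back-of-the-envelope for converting a thermogravimetric (TGA) mass loss into a ligand density for spherical particles. All values are illustrative assumptions of mine; the Curtin group’s actual protocol combines all three techniques above.

```python
# Generic back-of-the-envelope: convert a TGA mass loss into a ligand
# density for spherical nanocrystals. Illustrative numbers only - these
# are not values taken from the Curtin group's paper.
N_A = 6.022e23           # Avogadro's number
mw_ola = 267.5           # g/mol, oleylamine
rho_zns = 4.09           # g/cm^3, bulk zinc sulphide (assumed)
radius_nm = 2.0          # assumed nanosphere radius
organic_fraction = 0.20  # assumed TGA mass loss (organic content)

# Ligand molecules per gram of inorganic core
ligands_per_g = organic_fraction / (1 - organic_fraction) / mw_ola * N_A

# Surface area per gram of core: for spheres A/V = 3/r, and V per gram = 1/rho
nm3_per_g = (1 / rho_zns) * 1e21          # cm^3/g converted to nm^3/g
area_per_g = (3 / radius_nm) * nm3_per_g  # nm^2 per gram

print(f"Ligand density ~ {ligands_per_g / area_per_g:.1f} per nm^2")
# -> about 1.5 ligands per nm^2, in the typical range for such systems
```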

The experiments, which are detailed in the Journal of the American Chemical Society, revealed that ZnS nanoplatelets with flat basal planes and uniform surfaces allow more ligands to attach tightly to them. This is because the ligands can stack in a parallel fashion on the nanoplatelets, whereas such tight stacking is more difficult on ZnS nanodots and nanorods due to staggered atomic arrangements and multiple atomic steps on their surfaces, Jia tells Physics World. “This results in a lower ligand density than on nanoplatelets,” he says.

The Curtin researchers now plan to study how the differently-shaped nanocrystals – spherical dots, rods and platelets – enter biological cells. This study will be important for improving the efficacy of targeted drug delivery.

The post Nanocrystal shape affects molecular binding appeared first on Physics World.


Mysterious brown dwarf is two objects, not one

Par : No Author
1 novembre 2024 à 11:45

Two independent studies suggest that the brown dwarf Gliese 229 B is not a single object, but rather a pair of brown dwarfs. The two teams reached this conclusion in different ways, with one using a combination of instruments at the European Southern Observatory’s Very Large Telescope (VLT) in Chile, and the other taking advantage of the extreme resolution of the infrared spectra measured by the Keck Observatory in Hawaii.

With masses between those of gas-giant planets and stars, brown dwarfs are too small to reach the extreme temperatures and pressures required to fuse hydrogen in their cores. Instead, a brown dwarf glows as it radiates heat accumulated during the gravitational collapse of its formation. While brown dwarfs are much dimmer than stars, their brightness increases with mass – much like stars.

In 1994, the first brown dwarf ever to be confirmed was spotted in orbit around a red dwarf star. Dubbed Gliese 229 B, the brown dwarf has a methane-rich atmosphere remarkably similar to Jupiter’s – and this was the first planet-like atmosphere observed outside the solar system. The discovery was especially important since it would help astronomers to gain deeper insights into the formation and evolution of massive exoplanets.

Decades-long mystery

Since the discovery, extensive astrometry and radial velocity measurements have tracked Gliese 229 B’s gravitational influence on its host star – allowing astronomers to constrain its mass to 71 Jupiter masses. But this mass seemed too high, sparking a decades-long astronomical mystery.

“This value didn’t make any sense, since a brown dwarf of that mass would be much brighter than Gliese 229 B. Therefore, astronomers got worried that our models of stars and brown dwarfs might be missing something big,” explains Jerry Xuan at the California Institute of Technology (Caltech), who led the international collaboration responsible for one of the studies. Xuan’s team also included Rebecca Oppenheimer – who was part of the team that first discovered Gliese 229 B as a PhD student at Caltech.

Xuan’s team investigated the mass–brightness mystery using separate measurements from two cutting-edge instruments at the VLT: CRIRES+, which is a high-resolution infrared spectrograph and the GRAVITY interferometer.

“CRIRES+ disentangles light from two objects by dispersing it at high spectral resolution, whereas GRAVITY combines light from four different eight metre telescopes to see much finer spatial details than previous instruments can resolve,” Xuan explains. “GRAVITY interferes light from all four of these telescopes to enhance the spatial resolution.”

Time-varying shifts

Meanwhile, a team of US astronomers led by Samuel Whitebrook at the University of California, Santa Barbara (UCSB), studied Gliese 229 B using the Near-Infrared Spectrograph (NIRSPEC) at the Keck Observatory in Hawaii. The extreme resolution of this instrument allowed them to measure time-varying shifts in the brown dwarf’s spectrum, which could hint at an as-yet unforeseen gravitational influence on its orbit.

Within GRAVITY’s combined observations, Xuan’s team discovered that Gliese 229 B was not a single object, but a pair of brown dwarfs that are separated by just 16 Earth–Moon distances and orbit each other every 12 days.
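Those two numbers are enough for a quick consistency check against Kepler’s third law. This is my own back-of-the-envelope, assuming a circular orbit and treating the quoted separation as the semi-major axis.

```python
# Quick Kepler's-third-law sanity check: do a 12-day orbit and a
# separation of 16 Earth-Moon distances imply a sensible total mass?
# (Assumes a circular orbit; treats the separation as the semi-major axis.)
earth_moon_km = 384_400
au_km = 1.495979e8

a_au = 16 * earth_moon_km / au_km   # semi-major axis in astronomical units
p_yr = 12 / 365.25                  # orbital period in years

# Kepler's third law in solar units: M_total (solar masses) = a^3 / P^2
m_solar = a_au**3 / p_yr**2
m_jupiter = m_solar * 1047.6        # 1 solar mass ~ 1047.6 Jupiter masses
print(f"Total mass ~ {m_jupiter:.0f} Jupiter masses")
# -> about 67, pleasingly close to the ~71 inferred from astrometry
```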

And, after fitting CRIRES+’s data to existing brown dwarf models, they detected features within Gliese 229 B’s spectrum that clearly indicated the presence of two different atmospheres.

Frequency shifts

Whitebrook’s team came to a very similar conclusion. Measuring the brown dwarf’s infrared spectrum at different epochs, they identified frequency shifts which had not shown up in previous measurements. Again, these discrepancies clearly hinted at the presence of a hidden binary companion to Gliese 229B.

The two objects comprising the binary have been named Gliese 229Ba and Gliese 229Bb. Crucially, both of these bodies would be significantly dimmer when compared to a brown dwarf of their combined mass. If the teams’ conclusions are correct, this could finally explain why Gliese 229B is so massive, despite its lacklustre brightness.

The findings also suggest that Gliese 229 B may be just the first of many brown dwarf binaries to be discovered. Based on their results, Xuan’s team believes it is likely that binaries of brown dwarfs, and potentially even of giant planets like Jupiter, also exist around other stars. These would provide intriguing targets for future observations.

“Finally, our findings also show how complex and messy the star formation process is,” Xuan says. “We should always be open to surprises, after all, the solar system is only one system in billions of stellar systems in the Milky Way galaxy.”

The Caltech-led team describes its observations in Nature, and the UCSB team in The Astrophysical Journal Letters.

The post Mysterious brown dwarf is two objects, not one appeared first on Physics World.


A PhD in cups of espresso: how logging my coffee consumption helped me write my thesis

Par : No Author
1 novembre 2024 à 10:20

Every PhD student has been warned at least once that doing a PhD is stressful, and that writing a thesis can make you thoroughly fed up, even if you’re working on a topic you’re passionate about.

When I was coming to the end of my PhD, this thought began to haunt me. I was enjoying my research on the interaction between light and plasmonic metamaterials, but I worried that the stress of writing my thesis would spoil it for me. Perhaps guided by this fear, I started logging my writing activity in a spreadsheet. I recorded how many hours per day I spent writing and how many pages and figures I had completed at the end of each day.

The immediate benefit was that the spreadsheet granted me a quick answer when, once a week, my supervisor asked me the deeply feared question: “So, how many pages?” Probably to his great surprise, my first answer was “Nine cups of espresso.”

In Naples, Italy, we have a relationship with coffee that borders on religious

The idea of logging my writing activity probably came from my background as an experimental physicist, but the use of espresso cups as a unit goes back to my roots in Naples, Italy. There, we have a relationship with coffee that borders on religious. And so, in a difficult time, I turned to the divine and found my strength in the consumption of coffee.

Scientific method When he was writing his PhD thesis, Vittorio Aita logged his progress in hours, pages and cups of espresso. (Courtesy: Vittorio Aita)

As well as tracking my writing, I also recorded the number of cups of espresso I drank each day. The data I gathered, which is summarized in the above graph, turned out to be quite insightful. Let’s get scientific:

I began writing my thesis on 27 April 2023. As shown by the spacing between entries in the following days, I started at a slow pace, dedicating myself to writing for only two days a week and consuming an average of three units of coffee per day. I should add that it was quite easy to “write” 16 pages on the first day because at the start of the process, you get a lot of pages for free. Don’t underestimate the joy of realizing you’ve written 16 pages at once, even if those are just the table of contents and other placeholders.

In the second half of May, there was a sudden, two-unit increase in daily coffee consumption, with a corresponding increase in the number of pages written. Clearly by the sixth entry of my log, I was starting to feel like I wasn’t writing enough. This called for more coffee, and my productivity consequently peaked at seven pages in one day. By the end of May, I had already written almost 80 pages.

Readers with an eye for detail will also notice that on the second to last day of May, coffee consumption is not expressed as an integer. To explain this, I must refer again to my Italian background. Although I chose to define the unit of coffee by volume – a unit of espresso is the amount obtained from a reusable capsule – the half-integer value reflects the importance of the quality of the grind. I had been offered a filtered coffee that my espresso-based cultural heritage could not consider worth a whole unit. Apologies to filter coffee drinkers.

From looking at the graph entries between the end of May and the middle of August, you would be forgiven for thinking that I took a holiday, despite my looming deadline. You would however be wrong. My summer break from the thesis was spent working on a paper.

However, in the last months of work, my slow-paced rhythm was replaced by a full-time commitment to my thesis. Days of intense writing (and figure-making!) were interspersed with final efforts to gather new data in the lab.

In October some photons from the end of the tunnel started to be detectable, but at this point I unfortunately caught COVID-19. As you can tell from the graph, in the last weeks of writing I worked overtime to get back on track. This necessitated a sudden increase in coffee units: having one more unit of coffee each day got me through a week of very long working days, peaking at a single day of 16 hours of work and 6 cups of espresso.

I finally submitted my thesis on 20 December, and I did it with one of the most important people in my life at my side: my grandma. I clicked “send” and hugged her for as long as we both could breathe. I felt suddenly lighter and I was filled with a deep feeling of fulfilment. I had totalled 304 hours of writing, 199 pages and an impressive 180 cups of espresso.

With hindsight, this experience taught me that the silly and funny task of logging how much coffee I drank was in fact a powerful tool that stopped me from getting fed up with writing.

More often than not, I would observe the log after a day of what felt like slow progress and realize that I had achieved more than I thought. On other days, when I was disappointed with the number of pages I had written (once even logging a negative number), the amount of coffee I had consumed would remind me of how challenging they had been to complete.

Doing a PhD can be an emotional experience, particularly when writing up the thesis: the self-realization, the pride, the constant need to improve your work, and the desire to convey the spark and pull of curiosity that first motivated you. This must all be done in a way that is both enjoyable to read and sufficiently technical.

All of this can get frustrating, but I hope sharing this will help future students embrace the road to achieving a PhD. Don’t take yourself too seriously and keep looking for the fun in what you do.

The post A PhD in cups of espresso: how logging my coffee consumption helped me write my thesis appeared first on Physics World.
