Physics cookbook is fun but fails to gel

There’s a lot of physics in a cup of tea. Compounds in the tea leaves start to diffuse as soon as you pour hot water over them and – if you look closely enough – you’ll see turbulence as your milk mixes in. This humble beverage also displays conservation of momentum in the parabolic vortex that forms when tea is stirred.

Tea is just one of the many topics covered in Physics in the Kitchen by George Vekinis, director of research at the National Research Centre “Demokritos” (NCSRD) in Greece. In writing this book, Vekinis – who is a materials physicist by training – joins a long tradition of scientists writing about cooking.

The book is full of insights into the physics and chemistry underlying creative processes in a kitchen, from making sauces and cooking vegetables to the use of acid and the science behind common equipment. One of the book’s strengths is that, while it has a logical structure, it is possible to dip in and out without reading everything.

Talking of dips, I particularly enjoyed the section on sauces. My experience in this area is confined to roux-based sauces thickened with flour, and I was surprised to discover that Vekinis considers this to be a “bit of a cheat”. Sauces prepared in the “Greek way”, he points out, often don’t involve starch at all.

Instead, a smooth sauce can be made just by heating an acid such as wine or lemon with egg and broth. This ancient method, which the author describes in a long section on “Creamy emulsion or curdled mess?”, involves the extraction of small molecules and requires extra care to prevent curdling or splitting.

However, as a food physicist myself, I did have some issues with the science in this and later sections.

For example, Vekinis uses the word “gel” far too loosely. Sometimes he’s talking about the gels created when dissolved proteins form a solid-like network even though the mixture remains mostly liquid – such as the brown gel that appears below a roast ham or chicken that has cooled. However, he also uses the same word to describe what you get when starch granules swell and thicken when making a roux sauce, which is a very different process.

Moreover, Vekinis describes both kinds of gel as forming through “polymerization”, which is inaccurate. Polymerization is what happens when small molecular building blocks bond chemically together to form spindly, long-chain molecules. If these molecules link up, they can then form a branched gel, such as silicone, which has some structural similarities to a protein gel. However, the bonding process is very different, and I found this comparison with polymer science unhelpful.

Meanwhile, in the section “Wine, vinegar, and lemon”, we are told that to prepare a smooth sauce you have to boil “an acidic agent as a catalyst for a polymerization reaction” and that “dry wine does the job too”. Though the word is sometimes used colloquially, what is described here is not, in the scientific sense, a catalytic reaction.

Towards the end of the book, Vekinis moves beyond food and looks at the physics behind microwaves, fridges and other kitchen appliances. He describes, for example, how the oscillation of polar molecules such as water in microwaves produces heating in a way that is completely distinct from a conventional oven.

It’s well known that a microwave oven doesn’t heat food uniformly and the book describes how standing waves in the oven produce hot and cold spots. However, I feel more could have been said about the effect of the shape and size of food on how it heats. There has been interesting work, for example, investigating the different heating patterns in square- and round-edged foods.

Overall, I found the book an enjoyable read even if Vekinis sometimes over-simplifies complicated subjects in his attempts to make tricky topics accessible. I shared the book with some teacher friends of mine, who all liked it too, saying they’d use it in their food-science lessons. They appreciated the way the book progresses from the simple (such as heat and energy) to the complex (such as advanced thermodynamic concepts).

Physics in the Kitchen is not meant to be a cookbook, but I do wonder if Vekinis – who describes himself as a keen cook as well as a scientist – could have made himself clearer by including a few recipes to illustrate the processes he describes. Knowing how to put them into practice will not only help us to make wonderful meals, but also enhance our enjoyment of them.

  • 2023 Springer £17.99 hb 208pp

Revised calibration curve improves radiocarbon dating of ancient Kyrenia shipwreck

The Kyrenia Ship is an ancient merchant vessel that sank off the coast of Cyprus in the 3rd century BCE. Through fresh analysis, a team led by Sturt Manning at Cornell University has placed tighter constraints on the age of the shipwreck. The researchers achieved this through a combination of techniques that improve the accuracy of radiocarbon dating and reverse wood treatments that would otherwise make dating impossible.

In the late 1960s, a diving expedition off the coast of Kyrenia, Northern Cyprus, uncovered the wreck of an ancient Greek merchant ship. With over half of its hull timbers still in good condition, the wreck was remarkably well preserved, and carried an archaeological treasure trove of valuable coins and artefacts.

“Ancient shipwrecks like these are amazing time capsules, since their burial in deeper water creates a near oxygen-free environment,” Manning explains. “This means that we get a remarkable preservation of materials like organics and metals, which usually do not preserve well in archaeological contexts.”

Following the discovery, the Kyrenia ship was carefully excavated and brought to the surface, where its timbers were treated to prevent further decay. In accordance with preservation techniques at the time, this involved impregnating the wood with polyethylene glycol (PEG) – but as archaeologists attempted to determine the age of the wreck through radiocarbon dating, this approach soon created problems.

To perform radiocarbon dating, researchers need to measure the amount of carbon-14 (14C) that a sample contains. This isotope is created naturally in the atmosphere and absorbed into wood through photosynthesis, but after the tree is cut down it is no longer replenished and gradually decays (into nitrogen-14), while the stable isotopes 12C and 13C remain. This means that researchers can estimate the age of a sample by measuring the proportion of 14C it contains relative to 12C and 13C.
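As an aside for readers who want to see the arithmetic: the sketch below converts a measured 14C fraction into a “conventional” radiocarbon age using the standard Libby mean-life of 8033 years. The sample value is made up for illustration, and the result still needs the calibration step described below to become a calendar date.

```python
import math

LIBBY_MEAN_LIFE = 8033  # years; the defined constant used for "conventional" radiocarbon ages

def conventional_radiocarbon_age(f14c: float) -> float:
    """Return the conventional radiocarbon age (years BP) for a measured
    14C level expressed as a fraction of the modern reference value."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

# Example: a hypothetical sample retaining ~75% of the modern 14C level
print(round(conventional_radiocarbon_age(0.75)))  # ~2311 years BP, before calibration
```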

However, when samples from the Kyrenia ship were treated with PEG, the wood became contaminated with far older, petroleum-derived carbon. “Initially, it was not possible to get useful radiocarbon dates on the PEG-conserved wood,” Manning explains.

Reconstructed wreck: remains of the Kyrenia Ship hull shortly after reassembly of the timbers recovered from the seabed excavation. (Courtesy: CC BY 4.0/Kyrenia Ship Excavation team)

Recent archaeological studies indicate that the Kyrenia ship had likely sunk between 294 and 290 BCE. But radiocarbon dating using the most up-to-date version of the radiocarbon “calibration curve” for this period – which accounts for how concentrations of 14C in the atmosphere vary over time – still didn’t align with the archaeological constraints.

“With the current internationally approved methods, radiocarbon dates on some of the non-PEG-treated materials, such as almonds in the cargo, gave results inconsistent with any of the archaeological assessments,” says Manning.

To address this disparity, the researchers employed a combination of approaches to improve on previous estimates of the Kyrenia ship’s true age. Part of their research involved analysing the most up-to-date calibration curve for the period when the ship sank, and comparing it with wood samples that had been dated using a different technique: analysing their distinctive patterns of tree rings.

Tree-ring patterns vary from year to year due to short-term variations in rainfall, but are broadly shared by all trees growing in the same region at a given time. Taking advantage of this, Manning’s team carried out radiocarbon dating on a number of samples that had already been dated from their tree ring patterns.

“We used known-age tree-rings from the western US and the Netherlands to redefine the atmospheric radiocarbon record in the northern hemisphere over the period between 400 and 250 BCE,” Manning explains. Atmospheric concentrations of 14C differ slightly between Earth’s hemispheres, since the northern hemisphere contains far more vegetation overall.

In addition to revising the radiocarbon calibration curve, the team also investigated new techniques for cleaning PEG from contaminated samples. They tested the techniques on samples dating from around 60 CE, which had undergone radiocarbon dating before being treated with PEG. They showed that with the appropriate sample pretreatment, they could closely reproduce these known dates.

By combining these techniques, the researchers had all the tools that they needed to constrain the age of the Kyrenia ship. “With a technique called Bayesian chronological modelling, we combined all the tree-ring information from the ship timbers, the radiocarbon dates, and the ship’s archaeological time sequence – noting how the ship’s construction must predate its last cargo and sinking,” Manning describes.
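For the curious, here is a toy illustration of the Bayesian step only: combining one hypothetical radiocarbon measurement with a stand-in calibration curve on a grid of calendar years. It is not the team’s model (which also folds in tree-ring sequences and the ordering of construction, cargo and sinking), and every number in it is invented for illustration.

```python
import numpy as np

# Toy Bayesian radiocarbon calibration; all numbers are hypothetical placeholders.
years = np.arange(-400, -249)              # calendar years, 400-250 BCE (negative = BCE)
cal_curve = 2250 - 0.8 * (years + 300)     # stand-in calibration curve: 14C age (BP) per calendar year
cal_err = 15.0                             # assumed curve uncertainty (years)

measured_age, measured_err = 2260.0, 20.0  # one simulated lab measurement (BP)

prior = np.ones_like(years, dtype=float)   # uniform prior over the window
sigma2 = measured_err**2 + cal_err**2
likelihood = np.exp(-0.5 * (measured_age - cal_curve)**2 / sigma2)

posterior = prior * likelihood
posterior /= posterior.sum()               # normalize to a probability distribution

best = years[np.argmax(posterior)]
print(f"most probable calendar year: {abs(best)} BCE")
```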

“The date for the ship is most likely between 293 and 271 BCE: confirming other recent arguments that the original late 4th century BCE date for the ship needs a little revision,” he says.

By constraining this date, Manning’s team hopes that the work could enable researchers to better understand where the Kyrenia ship and its numerous artefacts fit within the wider chronology of ancient Greece. In turn, their discoveries could ultimately help archaeologists and historians to deepen their understanding of a fascinating era in history.

The researchers report their findings in PLOS ONE.

New titanium:sapphire laser is tiny, low-cost and tuneable

A compact, integrated titanium:sapphire laser that needs only a simple green LED as a pump source has been created by researchers at Stanford University in the US. Their design reduces the cost and footprint of a titanium:sapphire laser by three orders of magnitude and the power consumption by two. The team believes its device represents a key step towards the democratization of a laser type that plays important roles in scientific research and industry.

Since its invention by Peter Moulton at the Massachusetts Institute of Technology in 1982, the titanium:sapphire laser has become an important research and engineering tool. This is thanks to its ability to handle high powers and emit either spectrally pure continuous wave signals or broadband, short pulses. Indeed, the laser was used to produce the first frequency combs, which play important roles in optical metrology.

Unlike numerous other types of lasers such as semiconductor lasers, titanium:sapphire lasers have proved extremely difficult to miniaturize because traditional designs require very high input power to achieve lasing. “Titanium:sapphire has the ability to output very high powers, but because of the way the laser level structure works – specifically the fluorescence has a very short lifetime – you have to pump very hard in order to see appreciable amounts of gain,” says Stanford’s Joshua Yang. Traditional titanium:sapphire lasers have to be pumped with high-powered lasers – and therefore cost in excess of $100,000.

Logic, sensing, and quantum computing

If titanium:sapphire lasers could be miniaturized and integrated into chips, potential applications would include optical logic, sensing and quantum computing. Last year, Yubo Wang and colleagues at Yale University unveiled a chip-integrated titanium:sapphire laser that utilized an indium gallium nitride pump diode coupled to a titanium:sapphire gain medium through its evanescent field. The evanescent component of the electromagnetic field does not propagate but decays exponentially with distance from the source. By reducing loss, this integrated setup reduced the lasing threshold by more than an order of magnitude. However, Jelena Vučković – the leader of the Stanford group – says that “the threshold was still relatively high because the overlap with the gain medium was not maximized”.

In the new research, Vučković’s group fabricated their laser devices by creating monocrystalline titanium:sapphire optical resonators about 40 microns across and less than 1 micron thick on a layer of sapphire using a silicon dioxide interface. The titanium:sapphire was then polished to within 0.1 micron smoothness using reactive ion etching. The resonators achieved almost perfect overlap of the pump and lasing modes, which led to much less loss and a lasing threshold 22 times lower than in any previous titanium:sapphire laser. “All the fabrication processes are things that can be done in most traditional clean rooms and are adaptable to foundries,” says Yang – who is first author of a paper in Nature that describes the new laser.

The researchers achieved lasing with a $37 green laser diode as the pump. However, subsequent experiments described in the paper used a tabletop green laser because the team is still working to couple the cheaper diode into the system effectively.

Optimization challenge

“Being able to complete the whole picture of diode to on-chip laser to systems applications is really just an optimization challenge, and of course one we’re really excited to work on,” says Yang. “But even with the low optimization we start with, it’s still able to achieve lasing.”

The researchers went on to demonstrate two things that had never been achieved before. First, they incorporated the tunability so valued in titanium:sapphire lasers into their system by using an integrated heater to modify the refractive index of the resonator, allowing it to lase in different modes. They achieved single mode lasing in a range of over 50 nm, and believe that it should be possible, with optimization, to extend this to several hundred nanometres.

They also performed a cavity quantum electrodynamics experiment with colour centres in silicon carbide using their light source: “That’s why [titanium:sapphire] lasers are so popular in quantum optics labs like ours,” says Vučković. “If people want to work with different colour centres or quantum dots, they don’t have a specific wavelength at which they work.” The use of silicon carbide is especially significant, she says, because it is becoming popular in the high-power electronics used in systems like electric cars.

Finally, they produced a titanium:sapphire laser amplifier, something that the team says has not been reported before. They injected 120 pJ pulses from a commercial titanium:sapphire laser and amplified them to 2.3 nJ over a distance of 8 mm down the waveguide. The distortion introduced by the amplifier was the lowest allowed by the laws of wave motion – something that had not been possible for any integrated amplifier at any wavelength.
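A quick back-of-envelope check of those figures, assuming the quoted energies are per pulse, gives the amplifier’s overall gain in decibels and per millimetre of waveguide:

```python
import math

e_in, e_out = 120e-12, 2.3e-9   # pulse energies quoted in the article: 120 pJ in, 2.3 nJ out
length_mm = 8.0                 # amplifier waveguide length quoted in the article

gain_linear = e_out / e_in                  # ~19x overall
gain_db = 10 * math.log10(gain_linear)      # ~12.8 dB total
print(f"{gain_linear:.1f}x total, {gain_db:.1f} dB, {gain_db / length_mm:.1f} dB/mm")
```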

Yubo Wang is impressed: “[Vučković and colleagues have] achieved several important milestones, including very low-threshold lasing, very high-power amplification and also tuneable laser integration, which are all very nice results,” he says. “At the end of the paper, they have a compelling demonstration of cavity-integrated artificial atoms using their titanium:sapphire laser.” He says he would be interested to see if the team could produce multiple devices simultaneously at wafer scale. He also believes it would be interesting to look at integration of other visible-wavelength lasers: “I’m expecting to see more results in the next few years,” he says.

Oculomics: a window to the health of the body

More than 13 million eye tests are carried out in the UK each year, making it one of the most common medical examinations in the country. But what if eye tests could tell us about more than just the health of the eye? What if these tests could help us spot some of humanity’s greatest healthcare challenges, including diabetes, Alzheimer’s or heart disease?

It’s said that the eye is the “window to the soul”. Just as our eyes tell us lots about the world around us, so they can tell us lots about ourselves. Researchers working in what’s known as “oculomics” are seeking ways to look at the health of the body, via the eye. In particular, they’re exploring the link between certain ocular biomarkers (changes or abnormalities in the eye) and systemic health and disease. Simply put, the aim is to unlock the valuable health data that the eye holds on the body (Ophthalmol. Ther. 13 1427).

Oculomics is particularly relevant when it comes to chronic conditions, such as dementia, diabetes and cardiovascular disease. They make up most of the “burden of disease” (a factor that is calculated by looking at the sum of the mortality and morbidity of a population) and account for around 80% of deaths in industrialized nations. We can reduce how many people die or get ill from such diseases through screening programmes. Unfortunately, most diseases don’t get screened for and – even when they are – there’s limited or incomplete uptake.

Cervical-cancer screening, for example, is estimated to have saved the lives of one in 65 of all British-born women since 1950 (Lancet 364 249), but nearly a third of eligible women in the UK do not attend regular cervical screening appointments. This highlights the need for new and improved screening methods that are as non-intimidating, accessible and patient-friendly as a trip to a local high-street optometrist.

Seeing the light: the physics and biology of the eye

In a biological sense, the eye is fantastically complex. It can adapt from reading this article directly in front of you to looking at stars that are light-years away. The human eye is a dynamic living tissue that can operate across six orders of magnitude in brightness, from the brightest summer days to the darkest cloudy nights.

The eye has several key structures that enable this (figure 1). At the front, the cornea is the eye’s strongest optical component, refracting light as it enters the eye to form an image at the back of the eye. The iris allows the eye to adapt to different light levels, as it changes size to control how much light enters the eye. The crystalline lens provides depth-dynamic range, changing size and shape to focus on objects nearby or far away from the eye. The aqueous humour (a water-like fluid in front of the lens) and the vitreous humour (a gel-like liquid between the lens and the retina) give the eye its shape, and provide the crucial separation over which the refraction of light takes place. Finally, light reaches the retina, where the “pixels” of the eye – the photoreceptors – detect the light.
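To see why the cornea dominates, here is a rough schematic-eye calculation using approximate textbook powers for the cornea and the relaxed lens; the exact values vary from eye to eye and are only assumptions here. The combined power comes out near 60 dioptres, with an image-space focal length broadly consistent with the roughly 24 mm length of a typical adult eye.

```python
# Rough schematic-eye estimate showing the cornea's dominant contribution.
# All values are approximate textbook figures, not measured data.
P_CORNEA = 43.0      # dioptres
P_LENS = 19.0        # dioptres (relaxed crystalline lens)
D_SEP = 5.7e-3       # m, approximate cornea-lens separation
N_MEDIUM = 1.336     # refractive index of the aqueous/vitreous humour

# Equivalent power of two elements separated by d in a medium of index n
P_eye = P_CORNEA + P_LENS - (D_SEP / N_MEDIUM) * P_CORNEA * P_LENS
f_image = N_MEDIUM / P_eye   # rear focal distance inside the eye (m)

print(f"total power ~ {P_eye:.0f} D, image-space focal length ~ {f_image * 1000:.1f} mm")
```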

1 Look within

Diagram of the eye with labels including iris, cornea and vitreous humour
(Courtesy: Occuity)

The anatomy of the human eye, highlighting the key structures including the iris, cornea, the lens and the retina.

The tissues and the fluids in the eye have optical characteristics that stem from their biological properties, making optical methods ideally suited to study the eye. It’s vital, for example, that the aqueous humour is transparent – if it were opaque, our vision would be obscured by our own eyes. The aqueous humour also needs to fulfil other biological properties, such as providing nutrition to the cornea and lens.

To do all these things, our bodies produce the aqueous humour as an ultrafiltered blood plasma. This plasma contains water, amino acids, electrolytes and more, but crucially no red blood cells or opaque materials. The molecules in the aqueous humour reflect the molecules in the blood, meaning that measurements on the aqueous humour can reveal insights into blood composition. This link between optical and biological properties is true for every part of the eye, with each structure potentially revealing insights into our health.

Chronic disease insights and AI

Currently, almost all measurements we take of the eye are to discern the eye’s health only. So how can these measurements tell us about chronic diseases that affect other parts of the body? The answer lies in both the incredible properties of the eye, and data from the sheer number of eye examinations that have taken place.

Chronic diseases can affect many different parts of the body, and the eye is no exception (figure 2). For example, cardiovascular disease can change artery and vein sizes. This is also true in the retina and choroid (a thin layer of tissue that lies between the retina and the white of the eye) – in patients with high blood pressure, veins can become dilated, offering optometrists and ophthalmologists insight into this aspect of a patient’s health.

For example, British optometrist and dispensing optician Jason Higginbotham points out that throughout his career “Many eye examinations have yielded information about the general health of patients – and not just their vision and eye health. For example, in some patients, the way the arteries cross over veins can ‘squash’ or press on the veins, leading to a sign called ‘arterio-venous nipping’. This is a possible indicator of hypertension and hardening of the arteries.”

Higginbotham, who is also the managing editor of Myopia Focus, adds that “Occasionally, one may spot signs of blood-vessel leakage and swelling of the retinal layers, which is indicative of active diabetes. For me, a more subtle sign was finding the optic nerves of one patient appearing very pale, almost white, with them also complaining of a lack of energy, becoming ‘clumsier’ in their words and finding their vision changing, especially when in a hot bath. This turned out to be due to multiple sclerosis.”

2 Interconnected features

Diagram of the eye with labels explaining detectable changes that occur
(Courtesy: Occuity)

Imaging the eye may reveal ocular biomarkers of systemic disease, thanks to key links between the optical and biological properties of the eye. With the emergence of oculomics, it may be possible – through a standard eye test – to detect cardiovascular diseases; cancer; neurodegenerative disease such as Alzheimer’s, dementia and Parkinson’s disease; and even metabolic diseases such as diabetes.

However, precisely because there are so many things that can affect the eye, it can be difficult to attribute changes to a specific disease. If there is something abnormal in the retina, could this be an indicator of cardiovascular disease, or could it be diabetes? Perhaps it is a by-product of smoking – how can an optometrist tell?

This is where the sheer number of measurements becomes important. The NHS has been performing eye tests for more than 60 years, giving rise to databases containing millions of images, complete with patient records about long-term health outcomes. These datasets have been fed into artificial intelligence (AI) deep-learning models to identify signatures of disease, particularly cardiovascular disease (British Journal of Ophthalmology 103 67; J. Clin. Med. 10.3390/jcm12010152). Models can now predict cardiovascular risk factors with accuracy that is comparable to the current state-of-the-art. Also, new image-analysis methods are under constant development, allowing further signatures of cardiovascular disease, diabetes and even dementia to be spotted in the eye.
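As a flavour of what such models look like, the sketch below defines a deliberately tiny convolutional network (using PyTorch) that regresses a single cardiovascular risk factor from a fundus photograph. The architecture, layer sizes and the idea of predicting, say, systolic blood pressure are illustrative assumptions, not a description of any published oculomics model.

```python
import torch
import torch.nn as nn

# Toy stand-in for the kind of model used in retinal oculomics studies:
# a small CNN that regresses one continuous risk factor from a fundus image.
class FundusRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single continuous output, e.g. systolic blood pressure

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FundusRegressor()
dummy_batch = torch.randn(4, 3, 224, 224)   # four fake RGB fundus images
print(model(dummy_batch).shape)             # torch.Size([4, 1])
```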

But bias is a big issue when it comes to AI-driven oculomics. When algorithms are developed using existing databases, groups or communities with historically worse healthcare provision will be under-represented in these databases. Consequently, the algorithms may perform worse for them, which risks embedding past and present inequalities into future methods. We have to be careful not to let such biases propagate through the healthcare system – for example, by drawing on multiple databases from different countries to reduce sensitivities to country-specific bias.

Although AI oculomics methods have not yet moved beyond clinical research, it is only a matter of time. Ophthalmology companies such as Carl Zeiss Meditec (Ophthalmology Retina 7 1042) and data companies such as Google are developing AI methods to spot diabetic retinopathy and other ophthalmic diseases. Regulators are also engaging more and more with AI, with the FDA having reviewed at least 600 medical devices that incorporate AI or machine learning across medical disciplines, including nine in the ophthalmology space, by October 2023.

Eye on the prize

So how far can oculomics go? What other diseases could be detected by analysing hundreds of thousands of images? And, more importantly, what can be detected with only one image or measurement of the eye?

Ultimately, the answer lies in matching the measurement technique to the disease: if we want to detect more diseases, we need more measurement techniques.

At Occuity, a UK-based medical technology company, we are developing solutions to some of humanity’s greatest health challenges through optical diagnostic technologies. Our aim is to develop pain-free, non-contact screening and monitoring of chronic health conditions, such as glaucoma, myopia, diabetes and Alzheimer’s disease (Front. Aging Neurosci. 13 720167). We believe that the best way that we can improve health is by developing instruments that can spot specific signatures of disease. This would allow doctors to start treatments earlier, give researchers a better understanding of the earliest stages of disease, and ultimately, help people live healthier, happier lives.

Currently, we are developing a range of instruments that target different diseases by scanning a beam of light through the different parts of the eye and measuring the light that comes back. Our first instruments measure properties such as the thickness of the cornea (needed for accurate glaucoma diagnosis); and the length of the eyeball, which is key to screening and monitoring the epidemic of myopia, which is expected to affect half of the world’s population by 2050. As we advance these technologies, we open up opportunities for new measurements to advance scientific research and clinical diagnostics.

Looking into the past

The ocular lens provides a remarkable record of our molecular history because, unlike many other ocular tissues, the cells within the lens do not get replaced as people age. This is particularly important for a family of molecules dubbed “advanced glycation end-products”, or AGEs. These molecules are waste products that build up when glucose levels are too high. While present in everybody, they occur in much higher concentrations in people with diabetes and pre-diabetes (people who have higher blood-glucose levels and are at high risk of developing diabetes, but largely without symptoms). Measurements of a person’s lens AGE concentration may therefore indicate their diabetic state.

Fortunately, these AGEs have a very important optical property – they fluoresce. Fluorescence is a process where an atom or molecule absorbs light at one colour and then re-emits light at another colour – it’s why rubies glow under ultraviolet light. The lens is the perfect place to look for these AGEs, as it is very easy to shine light into the lens. Luckily, a lot of this fluorescence makes it back out of the lens, where it can be measured (figure 3).

3 AGEs and fluorescence

Graph with x axis labelled fluorescence and y axis labelled age. The data are spread out but roughly follow a line that is gently rising from left to right
(Courtesy: Occuity)

Fluorescence, a measure of advanced glycation end-products (AGE) concentration, rises as people get older. However, it increases faster in diabetes as higher blood-glucose levels accelerate the formation of AGEs, potentially making lens fluorescence a powerful tool for detecting diabetes and pre-diabetes. This chart shows rising fluorescence as a function of both age and diabetic status, taken as part of an internal Occuity trial on 21 people using a prototype instrument; people with diabetes are shown by orange points and people without diabetes are shown by blue points. Error bars are the standard deviation of three measurements. These measurements are non-invasive, non-contact and take just seconds to perform.

Occuity has developed optical technologies that measure fluorescence from the lens as a potential diabetes and pre-diabetes screening tool, building on our optometry instruments. Although they are still in the early stages of development, the first results taken earlier this year are promising, with fluorescence clearly increasing with age, and strong preliminary evidence that the two people with diabetes in the dataset have higher lens fluorescence than those without diabetes. If these results are replicated in larger studies, this will show that lens-fluorescence measurement techniques are a way of screening for diabetes and pre-diabetes rapidly and non-invasively, in easily accessible locations such as high-street optometrists and pharmacists.
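One way such a screen could work in practice – sketched here with synthetic data rather than Occuity’s measurements – is to fit the normal age trend of lens fluorescence in a reference group and flag readings that sit well above it:

```python
import numpy as np

# Illustrative sketch on synthetic data: fit the age trend of lens fluorescence
# in a reference group, then flag readings well above the age-expected value.
rng = np.random.default_rng(0)
ages = rng.uniform(20, 70, 50)
fluor = 0.02 * ages + rng.normal(0, 0.1, 50)         # made-up baseline trend plus noise

slope, intercept = np.polyfit(ages, fluor, 1)        # age-adjusted baseline
resid_sd = np.std(fluor - (slope * ages + intercept))

def flag_elevated(age: float, reading: float, k: float = 2.0) -> bool:
    """Flag a reading more than k standard deviations above the age-expected value."""
    return reading > slope * age + intercept + k * resid_sd

print(flag_elevated(55, 1.1), flag_elevated(55, 1.8))  # likely False, True
```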

Such a tool would be revolutionary. Almost five million people in the UK have diabetes, including over a million with undiagnosed type 2 diabetes whose condition goes completely unmonitored. There are also over 13 million people with pre-diabetes. If they can be warned before they move from pre-diabetes to diabetes, early-stage intervention could reverse this pre-diabetic state, preventing progression to full diabetes and drastically reducing the massive impact (and cost) of the illness.

Living in the present

Typical diabetes management is invasive and unpleasant, as it requires finger pricks or implants to continuously monitor blood glucose levels. This can result in infections, as well as reduce the effectiveness of diabetes management, leading to further complications. Better, non-invasive glucose-measurement techniques could transform how patients can manage this life-long disease.

As the aqueous humour is an ultra-filtered blood plasma, its glucose concentration mimics that of the glucose concentration in blood. This glucose also has an effect on the optical properties of the eye, increasing the refractive index that gives the eye its focusing power (figure 4).

4 Measuring blood glucose level

Graph with x axis labelled refractive index and y axis labelled glucose concentration. The data points show a gradually rising line from left to right
(Courtesy: Occuity)

The relationship between blood glucose and optical measurements on the eye has been probed theoretically and experimentally at Occuity. The goal is to create a non-invasive, non-contact measure of blood-glucose concentration for people with diabetes. Occuity has shown that changes in glucose concentration comparable to those observed in blood have a measurable effect on refractive index in cuvettes, and is moving towards equivalent measurements in the anterior chamber.

As it happens, the same techniques that we at Occuity use to measure lens and eyeball thickness can be used to measure the refractive index of the aqueous humour, which correlates with glucose concentration. Preliminary cuvette-based tests are close to being precise enough to measure glucose concentrations to the accuracy needed for diabetes management – non-invasively, without even touching the eye. This technique could transform the management of blood-glucose levels for people with diabetes, replacing the need for repetitive and painful finger pricks and implants with a simple scan of the eye.
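Conceptually, the measurement reduces to inverting a calibration between refractive index and glucose concentration. The sketch below assumes a simple linear model with placeholder numbers; the reference index, slope and concentrations are invented for illustration, not Occuity’s calibration.

```python
# Hypothetical linear calibration from aqueous-humour refractive index to glucose.
# All constants below are placeholders; the point is only that a small, repeatable
# index change maps to a concentration.
N_REF = 1.3335          # assumed refractive index at a reference glucose level
C_REF_MMOL = 5.0        # assumed reference glucose concentration (mmol/L)
SLOPE = 1.5e-5          # assumed index change per mmol/L (illustrative)

def glucose_from_index(n_measured: float) -> float:
    """Invert the assumed linear model n = N_REF + SLOPE * (c - C_REF_MMOL)."""
    return C_REF_MMOL + (n_measured - N_REF) / SLOPE

print(round(glucose_from_index(1.33365), 1))  # ~15.0 mmol/L under these assumptions
```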

Eye on the future

As Occuity’s instruments become widely available, the data that they generate will grow, and with AI-powered real-time data analysis, their predictive power and the range of diseases that can be detected will expand too. By making these data open-source and available to researchers, we can continuously expand the breadth of oculomics.

Oculomics has massive potential to transform disease-screening and diagnosis through a combination of AI and advanced instruments. However, there are still substantial challenges to overcome, including regulatory hurdles, issues with bias in AI, adoption into current healthcare pathways, and the cost of developing new medical instruments.

Despite these hurdles, the rewards of oculomics are too great to pass up. Opportunities such as diabetes screening and management, cardiovascular risk profiling and early detection of dementia offer massive health, social and economic benefits. Additionally, the ease with which ocular screening can take place removes major barriers to the uptake of screening.

With more than 35,000 eye exams being carried out in the UK almost every day, each one offers opportunities to catch and reverse pre-diabetes, to spot cardiovascular risk factors and propose lifestyle changes, or to identify and potentially slow the onset of neurodegenerative conditions. As oculomics grows, the window to health is getting brighter.

Satellites burning up in the atmosphere may deplete Earth’s ozone layer

The increasing deployment of extensive space-based infrastructure is predicted to triple the number of objects in low-Earth orbit over the next century. But at the end of their service life, decommissioned satellites burn up as they re-enter the atmosphere, triggering chemical reactions that deplete the Earth’s ozone layer.

Through new simulations, Joseph Wang and colleagues at the University of Southern California have shown how nanoparticles created by satellite pollution can catalyse chemical reactions between ozone and chlorine. If the problem isn’t addressed, they predict that the level of ozone depletion could grow significantly in the coming decades.

From weather forecasting to navigation, satellites are a vital element of many of the systems we’ve come to depend on. As demand for these services continues to grow, swarms of small satellites are being rolled out in mega-constellations such as Starlink. As a result, low-Earth orbit is becoming increasingly cluttered with manmade objects.

Once a satellite reaches the end of its operational lifetime, international guidelines suggest that it should re-enter the atmosphere within 25 years to minimize the risk of collisions with other satellites. Yet according to Wang’s team, re-entries from a growing number of satellites are a concerning source of pollution – and one that has rarely been considered so far.

As they burn up on re-entry, satellites can lose between 51% and 95% of their mass – and much of the vaporized material they leave behind will remain in the upper atmosphere for decades.

One particularly concerning component of this pollution is aluminium, which makes up close to a third of the mass of a typical satellite. When left in the upper atmosphere, aluminium will react with the surrounding oxygen, creating nanoparticles of aluminium oxide (Al2O3). Although this compound isn’t reactive itself, its nanoparticles have large surface areas and excellent thermal stability, making them extremely effective at catalysing reactions between ozone and chlorine.

For this ozone–chlorine reaction to occur, chlorine-containing compounds must first be converted into reactive species – which can’t happen without a catalyst. Typically, catalysts come in the form of tiny, solid particles found in stratospheric clouds, which provide surfaces for the chlorine activation reaction to occur. But with higher concentrations of Al2O3 nanoparticles in the upper atmosphere, the chlorine activation reaction can occur more readily – depleting the vital layer that protects Earth’s surface from damaging UV radiation.

Backwards progress

The ozone layer has gradually started to recover since the signing in 1987 of the Montreal Protocol – in which all UN member states agreed to phase out production of the substances primarily responsible for ozone depletion. With this new threat, however, Wang’s team predict that much of this progress could be reversed if the problem isn’t addressed soon.

In their study, reported in Geophysical Research Letters, the researchers assessed the potential impact of satellite-based pollution through molecular dynamics simulations, which allowed them to calculate the mass of ozone-depleting nanoparticles produced during satellite re-entry.

They discovered that a small 250 kg satellite can generate around 30 kg of Al2O3 nanoparticles. By extrapolating this figure, they estimated that in 2022 alone, around 17 metric tons of Al2O3 were generated by satellites re-entering the atmosphere. They also found that the nanoparticles may take up to 30 years to drift down from the mesosphere into the stratospheric ozone layer, introducing a noticeable delay between satellite decommissioning and eventual ozone depletion in the stratosphere.

Extrapolating their findings further, Wang’s team then considered the potential impact of future mega-constellation projects currently being planned. Altogether, they estimate that some 360 metric tons of Al2O3 nanoparticles could enter the upper atmosphere each year if these plans come to fruition.
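Dividing the quoted totals by the per-satellite yield gives the implied number of small-satellite re-entries – a rough consistency check rather than a figure taken from the paper:

```python
# Back-of-envelope check of the quoted figures (not a calculation from the paper).
ALUMINA_PER_SAT_KG = 30.0        # quoted yield for a ~250 kg satellite
total_2022_kg = 17_000.0         # quoted total for re-entries in 2022
future_annual_kg = 360_000.0     # quoted projection for planned mega-constellations

print(total_2022_kg / ALUMINA_PER_SAT_KG)     # ~570 small-satellite equivalents in 2022
print(future_annual_kg / ALUMINA_PER_SAT_KG)  # ~12,000 equivalents per year projected
```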

Although these estimates are still highly uncertain, the researchers’ discoveries clearly highlight the severity of the threat that decommissioned satellites pose for the ozone layer. If their warning is taken seriously, they hope that new strategies and international guidelines could eventually be established to minimize the impact of these ozone-depleting nanoparticles, ensuring that the ozone layer can continue to recover in the coming decades.

Shapeshifting organism uses ‘cellular origami’ to extend to 30 times its body length

For the first time, two researchers in the US have observed the intricate folding and unfolding of “cellular origami”. Through detailed observations, Eliott Flaum and Manu Prakash at Stanford University discovered helical pleats in the membrane of a single-celled protist, which enable the organism to reversibly extend to over 30 times its own body length. The duo now hopes that the mechanism could inspire a new generation of advanced micro-robots.

A key principle in biology is that a species’ ability to survive is intrinsically linked with the physical structure of its body. One group of organisms where this link is still poorly understood are protists: single-celled organisms that have evolved to thrive in almost every ecological niche on the planet.

Although this extreme adaptability is known to stem from the staggering variety of shapes, sizes and structures found in protist cells, researchers are still uncertain as to how these structures have contributed to their evolutionary success.

In their study, reported in Science, Flaum and Prakash investigated a particularly striking feature found in a protist named Lacrymaria olor. Measuring 40 µm in length, this shapeshifting organism hunts its prey by launching a neck-like feeding apparatus up to 1200 µm in less than 30 s. Afterwards, the protrusion retracts just as quickly: an action that can be repeated over 20,000 times throughout the cell’s lifetime.

Through a combination of high-resolution fluorescence and electron microscopy techniques, the duo found that this extension occurs through the folding and unfolding of an intricate helical structure in L. olor’s cytoskeleton membrane. These folds occur along bands of microtubule filaments embedded in the membrane, which group together to form accordion-like pleats.

Altogether, Flaum and Prakash found 15 of these pleats in L. olor’s membrane, which wrap around the cell in elegant helical ribs. The structure closely resembles “curved crease origami”, a subset of traditional origami in which folds follow complex curved paths instead of straight ones.

“When you store pleats on the helical angle in this way, you can store an infinite amount of material,” says Flaum in a press statement. “Biology has figured this out.”

“It is incredibly complex behaviour,” adds Prakash. “This is the first example of cellular origami. We’re thinking of calling it lacrygami.”

Perfection in projection

A further striking feature of L. olor’s folding mechanism is that the transition between its folded and unfolded states can happen thousands of times without making a single error: a feat that would be incredibly difficult to reproduce in any manmade mechanism with a similar level of intricacy.

To explore the transition in more detail, Flaum and Prakash investigated points of concentrated stress within the cell’s cytoskeleton. Named “topological singularities”, the positions of these points are intrinsically linked to the membrane’s helical geometry.

The duo discovered that L. olor’s transition is controlled by two types of singularity. The first of these is called a d-cone: a point where the cell’s surface develops a sharp, conical point due to the membrane bending and folding without stretching. Crucially, a d-cone can travel across the membrane in a neat line, and then return to its original position along the exact same path as the membrane folds and unfolds.

The second type of topological singularity is called a twist singularity, and occurs in the membrane’s microtubule filaments through their rotational deformation. Just like the d-cone, this singularity will travel along the filaments, then return to its original position as the cell folds and unfolds.

As Prakash explains, both singularities are key to understanding how L. olor’s transition is so consistent. “L. olor is bound by its geometry to fold and unfold in this particular way,” he says. “It unfolds and folds at this singularity every time, acting as a controller. This is the first time a geometric controller of behaviour has been described in a living cell.”

The researchers hope that their remarkable discovery could provide new inspiration for our own technology. By replicating L. olor’s cellular origami, it may be possible to design micro-scale machines whose movements are encoded into patterns of pleats and folds in their artificial membranes. If achieved, such structures could be suitable for a diverse range of applications: from miniature surgical robots to deployable habitats in space.

Dark matter’s secret identity: WIMPs or axions?

A former South Dakota gold mine is the last place you might think to look to solve one of the universe’s biggest mysteries. Yet what lies buried in the Sanford Underground Research Facility, 1.47 km beneath the surface, could be our best chance of detecting the ghost of the galaxy: dark matter.

Deep within those old mine tunnels, accessible only by a shaft from the surface, is seven tonnes of liquid xenon, sitting perfectly still (figure 1).

This is the LUX-ZEPLIN (LZ) experiment. It’s looking for the tiny signatures that dark matter is predicted to leave in its wake as it passes through the Earth. To have any chance of success, LZ needs to be one of the most sensitive experiments on the planet.

“The centre of LZ, in terms of things happening, is the quietest place on Earth,” says Chamkaur Ghag, a physicist from University College London in the UK, and spokesperson for the LZ collaboration. “It is the environment in which to look for the rarest of interactions.”

For more than 50 years astronomers have puzzled over the nature of the extra gravitation first observed in galaxies by Vera Rubin, assisted by Kent Ford, who noticed stars orbiting galaxies under the influence of more gravity than could be accounted for by visible matter. (In the 1930s Fritz Zwicky had noticed a similar phenomenon in the movement of galaxies in the Coma Cluster.)

Most (though not all – see part one of this series “Cosmic combat: delving into the battle between dark matter and modified gravity“) scientists believe this extra mass to be dark matter. “We see these unusual gravitational effects, and the simplest explanation for that, and one that seems self-consistent so far, is that it’s dark matter,” says Richard Massey, an astrophysicist from Durham University in the UK.

The standard model of cosmology tells us that about 27% of all the matter and energy in the universe is dark matter, but no-one knows what it actually is. One possibility is a hypothetical breed of particle called a weakly interacting massive particle (WIMP), and it is these particles that LZ is hoping to find. WIMPs are massive enough to produce a substantial gravitational field, but they otherwise only gently interact with normal matter via the weak force.

“The easiest explanation to solve dark matter would be a fundamental particle that interacts like a WIMP,” says Ghag. Should LZ fail in its mission, however, there are other competing hypotheses. One in particular that is lurking in the wings is a lightweight competitor called the axion.

Experiments are under way to pin down this vast, elusive portion of the cosmos. With more questions than answers, the search for dark matter is heading for a showdown.

Going deep underground

According to theory, as our solar system cruises through space we’re moving through a thin fog of dark matter. Most of the dark-matter particles, being weakly interacting, would pass through Earth, but now and then a WIMP might interact with a regular atom.

This is what LZ is hoping to detect, and the seven tonnes of liquid xenon are designed to be a perfect WIMP trap. The challenge the experiment faces is that even if a WIMP were to interact with a xenon atom, it has to be differentiated from the other particles and radiation, such as gamma rays, that could enter the liquid.

1 Buried treasure

The LUX-ZEPLIN (LZ) experiment
(Courtesy: Matthew Kapust, Sanford Underground Research Facility)

The seven-tonne tank of liquid xenon that comprises the LZ detector. The experiment is located almost a mile beneath the Earth to reduce background effects, which astronomers hope will enable them to identify weakly interacting massive particles (WIMPs).

Both a gamma ray and a WIMP can create a cloud of ionized free electrons inside the detector, and in both cases, when the ionized electrons recombine with the xenon atoms, they emit flashes of light. But both mechanisms are slightly different, and LZ is designed to detect the unique signature of a WIMP interaction.

When a gamma ray enters the detector it can interact with an electron in the xenon, which flies off and causes a chain of ionizations by interacting with other neighbouring electrons. The heavy WIMP, however, collides with the xenon nucleus, sending it spinning through the liquid, bumping into other nuclei, and indirectly ionizing a few atoms along the way.

To differentiate these two events, an electric field – generated by applying a potential of a few tens of kilovolts across the xenon tank – draws some of the ionized electrons toward the top of the tank before they can recombine. When these electrons reach the top, they enter a thin layer of gas and produce a second burst of light.

When a gamma ray enters the tank, the second flash is brighter than the first – the recoil electron flies off like a bullet, and most of the electrons it liberates are pulled up by the detector before they recombine.

A nucleus is much heavier than an electron, so when a WIMP interacts with the xenon, the path of the recoil is shorter. The cloud of electrons generated by the interaction is therefore localized to a smaller area and more of the electrons find a “partner” ion to recombine with before the electric field can pull them away. This means that for a WIMP, the first flash is brighter than the second.

In practice, there is a range of brightnesses depending upon the energies of the particles, but statistically an excess of brighter first flashes above a certain background level would be a strong signature of WIMPs.
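In code, the flash-ratio logic boils down to something like the toy classifier below. In xenon detectors the prompt flash is conventionally called S1 and the delayed, gas-phase flash S2; the threshold used here is purely illustrative, and the real analysis works with calibrated bands in S1–S2 space rather than a single cut.

```python
# Toy event classifier based on the flash-ratio logic described above.
# The threshold is illustrative only; real analyses use calibrated S1-S2 bands.
def classify_event(s1: float, s2: float, threshold: float = 1.0) -> str:
    """Crudely label an event from the ratio of prompt (S1) to delayed (S2) light."""
    if s2 <= 0:
        return "invalid"
    return ("nuclear-recoil-like (WIMP candidate)" if s1 / s2 > threshold
            else "electron-recoil-like (background)")

print(classify_event(s1=120.0, s2=80.0))    # brighter first flash -> WIMP-like
print(classify_event(s1=40.0, s2=200.0))    # brighter second flash -> background
```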

“Looking for dark matter experimentally is about understanding your backgrounds perfectly,” explains Ghag. “Any excess or hint of a signal above our expected background model – that’s what we’re going to use to ascribe statistical significance.”

LZ has been up and running since late 2021 and has completed about 5% of its search. Before it could begin its hunt, the project had to endure a five-year process to screen every component of the detector, to make sure that the background effects of every nut, bolt and washer have been accounted for.

WIMPs in crisis?

How many, if any, WIMPs are detected will inform physicists about the interaction cross-section of the dark-matter particle – meaning how likely it is to interact with normal matter it comes into proximity with.

The timing couldn’t be more crucial. Some of the more popular WIMP candidates are predicted by a theory called “supersymmetry”, which posits that every particle in the Standard Model has a more massive “superpartner” with a different quantum spin. Some of these superpartners were candidates for WIMPs but the Large Hadron Collider (LHC) has failed to detect them, throwing the field – and the hypothetical WIMPs associated with them – into crisis.

Francesca Chadha-Day, a physicist who works at Durham University and who studies dark-matter candidates based on astrophysical observations, thinks time may be up for supersymmetry. “The standard supersymmetric paradigm hasn’t materialized, and I think it might be in trouble,” she says.

She does, however, stress that supersymmetry is “only one source of WIMPs”. Supersymmetry was proposed to explain certain problems in physics, such as why gravity is more feeble than the weak force. Even if supersymmetry is a dead end, there are alternative theories to solve these problems that also predict the existence of particles that could be WIMPs.

“It’s way too early to give up on WIMPs,” adds Ghag. LZ needs to run for at least 1000 days to reach its full sensitivity and he says that ruling out WIMPs now would be “like building the LHC but stopping before turning it on”.

The axion universe

With question marks nevertheless hanging over WIMPs, an alternative type of dark-matter particle has been making waves.

Dubbed axions, Chadha-Day describes them as “dark matter for free”, because they were developed to solve an entirely different problem.

“There’s this big mystery in particle physics that we call the Strong CP Problem,” says Chadha-Day. C refers to charge and P to parity. CP symmetry says that if you switch a particle for its oppositely charged antiparticle and swap it for its spatial mirror image, the laws of physics should still function the same.

The Standard Model predicts that the strong force, which glues quarks together inside protons and neutrons, should actually violate CP symmetry. Yet in practice, it plays ball with the conservation of charge and parity. Something is intervening and interacting with the strong force to maintain symmetry. This something is proposed to be the axion.

“The axion is by far the most popular way of solving the Strong CP Problem because it is the simplest,” says Chadha-Day. “And then when you look at the properties of the axion you also find that it can act as dark matter.”

These properties include rarely interacting with other particles and sometimes being non-relativistic, meaning that some axions would move slowly enough to clump into haloes around galaxies and galaxy clusters, which would account for their additional mass. Like WIMPs, however, axions have yet to be detected.

Supersymmetry’s difficulties have seen a recent boom in support for axions as dark matter. “There are strong motivations for axions,” says Ghag, “Because they could exist even if they are not dark matter.”

Lensing patterns

Axions are predicted to be lighter than WIMPs and to interact with matter via the electromagnetic force (and gravity) rather than the weak force. Experiments to directly detect axions use magnetic fields, because in their presence an axion can transform into a photon. However, because axions might exist even if they aren’t dark matter, to test them against WIMPs, physicists have to take a different approach.

The extra mass from dark matter around galaxies and galaxy clusters can bend the path of light coming from more distant objects, magnifying them and warping their appearance, sometimes even producing multiple images (figure 2). The shape and degree of this effect, called “gravitational lensing”, is impacted by the distribution of dark matter in the lensing galaxies. WIMPs and axions are predicted to distribute themselves slightly differently, so gravitational lensing can put the competing theories to the test.
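For scale, the sketch below estimates the Einstein radius of a galaxy-sized lens in the simplest point-mass approximation. The mass and distances are round illustrative numbers, and the calculation ignores cosmological subtleties in how the distances are defined; it only shows that galaxy-scale lenses bend light by arcseconds, the scale on which these lensing features appear.

```python
import math

# Illustrative Einstein-radius estimate for a galaxy-scale lens (point-mass
# approximation; mass and distances are round numbers chosen for illustration).
G = 6.674e-11            # m^3 kg^-1 s^-2
C = 2.998e8              # m/s
M_SUN = 1.989e30         # kg
GPC = 3.086e25           # m

M = 1e12 * M_SUN         # assumed lens mass
D_l, D_s = 1 * GPC, 2 * GPC
D_ls = D_s - D_l         # crude stand-in for the lens-source distance

theta_e = math.sqrt(4 * G * M / C**2 * D_ls / (D_l * D_s))   # radians
print(f"Einstein radius ~ {theta_e * 206265:.1f} arcsec")     # ~2 arcsec
```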

2 Seeing quadruple

Lensing effects around six astronomical objects
(Courtesy: NASA, ESA, A Nierenberg (JPL) and T Treu (UCLA))

Galaxies and galaxy clusters can bend the light coming from bright background objects such as quasars, creating magnified images. If the lensing effect is strong, as in these images, we may even observe multiple images of a single quasar. The top right image shows quasar HS 0810+2554 (see figure 4).

If dark matter is WIMPs, then they will form a dense clump at the centre of a galaxy, smoothly dispersing with increasing distance. Axions, however, operate differently. “Because axions are so light, quantum effects become more important,” says Chadha-Day.

These effects should show up on large scales – the axion halo around a galaxy is predicted to exhibit long-range quantum interference patterns, with the density fluctuating in peaks and troughs thousands of light-years across.

Gravitational lensing could potentially be used to reveal these patterns, using something called the “critical curve”. Think of a gravitational lens as a series of lines where space has been warped by matter, like on a map where the contour lines indicate height. The critical curve is where the contours bunch up the most (figure 3).

3 Cosmic cartography

Gravitational lensing around the Abell 1689 galaxy cluster
(CC BY Astron. Astrophys. Rev. 19 47)

Gravitational lensing around the Abell 1689 galaxy cluster. Red lines indicate the critical curve where magnification is infinite and yellow contours indicate the regions of the sky where objects are magnified by more than a factor of 10.

Critical curves “are lines of sight in the universe where you get enormous magnification in gravitational lensing, and they have different patterns depending on whether dark matter is WIMPs or axions”, says Massey. With axions, the quantum interference pattern can render the critical curve wavy.

In 2023 a team led by Alfred Amruth of the University of Hong Kong found some evidence of wavy effects in the critical curve. They studied the quasar HS 0810+2554 – the incredibly luminous core of a distant galaxy that is being gravitationally lensed (we can see four images of it from Earth) by a foreground object. They found that the lensing pattern could be better explained by axions than WIMPs (figure 4), though because they only studied one system, this is far from a slam dunk for axions.

Dark-matter interactions

Massey prefers not to tie himself to any one particular model of dark matter, instead opting to take a phenomenological approach. “I look to test whether dark-matter particles can interact with other dark-matter particles,” he says. Measuring how much dark matter interacts with itself (another kind of cross section) can be used to narrow down its properties.

4 Making waves

Comparison of the shapes of gravitational lenses from four models of dark matter
(First published in Amruth et al. 2023 Nature Astron. 7 736. Reprinted with permission from Springer Nature.)

The shape of a gravitational lens would change depending on whether dark matter is WIMPs or axions. Alfred Amruth and colleagues developed a model of the gravitational lensing of quasar HS 0810+2554 (see figure 2). Light from the quasar is bent around a foreground galaxy, and the shape of the gravitational lensing depends on the properties of the dark matter in the galaxy. The researchers tested models of both WIMP-like and axion-like dark matter.

The colours indicate the amount of magnification, with the light blue lines representing the critical curves of high magnification. Part a shows a model of WIMP-like dark matter, whereas b, c and d show different models of axionic dark matter. Whereas the WIMP-like critical curve is smooth, the interference between the wavelike axion particles makes the critical curve wavy.

The best natural laboratories in which to study dark matter interacting with itself are galaxy cluster collisions, where vast quantities of matter and, theoretically, dark matter collide. If dark-matter halos are interacting with each other in cluster collisions, then they will slow down, but how do you measure this when the objects in question are invisible?

“This is where the bits of ordinary matter are actually useful,” says Massey. Cluster collisions contain both galaxies and clouds of intra-cluster hydrogen. Using gravitational lensing, scientists can work out where the dark matter is in relation to these other cosmic objects, which can be used to work out how much it is interacting.

The galaxies in clusters are so widely spaced that they sail past each other during the collision. By contrast, intra-cluster hydrogen gas clouds are so vast that they can’t avoid each other, and so they don’t move very far. If the dark matter doesn’t interact with itself, it should be found alongside the galaxies. If the interaction is strong, however, it will be located with the hydrogen clouds. If it interacts just a little, the dark matter will end up somewhere in between. Its location can therefore be used to estimate the interaction cross-section, and this value can be handed to theorists to test which dark-matter model best fits the bill.
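A rough way to see how this location pins down the self-interaction – a back-of-the-envelope argument, not the collaboration’s full analysis – is through the scattering “optical depth” of a dark-matter particle crossing the opposing halo,

$$\tau \;\simeq\; \frac{\sigma}{m}\,\Sigma_{\rm DM},$$

where σ/m is the self-interaction cross-section per unit mass and Σ_DM is the surface mass density of dark matter traversed during the collision. If τ is much less than one, most particles pass through untouched and the dark matter stays with the galaxies; as τ approaches one, it is dragged back towards the hydrogen gas.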

High-altitude astronomy

The problem is that cluster collisions can take a hundred million years to run their course. What’s needed is to see galaxy cluster collisions at all stages, with different velocities, from different angles.

Enter SuperBIT – the Super Balloon-borne Imaging Telescope, on which Massey is the UK principal investigator. Reaching 40 km into the atmosphere while swinging beneath a super-pressure balloon provided by NASA, SuperBIT was a half-metre aperture telescope designed to map dark matter in as many galaxy-cluster collisions as possible to piece together the stages of such a collision.

SuperBIT flew five times, embarking on its first test flight in September 2015 (figure 5). “We would bring it back down, tinker with it, improve it and send it back up again, and by the time of the final flight it was working really well,” says Massey.

5 Far from home

Photo of the Earth taken from the SuperBIT telescope
(CC BY-SA Javier Romualdez/SuperBIT)

The SuperBIT telescope took gravitational lensing measurements of cluster collisions to narrow down the properties of dark matter. This photo of the Earth was taken from SuperBIT during one of its five flights.

That final flight took place during April and May 2023, launching from New Zealand and journeying around the Earth five and a half times. The telescope parachuted to its landing site in Argentina, but while it touched down well enough, the release mechanism had frozen in the stratosphere and the parachute did not detach. Instead, the wind caught it and dragged SuperBIT across the landscape.

“It went from being aligned to within microns to being aligned within kilometres! The whole thing was just a big pile of mirrors and metal, gyroscopes and hard drives strewn across Argentina, and it was heart-breaking,” says Massey, who laughs about it now. Fortunately, the telescope had worked brilliantly and all the data had been downloaded to a remote drive before catastrophe struck.

The SuperBIT team is working through that data now. If there is any evidence that dark-matter particles have collided, the resulting estimate of the interaction cross-section will point to specific theoretical models and rule out others.

Astronomical observations can guide us, but only a positive detection of a dark-matter particle in an experiment such as LZ will settle the matter. As long as a detection remains elusive, the identity of dark matter will continue to be a sore point for astronomers and physicists. It also keeps the door ajar for alternative theories, and proponents of modified Newtonian dynamics (MOND) are already trying to exploit those cracks, as we shall see in the third and final part of this series.

  • In the first instalment of this three-part series, Keith Cooper explored the struggles and successes of modified gravity in explaining phenomena at varying galactic scales

The post Dark matter’s secret identity: WIMPs or axions? appeared first on Physics World.

Waffle-shaped solar evaporator delivers durable desalination

Par : No Author

Water is a vital resource for society and is one of the main focus areas of the United Nations Sustainable Development Goals. However, around two thirds of the world’s population still lacks regular access to freshwater, facing water scarcity for at least a month each year.

In addition, a child dies every two minutes from water-, sanitation- and hygiene-related diseases, and freshwater sources are becoming ever more polluted, placing further stress on water supplies. With so many water-related challenges around the world, new ways of producing freshwater are being sought. In particular, solar steam-based desalination is seen as a green way of producing potable water from seawater.

Solar steam generation a promising approach

There are various water treatment technologies available today, but one that has gathered a lot of attention lately is solar steam generation. Interfacial solar absorbers convert solar energy into heat to remove the salt from seawater and produce freshwater. By localizing the absorbed energy at the surface, interfacial solar absorbers reduce heat loss to bulk water.

Importantly, solar absorbers can be used off-grid and in remote regions, where potable water access is the most unreliable. However, many of these technologies cannot yet be made at scale because of salt crystallization on the solar absorber, which reduces both the light absorption and the surface area of the interface. Over time, the solar absorption capabilities become reduced and the supply of water becomes obstructed.

Quasi-waffle design could prevent crystallization

To combat the salt crystallization challenge, researchers in China have developed a waffle-shaped solar evaporator (WSE). The WSE is made of a graphene-like porous monolith, fabricated via a zinc-assisted pyrolysis route using biomass and recyclable zinc as the precursor materials.

First authors Yanjun Wang and Tianqi Wei from Nanjing University and their colleagues designed the WSE with a basin and ribs, plus extra sidewalls (which conventional plane-shaped solar evaporators don’t have) to drive the Marangoni effect in the device. The Marangoni effect is the flow of fluid from regions of low surface tension to regions of high surface tension. It can be induced by gradients in either solute concentration or temperature – and the WSE’s extra sidewalls trigger both effects.

Schematic of waffle-shaped solar evaporator
WSE schematic Brine evaporation in a conventional plane evaporator (A) and a WSE (B); white spots denote salt concentration and blue arrows indicate salt transport. The extra sidewalls in the WSE induce Marangoni flows (red and brown arrows). (C) The interfacial solar steam generation device. (Courtesy: Y Wang et al Sci. Adv. 10.1126/sciadv.adk1113)

When the saltwater evaporates, the faster evaporation and more efficient heat consumption on the plateaus than in the basins create gradients in solute concentration and temperature. Based on these gradients, the sidewalls then generate a surface-tension gradient, which induces solute- and temperature-driven Marangoni flows in the same direction.

The two Marangoni effects increase the convection of fluid in the device, accelerating the transport of salt ions and diluting the maximum salinity of the system below the critical saturation value – therefore preventing salt crystallization from occurring. This leads to continuous salt rejection with reduced fouling at the interface.
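The driving force behind these flows can be written compactly in the standard textbook form (not a formula quoted from the paper): the tangential Marangoni stress at the liquid surface is the surface gradient of the surface tension σ, with separate thermal and solutal contributions,

$$\tau_{\rm M} \;=\; \nabla_{\!s}\,\sigma \;=\; \frac{\partial\sigma}{\partial T}\,\nabla_{\!s} T \;+\; \frac{\partial\sigma}{\partial c}\,\nabla_{\!s} c,$$

so the temperature and salinity gradients set up between the plateaus and basins both pull liquid along the surface, and in the WSE geometry the two contributions act in the same direction.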

The WSE delivers a solar absorption of 98.5% and high evaporation rates of 1.43 kg/m²/h in pure water and 1.40 kg/m²/h in seawater. In an outdoor experiment using a prototype WSE to treat a brine solution, the device produced freshwater at up to 2.81 l/m² per day and exhibited continuous operation for 60 days without requiring cleaning.
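Those laboratory rates sit close to the theoretical ceiling for a single-stage evaporator under one-sun illumination, which can be estimated from the latent heat of vaporization. A quick sanity check, assuming a nominal 1 kW/m² solar flux and a latent heat of about 2.45 MJ/kg (standard values, not figures from the paper):

```python
# Rough single-stage, one-sun evaporation limit: assume all of the absorbed
# solar power goes into the latent heat of vaporization of water.
solar_flux = 1000.0     # W/m^2, nominal "one sun" (assumed)
latent_heat = 2.45e6    # J/kg, latent heat of water near ambient temperature (assumed)

limit = solar_flux / latent_heat * 3600   # kg evaporated per m^2 per hour
print(f"single-stage limit ≈ {limit:.2f} kg/m²/h")
# ≈ 1.47 kg/m²/h, so the WSE's 1.40-1.43 kg/m²/h is already close to this
# ceiling - which is why the researchers look to multistage designs for further gains.
```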

The WSE’s ability to alleviate the salt crystallization issues, combined with its cost-efficiency, means that the device could theoretically be commercially scalable in the future.

Overall, the WSE overcomes the three main obstacles faced when designing solar desalination devices: efficient water evaporation and condensation, and preventing salt fouling. While the device achieved a high desalination stability (evident from the long cleaning cycles), the evaporation rate is currently restricted by the upper limits of a single-stage evaporator. The researchers point out that introducing a multistage evaporator to the system could help improve the solar-to-water efficiency and the freshwater yield of the device. They are now designing such a multistage evaporator to further their current research.

The findings are reported in Science Advances.

The post Waffle-shaped solar evaporator delivers durable desalination appeared first on Physics World.

Why optics is critical to meeting society’s grand challenges

Par : No Author

Over the last century, optics and photonics have transformed the world. A staggering 97% of all intercontinental communications traffic travels down optical fibres, enabling around $10 trillion of business transactions daily across the globe. Young people especially are at the heart of some of the most dramatic changes in optical technologies the world has ever witnessed.

Whether it’s a growing demand for higher data rates, larger cloud storage and cleaner energy supplies – or simply questions around content and self-censorship – communications networks, based on optics and photonics, are a crucial aspect of modern life. Even our knowledge of the impact of climate change comes mostly from complex optical instruments that are carried by satellites including spectrometers, narrow linewidth lasers and sophisticated detectors. They provide information that can be used to model key aspects of the Earth’s atmosphere, landforms and oceans.

Optics and photonics can also help us to monitor the behaviour of earthquakes and volcanoes – both terrestrial and underwater – and the risk and impact of tsunamis on coastal populations. The latter requires effective modelling together with satellite and ground-based observations.

Recent developments in optical quantum technologies are also beginning to bear fruit in areas such as high-resolution gravimetry. This allows tiny changes in subsurface mass distributions to be detected by measuring spatial variations in gravity – and, with them, the movement of magma, helping to predict volcanic activity.

The challenge ahead

The UK-based Photonics Leadership Group (PLG) estimates that by 2035 more than 60% of the UK economy will directly depend on photonics to keep it competitive, making photonics one of the top three UK economic sectors. The PLG projects that the UK photonics industry will grow from £14.5bn today to £50bn over that period. The next 25 years are likely to see further significant advances in photonics and integrated circuits, breakthroughs in far-infrared detectors, and progress in free-space optical communication and quantum optical technologies.

There are likely to be breakthroughs in bandgap engineering in compound-semiconductor alloy technologies that will let us easily make and operate room-temperature very-long-wavelength infrared detectors and imaging devices. This could boost diagnostic medical imaging for management of pain, cancer detection and neurodiagnostics.

The joint effort between photonics and compound-semiconductor materials science will become a significant capability in a sustainable 21st century and beyond. Defence and security are also likely to benefit from long-range spectroscopic identification of trace molecules. Optics and photonics will dominate space, with quantum technologies coming into service for communications and environmental monitoring, even if the proliferation of objects in low Earth orbit is likely to cause congestion and hamper direct line-of-sight communications and monitoring.

Such developments, however, don’t come without their challenges, especially when adapting to the pace of change. Optics has a long history in the UK and the evolving nature of the subject is similar to that faced over a century ago by the Optical Society, Physical Society and the Institute of Physics (see box below).

Education will be key, as will making undergraduate courses attractive and striking a good balance of optics, photonics and fundamental physics in the curriculum. Making sure that students get experience in photonics engineering labs that reflect practical on-the-job tasks will be crucial, as will close partnerships with the photonics industry and professional societies to align course content with the needs of employers.

Postgraduate photonics research in the UK remains strong, but we cannot rest on our laurels and it must be improved further, if not expanded.

Another challenge will be tackling the gap in optics and photonics between low-income and high-income nations. The issues include access to optics and photonics education, research collaborations and mentoring, as well as the need to equip developing nations with the expertise to tackle global problems such as desertification, climate change and the supply of potable water.

Desertification exacerbates economic, environmental and social issues and is entwined with poverty. According to the United Nations Convention to Combat Desertification, 3.2 billion people worldwide are negatively affected by spreading deserts. The International Commission for Optics is working with the International Science Council to tackle this by offering educational development, improving access to optical technologies and international collaborations with an emphasis on low-income countries.

If I had a crystal ball, I would say that over the next 25 years global economies will depend even more on optics and photonics for their survival, underpinning tighter social, economic and physical networks driven by artificial intelligence and quantum-communication technologies. Optical societies as professional bodies must play a leading role in addressing and communicating these issues head on. After all, only they can pull together like-minded professionals and speak with one voice to the needs and challenges of society.

Why the Optical Group of the Institute of Physics is the UK’s optical society

The Optical Group of the Institute of Physics, which is celebrating its 125th anniversary this year, can trace its roots back to 1899 when the Optical Society of London was formed by a group of enthusiastic optical physicists, led by Charles Parsons and Frank Twyman. Until 1931 it published a journal – Transactions of the Optical Society – which attracted several high-profile physicists including George Paget Thomson and Chandrasekhara Raman.

Many activities of the Optical Society overlapped with those of the Physical Society of London and they held several joint annual exhibitions at Imperial College London. When the two organizations formally merged in 1932, the Optical Group of the Physical Society became the de facto national optical society of the UK and Ireland.

In 1947 the Physical Society – via the Optical Group – became a signatory to the formation of the International Commission for Optics, which is now made up of more than 60 countries and provides policy recommendations and co-ordinates international activities in optics. The Optical Group is also a member of the European Optical Society.

In 1960 the Physical Society merged with the Institute of Physics (IOP), and today, the Optical Group of the IOP, of which I am currently chair, has a membership above 2100. The group represents UK and Irish optics, organizes conferences, funds public engagement projects and supports early-career researchers.

The post Why optics is critical to meeting society’s grand challenges appeared first on Physics World.

Liquid crystals generate entangled photon pairs

Par : No Author
Diagram showing a beam of laser light impinging on a liquid crystal and producing a pair of entangled photons
Highly adaptable entanglement: The new technique makes it possible to alter both the flux and the polarization state of the photon pairs simply by changing the orientation of the molecules in the liquid crystal. This can be done either by engineering the sample geometry or applying an electric field. (Courtesy: Adapted from Sultanov, V., Kavčič, A., Kokkinakis, E. et al. Tunable entangled photon-pair generation in a liquid crystal. Nature (2024). https://doi.org/10.1038/s41586-024-07543-5)

Researchers in Germany and Slovenia have found a new, more adaptable way of generating entangled photons for quantum physics applications. The technique, which relies on liquid crystals rather than solid ones, is much more tunable and reconfigurable than today’s methods, and could prove useful in applications such as quantum sensing.

The usual way of generating entangled photon pairs is in a crystal such as lithium niobate that exhibits a nonlinear polarization response to an applied electric field. When a laser beam enters such a crystal, most of the photons pass straight through. A small fraction, however, are converted into pairs of entangled photons via a process known as spontaneous parametric down-conversion (SPDC). Because energy and momentum are conserved, the combined energy and momentum of each entangled pair must equal those of the pump photon that created it.
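In down-conversion a single pump photon splits into a “signal” and an “idler” photon, and the conservation rules take a simple form:

$$\hbar\omega_{\rm p} \;=\; \hbar\omega_{\rm s} + \hbar\omega_{\rm i}, \qquad \hbar\mathbf{k}_{\rm p} \;=\; \hbar\mathbf{k}_{\rm s} + \hbar\mathbf{k}_{\rm i},$$

where the subscripts label the pump, signal and idler photons. The first (energy) condition is why the two photons’ frequencies are anti-correlated; the second (momentum, or phase-matching) condition constrains the directions in which the pair can emerge.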

This method is both cumbersome and inflexible, explains team leader Maria Chekhova. “First they grow a crystal, then they cut it in a certain way, and after it’s cut it can only be used in one way,” says Chekhova, an optical physicist at the Friedrich-Alexander Universität Erlangen-Nürnberg and the Max-Planck Institute for the Science of Light, both in Germany. “You cannot generate pairs at one wavelength with one sort of entanglement and then use it in a different way to generate pairs at a different wavelength with a different polarization entanglement. It’s just one rigid source.”

In the new work, Chekhova, Matjaž Humar of the Jožef Stefan Institute in Slovenia and colleagues developed an SPDC technique that instead uses liquid crystals. These self-assembling, elongated molecules are easy to reconfigure with electric fields (as evidenced by their widespread use in optical displays) and some types exhibit highly nonlinear optical effects. For this reason, Noel Clark of the University of Colorado at Boulder, US, observes that “liquid crystals have been in the nonlinear optics business for quite a long time, mostly doing things like second harmonic generation and four-wave mixing”.

Generating and modifying entanglement

Nobody, however, had used them to generate entanglement before. For this, Chekhova, Humar and colleagues turned to the recently developed ferroelectric nematic type of liquid crystals. After preparing multiple 7-8 μm-thick layers of these crystals, they placed them between two electrodes with a predefined twist of either zero, 90° or 180° between the molecules at either end.

When they irradiated these layers with laser light at 685 nm, the photons underwent SPDC with an efficiency almost as high as that of the most commonly used solid crystals of the same thickness. What is more, although individual photons in a pair are always entangled in the time/frequency domain – meaning that their frequencies must be anti-correlated to ensure conservation of energy – the technique produces photons with a broad range of frequencies overall. The team believes this widens its applications: “There are ways to concentrate the emission around a narrow bandwidth,” Chekhova says. “It’s more difficult to create a broadband source.”

The researchers also demonstrated that they could modify the nature of the entanglement between the photons. Although the photons’ polarizations are not normally entangled, applying a voltage across the liquid crystal is enough to make them so. By varying the voltage on the electrodes and the twist on the molecules’ orientations, the researchers could even control the extent of this entanglement — something they confirmed by measuring the degree of entanglement at one voltage and twist setting and noting that it was in line with theoretical predictions.

Potential extensions

The researchers are now exploring several extensions to the work. According to their calculations, it should be possible to use liquid crystals to produce non-classical “squeezed” states of light, in which the uncertainty in one variable drops below the standard quantum limit at the expense of the other.  “We just need higher efficiency,” Chekhova says.

Another possibility would be to manipulate the chirality within the crystal layers with an applied voltage. The team also seeks to develop practical devices: “Pixellated devices could produce photon pairs in which each part of the beam had its own polarization,” Chekhova says. “You could then produce structured light and encode quantum information into the structure of the beam.” This could be useful in sensing, she adds.

“This liquid crystal device has a lot of potential flexibility that would never have been available in crystalline materials,” says Clark, who was not involved in the research. “If you want to change something in a [solid] crystal, then you tweak something, you have to re-grow the crystal and evaluate what you have. But in this liquid crystal, you can mix things in, you can put electric field on and change the orientation.”

The research is published in Nature.

The post Liquid crystals generate entangled photon pairs appeared first on Physics World.

Speed of sound in quark–gluon plasma is measured at CERN

Par : No Author

The speed of sound in a quark–gluon plasma has been measured by observing high-energy collisions between lead nuclei at CERN’s Large Hadron Collider. The work, by the CMS Collaboration, provides a highly precise test of lattice quantum chromodynamics (QCD), and could potentially inform neutron star physics.

The strong interaction – which binds quarks together inside hadrons – is the strongest force in the universe. Unlike the other forces, which become weaker as particles become further apart, its strength grows with increasing separation. What is more, when quarks gain enough energy to move apart, the space between them is filled with quark–antiquark pairs, making the physics ever-more complex as energies rise.

In the interior of a proton or neutron, the quarks and gluons (the particles that mediate the strong interaction) are very close together and effectively neutralize one another’s colour charge, leaving just a small perturbation that accounts for the residual strong interaction between protons and neutrons. At very high energies, however, the particles become deconfined, forming a hot, dense and yet almost viscosity-free fluid of quarks and gluons, all strongly interacting with one another. Perturbative calculations break down for this quark–gluon plasma, so other techniques are needed; the standard approach is lattice QCD.

Speed of sound is key

To check whether the predictions of lattice QCD are correct, the speed of sound is key. “The specific properties of quark–gluon plasma correspond to a specific value of how fast sound will propagate,” says CMS member Wei Li of Rice University in Texas. He says indirect measurements have provided constraints in the past, but the value has never been measured directly.

In the new work, the CMS researchers collided heavy ions of lead instead of protons because – like cannonballs compared with bullets – these are easier to accelerate to high energies and momenta. The CMS detector monitored the particles emitted in the collisions, using a two-stage detection system to determine what type of collisions had occurred and what particles had been produced in the collisions.

“We pick the collisions that were almost exactly head-on,” explains Li, “Those types of collisions are rare.” The energy is deposited into the plasma, heating it and leading to the creation of particles. The researchers monitored the energies and momenta of the particles emitted from different collisions to reconstruct the energy density of the plasma immediately after each collision. “We look at the variations between the different groups of events,” he explains. “The temperature of the plasma is tracked based on the energies of the particles that are coming out, because it’s a thermal source that emits particles.”

In this way, the researchers were able to measure the speed at which heat – and therefore energy density – flowed through the plasma. Under these extreme conditions, this is identical to the speed of sound, i.e. the rate at which pressure disturbances travel. “In relativity, particle number is not conserved,” says Li. “You can turn particles into energy and energy into particles. But energy is conserved, so we always talk about total energy density.”
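In this relativistic setting the squared speed of sound is defined by how the pressure p of the plasma changes with its energy density ε,

$$c_s^{2} \;=\; \left(\frac{\partial p}{\partial \varepsilon}\right)_{\!s},$$

evaluated at constant entropy and expressed in natural units with c = 1. For an ideal gas of massless quarks and gluons, p = ε/3 and so c_s² approaches the conformal value of 1/3 at very high temperatures; lattice QCD predicts how it falls below that value as the plasma cools.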

Even more stringent tests

The team’s findings matched the predictions of lattice QCD and the researchers would now like to conduct even more stringent tests. “We have extracted the speed of sound at one specific temperature,” says Li. “Whereas lattice QCD has predicted how the speed of sound goes with temperature as a continuous function. In principle, a more convincing case would be to measure at multiple temperatures and have them come out all agreeing with the lattice QCD prediction.” One remarkable prediction of lattice QCD is that, as the quark–gluon plasma cools towards the transition to ordinary hadronic matter, the sound speed reaches a minimum before increasing again as the temperature drops further and the quarks become bound into hadrons. “It would be remarkable if we could observe that,” he says.

The research is described in a paper in Reports on Progress in Physics.

“I think it’s a good paper,” says nuclear theorist Larry McLerran of the University of Washington in Seattle – who is not a CMS member. He believes its most interesting aspect, however, is not what it shows about the theory being tested but what it demonstrates about the techniques used to test it. “The issue of sound velocity is interesting,” he says. “They have a way of calculating it – actually two ways of calculating it, one of which is kind of hand waving, but then it’s backed up with detailed simulation – and it agrees with lattice gauge theory calculations.”

McLerran is also interested in the potential to study heavy-ion collisions at low energies, and hopes these might give clues about the cold, dense matter in neutron stars. “In heavy ion collisions, you can calculate the sound velocity squared as a function of density using numerical methods, whereas these numerical methods don’t work at high density and low temperature, which is the limiting case for neutron stars. So being able to measure a simple bulk property of the matter and do it well is important.”

The post Speed of sound in quark–gluon plasma is measured at CERN appeared first on Physics World.

Scientists uncover hidden properties of rare-earth element promethium

Par : No Author

For the first time, researchers have experimentally examined the chemistry of the lanthanide element promethium. The investigation was carried out by Alex Ivanov and colleagues at Oak Ridge National Laboratory in the US – the same facility at which the element was first discovered almost 80 years ago.

Found on the sixth row of the periodic table, the lanthanide rare-earth metals possess an unusually diverse range of magnetic, optical and electrical properties, which are now exploited in many modern technologies. Yet despite their widespread use, researchers still know very little about the chemistry of promethium, a lanthanide with an atomic number of 61 that was first identified in 1945 by researchers on the Manhattan Project.

“As the world slowly recovered from a devastating war, a group of national laboratory scientists from the closed town of Oak Ridge, Tennessee, isolated an unknown radioactive element,” Ivanov describes. “This last rare-earth lanthanide was subsequently named promethium, derived from the Greek mythology hero Prometheus, who stole fire from heaven for the use of mankind.”

Despite its relatively low atomic number compared with the other lanthanides, promethium’s chemical properties have remained elusive in the decades following its discovery. Part of the reason for this is that promethium is the only lanthanide with no stable isotopes. Only small quantities of synthetic promethium (mostly promethium-147 with a half-life of 2.62 years) are available, extracted from nuclear reactors, through tedious and energy-intensive purification processes.

Ultimately, this limited availability means that researchers are still in the dark about even the most basic aspects of promethium’s chemistry: including the distance between its atoms when bonded together, and the number of atoms a central promethium atom will bond to when forming a molecule or crystal lattice.

Ivanov’s team revisited this problem in their study, taking advantage of the latest advances in isotope separation technology. In a careful, multi-step process, they harvested atoms of promethium-147 from an aqueous solution of plutonium waste and bonded them to a group of specially selected organic molecules. “By doing this, we could study how promethium interacts with other atoms in a solution environment, providing insights that were previously unknown,” Ivanov explains.

Using synchrotron X-ray absorption spectroscopy to study these interactions, the researchers observed the very first appearance of a promethium-based chemical complex: a molecular structure whose central promethium atom is bonded to several neighbouring organic molecules.

Altogether, they observed nine promethium-binding oxygen atoms in the complex, which allowed them to probe several of the metal’s fundamental chemical properties for the first time. “We discovered how promethium bonds with oxygen atoms, measured the lengths of these bonds, and compared them to other lanthanides,” Ivanov describes.

Based on these results, the researchers then studied a complete set of comparable chemical complexes spanning all lanthanide elements. This enabled them to experimentally observe the phenomenon of “lanthanide contraction” across the whole lanthanide series for the first time.

Lanthanide contraction describes the decrease in the atomic radii of lanthanide elements as their atomic number increases, because the 4f electrons added across the series shield the growing nuclear charge poorly, pulling the outer electrons closer to the nucleus. The effect causes the lanthanide–oxygen bond length to shrink. Ivanov’s team observed that this shortening was fastest early in the lanthanide series, before slowing down as the atomic number increased.

The team’s discoveries have filled a glaring gap in our understanding of promethium’s chemistry. By building on their results, the researchers hope that future studies could pave the way for a wide range of important applications for the element.

“This new knowledge could improve the methods used to separate promethium and other lanthanides from one another, which is crucial for advancing sustainable energy systems,” Ivanov describes. “By understanding how promethium bonds in a solution, we can better explore its potential use in advanced technologies like pacemakers, spacecraft power sources and radiopharmaceuticals.”

The researchers report their findings in Nature.

The post Scientists uncover hidden properties of rare-earth element promethium appeared first on Physics World.

‘I was always interested in the structure of things’: particle physicist Çiğdem İşsever on the importance of thinking about physics early

Par : No Author
Çiğdem İşsever
Çiğdem İşsever “My main focus is to shed light, experimentally, on the so-called Higgs mechanism.” (Credit: DESY Courtesy of Cigdem Issever)

The 2012 discovery of the Higgs boson at CERN’s Large Hadron Collider (LHC) was a momentous achievement. Despite completing the so-called Standard Model of particle physics, the discovery of this particle opened up the search for physics beyond the Standard Model and the elements of nature that assist the Higgs boson in granting all other matter particles their mass. One researcher who is taking a deeper look at the Higgs boson is the experimental particle physicist Çiğdem İşsever – lead scientist in the particle physics group at Deutsches Elektronen-Synchrotron (DESY) in Hamburg, and the experimental high-energy physics group at Humboldt University of Berlin.

After obtaining her degree in physics and completing a PhD in natural sciences at the University of Dortmund in Germany by 2001, İşsever was a postdoc at DESY and at the University of California, Santa Barbara in the US. From 2004 to 2019, she was based at the University of Oxford, where from 2014 she held a professorship in elementary particle physics. She then became head of physics and from 2015 taught at Lincoln College, Oxford, before moving back to DESY in 2019.

As a member of the ATLAS collaboration at CERN since 2004, İşsever’s research has focused on how the Higgs boson defines our reality. “My main focus is to shed light, experimentally, on the so-called Higgs mechanism, which explains how elementary particles and gauge bosons acquire mass in nature,” explains İşsever.

A key parameter associated with the Higgs mechanism is the strength of the Higgs boson’s self-interaction, which particle physicists are trying to constrain experimentally to gain important insight into the shape of the so-called Higgs potential. Determining whether the Higgs potential is exactly as predicted by our theories or “if nature has chosen a different shape for it influences the very physics that determines the shape of our universe and even its eventual fate,” she explains.

What lies within

İşsever was fascinated by the inner workings of nature from a very young age. “I was always interested in how things are made, or why something is the way it is,” she says. “My father is not a physicist, but when I was in the first or second year of primary school, we would talk like adults about physics. He would discuss with me how nuclear reactors split the atom and if it was possible to bring it back together.”

As a child of six or seven, İşsever recalls industriously dissecting the vegetables on her plate, to reveal their inner structure. “This might sound really weird, but I wouldn’t just eat vegetables and fruit… I would really carefully cut them open. Look where the seeds are, see how many chambers a tomato has.”

This early fascination with the natural world on small scales deeply influenced İşsever and led to her interest in science communication. Keen to inspire young minds and help children engage with science from an early age, İşsever developed the ATLAScraft project together with her husband and fellow DESY physicist Steven Worm, and physicist Becky Parker of Queen Mary University of London. The project was a collaboration between the University of Birmingham, the University of Oxford, the Institute for Research in Schools and the Abingdon Science Partnership, with technical expertise from CERN.

ATLAScraft provides users with a map of CERN, the ATLAS detector and the LHC, all of which have been recreated in the hugely popular computer game Minecraft. The idea behind the project was to bring the LHC and its scientific endeavours to a whole new generation, but it was also about breaking cultural stereotypes, especially to encourage more women into physics.

“Children decide quite early in their life, as early as primary school, if science is for them or not,” İşsever explains. They decided to visit pupils aged between five and 11, and “talk to them before they buy into science-related stereotypes of the male scientist and his female assistant,” says İşsever, adding: “When we went to schools in the UK to talk about our physics, I would be the main presenter of the physics concept, and Steve would be my sidekick. This was something we did deliberately to challenge these stereotypes.” Thanks to ATLAScraft, you can now take a virtual tour of ATLAS via a 3D interactive map complete with the buildings, beamline tunnels and the actual ATLAS detector, all within Minecraft.

Pairing up

This year İşsever will also be involved in CERN’s 70th anniversary celebrations. She sees these as further opportunities to communicate CERN’s discoveries to a wider audience. However, İşsever’s research is still her prevailing passion. She is currently excited about her work to discover Higgs “pair production” at the LHC. Experimentally detecting these pairs of Higgs bosons is a crucial step in understanding how the Higgs boson may interact with itself, as this will determine the shape of the potential of the Higgs field.

“This hasn’t yet happened. When it does, if we collect enough data, we should be able to constrain the Higgs coupling as a parameter,” says İşsever. She adds that this search could also lead to the discovery of physics beyond the Standard Model. “To me, this represents the true thrill of discovering something new, which would be amazing.”

When it comes to the future of CERN and particle physics in general, the proposed successor to the LHC – the Future Circular Collider (FCC) – is an interesting prospect. More than 90 km in circumference – over three times that of the LHC – the FCC would allow for a significant upgrade in collision energies.

One of the LHCf detectors
One of the Large Hadron Collider forward (LHCf) detectors. (Courtesy: CERN)

While she acknowledges how useful the FCC would be, İşsever believes that a less energetic electron–positron collider could be vital as a next step. Such an instrument could lead to a deeper understanding of the Higgs boson and its associated phenomena, as well as allowing particle physicists to “infer the energy scale we should investigate with future machines,” she adds.

Looking ahead

Beyond the Higgs boson, İşsever is also involved with the Large Hadron Collider forward (LHCf) experiment, which captures and measures forward-travelling particles that escape “standard” detectors like ATLAS. LHCf could help build a better understanding of the cosmic rays that bombard the Earth’s atmosphere from space.

İşsever also acknowledges the importance of non-collider experiments, even though they are unlikely to end the collider-dominated era of particle physics. “Collider experiments are much more general-purpose experiments. If you think of, for example, the ATLAS experiment, it’s not just one experiment. At any time, there are something like 200 or more analyses going on in parallel. You can think of each of them as an individual experiment. So, it is a very efficient way to perform experimental physics.”

The post ‘I was always interested in the structure of things’: particle physicist Çiğdem İşsever on the importance of thinking about physics early appeared first on Physics World.

Researchers build 0.05 T MRI scanner that produces diagnostic quality images

Par : No Author

Magnetic resonance imaging (MRI) is an essential tool used by radiologists to visualize tissues and diagnose disease, particularly for brain, cardiac, cancer and orthopaedic conditions. However, the high cost of an MRI scanner and dedicated MR imaging suite, combined with the scanner’s operational complexity, has severely limited its use in low- and middle-income countries, as well as in rural healthcare facilities.

Among member countries of the Organization for Economic Co-operation and Development (OECD), the number of MRI scanners (in 2021) ranged from just 0.24 per million people in Colombia to 55 per million in Japan. This significant disparity negatively impacts the quality of healthcare for the global population.

Aiming to close this gap in MRI availability, researchers at the University of Hong Kong’s Laboratory of Biomedical Imaging and Signal Processing are developing a whole-body, ultralow-field 0.05 T MR scanner that operates from a standard wall power outlet and does not require radiofrequency (RF) or magnetic shielding cages.

Targeted as both an alternative and a supplement to conventional 1.5 T and 3 T MRI systems, the novel scanner incorporates a compact permanent magnet and employs data-driven deep learning for image formation. This simplified design not only makes the MRI scanner easier to operate, but should significantly lower its acquisition and maintenance costs compared with current clinical MRI systems.

Writing in Science, principal investigator Ed X Wu and colleagues describe how they used the 0.05 T scanner, along with deep-learning reconstruction methods developed by the team, to obtain anatomical images of the brain, spine, abdomen and knee with image quality comparable to that of a 3 T system. In one example, they acquired spine MRI scans showing details of intervertebral disks, the spinal cord and cerebrospinal fluid in 8 min or less.

Whole-body scanner design

Central to the scanner’s hardware design is a permanent neodymium iron boron (NdFeB) magnet with a double-plate structure. Permanent magnets are safer to operate than superconducting magnets, as they generate less heat and acoustic noise during imaging. Ultralow-field MRI also benefits from low sensitivity to metallic implants, fewer image susceptibility artefacts at air–tissue interfaces and an extremely low RF specific absorption rate.

The magnet features two NdFeB plates connected by four vertical pillars, chosen to optimize openness and patient comfort. Its key components – including yokes, magnet plates, pole pieces, anti-eddy current plates and shimming rings – were designed to create a uniform field suitable for whole-body imaging while maintaining shoulder and chest accessibility. The final magnetic field was 0.048 T at room temperature (corresponding to a 2.045 MHz proton resonance frequency).
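The quoted resonance frequency follows directly from the Larmor relation for protons; a quick check using the standard proton gyromagnetic ratio (≈ 42.58 MHz per tesla):

```python
# Proton Larmor frequency at the scanner's ultralow field.
gamma_bar = 42.577   # proton gyromagnetic ratio / (2*pi), in MHz per tesla
B0 = 0.048           # field strength in tesla, as quoted in the article

f0 = gamma_bar * B0  # Larmor (resonance) frequency in MHz
print(f"proton resonance frequency ≈ {f0:.3f} MHz")
# ≈ 2.04 MHz, consistent with the 2.045 MHz quoted for the 0.048 T magnet
```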

Prototype ultralow-field MRI scanner
Low-cost, low-power, shielding-free The prototype MRI scanner has a compact footprint of roughly 1.3 m² and requires neither magnetic nor RF shielding cages. (Courtesy: Ed X Wu)

The magnet assembly has exterior dimensions of 114.0 × 102.6 × 69.9 cm, with a 40 × 92 cm gap for patient entry, and weighs approximately 1300 kg, making it potentially portable for point-of-care imaging.

In the absence of RF shielding, the researchers used deep learning to eliminate electromagnetic interference (EMI). Specifically, they positioned 10 small EMI sensing coils around the scanner and inside the electronics cabinet to acquire EMI signals. During scanning, the EMI sensing coils and the MRI receive coil simultaneously sample data within two windows: one for MR signal acquisition, the other for EMI signal characterization. The team then used a deep-learning direct signal prediction (Deep-DSP) model to predict EMI-free MR signals from the acquired data.
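The details of the team’s Deep-DSP network are not reproduced here, but the underlying idea – predict the interference that the MRI receive coil picks up from what the sensing coils see, then subtract it – can be illustrated with a much simpler stand-in. The sketch below uses ordinary linear regression on synthetic data; it is a hypothetical, simplified analogue of the approach, not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 10, 5000

# Synthetic data: each sensing coil picks up a mixture of a few EMI sources, and
# the receive coil sees its own linear combination of the same EMI plus the MR signal.
emi_sources = rng.normal(size=(3, n_samples))
sensors = rng.normal(size=(n_sensors, 3)) @ emi_sources        # 10 EMI sensing coils
receive_emi = rng.normal(size=3) @ emi_sources                 # EMI leaking into the MR coil
mr_signal = np.sin(2 * np.pi * 0.01 * np.arange(n_samples))    # stand-in MR signal

# EMI-characterization window: no MR signal present, so the receive coil sees EMI only.
weights, *_ = np.linalg.lstsq(sensors[:, :2500].T, receive_emi[:2500], rcond=None)

# Acquisition window: subtract the predicted EMI to recover a cleaner MR signal.
acquired = mr_signal[2500:] + receive_emi[2500:]
cleaned = acquired - weights @ sensors[:, 2500:]

residual = np.std(cleaned - mr_signal[2500:]) / np.std(receive_emi[2500:])
print(f"EMI remaining after subtraction: {residual:.1%} of the original interference")
```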

For the study, 30 healthy volunteers were scanned with the 0.05 T system, using standard protocols and optimized contrasts for the various anatomical structures. To overcome the weak MR signal at 0.05 T, the team also designed a data-driven deep-learning image formation method – the partial Fourier super-resolution (PF-SR) model – that integrates image reconstruction and 3D multiscale super-resolution, validating the model by comparing 0.055 T brain scans with 3 T images from the same subjects (as described in Science Advances). This PF-SR reconstruction improved the 0.05 T image quality by suppressing artefacts and noise and increasing spatial resolution.

The researchers are currently optimizing the scanner design and algorithms. They plan to perform experimental assessment and optimization of ultralow-field data acquisition and deep-learning image reconstruction, to yield the optimal trade-offs between image fidelity, resolution, contrast, scan time and cost for each specific application. They are also evaluating clinical applications of the 0.05 T scanner in depth.

“We shall continue to refine our data-driven approaches in order to minimize the hardware requirements while advancing imaging quality and speed,” Wu tells Physics World. “We are starting to plan our research of the PF-SR in detecting various pathologies, and are currently training the PF-SR models with datasets from both normal and abnormal subjects.”

The post Researchers build 0.05 T MRI scanner that produces diagnostic quality images appeared first on Physics World.

Leading-edge facilities and cross-disciplinary collaboration underpin AWE’s nuclear remit

Par : No Author

AWE is no ordinary physics-based business. With a specialist workforce of around 7000 employees, AWE supports the UK government’s nuclear defence strategy and the Continuous At-Sea Deterrent (nuclear-armed submarines), while also providing innovative technologies and know-how to support international initiatives in counter-terrorism and nuclear-threat reduction. Tracy Hart, physics business operations manager at AWE, talked to Physics World about the opportunities for theoretical and experimental physicists within the company’s core production, science, engineering and technology divisions.

Why should a talented physics graduate consider AWE as a long-term career choice?

AWE provides the focal point for research, development and support of the UK’s nuclear-weapons stockpile. Our teams work at the cutting edge of science, technology and engineering across the lifecycle of the warhead – from initial concept and design to final decommissioning and disposal. The goal: to deter the most extreme threats our nation might face, now and in the future. Within that context, we offer unique professional opportunities across a range of technical and leadership roles suitable for bright, dynamic and innovative graduates in physics, mathematics, engineering and high-performance computing.

What can early-career scientists at AWE expect in terms of training and development?

For starters, the early-career training programme is accredited by 10 professional bodies, including the Institute of Physics (IOP) and the Institute of Mathematics and its Applications (IMA). That’s because we want AWE scientists and engineers to be the best of the best, with heads of profession within the management team prioritizing development of their technical staff on an individualized basis. There are lots of opportunities for self-guided learning along the way, with our technical training modules covering an extensive programme of courses in areas like machine learning, advanced programming (e.g. Python, Java, C++) and Monte Carlo modelling.

More specifically, our physicists have their IOP membership paid for by AWE, while a structured mentoring programme provides guidance along the path to CPhys chartership (a highly regarded professional validation scheme overseen by the IOP). We also prioritize external collaboration and work closely with the UK academic community – notably, the University of Oxford and Imperial College London – sponsoring PhD studentships and establishing centres of excellence for joint research.

How about long-term career progression?

There’s a can-do culture at AWE, with a lot of talented scientists and engineers more than ready – and willing – to take on additional responsibility after just a few months in situ. Fast-track development pathways are supported through fluid grading and a promotion process that enables staff to advance by developing their technical knowledge in a given specialism and/or their leadership competencies in wider management roles. It’s all about opportunity: we take a lot of time – and care – recruiting talented people, so it’s important to ensure they can access diverse career pathways across the business.

What research and technical infrastructures are available to scientists at AWE?

Our experts work with advanced experimental and modelling capabilities to keep the nation safe and secure. A case in point is the Orion Laser Facility, a critical component of AWE’s working partnerships with academia (with around 15% of its usage ring-fenced for such collaborations).

The size of a football stadium, Orion enables our teams to replicate the conditions found at the heart of a nuclear explosion – ensuring the safety, reliability and performance of warheads throughout their lifecycle. This high-energy-density plasma physics capability underpins not only our weapons research, but also yields fundamental scientific insights for astrophysicists studying star formation and researchers working on nuclear fusion.

There is also AWE’s high-performance computing (HPC) programme and a unique scientific computing platform on a scale that only a few companies across the UK can match. Our latest Damson supercomputer, for example, is one of the most advanced of its kind and performs 4.3 trillion calculations every second – essential for 3D modelling and simulation capabilities to support our research into the performance and reliability of nuclear warheads.

Does AWE work on nuclear non-proliferation activities?

We are home to the Comprehensive Test Ban Treaty Organization (CTBTO) National Data Centre for Seismology and Infrasound. Through the collection and analysis of data from monitoring systems all over the world, the centre works with the UK Ministry of Defence (MOD) to identify potential nuclear explosions conducted by other countries. Further, the team supports the MOD and international partners in underpinning the CTBT, providing expertise on arms control verification, development of forensic monitoring techniques, as well as the capability to analyse and advise on nuclear tests.

How important is cross-disciplinary collaboration to AWE’s mission?

The multidisciplinary nature of our programme means there’s a place for domain experts – technical leaders in their specialist niche – as well as “big-picture” scientists, engineers and managers who might be equally at ease when working across a range of scientific disciplines. Ultimately, collaboration informs everything we do. A case study in this regard is The Hub, a new purpose-built facility that will, when completed, consolidate many ageing laboratories and workshops into a central campus that integrates engineering, science, learning and administrative functions.

What sorts of projects do physicists get to work on at AWE?

The physics department at AWE recruits a broad range of skillsets spanning systems assessment, design physics, radiation science and detection, material physics and enabling technologies. Among our priorities right now is to scale the talent pipeline for ongoing studies in the criticality safety group. Roles in this area are multidisciplinary, combining strong technical understanding of the nuclear physics of criticality alongside the operational know-how of writing safety assessments.

Put simply, nuclear physics domain knowledge is applied to derive safe working limits and restrictions for a wide variety of operations that use fissile material across the nuclear material and facility lifecycles. These derivations regularly involve the use of nuclear data from real-world experiments and Monte Carlo computer codes. What’s more, the production of safety assessments requires an understanding of hazard identification methods and various fault analysis techniques to determine how a criticality could occur and what safety systems are required to manage that risk.

What about other recruitment priorities at AWE?

Current areas of emphasis for the HR team include the HPC programme – where we’re looking for systems administrators and applied computational scientists – and design physics – where we need candidates with a really strong physics and mathematics background plus the versatility to apply that knowledge to our unique requirements. Our design physics team uses state-of-the-art multiphysics codes to model hydrodynamics, radiation transport and nuclear processes, plus a range of experimental data to benchmark their predictions. Operationally, that means understanding the complex physical processes associated with nuclear-weapons function, while applying those insights to current systems as well as next-generation weapons design.

The key take-away: if you’re looking for a role with excitement, intrigue and something that really makes a difference, then now is the time to join AWE.

 

The post Leading-edge facilities and cross-disciplinary collaboration underpin AWE’s nuclear remit appeared first on Physics World.

An investigation into battery thermal runaway initiation and propagation

Par : No Author

Abuse testing and failure recreation of thermal runaway in lithium-ion battery packs at Exponent’s London laboratory has shown how battery fires can initiate and propagate. This webinar discusses how even small amounts of moisture ingress into a battery pack can lead to thermal runaway of the cells within the pack. Specific conditions and behaviours of circuit-board faults driven by saltwater ingress were investigated, and localized temperature increases of greater than 400 °C were demonstrated even at relatively low voltages and fault currents, showing the potential for such faults to trigger cell thermal runaway events.

The extent and severity of e-mobility battery fires resulting from a single-cell thermal runaway failure was also explored, and various suppression techniques that a user might attempt during a battery fire in a household environment were evaluated. Tests were run with water flows typical of a household garden hose, as well as with different fire blankets deployed both before the forced thermal runaway event and after initiation. In addition, various design approaches, such as added thermal insulation between cells, were shown to help prevent cell-to-cell propagation and reduce the severity of a battery pack failure.

An interactive Q&A session follows the presentation.

Samuel Lawton

Samuel Lawton is an expert in batteries and energy storage with extensive experience in failure analysis, pack design, quality evaluation, factory auditing, and thermal testing and cell testing. He holds a Ph.D. in Chemistry and is currently a senior scientist at Exponent, where he leads complex projects focused on battery performance and safety.

In his role, Samuel specializes in root cause failure analysis, thermal event investigations, and product validation. He is proficient in X-ray computed tomography, non-destructive and destructive cell testing, and bespoke abuse testing. His previous work at OXIS Energy Ltd included developing novel anode protections and unique cathode materials for advanced battery systems.

The post An investigation into battery thermal runaway initiation and propagation appeared first on Physics World.

The route to ‘net zero’: how the manufacturing industry can help

Par : No Author

The manufacturing industry is one of the largest emitters of carbon dioxide and other greenhouse gases worldwide. Manufacturing inherently consumes large amounts of energy and raw materials, and while the sector still relies mainly on fossil fuels, it generates emissions that directly contribute to climate change and environmental pollution. To combat global warming and its potentially devastating impact upon our planet, there’s an urgent need for the manufacturing industry to move towards net zero operation.

Cranfield University, a specialist postgraduate university in the UK, is working to help the industry achieve this task. Teams at the university’s science, technology and engineering centres are devising ways to accelerate the journey towards more sustainable manufacturing – whether by introducing manufacturing processes that use less energy and raw materials; investigating renewable and low-carbon energy sources; creating new materials with enhanced recyclability; or implementing smart functions that extend the life of existing assets.

Greener manufacturing

One way to lower the carbon footprint of manufacturing is to move to 3D printing, an additive fabrication technique that inherently reduces waste.

“The machining techniques used in conventional manufacturing require a lot of power and a lot of raw material, which itself requires energy to create,” explains Muhammad Khan, acting head of Cranfield’s Centre for Life-cycle Engineering and Management and reader in damage mechanics. “In 3D printing, however, the amount of power required to generate the same complex part is far less, which impacts the overall carbon footprint.”

Materials used for 3D printing, particularly polymeric or other organic materials, are generally recyclable and easier to reuse, further reducing emissions. “Within our centre, we are working on polymeric materials to replace existing metallic materials in areas such as aerospace and automotive applications,” says Khan.

3D printing also enables manufacturers to rapidly tailor the design and properties of a product to meet changing requirements.

David Ayre “It’s important that everyone makes a move towards net zero, because we’re not going to make any impact unless the whole world is on board.” (Courtesy: Cranfield University)

“We’ve seen this a lot in Formula One,” says David Ayre, a senior lecturer in composites and polymers in Cranfield University’s Composites and Advanced Materials Centre. “They’ll 3D print prototyping materials to quickly push out the structures they need on their cars. Twenty years ago, the resins used for this were brittle and only suitable for prototyping. But now we have developed more robust resins that can actually be used on working structures.”

Another benefit of 3D printing is that it can be performed on a smaller scale, enabling manufacturing sites to be installed locally. This could be next to the resource that the printer will use or next to the consumers who are going to use it, in either case reducing transportation costs. While the economic case for this “end of the street” model hasn’t yet won through, the pressure to reduce CO2 emissions “might be the driver that starts to change the way we look at manufacturing”, Ayre notes.

Recycling opportunities

The introduction of novel advanced materials can also help increase sustainability. Thermal barrier coatings developed at Cranfield, for example, enable jet engines to work at higher temperatures, increasing efficiency and reducing fuel consumption. “There’s a huge role for engineers to play,” says Ayre.

Designing materials that can be recycled and reused is another important task for Ayre’s team. Producing raw material requires vast amounts of energy, a step that can be eliminated by recycling. Aluminium, for instance, is easy to process, highly recyclable and used to create a vast spectrum of products. But there are still some challenges to address, says Ayre.

“The aerospace industry likes to machine parts. They’ll take a one tonne billet of aluminium and end up with a 100 kg part,” he explains. “I worked with a student last year looking at how to recycle the swarf that comes from that machining. Unfortunately, aluminium is quite reactive and the swarf oxidizes back to the ore state, where it’s not really easy to recycle. These are the sorts of issues that we need to get around.”

The centre also focuses on composite materials, such as those used to manufacture wind turbine blades. Ayre notes that turbine blades built in the 1970s are now reaching the end of their usable life – and the composites they’re made from are difficult to recycle. The team is working to find ways to recycle these materials, though Ayre points out that it was such composites that enabled growth in the wind turbine market and the resulting source of renewable energy.

Alongside this work, the researchers are developing recyclable composite materials, such as bioresins and fibres produced from natural products, although these are still at an early stage. “These materials don’t have the same properties as petroleum-derived resins and ceramic, carbon and glass fibres,” Ayre says. “I don’t think we’re close yet to being able to replace our super-lightweight, super-stiff carbon fibre composite structures that motorsport and aerospace are utilizing.”

Smart materials

Meanwhile, Khan’s team at Cranfield is developing materials with smart functionalities, such as self-healing, self-cleaning or integrated sensing. One project involves replacing the domestic pipelines used for wastewater distribution with 3D-printed self-cleaning structures, which would need less water than conventional pipelines and so cut the overall carbon footprint.

Muhammad Khan “If you can extend device life by utilizing smart mechanisms… This can positively contribute to the net zero agenda.” (Courtesy: Cranfield University)

With a focus on maintaining existing assets, rather than creating new ones, the researchers are also developing self-healing structures that can repair themselves after any damage. “If you can extend device life twice or thrice by utilizing these smart mechanisms, you can reduce the amount of raw material used and the emissions generated during manufacturing of replacement parts,” says Khan. “This can positively contribute to the net zero agenda.”

Another project involves developing structures with integrated sensing functionality. Such devices, which monitor their own health by providing information such as displacement or vibration responses, eliminate the need to employ external sensors that require energy to construct and operate. The diagnostic data could provide users with an early warning of signs of damage or help determine the remaining useful life of a structure.

“Life estimation is challenging, but is something we are looking to incorporate in the future – how we can utilize the raw data from embedded sensing elements to model the remaining useful life,” says Khan. “That prediction could allow users to plan maintenance and replacement routines, and save a system from catastrophic failure.”
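
As a very rough sketch of the idea behind such life estimation (a hypothetical illustration with made-up numbers, not Cranfield’s actual method), one could track a damage indicator derived from embedded sensor data, fit its trend and extrapolate to a failure threshold, for example in Python:

import numpy as np

# Hypothetical operating hours and a damage indicator (e.g. RMS vibration amplitude)
hours = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
vibration_rms = np.array([1.0, 1.1, 1.3, 1.6, 2.0, 2.5])
failure_threshold = 5.0  # assumed level beyond which the part is deemed unsafe

# Fit exponential growth, log(y) = log(a) + b*t, with a least-squares straight line
b, log_a = np.polyfit(hours, np.log(vibration_rms), 1)
time_to_threshold = (np.log(failure_threshold) - log_a) / b

# Remaining useful life is the extrapolated time to threshold minus hours already run
remaining_useful_life = time_to_threshold - hours[-1]
print(f"Estimated remaining useful life: {remaining_useful_life:.0f} hours")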

Building for the future

Cranfield University also aims to embed this sense of sustainability in its students – the engineers of the future – with a focus on net zero integral to all its engineering and related courses.

“The majority of our manufacturing and materials students will go on to an engineering career and need to appreciate their role in sourcing sustainable materials for any parts they’re designing and investigating manufacturing routes with low CO2 footprint,” Ayre explains. Students also learn about asset management – choosing the right product in the initial stages to minimize maintenance costs and extend a component’s life.

Elsewhere, Khan is working to ensure that standards agencies keep sustainability in mind. His centre is part of a consortium aiming to bring the goal of achieving net zero into standards. The team recently demonstrated how the existing asset management standard, ISO 55000, can be modified to incorporate net zero elements. The next step is to convince ISO and other agencies to accept these concepts, allowing people to manage their assets in a more environmentally friendly way without compromising availability or performance.

Ultimately, says Ayre, alongside “trying to encourage humanity not to want more and more and more”, lowering global emissions could rely on engineers getting creative and finding innovative ways to produce products that people want, but at reduced cost to the environment. It’s also vital that customers take on these ideas. “There’s no point us coming up with new-fangled manufacturing process and new materials if nobody has the experience or the confidence to take it anywhere,” he points out.

“It’s important that everyone makes a move towards net zero, because we’re not going to make any impact unless the whole world is on board,” says Ayre.

The post The route to ‘net zero’: how the manufacturing industry can help appeared first on Physics World.

A love of triangles, the physics of spin, volcanic science and Pascal’s papers: micro reviews of the best recent books

Par : No Author

Love Triangle: the Life-changing Magic of Trigonometry
By Matt Parker

Comedian and science author Matt Parker is on a mission to elevate the reputation of the humble triangle. Despite dealing with what might seem like familiar concepts, Love Triangle shows that geometry and trigonometry can pop up in exciting and unexpected places. From cosmology to skateboarding, Parker argues that triangles underpin both the epic and the everyday. The book, which is funny and accessible, would also be suitable for keen teenage readers. Katherine Skipper

  • 2024 Penguin Random House

The Science of Spin: the Force Behind Everything – From Falling Cats to Jet Engines
By Roland Ennos

We’ve all had fun with spinning tops, pushed each other on playground swings or relied on washing machines rotating at high speeds to wring dry our wet clothes. In The Science of Spin, University of Hull visiting professor Roland Ennos examines the myriad ways spin affects our lives. From the movement of cricket balls to the shielding of the Earth’s atmosphere and even black holes, this delightful and easy-to-follow book won’t leave your head spinning. Matin Durrani

  • 2023 Oneworld

Adventures in Volcanoland: What Volcanoes Tell Us About the World and Ourselves
By Tamsin Mather

In Adventures in Volcanoland, University of Oxford earth scientist Tamsin Mather explains the science of volcanoes through her fascination with them and her career studying them. She describes visits to volcanoes large and small, and traces how humans have understood (or failed to understand) what volcanoes are since ancient times. From gods and fire to radioactivity and tectonics, and from her current research on volcanic gases to future possibilities such as harnessing their power as a renewable energy source, this is an accessible and enjoyable read. Kate Gardner

  • 2024 Abacus Books

A Summer With Pascal 
By Antoine Compagnon
Translated by Catherine Porter

Based on a radio series on France Inter, in A Summer With Pascal literary critic Antoine Compagnon analyses Blaise Pascal’s major philosophical and theological works Pensées and Lettres Provinciales. Short chapters cover topics including “the art of persuasion”, predestination and uncertainty. References to Pascal’s scientific and mathematical work are few, but this close analysis may still be of interest to Physics World readers who want to know more about the 17th-century polymath. Kate Gardner

  • First published in French 2020 by Éditions des Équateurs
  • 2024 Harvard University Press

The post A love of triangles, the physics of spin, volcanic science and Pascal’s papers: micro reviews of the best recent books appeared first on Physics World.

Climate physicist Claudia Sheinbaum Pardo elected Mexican president in landslide win

Par : No Author

The physicist Claudia Sheinbaum Pardo has been elected president of Mexico following a landslide victory on 2 June. She gained more than twice as many votes as her nearest opponent, the computer engineer Xóchitl Gálvez Ruiz. When she takes up office on 1 October, Sheinbaum Pardo will become Mexico’s first female president.

Sheinbaum Pardo, 61, was born on 24 June 1962 and both of her parents were scientists. Her mother, Annie Pardo Cemo, is a biochemist while her father, Carlos Sheinbaum Yoselevitz, is a chemical engineer.

Both she and her brother, Alex, followed their parents into science and became physicists. Sheinbaum Pardo earned a physics degree from the National Autonomous University of Mexico (UNAM) in 1989 before carrying out a PhD in energy engineering at UNAM.

Her PhD research, which focused on energy consumption in Mexico and other countries, was mostly carried out at the Lawrence Berkeley National Laboratory in the US. After graduating in 1995, Sheinbaum Pardo joined UNAM’s Institute for Engineering where she worked on the transition to renewable energy sources.

From science to politics

Sheinbaum Pardo’s political activities began during her undergraduate years at UNAM. In the early 1990s she joined a protest about the university’s tuition fees and later helped set up the left-wing National Regeneration Movement (Morena) party in 2011.

When Andrés Manuel López Obrador became mayor of Mexico City in 2000, he selected Sheinbaum Pardo as environment secretary. The pair remained politically close, but when López Obrador lost the 2006 presidential election, she returned to UNAM as a researcher.

Sheinbaum Pardo co-authored sections of the United Nations Intergovernmental Panel on Climate Change (IPCC) fourth assessment report, which warned that the warming of the climate is “unequivocal”. For its work on climate change, the IPCC, with its roughly 2000 contributing scientists, shared the 2007 Nobel Peace Prize with former US vice-president Al Gore.

When López Obrador finally won Mexico’s presidency in 2018, following another failed attempt in 2012, Sheinbaum Pardo was elected mayor of Mexico City by a landslide. In that role she did much for the environment, including electrifying the metropolis’s bus fleet, starting construction of a photovoltaic plant to cut emissions of carbon dioxide and expanding the conurbation’s bicycle lanes.

While López Obrador largely favoured the country’s oil industry and cut science funding during his six years in office, Sheinbaum Pardo has said that she intends to focus on renewable energy technologies and “to make Mexico a scientific and innovation power”.

Yet Mexico’s scientific community questions whether she will be able to achieve this. Some political commentators have expressed doubts that she will be able to escape the shadow of her mentor and govern in her own style.

The post Climate physicist Claudia Sheinbaum Pardo elected Mexican president in landslide win appeared first on Physics World.

Embracing Neurodiversity in Research: How does academic publishing need to change?

Par : No Author

An accessible and inclusive environment where everyone can thrive can only be achieved if we collectively address the barriers that stand in the way.

In academia, many challenges hold neurodivergent individuals back from reaching their full potential, and this has to change.

In this webinar, the expert panel will discuss the current state of play, their experiences of working in academia or industry as neurodivergent people, and the needs of neurodivergent individuals.

We’ll then focus on academic publishing: what more do publishers need to do, and what should they change in their processes? Are practices clear and easy to understand? What additional support should be provided, and where? How do publishers ensure neurodivergent individuals have access to opportunities that will allow them to pursue their research careers without jeopardising their wellbeing?

Panel, left to right: Sharon Zivkovic, Angela Carradus, Vicky Mountford-Brown, Kellie Forbes-Simpson, Vicky Williams, Sujeet Jaydeokar

Sharon Zivkovic is the founder and CEO of the social enterprise Community Capacity Builders, Adjunct Research Fellow at Torrens University Australia and member of Emerald Publishing’s Impact Advisory Board. As an autistic social entrepreneur and systems thinker, Sharon has used her innate bottom-up and associative thinking skills, and systemizing capabilities, to develop and commercialize a number of social innovations. Community Capacity Builders has recently established a Centre for Autistic Social Entrepreneurship, which aims to build the capacity of disability service providers, social enterprise support organizations, and business advisors to support autistic social entrepreneurs in a neurodiversity-affirming manner.

Angela Carradus is an academic and business owner specializing in relational and systems approaches to leading and managing businesses. In 2022 she reached burnout in her academic career following a diagnosis of ADHD and long COVID. This continuing battle has led her to reflect on her rich and varied professional path, which has included training as an actor and completing a PhD at Lancaster University. Since her ADHD diagnosis she has considered how the current academic environment can make life very difficult for the neurodivergent community, and she is now a passionate advocate for a new approach that better supports neurodivergent students and staff in academia.

Vicky Mountford-Brown is an assistant professor in entrepreneurship at Northumbria University and vice-president-elect for Enterprise Educators UK. Vicky’s research interests centre largely around identities, social inequalities and pedagogies, with current projects exploring imposterism and neurodiversity in academia, neurodiversity and (entrepreneurial) learning, and neurodiversity and entrepreneurship.

Kellie Forbes-Simpson is an assistant professor in entrepreneurship at Newcastle Business School, Northumbria University, and is an experienced and award-winning entrepreneurship educator. Kellie’s interest in neurodiversity comes from her programme leader role, where in some years more than 50% of the nascent entrepreneurs on her programme have identified as neurodivergent. She is now researching how and why her programme seems to support neurodivergent entrepreneurs. Kellie also has personal experience of neurodiversity, after a late diagnosis of dyslexia during her PhD studies.

Vicky Williams is chief executive of Emerald Publishing, a UK business founded in 1967. She has worked in academic publishing for more than 20 years, with C-suite responsibility for a range of business areas in that time – business development, M&A, marketing, digital, and HR. She has been chief executive of Emerald since 2018, and is proud to be part of a business that innovates, takes risks, responds to its communities, and really values its people. Both in and out of work, Vicky is a keen advocate for gender diversity, having launched Emerald’s Equality, Diversity and Inclusion programme in 2016, and speaks widely on this topic at global forums and events. She holds advisory board and non-executive roles in academia and publishing, and is the trustee responsible for social mobility at the Keith Howard Foundation, which supports charities across Yorkshire.

Sujeet Jaydeokar is a consultant psychiatrist and director of research at the Cheshire and Wirral Partnership NHS Foundation Trust. He, along with Mahesh Odiyoor, was instrumental in setting up the Centre for Autism, Neurodevelopmental Disorders and Intellectual Disabilities (CANDDID), for which he is also the clinical director and chair. Sujeet is a parent carer of a boy with multiple neurodevelopmental issues. His lived experience and clinical work drives his interests in research and education. He is a programme lead for the post-graduate qualifications in neurodevelopmental conditions at the University of Chester. He is a fellow of the Royal College of Psychiatrists. His particular areas of interest are in health inequalities, service development and the phenomenology of neurodevelopmental conditions.

This webinar has been made in partnership with Emerald Publishing and NEA (Neurodiversity & Entrepreneurship Association).

The post Embracing Neurodiversity in Research: How does academic publishing need to change? appeared first on Physics World.

The art of cosmic simulations: can we build a universe on a computer?

Par : No Author

As I write this, I’m immersed in the excitement surrounding the upcoming solar eclipse in parts of North America. In Chicago, my home, we’re poised to experience 90% of the eclipse’s totality, a spectacle that has sparked enthusiasm among my undergraduate students, especially since we will be watching it during class.

We are not, however, the first to be drawn to these remarkable astronomical events, as Romeel Davé – a theoretical astrophysicist at the University of Edinburgh in the UK – explains in Simulating the Cosmos: Why the Universe Looks the Way It Does. In third-century China, eclipses were seen as important omens by the emperor, and astronomers developed remarkably precise methods to predict them.

The stakes were high for these early theorists – inaccuracies once resulted in the execution of two astronomers, turning the refinement of their predictive techniques into a quest for survival.

In his book, Davé traces a direct link between these ancient celestial predictions and modern cosmologists who use powerful supercomputers to model the universe. He explains how “numerical cosmology” can be used to compensate for our inability to experimentally manipulate the cosmos, and asks whether it will ever be possible to capture the entire universe in a simulation.

To recreate the cosmos on a computer, we first need to know what it’s made of. Davé sets the scene by clearly explaining the so-called “concordance model”, which tells us that the universe is 68% dark energy and 27% dark matter, with visible matter making up only 5%. In this framework, dark energy drives the accelerating expansion of the universe, while the gravitational pull of dark matter assembles galaxies and galaxy clusters into large-scale structures.
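
In the notation cosmologists usually use for these fractions (the standard density parameters, not symbols defined in the book), the concordance budget amounts to saying that the three contributions add up to essentially all of the universe’s energy content:

\Omega_\Lambda \approx 0.68, \qquad \Omega_{\mathrm{dm}} \approx 0.27, \qquad \Omega_{\mathrm{b}} \approx 0.05, \qquad \Omega_\Lambda + \Omega_{\mathrm{dm}} + \Omega_{\mathrm{b}} \approx 1 .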

The book also offers theoretical insights into the Big Bang and the rapid expansion of the early universe, woven with engaging anecdotes. While explaining why those of us on Earth don’t notice the universe expanding, Davé recounts a memorable t-shirt worn by one of his professors during graduate school, featuring a whimsical question: “If the universe is expanding, why can’t I ever find a parking space?”

With the concordance model as our guide, Davé explains how, by inputting laws of physics and the conditions of the early universe into computer simulations, we can understand how and why the universe evolved to its current state.

Even with a simplified model that includes only gravitational effects, an accurate simulation of the entire universe would require far more computing power than exists on Earth. Astronomers must therefore accept some level of inaccuracy, and Davé dedicates considerable attention to the compromises and innovations that are made to optimize these simulations. He explains, for example, how astronomers have developed sophisticated algorithms to group nearby masses together, considerably simplifying the calculation of gravitational forces.
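
The flavour of that simplification can be seen in the minimal Python sketch below, which illustrates the general idea only, not the specific algorithms used in the simulations Davé describes. Summing the Newtonian pull of every particle in a distant clump individually costs one force term per particle (and scales as the square of the particle number across a whole simulation), whereas replacing the clump by a single point at its centre of mass gives almost the same answer for a tiny fraction of the work.

import numpy as np

G = 1.0  # gravitational constant in arbitrary simulation units

def direct_force(target, positions, masses):
    # Exact Newtonian force on `target`: one term per particle in the clump
    total = np.zeros(3)
    for pos, m in zip(positions, masses):
        r = pos - target
        d = np.linalg.norm(r)
        if d > 0:
            total += G * m * r / d**3
    return total

def grouped_force(target, positions, masses):
    # Approximation: treat the whole clump as a single point at its centre of mass
    m_tot = masses.sum()
    com = (positions * masses[:, None]).sum(axis=0) / m_tot
    r = com - target
    return G * m_tot * r / np.linalg.norm(r)**3

rng = np.random.default_rng(42)
clump = rng.normal(loc=[100.0, 0.0, 0.0], scale=1.0, size=(1000, 3))  # distant clump of 1000 particles
masses = rng.uniform(0.5, 1.5, size=1000)
target = np.zeros(3)  # the particle whose force we want

print(direct_force(target, clump, masses))   # 1000 force terms
print(grouped_force(target, clump, masses))  # 1 force term, nearly identical at this distance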

Armed with this toolbox of simulation techniques, Davé then discusses in detail his main area of research – galaxy formation and its simulation. Pioneering theories of galaxy formation were developed in the 1970s and 1980s, but early simulations were beset by challenges – most notably the “overcooling problem” where the simulated universe cooled too quickly, and produced far more galaxies than are observed in real life.

But Davé believes that the field is now in a “golden age”. He explains how cosmologists developed corrections to the overcooling problem by including small-scale effects like black holes and supernovae. Simulations also suggest that galaxies, once believed to be isolated entities floating through space, form a vast, interconnected structure called the “cosmic web”. Towards the end of the book, Davé is optimistic about the future of cosmic simulations, predicting that the advent of machine learning and artificial intelligence will bring us even closer to building a working, evolving universe on a computer.

This book will be of particular pedagogical significance to students who are interested in numerical cosmology. Davé does not shy away from technical explanations, but although the book includes equations, they’re presented in a way that shouldn’t daunt the general reader. He also uses illustrations to guide the reader through this complex topic.

The final chapter is undoubtedly bold, but I found it somewhat disjointed and abrupt. The author’s discussion of the possibility that our world is a simulated reality feels forced, while the introduction of Stephen Wolfram’s speculative “theory of everything” is insufficiently tethered to the preceding chapters. Though speculative thinking has its place, I found myself wishing for a more solid motivation for the scientific groundwork laid out in the rest of the book.

Nevertheless, these criticisms are minor in the context of the book’s broader contributions. Today, our quest to simulate the universe is driven not by the immediate threat of an emperor’s wrath, but by a deep curiosity about our place in the cosmos. While many popular science books focus exclusively on early-universe phenomena like inflation and the Big Bang, there is a noticeable gap in the literature addressing computational physics and numerical cosmology, and this book fills that void.

  • 2023 Reaktion Books 200pp £15.95/$22.50 hb

The post The art of cosmic simulations: can we build a universe on a computer? appeared first on Physics World.

Laser-driven accelerator benefits from clever use of light pulses

Par : No Author

Physicists in Germany say they have passed an important milestone in the development of laser-driven, plasma-based particle acceleration. Proton pulses with energies as high as 150 MeV were created by Tim Ziegler and colleagues at Helmholtz Centre Dresden–Rossendorf (HZDR). This is about 50% higher than the previous record for the technique, and was achieved by better exploiting the temporal profile of laser pulses.

Conventional particle accelerators use radio-frequency cavities to create the high voltages needed to drive particles to near the speed of light. These facilities tend to be big and energy hungry, and they often require expensive cryogenic cooling. This limits the number of facilities that can be built and where they can be located. If accelerators could be made smaller and less expensive, it would be a boon for applications as diverse as cancer therapies and materials science.

As a result, there is a growing interest in laser-driven, plasma-based accelerators, which have the potential to be far more compact and energy efficient than conventional systems.

Ripping away electrons

These accelerators work by firing intense laser pulses into wafer-thin solid targets. The pulse rips away electrons from the target, leaving behind the positively charged atomic cores. This creates a very large voltage difference over a very small distance – which can be used to accelerate pulses of charged particles such as protons.
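
To get a feel for the numbers (the figures below are purely illustrative and are not quoted by the HZDR team), a potential difference of order a megavolt sustained across a micrometre-scale charge-separation layer corresponds to an accelerating gradient of roughly

E \sim \frac{V}{d} \approx \frac{10^{6}\ \mathrm{V}}{10^{-6}\ \mathrm{m}} = 10^{12}\ \mathrm{V\,m^{-1}},

many orders of magnitude beyond the tens of megavolts per metre sustained by typical radio-frequency cavities.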

While these voltage gradients can be much larger than those in conventional accelerators, significant challenges must be overcome before this technique can be used in practical facilities.

“The adoption of plasma-based proton acceleration has been hampered by the slow progress in increasing ion energy,” Ziegler explains. One challenge is that today’s experiments are done at one of just a few high-power, ultrashort-pulse lasers around the world – including HZDR’s DRACO-PW facility. “Firing only a few shots per day, access and availability at these few facilities is constrained,” adds Ziegler.

One curious aspect of the ultrashort laser pulses from DRACO-PW is that some of the light precedes the main pulse. This means that the full power of the laser is not used to ionize the target. But now, Ziegler’s team has turned this shortcoming into an advantage.

Early arrival

“This preceding laser light modifies our particle source – a thin plastic foil – making it transparent to the main laser pulse,” Ziegler explains. “This allows the light of the main pulse to penetrate deeper into the foil and initiates a complex cascade of plasma acceleration mechanisms at ultra-relativistic intensities.”

The researchers tested this approach at DRACO-PW. When they had previously irradiated a solid foil target, the plasma accelerated protons to energies as high as 80 MeV.

In their latest experiment, they irradiated the target with a pulse energy of 22 J, and used the leading portion of the pulse to control the target’s transparency. This time, they accelerated a beam of protons to 150 MeV – almost doubling their previous record.

This accelerated proton beam had two distinct parts: a broadband component at proton energies lower than 70 MeV; and a high-energy component comprising protons travelling in a narrow and well-defined beam.

Linear scaling

“Notably, this high-energy component showed a linear scaling of maximum proton energy with increased laser energy, which is fundamentally different to the square-root scaling of the lower energy component,” Ziegler explains. The experiment also revealed that the degree of transparency in the solid target was strongly connected with its interaction with the laser – providing the team with tight control over the accelerator’s performance.
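
Returning to the scaling behaviour: as a rough illustration of how the two laws can be told apart (the numbers below are hypothetical, not the HZDR data), one could fit both a linear and a square-root model to measured maximum proton energies and compare the residuals, for example in Python:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (laser energy in J, maximum proton energy in MeV) pairs
laser_energy = np.array([10.0, 14.0, 18.0, 22.0])
proton_energy = np.array([68.0, 95.0, 123.0, 150.0])

def linear(E, a):
    return a * E           # E_max proportional to laser energy

def sqrt_law(E, a):
    return a * np.sqrt(E)  # E_max proportional to the square root of laser energy

for name, model in [("linear", linear), ("square-root", sqrt_law)]:
    (a,), _ = curve_fit(model, laser_energy, proton_energy)
    residual = np.sum((proton_energy - model(laser_energy, a)) ** 2)
    print(f"{name}: coefficient {a:.2f}, sum of squared residuals {residual:.1f}")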

Ziegler believes the result could pave the way for smarter accelerator systems. “This observed sensitivity to subtle changes in the initial laser-plasma conditions makes this parameter ideal for future studies, which will aim for automated optimization of interaction parameters,” he says.

Now that they have boosted the efficiency of ion acceleration, the researchers are hopeful that laser-driven facilities could be built with a fraction of the space and energy requirements of conventional facilities.

This would be particularly transformative in medicine, says Ziegler. “Our breakthrough opens up new possibilities to investigate new radiobiological concepts for precise, gentle tumour treatments, as well as scientific studies in efficient neutron generation and advanced materials analysis.”

The research is described in Nature Physics.

The post Laser-driven accelerator benefits from clever use of light pulses appeared first on Physics World.
