Physics World

Can a classical computer tell if a quantum computer is telling the truth?

11 March 2024, 10:30

Quantum computers can solve problems that would be impossible for classical machines, but this ability comes with a caveat: if a quantum computer gives you an answer, how do you know it’s correct? This is particularly pressing if you do not have direct access to the quantum computer (as in cloud computing), or you don’t trust the person running it. You could, of course, verify the solution with your own quantum processor, but not everyone has one to hand.

So, is there a way for a classical computer to verify the outcome of a quantum computation? Researchers in Austria say the answer is yes. Working at the University of Innsbruck, the Austrian Academy of Sciences and Alpine Quantum Technologies GmbH, the team experimentally executed a process termed Mahadev’s protocol, which is based on so-called post-quantum secure functions. These functions involve calculations that are too complex for even a quantum computer to crack, but with a “trapdoor” that allows a classical machine with the correct key to solve them easily. The team say these trapdoor calculations could verify the trustworthiness of a quantum computation using only a classical machine.

Honest Bob?

To understand how the protocol works, assume we have two parties. One of them, traditionally known as Alice, has the trapdoor information and wants to verify that a quantum computation is correct. The other, known as Bob, does not have the trapdoor information, and needs to prove that the calculations on his quantum computer can be trusted.

As a first step, Alice prepares a specific task for Bob to handle. Bob then reports the outcome to Alice. Alice could verify this outcome herself with a quantum computer, but if she wants to use a classical one, she needs to give Bob further information. Bob uses this information to entangle several of his main quantum bits (or qubits) with additional ones. If Bob performs a measurement on some of the qubits, this determines the state of the remaining qubits. While Bob does not know the state of the qubits in advance of the measurements, Alice, thanks to her trapdoor calculations, does. This means Alice can ask Bob to verify the qubits’ state and decide, based on his answer, whether his quantum computer is trustworthy.
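
To make the trapdoor idea concrete, here is a classical toy sketch (my own illustration, not the lattice-based, post-quantum secure functions that Mahadev’s protocol actually uses): Rabin’s squaring function, which anyone can evaluate but which only the holder of the trapdoor – the factors of the modulus – can invert.

```python
# Toy trapdoor function (illustration only; NOT post-quantum secure and not the
# functions used in Mahadev's protocol). Anyone can compute f(x) = x^2 mod N,
# but only Alice, who knows the secret factors of N, can recover the square roots.
from math import gcd

P, Q = 10007, 10039          # Alice's secret trapdoor (both primes are 3 mod 4)
N = P * Q                    # the public modulus that Bob works with

def f(x):
    """Publicly evaluable one-way function."""
    return pow(x, 2, N)

def roots_with_trapdoor(y):
    """Alice uses the factors of N to find all four square roots of y mod N."""
    rp = pow(y, (P + 1) // 4, P)      # square root mod P (valid because P = 3 mod 4)
    rq = pow(y, (Q + 1) // 4, Q)      # square root mod Q
    roots = set()
    for sp in (rp, P - rp):
        for sq in (rq, Q - rq):
            # stitch the residues together with the Chinese remainder theorem
            roots.add((sp * Q * pow(Q, -1, P) + sq * P * pow(P, -1, Q)) % N)
    return roots

x_bob = 123456                        # Bob's secret preimage...
y = f(x_bob)                          # ...of which he reports only y
candidates = roots_with_trapdoor(y)
assert x_bob in candidates            # Alice can check Bob's claim classically

# Knowing two roots that are not negatives of each other (a "claw") is as hard
# as factoring N, which is why Bob cannot work his way back to the trapdoor.
a, b = sorted(candidates)[:2]
assert gcd(abs(a - b), N) in (P, Q)
```

Roughly speaking, the functions in the real protocol play the same role but are built from lattice problems believed to resist quantum attack, and it is the two hidden preimages that let Alice predict the state of Bob’s qubits.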

Relieved Alice

The team ran this protocol on a quantum processor that uses eight trapped ⁴⁰Ca⁺ ions as qubits. The measurements Bob makes relate to the energy of the qubits’ quantum states. To obtain a signal above background noise, the researchers ran the protocol 2000 times for each data point, ultimately proving that Bob’s answers could be trusted.

The researchers call their demonstration a proof of concept and acknowledge that more work is needed to make it practical. Additionally, a full, secure verification would require more than 100 qubits, which is out of scope for most of today’s processors. According to Barbara Kraus, one of the team’s leaders and now a quantum algorithms expert at the Technical University of Munich, Germany, even the simplified version of the protocol was challenging to implement. This is because verifying the output of a quantum computation is experimentally much more demanding than doing the computation, as it requires entangling more qubits.

Nonetheless, the demonstrated protocol contains all the steps required for a complete verification, and the researchers plan to develop it further. “An important task concerning the verification of quantum computations and simulations is to develop practical verification protocols with a high security level,” Kraus tells Physics World.

Andru Gheorghiu, a quantum computing expert from the Chalmers University of Technology in Sweden who was not involved in the research, calls it an important first step towards being able to verify general quantum computations. However, he notes that it currently only works for verifying a simple, one-qubit computation that could be reproduced with an ordinary laptop. Still, he says it offers insights into the challenges of trying to scale up to larger computations.

The research appears in Quantum Science and Technology.


How a technique for recycling rare-earth permanent magnets could transform the green economy

11 March 2024, 12:00
Growth prospects Rare-earth permanent magnets are vital for the “green economy”, but with more than 99% scrapped, the potential market for HyProMag’s recycled magnets stretches from wind turbines and computer hard drives to motors in electric cars. (Courtesy, from left: Shutterstock/pedrosala; iStock/madsci; iStock/Aranga87)

I recently went on a trade mission to Canada funded by Innovate UK, where I met Allan Walton – a materials scientist who co-founded a company called HyProMag. Spun off from the University of Birmingham in 2018, HyProMag has developed a technique for recycling rare-earth magnets, which are widely used in wind turbines, electric-vehicle (EV) motors and other parts of the “green economy”.

Having been invited to tour HyProMag’s prototype recycling facility on the Birmingham campus, I saw that the technology was shaping up to be a great UK success story. So when Physics World sent me a press release announcing that the company is due to start commercial production at Tyseley Energy Park in Birmingham by mid-2024, I knew my instincts were well founded.

Rare-earth permanent magnets – as I described in my column a few months ago – are alloys of elements such as neodymium, samarium and cerium. With the transition to a “clean-energy” economy now in full swing, demand for rare earths is high. Estimates suggest that the market will grow by as much as a factor of seven between 2021 and 2040.

Trouble is, some 80–90% of the world’s neodymium is currently made by – or controlled by – Chinese companies. That’s prompted some nations, such as the US, to revamp their own production of permanent magnets. But another way to secure supplies of rare earths is to recycle materials. That’s why the imminent start-up of HyProMag’s facility is so interesting, especially as its process is so energy efficient.

Extracting elements

There are lots of possible methods to extract rare-earth elements from waste materials or from products that have reached the end of their lives. Most of the work has so far focussed on getting the individual elements by first dissolving the magnets and then recovering the rare earths from liquid-waste streams that re-enter the supply chain early in the magnet-making process.

This approach is often called “long-loop” recycling as everything is broken down using various techniques and recovered as rare-earth oxides. These oxides then have to be converted into metals before being cast into alloys and broken down into a fine alloy powder to make the magnets. Long-loop recycling is an important but energy-intensive and expensive process.

The Tyseley plant takes a different approach, based as it is on the University of Birmingham’s patented Hydrogen Processing of Magnet Scrap (HPMS) technique. It uses hydrogen as a processing gas to separate magnets from waste streams as a magnet alloy powder, which can then be compacted into “sintered” rare-earth magnets. Not requiring heat, it’s a relatively quick process dubbed “short-loop” recycling.

When I looked around the company’s prototype line last year, I noticed that it can recycle the hard disk drives (HDDs) found in computers. Each disk can have as much as 16 g of magnetic material, about a quarter of which is rare-earth elements. That’s only a small fraction of the disk’s overall mass but, as you’ll recall me pointing out, a staggering 259 million HDDs were shipped in 2021, so the market is huge.

HyProMag’s production method involves a robot with magnetic-field sensors first identifying the location of the HDD’s motor, which contains the all-important rare-earth permanent magnet. The section with the motor is then chopped off, with the rest of the disk sent for conventional recycling. The motor section is finally exposed to hydrogen at atmospheric pressure and room temperature via the HPMS technique.

Amazingly, the rare-earth magnets – typically alloys of neodymium, iron and boron (NdFeB) – just break apart to form a powder. I’ve seen videos of the process and it’s like watching something turn to rust. Crucially, the powder becomes demagnetized, so any coatings peel away from the surface of the magnets and can easily be separated.

The extracted NdFeB powder is then sieved to remove impurities before being re-processed into new magnetic materials or rare-earth alloys. HyProMag reckons that the process requires 88% less energy than that needed to make rare-earth magnets from primary sources, which is impressive. It has already produced more than 3000 new rare-earth magnets at its pilot plant for project partners and potential customers, with the magnets tested in a wide range of applications in the automotive, aerospace and electronics sectors.

Production promises

But the company wants to get past the trial phase and become a volume supplier of magnets. That’s why the Tyseley scale-up plant is so important. The company reckons it will initially be able to process up to 20 tonnes of rare-earth magnets and alloys a year – and eventually five times that amount. HyProMag is also planning further facilities in Germany and the US.

The technology is promising because so many products contain rare-earth magnets, but when they’re scrapped the magnets get shredded and break apart. The resulting powder remains magnetic, sticking to the ferrous scrap and plant components, but less than 1% of the magnets get recycled. HyProMag can, however, efficiently remove this material before it’s shredded and is already eyeing up a diverse range of economically viable sources of scrap.

“It is difficult to see large-scale recycling of rare-earth magnets taking off without an efficient separation process such as HPMS,” Walton says. “The current pilot line allows us to process up to two tonnes of scrap applications in a single run, with the commercial plant scaled to allow much larger batch sizes.” Loading to powder removal can be done, the company claims, in as little as four hours.

As the demand for rare earths increases and the amount of second-hand magnetic material available also rises, recycling such magnets is becoming an ever-bigger opportunity and an ever-more viable process. Just look at the growth of the EV sector: a typical electric motor has 2–5 kg of magnetic material and worldwide sales of EVs are expected to rise to 65 million per year by 2030, according to market-research firm IHS Markit.

Another huge source of rare earths is wind turbines, many of which are reaching the end of their lives after decades of use. Their generators contain up to 650 kg of rare earths per megawatt of generator capacity. Given that the UK aims to have up to 75 GW of offshore wind capacity by 2050, it will have nearly 50,000 tonnes of rare-earth magnets in the years to come, according to Martyn Cherrington from Innovate UK, who runs its Circular Critical Materials Supply Chain (CLIMATES) programme.
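
As a quick back-of-envelope check of that figure (my own arithmetic, assuming the 650 kg-per-megawatt value applies across the full planned capacity):

\[
75\,000~\mathrm{MW} \times 650~\mathrm{kg\,MW^{-1}} \approx 4.9\times10^{7}~\mathrm{kg} \approx 49\,000~\text{tonnes}.
\]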

Such long-term opportunities often need government support – and the recycling of rare-earth permanent magnets has been no exception. Indeed, the fundamental research behind HyProMag’s work began many years before it was spun off. The company has also benefited from financial support from a range of sources, including UK Research and Innovation’s Driving the Electric Revolution programme, the European Union and private investors.

In 2023 HyProMag Ltd was bought by the Canadian firm Maginito, which is part of Mkango Resources – a mineral-exploration and development company listed on the UK and Canadian stock exchanges. Mkango clearly saw the potential of HyProMag’s recycling and magnet-manufacturing technology. It’s a great UK success story, which could have huge long-term global potential for the circular economy.


Photonic metastructure does vector–matrix multiplication

11 March 2024, 15:03

A new silicon photonics platform that can do mathematical operations far more efficiently than previous designs has been unveiled by Nader Engheta and colleagues at the University of Pennsylvania. The US-based team hopes that its system will accelerate progress in optical computing.

Analogue optical computers can do certain calculations more efficiently than conventional digital computers. They work by encoding information into light signals and then sending the signals through optical components that process the information. Applications include optical imaging, signal processing and equation solving.

Some of these components can be made from photonic metamaterials, which contain arrays of structures with sizes on a par with, or smaller than, the wavelength of light. By carefully controlling the size and distribution of these structures, various information-processing components can be made.

Unlike the bulky lenses and filters that were used to create the first analogue optical computers, devices based on photonic metamaterials are smaller and easier to integrate into compact circuits.

Mathematical operations

Over the past decade, Engheta’s team have made several important contributions to the development of such components. Starting in 2014, they showed that photonic metamaterials can be used to perform mathematical operations on light signals.

They have since expanded on this research. “In 2019, we introduced the idea of metamaterials that can solve equations,” Engheta says. “Then in 2021, we extended this idea to structures that can solve more than one equation at the same time.” In 2023, the team developed a new approach for fabricating ultrathin optical metagratings.

Engheta and colleagues have now set their sights on vector–matrix multiplication, which is a vital operation for the artificial neural networks used in some artificial intelligence systems. The team has created the first photonic nanostructure capable of doing vector–matrix multiplication. The material was made using a silicon photonics (SiPh) platform that integrates optical components onto a silicon substrate.
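
For readers unfamiliar with the operation itself, the short sketch below (ordinary digital code, with made-up numbers) shows the vector–matrix multiplication at the heart of a neural-network layer; the photonic structure performs the same multiply-and-accumulate step, but in analogue form, with the input vector encoded in light and the matrix fixed by the designed nanostructure.

```python
# Digital counterpart of the operation the photonic structure performs in analogue
# form: a vector-matrix multiplication, the core step of a neural-network layer.
# All numbers here are arbitrary placeholders.
import numpy as np

x = np.array([0.2, 0.7, 0.1])            # input vector (e.g. neuron activations)
W = np.array([[0.5, -0.1, 0.3],          # weight matrix (in the photonic version,
              [0.2,  0.8, -0.4]])        # fixed by the inverse-designed structure)

y = W @ x                                # multiply-and-accumulate: one layer's output
print(y)                                 # -> [0.06, 0.56]
```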

Inverse design

The researchers also used an inverse design procedure. Instead of taking a known nanostructure and determining if it has the correct optical properties, inverse design begins with a set of desired optical properties. Then, a photonic structure is reverse-engineered to have those properties. Using this approach, the team designed a highly compact material that is suited to doing vector-matrix multiplications with light.

“By combining the inverse design method with the SiPh platform, we could design structures with sizes on the order of 10–30 microns, with a silicon thickness ranging between 150 and 220 nm,” Engheta explains.

The team says that its new photonic platform can do vector–matrix multiplication far more efficiently than existing technologies. Engheta points out that the platform is also more secure than existing systems. “Since this vector-matrix multiplication computation is done optically and simultaneously, one does not need to store the intermediate-stage information. Therefore, the results and processes are less vulnerable to hacking.”

The team anticipates that their approach will have important implications for how artificial intelligence is implemented.

The research is described in Nature Photonics.


Modelling lung cells could help personalize radiotherapy

12 March 2024, 10:30

A new type of computer model that can reveal radiation damage at the cellular level could improve radiotherapy outcomes for lung cancer patients.

Roman Bauer, a computational neuroscientist at the University of Surrey in the UK, in collaboration with Marco Durante and Nicolò Cogno from GSI Helmholtzzentrum für Schwerionenforschung in Germany, created the model, which simulates how radiation interacts with the lungs on a cell-by-cell basis.

Over half of all patients with lung cancer are treated using radiotherapy. Although this approach is effective, it leaves up to 30% of recipients with radiation-induced injuries. These can trigger serious conditions that affect breathing, such as fibrosis – in which the lining of the alveoli (air sacs) in the lungs is thickened and stiffened – and pneumonitis – when the walls of the alveoli become inflamed.

In order to limit radiation damage to healthy tissue while still killing cancer cells, radiotherapy is delivered in several separate “fractions”. This allows a higher – and therefore more effective – dose to be administered overall because some of the damaged healthy cells can repair themselves in between each fraction.
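
The article does not give the underlying model, but the standard linear-quadratic relation used in radiotherapy planning makes this argument quantitative: delivering a total dose D in n fractions of size d = D/n leaves a surviving fraction of cells

\[
S = \exp\!\bigl[-n(\alpha d + \beta d^{2})\bigr] = \exp\!\left[-\alpha D\left(1 + \frac{d}{\alpha/\beta}\right)\right],
\]

and because late-responding healthy tissue typically has a lower α/β ratio than tumour tissue, shrinking the fraction size d spares healthy tissue proportionally more than it spares the tumour.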

Currently, radiotherapy fractionation schemes are chosen based on past experience and generalized statistical models, so they are not optimized for individual patients. The new model could change that: as Durante, director of the Biophysics Department at GSI, explains, it looks at “toxicity in tissues starting from the basic cellular reactions and [is] therefore able to predict what happens to any patient” when different fractionation schemes are chosen.

The team developed an “agent-based” model (ABM) consisting of separate interacting units or agents – which in this case mimic lung cells – coupled with a Monte Carlo simulator. The ABM, described in Communications Medicine, builds a representation of an alveolar segment consisting of 18 alveoli each 260 µm in diameter. Next, Monte Carlo simulations of irradiation of these alveoli are carried out at the microscopic and nanoscopic scale, and information about the radiation dose delivered to each cell and its distribution is fed back into the ABM.

The ABM uses this information to work out whether each cell would live or die, and outputs the final results in the form of a 3D picture. Crucially, the coupled model can simulate the passage of time and thus show the severity of radiation damage – and the progression of the medical conditions it may cause – hours, days, months or even years after treatment.
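
The sketch below is a deliberately minimal, stand-alone caricature of that coupling (not the team’s BioDynaMo implementation, and every number in it is illustrative): each agent is a cell, a mock Monte Carlo step hands each cell a dose, and a simple dose-response rule decides whether the cell survives the fraction.

```python
# Minimal agent-based caricature of the coupled model (not the published BioDynaMo
# code; all parameters are illustrative). Each agent is a cell that receives a
# per-cell dose from a mock Monte Carlo step and survives or dies accordingly.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000
alpha, beta = 0.15, 0.05            # illustrative response parameters (Gy^-1, Gy^-2)

def monte_carlo_dose(mean_dose_gy):
    """Stand-in for the microscopic Monte Carlo stage: per-cell dose with spread."""
    return rng.gamma(shape=4.0, scale=mean_dose_gy / 4.0, size=n_cells)

def irradiate(alive, dose):
    """Agent update: each living cell survives with probability exp(-(a*d + b*d^2))."""
    p_survive = np.exp(-(alpha * dose + beta * dose**2))
    return alive & (rng.random(n_cells) < p_survive)

alive = np.ones(n_cells, dtype=bool)
for fraction in range(5):                        # five 2 Gy fractions
    alive = irradiate(alive, monte_carlo_dose(2.0))
    print(f"after fraction {fraction + 1}: {alive.sum()} cells alive")
```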

“What I found very exciting is that these computational simulations actually delivered results that matched with various experimental observations from different groups, labs and hospitals. So our computational approach could in principle be used within a clinical setting,” says Bauer, the spokesperson for the international BioDynaMo collaboration, which aims to bring new computational methods into healthcare via the software suite used to build this model.

Bauer began working on computational cancer models after a close friend died from the disease aged just 34. “Every cancer is different and every person is different, with different shaped organs, genetic predispositions and lifestyles,” he explains. His hope is that information from scans, biopsies and other tests could be fed into the new model to provide a picture of each individual. An AI-assisted therapy protocol could then be created that would output a closely tailored treatment plan that improves the patient’s chances of survival.

Bauer is currently seeking collaborators from other disciplines, including physics, to help move towards a clinical trial following lung cancer patients over several years. Meanwhile, the team intends to expand the model’s use into other areas of medicine.

Durante, for instance, is hoping to study viral infection with this lung model as it “may predict the pneumonitis induced by the COVID-19 infection”. Meanwhile, Bauer has begun simulating the development of circuits in the brains of premature babies, with the goal of better understanding “at what time point to intervene and how”.


Sticky UV-sensitive tape makes 2D material transfers easier

12 March 2024, 13:00

A new type of sticky tape that is sensitive to ultraviolet light makes it easier and cheaper to transfer two-dimensional materials like graphene onto different surfaces. According to its Japan-based developers, the new tape technique could revolutionize 2D materials transfer, bringing us closer to integrating such materials into devices.

2D materials form the basis of many advanced electronic and optoelectronic devices. Because they are just a few atoms thick, however, these materials are difficult to transfer onto device surfaces. Current methods are highly complex and often involve etching a substrate with corrosive acids. The materials’ extreme thinness also means they often need a polymer film to support them during the fabrication process. This film must be removed with solvent afterwards, which is time-consuming and costly, and can damage the material by introducing unwanted defects that degrade its electronic and mechanical properties.

A new functional tape

Researchers led by Hiroki Ago of Kyushu University say they have now found an alternative solution. The new functional tape, which the team developed with the help of artificial intelligence (AI), is made from a polyolefin film and a thin adhesive layer. Before it is exposed to UV light, the tape exhibits strong van der Waals interactions with graphene (a 2D form of carbon) and sticks to it. After UV exposure, these interactions weaken so that the graphene can be readily released and transferred onto a target surface. The tape also stiffens slightly after UV exposure, which makes it even easier to peel the graphene off it.

Working in collaboration with experts from the Japanese manufacturing firm Nitto Denko, the researchers then developed transfer tapes for other technologically important 2D materials. These include hexagonal boron nitride (hBN), which is sometimes referred to as white graphene or “graphene’s cousin”, and transition metal dichalcogenides (TMDs), which show promise for post-silicon electronics. In images obtained using optical and atomic force microscopes, the surfaces of these materials after tape transfer appeared smoother and contained fewer defects than those transferred using conventional approaches.

Flexible and easily cut to size

Since the UV tape is flexible and (unlike protective polymer films) does not need to be removed with organic solvents after transfer, it can be used with substrates that are curved or sensitive to such solvents, such as plastics. Ago thinks this could expand the tape’s applications, and he and his colleagues demonstrated this by making a plastic device that uses graphene to sense terahertz radiation. “Such a device could be promising for medical imaging or airport security as this radiation can pass through objects, just like X-rays,” he explains.

The UV tape is easy to cut to the required size, too, making it easier to transfer just the right amount of 2D material. This “cut-and-transfer” process, as the researchers call it, will minimize waste and reduce cost.

A collaboration that stuck

Before developing the new tape, Ago’s research group worked for more than 10 years on chemical vapour deposition as a means of synthesizing high-quality graphene, hBN and TMDs. During that time, he says, many researchers requested their samples, but most of them had problems transferring these 2D materials to their substrates. “I therefore thought: what if they could easily do this transfer by themselves? This is why we started to try making our 2D materials tapes,” Ago says.

Steps in the tape-transfer process: tape is stuck to graphene grown on a copper film, UV light is applied, the graphene and tape are electrochemically separated from the copper and applied to a silicon substrate, and the tape is peeled off, leaving just the graphene on its substrate. Researchers from Kyushu University and Nitto Denko developed the tape, which changes how well it sticks to 2D materials in response to UV light. (Courtesy: Ago Lab, Kyushu University)

To advance the technique, Ago collaborated with Nitto Denko, which makes a wide variety of adhesive tapes. Because these tapes were more often used for thick materials like paper, the collaboration struggled at first, but their work paid off: “After extensive research, we finally succeeded in developing UV tapes and transfer processes suitable for the clean transfer of 2D materials,” Ago tells Physics World.

Towards large-scale manufacturing processes of 2D materials

Ago says the most direct application for the technique, which the team describe in Nature Electronics, would be to integrate it into large-scale manufacturing processes for 2D materials. From there, he adds, “I personally expect the development of cutting-edge advanced devices with our UV tape transfer because we can transfer various types of 2D materials and even stack these materials together in different orientations, a process that allows new electronic properties to emerge.”

Though the transfer process is relatively smooth, Ago and colleagues acknowledge that it does produce some wrinkles and bubbles in the 2D materials. They are working on improvements to the composition of the adhesive layer that might help resolve this problem. Another focus for improvement is to increase the size of the transferred 2D materials beyond the 4-inch (102 mm) wafers they currently use.

“I also want to develop the fabrication of more sophisticated devices using different types of 2D materials and UV tapes,” Ago reveals. “These could substantially change the way electronic and photonics devices are produced.” Further collaborations with academia and industry, he says, could enable the team “to improve this unique tape transfer technique and push forward the realization of commercial products using 2D materials”.


Rhapsody as European synchrotron examines Niccolò Paganini’s violin

12 March 2024, 15:54

An almost 300-year-old violin that was played by the great virtuoso Niccolò Paganini has been studied at the European Synchrotron, the ESRF.

One of the most famous violins in the world, “Il Cannone” was crafted in 1743 by the great luthier Bartolomeo Giuseppe Guarneri. The instrument was Paganini’s most treasured due to its unique acoustic properties.

Paganini is considered to be one of the greatest violinists of all time, so talented that it was rumoured that his mother had sold his soul to the devil to gain his abilities.

The ESRF teamed up with the violin’s custodians, the municipality of Genoa, and the Premio Paganini, to carry out an X-ray analysis to help determine the structural status of the wood and bonding parts of the violin.

The measurements were performed on ESRF’s new beamline, BM18, which is able to construct a 3D X-ray image of the instrument with micrometre resolution using a technique called phase-contrast X-ray microtomography.

It is hoped that carrying out such measurements will help to preserve the instrument, which is only occasionally played.

ESRF scientist Luigi Paolasini, who led the project, says it was a “fantastic experience” to work on the violin.

“[It] opens new possibilities to investigate the conservation of ancient musical instruments of cultural interest, as a crossing point between music, history and science”, he says.


Surf’s up: Physics World admires the famous Severn bore

12 March 2024, 18:38

This morning some of the Physics World team set out from Bristol at 7:00 and by 8:30 we were standing on a muddy riverbank in the pouring rain. Along with a growing crowd of people, we were watching the River Severn rush towards the sea – swollen by this winter’s heavy rains.

While some shared flasks of coffee and tea as they huddled under umbrellas, the braver members of the crowd launched surfboards and kayaks into the cold river. Most had wetsuits and specialist gear on, but one hardy paddler was out in a T-shirt and tracksuit bottoms. (No Physics World personnel got into the river; we watched safely from the bank.)

Then, just after 9:00 and ahead of schedule, a huge wave came roaring up from the sea some 50 km away. This was the Severn’s tidal bore. I first spotted it as it rounded a bend in the river, picking up about half a dozen surfers and kayakers and launching them upstream. While most were just scattered by the wave, two managed to surf several hundred metres past us before being pushed into a tree that was leaning precariously from the opposite bank.

Extreme range

Today’s bore was rated a five-out-of-five, and that’s why we made the trek to watch it. The Severn has one of the highest tides in the world and this morning the tidal range in its estuary (at Avonmouth) was nearly 14 m. This extreme range was caused by the alignment of the Moon and Sun through Earth’s equator – which happens around the equinoxes.

The tidal bore is created when the incoming tide enters a shallow, narrowing river. When the rising tide overtops the river flow, a surge of water travels upstream as a series of waves. Indeed, another amazing aspect of this morning was how rapidly the tide rose as the bore passed. Before the event, the level of the river was constant, but after the wave passed it had risen about 2 m in what seemed like just a few minutes.
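
For those who want to put a number on it, a textbook shallow-water result (not quoted in the original piece, and the depths below are purely illustrative) gives the speed of a bore advancing into still water of depth h₁ with depth h₂ behind the front:

\[
U = \sqrt{\frac{g\,h_{2}\,(h_{1} + h_{2})}{2\,h_{1}}},
\]

so for, say, h₁ ≈ 2 m and h₂ ≈ 4 m the front would move at roughly 8 m/s, a little under 30 km/h.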

There are several other rivers around the world that have tidal bores, and you can read more about them – and the physics behind the phenomenon – in this article by the physicist Michael Berry: “Chasing the Silver Dragon: the physics of tidal bores”.


Solid-state battery electrolyte makes a fast lithium-ion conductor

13 March 2024, 10:30

Researchers at the University of Liverpool, UK have developed a new solid-state battery electrolyte that conducts lithium ions so rapidly, it could compete with the liquid electrolytes found in today’s ubiquitous lithium-ion batteries. This high lithium-ion conductivity is a prerequisite for rechargeable energy storage, but it is unusual in solids, which are otherwise attractive for batteries because they are safer and quicker to charge.

The new electrolyte has the chemical formula Li₇Si₂S₇I and contains ordered sulphide and iodide ions arranged in both a hexagonal and cubic-close-packed structure. This structure makes the material highly conductive because it facilitates the movement of lithium ions in all three dimensions. “One could envisage it as a structure that allows lithium ions to have more ‘options’ to choose from for movement, which means they are less likely to get stuck,” explains Matt Rosseinsky, the Liverpool chemist who led the research.

The right material with the right properties

To identify a material that facilitates this freedom of movement, Rosseinsky and colleagues used a combination of artificial intelligence (AI) and crystal structure prediction tools. “Our original idea was to create a new structural family of ion conductors inspired by the complex and diverse crystal structures of intermetallic materials, such as NiZr, in order to generate a wide range of potential sites for the lithium ions to move between,” Rosseinsky explains. AI and other software tools helped the team know where to look, though “the final decisions were always made by the researchers and not the software”.

After synthesizing the material in their laboratory, the researchers determined its structure with diffraction techniques and its lithium-ion conductivity with NMR and electrical transport measurements. They then demonstrated this high lithium-ion conductivity in practice by integrating the material into a battery cell.

Exploring uncharted chemistry

Rosseinsky’s research focuses on designing and discovering materials to support a transition to more sustainable forms of energy. This type of research involves a wide variety of techniques, including digital and automated methods, exploratory synthesis of materials with new structures and bonding, and the targeted synthesis of materials with real-world applications. “Our study brought all these directions together,” he says.

Discovering materials that differ from known ones is difficult, Rosseinsky adds, not least because any candidate materials must be experimentally realized in the lab. Once he and his colleagues have determined a material’s synthetic chemistry, they must then measure its electronic and structural properties. This inevitably requires interdisciplinary research: in the present work, Rosseinsky teamed up with colleagues in the Materials Innovation Factory, the Leverhulme Research Centre for Functional Materials Design, the Stephenson Institute for Renewable Energy and the Albert Crewe Centre and School of Engineering as well as his own department of chemistry.

Applicable to the larger field of battery research

The process the team developed, which is detailed in Science, could be applicable throughout the field of battery research and beyond, Rosseinsky says. “The knowledge gained in our work about how to favour fast ion motion in solids is relevant for materials other than those employed in lithium-ion batteries and is generalizable to other techniques that rely on ion-conducting materials,” he tells Physics World. “This includes proton or oxide ion conducting materials and solid-state fuel cells or electrolysers for hydrogen generation, as well as sodium- and magnesium-conducting materials in alternative battery structures.”

The researchers say that Li₇Si₂S₇I is likely just the first of many new materials accessible with their new approach. “There is thus much to do in defining which materials can be studied and how their ion transport properties connect to their structures and compositions,” Rosseinsky concludes.


Explaining the origin of life with physics

13 March 2024, 12:00

Can you explain the origin of life on Earth using the principles of thermodynamics and statistical mechanics? It’s not a question that even physics students see in their more challenging assignments. But it is one that Liam Graham – physicist turned economist – attempts to answer in his debut book Molecular Storms: the Physics of Stars, Cells and the Origin of Life.

Throughout Molecular Storms, Graham uses a light, informal tone with a measured injection of humour to keep readers on a direct path from the laws of thermodynamics to the inception of biological diversity. He begins by painting a picture of the motions of molecules in the “molecular storm”. The opening chapters acquaint the reader with the main tenets of statistical mechanics (such as microstates and Brownian motion) as well as, of course, thermodynamics.

Graham clearly explains that the entropy (disorder) of a closed system is destined to increase, and describes in detail the operation of heat engines, motors and their lesser-known cousin, ratchets. Other blockbuster principles of physics – such as Noether’s theorem (which relates conservation laws to symmetries in nature) and quantum superposition – are also introduced in passing, more in the form of acknowledgement than explanation.

Graham continues with an examination of the prerequisites of life. The physics groundwork that he’s laid lets him explore how the formation of planets, the action of enzymes and the biological processes essential to the functioning of cells can all be understood in terms of the thermodynamical concepts of ratchets and heat engines.

This section is supported by a brief but clear detour into how mixtures of molecules are driven to chemical equilibrium by the molecular storm. The diversion into chemistry is necessary for the reader to follow the lengthy discussion in the next few chapters about the reactions of compounds, which play a central role in the metabolism of cells. The book ends with a detailed discussion of the thermodynamics that would have been key to the production of organic molecules in the environments of the newly formed Earth, such as hydrothermal vents and ponds.

As someone with a pure physics background, I was tempted to refer to other sources to fully understand the more biology-heavy chapters. Still, there is enough detail for the reader to comfortably follow the general direction of the book’s argument. But given the virtual impossibility of explaining every relevant process of such a complex subject in detail – while still entertaining and holding the reader’s attention – Graham includes lots of well-researched suggestions for further reading and links to relevant research papers.

Which “hard problem”?

Graham’s career, characterized by a journey across various disciplines including physics, philosophy and economics, is reflected in the structure of his book. This blend of different fields might be why Molecular Storms is such an engaging read. The strong undertone of statistical mechanics throughout the narrative undoubtedly owes its origin to his first degree in theoretical physics from the University of Cambridge.

But Graham also draws on his background in philosophy to address the puzzle of the origin of life, referring repeatedly to the concept of a “Boltzmann brain” – that is, the idea that random fluctuations of matter could give rise to consciousness. In a similar vein, he explicitly demotes the “hard problem of consciousness” – which questions how physical matter gives rise to conscious and subjective experience – saying, “The origin of life is as complex a problem as there is (I suspect it will prove harder than the so-called ‘hard problem’ of consciousness).”

Molecular Storms is likely to appeal to readers on two levels. First, it can be seen as a fascinating guide for a reader with a general interest in physics, examining a physicist’s view of the emergence of life. This casual reader can enjoy the ride without needing to turn to the mathematical calculations outlined in the appendices.

Alternatively, an undergraduate student interested in this area would benefit from working through the calculations and following the explanations. This book is also a good example of the interdisciplinary nature of scientific research, something that is often under-emphasized in undergraduate courses. However, I would advise student readers to have other texts on hand unless they already have a very good conceptual grasp of the principles mentioned.

Indeed, both the casual reader and the student would benefit from referring to the online resources for illustrations of the concepts discussed, as the diagrams in the book are sometimes merely representative of the online content.

But as most Physics World readers are likely to fall into one of these categories, I would highly recommend that you add Molecular Storms to your reading list.

  • 2023 Springer 291pp £29.99 paperback, £23.99 ebook


Ultraviolet dual-comb spectroscopy system counts single photons

13 March 2024, 14:50
How it works: the top frequency comb is passed through a sample of interest and then into a beamsplitter. The bottom frequency comb operates at a slightly different pulse repetition frequency and is combined with the top comb in the beamsplitter. Photons in the combined beam are counted by a detector. (Courtesy: Bingxin Xu et al/Nature/CC BY 4.0)

Dual-comb spectroscopy – absorption spectroscopy that utilizes the interference between two frequency combs – has been performed at ultraviolet wavelengths using single photons. The work could lead to the use of the technique at shorter wavelengths, where high-power comb lasers are unavailable. The technique could also find new applications.

Since their invention at the dawn of the 21st century, frequency combs have become important tools in optics. As a result, Theodor Hänsch of the Max Planck Institute for Quantum Optics in Germany and John Hall of the US National Institute for Standards and Technology shared the 2005 Nobel Prize for their invention. A frequency comb comprises short, periodic light pulses containing a very broad spectrum of light with intensity peaks at regular frequency intervals – resembling the teeth of a comb. Such spectra are particularly useful whenever light at a precisely defined frequency is needed, such as in atomic clocks or spectroscopy.

In traditional spectroscopy, a frequency comb can be used as an “optical ruler” when probing a sample with another laser. “You have a continuous-wave [CW] laser interacting with the sample that you want to analyse and you want to measure the absolute frequency of this CW laser,” explains Nathalie Picqué of the Max Planck Institute of Quantum Optics. “And for this you beat the laser with the frequency comb. So the frequency comb gives you the possibility to measure any frequency but at a given time you only measure one.”

Intensity changes

In contrast, dual-comb spectroscopy exposes the sample to broadband light from a frequency comb itself. As the input is broadband, the output is also broadband. However, the light passing through the sample combines with the light from a second frequency comb with a slightly different repetition frequency at an interferometer. The changing intensity of the light emerging from the interferometer is recorded (see figure).

If the sample has not interacted with the first frequency comb, the periodic intensity change simply reflects the difference in repetition frequency between the combs. However, if the sample absorbs light from the comb, this alters the shape of the intensity modulation. The absorbed frequencies can be recovered from a Fourier transform of this temporal interference pattern.
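
The down-conversion at the heart of this can be captured in a few lines of simulation (my own illustrative model with made-up numbers, not the parameters of the experiment described below): each pair of neighbouring comb teeth beats at a multiple of the small repetition-rate difference, so the optical spectrum – including any absorption dips – is mapped into the radio-frequency domain, where a Fourier transform of the detector signal recovers it.

```python
# Illustrative model of dual-comb down-conversion (made-up parameters, not the
# experiment's). Comb 1 passes through a sample that absorbs some teeth; comb 2
# runs at a slightly different repetition rate. Each tooth pair beats at n*df,
# so a Fourier transform of the detector signal reproduces the optical spectrum.
import numpy as np

f_rep = 100e6            # comb 1 repetition rate (Hz)
df = 1e3                 # repetition-rate difference between the combs (Hz)
n_teeth = 200

amps = np.ones(n_teeth)  # comb-1 tooth amplitudes after the sample:
amps[90:110] *= 0.3      # ...teeth 90-109 are partially absorbed

t = np.arange(0, 0.2, 1e-6)              # 0.2 s of detector record, 1 us sampling
signal = sum(a * np.cos(2 * np.pi * n * df * t)
             for n, a in enumerate(amps, start=1))

spectrum = np.abs(np.fft.rfft(signal))   # RF comb: one line per optical tooth
rf_freqs = np.fft.rfftfreq(len(t), d=1e-6)

for n in (50, 100, 150):                 # map the RF line at n*df back to tooth n
    line = spectrum[np.argmin(np.abs(rf_freqs - n * df))]
    print(f"tooth {n} (optical offset {n * f_rep / 1e9:.0f} GHz): RF amplitude {line:.0f}")
```

The absorbed teeth (90–109) show up as weakened RF lines, which is exactly how the optical absorption spectrum is read out from a slow detector.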

Dual-comb spectroscopy has been very successful at infrared frequencies. Using the technique at higher frequencies, however, is problematic. “There are no ultrafast lasers that directly emit in the ultraviolet region,” explains Picqué, “so you need to use non-linear frequency conversion, and the more you want to go into the ultraviolet, the more stages of non-linear frequency conversion you need.” Non-linear frequency up-conversion is very inefficient, so the power drops at each stage.

Low-power solution

So far, most researchers have focused on increasing the power in the incoming infrared laser. “You have a very challenging experiment with high power lasers, a lot of noise and a very expensive system,” says Picqué. In the new research, therefore, Picqué, Hänsch and colleagues at the Max Planck Institute for Quantum Optics created a system with much lower power requirements.

The researchers up-converted two infrared combs twice, first in a lithium niobate crystal and then in bismuth triborate. The resulting ultraviolet combs generated average optical powers of at most 50 pW. The researchers passed one of these through a cell of heated caesium gas, while the other was sent straight to the interferometer. One arm of the interferometer was directed onto a single-photon counter. “There are really very few counts,” says Picqué. “If you take one scan, the signal does not look like anything.” However, they then repeated exactly the same scan over and over again. “When we repeat the scan 100,000 or close to a million times we get our time domain interference signal, which is the signal we are looking for.”
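
The single-photon-counting side of this can also be illustrated with a toy calculation (assumed numbers, not the team’s data): when the mean count per time bin is far below one photon, a single scan is essentially noise, but because the counts are Poisson-distributed, summing a large number of identical scans recovers the underlying interference pattern.

```python
# Toy illustration of recovering a signal at the single-photon level (assumed
# numbers, not the team's data). Counts per time bin are Poisson-distributed with
# a mean well below one photon; averaging many identical scans reveals the pattern.
import numpy as np

rng = np.random.default_rng(1)
bins = np.arange(500)
pattern = 0.02 * (1 + np.cos(2 * np.pi * bins / 50))   # true mean counts per bin

one_scan = rng.poisson(pattern)          # mostly zeros: looks like nothing
n_scans = 1_000_000                      # a sum of Poisson variables is Poisson,
total = rng.poisson(pattern * n_scans)   # so simulate all the scans in one draw
estimate = total / n_scans               # averaged counts per bin

print("single scan, first 10 bins:", one_scan[:10])
print("estimate vs truth at bin 0:", round(estimate[0], 4), pattern[0])
```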

In around 150 s of scanning time, the researchers could resolve two atomic transitions in caesium that have similar frequencies, with signal-to-noise ratios of about 200. They could also observe the splitting of one of the transitions caused by the hyperfine interaction.

“The idea of working at very low light levels is very counterintuitive,” says Picqué. “We show that the technique can work with optical powers that are one million times weaker than what has been used before.” They now hope to push to even shorter wavelengths in the vacuum ultraviolet. Aside from ultraviolet spectroscopy, the capacity to utilize dual-comb spectroscopy at very low powers could prove useful in a variety of other situations, explains Picqué, such as where samples are prone to radiation damage.

Dual-comb expert Jason Jones of the University of Arizona, who does experiments far into the vacuum ultraviolet, is enthusiastic about the Max Planck work. “No matter how far you go into the ultraviolet, you’ll always have some minimum amount of light because of the way it’s generated, so if you can use less light, you’ll always be able to go deeper,” he says. “Being able to use single photons and still get good signal-to-noise spectroscopic results is significant for that.”

The research is described in Nature.


New attosecond X-ray spectroscopy technique ‘freezes’ atomic nuclei in place

13 March 2024, 17:00

Scientists can now follow the movement of electrons and the ionization of molecules in real time thanks to a new attosecond X-ray spectroscopy technique. Like stop-motion photography, the technique effectively “freezes” the atomic nucleus in place, meaning that its motion does not skew the results of measurements on the electrons whizzing around it. According to the technique’s developers, it could be used not only to probe the structure of molecules, but also to track the birth and evolution of reactive species that form via ionizing radiation.

“The chemical reactions induced by radiation that we want to study are the result of the electronic response of the target that happens on the attosecond timescale (10⁻¹⁸ seconds),” explains Linda Young, a physicist at Argonne National Laboratory and the University of Chicago, US, who co-led the research together with Robin Santra of the Deutsches Elektronen-Synchrotron (DESY) and the University of Hamburg in Germany and Xiaosong Li of the University of Washington, US. “Until now, radiation chemists could only resolve events at the picosecond timescale (10⁻¹² seconds), which is a million times slower than an attosecond. It’s kind of like saying ‘I was born and then I died.’ You’d like to know what happens in between. That’s what we are now able to do.”

Pump and probe

The new technique works as follows. First, the researchers apply an attosecond X-ray pulse with a photon energy of 250 electron volts (eV) to a sample – of water, in this case, though the team say the technique could work with a wide range of condensed-matter systems. This initial “pump” pulse excites electrons from the water molecule’s outer (valence) orbitals, which are responsible for molecular bonding and chemical reactions. These orbitals are further from the atomic nucleus, and they have much lower binding energies than the inner “core” orbitals: around 10-40 eV compared to about 500 eV. This makes it possible to ionize them – a process known as valence ionization – without affecting the rest of the molecule.

Around 600 attoseconds after the valence ionization, the researchers fire a second attosecond pulse – the probe pulse – at the sample, with an energy of around 500 eV. “The short time delay between the pump and probe pulses is one of the reasons why the hydrogen atoms themselves do not have time to move and are like ‘frozen’,” Young explains. “This means their movement does not affect the measurement results.”
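
A rough order-of-magnitude estimate (mine, not the paper’s) shows why such a short delay effectively freezes the nuclei: a hydrogen atom moving at a typical thermal speed of roughly 2 km/s travels, in 600 attoseconds, only about

\[
2\times10^{3}~\mathrm{m\,s^{-1}} \times 6\times10^{-16}~\mathrm{s} \approx 1\times10^{-12}~\mathrm{m} \approx 0.001~\mathrm{nm},
\]

around a hundredth of the length of an O–H bond (roughly 0.1 nm).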

When the probe pulse interacts with the holes (vacancies) left behind in the valence orbitals following valence ionization, the pulse’s energy distribution changes. By reflecting the pulse from a grating that disperses this energy distribution onto a two-dimensional detector, the researchers obtain what Young calls a spectral “snapshot” or “fingerprint” of electrons occupying the valence orbitals.

Finding flaws in earlier results

By observing the motion of the X-ray-energized electrons as they move into excited states, the researchers uncovered flaws in the interpretation of earlier X-ray spectroscopy measurements on water. These earlier experiments produced X-ray signals that appeared to stem from different structural shapes, or “motifs”, in the dynamics of water or hydrogen atoms, but Santra says the new study shows this is not the case.

On target: to record the movement of electrons excited by X-ray radiation, the team created a thin, approximately 1 cm-wide sheet of liquid water as a target for the X-ray beam. (Courtesy: Emily Nienhuis, Pacific Northwest National Laboratory)

“In principle, one could have thought that the timing precision of this type of experiment is limited by the lifetime (which is around a couple of femtoseconds, or 10⁻¹⁵ seconds) of the X-ray-excited electronic quantum states produced,” he tells Physics World. “Through quantum-mechanical calculations, however, we showed that the observed signal is confined to less than a femtosecond. This is the reason why we were able to show that X-ray spectroscopy measurements on the structure of liquid water had been previously misinterpreted: unlike these earlier measurements, ours were not affected by moving hydrogen atoms.”

Experimental goals and challenges

The researchers’ initial goal was to understand the origin of reactive species created when X-rays and other forms of ionizing radiation impinge on matter. These reactive species form on an attosecond time scale following ionization, and they play important roles in biomedical and nuclear science as well as chemistry.

One of the challenges they encountered was that the X-ray beamline they used – ChemRIXS, part of the Linac Coherent Light Source at the SLAC National Accelerator Laboratory in Menlo Park, California – had to be completely reconfigured to perform all-X-ray attosecond transient absorption spectroscopy. This powerful new technique makes it possible to study processes on extremely short time scales.

The researchers now plan to extend their studies from pure water to more complex liquids. “Here, the different molecular constituents can act as traps for the freed electrons and produce new reactive species,” Young says.

They report their present work in Science.


Magnetic microbots show promise for treating aneurysms and brain tumours

14 March 2024, 11:00
Remote control: schematic showing (top panel) how microfibrebots can anchor to a blood vessel, navigate via helical propulsion, elongate to pass through narrow regions and aggregate to block blood flow. Potential applications (bottom panel) include coil embolization of aneurysms and tumours, and selective particle embolization of tumours. (Courtesy: Jianfeng Zang, HUST)

A team of researchers in China has developed novel magnetic coiling “microfibrebots” and used them to embolize arterial bleeding in a rabbit – paving the way for a range of controllable and less invasive treatments for aneurysms and brain tumours.

When attempting to stop bleeding in aneurysms or stem the flow of blood to brain tumours (a process known as embolization), surgeons generally run a slim catheter through the femoral artery and navigate it through blood vessels to deliver embolic agents. Although widely used, these catheters are difficult to guide through complex vascular networks.

In an effort to address this challenge, a team of researchers at Huazhong University of Science and Technology (HUST) created tiny magnetic, soft microfibrebots that can carry out such procedures remotely. The devices, made from a magnetized fibre twisted into a helix shape, can fit a range of different vessel sizes and move along in a corkscrew fashion when exposed to an external magnetic field. The results of the research, presented in Science Robotics, demonstrate how the devices were successfully used to stem arterial bleeding in a rabbit.

As co-author Jianfeng Zang explains, the microfibrebots are made by using thermal energy to draw magnetic soft composite materials into microfibres, which are then “magnetized and moulded to give them helical magnetic polarity”. By controlling an external magnetic field, the researchers could make the soft microfibre robot reversibly change shape (elongating or aggregating) and propel itself in a spiral motion through flowing blood, both upstream and downstream. This allows it to be navigated through complex vascular systems and perform robotic embolization in the sub-millimetre region.

“The article shows how we performed in vitro embolization of aneurysms and tumours in a neurovascular model, and performed robotic navigation and embolization under real-time fluoroscopy in an in vivo rabbit femoral artery model,” says Zang. “These experiments demonstrate the potential clinical value of this work and pave the way for future robot-assisted embolization surgical options.”

Anchoring function

According to first author Xurui Liu, a PhD student at HUST, each microfibrebot possesses an anchoring function, similar to that of a vascular stent, enabling it to be stably anchored to the inner wall of blood vessels through contact friction to avoid being washed away by the blood flow.

“Its helical magnetization distribution provides the microfibre robot with a net magnetization direction along its central axis. By applying an external magnetic field consistent with the direction of the net magnetization direction, the robot can be elongated,” she says.

“Conversely, when the external magnetic field is opposite to the direction of net magnetization, the robot will gather,” she adds. “The softness and high robustness of this microfibre robot ensures that its morphological reconstruction function remains fully reversible after more than a thousand aggregation and elongation cycles.”

Promising alternative

In contrast to the magnetic soft robots reported in earlier research, Zang confirms that the helical magnetization of the new robots allows their deformation and movement modes to be decoupled and controlled independently by the applied magnetic field, providing “unique magnetic field control flexibility”.

“This feature not only allows a single microfibre robot to move at high speed against the blood flow under the action of a rotating magnetic field, but also enables independent control of the shape and movement of multiple microfibrebots,” Zang explains.

“Additionally, these devices are compatible with commonly used interventional catheters to maximize their potential for use in clinical settings,” he adds.

Faced with the challenges of traditional methods such as catheter-based embolization – particularly in terms of their operational limitations and insufficient precision, as well as the health risks related to doctors being exposed to radiation for long periods of time (from the X-ray guidance system) – Zang points out that the development of magnetic microfibrebot technology provides clinicians with a new means of improving existing treatments.

“The development of microfibrebots provides a new perspective for vascular embolization treatment and shows application potential in minimally invasive surgical treatment technology. This technology provides an effective complement or alternative to traditional catheter embolization technology by precisely controlling blood flow occlusion,” he says.

Zang notes that while this technology shows potential, there are still challenges to overcome prior to its clinical application. These include structural optimization of microfibrebots, increasing the biocompatibility of materials, and development of blood vessel positioning and tracking systems. “The research team is working to address these key issues to advance the application of the technology,” he adds.

The post Magnetic microbots show promise for treating aneurysms and brain tumours appeared first on Physics World.


Controllable Cooper pair splitter could separate entangled electrons on demand

14 March 2024 at 14:00

Entangled particles – that is, those with quantum states that remain correlated regardless of the distance between them – are important for many quantum technologies. Devices called Cooper-pair splitters can, in principle, generate such entangled particles by separating the electrons that pair up within superconducting materials, but the process was considered too random and uncontrollable to be of practical use.

Physicists at Aalto University in Finland have now put forward a theoretical proposal indicating that these electron pairs could, in fact, be split on demand by applying time-dependent voltages to quantum dots placed on either side of a superconducting strip. The technique, which preserves the entangled state of the separated electrons, might aid the development of quantum computers that use entangled electrons as quantum bits (qubits).

When a conventional superconducting material is cooled to very low temperatures, the electrons within it overcome their mutual repulsion and pair up. These so-called Cooper pairs propagate through the material without any resistance. The paired-up electrons are naturally entangled, with spins that point in opposite directions. Extracting and separating these electron pairs while preserving their entanglement would be useful for a host of applications, including quantum computing, but doing this is no easy task.

In the latest work, which is detailed in Physical Review B, physicists led by theorist Christian Flindt propose a new way to operate a Cooper pair splitter. Their design consists of a superconducting strip that contains two electrodes and is coupled to two quantum dots (nanosized pieces of semiconducting material) on either side of the strip. When a voltage is applied to the electrodes, Cooper-paired electrons within the superconductor are drawn to the tip of the superconducting strip and become separated, with each quantum dot accommodating one separated electron at a time. These separated electrons can then be passed on through a nanowire.

Time-dependent voltages

The key to the team’s set-up is that the voltage applied to the electrode on one side of the strip varies in time such that exactly two Cooper pairs are split and ejected during each periodic oscillation. “In experiments so far, the applied voltages were kept constant,” Flindt explains. “In our proposal, we show how the splitting of Cooper pairs can be controlled with time-dependent voltages applied to the device.”

Based on their calculations, Flindt and colleagues estimate that their Cooper-pair splitter could separate entangled electrons at a frequency in the gigahertz range. Most modern computers operate with clock cycles in this range, and for many quantum technologies it is important to have a similarly fast source of entangled particles. Indeed, combining several splitters together could help form the basis of a quantum computer that operates using entangled electrons, the team says.
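
As a rough back-of-the-envelope illustration of what such a source would deliver (the drive frequency below is an assumption chosen to sit in the gigahertz range mentioned above, not a figure from the paper):

```python
# Back-of-the-envelope rate estimate (illustrative assumption, not a value from the paper)
drive_frequency_hz = 1e9    # assumed GHz-range periodic drive, as discussed in the text
pairs_per_cycle = 2         # the proposal splits and ejects two Cooper pairs per oscillation
entangled_pairs_per_second = pairs_per_cycle * drive_frequency_hz
print(f"{entangled_pairs_per_second:.1e} entangled electron pairs per second")
```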

Experimentalists invited to “pick up the baton”

The Aalto physicists decided to undertake their study because they realized that there was a need to control the splitting of Cooper pairs. Their biggest challenge was to figure out how to vary the voltages in time such that the Cooper pairs would be split on demand. Looking forward, they think it should be possible to realize their proposal experimentally and hope that experimentalists will “pick up the baton”.

“It would also be interesting to investigate how our on-demand Cooper pair splitter can be integrated into a larger quantum electronic circuit to develop quantum information processing,” Flindt tells Physics World.

The post Controllable Cooper pair splitter could separate entangled electrons on demand appeared first on Physics World.


‘It can be a long road and that’s okay’ – Prineha Narang on going the distance in science

By: No Author
15 March 2024 at 10:45

When she was in middle school in the US between the ages of 11 and 14, Prineha Narang wasn’t planning on becoming a physicist. As a sporty preteen, her attention was instead on the running track. “I was convinced that I was going to do something athletic. I had always been good in my math and science courses, but I’d never really thought of that as a career,” Narang explains. “It was actually a track coach who gently pushed me towards STEM (science, technology, engineering and mathematics) saying, ‘You’re good at running, but I hear you’re really good at math and science.’”

The coach’s comment would seem to be justified. Narang went on to do a PhD in applied physics at Caltech, and after postdoctoral positions at Harvard University and the Department of Physics at MIT, she joined the faculty at Harvard in 2017. But she says there wasn’t a single defining moment where she realized she was destined for a career in physics, describing her trajectory as a gradual progression.

Now Narang runs a group at the University of California, Los Angeles (UCLA), where she researches non-equilibrium materials science – controlling quantum matter and quantum systems using external drives like lasers or electron beams. The work of the NarangLab spans areas of physics, chemistry, computing and engineering.

Writing your own rules

Narang says that her journey to define herself and her research has not been seamless. She notes that there was a lack of programmes focused on undergraduate women in physics, and little support for women in the field, adding that perhaps this inequality was something that hadn’t been identified as a problem at that time.

“One of the challenges was finding someone who could help me find my way through all of the different things you could do in this field, as I recognized that there weren’t that many female faculty members to assure me that I belonged there,” Narang says. “That kind of a question remarkably went away when I became a graduate student at Caltech and had incredibly supportive mentors, both in my own research as well as others on the faculty.”

In our group, we have embraced this interdisciplinary approach

Another challenge Narang faced came after she had become a full faculty member. She had to decide what her research area would be and how it would fit in the broader sphere of physics. The work of the NarangLab is hard to fit into a box, but that’s exactly how she likes it. “In our group, we have embraced this interdisciplinary approach,” Narang explains. “We think about how you can bring together condensed matter and optics, how you can bring together device physics – and make this happen in a synergistic manner.”

Staying curious

Narang has received many awards for her research, including the 2023 Maria Goeppert Mayer Award from the American Physical Society and a 2023 Guggenheim Fellowship in Physics. She was also recently selected as a United States Science Envoy. But she says there’s a surprising secret to her work. “The focus of the group is doing excellent science while having fun,” she explains. “That’s something that we emphasize a lot, and it comes from my own experience in science. I want people to feel that excitement when working on a topic, especially when they have a new result.”

I get a lot of satisfaction out of communicating the science that we’re doing because I’m excited about it

Narang applies the same enthusiasm when communicating the group’s results. She adds that this is particularly important when disseminating ideas that aren’t easily accessible, such as those the team works with every day. “I think it’s really important to go out there and make that effort,” Narang says. “I get a lot of satisfaction out of communicating the science that we’re doing because I’m excited about it, and I feel like if I could get other people to see it the way I do, they would be excited about it, too.”

Life lessons

Narang doesn’t let doing and talking about exciting physics stop her from outdoor pursuits like mountain climbing and running. And though athletics may be just a hobby today, her early interest in it has given her life experience that she carries over to her career.

“I still run. Science has a lot in common with distance running. For example, the most important thing is actually to get out there and run and continue to try,” Narang says. “Some days are amazing, and other days you feel like, ‘Oh my gosh, that crushed me’. It kind of feels the same with the science.”

Narang adds that the key to overcoming this feeling in both long-distance running and in science is the determination to push through feelings of despondency. “Something I try to convey to junior scientists is that not everything needs to come to you instantly,” Narang concludes. “It can be a long road, and that’s okay.”

The post ‘It can be a long road and that’s okay’ – Prineha Narang on going the distance in science appeared first on Physics World.


Mapping brain circuits reveals potential treatment targets for brain disorders

By: Tami Freeman
15 March 2024 at 11:00

The brain’s frontal circuits play a vital role in controlling motor, cognitive and behavioural functions. Disruption of the fronto-subcortical circuits, which connect the frontal cortex in the forebrain with basal ganglia located deeper within, can result in a range of neurological disorders. It’s not clear, however, which connections are associated with which dysfunctions. To shed light on this problem and help identify potential treatment targets, an international research team has used deep brain stimulation (DBS) to map the circuits associated with four different brain disorders.

DBS is an invasive therapy in which surgically implanted electrodes modulate brain networks by electrical stimulation of target regions. One such target – the subthalamic nucleus – is of particular interest as it relays input from the entire frontal cortex to the basal ganglia. Indeed, electrical stimulation of the subthalamic nucleus has been shown to alleviate symptoms of several brain disorders.

The research team – led by Andreas Horn from the Center for Brain Circuit Therapeutics at Harvard Medical School and Charité – Universitätsmedizin Berlin, and Ningfei Li from Charité – studied a total of 534 DBS electrodes implanted to treat four brain disorders: Parkinson’s disease (PD), dystonia, obsessive-compulsive disorder (OCD) and Tourette’s syndrome (TS).

First author Barbara Hollunder and colleagues first examined data from 197 patients who had DBS electrodes bilaterally implanted in the subthalamic nucleus to treat these disorders: 70 with dystonia, 94 with PD, 19 with OCD and 14 with TS.

For each disorder, they mapped stimulation effects at the subthalamic level across the cohort to identify the sites associated with the most beneficial stimulation. These DBS “sweet spots” differed in location on the subthalamic nucleus for the four disorders.

Fibre bundle associated with symptom improvement following DBS in OCD; a set of bilateral electrodes implanted for treatment in a single patient is shown alongside the tract. (Courtesy: Barbara Hollunder)

Next, the researchers mapped stimulation effects to the fronto-subcortical circuits, enabling them to identify which brain circuits had become dysfunctional (and could be targeted for treatment) in each disorder. The circuits that benefitted most from stimulation (referred to as “sweet streamlines”) included projections from sensorimotor cortices for dystonia, the primary motor cortex for TS, the supplementary motor area for PD, and the ventromedial prefrontal and anterior cingulate cortices for OCD.

“We were able to use brain stimulation to precisely identify and target circuits for the optimal treatment of four different disorders,” says Horn in a press statement. “In simplified terms, when brain circuits become dysfunctional, they may act as brakes for the specific brain functions that the circuit usually carries out. Applying DBS may release the brake and may in part restore functionality.”

Clinical potential

These disease-specific streamline models hold potential for guiding future clinical treatments. To confirm this capability, the researchers performed further experiments using independent data. They validated the PD and OCD streamline models (selected due to patient availability) in two additional retrospective groups of 32 and 35 patients, respectively.

In these additional patients, the researchers used the level of overlap between stimulation volumes and the respective streamline model to estimate clinical outcomes. For both disorders, they observed a good match between the estimates and improvements in symptoms.
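
To illustrate the kind of overlap-based estimate described above, here is a minimal sketch with entirely synthetic data; the voxel grids, function names and toy outcome model are our own illustrative assumptions, not the authors’ analysis pipeline.

```python
import numpy as np

def overlap_fraction(stim_volume, streamline_mask):
    """Fraction of the streamline model's voxels covered by the stimulation volume."""
    return np.logical_and(stim_volume, streamline_mask).sum() / streamline_mask.sum()

rng = np.random.default_rng(0)
streamline_mask = rng.random((20, 20, 20)) > 0.9   # toy "sweet streamline" voxel mask

overlaps, improvements = [], []
for _ in range(30):                                 # toy cohort of 30 patients
    stim_volume = rng.random((20, 20, 20)) > 0.8    # toy stimulation volume
    ov = overlap_fraction(stim_volume, streamline_mask)
    overlaps.append(ov)
    improvements.append(0.6 * ov + 0.05 * rng.standard_normal())  # synthetic outcome for illustration

# A useful streamline model should show a positive correlation between
# overlap with the stimulation volume and the observed symptom improvement.
r = np.corrcoef(overlaps, improvements)[0, 1]
print(f"correlation between overlap and improvement: {r:.2f}")
```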

The researchers also performed three prospective experiments using the identified circuits to improve treatment benefit. For two patients, they reprogrammed their DBS implants to maximize the overlap of stimulation volumes with the respective streamline model. The first patient, a 67-year-old male with PD, had benefited from a 60% reduction in symptoms upon conventional clinical treatment with DBS. Optimized stimulation based on streamline-guided parameters improved this treatment benefit further to a 71% reduction in symptoms.

The second patient, a 21-year-old female with severe treatment-resistant OCD, experienced a 37% reduction in global obsessive-compulsive symptoms one month after streamline-based DBS reprogramming, compared with a 17% symptom reduction under clinical stimulation parameters.

Finally, the team implanted a pair of subthalamic electrodes to treat a 32-year-old male who had suffered from treatment-resistant OCD since the age of 18. Four weeks after surgery, with DBS informed by the streamline models, he reported a 77% reduction in global obsessive-compulsive symptoms, with improvements seen within one day of switching on the DBS.

The researchers suggest that their successful validations of the OCD and PD streamline targets may provide initial evidence for clinical applications in the context of prospective validation studies. They note that – if further confirmed – the identified circuits may represent therapeutic targets that could also be used for stereotactic targeting in neurosurgery and potentially non-invasive transcranial magnetic stimulation.

Li tells Physics World that in future, the researchers “plan to refine the model, focusing more on fine-grained dysfunctional brain circuits, and validate our findings through prospective clinical trials”.

The researchers describe their findings in Nature Neuroscience.

The post Mapping brain circuits reveals potential treatment targets for brain disorders appeared first on Physics World.


Keith Burnett: IOP president says it is our duty to make physics more inclusive

15 March 2024 at 13:50

This episode of the Physics World Weekly podcast features a wide ranging interview with Keith Burnett, who is president of the Institute of Physics (IOP).

The IOP is the professional body and learned society for physics in the UK and Ireland. It represents 21,000 members, and a key goal of the institute is to make physics accessible to people from all backgrounds.

Burnett, who is halfway through his two-year term in office, was knighted in 2013 for his services to science and higher education. He has served as vice chancellor of the University of Sheffield and is also an advocate for high-quality vocational education and technician training.

He talks to Physics World’s Matin Durrani about the challenges facing universities; physicists as entrepreneurs; supporting early-career physicists; and the need for the IOP to continue its drive to boost the diversity of the physics community.

  • The Institute of Physics owns IOP Publishing, which brings you Physics World

Image courtesy of Hannah Veale

The post Keith Burnett: IOP president says it is our duty to make physics more inclusive appeared first on Physics World.


Soap bubbles transform into lasers

By: Stefan Popa
15 March 2024 at 15:00

Soap has long been a household staple, but scientists in Slovenia have now found a new use for it by transforming soap bubbles into tiny lasers. Working at the Jožef Stefan Institute and the University of Ljubljana, they created bubbles a few millimetres in diameter from a soap solution containing a fluorescent dye. When they pumped these bubbles with a pulsed laser, the bubbles began to lase. The wavelengths of light a bubble emits are highly responsive to its size, paving the way for bubble-laser sensors that can detect tiny changes in pressure or ambient electric field.

A laser requires three key components: a gain medium, an energy source for the gain medium and an optical resonator. The gain medium amplifies the light, meaning that for every photon that goes into the gain medium, more than one photon comes out. This phenomenon can be exploited by placing the gain medium in a resonator – for example, between two mirrors or inside a loop – such that the photons emitted by the gain medium go back through it to create an amplified, coherent beam of light.

The soap-bubble lasers do exactly that. To make them, Matjaž Humar and Zala Korenjak mixed standard soap solution with fluorescent dye, which acts as the gain medium. The bubbles form at the end of a capillary tube, and illuminating them with a pulsed laser pumps the gain medium. The light the gain medium produces circulates along the surface of the bubble, which acts as a resonator.

To characterize the bubble’s output, the researchers used a spectrometer to measure the wavelengths of light it produces. Only after the system reaches a threshold pumping energy do the researchers see peaks in the bubble’s wavelength spectrum – a key marker of lasing.

From St Paul’s Cathedral to the surface of a soap bubble

Forming a resonator out of a sphere is not, in itself, new. Micro-cavities formed in spheres, rings and toroids have all found uses in sensing, and are known as whispering gallery mode resonators after the famous whispering gallery at St Paul’s Cathedral in London. Within this large, circular room, two people who stand facing the wall on opposite sides can hear each other even at a whisper thanks to the efficient guiding of sound waves along the room’s curved walls.

Whispering gallery modes: laser light propagates along the surface of a soap bubble in the same way as sound travels around the walls of the famous “whispering gallery” in St Paul’s Cathedral, London. (Courtesy: Matjaž Humar and Zala Korenjak/Jožef Stefan Institute)

In much the same way, Humar and Korenjak found that light propagates along the surface of the soap bubble in their laser, and appears as a bright band on the bubble’s shell. As the light travels around the surface of the bubble, it interferes, creating distinct “modes” of the resonator. These modes show up as a series of regularly spaced peaks in the wavelength spectrum of the bubble.

Spectrum from a bubble: a smectic bubble laser emits regularly spaced wavelengths of light. (Courtesy: Matjaž Humar and Zala Korenjak/Jožef Stefan Institute)
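
For a rough sense of why the peaks are regularly spaced, the spacing (the free spectral range) of a whispering-gallery resonator can be estimated from its circumference. The sketch below uses textbook reasoning and illustrative numbers that are not taken from the paper.

```python
import math

# Rough free-spectral-range estimate for a whispering-gallery resonator
# (illustrative numbers, not values reported in the paper)
wavelength = 550e-9   # metres; green dye emission assumed
diameter = 3e-3       # metres; a bubble a few millimetres across
n_eff = 1.4           # assumed effective refractive index of the soap film

# Neighbouring modes differ by one wavelength over the optical path n_eff * pi * diameter,
# so the peak spacing is approximately wavelength**2 / (n_eff * pi * diameter).
fsr = wavelength**2 / (n_eff * math.pi * diameter)
print(f"estimated mode spacing ≈ {fsr * 1e12:.0f} pm")   # picometre-scale for a mm-sized bubble
```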

Don’t burst my bubble

“There are many micro-resonators used as laser cavities, including solid spherical shells,” Matjaž notes. “Soap bubbles, however, have not been studied as optical cavities until now.”

This may be partly because bubble lasers made of soap have limited practicality. As water evaporates from the surface of the bubble, the bubble’s thickness changes rapidly until it pops.

A more practical alternative, which the researchers also pursued, is to make bubbles out of smectic liquid crystals. These do not contain water and can form very thin bubbles, typically around 30–120 nanometres (nm) thick. These smectic bubble lasers are more stable and can survive almost indefinitely. As Matjaž explains, thicker bubbles (such as those created by soap) allow many modes in the resonator, resulting in many, possibly overlapping, peaks in the wavelength spectrum. Thinner bubbles (less than 200 nm), however, allow only one mode in the resonator, and this single-mode operation manifests as evenly spaced, well-separated peaks in the lasing spectra.

The researchers also demonstrated that the wavelength emitted by the bubble lasers can be tuned by altering their environment. Specifically, changing the ambient pressure or electric field alters the size of the bubble, which changes the size of the resonator and, in turn, the wavelength of the laser emission. The measurements they present show that the smectic bubble lasers are sensitive to electric fields as small as 0.35 V/mm and to pressure changes of 0.024 Pa – on par with or better than some existing sensors.

The pair describe their work in Physical Review X.

The post Soap bubbles transform into lasers appeared first on Physics World.


Researchers reveal the fluid dynamics behind cicadas’ ‘unique’ urination

16 March 2024 at 11:00

This year promises to be a bumper one for cicadas given that 2024 marks the first time in more than 200 years that two broods belonging to two species will emerge at the same time.

Now researchers at Georgia Institute of Technology in the US say we might have more to worry about than just the cacophony that the insects are famous for.

They have studied cicadas’ “unique” ability to produce jets of urine from their small bodies.

Most insects urinate in droplets, because doing so takes less energy and because their orifices are too small to do anything else.

Cicadas, however, are such voracious consumers of tree sap that individually flicking away each drop would be too taxing and would leave them unable to extract enough nutrients.

To get around this problem, they instead pee via short jets (see video above).

“Previously, it was understood that if a small animal wants to eject jets of water, then this becomes a bit challenging, because the animal expends more energy to force the fluid’s exit at a higher speed,” notes Elio Challita, who is currently based at Harvard University, US. “This is due to surface tension and viscous forces. But a larger animal can rely on gravity and inertial forces to pee. ”

Because cicadas are relatively large for insects, they use less energy to expel a jet – and indeed, it turns out that they are the smallest animals known to create such high-speed jets.
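
The competition Challita describes can be made concrete with two standard dimensionless numbers: the Weber number compares inertia with surface tension, and the Bond number compares gravity with surface tension. The values below are purely illustrative assumptions, not measurements from the study.

```python
# Illustrative comparison of inertial, gravitational and surface-tension effects
# for a small liquid jet (assumed values, not measurements from the study)
rho = 1000.0    # fluid density, kg/m^3 (water-like)
sigma = 0.07    # surface tension, N/m
g = 9.81        # gravitational acceleration, m/s^2
d = 1e-4        # jet/orifice diameter, m (assumed ~0.1 mm)
v = 3.0         # jet speed, m/s (assumed)

weber = rho * v**2 * d / sigma   # inertia vs surface tension
bond = rho * g * d**2 / sigma    # gravity vs surface tension

print(f"Weber number: {weber:.1f}")    # >> 1: inertia can overcome surface tension, so a jet can form
print(f"Bond number:  {bond:.4f}")     # << 1: gravity alone cannot detach the fluid at this scale
```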

The team thinks that a greater understanding of cicada urination could help in the design of better nozzles and robots.

And with a double brood emerging this year, it could be a noisy, and wet, summer.

The post Researchers reveal the fluid dynamics behind cicadas’ ‘unique’ urination appeared first on Physics World.


Einstein’s only experiment is found in French museum

17 March 2024 at 15:49
Do the twist: schematic of the Einstein–de Haas experiment showing the mirror, solenoid and ferromagnetic cylinder. (Courtesy: Jasper Olbrich/CC BY-SA 3.0)

Albert Einstein is famous as a theoretical physicist, but he also did one significant experiment. This was the Einstein–de Haas experiment, which he did in 1915 with the Dutch physicist Wander de Haas. This work showed that the magnetization of ferromagnetic materials such as iron is related to the angular momentum of electrons.

Now, some of the apparatus used by Einstein and de Haas has been found languishing in the Ampère Museum near Lyon, one of France’s oldest science museums. The discovery was made by Alfonso San Miguel of Claude Bernard Lyon 1 University and Bernard Pallandre, a curator at the museum. They say that the provenance of the objects can be verified by documents associated with Geertruida de Haas-Lorentz, a physicist and the wife of de Haas, who donated the equipment to the museum in the 1950s.

The Einstein–de Haas experiment involves a cylinder of ferromagnetic material suspended by a thread so that it can rotate about its axis of symmetry. A mirror is mounted at the top of the cylinder so that its rotation can be measured by reflecting a beam of light onto a screen (see figure).

Curious rotation

The cylinder is placed in the centre of a solenoid. When an electrical current is sent through the solenoid, it creates a magnetic field that magnetizes the cylinder, which becomes a bar magnet. This results in the cylinder rotating slightly, which is observed as a deflection of the light beam. If the magnetic field is then reversed, the cylinder rotates in the opposite direction.

This rotation is not predicted by classical electromagnetic theory because the cylindrical symmetry of the experiment offers no way for the magnetic field to exert a torque on the ferromagnet.

Instead, the observed rotation supports the idea that magnetism is created by charged currents that flow in circles within a ferromagnetic material – an idea that was first put forth nearly a century earlier by the French physicist André-Marie Ampère.

As well as having magnetic moments, these orbiting electrons also have angular momentum. The magnetization of the cylinder involves the alignment of these magnetic moments. This results in changes in the directions of the angular momenta of the electrons when the magnetic field is applied. Because angular momentum must be conserved, the cylinder rotates in response to this change.
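
In textbook terms – a minimal sketch of the angular-momentum bookkeeping, not the authors’ original analysis – the link between magnetization and rotation can be written as follows.

```latex
% Magnetization M and electronic angular momentum per unit volume J are tied together
% by the gyromagnetic ratio gamma (g is the g-factor, e the elementary charge, m_e the electron mass):
\[
  \mathbf{M} = \gamma\,\mathbf{J}, \qquad \gamma = -\,g\,\frac{e}{2 m_e}.
\]
% When the solenoid magnetizes the cylinder (volume V), conservation of total angular momentum
% forces the lattice to take up the opposite change, so the cylinder acquires
\[
  \Delta L_{\mathrm{cyl}} = -\,V\,\Delta J = -\,\frac{V\,\Delta M}{\gamma}.
\]
```

Measuring the tiny rotation for a known change in magnetization therefore gives the gyromagnetic ratio, and with it the g-factor that separates the orbital and spin contributions discussed below.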

We now know that electrons have intrinsic angular momentum (spin) as well as orbital angular momentum. The Einstein–de Haas experiment can be used to study how both of these contribute to the magnetization of a material.

The post Einstein’s only experiment is found in French museum appeared first on Physics World.


New metamaterial could make true one-way glass

18 March 2024 at 10:30

A proposed new optical metamaterial could behave like true one-way glass thanks to the Tellegen effect, which connects a material’s response to light waves with its magnetization and polarization. Under the design put forward by researchers in Finland, the US, Sweden and Greece, the new metamaterial would be formed from randomly oriented nanocylinders consisting of ferromagnets and a high-permittivity dielectric that operates at the right resonance. Unlike previous proposals, the metamaterial would not require external magnetic fields to operate, and its developers say it could also make solar cells more efficient.

The Tellegen effect is also known as the nonreciprocal magnetoelectric effect (NME), and it occurs when the electric field component of light (an electromagnetic wave) magnetizes a material at the same time as the magnetic field component polarizes it. The effect shows much promise for advanced technologies such as magnet-free optical isolators, as well as for fundamental research – for instance on the electrodynamics of relativistic matter and theoretical particles called axions.
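
In formula form, and in one common textbook convention (a sketch for orientation; the notation is not taken from the paper), a Tellegen medium couples the electric and magnetic responses through a single real parameter χ.

```latex
% Constitutive relations of a Tellegen (nonreciprocal magnetoelectric) medium, one common convention:
% the electric field also magnetizes the medium and the magnetic field also polarizes it.
\[
  \mathbf{D} = \varepsilon\,\mathbf{E} + \frac{\chi}{c}\,\mathbf{H}, \qquad
  \mathbf{B} = \mu\,\mathbf{H} + \frac{\chi}{c}\,\mathbf{E},
\]
% where chi is real; a nonzero chi makes the response nonreciprocal, and in natural
% transparent materials at visible frequencies chi is negligibly small.
```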

Enhancing the effect through metamaterials

For light in the visible part of the electromagnetic spectrum, the NME in natural materials is negligible because the magnetization effect is weak, explains Shadi Safaei Jazi, a PhD student at Finland’s Aalto University who led the research. Most proposed approaches involving such materials only work for microwaves, and this is partly why the Tellegen effect has not been exploited in realistic industrial applications yet.

The magnetization component of the NME can, however, be enhanced in metamaterials and metasurfaces. These artificially engineered materials are structured in ways that give them properties such as a negative refractive index that are rare or absent in natural materials.

In the new work, which is detailed in Nature Communications, Safaei Jazi and colleagues describe a three-dimensional metamaterial that shows a strong Tellegen effect in the visible frequency range. This metamaterial would be formed from nanocylinders containing two components: a ferromagnetic nanodisc in a single-domain magnetic state, and a high-permittivity dielectric nanodisc that supports a so-called magnetic Mie-type resonance (a structural resonance at the level of the nanocylinder).

Spontaneous magnetization and the magnetoelectric effect

The researchers suggest that this 3D metamaterial could be made by randomly distributing the nanocylinders within a host medium such as water or a polymer. The ferromagnetic nanodiscs would exhibit spontaneous magnetization and the magnetoelectric effect (ME) without the need for an external magnetic field. Using conventional materials such as cobalt and silicon to make up the structure would increase the ME by two orders of magnitude compared to other known natural materials at room temperature, the team add.

The team also showed that using emerging materials such as magnetic Weyl semimetals in the metamaterial would enhance the ME even further, by almost four orders of magnitude. Weyl semimetals are a recently discovered class of topological material in which electrons behave like massless particles thanks to a special kind of symmetry in their electronic structure.

Seeing clearly in one direction

One potential application for such magnetoelectric colloids would be a true one-way glass, Safaei Jazi says. “Such a glass should not be confused with commercial semi-transparent reciprocal glass, which lets light through in both directions,” she explains. “Only when the brightness is different between the two sides (for example, inside and outside a window), does the latter act like a one-way glass.”

A true one-way glass based on the proposed magnetoelectric metamaterials would incorporate several layers of magnetoelectric coatings on top of a conventional glass surface, she continues. “Conventional technology for such a glass would require strong and bulky electromagnets surrounding the glass to create magnetization and break reciprocal light transmission. These electromagnets would completely obscure the view and make the system opaque to light in both directions.”

Viktar Asadchy, an electro-optical engineer at Aalto and Stanford University who supervised the project, says that the team’s system would, in principle, show strong spontaneous magnetization and one-way light transmission without external magnetic fields. “This means that a window with that glass in your house, office, or car would allow you to enjoy a perfect view, regardless of the brightness outside, and people wouldn’t be able to see anything inside,” Asadchy says.

The proposed one-way glass could also make solar cells more efficient, Safaei Jazi tells Physics World. This is because it would block the thermal emissions that today’s cells radiate back towards the Sun, which reduces the amount of energy the cells can capture.

The post New metamaterial could make true one-way glass appeared first on Physics World.


Wigner’s friend: the quantum thought experiment that continues to confound

18 March 2024 at 11:56

The quantum world provides fertile material for thought experiments that seem so strange-but-true as to defy logic. One of the most notorious is “Wigner’s friend”, which has challenged physicists and philosophers ever since it was first conceived by the Hungarian-American physicist Eugene Wigner. He published the thought experiment in a 1961 book edited by the mathematician Irving Good entitled The Scientist Speculates: an Anthology of Partly-baked Ideas.

Wigner’s thought experiment is a more humane version of Schrödinger’s less complex but more famous thought experiment a quarter century before, which involved a cat inside a box whose fate hangs on a quantum event. Inside the box Schrödinger’s cat is dead or alive, whereas for someone outside, the cat remains dead-and-alive; it’s in a “superposition”. The bizarre situation only vanishes when the box lid opens.

The set-up of Wigner’s thought experiment is disarmingly simple. Wigner and his friend are interested in the outcome of a particular experiment, let’s say preparing a quantum bit (qubit) whose measurement outcome will be either 0 or 1. The friend goes into a lab and sets up the equipment, while Wigner remains outside. Each is fully versed in the quantum formalism.

Counterintuitively, their predictions differ. Wigner’s friend – the experimentalist – prepares the qubit in a superposition of states and predicts that the measurement outcome will be 0 with 50% probability or 1 with 50% probability. Wigner, on the other hand, is isolated from his friend. Using a single quantum state in superposition to describe his friend plus the lab contents, Wigner predicts that the system will remain in superposition with 100% probability.

Wigner maintains this prediction even if he believes that his friend has finished the experiment. According to quantum mechanics, Wigner cannot separate the friend out from the rest of the lab contents. Wigner must therefore ask his friend in order to gain information about the friend’s quantum state. So who’s got the right answer: Wigner or his friend?
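
One standard way of writing the two assignments – a textbook sketch, with notation of our own rather than the authors’ – makes the disagreement explicit.

```latex
% The friend prepares the qubit in a superposition and, after measuring it, assigns a
% definite outcome (0 or 1, each with probability 1/2):
\[
  |\psi\rangle_{\text{qubit}} = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr)
  \;\longrightarrow\; |0\rangle \ \text{or}\ |1\rangle.
\]
% Wigner, modelling friend, qubit and lab together as a single closed quantum system,
% assigns with certainty the entangled superposition
\[
  |\Psi\rangle_{\text{lab}} = \tfrac{1}{\sqrt{2}}\bigl(|0\rangle\,|F_0\rangle + |1\rangle\,|F_1\rangle\bigr),
\]
% where |F_0> and |F_1> denote the friend (and apparatus) having recorded outcome 0 or 1.
```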

Both are right

The answer is that both probabilities are correct – from the standpoint of each individual. Their two correct uses of the mathematics give different predictions: Wigner predicts that the state is 100% in superposition, while the friend predicts that the measurement outcome of the qubit is either 1 or 0. Essentially, Wigner’s thought experiment says that what’s true depends on where you stand.

But if we assume that the probabilities describe the same “set of facts” – and that there’s something that’s true from everyone’s point of view – then these predictions are in conflict. Wigner himself, and many who followed, thought it paradoxical that the quantum formalism gives two differing predictions for the same state of affairs. They believed that objectivity requires that observers must characterize the facts in the same way regardless of their position.

What makes this scenario seem paradoxical, however, is its reliance on hidden classical assumptions. One assumption is that Wigner is right and his friend wrong (or vice-versa) because both are ultimately modelling the outcome of the friend’s qubit measurement. But suppose their differing predictions mean that the two are modelling different systems. Wigner is modelling the friend-qubit-lab environment, while his friend models just the qubit.

In a classical situation, Wigner and his friend could assign the same probabilities to the outcome of a coin flip. Even if Wigner were, say, standing behind a curtain, he would not have to treat the friend flipping the coin as being in superposition. In the quantum situation, however, Wigner cannot single out and isolate the probabilities for just the coin. There may as well be no “coin” for Wigner – it is not one thing among others in a room full of objects.

But back to Wigner’s thought experiment. What happens when the laboratory door opens and Wigner and friend can talk about their predictions? The two had disagreed but now it looks like they agree on the final state of the qubit. It seems that their previously inconsistent descriptions of a single state of affairs have converged into one.

That’s not what happens, though. Rather, Wigner’s new information does not repudiate his initial prediction. The quantum formalism indicates Wigner and friend had consistent descriptions for two different states of affairs. This feels paradoxical only if we give in to our intuition and assume that it was the same system for Wigner and friend all along.

The eagerly awaited moment when Wigner and his friend share their findings, then, is not the resolution to the paradox, but what happens after the paradoxical situation has already ended. Wigner had his correct formalism and the friend had theirs.

Wigner, and many of those who followed, were bothered by the fact that two people using the same methods on the same experiment could arrive at two correct descriptions, depending on whether they were inside or outside the lab. Our classical intuition is that the system is the same for everyone. Quantum mechanics inclines us to think that we can have different systems without there being an inconsistency, and that we can be objective without needing to make all our descriptions identical.

The critical point

Quantum information theorists have turned Wigner’s friend into a powerful set of thought experiments for testing the plausibility of physical assumptions we make when we share information. These elaborated thought experiments involve multiple participants in multiple labs, entangled quantum states between friends and real-life entangled photon experiments to smoke out what our classical assumptions are.

Is there a fork in the road, classical or quantum? Sticking with the classical interpretation – that Wigner’s friend involves two inconsistent descriptions of one state of affairs – produces paradoxes. The quantum perspective implies there are descriptions of two different states of affairs. The first is intuitive but ends up in a contradiction; the other is less intuitive, but consistent. Quantum friendship means never having to say you’re sorry for your use of the formalism.

Robert P Crease is a professor, Jennifer Carter is a lecturer and Gino Elia is a PhD student, all in the Department of Philosophy, Stony Brook University, US.

The post Wigner’s friend: the quantum thought experiment that continues to confound appeared first on Physics World.
