Yesterday — 20 September 2024 — Physics World

The physics of cycling’s ‘Everesting’ challenge revealed

20 September 2024 at 17:00

“Everesting” involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m.

The challenge became popular during the COVID-19 lockdowns and in 2021 the Irish cyclist Ronan McLaughlin was reported to have set a new “Everesting” record of 6:40:54. This was almost 20 minutes faster than the previous world record of 6:59:38 set by the US’s Sean Gardner in 2020.

Yet a debate soon ensued on social media over the significant tailwind that day of 5.5 metres per second, which critics claimed would have helped McLaughlin climb the hill multiple times.

But did it? To investigate, Martin Bier, a physicist at East Carolina University in North Carolina, has now analysed what effect air resistance might have when cycling up and down a hill.

“Cycling uses ‘rolling’, which is much smoother and faster, and more efficient [than running],” notes Bier. “All of the work is purely against gravity and friction.”

Bier calculated that a tailwind does help slightly when going uphill, but most of the work done when climbing goes into overcoming gravity rather than air resistance.

When coming downhill, however, a headwind becomes significant, because the force of air resistance increases with the square of the cyclist’s airspeed. The headwind can then have a huge effect, causing a substantial reduction in speed.

So, while a tailwind going up is negligible, the headwind coming down certainly won’t be. “There are no easy tricks,” Bier adds. “If you want to be a better Everester, you need to lose weight and generate more [power]. This is what matters — there’s no way around it.”
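The asymmetry Bier describes follows directly from a simple power balance. A minimal sketch, using invented but plausible rider parameters (not values from Bier’s analysis): uphill, the gravity term dominates and the wind term is small; downhill, the rider is fast, so the quadratic drag term takes over.

```python
# Power balance for a cyclist on a slope with wind.
# All parameter values here are illustrative assumptions.
import math

RHO = 1.2   # air density, kg/m^3
G = 9.81    # gravitational acceleration, m/s^2

def power_required(speed, grade, wind, mass=75.0, cda=0.35, crr=0.004):
    """Power (W) the rider must produce at ground speed `speed` (m/s)
    on a slope of `grade` (rise/run), with `wind` in m/s
    (positive = tailwind)."""
    theta = math.atan(grade)
    f_gravity = mass * G * math.sin(theta)          # weight component along the road
    f_rolling = crr * mass * G * math.cos(theta)    # rolling resistance
    v_air = speed - wind                            # rider's airspeed
    f_drag = 0.5 * RHO * cda * v_air * abs(v_air)   # drag grows with airspeed squared
    return (f_gravity + f_rolling + f_drag) * speed

# Uphill at 5 m/s on a 10% grade: a 5.5 m/s tailwind changes the
# required power only slightly, because gravity dominates.
print(power_required(5.0, 0.10, 0.0))    # still air
print(power_required(5.0, 0.10, 5.5))    # tailwind
# Downhill at 20 m/s, the same wind (now a headwind) multiplies the
# power needed to hold that speed, since drag scales as airspeed squared.
print(power_required(20.0, -0.10, 0.0))
print(power_required(20.0, -0.10, -5.5))
```

With these assumed numbers the tailwind saves well under 10% of the climbing power, while the descending headwind more than quadruples the power needed to hold the same descent speed, which is why a windy “Everesting” day gives back on the descents what it grants on the climbs.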

The post The physics of cycling’s ‘Everesting’ challenge revealed appeared first on Physics World.


Air-powered computers make a comeback

20 September 2024 at 13:00

A device containing a pneumatic logic circuit made from 21 microfluidic valves could be used as a new type of air-powered computer that does not require any electronic components. The device could help make a wide range of important air-powered systems safer and less expensive, according to its developers at the University of California at Riverside.

Electronic computers rely on transistors to control the flow of electricity. But in the new air-powered computer, the researchers use tiny valves instead of transistors to control the flow of air rather than electricity. “These air-powered computers are an example of microfluidics, a decades-old field that studies the flow of fluids (usually liquids but sometimes gases) through tiny networks of channels and valves,” explains team leader William Grover, a bioengineer at UC Riverside.

By combining multiple microfluidic valves, the researchers were able to make air-powered versions of standard logic gates. For example, they combined two valves in a row to make a Boolean AND gate. This gate works because air will flow through the two valves only if both are open. Similarly, two valves connected in parallel make a Boolean OR gate. Here, air will flow if either one or the other of the valves is open.

Complex logic circuits

Combining an increasing number of microfluidic valves enables the creation of complex air-powered logic circuits. In the new study, detailed in Device, Grover and colleagues made a device that uses 21 microfluidic valves to perform a parity bit calculation – an important calculation employed by many electronic computers to detect errors and other problems.

The novel air-powered computer detects differences in air pressure flowing through the valves to count the number of bits. If there is an error, it outputs an error signal by blowing a whistle. As a proof-of-concept, the researchers used their device to detect anomalies in an intermittent pneumatic compression (IPC) device – a leg sleeve that fills with air and regularly squeezes a patient’s legs to increase blood flow, with the aim of preventing blood clots that could lead to strokes. Normally, these machines are monitored using electronic equipment.
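The parity check the pneumatic circuit performs is the same one electronic computers use: XOR all the bits together and compare against a stored check bit. A minimal sketch of the idea (the whistle standing in for the error output):

```python
# Even-parity error detection, the calculation the 21-valve circuit performs.
def parity(bits):
    """Return 0 if the number of 1s is even, 1 if odd (XOR of all bits)."""
    p = 0
    for b in bits:
        p ^= b
    return p

word = [1, 0, 1, 1]
check = parity(word)                 # check bit stored alongside the data
received = [1, 0, 0, 1]              # one bit flipped in transit
error = parity(received) != check    # True -> "blow the whistle"
print(error)
```

Any single flipped bit changes the parity, so it is always caught; two simultaneous flips cancel out, which is the well-known limitation of a single parity bit.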

“IPC devices can save lives, but they aren’t as widely employed as they could be,” says Grover. “In part, this is because they’re so expensive. We wanted to see if we could reduce their cost by replacing some of their electronic hardware with pneumatic logic.”

Air’s viscosity is important

Air-powered computers behave very similarly, but not quite identically to electronic computers, Grover adds. “For example, we can often take an existing electronic circuit and make an air-powered version of it and it’ll work just fine, but at other times the air-powered device will behave completely differently and we have to tweak the design to make it function.”

The variations between the two types of computers come down to one important physical difference between electricity and air, he explains: electricity does not have viscosity, but air does. “There are also lots of little design details that are of little consequence in electronic circuits but which become important in pneumatic circuits because of air’s viscosity. This makes our job a bit harder, but it also means we can do things with pneumatic logic that aren’t possible – or are much harder to do – with electronic logic.”

In this work, the researchers focused on biomedical applications for their air-powered computer, but they say that this is just the “tip of the iceberg” for this technology. Air-powered systems are ubiquitous, from the brakes on a train, to assembly-line robots and medical ventilators, to name but three. “By using air-powered computers to operate and monitor these systems, we could make these important systems more affordable, more reliable and safer,” says Grover.

“I have been developing air-powered logic for around 20 years now, and we’re always looking for new applications,” he tells Physics World. “What is more, there are areas in which they have advantages over conventional electronic computers.”

One specific application of interest is moving grain inside silos, he says. These enormous structures hold grain and other agricultural products and people often have to climb inside to spread out the grain – an extremely dangerous task because they can become trapped and suffocate.

“Robots could take the place of humans here, but conventional electronic robots could generate sparks that could ignite flammable dust inside the silo,” Grover explains. “An air-powered robot, on the other hand, would work inside the silo without this risk. We are thus working on an air-powered ‘brain’ for such a robot to keep people out of harm’s way.”

Air-powered computers aren’t a new idea, he adds. Decades ago, there was a multitude of devices being designed that ran on water or air to perform calculations. Air-powered computers fell out of favour, however, when transistors and integrated circuits made electronic computers feasible. “We’ve therefore largely forgotten the history of computers that ran on things other than electricity. Hopefully, our new work will encourage more researchers to explore new applications for these devices.”


Quantum hackathon makes new connections

20 September 2024 at 10:40

It is said that success breeds success, and that’s certainly true of the UK’s Quantum Hackathon – an annual event organized by the National Quantum Computing Centre (NQCC) that was held in July at the University of Warwick. Now in its third year, the 2024 hackathon attracted 50% more participants from across the quantum ecosystem, who tackled 13 use cases set by industry mentors from the private and public sectors. Compared to last year’s event, participants were given access to a greater range of technology platforms, including software control systems as well as quantum annealers and physical processors, and had an additional day to perfect and present their solutions.

The variety of industry-relevant problems and the ingenuity of the quantum-enabled solutions were clearly evident in the presentations on the final day of the event. An open competition for organizations to submit their problems yielded use cases from across the public and private spectrum, including car manufacturing, healthcare and energy supply. While some industry partners were returning enthusiasts, such as BT and Rolls Royce, newcomers to the hackathon included chemicals firm Johnson Matthey, Aioi R&D Lab (a joint venture between Oxford University spin-out Mind Foundry and the global insurance brand Aioi Nissay Dowa) and the North Wales Police.

“We have a number of problems that are beyond the scope of standard artificial intelligence (AI) or neural networks, and we wanted to see whether a quantum approach might offer a solution,” says Alastair Hughes, lead for analytics and AI at North Wales Police. “The results we have achieved within just two days have proved the feasibility of the approach, and we will now be looking at ways to further develop the model by taking account of some additional constraints.”

The specific use case set by Hughes was to optimize the allocation of response vehicles across North Wales, which has small urban areas where incidents tend to cluster and large swathes of countryside where the crime rate is low. “Our challenge is to minimize response times without leaving some of our communities unprotected,” he explains. “At the moment we use a statistical process that needs some manual intervention to refine the configuration, which across the whole region can take a couple of months to complete. Through the hackathon we have seen that a quantum neural network can deliver a viable solution.”
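The allocation problem Hughes describes is a facility-location optimization: place a limited number of response vehicles so that no community is left with an unacceptable response time. A hypothetical miniature, solved by brute force (all names and travel times are invented; the hackathon team used a quantum neural network, not this classical enumeration):

```python
# Hypothetical toy version of the response-vehicle allocation problem:
# choose k bases to minimise the worst-case travel time to any hotspot.
from itertools import combinations

# Invented travel times (minutes) from candidate bases (rows) to
# incident hotspots (columns).
times = [
    [5, 40, 35],
    [30, 8, 25],
    [28, 22, 6],
    [15, 18, 20],
]

def worst_response(bases):
    """Max over hotspots of the fastest assigned vehicle's travel time."""
    return max(min(times[b][h] for b in bases) for h in range(len(times[0])))

k = 2  # vehicles available
best = min(combinations(range(len(times)), k), key=worst_response)
print(best, worst_response(best))
```

Brute force is fine at this scale, but the number of candidate placements grows combinatorially with the region size, which is why a force covering all of North Wales currently needs months of manual refinement and why heuristic or quantum approaches are attractive.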

Teamwork
Problem solving Each team brought together a diverse range of skills, knowledge and experience to foster learning and accelerate the development process. (Courtesy: NQCC)

While Hughes had no prior experience with using quantum processors, some of the other industry mentors are already investigating the potential benefits of quantum computing for their businesses. At Rolls Royce, for example, quantum scientist Jarred Smalley is working with colleagues to investigate novel approaches for simulating complex physical processes, such as those inside a jet engine. Smalley has mentored a team at all three hackathons, setting use cases that he believes could unlock a key bottleneck in the simulation process.

“Some of our crazy problems are almost intractable on a supercomputer, and from that we extract a specific set of processes where a quantum algorithm could make a real impact,” he says. “At Rolls Royce our research tends to be focused on what we could do in the future with a fault-tolerant quantum computer, and the hackathon offers a way for us to break into the current state of the technology and to see what can be done with today’s quantum processors.”

Since the first hackathon in 2022, Smalley says that there has been an improvement in the size and capabilities of the hardware platforms. But perhaps the biggest advance has been in the software and algorithms available to help the hackers write, test and debug their quantum code. Reflecting that trend in this year’s event was the inclusion of software-based technology providers, such as Q-CTRL’s Fire Opal and Classiq, that provide tools for error suppression and optimizing quantum algorithms. “There are many more software resources for the hackers to dive into, including algorithms that can even analyse the problems themselves,” Smalley says.

Cathy White, a research manager at BT who has mentored a team at all three hackathons, agrees that rapid innovation in hardware and software is now making it possible for the hackers to address real-world problems – which in her case was to find the optimal way to position fault-detecting sensors in optical networks. “I wanted to set a problem for which we could honestly say that our classical algorithms can’t always provide a good approximation,” she explained. “We saw some promising results within the time allowed, and I’m feeling very positive that quantum computers are becoming useful.”

Both White and Smalley could see a significant benefit from the extended format, which gave hackers an extra day to explore the problem and consider different solution pathways. The range of technology providers involved in the event also enabled the teams to test their solutions on different platforms, and to adapt their approach if they ran into a problem. “With the extra time my team was able to use D-Wave’s quantum annealer as well as a gate-model approach, and it was impressive to see the diversity of algorithms and approaches that the students were able to come up with,” White comments. “They also had more scope to explore different aspects of the problem, and to consolidate their results before deciding what they wanted to present.”

One clear outcome from the extended format was more opportunity to benchmark the quantum solutions against their classical counterparts. “The students don’t claim quantum advantage without proper evidence,” adds White. “Every year we see remarkable progress in the technology, but they can help us to see where there are still challenges to be overcome.”

According to Stasja Stanisic from Phasecraft, one of the four-strong judging panel, a robust approach to benchmarking was one of the stand-out factors for the winning team. Mentored by Aioi R&D Lab, the team investigated a risk aggregation problem, which involved modelling dynamic relationships between data such as insurance losses, stock market data and the occurrence of natural disasters. “The winning team took time to really understand the problem, which allowed them to adapt their algorithm to match their use-case scenario,” Stanisic explains. “They also had a thorough and structured approach to benchmarking their results against other possible solutions, which is an important comparison to make.”

The team presenting their results
Learning points Presentations on the final day of the event enabled each team to share their results with other participants and a four-strong judging panel. (Courtesy: NQCC)

Teams were judged on various criteria, including the creativity of the solution, its success in addressing the use case, and investigation of scaling and feasibility. The social impact and ethical considerations of each solution were also assessed. Using the NQCC’s Quantum STATES principles for responsible and ethical quantum computing (REQC), developed and piloted at the centre, the teams considered, for example, the potential impact of their innovation on different stakeholders and the explainability of their solution. They also proposed practical recommendations to maximize societal benefit. While many of their findings were specific to their use cases, one common theme was the need for open and transparent development processes to build trust among the wider community.

“Quantum computing is an emerging technology, and we have the opportunity right at the beginning to create an environment where ethical considerations are discussed and respected,” says Stanisic. “Some of the teams showed some real depth of thought, which was exciting to see, while the diverse use cases from both the public and private sectors allowed them to explore these ethical considerations from different perspectives.”

Also vital for participants was the chance to link with and learn from their peers. “The hackathon is a place where we can build and maintain relationships, whether with the individual hackers or with the technology partners who are also here,” says Smalley. For Hughes, meanwhile, the ability to engage with quantum practitioners has been a game changer. “Being in a room with lots of clever people who are all sparking off each other has opened my eyes to the power of quantum neural networks,” he says. “It’s been phenomenal, and I’m excited to see how we can take this forward at North Wales Police.”

  • To take part in the 2025 Quantum Hackathon – whether as a hacker, an industry mentor or technology provider – please e-mail the NQCC team at nqcchackathon@stfc.ac.uk


Rheo-electric measurements to predict battery performance from slurry processing

20 September 2024 at 08:58

The market for lithium-ion batteries (LIBs) is expected to grow roughly 30-fold, to almost 9 TWh produced annually in 2040, driven by demand from electric vehicles and grid-scale storage. Production of these batteries requires high-yield coating processes using slurries of active material, conductive carbon and polymer binder applied to metal-foil current collectors. To better understand the connections between slurry formulation, coating conditions and composite electrode performance, we apply new rheo-electric characterization tools to battery slurries. Rheo-electric measurements reveal differences in carbon-black structure in the slurry that go undetected by rheological measurements alone. The rheo-electric results are connected to characterization of coated electrodes in LIBs in order to develop methods to predict the performance of a battery system based on the formulation and coating conditions of the composite electrode slurries.

Jeffrey Richards (left) and Jeffrey Lopez (right)

Jeffrey Richards is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on understanding the rheological and electrical properties of soft materials found in emergent energy technologies.

Jeffrey Lopez is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on using fundamental chemical engineering principles to study energy storage devices and design solutions to enable accelerated adoption of sustainable energy technologies.



From the day before yesterday — Physics World

Simultaneous structural and chemical characterization with colocalized AFM-Raman

19 September 2024 at 17:27

The combination of atomic force microscopy (AFM) and Raman spectroscopy provides deep insights into the complex properties of various materials. Raman spectroscopy facilitates the chemical characterization of compounds, interfaces and complex matrices, offering crucial insights into molecular structures and compositions, including microscale contaminants and trace materials. AFM, meanwhile, provides essential data on topography and mechanical properties, such as surface texture, adhesion, roughness and stiffness at the nanoscale.

Traditionally, users must rely on multiple instruments to gather such comprehensive analysis. HORIBA’s AFM-Raman system stands out as a uniquely multimodal tool, integrating an automated AFM with a Raman/photoluminescence spectrometer, providing precise pixel-to-pixel correlation between structural and chemical information in a single scan.

This colocalized approach is particularly valuable in applications such as polymer analysis, where both surface morphology and chemical composition are critical; in semiconductor manufacturing, for detecting defects and characterizing materials at the nanoscale; and in life sciences, for studying biological membranes, cells, and tissue samples. Additionally, it’s ideal for battery research, where understanding both the structural and chemical evolution of materials is key to improving performance.

João Lucas Rangel

João Lucas Rangel currently serves as the AFM & AFM-Raman global product manager at HORIBA and holds a PhD in biomedical engineering. Specializing in Raman, infrared and fluorescence spectroscopies, his PhD research focused on biochemical changes in the skin dermis. João joined HORIBA Brazil in 2012 as a molecular spectroscopy consultant, before moving into a full-time role as an application scientist and sales support across Latin America, where he expanded his responsibilities to oversee applicative sales support and co-manage business activities within the region. In 2022, he joined HORIBA France as a correlative microscopy and Raman application specialist, responsible for developing the correlative business globally by combining HORIBA’s existing technologies with complementary ones. In 2023, João was promoted to AFM & AFM-Raman global product manager, a role in which he oversees strategic initiatives aimed at the company’s business sustainability, ensuring its continued success and future growth.


Diagnosing and treating disease: how physicists keep you safe during healthcare procedures

19 September 2024 at 16:42

This episode of the Physics World Weekly podcast features two medical physicists working at the heart of the UK’s National Health Service (NHS). They are Mark Knight, who is chief healthcare scientist at the NHS Kent and Medway Integrated Care Board, and Fiammetta Fedele, who is head of non-ionizing radiation at Guy’s and St Thomas’ NHS Foundation Trust in London.

They explain how medical physicists keep people safe during healthcare procedures – while innovating new technologies and treatments. They also discuss the role that artificial intelligence could play in medical physics and take a look forward to the future of healthcare.

This episode is supported by RaySearch Laboratories.

RaySearch Laboratories unifies industry solutions, empowering healthcare providers to deliver precise and effective radiotherapy treatment. RaySearch products transform scattered technologies into clarity, elevating the radiotherapy industry.


RadCalc QA: ensuring safe and efficient radiotherapy throughout Australia

By Tami Freeman
19 September 2024 at 14:45

GenesisCare is the largest private radiation oncology provider in Australia, operating across five states and treating around 30,000 cancer patients each year. At the heart of this organization, ensuring the safety and efficiency of all patient radiotherapy treatments, lies a single server running LAP’s RadCalc quality assurance (QA) software.

RadCalc is a 100% software-based platform designed to streamline daily patient QA. The latest release, version 7.3.2, incorporates advanced 3D algorithms for secondary verification of radiotherapy plans, EPID-based pre-treatment QA and in vivo dosimetry, as well as automated 3D calculation based on treatment log files.

For GenesisCare, RadCalc provides independent secondary verification for 100 to 130 new plans each day, from more than 43 radiation oncology facilities across the country. The use of a single QA platform for all satellite centres helps to ensure that every patient receives the same high standard of care. “With everyone using the same software, we’ve got a single work instruction and we’re all doing things the same way,” says Leon Dunn, chief medical physicist at GenesisCare in Victoria.

“While the individual states operate as individual business units, the physics team operates as one, and the planners operate as one team as well,” adds Peter Mc Loone, GenesisCare’s head of physics for Australia. “We are like one team nationally, so we try to do things the same way. Obviously, it makes sense to make sure everyone’s checking the plans in the same way as well.”

User approved

GenesisCare implemented RadCalc more than 10 years ago, selected in part due to the platform’s impressive reputation amongst its users in Australia. “At that time, RadCalc was well established in radiotherapy and widely used,” explains Dunn. “It didn’t have all the features that it has now, but its basic features met the requirements we needed and it had a pretty solid user base.”

Today, GenesisCare’s physicists employ RadCalc for plan verification of all types of treatment across a wide range of radiotherapy platforms – including Varian and Elekta linacs, Gamma Knife and the Unity MR-linac, as well as superficial treatments and high dose-rate brachytherapy. They also use RadCalc’s plan comparison tool to check that the output from the treatment planning system matches what was imported to the MOSAIQ electronic medical record system.

“Before we had the plan comparison feature, our radiation therapists had to manually check control points in the plan against what was on the machine,” says Mc Loone. “RadCalc checks a wide range of values within the plan. It’s a very quick check that has saved us a lot of time, but also increased the safety aspect. We have certainly picked up errors through its use.”

Keeping treatments safe

The new feature that’s helping to make a big difference, however, is GenesisCare’s recent implementation of RadCalc’s 3D independent recalculation tool. Dunn explains that RadCalc previously performed a 2D comparison between the dose to a single point in the treatment planning system and the calculated dose to that point.

The new module, on the other hand, employs RadCalc’s collapsed-cone convolution algorithm to reconstruct 3D dose on the patient’s entire CT data set. Enabled by the introduction of graphics processing units, the algorithm performs a completely independent 3D recalculation of the treatment plan on the patient’s data.  “We’ve gone from a single point to tens of thousands of points,” notes Dunn.
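The difference between the old point check and the new volumetric one is easy to see in miniature. A simplified stand-in for the comparison (the dose values and 3% tolerance are invented, and real systems use more sophisticated metrics such as gamma analysis): a localized discrepancy can sit far from the single reference point and pass a point check, while a voxel-by-voxel comparison catches it.

```python
# Illustrative contrast: single-point secondary check vs full-volume check.
# Dose values and the 3% tolerance are invented for this sketch.
tolerance = 0.03
planned = [2.00] * 64        # flattened 4x4x4 grid of planned dose, Gy
recalc = list(planned)       # independent recalculation of the same plan
recalc[27] = 2.15            # one voxel disagrees by 7.5%

# 2D-era check: compare a single reference point -> the error is missed.
point_ok = abs(planned[0] - recalc[0]) / planned[0] < tolerance

# 3D check: every voxel must agree within tolerance -> the error is flagged.
volume_ok = all(abs(p - r) / p < tolerance for p, r in zip(planned, recalc))

print(point_ok, volume_ok)
```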

Importantly, this 3D recalculation can discover any errors within a treatment plan before it gets to the point at which it needs to be measured. “Our priority is for every patient to have that second check done, thereby catching anything that is wrong with the treatment plan, hopefully before it is seen by the doctor. So we can fix things before they could become an issue,” Dunn says, pointing out that in the first couple of months of using this tool, it highlighted potentially suboptimal treatment plans to be improved.

Peter Mc Loone
Peter Mc Loone: “It’s a very quick check that has saved us a lot of time, but also increased the safety aspect.” (Courtesy: GenesisCare)

In contrast, previous measurement-based checks had to be performed at the end of the entire planning process, after everyone had approved the plan and it had been exported to the treatment system. “Finding an error at that point puts a lot of pressure on the team to redo the plan and have everything reapproved,” Mc Loone explains. “By removing that stress and allowing checks to happen earlier in the piece, it makes the overall process safer and more efficient.”

Dunn notes that if the second check shows a problem with the plan, the plan can still be sent for measurements if needed, to confirm the RadCalc findings.

Increasing efficiency

As well as improving safety, the ability to detect errors early on in the planning process speeds up the entire treatment pathway. Operational efficiency is additionally helped by RadCalc’s high level of automation.

Once a treatment plan is created, the planning staff simply export it to RadCalc with a single click. RadCalc then takes care of everything else, importing the entire data set, sending it to the server for recalculation and then presenting the results. “We don’t have to touch any of the processes until we get the quality checklist out, and that’s a real game changer for us,” says Dunn.

“We have one RadCalc system, that can handle five different states and several different treatment planning systems [Varian’s Eclipse and Elekta’s Monaco and GammaPlan],” notes Mc Loone. “We can have 130 different plans coming in, and RadCalc will filter them correctly and apply the right beam models using that automation that LAP has built in.”

Because RadCalc performs 100% software-based checks, it doesn’t require access to the treatment machine to run the QA (which usually means waiting until the day’s clinical session has finished). “We’re no longer waiting around to perform measurements on the treatment machine,” Dunn explains. “It’s all happening while the patients are being treated during the normal course of the day. That automation process is an important time saver for us.”

This shift from measurement- to software-based QA also has a huge impact on the radiation therapists. As they were already using the machines to treat patients, the therapists were tasked with delivering most of the QA cases – at the end of the day or in between treatment sessions – and informing the physicists of any failures.

“Since we’ve introduced RadCalc, they essentially get all that time back and can focus on doing what they do best, treating patients and making sure it’s all done safely,” says Dunn. “Taking that burden away from them is a great additional bonus.”

Looking to the future, GenesisCare next plans to implement RadCalc’s log file analysis feature, which will enable the team to monitor and verify the performance of the radiotherapy machines. Essentially, the log files generated after each treatment are brought back into RadCalc, which then verifies that what the machine delivered matched the original treatment plan.

“Because we have so many plans going through, delivered by many different accelerators, we can start to build a picture of machine performance,” says Dunn. “In the future, I personally want to look at the data that we collect through RadCalc. Because everything’s coming through that one system, we’ve got a real opportunity to examine safety and quality at a system level, from treatment planning system through to patient treatment.”


The free-to-read Physics World Big Science Briefing 2024 is out now

19 September 2024 at 14:00

Over the past decades, “big science” has become bigger than ever, be it planning larger particle colliders, fusion tokamaks or space observatories. That development is reflected in the growth of the Big Science Business Forum (BSBF), which has been going from strength to strength following its first meeting in 2018 in Copenhagen.

This year, more than 1000 delegates from 500 organizations and 30 countries will descend on Trieste from 1 to 4 October for BSBF 2024. The meeting will see European businesses and organizations such as the European Southern Observatory, the CERN particle-physics laboratory and Fusion 4 Energy come together to discuss the latest developments and business trends in big science.

A key component of the event – as it was at the previous BSBF in Granada, Spain, in 2022 – is the Women in Big Science group, who will be giving a plenary session about initiatives to boost and help women in big science.

In this year’s Physics World Big Science Briefing, Elizabeth Pollitzer, co-founder and director of Portia, which seeks to improve gender equality in science, technology, engineering and mathematics, explains why we need gender equality in big science and what measures must be taken to tackle the gender imbalance among staff and users of large research infrastructures.

One prime example of big science is particle physics. Some 70 years since the founding of CERN and a decade following the discovery of the Higgs boson at the lab’s Large Hadron Collider (LHC) in 2012, particle physics stands at a crossroads. While the consensus is that a “Higgs factory” should come next after the LHC, there is disagreement over what kind of machine it should be – a large circular collider some 91 km in circumference or a linear machine just a few kilometres long.

As the wrangling goes on, other proposals are also being mooted such as a muon collider. Despite needing new technologies, a muon collider has the advantage that it would only require a circular collider in a tunnel roughly the size of the LHC.

Another huge multinational project is the ITER fusion tokamak currently under construction in Cadarache, France. Hit by cost hikes and delays for decades, the project received more bad news earlier this year when ITER said the tokamak will not fire up until 2035. “Full power” mode with deuterium and tritium won’t happen until 2039, some 50 years after the facility was first mooted.

Backers hope that ITER will pave the way towards fusion power plants delivering electricity to the grid, but huge technical challenges lie in store. After all, those reactors will have to breed their own tritium to become fuel independent, as John Evans explains.

Big science also involves dedicated user facilities. In this briefing we talk to Gianluigi Botton from the Diamond Light Source in the UK and Mike Witherell from the Lawrence Berkeley National Laboratory about managing such large-scale research infrastructures and their plans for the future.

We hope you enjoy the briefing, and do let us know your feedback on the issue.

The post The free-to-read <em>Physics World Big Science Briefing</em> 2024 is out now appeared first on Physics World.

  •  

Vortex cannon generates toroidal electromagnetic pulses

19 septembre 2024 à 11:34
Toroidal pulses Air cannons produce visible vortex rings by generating rotating air pressure differences, while electromagnetic cannons emit electromagnetic vortex pulses using coaxial horn antennas. (Courtesy: Ren Wang; Pan-Yi Bao; Zhi-Qiang Hu; Shuai Shi; Bing-Zhong Wang; Nikolay I Zheludev; Yijie Shen)

Toroidal electromagnetic pulses can be generated using a device known as a horn microwave antenna. This electromagnetic “vortex cannon” produces skyrmion topological structures that might be employed for information encoding or for probing the dynamics of light–matter interactions, according to its developers in China, Singapore and the UK.

Toroidal, doughnut-like topology abounds in physics – in objects such as Möbius strips and Klein bottles, in simpler structures like smoke rings in air and vortex rings in water, and in nuclear currents. Until now, however, no one had succeeded in directly generating this topology in electromagnetic waves.

A rotating electromagnetic wave structure

In the new work, a team led by Ren Wang from the University of Electronic Science and Technology of China, Yijie Shen from Nanyang Technological University in Singapore and colleagues from the University of Southampton in the UK employed wideband, radially polarized, conical coaxial horn antennas with an operating frequency range of 1.3–10 GHz. They used these antennas to create a rotating electromagnetic wave structure with a frequency in the microwave range.

The antenna comprises inner and outer metal conductors, with 3D-printed conical and flat dielectric supports at the bottom and top of the coaxial horn, respectively.

“When the antenna emits, it generates an instantaneous voltage difference that forms the vortex rings,” explains Shen. “These rings are stable over time – even in environments with lots of disturbances – and maintain their shape and energy over long distances.”

Complex features such as skyrmions

The conical coaxial horn antenna generates an electromagnetic field in free space that rotates around the propagation direction of the wave structure. The researchers experimentally mapped the toroidal electromagnetic pulses at propagation distances of 5, 50 and 100 cm from the horn aperture. Working in a planar microwave anechoic chamber (a shielded room covered with electromagnetic absorbers), they used a scanning frame to move the antenna across the measurement area and map its spatial electromagnetic fields. They then connected a vector network analyser to the transmitting and receiving antennas to obtain the magnitude and phase characteristics of the electromagnetic field at different positions.

The researchers found that the toroidal pulses contained complex features such as skyrmions. These are made up of numerous electric field vectors and can be thought of as two-dimensional whirls (or “spin textures”). The pulses also evolved over time to more closely resemble canonical Hellwarth–Nouchi toroidal pulses. These structures, first theoretically identified by the two physicists they are named after, represent a radically different, non-transverse type of electromagnetic pulse with a toroidal topology. These pulses, which are propagating counterparts of localized toroidal dipole excitations in matter, exhibit unique electromagnetic wave properties, explain Shen and colleagues.
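The skyrmion number that characterizes such spin textures is a well-defined winding number and is easy to evaluate numerically. As a minimal sketch (the grid, the Néel-type profile and all parameters below are invented for illustration, not taken from the study), one can build a synthetic texture and integrate its topological charge density:

```python
import numpy as np

# Illustrative only: the skyrmion number counts how many times a field of
# unit vectors wraps the sphere as you scan the 2D plane:
#   N = (1/4pi) * integral of m . (dm/dx x dm/dy) dx dy

def skyrmion_number(m, dx, dy):
    """m: (nx, ny, 3) array of unit vectors sampled on a regular grid."""
    dmdx = np.gradient(m, dx, axis=0)
    dmdy = np.gradient(m, dy, axis=1)
    density = np.einsum('ijk,ijk->ij', m, np.cross(dmdx, dmdy))
    return density.sum() * dx * dy / (4 * np.pi)

# Neel-type skyrmion texture: m points down at the centre, up far away
half, n, R, w = 10.0, 256, 4.0, 1.0
x = np.linspace(-half, half, n)
X, Y = np.meshgrid(x, x, indexing='ij')
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2 * np.arctan(np.exp((R - r) / w))

m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)

N = skyrmion_number(m, x[1] - x[0], x[1] - x[0])
print(round(abs(N), 2))  # magnitude close to 1: the texture wraps the sphere once
```

The same integral, applied to measured electric field vectors rather than a synthetic texture, is how a skyrmion embedded in a toroidal pulse would be counted.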

A wide range of applications

The researchers say that they got the idea for their new work by observing how smoke rings are generated from an air cannon. They decided to undertake the study because toroidal pulses in the microwave range have applications in a wide range of areas, including cell phone technology, telecommunications and global positioning. “Understanding both the propagation dynamics and characterizing the topological structure of these pulses is crucial for developing these applications,” says Shen.

The main difficulty in these experiments was generating the pulses in the microwave part of the electromagnetic spectrum. The researchers first tried adapting existing optical metasurface methodologies, but failed because a metasurface aperture several metres across would have been required, which was simply impractical to fabricate. They overcame the problem by using a microwave horn emitter, which is far more straightforward to make.

Looking forward, the researchers now plan to focus on two main areas. The first is to develop communication, sensing, detection and metrology systems based on toroidal pulses, aiming to overcome the limitations of existing wireless applications. Secondly, they hope to generate higher-order toroidal pulses, also known as supertoroidal pulses.

“These possess unique characteristics such as propagation invariance, longitudinal polarization, electromagnetic vortex streets (organized patterns of swirling vortices) and higher-order skyrmion topologies,” Shen tells Physics World. “The supertoroidal pulses have the potential to drive the development of ground-breaking applications across a range of fields, including defence systems or space exploration.”

The study is detailed in Applied Physics Reviews.

The post Vortex cannon generates toroidal electromagnetic pulses appeared first on Physics World.

  •  

A comprehensive method for assembly and design optimization of single-layer pouch cells

Par : No Author
18 septembre 2024 à 16:08

For academic researchers, the cell format used for testing lithium-ion batteries is often overlooked. However, choices in cell format and design can affect cell performance more than one might expect. Coin cells that utilize either a lithium metal or greatly oversized graphite negative electrode are common, but can provide unrealistic testing results when compared with commercial pouch-type cells. Instead, single-layer pouch cells provide a format more similar to those used in industry while not requiring large amounts of active material. Moreover, their assembly process allows for better positive/negative electrode alignment, enabling assembly of single-layer pouch cells without negative electrode overhang. This talk presents a comparison between coin, single-layer pouch and stacked pouch cells, and shows that single-layer pouch cells without negative electrode overhang perform best. Additionally, a careful study of the detrimental effects of excess electrode material is shown. The single-layer pouch cell format can also be used to measure pressure and volume in situ, something that is not possible in a coin cell. Lastly, a guide to assembling reproducible single-layer pouch cells without negative electrode overhang is presented.

An interactive Q&A session follows the presentation.

Matthew Garayt

Matthew D L Garayt is a PhD candidate in the Jeff Dahn, Michael Metzger and Chongyin Yang research groups at Dalhousie University. His work focuses on materials for lithium- and sodium-ion batteries, with an emphasis on increased energy density and lifetime. Before this, he worked on high-power lithium-ion batteries at E-One Moli Energy, the first rechargeable lithium battery company in the world, and completed a summer research term in the Obrovac Research Group, also at Dalhousie. He received a BSc (Hons) in applied physics from Simon Fraser University.

The post A comprehensive method for assembly and design optimization of single-layer pouch cells appeared first on Physics World.

  •  

Gallium-doped bioactive glass kills 99% of bone cancer cells

Par : Tami Freeman
18 septembre 2024 à 16:00

Osteosarcoma, the most common type of bone tumour, is a highly malignant cancer that mainly affects children and young adults. Patients are typically treated with an aggressive combination of resection and chemotherapy, but survival rates have not improved significantly since the 1970s. With alternative therapies urgently needed, a research team at Aston University has developed a gallium-doped bioactive glass that selectively kills over 99% of bone cancer cells.

The main objective of osteosarcoma treatment is to destroy the tumour and prevent recurrence. But over half of long-term survivors are left with bone mass deficits that can lead to fractures, making bone restoration another important goal. Bioactive glasses are already used to repair and regenerate bone – they bond with bone tissue and induce bone formation by releasing ions such as calcium, phosphorus and silicon. But they can also be designed to release therapeutic ions.

Team leader Richard Martin and colleagues propose that bioactive glasses doped with gallium ions could address both tasks – helping to prevent cancer recurrence and lowering the risk of fracture. They designed a novel biomaterial that provides targeted drug delivery to the tumour site while also introducing a regenerative scaffold to stimulate new bone growth.

“Gallium is a toxic ion that has been widely studied and is known to be effective for cancer therapy. Cancer cells tend to be more metabolically active and therefore uptake more nutrients and minerals to grow – and this includes the toxic gallium ions,” Martin explains. “Gallium is also known to inhibit bone resorption, which is important as bone cancer patients tend to have lower bone density and are more prone to fractures.”

Glass design

Starting with a silicate-based bioactive glass, the researchers fabricated six glasses doped with between 0 and 5 mol% of gallium oxide (Ga2O3). They then ground the glasses into powders with a particle size between 40 and 63 µm.

Martin notes that gallium is a good choice for incorporating into the glass, as it is effective in a variety of simple molecular forms. “Complex organic molecules would not survive the high processing temperatures required to make bioactive glasses, whereas gallium oxide can be incorporated relatively easily,” he says.

To test the cytotoxic effects of the bioactive glasses on cancer cells, the team created “conditioned media” by incubating the gallium-doped glass particles in cell culture media at concentrations of 10 or 20 mg/mL. After 24 h, the particles were filtered out to leave various levels of gallium ions in the media.

The researchers then exposed osteosarcoma cells, as well as normal osteoblasts as controls, to conditioned media from the six gallium-doped powders. Cell viability assays revealed significant cytotoxicity in cancer cells exposed to the conditioned media, with a reduction in cell viability correlating with gallium concentration.

After 10 days, cancer cells exposed to media conditioned with 10 mg/mL of the 4% and 5% gallium-doped glasses showed viability reduced to roughly 60% and less than 10%, respectively. At 20 mg/mL, the 4% and 5% gallium-doped glasses were the most toxic to the cancer cells, causing 60% and more than 99% cell death, respectively, after 10 days.

Exposure to gallium-free bioglass did not significantly impact cell viability – confirming that the toxicity is due to gallium and not the other components of the glass (calcium, sodium, phosphorus and silicate ions).

While the glasses preferentially killed osteosarcoma cells compared with normal osteoblasts, some cytotoxic effects were also seen in the control cells. Martin believes that this slight toxicity to normal healthy cells is within safe limits, noting that the localized nature of the treatment should significantly reduce side effects compared with orally administered gallium.

“Further experiments are needed to confirm the safety of these materials,” he says, “but our initial studies show that these gallium-doped bioactive glasses are not toxic in vivo and have no effects on major organs such as the liver or kidneys.”

The researchers also performed live/dead assays on the osteosarcoma and control cells. The results confirmed the highly cytotoxic effect of gallium-doped bioactive glass on the cancer cells with relatively minor toxicity towards normal cells. They also found that exposure to the gallium-doped glass significantly reduced cancer cell proliferation and migration.

Bone regeneration

To test whether the bioactive glasses could also help to heal bone, the team exposed glass samples to simulated body fluid for seven days. Under these physiological conditions, the glasses gradually released calcium and phosphorous ions.

FTIR and energy dispersive X-ray spectroscopy revealed that these ions precipitated onto the glass surface to form an amorphous calcium phosphate/hydroxyapatite layer – indicating the initial stages of bone regeneration. For clinical use, the glass particles could be mixed into a paste and injected into the void created during tumour surgery.

“This bioactivity will help generate new bone formation and prevent bone mass deficits and potential future fractures,” Martin and colleagues conclude. “The results when combined strongly suggest that gallium-doped bioactive glasses have great potential for osteosarcoma-related bone grafting applications.”

Next, the team plans to test the materials on a wide range of bone cancers to ensure the treatment is effective against different cancer types, as well as optimizing the dosage and delivery before undertaking preclinical tests.

The researchers report their findings in Biomedical Materials.

The post Gallium-doped bioactive glass kills 99% of bone cancer cells appeared first on Physics World.

  •  

Adaptive deep brain stimulation reduces Parkinson’s disease symptoms

Par : No Author
18 septembre 2024 à 11:10

Deep brain stimulation (DBS) is an established treatment for patients with Parkinson’s disease who experience disabling tremors and slowness of movements. But because the therapy is delivered with constant stimulation parameters – which are unresponsive to a patient’s activities or variations in symptom severity throughout the day – it can cause breakthrough symptoms and unwanted side effects.

In their latest Parkinson’s disease initiative, researchers led by Philip Starr from the UCSF Weill Institute for Neurosciences have developed an adaptive DBS (aDBS) technique that may offer a radical improvement. In a feasibility study with four patients, they demonstrated that this intelligent “brain pacemaker” can reduce bothersome side effects by 50%.

The self-adjusting aDBS, described in Nature Medicine, monitors a patient’s brain activity in real time and adjusts the level of stimulation to curtail symptoms as they arise. Generating calibrated pulses of electricity, the intelligent aDBS pacemaker provides less stimulation when Parkinson’s medication is active, to ward off excessive movements, and increases stimulation to prevent slowness and stiffness as the drugs wear off.

Starr and colleagues conducted a blinded, randomized feasibility trial to identify neural biomarkers of motor signs during active stimulation, and to compare the effects of aDBS with optimized constant DBS (cDBS) during normal, unrestricted daily life.

The team recruited four male patients with Parkinson’s disease, ranging in age from 47 to 68 years, for the study. Although all participants had implanted DBS devices, they were still experiencing symptom fluctuations that were not resolved by either medication or cDBS therapy. They were asked to identify the most bothersome residual symptom that they experienced.

To perform aDBS, the researchers developed an individualized data-driven pipeline for each participant, which turns the recorded subthalamic or cortical field potentials into personalized algorithms that auto-adjust the stimulation amplitudes to alleviate residual motor fluctuations. They used both in-clinic and at-home neural recordings to provide the data.

“The at-home data streaming step was important to ensure that biomarkers identified in idealized, investigator-controlled conditions in the clinic could function in naturalistic settings,” the researchers write.

The four participants received aDBS alongside their existing DBS therapy. The team compared the treatments by alternating between cDBS and aDBS every two to seven days, with a cumulative period of one month per condition.

The researchers monitored motor symptoms using wearable devices plus symptom diaries completed daily by the participants. They evaluated the most bothersome symptoms, in most cases bradykinesia (slowness of movements), as well as stimulation-associated side effects such as dyskinesia (involuntary movements). To control for other unwanted side effects, participants also rated other common motor symptoms, their quality of sleep, and non-motor symptoms such as depression, anxiety, apathy and impulsivity.

The study revealed that aDBS improved each participant’s most bothersome symptom by roughly 50%. Three patients also reported improved quality-of-life using aDBS. This change was so obvious to these three participants that, even though they did not know which treatment was being delivered at any time, they could often correctly guess when they were receiving aDBS.

The researchers note that the study establishes the methodology for performing future trials in larger groups of males and females with Parkinson’s disease.

“There are three key pathways for future research,” lead author Carina Oehrn tells Physics World. “First, simplifying and automating the setup of these systems is essential for broader clinical implementation. Future work by Starr and Simon Little at UCSF, and Lauren Hammer (now at the Hospital of the University of Pennsylvania) will focus on automating this process to increase access to the technology. From a practicality standpoint, we think it necessary to develop an AI-driven smart device that can identify and auto-set treatment settings with a clinician-activated button.”

“Second, long-term monitoring for safety and sustained effectiveness is crucial,” Oehrn adds. “Third, we need to expand these approaches to address non-motor symptoms in Parkinson’s disease, where treatment options are limited. I am studying aDBS for memory and mood in Parkinson’s at the University of California, Davis. Little is investigating aDBS for sleep disturbances and motivation.”

The post Adaptive deep brain stimulation reduces Parkinson’s disease symptoms appeared first on Physics World.

  •  

Dark-matter decay could have given ancient supermassive black holes a boost

Par : No Author
17 septembre 2024 à 17:19

The decay of dark matter could have played a crucial role in triggering the formation of supermassive black holes (SMBHs) in the early universe, according to a trio of astronomers in the US. Using a combination of gas-cloud simulations and theoretical dark matter calculations, Yifan Lu and colleagues at the University of California, Los Angeles, uncovered promising evidence that the decay of dark matter may have provided the radiation necessary to prevent primordial gas clouds from fragmenting as they collapsed.

SMBHs are thought to reside at the centres of most large galaxies, and can be hundreds of thousands to billions of times more massive than the Sun. For decades, astronomers puzzled over how such immense objects could have formed, and the mystery has deepened with recent observations by the James Webb Space Telescope (JWST).

Since 2023, JWST has detected SMBHs that existed less than one billion years after the birth of the universe. This is far too early to be the result of conventional stellar evolution, whereby smaller black holes coalesce to create a SMBH.

Fragmentation problem

An alternative explanation is that vast primordial gas clouds in the early universe collapsed directly into SMBHs. However, as Lu explains, this theory challenges our understanding of how matter behaves. “Detailed calculations show that, in the absence of any unusual radiation, the largest gas clouds tend to fragment and form a myriad of small halos, not a single supermassive black hole,” he says. “This is due to the formation of molecular hydrogen, which cools the rest of the gas by radiating away thermal energy.”

For SMBHs to form under these conditions, molecular hydrogen would have needed to be somehow suppressed, which would require an additional source of radiation from within these ancient clouds. Recent studies have proposed that this extra energy could have come from hypothetical dark-matter particles decaying into photons.

“This additional radiation could cause the dissociation of molecular hydrogen, preventing fragmentation of large gas clouds into smaller pieces,” Lu explains. “In this case, gravity forces the entire large cloud to collapse as a whole into a [SMBH].”
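Why cooling drives fragmentation can be made concrete with a back-of-envelope Jeans-mass estimate (a textbook illustration, not a calculation from the study): the mass of the largest gravitationally unstable clump scales as T^(3/2), so gas that molecular hydrogen cools from roughly 8000 K to 200 K fragments into clumps some 250 times smaller.

```python
import numpy as np

# Back-of-envelope sketch: the Jeans mass, the largest clump that gravity can
# hold together against thermal pressure, is
#   M_J = (5 k_B T / (G mu m_H))**(3/2) * (3 / (4 pi rho))**(1/2)
# At fixed density it scales as T**1.5, so H2 cooling shrinks fragments.

k_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.6735e-27     # hydrogen atom mass, kg
mu = 1.22            # mean molecular weight of neutral primordial gas

def jeans_mass(T, n):
    """Jeans mass in kg for temperature T (K) and number density n (m^-3)."""
    rho = mu * m_H * n
    return (5 * k_B * T / (G * mu * m_H))**1.5 * (3 / (4 * np.pi * rho))**0.5

# Same density, two temperatures: density cancels, leaving (8000/200)**1.5
ratio = jeans_mass(8000, 1e9) / jeans_mass(200, 1e9)
print(round(ratio))  # 253: cooled gas fragments into ~250x smaller clumps
```

Suppressing the coolant keeps the Jeans mass enormous, which is what lets the whole cloud collapse as one object instead of shattering into small halos.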

In several recent studies, researchers have used simulations and theoretical estimates to investigate this possibility. So far, however, most studies have either focused on the mechanics of collapsing gas clouds or on the emissions produced by decaying dark matter, with little overlap between the two.

Extra ingredient needed

“Computer simulations of clouds of gas that could directly collapse to black holes have been studied extensively by groups farther on the astrophysics side of things, and they had examined how additional sources of radiation are a necessary ingredient,” explains Lu’s colleague Zachary Picker.

“Simultaneously, people from the dark matter side had performed some theoretical estimations and found that it seemed unlikely that dark matter could be the source of this additional radiation,” adds Picker.

In their study, Lu, Picker, and Alexander Kusenko sought to bridge this gap by combining both approaches: simulating the collapse of a gas cloud when subjected to radiation produced by the decay of several different candidate dark-matter particles. As they predicted, some of these particles could indeed provide the missing radiation needed to dissociate molecular hydrogen, allowing the entire cloud to collapse into a single SMBH.

However, dark matter is a hypothetical substance that has never been detected directly. As a result, the trio acknowledges that there is currently no reliable way to verify their findings experimentally. For now, this means that their model will simply join a growing list of theories that aim to explain the formation of SMBHs. But if the situation changes in the future, the researchers hope their model could represent a significant step forward in understanding the early universe’s evolution.

“One day, hopefully in my lifetime, we’ll find out what the dark matter is, and then suddenly all of the papers written about that particular type will magically become ‘correct’,” Picker says. “All we can do until then is to keep trying new ideas and hope they uncover something interesting.”

The research is described in Physical Review Letters.

The post Dark-matter decay could have given ancient supermassive black holes a boost appeared first on Physics World.

  •  

Magnetically controlled prosthetic hand restores fine motion control

Par : Tami Freeman
16 septembre 2024 à 17:30

A magnetically controlled prosthetic hand, tested for the first time in a participant with an amputated lower arm, provided fine control of hand motion and enabled the user to perform everyday actions and grasp fragile objects. The robotic prosthetic, developed by a team at Scuola Superiore Sant’Anna in Pisa, uses tiny implanted magnets to predict and carry out intended movements.

Losing a hand can severely affect a person’s ability to perform everyday work and social activities, and many researchers are investigating ways to restore lost motor function via prosthetics. Most available or proposed strategies rely on deciphering electrical signals from residual nerves and muscles to control bionic limbs. But this myoelectric approach cannot reproduce the dexterous movements of a human hand.

Instead, Christian Cipriani and colleagues developed an alternative technique that exploits the physical displacement of skeletal muscles to decode the user’s motor intentions. The new myokinetic interface uses permanent magnets implanted into the residual muscles of the user’s amputated arm to accurately control finger movements of a robotic hand.

“Standard myoelectric prostheses collect non-selective signals from the muscle surface and, due to that low selectivity, typically support only two movements,” explains first author Marta Gherardini. “In contrast, myokinetic control enables simultaneous and selective targeting of multiple muscles, significantly increasing the number of control sources and, consequently, the number of recognizable movements.”

First-in-human test

The first patient to test the new prosthesis was a 34-year-old named Daniel, who had recently lost his left hand and had started to use a myoelectric prosthesis. The team selected him as a suitable candidate because his amputation was recent and blunt, he could still feel the lost hand and the residual muscles in his arm moved in response to his intentions.

For the study, the team implanted six cylindrical neodymium magnets – each 2 mm in radius and height, and coated with a biocompatible shell – into three muscles in Daniel’s residual forearm. In a minimally invasive procedure, the surgeon used plastic instruments to manipulate the magnets into the tips of the target muscles and align their magnetic fields, verifying their placement using ultrasound.

Daniel also wore a customized carbon fibre prosthetic arm containing all of the electronics needed to track the magnets’ locations in space. When he activates the residual muscles in his arm, the implanted magnets move in response to the muscle contractions. A grid of 140 magnetic field sensors in the prosthesis detects the position and orientation of these magnets and transmits the data to an embedded computing unit. Finally, a pattern recognition algorithm translates the movements into control signals for a Mia-Hand robotic hand.

Gherardini notes that the pattern recognition algorithm rapidly learnt to control the hand based on Daniel’s intended movements. “Training the algorithm took a few minutes, and it was immediately able to correctly recognize the movements,” she says.
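The team’s actual algorithm is not spelled out here, but the principle of myokinetic decoding – mapping magnet displacements to intended gestures – can be sketched with a toy nearest-centroid classifier. The gesture names, feature layout and calibration values below are all invented for illustration:

```python
import numpy as np

# Toy myokinetic decoder: each of 6 implanted magnets contributes a 3D
# displacement, giving an 18-component feature vector per time step.
N_FEATURES = 3 * 6

# Hypothetical per-gesture centroids, as would be learned from a short
# calibration session of labelled muscle contractions
centroids = {
    "rest":  np.zeros(N_FEATURES),
    "grasp": np.tile([1.0, 0.0, 0.0], 6),  # e.g. flexor-dominated pattern
    "pinch": np.tile([0.0, 1.0, 0.0], 6),
}

def classify(features):
    """Return the gesture whose centroid is nearest to the measured pattern."""
    return min(centroids, key=lambda g: np.linalg.norm(features - centroids[g]))

# A noisy 'grasp'-like measurement is still decoded correctly
rng = np.random.default_rng(1)
sample = np.tile([0.9, 0.1, 0.0], 6) + 0.05 * rng.standard_normal(N_FEATURES)
print(classify(sample))  # grasp
```

A real decoder would first have to localize each magnet from the 140 raw sensor readings by solving a magnetic inverse problem; the sketch starts from those recovered displacements.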

In addition to the controlled hand motion arising from intended grasping, the team found that elbow movement activated other forearm muscles. Tissue near the elbow was also compressed by the prosthetic socket during elbow flexion, which caused unintended movement of nearby magnets. “We addressed this issue by estimating the elbow movement through the displacement of these magnets, and adjusting the position of the other magnets accordingly,” says Gherardini.

Robotic prosthesis user grasps a fragile plastic cup

Test success The robotic prosthesis allowed Daniel to grasp fragile objects such as a plastic cup. (Courtesy: © 2024 Scuola Superiore Sant’Anna)

During the six-week study, the team performed a series of functional tests commonly used to assess the dexterity of upper limb prostheses. Daniel successfully completed these tests, with comparable performance to that achieved using a traditional myoelectric prosthetic (in tests performed before the implantation surgery).

Importantly, he was able to control finger movements well enough to perform a wide range of everyday activities – such as unscrewing a water bottle cap, cutting with a knife, closing a zip, tying shoelaces and removing pills from a blister pack. He could also control the grasp force to manipulate fragile objects such as an egg and a plastic cup.

The researchers report that the myokinetic interface worked even better than they expected, with the results highlighting its potential to restore natural motor control in people who have lost limbs. “This system allowed me to recover lost sensations and emotions: it feels like I’m moving my own hand,” says Daniel in a press statement.

At the end of the six weeks, the team removed the magnets. Aside from low-grade inflammation around one magnet that had lost its protective shell, all of the surrounding tissue was healthy. “We are currently working towards a long-term solution by developing a magnet coating that ensures long-term biocompatibility, allowing users to eventually use this system at home,” Gherardini tells Physics World.

She adds that the team is planning to perform another test of the myokinetic prosthesis within the next two years.

The myokinetic prosthesis is described in Science Robotics.

The post Magnetically controlled prosthetic hand restores fine motion control appeared first on Physics World.

  •  

NASA suffering from ageing infrastructure and inefficient management practices, finds report

Par : No Author
16 septembre 2024 à 15:34

NASA has been warned that it may need to sacrifice new missions in order to rebalance the space agency’s priorities and achieve its long-term objectives. That is the conclusion of a new report – NASA at a Crossroads: Maintaining Workforce, Infrastructure, and Technology Preeminence in the Coming Decades – that finds a space agency battling on many fronts, including ageing infrastructure, China’s growing presence in space, and difficulties recruiting staff.

The report was requested by Congress and published by the National Academies of Sciences, Engineering, and Medicine. It was written by a 13-member committee, which included representatives from industry, academia and government, and was chaired by Norman Augustine, former chief executive of Lockheed Martin. Members visited all nine NASA centres and talked to about 400 employees to compile the report.

While the panel say that NASA has “motivat[ed] many of the nation’s youth to pursue careers in science and technology” and “been a source of inspiration and pride to all Americans”, they highlight a variety of problems at the agency. Those include out-of-date infrastructure, pressure to prioritize short-term objectives, budget mismatches, inefficient management practices, and an unbalanced reliance on commercial partners. Yet according to Augustine, the agency’s main problem is “the more mundane tendency to focus on near-term accomplishments at the expense of long-term viability”.

As well as external challenges such as China’s growing role in space, the committee discovered that many of the agency’s problems were homegrown. They found that 83% of NASA’s facilities are past their design lifetimes. For example, the capacity of the Deep Space Network, which provides critical communications support for uncrewed missions, “is inadequate” to support future craft and even current missions such as the Artemis Moon programme “without disrupting other projects”.

There is also competition from private space firms in both technology development and recruitment. According to the report, NASA has strict hiring rules and limits on the salaries it can offer. It takes 81 days, on average, from the initial interview to an offer of employment. During that period, a candidate will probably receive offers from private firms, not only in the space industry but also in the “digital world”, which offer higher salaries.

In addition, Augustine notes, the agency is giving its engineers less opportunity “to get their hands dirty” by carrying out their own research. Instead, they are increasingly managing outside contractors who are doing the development work. At the same time, the report identifies a “major reduction” over the past few decades in basic research that is financed by industry – a trend that the report says is “largely attributable to shareholders seeking near-term returns as opposed to laying groundwork for the future”.

Yet the committee also finds that NASA faces “internal and external pressure to prioritize short-term measures” without considering longer-term needs and implications. “If left unchecked these pressures are likely to result in a NASA that is incapable of satisfying national objectives in the longer term,” the report states. “The inevitable consequence of such a strategy is to erode those essential capabilities that led to the organization’s greatness in the first place and that underpin its future potential.”

Cash woes

Another concern is the US government budget process, which operates year by year and is slowly reducing NASA’s proportional share of funding. The report finds that the budget is “often incompatible with the scope, complexity, and difficulty of [NASA’s] work” and the funding allocation “has degraded NASA’s capabilities to the point where agency sustainability is in question”. Indeed, during the agency’s lifetime, US government spending on R&D has declined from 1.9% of gross domestic product to 0.7%. The panel also notes a trend of declining investment in research and technology as a fraction of funds devoted to missions. “NASA is likely to face budgetary problems in the future that greatly exceed those we’ve seen in recent years,” Augustine told a briefing.

The panel now calls on NASA to work with Congress to establish “an annually replenished revolving fund – such as a working capital fund” to maintain and improve the agency’s infrastructure. It would be financed by the US government as well as users of NASA’s facilities and be “sufficiently capitalized to eliminate NASA’s current maintenance backlog over the next decade”. While it is unclear how the government and the agency will react to that proposal, as Augustine warned, for NASA, “this is not business as usual”.

The post NASA suffering from ageing infrastructure and inefficient management practices, finds report appeared first on Physics World.


Stop this historic science site in St Petersburg from being sold

16 September 2024, 09:00

In the middle of one of the most expensive neighbourhoods in St Petersburg, Russia, is a vacant and poorly kept lot about half an acre in size. It’s been empty for years for a reason: on it stood the first scientific research laboratory in Russia – maybe even the world – and for over two and a half centuries generations of Russian scientists hoped to restore it. But its days as an empty lot may be over, for the land could soon be sold to the highest bidder.

The laboratory was the idea of Mikhail Lomonosov (1711–1765), Russia’s first scientist in the modern sense. Born in 1711 into a shipping family on an island in the far north of Russia, Lomonosov developed a passion for science that saw him study in Moscow, Kyiv and St Petersburg. He then moved to Germany, where he got involved in the then revolutionary, mathematically informed notion that matter is made up of smaller elements called “corpuscles”.

In 1741, at the age of 30, Lomonosov returned to Russia, where he joined the St Petersburg Academy of Science. There he began agitating for the academy to set up a physico-chemistry laboratory of its own. Until then, experimental labs in Russia and elsewhere had been mainly applied institutions for testing and developing paints, dyes and glasses, and for producing medicines and chemicals for gunpowder. But Lomonosov wanted something very different.

His idea was for a lab devoted entirely to basic research and development that could engage and train students to do empirical research on materials. Most importantly, he wanted the academy to run the lab, but the state to pay for it. After years of agitating, Lomonosov’s plan was approved, and the St Petersburg laboratory opened in 1748 on a handy site in the centre of St Petersburg, just a 20-minute walk from the academy, near the university, museums and the city’s famous bridges.

The laboratory was a remarkable place, equipped with furnaces, ovens, scales, thermometers, microscopes, grindstones and other instruments for studying materials

The laboratory was a remarkable place, equipped with furnaces, ovens, scales, thermometers, microscopes, grindstones and various other instruments for studying materials and their properties. Lomonosov and his students used these to analyse ores, minerals, silicates, porcelain, glasses and mosaics. He also carried out experiments with implications for fundamental theory.

In 1756, for instance, Lomonosov found that certain experiments involving the oxidation of lead carried out by the British chemist Robert Boyle were in error. Indirectly, Lomonosov also suggested a general law of conservation covering the total weight of chemically reacting substances. The law is these days usually attributed to the French chemist Antoine Lavoisier, who arrived at the same notion independently three decades later. But Lomonosov’s work had suggested it first.

A symbol for science

Lomonosov left the formal leadership of the laboratory in 1757, after which it was headed by several other academy professors. The lab continued to serve the academy’s research until 1793 when several misfortunes, including a flood and a robbery, led to it running down. Still, the lab has had huge significance as a symbol that Russian scientists have appealed to ever since as a model for more state support. It also inspired the setting-up of other chemical laboratories, including a similar facility built at Moscow University in 1755.

For the last two and a half centuries, however, the laboratory’s allies have struggled to keep the site from becoming just real estate in a pricey St Petersburg neighbourhood. In 1793 an academician bought the land from the Academy of Sciences and rebuilt the lab as housing, although preserving its foundations and old walls. Over the next century the plot passed through a series of private owners, who again rebuilt the laboratory and its associated house.

The area was levelled during the Siege of Leningrad in the Second World War, though the lab’s foundations remained intact. After the war, the Soviet Union tried to reconstruct the lab, as did the Russian Academy of Sciences. More recently, advocates have tried to rebuild the lab in time for the 300th anniversary of the Russian Academy of Sciences, which takes place in 2024–2025.

Three photos of a disused plot of land in a city
A piece of history The current state of the Lomonosov lab plot in St Petersburg. Fermilab accelerator physicist Vladimir Shiltsev (with umbrella, left) visited the site with lab restoration enthusiasts in 2021. A limited view can be seen from outside the plot on 2nd Linia Street (top right). Locked gates prevent general access to the land. (Courtesy: Vladimir Shiltsev (left image); Victor Borisov (right-hand images))

All these attempts have failed. Meanwhile, ownership of the site was passed around several Russian administrative agencies, most recently to the Russian State Pedagogical University. Last March, the university put the land in the hands of a private real estate agent who advertised the site in a public notice with the statement that the land was “intended for scientific facilities”, without reference to the lab. The plot is supposed to open for bids this fall.

But scientists and historians worry about the vagueness of that phrase and are distrustful of its source. There is nothing to stop the university from succumbing to the extremely high market prices that developers would pay for its enticing location in the centre of St Petersburg.

The critical point

Money, wrote Karl Marx in his famous article on the subject, is “the confounding and confusing of all natural and human qualities”. As he saw it, money strips what it is used for of ties to human life and meaning. Monetizing Lomonosov’s lab makes us speak of it quantitatively in real-estate terms. In such language, the site is simply a flat, featureless half-acre plot of land that, one metre down, has pieces of stone that were once part of an earlier building.

It also encourages us to speak of the history of this plot as just a series of owners, buildings and events. Some might even say that we have already preserved the history of Lomonosov’s lab because much of its surviving contents are on display in a nearby museum called the Kunstkamera (or art chamber). What, therefore, could be the harm of selling the land?

The land is where Lomonosov, his spirited colleagues and students, shared experiences and techniques, made friendships and established networks

Turning the history of science into nothing more than a tale of instruments promotes the view that science is all about clever individuals who use tools to probe the world for knowledge. But the places where scientists work are integral to science too. The plot of land on the 2nd avenue of Vasilevsky Island is where Lomonosov, his spirited colleagues and students, shared experiences and techniques, made friendships and established networks.

It’s where humans, instruments, materials and funding came together in dynamic events that revealed new knowledge of how materials behave in different conditions. The lab is also historically important because it impressed academy and state authorities enough that they continued to support scientific research as essential to Russia’s future.

Sure, appreciating this dimension of science history requires more than restoring buildings. But preserving the places where science happens keeps alive important symbols of what makes science possible, then and now, in a world that needs more of it. Selling the site of Lomonosov’s lab for money amounts to repudiating the cultural value of science.

The post Stop this historic science site in St Petersburg from being sold appeared first on Physics World.


What happens when a warp drive collapses?

14 September 2024, 15:02

Simulations of space–times that contain negative energies can help us to better understand wormholes or the interior of black holes. For now, however, the physicists behind one such new study – who admit to being big fans of Star Trek – have used their results to model the gravitational waves that would be emitted by a hypothetical failing warp drive.

Gravitational waves, which are ripples in the fabric of space–time, are emitted by cataclysmic events in the universe, such as binary black hole and neutron star mergers. They might also be emitted by more exotic space–times such as wormholes or warp drives, which, unlike black hole and neutron star mergers, are still the stuff of science fiction.

First predicted by Albert Einstein in his general theory of relativity, gravitational waves were observed directly in 2015 by the Advanced LIGO detectors, which are laser interferometers comprising pairs of several-kilometre-long arms positioned at right angles to each other. As a gravitational wave passes through the detector, it slightly expands one arm while contracting the other. This creates a series of oscillations in the lengths of the arms that can be recorded as interference pattern variations.
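To get a feel for the sensitivity this readout scheme demands, here is a back-of-envelope sketch. The numbers are typical textbook values, not figures from this article: LIGO’s arms are about 4 km long and detected strains are of order 10⁻²¹.

```python
# Illustrative only: convert a gravitational-wave strain h into the
# differential arm-length change an interferometer must resolve.
L_arm = 4.0e3            # arm length in metres (LIGO-scale)
h = 1.0e-21              # typical strain amplitude of a detected wave
delta_L = h * L_arm / 2  # one arm stretches by this much as the other contracts
print(delta_L)           # ~2e-18 m, far smaller than a proton
```

The punchline of the arithmetic is that the arms change length by about a thousandth of a proton’s size, which is why the interference-pattern readout described above is needed at all.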

The first detection by LIGO arose from the collision and merging of two black holes. These observations heralded the start of the era of gravitational-wave astronomy, making it possible to view extreme gravitational events across the entire visible universe. Since then, astrophysicists have been asking themselves whether signals from other strongly distorted regions of space–time, beyond the compact binary mergers already detected, could be seen in the future.

Warp drives or bubbles

A “warp drive” (or “warp bubble”) is a hypothetical device that could allow space travellers to traverse space at faster-than-light speeds – as measured by some distant observer. Such a bubble contracts spacetime in front of it and expands spacetime behind it. It can do this, in theory, because unlike objects within space–time, space–time itself can bend, expand or contract at any speed. A spacecraft contained in such a drive could therefore arrive at its destination faster than light would in normal space without breaking Einstein’s cosmic speed limit.

The idea of warp drives is not new. They were first proposed in 1994 by the Mexican physicist Miguel Alcubierre, who named them after the mode of travel used in the sci-fi series Star Trek. We are not likely to see such drives anytime soon, however, since the only way to produce them is by generating vast amounts of negative energy – perhaps by using some sort of undiscovered exotic matter.

A warp drive that is functioning normally, and travelling at a constant velocity, does not emit any gravitational waves. When it collapses, accelerates or decelerates, however, this should generate gravitational waves.

A team of physicists from Queen Mary University of London (QMUL), the University of Potsdam, the Max Planck Institute (MPI) for Gravitational Physics in Potsdam and Cardiff University decided to study the case of a collapsing warp drive. The warp drive is interesting, say the researchers, since it uses the gravitational distortion of spacetime to propel a spaceship forward, rather than the usual kind of fuel-based reaction system.

Decomposing spacetime

The team, led by Katy Clough of QMUL, Tim Dietrich from Potsdam and Sebastian Khan at Cardiff, began by describing the initial bubble using the original Alcubierre definition, giving it a fixed wall thickness. They then developed a formalism to describe the warp fluid and how it evolved, varying its initial velocity at the point of collapse (which is related to the amplitude of the warp bubble). Finally, they analysed the resulting gravitational-wave signatures and quantified the radiation of energy from the space–time region.

While Einstein’s equations of general relativity treat space and time on an equal footing, we have to split the time and space dimensions to do a proper simulation of how the system evolves, explains Dietrich. This approach is normally referred to as the 3+1 decomposition of spacetime. “We followed this very common approach, which is routinely used to study binary black hole or binary neutron star mergers.”

It was not that simple, however: “given the particular spacetime that we were investigating, we also had to determine additional equations for the simulation of the material that is sustaining the warp bubble from collapse,” says Dietrich. “We also had to find a way to introduce the collapse that then triggers the emission of gravitational waves.”

Since they were solving Einstein’s field equation directly, the researchers say they could read off how spacetime evolves and the gravitational waves emitted from their simulation.

Very speculative work

Dietrich says that he and his colleagues are big Star Trek fans and that the idea for the project, which they detail in The Open Journal of Astrophysics, came to them a few years ago in Göttingen in Germany, where Clough was doing her postdoc. “Sebastian then had the idea of using the simulations that we normally use to help detect black holes to look for signatures of the Alcubierre warp drive metric,” recalls Dietrich. “We thought it would be a quick project, but it turned out to be much harder than we expected.”

The researchers found that, for warp ships around a kilometre in size, the gravitational waves emitted are of a high frequency and, therefore, not detectable with current gravitational-wave detectors. “While there are proposals for new gravitational-wave detectors at higher frequencies, our work is very speculative, and so it probably wouldn’t be sufficient to motivate anyone to build anything,” says Dietrich. “It does have a number of theoretical implications for our understanding of exotic spacetimes though,” he adds. “Since this is one of the few cases in which consistent simulations have been performed for spacetimes containing exotic forms of matter, namely negative energy, our work could be extended to also study wormholes, the inside of black holes, or the very early stages of the universe, where negative energy might prevent the formation of singularities.”

Even though they “had a lot of fun” during this proof-of-principle project, the researchers say that they will now probably go back to their “normal” work, namely the study of compact binary systems.

The post What happens when a warp drive collapses? appeared first on Physics World.


UK reveals next STEPs toward prototype fusion power plant

13 September 2024, 10:30

“Fiendish”, “technically tough”, “difficult”, “complicated”. Those were just a few of the choice words used at an event last week in Oxfordshire, UK, to describe ambitious plans to build a prototype fusion power plant. Held at the UK Atomic Energy Authority (UKAEA) Culham campus, the half-day meeting on 5 September saw engineers and physicists discuss the challenges that lie ahead as well as the opportunities that this fusion “moonshot” represents.

The prototype fusion plant in question is known as the Spherical Tokamak for Energy Production (STEP), which was first announced by the UK government in 2019 when it unveiled a £220m package of funding for the project. STEP will be based on “spherical” tokamak technology currently being pioneered at the UK’s Culham Centre for Fusion Energy (CCFE). In 2022 a site for STEP was chosen at the former coal-fired power station at West Burton in Nottinghamshire. Operations are expected to begin in the 2040s with STEP aiming to prove the commercial viability of fusion by demonstrating net energy, fuel self-sufficiency and a viable route to plant maintenance.

A spherical tokamak is more compact than a traditional tokamak, such as the ITER experimental fusion reactor currently being built in Cadarache, France, which has been hit with cost hikes and delays in recent years. The compact nature of the spherical tokamak, which was first pioneered in the UK in the 1980s, is expected to minimize costs, maximize energy output and possibly make the device easier to maintain when scaled up to a fully fledged fusion power plant.

The current leading spherical tokamaks worldwide are the Mega Amp Spherical Tokamak (MAST-U) at the CCFE and the National Spherical Torus Experiment at the Princeton Plasma Physics Laboratory (PPPL) in the US, which is nearing the completion of an upgrade. Despite much progress, however, those tokamaks are yet to demonstrate fusion conditions through the use of the hydrogen isotope tritium in the fuel, which is necessary to achieve a “burning” plasma. This goal has, though, already been achieved in traditional tokamaks such as the Joint European Torus, which turned off in 2023.

“STEP is a big extrapolation from today’s machines,” admitted STEP chief engineer Chris Waldon at the event. “It is complex and complicated but we are now beginning to converge on a single design [for STEP]”.

A fusion ‘moonshot’

The meeting at Culham was held to mark the publication of 15 papers on the technical progress made on STEP over the past four years. They cover STEP’s plasma, its maintenance, magnets, tritium-breeding programme as well as pathways for fuel self-sufficiency (Philosophical Transactions A 382 20230416). Officials were keen to stress, however, that the papers were a snapshot of progress to date and that since then some aspects of the design have progressed.

One issue that cropped up during the talks was the challenge of extrapolating every element of tokamak technology to STEP – a feat described by one panellist as being “so far off our graphs”. While theory and modelling have come a long way in the last decade, even the best models will not be a substitute for the real thing. “Until we do STEP we won’t know everything,” says physicist Steve Cowley, director of the PPPL. Those challenges involve managing potential instabilities and disruptions in the plasma – which at worst could obliterate the wall of a reactor – as well as operating high-temperature superconducting magnets, used to confine the plasma, that have yet to be tested under the intensity of fusion conditions.

We need to produce a project that will deliver energy someone will buy

Ian Chapman

Another significant challenge is self-breeding tritium via neutron capture in lithium, which would be done in a roughly one-metre thick “blanket” surrounding the reactor. This is far from straightforward and the STEP team are still researching what technology might prevail – whether to use a solid pebble-bed or liquid lithium. While liquid lithium is good at producing tritium, for example, extracting the isotope to put back into the reactor is complex.

Howard Wilson, fusion pilot plant R&D lead at the Oak Ridge National Laboratory in the US, was keen to stress that STEP will not be a commercial power plant. Instead, its job is to demonstrate “a pathway towards commercialisation”. That is likely to come in several stages, the first being to generate 1 GW of power, of which 100 MW would go to the grid (the other 900 MW being needed to power the plant’s own systems). The second stage will be to test whether that power production is sustainable via the self-breeding of tritium back into the reactor – what is known as a “closed fuel cycle”.
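The arithmetic of that first stage is worth making explicit. A minimal sketch, using only the figures quoted above:

```python
# STEP first-stage power budget, using the figures quoted in the text:
# 1 GW generated in total, 900 MW recirculated to run the plant itself.
generated_mw = 1000
recirculated_mw = 900
to_grid_mw = generated_mw - recirculated_mw        # net power delivered
recirculating_fraction = recirculated_mw / generated_mw
print(to_grid_mw, recirculating_fraction)          # 100 0.9
```

That 90% recirculating fraction is why demonstrating net energy is so central to STEP’s goals: only a tenth of the gross output would reach the grid.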

Ian Chapman, chief executive of the UKAEA, outlined what he called the “fiendish” challenges that lie ahead for fusion, even if STEP demonstrates that it is possible to deliver energy to the grid in a sustainable way. “We need to produce a project that will deliver energy someone will buy,” he said. That will be achieved in part via STEP’s third objective, which is to get a better understanding of the maintenance requirements of a fusion power plant and the impact that would have on reactor downtime. “We fail if there is not [a] cost-effective solution,” added STEP engineering director Debbie Kempton.

STEP officials are now selecting industry partners – in engineering and construction – to work alongside the UKAEA on the design. Indeed, STEP is as much about creating a whole fusion industry as it is about physically building a plant. A breathless two-minute pre-event promotional film – which loftily compared the development of fusion to the advent of the steam train and vaccines – was certainly given a much-needed reality check.

The post UK reveals next STEPs toward prototype fusion power plant appeared first on Physics World.


Annular eclipse photograph bags Royal Observatory Greenwich prize

12 September 2024, 20:30

US photographer Ryan Imperio has beaten thousands of amateur and professional photographers from around the world to win the 2024 Astronomy Photographer of the Year.

The image – Distorted Shadows of the Moon’s Surface Created by an Annular Eclipse – was taken during the 2023 annular eclipse.

It captures the progression of Baily’s beads, which are only visible when the Moon either enters or exits an eclipse. They are formed when sunlight shines through the valleys and craters of the Moon’s surface, breaking the eclipse’s well-known ring pattern.

“This is an impressive dissection of the fleeting few seconds during the visibility of the Baily’s beads,” noted meteorologist and competition judge Kerry-Ann Lecky Hepburn. “This image left me captivated and amazed. It’s exceptional work deserving of high recognition.”

As well as winning the £10,000 top prize, the image will go on display, along with other selected pictures from the competition, in an exhibition at the National Maritime Museum in Greenwich that opens on 13 September.

The award – now in its 16th year – is run by the Royal Observatory Greenwich in association with insurer Liberty Specialty Markets and BBC Sky at Night Magazine.

The competition received over 3500 entries from 58 countries.

The post Annular eclipse photograph bags Royal Observatory Greenwich prize appeared first on Physics World.


Looking to the future of statistical physics, how intense storms can affect your cup of tea

12 September 2024, 16:47

In this episode of the Physics World Weekly podcast we explore two related areas of physics: statistical physics and thermodynamics.

First up we have two leading lights in statistical physics who explain how researchers in the field are studying phenomena as diverse as active matter and artificial intelligence.

They are Leticia Cugliandolo of Sorbonne University in Paris and Marc Mézard of Bocconi University in Milan, Italy.

Cugliandolo is also chief scientific director of the Journal of Statistical Mechanics: Theory and Experiment (JSTAT), a role that Mézard has just stepped down from. They both talk about how the journal and statistical physics have evolved over the past two decades and what the future could bring.

The second segment of this episode explores how intense storms can affect your cup of tea. Our guests are the meteorologists Caleb Miller and Giles Harrison, who measured the boiling point of water as Storm Ciarán passed over the University of Reading in 2023. They explain the thermodynamics of what they found, and how the storm could have affected the quality of the millions of cups of tea brewed that day.

The post Looking to the future of statistical physics, how intense storms can affect your cup of tea appeared first on Physics World.


Carbon defect in boron nitride creates first omnidirectional magnetometer

12 September 2024, 11:43

A newly discovered carbon-based defect in the two-dimensional material hexagonal boron nitride (hBN) could be used as a quantum sensor to detect magnetic fields in any direction – a feat that is not possible with existing quantum sensing devices. Developed by a research team in Australia, the sensor can also detect temperature changes in a sample using the boron vacancy defect present in hBN. And thanks to its atomically thin structure, the sensor can conform to the shape of a sample, making it useful for probing structures that aren’t perfectly smooth.

The most sensitive magnetic field detectors available today exploit quantum effects to map the presence of extremely weak fields. To date, most of these have been made out of diamond and rely on the nitrogen-vacancy (NV) centres contained within. NV centres are naturally occurring defects in the diamond lattice in which a nitrogen atom replaces one carbon atom and an adjacent lattice site is left vacant. Together, the nitrogen atom and the vacancy can behave as a negatively charged entity with an intrinsic spin. NV centres are well isolated from their surroundings, which means that their quantum behaviour is robust and stable.

When a photon hits an NV– centre, it can excite an electron to a higher-energy state. As it then decays back to the ground state, it may emit a photon of a different wavelength. The NV– centre has three spin sublevels, and the excited state of each sublevel has a different probability of emitting a photon when it decays.

By exciting an individual NV– centre repeatedly and collecting the emitted photons, researchers can detect its spin state. And since the spin state can be influenced by external variables such as magnetic field, electric field, temperature, force and pressure, NV– centres can therefore be used as atomic-scale sensors. Indeed, they are routinely employed today to study a wide variety of biological and physical systems.
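The readout scheme described above can be caricatured in a few lines of code. The per-cycle emission probabilities below are made-up illustrative numbers, not measured values; the point is only that the brighter spin state is distinguished by accumulating photon counts over many excite-and-collect repetitions:

```python
import random

random.seed(1)  # make this toy example reproducible

def photon_counts(bright, shots=20000, p_bright=0.03, p_dark=0.02):
    """Count photons collected over repeated excite/collect cycles.

    The 'bright' spin sublevel is assumed to emit a photon with a
    higher per-cycle probability than the 'dark' one (toy numbers).
    """
    p = p_bright if bright else p_dark
    return sum(random.random() < p for _ in range(shots))

counts_bright = photon_counts(bright=True)
counts_dark = photon_counts(bright=False)
# The spin state is inferred from the fluorescence contrast:
print(counts_bright > counts_dark)
```

A single cycle tells you almost nothing; it is the statistics over thousands of repetitions that separate the sublevels, which is why external fields that shift the spin populations become measurable.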

There is a problem, though – NV centres can only detect magnetic fields that are aligned along the same direction as the sensor. Devices must therefore contain many sensors placed at different alignment angles, which makes them difficult to use and limits them to specific applications. What’s more, the fact that they are rigid (diamond being the hardest material known) means they cannot conform to the sample being studied.

A new carbon-based defect

Researchers recently discovered a new carbon-based defect in hBN, in addition to the boron vacancy it is already known to contain. In this latest work, and thanks to a carefully calibrated Rabi experiment (a technique in which spin transitions are driven coherently), a team led by Jean-Philippe Tetienne of RMIT University and Igor Aharonovich of the University of Technology Sydney found that the carbon-based defect behaves as a spin-half system (S = 1/2). In comparison, the spin of the boron-vacancy defect is equal to one. And it is this spin-half nature of the former that enables it to detect magnetic fields in any direction, say the researchers.

Team members Sam Scholten and Priya Singh
Research team Sam Scholten and Priya Singh working on their hBN quantum sensing system. (Courtesy: RMIT University)

“Having two different independently addressable spin species within the same material at room temperature is unique, not even diamond has this capability,” explains Priya Singh from RMIT University, one of the lead authors of this study. “This is exciting because each spin species has its advantages and limitations, and so with hBN we can combine the best of both worlds. This is important especially for quantum sensing, where the spin half enables omnidirectional magnetometry, with no blind spot, while the spin one provides directional information when needed and is also a good temperature sensor.”

Until now, the spin multiplicity of the carbon defect was under debate in the hBN community, adds co-first author Sam Scholten from the University of Melbourne. “We have been able to unambiguously prove its spin-half nature, or more likely a pair of weakly coupled spin-half electrons.”

The new S=1/2 sensor can be controlled using light in the same way as the boron vacancy-based sensor. What’s more, the two defects can be tuned to interact with each other and thus used together to detect both magnetic fields and temperature at the same time. Singh points out that the carbon-based defects were also naturally present in pretty much every hBN sample the team studied, from commercially sourced bulk crystals and powders to lab-made epitaxial films. “To create the boron vacancy defects in the same sample, we had to perform just one extra step, namely irradiating the samples with high-energy electrons, and that’s it,” she explains.

To create the hBN sensor, the researchers simply drop-cast a hBN powder suspension onto the target object, or transferred an epitaxial film or an exfoliated flake. “hBN is very versatile and easy to work with,” says Singh. “It is also low cost and easy to integrate with various other materials, so we expect lots of applications will emerge in nanoscale sensing – especially thanks to the omnidirectional magnetometry capability, unique for solid-state quantum sensors.”

The researchers are now trying to determine the exact crystallographic structure of the S=1/2 carbon defects and how they can engineer them on-demand in a few layers of hBN. “We are also planning sensing experiments that leverage the omnidirectional magnetometry capability,” says Scholten. “For instance, we can now image the stray field from a van der Waals ferromagnet as a function of the azimuthal angle of the applied field. In this way, we can precisely determine the magnetic anisotropy, something that has been a challenge with other methods in the case of ultrathin materials.”

The study is detailed in Nature Communications.

The post Carbon defect in boron nitride creates first omnidirectional magnetometer appeared first on Physics World.

  •  

Dancing humans embody topological properties

Par : Anna Demming
11 septembre 2024 à 18:08

High school students and scientists in the US have used dance to illustrate the physics of topological insulators. The students followed carefully choreographed instructions developed by scientists in what was a fun outreach activity that explained topological phenomena. The exercise demonstrates an alternative analogue of topologically nontrivial systems, which could prove useful for research.

“We thought that the way all of these phenomena are explained is rather contrived, and we wanted to, in some sense, democratize the notions of topological phases of matter to a broader audience,” says Joel Yuen-Zhou, a theoretical chemist at the University of California, San Diego (UCSD). Yuen-Zhou led the research, which was done in collaboration with students and staff at Orange Glen High School near San Diego.

Topological insulators are a type of topological material where the bulk is an electrical insulator but the surface or edges (depending on whether the system is 3D or 2D) conduct electricity. The conducting states arise due to a characteristic of the electronic band structure associated with the system as a whole, which means they persist despite defects or distortions in the system so long as the fundamental topology of the system is undisturbed. Topology can be understood in terms of a coffee mug being topologically equivalent to a ring doughnut, because they both have a hole all the way through. This is unlike a jam doughnut, which does not have a hole and is therefore not topologically equivalent to a coffee mug.

Insulators without the conducting edge or surface states are “topologically trivial” and have insulating properties throughout. Yuen-Zhou explains that for topologically nontrivial properties to emerge, the system must be able to support wave phenomena and have something that fulfils the role of a magnetic field in condensed-matter topological insulators. As such, analogues of topological insulators have been reported in systems ranging from oceanic and atmospheric fluids to enantiomeric molecules and active matter. Nonetheless, despite the interest in topological properties for potential applications, they can still seem abstract and arcane.

Human analogue

Yuen-Zhou set about devising a human analogue of a topological insulator with then PhD student Matthew Du, who is now at the University of Chicago. The first step was to establish a Hamiltonian that defines how each site in a 2D lattice interacts with its neighbours and a magnetic field. They then formulated the Schrödinger equation of the system as an algorithm that updates after discrete steps in time and reproduces essential features of topological insulator behaviour. These are chiral propagation around the edges when initially excited at an edge; robustness to defects; propagation around the inside edge when the lattice has a hole in it; and an insulating bulk.
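The kind of lattice model described above can be sketched in a few lines of code. The following is a minimal illustration, not the authors’ actual algorithm: it evolves a single excitation on a small square lattice with a Hofstadter-like magnetic flux (a standard toy model for a 2D topological insulator), stepping it forward with the exact unitary for a discrete time step. The lattice size, flux and time step are all assumed values chosen for illustration.

```python
import numpy as np

N = 8            # N x N square lattice (illustrative size)
phi = 1.0 / 4.0  # magnetic flux per plaquette, in flux quanta (assumed)

def idx(x, y):
    """Flatten 2D lattice coordinates to a single state index."""
    return x * N + y

# Tight-binding Hamiltonian in the Landau gauge: real hopping along x,
# a position-dependent phase along y plays the role of the magnetic field.
H = np.zeros((N * N, N * N), dtype=complex)
for x in range(N):
    for y in range(N):
        if x + 1 < N:                      # hop in the x direction
            H[idx(x, y), idx(x + 1, y)] = -1.0
        if y + 1 < N:                      # hop in y, with a Peierls phase
            H[idx(x, y), idx(x, y + 1)] = -np.exp(2j * np.pi * phi * x)
H = H + H.conj().T                          # make the Hamiltonian Hermitian

# Unitary for one discrete time step, U = exp(-i H dt), built from the
# eigendecomposition (pure NumPy, no scipy needed).
dt = 0.5
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T

# Start with an excitation on an edge site and step the state forward.
psi = np.zeros(N * N, dtype=complex)
psi[idx(0, N // 2)] = 1.0
for _ in range(20):
    psi = U @ psi

print(abs(np.vdot(psi, psi)))  # total probability is conserved
```

Because the evolution is unitary, the excitation’s total probability is conserved at every step; in a topologically nontrivial phase an edge excitation evolved this way travels chirally along the boundary rather than spreading into the bulk.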

The UCSD researchers then explored how this quantum behaviour could be translated into human behaviour. This was a challenge because quantum mechanics operates in the realm of complex numbers, which have real and imaginary components. Fortunately, they were able to identify initial conditions that lead to only real number values for the interactions at each time step of the algorithm. That way the humans, for whom imaginary interactions might be hard to simulate, could legitimately manifest only real numbers as they step through the algorithm. These real values were either one (choreographed as waving flags up), minus one (waving flags down) or zero (standing still).
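The mapping from real amplitudes to choreography can be made concrete with a toy helper function. This is a hypothetical sketch (the function name and tolerance are assumptions, not from the paper), showing how the sign of an amplitude selects one of the three flag moves described above:

```python
def flag_move(amplitude, tol=1e-9):
    """Map a real amplitude to a dance move: +1 -> flags up,
    -1 -> flags down, 0 -> stand still (tol absorbs rounding)."""
    if amplitude > tol:
        return "flags up"
    if amplitude < -tol:
        return "flags down"
    return "stand still"

print([flag_move(a) for a in (1.0, -1.0, 0.0)])
# -> ['flags up', 'flags down', 'stand still']
```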

“The structure isn’t actually specific just to the model that we focus on,” explains Du. “There’s actually a whole class of these kinds of models, and we demonstrate this for another example – actually a more famous model – the Haldane model, which has a honeycomb lattice.”

The researchers then created a grid on a floor at Orange Glen High School, with lines in blue or red joining neighbouring squares. They defined whether the interaction between those sites was parallel or antiparallel (that is, whether the occupants of the squares should wave the flags in the same or opposite direction to each other when prompted).

Commander and caller

A “commander” acts as the initial excitation that starts things off. This is prompted by someone who is not part of the 2D lattice, whom the researchers liken to a caller in line, square or contra dancing. The caller then prompts the commander to come to a standstill, at which point all those who have their flags waving determine if they have a “match”, that is, whether they are dancing in kind with or opposite to their neighbours as designated by the blue and red lines. Those with a match then stop moving, after which the “commander” or excitation moves to the one site where there is no such match.

Yuen-Zhou and Du taught the choreography to second and third year high school students. The result was that excitations propagated around the edge of the lattice, but bulk excitations fizzled out. There was also a resistance to “defects”.

“The main point about topological properties is that they are characterized by mathematics that are insensitive to many details,” says Yuen-Zhou. “While we choreograph the dance, even if there are imperfections and the students mess up, the dance remains and there is the flow of the dance along the edges of the group of people.”

The researchers were excited about showing that even a system as familiar as a group of people could provide an analogue of a topological material, since so far these properties have been “restricted to very highly engineered systems or very exotic materials,” as Yuen-Zhou points out.

“The mapping of a wave function to real numbers then to human movements clearly indicates the thought process of the researchers to make it more meaningful to students as an outreach activity,” says Shanti Pise, a principal technical officer at the Indian Institute of Science, Education and Research in Pune. She was not involved in this research project but specializes in using dance to teach mathematical ideas. “I think this unique integration of wave physics and dance would also give a direction to many researchers, teachers and the general audience to think, experiment and share their ideas!”

The research is described in Science Advances.

The post Dancing humans embody topological properties appeared first on Physics World.

  •  