Yesterday — 10 May 2024 — Physics World

Grounds for celebration as ‘hub of all things coffee’ opens at University of California, Davis

10 May 2024, 17:38

Physicists are well-known for their interest in coffee, not only drinking it but also studying the fascinating science behind an espresso.

Now researchers at the University of California, Davis (UC Davis), have taken it to a whole new level by forming a research institute dedicated to the science of the perfect brew.

The Coffee Center will be used by more than 50 researchers and includes labs dedicated to brewing, “sensory and cupping” and the chemical analysis of coffee.

The centre has its origins in a 2013 course on “the design of coffee” by UC Davis chemical engineers William Ristenpart and Tonya Kuhl.

Two years later a coffee lab was established at the university, and in 2022 construction began on the Coffee Center, which was funded with $6m from private donors.

The official opening on 3 May was attended by over 200 people, who were treated to bean roasting and espresso brewing demonstrations.

“Think of this center as a hub of all things coffee,” noted UC Davis chancellor Gary May at the opening. “Together, we bring rigorous coffee science and cutting-edge technology to the world stage.”

Better latte than never.

The post Grounds for celebration as ‘hub of all things coffee’ opens at University of California, Davis appeared first on Physics World.


The future of 2D materials: grand challenges and opportunities

10 May 2024, 16:41
Source: Shutterstock, Marco de Benedictis

Graphene, the first 2D material, was isolated by Prof. Andre Geim and Prof. Konstantin Novoselov in 2004. Since then, a variety of 2D materials have been discovered, including transition metal dichalcogenides, phosphorene and MXenes. 2D materials have remarkable characteristics and are making significant contributions to quantum technologies, electronics, medicine, and renewable energy generation and storage, to name but a few fields. However, we are still exploring the full potential of 2D materials, and many challenges must be overcome.

Join us for this panel discussion, hosted by 2D Materials, where leading experts will share their insights and perspectives on the current status, challenges and future directions of 2D materials research. You will have the opportunity to ask questions during the Q&A session.

Have a question for the panel?

We welcome questions in advance of the webinar, so please fill in this form.

Left to right: Stephan Roche, Konstantin Novoselov, Joan Redwing, Yury Gogotsi and Cecilia Mattevi

Chair
Prof. Stephan Roche is an ICREA Research Professor and head of the Theoretical & Computational Nanoscience Group at the Catalan Institute of Nanoscience and Nanotechnology (ICN2). He is a theorist specializing in quantum transport theory in condensed matter, spin transport physics and device simulation.

Speakers
Prof. Konstantin Novoselov is the Langworthy Professor of Physics and Royal Society Research Professor at The University of Manchester. In 2004, he isolated graphene alongside Andre Geim and was awarded the Nobel Prize in Physics in 2010 for his achievements.

Prof. Joan Redwing is a Distinguished Professor of Materials Science and Engineering at Penn State University where she holds an adjunct appointment in the Department of Electrical and Computer Engineering. Her research focuses on crystal growth and epitaxy of electronic materials, with an emphasis on thin film and nanomaterial synthesis by metalorganic chemical vapour deposition.

Prof. Yury Gogotsi is a Distinguished University Professor and Charles T and Ruth M Bach Endowed Chair in the Department of Materials Science and Engineering at Drexel University. He is the founding director of the A.J. Drexel Nanomaterials Institute.

Prof. Cecilia Mattevi is a Professor of Materials Science in the Department of Materials at Imperial College London. Her expertise centres on the science and engineering of novel atomically thin 2D materials to enable applications in energy conversion and energy storage.

About this journal

2D Materials is a multidisciplinary, electronic-only journal devoted to publishing fundamental and applied research of the highest quality and impact covering all aspects of graphene and related two-dimensional materials.

Editor-in-chief: Wencai Ren, Shenyang National Laboratory for Materials Science, Chinese Academy of Sciences, China.


Data science CDT puts industry collaboration at its heart

10 May 2024, 15:01

Physics is a constantly evolving field – how do we make sure the next generation of physicists receive training that keeps pace with new developments and continues to support the cutting edge of research?

According to Carsten P Welsch, a distinguished accelerator scientist at the University of Liverpool, in the age of machine learning and AI, PhD students in different physics disciplines have more in common than they might think.

“Research is increasingly data-intensive, so while a particle physicist and a medical physicist might spend their days thinking about very different concepts, the approaches, the algorithms, even the tools that people use, are often either the same or very similar,” says Professor Welsch.

Data science is extremely important for any type of research and will probably outlive any particular research field

Professor Welsch

Welsch is the director of the Liverpool Centre for Doctoral Training (CDT) for Innovation in Data Intensive Science (LIV.INNO). Founded in 2022, the CDT is currently recruiting its third cohort of PhD students. Current students are undertaking research that spans medical, environmental, particle and nuclear physics, but their projects are all underpinned by data science. According to Professor Welsch, “Data science is extremely important for any type of research and will probably outlive any particular research field.”

Next-generation PhD training

Carsten Welsch has a keen interest in improving postgraduate education: he was chair of STFC’s Education, Training and Careers Committee and a member of the UKRI Skills Advisory Group. When it comes to the future of doctoral training, he says: “The big question is ‘where do we want UK researchers to be in a few years, across all of the different research areas?’”

He believes that LIV.INNO holds the solution. The CDT aims to give students with data-intensive PhD projects the skills that will enable them to succeed not only in their research but throughout their careers.

Lauryn Eley is a PhD student in the first LIV.INNO cohort who is researching medical imaging. She became interested in this topic during her undergraduate studies because it applied what she had learned in university to real-world situations. “It’s important that I can see the benefits of my work translated into everyday experiences, which I think medical imaging does quite nicely,” she says.

Miss Eley’s project is partnered with medical technology company Adaptix. The company has developed a mobile X-ray device which, it hopes, will enable doctors to produce a high-quality 3D X-ray image more cheaply and easily than with a traditional CT scanner.

Her task is to build a computational model of the X-ray device and investigate how to optimize the images it produces. To generate high-quality results she must simulate millions of X-rays. She says that the data science training she received at the start of the PhD has been invaluable.

From their first year, students attend lectures on data science topics which cover Monte Carlo simulation, high-performance computing, machine learning and AI, and data analysis. Lauryn Eley has an experimental background, and she says that the lectures enabled her to get to grips with the C++ she needed for her research.

Boosting careers with industry placements

Professor Welsch says that from the start, industry partnership has been at the centre of the LIV.INNO CDT. Students spend six months of their PhD on an industrial placement, and Lauryn Eley says that her work with Adaptix has been eye-opening, enabling her to experience first-hand the fast-paced, goal-driven world of industry, which she found very different to academic research.

While the CDT may particularly appeal to those keen on pursuing a career in industry, Professor Welsch emphasizes the importance of students delivering high-quality research. Indeed, he believes that LIV.INNO’s approach provides students with the best chance of success in their academic endeavours. Students are taught to use project management skills to plan and deliver their projects, which he says puts them “in the driving seat” as researchers. They are also empowered to take initiative, working in partnership with their supervisors rather than waiting for external guidance.

LIV.INNO builds on a previous programme called the Liverpool Big Data Science Centre for Doctoral Training, which ran between 2017 and 2024. Professor Welsch was also the director of that CDT, and he has noticed that when it comes to partnering with student projects, industry attitudes have undergone a shift.

“When we approached the companies for the first time, you could definitely see that there was a lot of scepticism,” he says. “However, with the case studies from the first CDT, we found it much easier to attract industry partners to LIV.INNO.” Professor Welsch thinks that this demonstrates the benefits that industry-academia partnerships bring to both students and companies.

The first cohort from LIV.INNO is only in its second year, but many of the students from the previous CDT secured full-time jobs with the companies where they did their placements. Whatever career path students eventually go down, Carsten Welsch is convinced that the cross-sector experience students get with LIV.INNO sets them up for success, saying “They can make a much better informed decision about where they would like to continue their careers.”



GMT or TMT? Fate of next-generation telescope falls to expert panel set up by US National Science Foundation

10 May 2024, 14:01

The US National Science Foundation (NSF) is to assemble a panel to help it decide whether to fund the Giant Magellan Telescope (GMT) or the Thirty Meter Telescope (TMT). The agency expects the panel, whose membership has yet to be determined, to report by 30 September, the end of the US government’s financial year.

The NSF first announced in February that it would support the construction of only one of the two next-generation ground-based telescopes due to rising costs. The GMT, priced at $2.54bn, will be located in Chile, while the TMT, which is expected to cost at least $3bn, is set to be built in Hawaii.

A decision on which telescope to fund was initially slated for May. But at a meeting of the National Science Board (NSB) last week, NSF boss Sethuraman Panchanathan revealed the panel would provide further advice to the agency. The decision to look to outsiders followed discussions with the US government and the NSB, which oversees the NSF.

The panel, which will include scientists and engineers, will assess “the readiness of the project from all perspectives” and consider how supporting each telescope would affect the NSF’s overall budget.

It will examine progress made to date, the level of partnerships and resources, and risk management. Complementarity to the European Extremely Large Telescope, opportunities for early-career scientists, and public engagement will be looked at too.

“I want to be very clear that this is not a decision to construct any telescopes,” Panchanathan, who originally trained as a physicist, told the NSB. “This is simply part of a process of gathering critical information to inform my decision-making on advancing either project to the final design stage.”


US DIII-D National Fusion Facility resumes operations following series of upgrades

10 May 2024, 11:46

The DIII-D National Fusion Facility in San Diego has completed eight months of upgrades that will allow researchers to better control and study fusion plasmas.

DIII-D is the largest magnetic-fusion facility in the US and is used by more than 700 researchers at 100 institutions worldwide. The DIII-D tokamak is a doughnut-shaped vacuum chamber surrounded by electromagnets that confine a plasma at temperatures exceeding 10 times that of the Sun, hot enough to fuse hydrogen and produce energy.

Since July 2023, engineers and technicians have installed new systems to better control the fusion plasma. This includes a range of new diagnostic instruments as well as enhancements to the way that the plasma is heated.

Another change is to the divertor system, which removes exhaust heat and impurities from the tokamak. Engineers have installed a new configuration called a “shape and volume rise” divertor, which consists of a series of modular divertor configurations that DIII-D will now test when experiments start up later this month.

The new divertor will allow plasma shapes to be studied that are expected to produce high fusion power performance but were not possible with DIII-D’s previous divertor geometry.

Work on the upgraded facility is also expected to support experiments that will be performed at the ITER experimental fusion reactor, which is currently being built in Cadarache, France.

“The upgrades provide us with exciting new capabilities and key enhancements,” notes DIII-D director Richard Buttery. “Our scientists will be able to use our upgraded systems and diagnostics to answer key questions on commercial industry-relevant technology, materials, and operations.”


‘My career has not been a straight line’: Craig Jantzen on switching from nuclear science to diplomacy

10 May 2024, 10:21
Fusing science and diplomacy: Craig Jantzen visits a fusion physics laboratory at the KTH Royal Institute of Technology in Stockholm. (Courtesy: Craig Jantzen)

When Craig Jantzen was a PhD student at the University of Manchester in the UK, he used to go to politics and economics lectures alongside his research into nuclear materials. Jantzen is fascinated by all things nuclear, but he also saw the PhD as an opportunity to broaden his horizons beyond science. “You’re not drained from doing a nine-to-five job every day, and you’re around people that want to learn constantly,” he recalls.

Jantzen’s PhD, which he finished in 2017, involved investigating materials for next-generation nuclear reactors. It has been proposed that molten chloride salts, which are excellent heat conductors, could be used instead of water as reactor coolants, but these salts are incredibly corrosive to metals. Jantzen was testing the corrosion of different metal alloys in molten chloride salts in order to identify optimal materials for these reactors. But he is now a diplomat working on science collaboration and policy for the UK government. Given his interest in politics, Jantzen’s job might not seem surprising, but he emphasizes that his career has “not been a straight line”.

Having worked in finance, energy and environmental policy as well as the UK government’s COVID-19 response, Jantzen is currently based in Stockholm as the first secretary and regional manager for the UK’s Science Innovation Network where he covers the Nordic and Baltic regions. The network aims to build collaboration, promote UK research and provide expertise to the government. He leads a team of trained scientists, many of whom have PhDs, using their research experience to address policy issues like AI and climate change.

Embracing change

Jantzen’s first experience of what it would be like to work as a diplomat was sparked by a chance encounter at a conference during his PhD. He attended a talk by a speaker who had worked at the International Atomic Energy Agency (IAEA), which promotes the safe use of nuclear technologies. Jantzen was particularly intrigued to hear the speaker talk about nuclear safeguards, and in his second year, he did a six-month internship at the IAEA in Vienna, working in the same team that had responded to the Fukushima Daiichi nuclear accident in 2011.

I realized I like talking about science a lot more than I enjoy doing science

After his PhD, Jantzen considered staying in academia, but decided that his skills would be of better use elsewhere: “I realized I like talking about science a lot more than I enjoy doing science”. As it turned out, Jantzen’s first job after his PhD was as a financial consultant for Capco in London. “I knew that I would learn a lot in that environment and that they give you a lot of responsibility,” he says, “and I felt that was a good complement to academic research.” Indeed, he credits this experience with getting him over some of the imposter syndrome he had from his PhD. With an emphasis on meeting deadlines, he had to let go of perfectionism and admit when he didn’t know something, eventually realizing that this allowed him to learn much faster.

But after 18 months in finance, it was time for another change. Wanting to do something he’d find more fulfilling, Jantzen started applying for jobs in the UK government. However, his career in the civil service got off to a slightly bumpy start.

He had been offered a role working for the Department for Business, Energy & Industrial Strategy on the proposed Wylfa Newydd nuclear power station in north Wales. However, in January 2019 – less than a week before he was supposed to start – the project was suspended. Instead, Jantzen joined the Energy Strategy team in the same department where he worked on the UK’s plan to reach net-zero emissions by 2050. His research experience had given him “a nuclear energy lens”, but working with modellers and policy teams across technologies like carbon capture and offshore wind gave him a valuable crash-course in the wider energy landscape.

Far-flung ambition

Having previously enjoyed his stint overseas with the IAEA, Jantzen soon started looking for more international-facing roles. With the UK hosting the 2021 United Nations Climate Change Conference (COP26), he knew that international environmental affairs was something he wanted to be part of. In November 2019 Jantzen moved to the Government Office for Science where he worked on the development of the UK’s COP26 science strategy. He also volunteered for the Scientific Advisory Group for Emergencies (SAGE) secretariat during the COVID-19 pandemic, where he co-led the epidemiology policy team and prepared advice that was given to the government.

A science background…helps you do your job more effectively because you understand the technology, you’re not intimidated by it

As it happened, when the opportunity came to move overseas, it was to return to the IAEA on a secondment funded by the UK government. In this role, he advised the IAEA on climate change during COP26 and COP27 – which was held in Egypt in 2022. This gave him the experience he needed to apply for full-time jobs overseas, which is how he ended up in his current position.

Now Jantzen’s day could involve negotiating bilateral agreements, hosting an embassy reception, or running technology workshops. Jantzen believes his science background has been valuable to his career, saying “It helps you do your job more effectively because you understand the technology, you’re not intimidated by it”. As well as technical knowledge, scientists bring a diversity of thought that is valuable to a team, he believes.

Jantzen thinks his school and university-age self would be surprised at where his early interest in nuclear science has taken him: “I never imagined being a diplomat or working internationally.” He had to gradually build up experience before making the jump to a diplomatic role overseas, and his advice to others who are interested in switching from science to diplomacy is not to be deterred if it takes time, saying “I definitely saw stepping stones. I didn’t know exactly what opportunity was going to come up, but when I did, I was just ready for it.”

Earlier — Physics World

Magnetic islands stabilize fusion plasma, simulations suggest

9 May 2024, 17:23

By combining two different approaches to plasma stabilization, physicists in the US and Germany have developed a new technique for suppressing instabilities in tokamak fusion reactors. The team, led by Qiming Hu at Princeton Plasma Physics Laboratory, hopes its computer-modelling results could be an important step towards making nuclear fusion a viable source of energy.

Tokamak fusion reactors use intense magnetic fields to confine and heat hydrogen plasma within their doughnut-shaped interiors. At suitably high temperatures, the hydrogen nuclei will gain enough energy to overcome their mutual repulsion and fuse together to form helium nuclei, releasing energy in the process.

If more energy is released in the reaction than is fed into the tokamak, it would provide an abundant source of clean energy. This has been a goal of researchers since fusion was first created in the laboratory in the 1930s.

Stubborn roadblock

One of the most stubborn roadblocks to achieving sustained fusion is the emergence of periodic plasma instabilities called edge-localized modes (ELMs). These originate in the outer regions of the plasma and result in energy leaking into the tokamak’s walls. If left unchecked, this will cause the fusion reaction to fizzle out, and it can even damage the tokamak.

One of the most promising approaches for suppressing ELMs is the use of resonant magnetic perturbations (RMPs). These are controlled ripples in the confining magnetic field that cause closed loops of magnetic field to form inside the plasma.

Dubbed magnetic islands, these loops do not always have a desirable influence. If they are too large, they risk destabilizing the plasma even further. But by carefully engineering RMPs to generate islands with just the right size, it should be possible to redistribute the pressure inside the plasma, suppressing the growth of ELMs.

In their study, Hu’s team introduced an extra step to this process, which would enable them to better control the parameters of RMPs to generate magnetic islands of just the right size.

Spiralling electrons

This involved injecting the plasma with high-frequency microwaves in a method called edge-localized electron cyclotron current drive (ECCD). Inside the plasma, these waves cause energetic electrons to spiral along the direction of the confining magnetic field lines, generating local currents which run parallel to the field lines.

In previous experiments, ECCD microwaves were most often injected into the core of the plasma. But in their simulations, Hu and colleagues instead directed them to the edge.

“Usually, people think applying localized ECCD at the plasma edge is risky because the microwaves may damage in-vessel components,” Hu explains. “We’ve shown that it’s doable, and we’ve demonstrated the flexibility of the approach.”

Tight control

In simulated tokamak reactors, the team found that their new approach can lower the amount of current necessary to generate RMPs, while also providing tight control over the sizes of magnetic islands as they formed in the plasma.

“Our simulation refines our understanding of the interactions in play,” Hu continues. “When the ECCD was added in the same direction as the current in the plasma, the width of the island decreased, and the pedestal pressure increased.”

The pedestal pressure refers to the region close to the edge of the plasma where the pressure peaks, before dropping off steeply towards the plasma boundary. “Applying the ECCD in the opposite direction produced opposite results, with island width increasing and pedestal pressure dropping or facilitating island opening,” explains Hu.

These simulation results could provide important guidance for physicists running tokamaks – including the ITER experiment, which should begin operation in late 2025. If the same results can be replicated in a real plasma, it could bring the long-awaited goal of sustained nuclear fusion a step closer.

The research is described in Nuclear Fusion.


Artificial intelligence: developing useful tools that scientists can trust

9 May 2024, 15:07

Artificial intelligence (AI) is used just about everywhere these days and scientific research is no exception. But how can physicists best use the rapidly changing technology – and how can they be confident in the results AI delivers?

This episode of the Physics World Weekly podcast features a conversation with Rick Stevens, who is a cofounder of the Trillion Parameter Consortium, which is developing AI systems for use in science, engineering, medicine and other fields.

Stevens is a computer scientist at the Argonne National Laboratory and the University of Chicago in the US and he explains how AI can help with a wide range of tasks done by scientific researchers.


Astronomy conference travel is on par with Africa’s per-capita carbon footprint

9 May 2024, 14:15

Travel to more than 350 astronomy meetings in 2019 resulted in the emission of 42 500 tonnes of carbon dioxide. That’s the conclusion of the first-ever study to examine the carbon emissions from travel to meetings by an entire field. The carbon cost amounts to about one tonne of carbon dioxide equivalent (tCO2e) per participant per meeting – roughly Africa’s average per capita carbon footprint in 2019 (1.2 tCO2e) (PNAS Nexus 3 pgae143).

Carried out by a team led by Andrea Gokus at Washington University in St. Louis in the US, the study examined 362 meetings in 2019 that were open to anyone in the astronomical community. These included conferences disseminating scientific findings as well as schools providing lectures and training to students and early-career scientists.

Using data on each participant’s home institution, which were available for 300 of the meetings, the researchers estimated travel-related emissions for each event, assuming delegates went by train or plane. For these meetings, the emissions totalled 38 000 tCO2e, and the total distance travelled was equivalent to a trip to the Sun and halfway back.

For the other 62 meetings that did not have details of the participants’ home institutions, the team estimated the emissions using average data from other conferences. Emissions from those events were put at 4500 tCO2e, bringing the total to 42 500 tCO2e.

The meeting with the highest emissions per participant was Great Barriers in Planet Formation, held in Palm Cove, Queensland in Australia, with almost all attendees travelling from outside the country. The travel of its 115 participants resulted in 461 tCO2e, or 4 tCO2e per person on average. The team found that emissions could have been more than halved if the meeting had been held in Europe or the northeastern US.
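The per-participant figure is simple arithmetic on the two totals quoted in the study; a minimal sketch (the 461 tCO2e and 115-participant values are from the article, everything else is illustrative):

```python
# Average per-participant travel emissions for the Palm Cove meeting,
# using the totals quoted in the study.
total_emissions_tco2e = 461  # travel emissions for the whole meeting
participants = 115

per_person = total_emissions_tco2e / participants
print(f"{per_person:.1f} tCO2e per participant")  # prints "4.0 tCO2e per participant"
```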

Hub model

Gokus says that while meetings are important for researchers, “adjustments can be made to reduce their hefty carbon cost”, for example by knowing where participants are based. The researchers found, for example, that emissions from 2019’s biggest astronomical conference – the 223rd American Astronomical Society (AAS) meeting in Seattle – could have been cut by a quarter if it had been held in a more central US location.

The team also explored the impact of switching the 223rd AAS meeting from a single-venue meeting to a hub model, in which simultaneous satellite events are held at different locations. A two-hub model for that conference, with an eastern and western US hub, would have reduced emissions by around 60%, the study finds. Adding a third European hub could have saved 65% of emissions, while a fourth hub in Asia, for instance in Tokyo, would have cut emissions by about 70%.

The researchers claim that such alternative meeting setups, as well as virtual attendance, could have benefits beyond the environment. They point out that finances, complex visa processes, parenting and other caring responsibilities, as well as disabilities, can make travelling to meetings challenging for some.

“By making use of technology to connect virtually, we can foster a more inclusive collaborative approach, which can help us advance our understanding of the Universe further,” says Gokus. “It is important that we work together as a community to achieve this goal, because there is no Planet B.”


Tetris-inspired radiation detector uses machine learning

8 May 2024, 16:19

Inspired by the tetromino shapes in the classic video game Tetris, researchers in the US have designed a simple radiation detector that can monitor radioactive sources both safely and efficiently. Created by Mingda Li and colleagues at the Massachusetts Institute of Technology, the device employs a machine learning algorithm to process data, allowing it to build up accurate maps of sources using just four detector pixels.

Wherever there is a risk of radioactive materials leaking into the environment, it is critical for site managers to map out radiation sources as accurately as possible.

At first glance, there is an obvious solution to maximizing precision, while keeping costs as low as possible, explains Li. “When detecting radiation, the inclination might be to draw nearer to the source to enhance clarity. However, this contradicts the fundamental principles of radiation protection.”

For the people tasked with monitoring radiation, these principles advise that the radiation levels they expose themselves to should be kept as low as reasonably achievable.

Complex and expensive

However, since radiation can interact with intervening objects via a wide array of mechanisms, it is often both complex and expensive to map out radiation sources from reasonably safe distances.

“Thus, the crux of the matter lies in simplifying detector setups without compromising safety by minimizing proximity to radiation sources,” Li explains.

In a typical detector, radiation maps are created by monitoring intensity distribution patterns across a 10×10 array of detector pixels. The main drawback here is that radiation can approach the detector from a variety of directions and distances, making it difficult to extract useful information about the source of that radiation. This is usually done by placing an absorbing mask over the pixels, which provides some directional information, and by doing lots of data processing.

For Li’s team, the first step to reducing the complexity of this process was to minimize redundant information collected by multiple pixels within the array. “By strategically incorporating small [lead] paddings between pixels, we enhance contrast to ensure that each detector receives distinct information, even when the radioactive source is distant,” Li explains.

Machine learning

Next, the team developed machine learning algorithms to extract more accurate information regarding the direction of incoming radiation and the detector’s distance to the source.

Inspiration for the final step of the design would come from an unlikely source. In Tetris, players encounter seven unique tetrominoes, which represent every possible way that four squares can be arranged contiguously to create shapes.

By using these shapes to create detector pixel arrays, the researchers predicted they could achieve levels of accuracy similar to those of detectors with far larger square arrays. As Li explains, “these shapes offer superior efficiency in utilizing pixels, thereby enhancing accuracy.”

To demonstrate this, the team designed a series of four-pixel radiation detectors, with the pixels arranged in Tetris-inspired tetromino shapes. To build up radiation maps, these arrays were moved in circular paths around the radioactive sources being studied. This allowed the detector’s algorithms to discern accurate information about source positions and directions, based on the counts received by the four pixels.
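The localization principle can be sketched numerically. The code below is a hypothetical toy model, not the MIT group’s actual algorithm: it assumes a simple inverse-square-law pixel response with a known source strength, and replaces the machine learning step with a brute-force grid search over candidate source positions.

```python
import numpy as np

# Hypothetical toy model (not the MIT group's actual algorithm):
# four pixels in an S-shaped tetromino, an assumed inverse-square-law
# response, and a brute-force grid search in place of machine learning.
rng = np.random.default_rng(0)

# Pixel centres (arbitrary units) for an S-shaped tetromino
pixels = np.array([[0, 0], [1, 0], [1, 1], [2, 1]], dtype=float)

def expected_counts(src, strength=1e4):
    """Mean counts per pixel from a point source at position `src`."""
    d2 = np.sum((pixels - src) ** 2, axis=1)
    return strength / np.maximum(d2, 1e-6)  # epsilon avoids divide-by-zero

# Simulate noisy (Poisson) counts from a source at a "true" position
true_src = np.array([6.0, 3.0])
observed = rng.poisson(expected_counts(true_src))

# Grid search: the best-fit position minimizes the squared mismatch
# between observed and predicted counts at the four pixels.
best, best_err = None, np.inf
for x in np.linspace(-10, 10, 201):
    for y in np.linspace(-10, 10, 201):
        pred = expected_counts(np.array([x, y]))
        err = np.sum((observed - pred) ** 2)
        if err < best_err:
            best, best_err = (x, y), err

print("true source:", true_src, "estimate:", best)
```

Even in this crude sketch, four pixels with distinct responses pin down the source position; the real detector’s lead padding and trained model play the role of making those four responses as informative as possible.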

Successful field test

“Particularly noteworthy was our successful execution of a field-test at Lawrence Berkeley National Laboratory,” Li recalls. “Even when we withheld the precise source location, the machine learning algorithm could effectively localize it within real experimental data.”

Li’s team is now confident that its novel approach to detector design and data processing could be useful for radiation detection. “The adoption of Tetris-like configurations not only enhances accuracy but also minimizes complexity in detector setups,” Li says. “Moreover, our successful field-test underscores the real-world applicability of our approach, paving the way for enhanced safety and efficacy in radiation monitoring.”

Based on their success, the team hopes the detector design could soon be implemented for applications including the routine monitoring of nuclear reactors, the processing of radioactive material, and the safe storage of harmful radioactive waste.

The detector is described in Nature Communications.

The post Tetris-inspired radiation detector uses machine learning appeared first on Physics World.


What’s hot in particle and nuclear physics? Find out in the latest Physics World Briefing

8 May 2024 at 15:41
Cover of the 2024 Physics World Particle & Nuclear Briefing
Stay tuned The first Physics World Particle and Nuclear Briefing is out now.

From the Higgs boson at CERN to nuclear reactions inside stars, who doesn’t love particle and nuclear physics?

There’s so much exciting work going on in both fields, which is why we’re bringing you this new Physics World Particle & Nuclear Briefing.

The 30-page, free-to-read digital magazine contains the best of our recent coverage in the two areas, including – of course – plenty on CERN, which is celebrating its 70th anniversary this year.

In addition to former CERN science communicator Achintya Rao looking back at the famous day in 2012 when the lab announced the discovery of the Higgs boson, there’s an interview with Freya Blekman, who talks about the joy of a career in physics as part of the CMS experiment at the Large Hadron Collider.

You can also find out how CERN’s Quantum Technology Initiative is encouraging collaboration between the high-energy physics and quantum tech communities.

But it’s not all about CERN. Over in the US, there are in-depth interviews with Lia Merminga, the physicist who is the current director of the Fermi National Accelerator Laboratory, and with Mike Witherell, who heads the Lawrence Berkeley National Laboratory.

Looking to the future, we’ve included an analysis of the influential “P5” report into the future of US particle physics, which recently called for the construction of a muon collider. Physics World also talks to Ambrogio Fasoli – the new head of EUROfusion, who says that Europe must ramp up its efforts to build a demonstration fusion reactor.

And with our pick of the best recent news and research updates, the new Physics World Particle & Nuclear Briefing really is the place for you to start.

If that’s not enough, do keep checking our particle and nuclear channel on the Physics World website for regular updates in the two fields.

The post What’s hot in particle and nuclear physics? Find out in the latest Physics World Briefing appeared first on Physics World.


Radiation-transparent RF coil designed for MR guidance of particle therapy

By Tami Freeman
8 May 2024 at 10:50

Particle therapy is usually delivered using a large and costly gantry to change the angle of incidence of the therapeutic ion beam relative to the patient. If the patient were rotated instead, a simpler fixed-beam configuration could provide 360° access for the particle beam. During patient rotation, however, the changing direction of the gravitational force will deform and displace the tumour and surrounding organs in an unpredictable way. To ensure precise dose delivery to the tumour, such anatomical changes must be detected and compensated for during irradiation.

“Image guidance is absolutely necessary for particle therapy with patient rotation,” explains Kilian Dietrich from Heidelberg University Hospital and the German Cancer Research Center (DKFZ). “To exploit the main benefit of particle therapy – high dose escalation at the tumour with minimal dose to surrounding healthy tissue – prior knowledge of the tissue composition in the irradiation path is required.”

In conventional photon-based radiotherapy, MRI can be implemented in so-called MR-linacs, which offer the possibility to visualize changes in anatomy or patient position with high soft-tissue contrast. However, combining MRI with particle therapy including patient rotation remains a significant challenge.

Particle beams of protons, carbon ions or helium ions are extremely sensitive to non-homogeneous materials in the irradiation path, placing constraints on the MRI magnet and components. To address these limitations, Dietrich and colleagues are developing a radiation-transparent body coil to enable MR-guided particle therapy in combination with patient rotation, describing their work in Medical Physics.

Radiation transparency

One key obstacle when integrating MRI with particle therapy is the design of the radiofrequency (RF) coils used to flip the magnetization of the tissue and receive the generated MR signals. Conventional imaging coils contain highly attenuating electronic components that, if located in the beam path, will cause ion attenuation and scattering that alter the delivered dose distribution and reduce treatment efficacy.

To prevent such adverse effects, the team designed an RF coil with minimal ion attenuation, based on a cylindrical 16-rung birdcage configuration. This specific birdcage coil only has capacitors on the end rings, thereby avoiding attenuation and scattering in a large window in between. And since the birdcage functions both as a transmit and a receive coil, no additional RF coils are required. The design also allows easy integration into a capsule that enables rotation of the patient and the coil together, providing 360° access for a fixed ion beam source.

The researchers built the RF coil from a 35 µm-thick copper conductor embedded between layers of flexible polyimide and adhesive. The coil has an inner diameter of 53 cm and an axial length of 52 cm – providing a large enough field-of-view for full-body cross section imaging.

Measuring the Bragg peak shift caused by the entire RF coil confirmed its total water equivalent thickness (WET, a measure of ion attenuation) as 420 µm. This includes the polyimide and adhesive layers, which are homogeneous and can be compensated for with higher particle beam energy. The WET of the copper layer alone, which is inhomogeneous and cannot simply be compensated for, was approximately 210 µm. This is well within the clinical precision required for dose planning, which lies in the order of millimetres. As such, the team classifies the RF coil as radiation transparent.

Effective imaging

To characterize the imaging quality of their RF coil, the researchers imaged a homogeneous tissue-simulating phantom using a 1.5 T MR system. For the three central planes in the phantom, the transmit RF field distributions were homogeneous and resembled those of simulations and the MR system’s internal body coil. The measured transmit power efficiencies (between 0.17 and 0.26 µT/√W) were lower than the simulated values, but exceeded those of the internal body coil.

To examine the impact of coil rotation, they determined the mean transmit power efficiency in a central subvolume of the phantom for a full capsule rotation. Compared with the simulations, the measurements showed a slight dependence on rotation angle, with optimal transmit power efficiency at rotation angles close to 0° and 180°.

The RF coil also exhibited uniform signal acquisition in the three central phantom planes, with similar receive sensitivity profiles as observed in the simulations, both with the phantom in the horizontal position and when rotated by 30°. For a full rotation of the capsule, the measured receive sensitivity varied between 62% and 125%, decreasing at rotation angles between 15° and 120° and at 205°.

The signal-to-noise ratio (SNR) of the RF body coil showed a slight dependence on the rotation angle, ranging between 103 and 150. Overall, an increase of 10%–43% over the SNR of the internal body coil was achieved, indicating reasonable imaging quality for thoracic, abdominal and pelvic MRI.

To estimate the effect of realistic patient loading in the RF coil, the team also simulated a heterogeneous human voxel model, observing high transmit power efficiency and receive sensitivity for all rotation angles. The next step will be to perform in vivo measurements.

“The RF coil has not been tested in vivo yet since further tests are necessary before the whole setup can be tested,” Dietrich tells Physics World. “This includes patient acceptance for the rotation system as well as the time required to rescue the patient in times of emergency.”

The post Radiation-transparent RF coil designed for MR guidance of particle therapy appeared first on Physics World.


From pulsars and fast radio bursts to gravitational waves and beyond: a family quest for Maura McLaughlin and Duncan Lorimer

By No Author
7 May 2024 at 18:41

Most physicists dream of making new discoveries that expand what we know about the universe, but they know that such breakthroughs are extremely rare. It’s even more surprising for a scientist to make a great discovery with someone who is not just a colleague, but also their life partner. The best-known husband-and-wife couples in physics are the Curies, Marie and Pierre; as well as their daughter, Irène Joliot-Curie and her husband Frédéric Joliot-Curie. Each couple won a Nobel prize, in 1903 and 1935 respectively, for early work on radioactivity.

Joining the ranks of these pioneering physicists are contemporary married couple Maura McLaughlin and Duncan Lorimer, who last year were two of three laureates awarded the $1.2m Shaw Prize in Astronomy (see box below) for their breakthroughs in radio astronomy. Together with astrophysicist Matthew Bailes, director of the Australian Research Council Centre of Excellence for Gravitational Wave Discovery, McLaughlin and Lorimer won the prize for their 2007 discovery of fast radio bursts (FRBs) – powerful but short-lived pulses of radio waves from distant cosmological sources. Since their discovery, several thousand of these mysterious cosmic flashes, which last for milliseconds, have been spotted.

Over the years, McLaughlin and Lorimer’s journeys – through academia and their personal life – have been inherently entwined and yet distinctly discrete, as the duo developed careers in radio astronomy and astrophysics that began with pulsars, then included FRBs and now envelop gravitational waves. The couple have also advanced science education and grown astronomical research and teaching at their home base, West Virginia University (WVU) in the US. There, McLaughlin is Eberly Family distinguished professor of physics and astronomy, and chair of the Department of Physics and Astronomy, while Lorimer currently serves as associate dean for research in WVU’s Eberly College of Arts and Sciences.

The Shaw Prize

Photo of two people superimposed with artist impression of radio waves
Shaw laureates Astrophysicists Duncan Lorimer and Maura McLaughlin received the Shaw Prize in 2023 for their discovery of fast radio bursts. (Courtesy: WVU Photo/Raymond Thompson Jr)

The 2023 Shaw Prize in Astronomy, awarded jointly to Duncan Lorimer and Maura McLaughlin, and to their colleague Matthew Bailes, is part of the legacy of Sir Run Run Shaw (1907–2014), a successful Hong Kong-based film and television mogul. Known for his philanthropy, he gave away billions in Hong Kong dollars to support schools and universities, hospitals and charities in Hong Kong, China and elsewhere.

In 2002 he established the Shaw Prize to recognize “those persons who have achieved distinguished contributions in academic and scientific research or applications or have conferred the greatest benefit to mankind”. A gold medal and a certificate for each Shaw laureate, and a monetary award of $1.2m shared among the laureates, is given yearly in astronomy, life science and medicine, and mathematical sciences. Previous winners of the Shaw Prize in Astronomy include Ronald Drever, Kip Thorne and Rainer Weiss, for the first observation of gravitational waves with LIGO. They are among the 16 of the 106 Shaw laureates since 2004 who have also been awarded Nobel prizes.

Accidental cosmic probe

Radio astronomy, which led to much of McLaughlin and Lorimer’s work, was not initially a formal area of research. Instead, it began rather serendipitously in 1928, when Bell Labs radio engineer Karl Jansky was trying to find the possible sources of static at 20.5 MHz that were disrupting the new transatlantic radio telephone service. Among the types of static that he detected was a constant “hiss” from an unknown source that he finally tracked down to the centre of the Milky Way galaxy, using a steerable antenna 30 m in length. His 1933 paper “Electrical disturbances apparently of extraterrestrial origin” received considerable media attention but little notice from the astronomy establishment of the time (see “Radio astronomy: from amateur roots to worldwide groups” by Emma Chapman).

Radio astronomy truly flourished after the Second World War, with new purpose-built facilities. An early example from 1957 was the steerable 76 m dish antenna built by Bernard Lovell and colleagues at Jodrell Bank in the UK – where McLaughlin and Lorimer would later work. Other researchers who led the way include the Nobel-prize-winning astronomer Sir Martin Ryle, who pioneered radio interferometry and developed aperture synthesis; as well as Australian electrical engineer Bernard Mills, who designed and built radio interferometers.

Extraterrestrial radio signals soon yielded important science. In 1951 researchers detected a predicted emission from neutral hydrogen at 1.4 GHz – a fingerprint of this fundamental atom. In 1964 Arno Penzias and Robert Wilson (also based at Bell Labs) inadvertently found a 4.2 GHz signal across the whole sky, while testing orbiting telecom satellites – thereby discovering the cosmic background radiation. And in 1968 another spectacular discovery shaped McLaughlin and Lorimer’s careers, when University of Cambridge graduate student Jocelyn Bell Burnell and her PhD supervisor Antony Hewish announced the observation of an unusual radio signal from space – a pulse that arrived every 1.3 seconds. That signal was the first to come from what were soon called “pulsars”. Hewish would go on to share the 1974 Nobel Prize for Physics for the discovery – while Bell Burnell was infamously left out, supposedly due to her then student status.

As more pulsars were found with varied periods and in different directions of the sky, it became clear that the signals were not being sent by an alien civilization as some researchers had speculated – after all, the chances of an extraterrestrial civilization sending many signals of varying periods, or different civilizations sending out different periodic signals, was slim. One clue was that the pulses were short and coherent, so they had to come from sources smaller than the distance light could travel during the pulse’s lifetime – for instance, the source of a 5 ms pulse could be at a maximum of 1500 km.
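The 1500 km figure quoted above is simple light-travel-time arithmetic, as a quick check shows:

```python
# Quick check of the causality bound quoted above: a coherent pulse of
# duration t must come from a region smaller than the distance light
# travels in that time.
c_km_per_s = 2.998e5      # speed of light in km/s
pulse_duration_s = 5e-3   # a 5 ms pulse
max_size_km = c_km_per_s * pulse_duration_s
print(f"maximum source size: {max_size_km:.0f} km")  # ~1500 km
```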

As it happened, the signals were our first look at neutron stars – small, extremely dense and rapidly rotating remnants of massive stars after they have gone supernova and had their protons and electrons squeezed into neutrons by gravity’s implacable power. As the star rotates, its strong off-axis magnetic field produces beams of electromagnetic radiation from the magnetic poles. These beams create regular pulses as they sweep past a detector on a direct line of sight. Pulsars are mostly studied at radio frequencies, but they also radiate at other, higher frequencies.

Pulsars to fast bursts

Lorimer and McLaughlin began their careers by studying these exotic stellar objects, but each of them had already been captivated by astronomy and astrophysics as teenagers. Lorimer was born in Darlington, UK. After studying astrophysics as an undergraduate at the University of Wales in Cardiff, he moved to the University of Manchester in 1994, where his PhD research focused on analysing classes of radio pulsars with different periods.

McLaughlin was born in Philadelphia, Pennsylvania, and first studied pulsars as an undergraduate student at Penn State. Her PhD dissertation at Cornell University in 2001 covered pulsars that variously emitted radio waves, X-rays or gamma rays. By 1995 Lorimer was working as a researcher at the Max Planck Institute for Radio Astronomy in Bonn, Germany, and he met McLaughlin in 1998 while working at the Arecibo Observatory in Puerto Rico. In 2001 the pair moved to the UK to work at the Jodrell Bank Observatory.

It was an interesting and exciting time in the pulsar research community, with new pulsars found by computerized Fourier transform analysis that detected the telltale periodicities in vast amounts of observational data. But radio astronomers also sometimes saw transient signals, and McLaughlin had written computer code designed to find single bright pulses. This led to the 2006 discovery of a new class of pulsars dubbed rotating radio transients (RRATS, an acronym recalling a pet rat McLaughlin once had). These stars could be detected only through their sporadic millisecond-long bursts, unlike most pulsars, which were found through their periodic emissions. The discovery in turn initiated further searches for transient pulses (Nature 439 817).

The following year, Lorimer and McLaughlin, now a married couple, joined WVU’s department of physics and astronomy as assistant professors. To uncover more distant and bright pulsars, Lorimer gave his graduate student Ash Narkevic the task of looking through archival observational data that the Parkes radio telescope in Australia had taken of the Large and Small Magellanic Clouds – two small galaxies that are satellites to our very own Milky Way, roughly 200,000 light-years away from Earth – of which the Large was already known to host 14 pulsars.

Narkevic examined the data and found a single strong burst – nearly 100 times stronger than the background – at 1.4 GHz with a 5 ms duration. But the burst seemed to come from the Small Magellanic Cloud, where there were five known pulsars at that time. Even more surprising was the fact that this extremely bright burst did not all arrive at the same time. Known as pulse or frequency dispersion, this occurs because radio waves travelling through interstellar space interact with free electrons: higher-frequency waves travel through the free-electron plasma more quickly than lower-frequency ones, and so arrive earlier at our telescopes.

This dispersion depends on the total number of electrons (or the column density) along the path. The further away the source of the burst, the more likely it is that the waves will encounter even more electrons on their path to Earth, and so the lag between the high- and low-frequency waves is greater. The pulse Narkevic spotted was so distorted by the time it reached Earth that it suggested the source was almost three billion light-years away – well beyond our local galactic neighbourhood. This also meant that the source must be significantly smaller than the Sun, and more on par with the proposed size of pulsars, while also somehow being 10¹² times more luminous than a typical pulsar.
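This delay follows the standard cold-plasma dispersion law, which scales with the electron column density (the “dispersion measure”, DM) and the inverse square of the observing frequency. The sketch below uses the standard dispersion constant of about 4.149 ms (for DM in pc cm⁻³ and frequencies in GHz); the DM and band edges are illustrative values, not the measured parameters of the Lorimer burst.

```python
# Standard cold-plasma dispersion delay: the low-frequency edge of a
# band lags the high-frequency edge by an amount proportional to the
# dispersion measure DM (electron column density, in pc cm^-3).
# The 4.149 ms constant is the usual dispersion constant for
# frequencies in GHz; the DM and band edges below are illustrative.

def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival lag (ms) of f_lo relative to f_hi for a given DM."""
    return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# Example: a burst with DM = 375 observed across a 1.2-1.5 GHz band
print(f"{dispersion_delay_ms(375, 1.2, 1.5):.0f} ms")  # ~389 ms
```

Measuring this frequency-dependent lag is how astronomers infer the column density of electrons a pulse has crossed, and hence estimate its distance.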

1 The first burst

Photo of two men holding a sheaf of paper and a graph of radio data showing a clear black line
Courtesy: Duncan Lorimer; Lorimer et al., NRAO/AUI/NSF

(Top) Duncan Lorimer (left) and Ash Narkevic in 2008 with the paper they published in Science about their observation of a fast radio burst (bottom).

The report of this seemingly new phenomenon – a single extremely energetic event at an enormous cosmological distance – was published in Science later that year, after being initially rejected (Science 318 777). This first detected fast radio burst came to be known as the “Lorimer burst” (figure 1). After several years and significant further work, Lorimer, McLaughlin, Bailes and others found first four, and then tens of, similar bursts. This launched a new class of cosmological phenomena that now includes more than 1000 FRBs, fulfilling the 2007 prediction that they would serve as cosmological probes.

Because FRBs have been found in galaxies beyond our own across the sky, they serve as a probe of the intergalactic medium, allowing astrophysicists to measure the density of the material that lies between Earth and the host galaxy (Nature 581 391). By measuring the distance to the source of an FRB, and then looking at the dispersion of its pulse as a function of wavelength, astronomers can determine the density of the matter the pulse passed through, thereby yielding a value for the baryonic density of our universe. This is otherwise extremely difficult to measure, because of how diffuse this matter is in the observable universe. FRBs have also provided an independent measurement of the Hubble constant, the exact value of which has lately come under new scrutiny (MNRAS 511 662).

Detecting a gravitational-wave background

While Lorimer is still working on pulsars and FRBs, McLaughlin has now moved into another area of pulsar astronomy. That’s because for almost two decades, she has been a researcher in and co-director of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) Physics Frontier Center, which uses pulsars to detect low-frequency gravitational waves with periods of years to decades. One of its facilities is the steerable 100 m Green Bank Telescope about 150 km south of WVU.

“We are observing an array of pulsars distributed across the sky,” says McLaughlin. “These are 70 millisecond pulsars, so very rapidly rotating. We search for very small deviations in the arrival times of the pulsars that we can’t explain with a timing model that accounts for all the known astrophysical delays.” General relativity predicts that certain deviations in the timing would depend on the relative orientation of pairs of pulsars, so seeing this special angular correlation in the timing would be a clear sign of gravitational waves.
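The special angular correlation McLaughlin describes is the so-called Hellings–Downs curve. A minimal sketch of one standard form of this prediction (normalization conventions vary between papers) shows its characteristic shape:

```python
import math

# The angular correlation in timing residuals that general relativity
# predicts for a pulsar pair separated by angle theta on the sky --
# the "Hellings-Downs curve" (one common normalization convention).
def hellings_downs(theta_rad):
    x = (1 - math.cos(theta_rad)) / 2
    if x == 0:  # pulsars in (nearly) the same sky direction
        return 0.5
    return 1.5 * x * math.log(x) - x / 4 + 0.5

for deg in (0, 45, 90, 135, 180):
    print(f"{deg:3d} deg: {hellings_downs(math.radians(deg)):+.3f}")
```

The correlation is strongest for nearby pairs, dips below zero at intermediate separations and partially recovers for pulsars on opposite sides of the sky; finding this pattern across many pulsar pairs is the tell-tale sign of a gravitational-wave background.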

2 Gravitational-wave spectrum

Two figures: a globe covered in coloured symbols and a chart
Courtesy: NANOGrav Collaboration

(a) The NANOGrav 15-year data set contains timing observations from 68 pulsars using the Arecibo Observatory, the Green Bank Telescope and the Very Large Array. The map shows pulsar locations in equatorial co-ordinates. (b) The background comes from correlating changes in pulsar arrival times between all possible pairs of the 67 pulsars (2211 distinct pairs in total), and is based on three or more years of timing data. The black line is the expected correlation predicted by general relativity. These calculations assume the gravitational-wave background is from inspiralling supermassive black-hole binaries.

In June 2023 the NANOGrav collaboration published an analysis of 15 years of its data (figure 2), looking at 68 pulsars with millisecond periods, which showed this signature for the first time (ApJL 951 L8). McLaughlin says that it represents not just one source of gravitational waves, but a background arising from all gravitational events such as merging supermassive black holes at the hearts of galaxies. This background may contain information about how galaxies interact and perhaps also the early universe. Five years from now, she predicts, NANOGrav will be detecting individual supermassive black-hole binaries and will tag their locations in specific galaxies, to form a black hole atlas.

Star-crossed astronomers

The connections between McLaughlin and Lorimer that played a role in their academic achievements began rather fittingly with an interaction in 1999, at the Arecibo radio telescope in Puerto Rico (now sadly decommissioned). Lorimer was based there at the time, while McLaughlin was a visiting graduate student, and their contact, though not in person, was definitely not cordial. Lorimer sent what he calls a “little snippy e-mail” to McLaughlin about her use of the computer that blocked his own access, which she also recalls as “pretty grumpy”.

Two photos: a woman stood on a telescope gantry and a man in a control room
Near miss Maura McLaughlin and Duncan Lorimer both worked at Arecibo Observatory in Puerto Rico in 1999, but they didn’t meet in person during that time. (Courtesy: Maura McLaughlin and Duncan Lorimer)

But things improved after they later met in person, and they joined the Jodrell Bank Observatory in the UK. The pair married in 2003 and now have three sons. Over the years, they moved together to the US, set up their own astronomy group at WVU by 2006, and proceeded to work together and alongside each other, publishing many research papers, both joint and separate.

Given all these successes, how do the two researchers balance science and family, especially when they first arrived at WVU with a five-month-old baby to join a department with just one astronomer and no graduate astronomy programme? McLaughlin says it was “Really hard work. Lots of grant writing, developing courses,” but adds that it was also “really fun because we were both building a programme and building a family and moving to a new place”.

Life got even busier in 2007, when another child and the FRB discovery both arrived. The couple says that it was all doable because they fully understood the need to shift scientific or family responsibilities to each other as necessary. According to McLaughlin, this includes equal parenting from her husband, for which she feels “very lucky”. As Lorimer puts it, “We get each other’s mindset.”

However, the fact that they are married may have coloured perceptions of their work and status. “When we first started here at WVU,” Lorimer explains, “a lot of people assumed we were sharing a single position. But the university’s been great. It’s always made it clear from the get-go that we’re obviously on different career trajectories.” And they agree that as they’ve progressed in their individual careers and are known for different things, they’re now unmistakably seen as two distinct scientists.

Three photos of the same couple: their wedding; riding a tandem bike; and posing with a dog
Shared wavelength Maura McLaughlin and Duncan Lorimer married in 2003 (top). They credit their ability to both have successful careers to sharing and shifting family responsibilities as needed, as well as taking their initially similar career paths on different trajectories. (Courtesy: Maura McLaughlin and Duncan Lorimer)

Beyond the Shaw Prize

The Shaw Prize came as a total surprise to the couple. The pair both received e-mails simultaneously one evening, but Lorimer spotted his first. “We almost missed it as it was just about time to go to bed and the announcement was being made in Hong Kong a few hours after that,” says Lorimer. McLaughlin recalls her husband screaming and excitedly running up the stairs to give her the news. “He doesn’t scream much to begin with, maybe only when the dogs do something bad, and I’m wondering ‘Why is he screaming late on a Sunday night?’ He told me to pull up the e-mail and I thought it was a prank. I read it again and realized it was real. That was quite a Sunday night.” Amusingly, the e-mail for their co-winner Matthew Bailes initially went into his spam folder. The trio would later describe their work in a Shaw Prize Lecture in Hong Kong in November 2023.

So what comes next for the stellar pair? Further research into the different types of FRBs that are still being found, using new telescopes and detection schemes. One new project, an extension of Lorimer’s earlier work in pulsar populations, is to locate FRBs in specific galaxies and among groups of both younger and older stars using the Green Bank telescope in West Virginia, along with others, to help uncover what causes them. FRBs may come from neutron stars with especially huge magnetic fields – dubbed magnetars – but this remains to be seen.

Data from Green Bank is also used in the Pulsar Science Collaboratory, co-founded by McLaughlin and Lorimer (see box below). Meanwhile, the NANOGrav pulsar observation of the gravitational wave background, where McLaughlin continues her long-time involvement, has been hailed by the LIGO Collaboration for opening up the spectrum in the exciting new era of gravitational-wave astronomy and cosmology.

The Pulsar Science Collaboratory

Photo of two high-schoolers and a woman looking at data on a computer screen
Engaging science Participants in the Pulsar Science Collaboratory, at the Green Bank Telescope control room. (Courtesy: NSF/AUI/GBO)

The Pulsar Science Collaboratory (PSC) was founded in 2007 by Maura McLaughlin, Duncan Lorimer and Sue Ann Heatherly at the Green Bank Observatory, with support from the US National Science Foundation. It is an educational project in which, to date, more than 2000 high-school students have been involved in the search for new pulsars.

Students are trained via a six-week online course and then must pass a certification test to use an online interface to access terabytes of pulsar data from the Green Bank Observatory. They are also invited to a summer workshop at the observatory. McLaughlin and Lorimer proudly note the seven new pulsars that high-school students have so far discovered. Many of these students have continued as college undergraduates or even graduate students working on pulsar and fast-radio-burst science.

At the end of the Shaw Prize Lecture, Lorimer pointed out that there is “still much left to explore”. In an interview for the press, McLaughlin said “We’ve really just started.” Both statements seem fair predictions for anything each one does in their areas of interest in the future – surely with hard work but also with the continuing sense that it’s “really fun”.

The post From pulsars and fast radio bursts to gravitational waves and beyond: a family quest for Maura McLaughlin and Duncan Lorimer appeared first on Physics World.


Australia raises eyebrows by splashing A$1bn into US quantum-computing start-up PsiQuantum

By No Author
7 May 2024 at 17:16

The Australian government has controversially announced it will provide A$940m (£500m) for the US-based quantum start-up PsiQuantum. The investment, which comes from the country’s National Quantum Strategy budget, makes PsiQuantum the world’s most funded independent quantum company.

Founded in 2015 by five physicists who were based in the UK, PsiQuantum aims to build a large-scale quantum computer by 2029 using photons as quantum bits (or qubits). As photonic technology is silicon-based, it benefits from advances in large-scale chip-making fabrication and does not need as much cryogenic cooling as other qubit platforms require.

The company has already reported successful on-chip generation and detection of single-photon qubits, but the approach is not all plain sailing. In particular, optical losses still need to be reduced to sufficient levels, while detection must become more efficient to improve the quality (or fidelity) of the qubits.
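To see why optical loss is so punishing for photonic quantum computing, consider a toy model (the numbers and function here are invented for illustration, not taken from PsiQuantum's architecture): if each photon survives each optical component with probability p, a computation involving many photons and many components succeeds only when every photon survives every component.

```python
# Toy model: probability that all photons survive all optical components.
# Purely illustrative numbers, not a description of any real architecture.
def survival(p_component, n_photons, m_components):
    """Probability that n photons each traverse m components without loss."""
    return p_component ** (n_photons * m_components)

# Even 1% loss per component is crippling at scale:
s = survival(0.99, 10, 50)
print(f"{s:.4f}")  # well under 1% overall success probability
```

This exponential scaling is why reducing per-component loss, rather than simply adding more hardware, is the key engineering battle for photonic qubits.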

Despite these challenges, PsiQuantum has already attracted several supporters. In 2021 private investors gave the firm $665m and in 2022 the US government provided $25m to both GlobalFoundries and PsiQuantum to develop and build photonic components.

The money from the Australian government comes mostly via equity-based investment as well as grants and loans. The amount represents half of the budget that was allocated by the government last year to boost Australia’s quantum industry over a seven-year period until 2030.

The cash comes with some conditions, notably that PsiQuantum should build its regional headquarters in the Queensland capital Brisbane and operate the planned quantum computer from there. Anthony Albanese, Australia’s prime minister, claims the move will create up to 400 highly skilled jobs, boosting Australia’s tech sector.

A bold declaration

Stephen Bartlett, a quantum physicist from the University of Sydney, welcomes the news. He adds that the scale of the investment “is required to be on par” with companies such as Google, Microsoft, AWS, and IBM that are investing similar amounts into their quantum computer programmes.

Ekaterina Almasque, general partner at the venture capital firm OpenOcean, says that the investment may bring further benefits to Australia. “The [move] is a bold declaration that quantum will be at the heart of Australia’s national tech strategy, firing the starting gun in the next leg of the race for quantum [advantage],” she says. “This will ripple across the venture capital landscape, as government funding provides a major validation of the sector and reduces the risk profile for other investors.”

Open questions

The news, however, did not please everyone. Paul Fletcher, science spokesperson for Australia’s opposition Liberal/National party coalition, criticises the selection process. He says it was “highly questionable” and failed to meet normal standards of transparency and contestability.

“There was no public transparent expression of interest process to call for applications. A small number of companies were invited to participate, but they were required to sign non-disclosure agreements,” says Fletcher. “And the terms made it look like this had all been written so that PsiQuantum was going to be the winner.”

Fletcher adds that it is “particularly troubling” that the Australian government “has chosen to allocate a large amount of funding to a foreign-based quantum-computing company” rather than home-grown firms. “It would be a tragedy if this decision ends up making it more difficult for Australian-based quantum companies to compete for global investment because of a perception that their own government doesn’t believe in them,” he states.

Kees Eijkel, director of business development at the quantum institute QuTech in the Netherlands, adds that it is still an open question which technology will “win” the race to a full-scale quantum computer, given the “huge potential” in the scalability of other qubit platforms.

Indeed, quantum physicist Chao-Yang Lu from the University of Science and Technology of China took to X to note that there is “no technologically feasible pathway to the fault-tolerant quantum computers PsiQuantum promised”, adding that there are many “formidable” challenges.

Lu points out that PsiQuantum originally claimed it would have a working quantum computer by 2020, a date that was later pushed back to 2025. He says that the date now slipping to 2029 “is [in] itself worrying”.

The post Australia raises eyebrows by splashing A$1bn into US quantum-computing start-up PsiQuantum appeared first on Physics World.


Dark-field X-ray imaging reveals potential of nanoparticle-delivered gene therapy

By: Tami Freeman
7 May 2024 at 10:30

Cystic fibrosis is a genetic disorder in which defects in the CFTR protein (arising from mutations in the CFTR gene) can cause life-threatening symptoms in multiple organs. In the respiratory system, cystic fibrosis dehydrates the airway and produces sticky mucus in the lungs, leading to breathing problems and increasing the risk of lung infections.

One proposed treatment for cystic fibrosis is gene therapy, in which a viral vector delivers a healthy copy of the CFTR gene into airway cells to produce functional CFTR protein. To transport this vector to target cells and keep it there long enough to interact with them – key challenges for all gene therapies – researchers have coupled the vector to magnetic nanoparticles, which should allow controlled delivery to the airways using an external magnetic field.

Researchers at the University of Adelaide are now tackling another pressing challenge for successful gene therapy – visualizing the magnetic nanoparticles within live airways and manipulating them in vivo. To achieve this, they explored the use of dark-field X-ray imaging to enhance nanoparticle contrast and understand how magnetic nanoparticles move within the airway of a live rat, reporting their findings in Physics in Medicine & Biology.

While conventional X-ray imaging relies on the absorption of X-rays, dark-field X-ray imaging detects small-angle scattering from microstructures within a sample. To perform dark-field imaging, the researchers used a 25.0 keV monochromatic beam at the SPring-8 Synchrotron in Japan. They placed a phase grid into the beam upstream of the sample, creating a pattern of beamlets at the detector. These beamlets diffuse as they scatter through the sample, and the dark-field signal can be extracted from the strength of this blurring at the detector.

University of Adelaide researchers
Research team From left to right: Martin Donnelley, Kaye Morgan, David Parsons, Ronan Smith and Alexandra McCarron during their visit to Japan to use the SPring-8 Synchrotron. (Courtesy: Martin Donnelley)

“My group previously used high-resolution phase-contrast X-ray imaging for imaging nanoparticle delivery, and we were at the synchrotron when we realised the images weren’t showing the full picture,” first author Ronan Smith tells Physics World. “I developed new methods for directional dark-field imaging during my PhD, so we thought we’d see if that could help.”

Imaging nanoparticle delivery

The researchers first examined the delivery of superparamagnetic nanoparticles to an anaesthetized rat, positioned with the synchrotron beam passing through its trachea at 45°. Imaging a living animal inevitably creates background signals from the surrounding anatomy. To suppress this background during nanoparticle delivery, the team employed a novel approach based on analysing the components of the directional dark-field signal.

A suspension of nanoparticles should scatter X-rays isotropically, and the major and minor scattering components of the directional dark-field signal should be equal. Asymmetric structures such as tissue, skin and hair, however, will scatter anisotropically, with most of the signal seen in the major component. By examining just the minor component, the team could enhance the contrast of the nanoparticle signal above the background.

“The directional dark-field retrieval approach was key in isolating the isotropic dark-field signal, generated by nanoparticles entering the airways, from the overlying directional dark-field signal generated by the surrounding anatomy,” Smith explains. “No one has taken this approach before as far as I know.”
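The separation of isotropic and anisotropic scatterers can be illustrated with a toy numerical model (the cosine-squared signal model and all numbers below are invented for illustration and are not from the study):

```python
import numpy as np

# Toy model of a directional dark-field signal at one pixel:
# D(theta) = iso + aniso * cos^2(theta - phi), sampled over grating angles.
# The minor component is the minimum over theta (~iso), the major the maximum.
def minor_major(iso, aniso, phi, n_angles=180):
    theta = np.linspace(0, np.pi, n_angles, endpoint=False)
    d = iso + aniso * np.cos(theta - phi) ** 2
    return d.min(), d.max()

# "Nanoparticle" pixel: isotropic scattering only
particle_minor, particle_major = minor_major(iso=1.0, aniso=0.0, phi=0.0)

# "Tissue" pixel: strongly anisotropic scattering
tissue_minor, tissue_major = minor_major(iso=0.05, aniso=2.0, phi=0.7)

# The minor-component image suppresses the anisotropic background,
# boosting nanoparticle contrast relative to the full dark-field signal.
contrast_full = particle_major / tissue_major
contrast_minor = particle_minor / tissue_minor
print(contrast_full, contrast_minor)
```

In this sketch the isotropic "nanoparticle" pixel keeps its full signal in the minor component, while the anisotropic "tissue" pixel almost vanishes there, which is the essence of the contrast-enhancement trick described above.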

Smith and colleagues delivered the nanoparticles into the rat’s trachea over 25 s, capturing 180 frames during this time, guided by the animal’s breathing. Initially, a diagonal line appeared in both the X-ray transmission and dark-field images, showing the nanoparticles starting to flow from the delivery tube into the trachea. At 22.91 s, the minor dark-field signal revealed a noticeable feature in the lower half of the tube, which became gradually clearer before being pushed out by an air bubble at the end of the delivery. The dark-field signal captured this event with 3.5 times higher signal-to-noise ratio than the transmission signal.

Directional dark-field X-ray imaging
Nanoparticle imaging Transmission (a), directional dark-field (b), and major (c) and minor (d) components of the dark-field images. (Courtesy: CC BY 4.0/Phys. Med. Biol. 10.1088/1361-6560/ad40f5)

Imaging the delivery process revealed that the nanoparticles unexpectedly settled inside the delivery tube, with many only reaching the trachea during the last 10% of the delivery. The researchers note that this could lead to suboptimal cellular uptake of viral vectors being delivered by nanoparticles, adding that this process could not have been observed without dark-field imaging.

Rotating nanoparticle strings

Next, the team exposed the rat to a 1.17 T magnet, which caused the nanoparticles to form into string-like structures, and rotated the magnet around its trachea. With the magnet above the rat, transmission images showed that the strings were aligned vertically. As the magnet moved, the strings remained aligned to the magnetic field, suggesting that dynamic magnetic fields could indeed manipulate nanoparticles in situ.

With the magnet alongside the rat (partially aligning the strings along the beam axis), the strings also produced a directional dark-field signal. However, this signal was not clearly visible when the particles were aligned vertically, likely due to the beam passing through fewer nanoparticles in this position.

Smith says that the biologists in his group are now using these imaging results to enhance their work on airway gene therapy. “It’s a cyclic development process, so we have more synchrotron experiments planned to answer the questions that their results give, using a mixture of phase-contrast and directional dark-field imaging,” he explains. “We are also looking at other respiratory applications of dark-field imaging.”

The post Dark-field X-ray imaging reveals potential of nanoparticle-delivered gene therapy appeared first on Physics World.


Sound and light waves combine to create advanced optical neural networks

By: No Author
6 May 2024 at 14:00

One of the things that sets humans apart from machines is our ability to process the context of a situation and make intelligent decisions based on internal analysis and learned experiences.

Recent years have seen the development of new “smart” and artificially “intelligent” machine systems. While these have a form of intelligence based on analysing data and predicting outcomes, many struggle to contextualize information and tend to produce generic outputs that may or may not fit the situation at hand.

Whether we want to build machines that can make informed contextual decisions like humans can is an ethical debate for another day, but it turns out that neural networks can be equipped with recurrent feedback that allows them to process current inputs based on information from previous inputs. These so-called recurrent neural networks (RNNs) can contextualize, recognise and predict sequences of information (such as time signals and language) and have been used for numerous tasks including language, video and image processing.
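The defining feature of an RNN, in its most minimal form, is that the output at each step depends on a hidden state carried over from previous inputs. The one-dimensional toy below is purely illustrative and bears no relation to the OREO's actual dynamics:

```python
import math

# Minimal recurrent step: output at step t depends on the current input
# AND a hidden state carried over from earlier inputs. Toy 1-D example.
def rnn_step(x_t, h_prev, w_in=0.5, w_rec=0.9):
    return math.tanh(w_in * x_t + w_rec * h_prev)

h = 0.0
for x in [1.0, 0.0, 0.0]:  # a single pulse, then silence
    h = rnn_step(x, h)
print(round(h, 3))  # the pulse still influences the state two steps later
```

It is exactly this "memory of past inputs" that the optoacoustic scheme below implements physically, with slow-moving sound waves playing the role of the hidden state.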

There’s now a lot of interest in transferring electronic neural networks into the optical domain, creating optical neural networks that can process large data volumes at high speeds with high energy efficiency. But while there’s been much progress in general optical neural networks, work on recurrent optical neural networks is still limited.

New optoelectronics required

Development of recurrent optical neural networks will require new optoelectronic devices with a short-term memory that’s programmable, computes optical inputs, minimizes noise and is scalable. In a recent study led by Birgit Stiller at the Max Planck Institute for the Science of Light, researchers demonstrated an optoacoustic recurrent operator (OREO) that meets these demands.

optoacoustic recurrent operator concept
OREO concept Information in an optical pulse is partially converted into an initial acoustic wave, which affects the second and third light–sound processing steps. (Courtesy: Stiller Research Group, MPL)

The acoustic waves in the OREO link subsequent optical pulses and capture the information within, using it to manipulate the next operations. The OREO is based on stimulated Brillouin–Mandelstam scattering, an interaction between optical waves and travelling sound waves that exploits the slow acoustic velocity to add latency. This process enables the OREO to contextualize a time-encoded stream of information using sound waves as a form of memory, which could be used not only to remember previous operations but as a basis to manipulate the output of the current operation – much like in electronic RNNs.

“I am very enthusiastic about the generation of sound waves by light waves and the manipulation of light by the means of acoustic waves,” says Stiller. “The fact that sound waves can create fabrication-less temporary structures that can be seen by light and can manipulate light in a hair-thin optical fibre is fascinating to me. Building a smart neural network based on this interaction of optical and acoustic waves motivated me to embark on this new research direction.”

Designed to function in any optical waveguide, including on-chip devices, the OREO controls the recurrent operation entirely optically. In contrast to previous approaches, it does not need an artificial reservoir that requires complex manufacturing processes. The all-optical control is performed on a pulse-by-pulse basis and offers a high degree of reconfigurability that can be used to implement a recurrent dropout (a technique used to prevent overfitting in neural networks) and perform pattern recognition of up to 27 different optical pulse patterns.

“We demonstrated for the first time that we can create sound waves via light for the purposes of optical neural networks,” Stiller tells Physics World. “It is a proof of concept of a new physical computation architecture based on the interaction and reciprocal creation of optical and acoustic waves in optical fibres. These sound waves are, for example, able to connect several subsequent photonic computation steps with each other, so they give a current calculation access to past knowledge.”

Looking to the future

The researchers conclude that they have, for the first time, combined the field of travelling acoustic waves with artificial neural networks, creating the first optoacoustic recurrent operator that connects information carried by subsequent optical data pulses.

These developments pave the way towards more intelligent optical neural networks that could be used to build a new range of computing architectures. While this research has brought an intelligent context to the optical neural networks, it could be further developed to create fundamental building blocks such as nonlinear activation functions and other optoacoustic operators.

“This demonstration is only the first step into a novel type of physical computation architecture based on combining light with travelling sound waves,” says Stiller. “We are looking into upscaling our proof of concepts, working on other light–sound building blocks and aiming to realise a larger optical processing structure mastered by acoustic waves.”

The research is published in Nature Communications.

The post Sound and light waves combine to create advanced optical neural networks appeared first on Physics World.


Ship-based atomic clock passes precision milestone

6 May 2024 at 10:30

A new ultra-precise atomic clock outperforms existing microwave clocks in time-keeping and sturdiness under real-world conditions. The clock, made by a team of researchers from the California, US-based engineering firm Vector Atomic, exploits the precise frequencies of atomic transitions in iodine molecules and recently passed a three-week trial aboard a ship sailing around Hawaii.

Atomic clocks are the world’s most precise timekeeping devices, and they are essential to staples of modern life such as global positioning systems, telecommunications and data centres. The most common types of atomic clock used in these real-world applications were developed in the 1960s, and they work by measuring the frequency at which atoms oscillate between two energy states. They are often based on caesium atoms, which absorb and emit radiation at microwave frequencies as they oscillate, and the best of them are precise to within one second in six million years.

Clocks that absorb and emit at higher, visible, frequencies are even more precise, with timing errors of less than 1 second in 30 billion years. These optical atomic clocks are, however, much bulkier than their microwave counterparts, and their sensitivity to disturbances in their surroundings means they only work properly under well-controlled conditions.

Prototypes based on iodine

The Vector Atomic work, which the team describe in Nature, represents a step towards overturning these limitations. Led by Vector Atomic co-founder and study co-author Jamil Abo-Shaeer, the team developed three robust optical clock prototypes based on transitions in iodine molecules (I2). These transitions occur at wavelengths conveniently near those of routinely employed commercial frequency-doubled lasers, and the iodine itself is confined in a vapour cell, doing away with the need to cool atoms to extremely cold temperatures or keep them in an ultrahigh vacuum. With a volume of around 30 litres, the clocks are also compact enough to fit on a tabletop.

While the precision of these prototype optical clocks lags behind that of the best lab-based versions, it is still 1000 times better than clocks of a similar size that ships currently use, says Abo-Shaeer. The prototype clocks are also 100 times more precise than existing microwave clocks of the same size.

Sea trials

The researchers tested their clocks aboard a Royal New Zealand Navy ship, HMNZS Aotearoa, during a three-week voyage around Hawaii. They found that the clocks performed almost as well as in the laboratory, despite the completely different conditions. Indeed, two of the larger devices recorded errors of less than 400 picoseconds (400 × 10⁻¹² s) over 24 hours.
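For a sense of scale, the timing figures quoted in this article can be converted into dimensionless fractional errors. This is back-of-the-envelope arithmetic on the quoted numbers only, not figures from the paper:

```python
# Rough conversion of "seconds lost per interval" into fractional error.
# All inputs are the figures quoted in the article; purely illustrative.
YEAR = 365.25 * 24 * 3600  # seconds in a Julian year

def fractional_error(error_s, interval_s):
    return error_s / interval_s

caesium = fractional_error(1.0, 6e6 * YEAR)       # 1 s in 6 million years
lab_optical = fractional_error(1.0, 30e9 * YEAR)  # 1 s in 30 billion years
sea_trial = fractional_error(400e-12, 24 * 3600)  # 400 ps over 24 hours

print(f"{caesium:.1e} {lab_optical:.1e} {sea_trial:.1e}")
```

On this crude measure, the sea-trial error over a day is comparable to the long-term fractional error of a good microwave clock, while the best lab-based optical clocks sit three or more orders of magnitude lower.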

The team describe the prototypes as a “key building block” for upgrading the world’s timekeeping networks from the nanosecond to the picosecond regime. According to team member Jonathan Roslund, the goal is to build the world’s first fully integrated optical atomic clock with the same “form factor” as a microwave clock, and then demonstrate that it outperforms microwave clocks under real-world conditions.

“Iodine optical clocks are certainly not new,” he tells Physics World. “In fact, one of the very first optical clocks utilized iodine, but researchers moved onto more exotic atoms with better timekeeping properties. Iodine does have a number of attractive properties, however, for making a compact and simple portable optical clock.”

The most finicky parts of any atomic-clock system, Roslund explains, are the lasers, but iodine can rely on industrial-grade lasers operating at both 1064 nm and 1550 nm. “The vapour cell architecture we employ also uses no consumables and requires neither laser cooling nor a pre-stabilization cavity,” Roslund adds.

The next generation

After testing their first-generation clocks on HMNZS Aotearoa, the researchers developed a second-generation device that is 2.5 times more precise. With a volume of just 30 litres including the power supply and computer control, the upgraded version is now a commercial product called Evergreen-30. “We are also hard at work on a 5-litre version targeting the same performance, and an ultracompact 1-litre version,” Roslund reveals.

As well as travelling aboard ships, Roslund says these smaller clocks could have applications in airborne and space-based systems. They might also make a scientific impact: “We have just finished an exciting demonstration in collaboration with the University of Arizona, in which our Evergreen-30 clocks served as the timebase for a radio observatory in the Event Horizon Telescope Array, which is imaging distant supermassive black holes.”

The post Ship-based atomic clock passes precision milestone appeared first on Physics World.


Superfluid helium: the quantum curiosity that enables huge physics experiments

6 May 2024 at 10:27
Jianqin Zhang with the beta elliptical cryomodule at the ESS superconducting linear accelerator
European Spallation Source Cryogenics engineer and test leader Jianqin Zhang inspects the first medium beta elliptical cryomodule to be installed at the ESS superconducting linear accelerator. Each cryomodule contains several superconducting radio-frequency cavities. (Courtesy: Ulrika Hammarlund/ESS)

The largest use of helium II is currently in particle accelerators. How is it used at these facilities?

Helium II has two main uses in particle accelerators. One is to cool superconducting electromagnets to temperatures below 2.2 K. These create the large magnetic fields that bend and focus particle beams. The conducting wires in these magnets are usually made from niobium–titanium, which becomes a superconductor below about 9 K. However, further cooling allows the magnets to support higher current densities and higher field strengths. As a result, almost all the magnets on the Large Hadron Collider (LHC) at CERN are cooled by helium II.

The second main use of helium II at accelerators is to cool superconducting radio-frequency (SRF) cavities, which are used to accelerate particles. These are made from niobium, which is a superconductor at temperatures below about 9 K. Again, these cavities perform much better at superfluid temperatures, where they use less energy to achieve the same acceleration.

An important benefit of using helium II to cool magnets and SRFs is the superfluid’s very high effective thermal conductivity. As well as making it very efficient at removing heat, the high effective conductivity means that helium does not boil in the bulk – unlike normal liquid helium. This confers great advantage in cooling, particularly when it comes to SRF cavities. This is because the cavities are resonant devices and can be detuned by mechanical vibrations caused by boiling.

While CERN is currently the biggest user of helium II, it is also used at other accelerators worldwide. How will it be used at your institute, the European Spallation Source (ESS), which will be up and running next year?

Like existing spallation sources in the UK, US, Switzerland and Japan, the ESS will accelerate protons to very high energies in a linear accelerator. These protons will then strike a tungsten target, where neutrons will be created by the spallation (fragmentation) of the target nuclei. These neutrons will then be slowed down so that their de Broglie wavelengths are on par with the separations of atoms in solids and molecules. Such neutrons are ideal for experiments that explore the properties of matter.
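As a quick worked example of why slowed neutrons are so useful (standard textbook numbers, not ESS specifications), the de Broglie wavelength λ = h/√(2mE) of a "thermal" neutron with an energy of about 25 meV comes out at the ångström scale of atomic spacings:

```python
import math

# de Broglie wavelength of a thermal neutron: lambda = h / sqrt(2 m E)
h = 6.62607015e-34        # Planck constant, J s
m_n = 1.67492749804e-27   # neutron mass, kg
eV = 1.602176634e-19      # joules per electronvolt

E = 25e-3 * eV  # ~25 meV, a typical thermal-neutron energy
wavelength = h / math.sqrt(2 * m_n * E)
print(f"{wavelength * 1e10:.2f} angstrom")  # ~1.8 Å, comparable to atomic spacings
```

Since interatomic spacings in solids are typically a few ångströms, such neutrons diffract strongly from crystal lattices, which is what makes them a probe of atomic-scale structure.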

The ESS accelerator is about 400 m in length and 90% of the acceleration will be done by SRF cavities operating at 2 K. The superfluid is created by a helium refrigerator providing up to 3 kW of cooling at 2 K.

Other accelerator facilities that use superfluid cooling include the Thomas Jefferson Laboratory in the US and the European X-ray Free Electron Laser in Germany. A future International Linear Collider – a possible successor to the LHC – would also employ superfluid-cooled SRFs.

While superfluid-cooled magnets are used in particle accelerators, that was not their first application.

That’s right. They were first designed for use in the Tore Supra tokamak, which began operation in 1988 in France. It has since been upgraded and renamed WEST, and it still operates today. Tore Supra, like other tokamaks, used magnetic fields to confine a hot hydrogen plasma. The ultimate goal of researchers working on tokamaks is to develop a practical way to harness nuclear fusion as a source of energy.

John Weisend
John Weisend Accelerator engineer and author of a book that outlines the history of how helium II has revolutionized science. (Courtesy: ESS)

Tore Supra’s designers wanted to create longer-lasting plasma pulses and realized that this would not be possible using conventional magnets. They saw superfluid-cooled superconducting magnets as the way forward. The Tore Supra team worked out how to handle liquid helium, and they also developed a piece of technology called a cold compressor that would allow them to efficiently and reliably get down to 2 K. These two developments showed that it was possible to operate superfluid-cooled magnets.

Helium II has also been used in space. What was the first mission to be superfluid cooled?

The first real use of helium II in space was to cool a space telescope, the Infrared Astronomical Satellite (IRAS). This mission was launched in 1983 by the US, the Netherlands and the UK and it surveyed the entire sky at infrared wavelengths. The atmosphere absorbs infrared light, which is why the telescope was launched into space. Once in orbit, its sensors had to be kept as cold as possible to detect low levels of infrared light.

This cooling was done using helium II, and mission designers had to overcome significant challenges such as how to vent helium vapour when it is mixed in with blobs of liquid in a low-gravity environment.

IRAS was a watershed mission in astronomy because nobody had so extensively observed the universe in these infrared wavelengths before. Astronomers could peer through dust clouds and see objects that had been invisible to other telescopes.

IRAS observed the universe for 300 days before its superfluid ran out, but a decade later NASA was able to transfer liquid helium in space. How was that done?

Yes, that was a project called Superfluid Helium On-Orbit Transfer (SHOOT), which carried superfluid helium onboard a Space Shuttle. The demonstration involved transferring superfluid from a full dewar to an empty dewar in microgravity. This was done using a pump that made use of the “fountain effect” in helium II.

How does the fountain effect work?

The effect can be understood in terms of the two fluid model, which describes helium II as having a superfluid component and a normal fluid component. These aren’t real physical phases within helium II, but rather provide a convenient way of understanding many of its mechanical and thermal properties.

The effect occurs when two regions of helium II are separated by a porous plug with micron-sized channels. If the helium II in one region is heated and the other region is cold, the superfluid component will move through the porous media towards the heater. This is possible because the superfluid component has zero viscosity and can move without resistance through the tiny channels – something that the normal fluid component cannot do.

Large Hadron Collider at CERN
Superfluid superuser The Large Hadron Collider at CERN is the world’s largest user of helium II. (Courtesy: Maximilien Brice/CERN)

In the heated region, some of the superfluid component will become normal. However, the normal component is viscous and cannot exit the warm region via the porous plug, so pressure builds up. This pressure can be used to pump helium II without the need for mechanical components.
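The size of this thermomechanical pressure can be estimated with London's relation Δp = ρsΔT. The density and entropy values below are rough, representative numbers for He II chosen purely for illustration, not values from the SHOOT hardware:

```python
# Fountain-effect (thermomechanical) pressure via London's relation:
#   dp = rho * s * dT
# rho and s are rough illustrative values for He II near 2 K.
rho = 145.0   # density of liquid He II, kg/m^3 (approximate)
s = 300.0     # specific entropy, J/(kg K) (rough illustrative value)
dT = 0.1      # temperature difference across the porous plug, K

dp = rho * s * dT
print(f"{dp:.0f} Pa")  # of order kilopascals from a 0.1 K difference
```

With these assumed numbers, even a tenth-of-a-kelvin temperature difference yields a pressure of order kilopascals, enough to drive a transfer with no moving parts.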

SHOOT was an important demonstration of how helium II could be transferred in space. However, researchers realized that it is more cost-efficient to launch experiments with larger dewars and lower heat loads than to refill a dewar during a mission.

Helium II also has the ability to flow up the wall of a dewar, but despite its exotic properties a superfluid is relatively easy to handle in bulk. Why is that?

Research done in the 1970s and 80s showed that bulk helium II has essentially the same fluid mechanical properties as a conventional fluid – something that can also be explained by the two fluid model. When helium II flows, quantized vortices in the superfluid component interact with the viscosity of the normal fluid component. The result is that the bulk properties are the same as a conventional fluid.

This is tremendously helpful to engineers like me; I suppose we can be thankful that sometimes the universe is kind. The standard engineering rules that are used to design fluid-handling systems also apply to helium II – rules that help us choose components such as pipes, pumps and valves for a given system. The only instances when we need to consider the special properties of helium II are when we are transferring heat, using porous media or creating thin films of the superfluid.

There are several Nobel Prizes for Physics that were made possible by helium II cooling. Do you have a favourite?

For me it’s the 1996 prize, which went to David Lee, Douglas Osheroff and Robert Richardson for their discovery of superfluidity in helium-3. The superfluid that we have been talking about so far in this interview is helium-4, which is by far the most abundant isotope of the element. Helium-4 is a boson and bosonic atoms are able to condense into the lowest quantum energy state of the system, creating a superfluid.

Helium-3 atoms are not bosons, but fermions. These atoms cannot undergo this Bose–Einstein condensation directly to create a superfluid. However, in the early 1970s Lee, Osheroff and Richardson showed that helium-3 can condense into a superfluid at the much lower temperature of 2.7 mK. The physical mechanism for this is similar to what occurs in superconductors, where at low temperatures fermionic electrons pair up. These “Cooper pairs” are bosons, so they can condense to create a superconductor in which the electrons can flow without resistance.

Because of its magnetic properties, superfluid helium-3 is a much more complicated substance than superfluid helium-4. It has three different superfluid phases, rather than the one phase of helium-4.

What I like about this discovery is that the trio weren’t searching for superfluidity in their experiment. Instead, they were studying the properties of solid helium-3 at very low temperatures and high pressure. I really like the fact that they were looking for one thing and found something entirely different. Often, the most exciting scientific discoveries are made this way.


The post Superfluid helium: the quantum curiosity that enables huge physics experiments appeared first on Physics World.


Modified pulse tube refrigerator cuts cryogenic cooling times in half

5 May 2024 at 14:29
NIST refrigerator animation
How it works: the bottom animation shows how the addition of an adjustable needle valve between the refrigerator and helium reservoir prevents the relief valve from opening. (Courtesy: S. Kelley/NIST)

A simple modification to a popular type of cryogenic cooler could save $30 million in global electricity consumption and enough cooling water to fill 5000 Olympic swimming pools. That is the claim of researchers at the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder who describe their energy-efficient design in Nature Communications.

Ryan Snodgrass and colleagues in the US have designed a new way to operate pulse tube refrigerators (PTRs), which compress and expand helium gas in a cooling cycle similar to that used in a household refrigerator. Developed in the 1980s, PTRs can now reach temperatures of just a few kelvin, which is below the temperature at which helium becomes a liquid (4.2 K).

While PTRs are reliable and used widely in research and industry, they are very power hungry. When Snodgrass and team looked at why commercial PTRs consume so much energy, they found that the devices were designed to be efficient at their final operating temperature of about 4 K. At higher temperatures, the PTRs are much less efficient – and this is a problem because the cooling process begins at room temperature.

Easier repairs

As well as using lots of electricity to cool down, this inefficiency means that it can take a very long time to cool objects. For example, the Cryogenic Underground Observatory for Rare Events (CUORE) – which is looking for neutrinoless double beta decay deep under a mountain in Italy – is cooled to a preliminary 4 K by five PTRs in a process that takes 20 days. Reducing such long cooling times would make it easier and less costly to modify or repair cryogenic systems.

A careful study of the room-temperature operation of PTRs revealed that the helium gas is compressed to a very high pressure. This causes a relief valve to open, sending some of the helium back to the compressor. Less helium is therefore used for cooling, reducing the efficiency of the PTR.

Snodgrass and colleagues solved this problem by replacing the manufacturer-supplied needle valves in a PTR with customized needle valves that can be adjusted constantly. These needle valves control the flow of gas between the refrigerator and its helium reservoirs. They are normally set to optimize the operation of the PTR at cryogenic temperatures.

In the new operating protocol developed at NIST, the needle valves are open at room temperature. This allows gas to flow in and out of the reservoir, which moderates the pressure in the refrigerator. As the temperature drops, the valves are slowly closed – keeping the system at an ideal pressure throughout its operation.
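The valve schedule described above can be sketched in a few lines. This is a toy illustration only: the linear closing schedule and the temperatures used here are assumptions for the sake of the example, not the schedule reported by the NIST team, which is tuned experimentally.

```python
# Toy sketch of the NIST-style operating protocol: the needle valve between
# the refrigerator and the helium reservoir starts fully open at room
# temperature and is gradually closed as the cold head cools towards 4 K.
# The linear schedule here is an illustrative assumption, not the real one.

def valve_opening(temperature_k, t_warm=300.0, t_cold=4.0):
    """Fraction the needle valve is open: 1.0 at room temperature,
    0.0 once the refrigerator reaches its cryogenic operating point."""
    frac = (temperature_k - t_cold) / (t_warm - t_cold)
    return min(1.0, max(0.0, frac))  # clamp to the physical range [0, 1]

for t in (300.0, 150.0, 4.0):
    print(t, round(valve_opening(t), 3))
```

With the valve open at high temperature, gas can flow into the reservoir and the refrigerator pressure stays moderate; as the valve closes, the system approaches the usual cryogenic operating configuration.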

The team found that the modification can boost the cooling rate of PTRs by 1.7–3.5 times. As well as making cooling quicker and more energy efficient, the new design could also be used to reduce the size or number of PTRs needed for specific applications. This could be very important for applications in space, where PTRs are already used to cool infrared telescopes such as MIRI on the James Webb Space Telescope.

 

The post Modified pulse tube refrigerator cuts cryogenic cooling times in half appeared first on Physics World.


In real-world social networks, your enemy’s enemy is indeed your friend, say physicists

3 mai 2024 à 19:01

If you’ve ever tried to remain friends with both halves of a couple going through a nasty divorce, or hung out with a crowd of mutuals that also includes someone you can’t stand, you’ll know what an unbalanced social network feels like.

You’ll probably also sympathize with the 20th-century social psychologist Fritz Heider, who theorized that humans strive to avoid such awkward, unbalanced situations, and instead favour “balanced” networks that obey rules like “the friend of my friend is also my friend” and “the enemy of my enemy is my friend”.

But striving and favouring aren’t the same thing as achieving, and the question of whether real-world social networks exhibit balance has proved surprisingly hard to answer. Some studies suggest that they do. Others say they don’t. And annoyingly, some “null models” – that is, models used to assess the statistical significance of patterns observed in real networks – fail to identify balance even in artificial networks expressly designed to have it.

Two physicists at Northwestern University in the US now report that they’ve cracked this problem – and it turns out that Heider was right. Using data collected from two Bitcoin trading platforms, the tech news site Slashdot, a product review site called Epinions, and interactions between members of the US House of Representatives, István Kovács and Bingjie Hao showed that most social networks do indeed demonstrate strong balance. Their result, they say, could be a first step towards “understanding and potentially reducing polarization in social media” and might also have applications in brain connectivity and protein-protein interactions.

Positive and negative signs

Mathematically speaking, social networks look like groups of nodes (representing people) connected by lines or edges (representing the relationships between them). If two people have an unfriendly or distrustful relationship, the edge connecting their nodes carries a negative sign. Friendly or trustful relationships get a positive sign.

Under this system, the micro-network described by the statement “the enemy of my enemy is my friend” looks like a triangle made up of one negative edge connecting you to your enemy, another negative edge connecting your enemy to their enemy, and one positive edge connecting you to your enemy’s enemy. The total number of negative edges is even, so the network is balanced.
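The balance rule described above reduces to a parity check: a signed triangle is balanced exactly when it contains an even number of negative edges. A minimal sketch:

```python
# Structural balance of a signed triangle: balanced iff it contains an
# even number of negative edges. "The enemy of my enemy is my friend"
# has two negative edges and one positive edge, so it is balanced.

def is_balanced(signs):
    """signs: the three edge signs of a triangle, each +1 or -1."""
    return signs.count(-1) % 2 == 0

# you -(enemy)- A, A -(enemy)- B, you -(friend)- B
print(is_balanced([-1, -1, +1]))  # two negatives -> True (balanced)
print(is_balanced([-1, +1, +1]))  # one negative  -> False (unbalanced)
```

Equivalently, a triangle is balanced when the product of its three edge signs is positive, which is the form that generalizes to counting balanced triads across a whole network.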

Complicating factors

While the same mathematical framework can be applied to networks of any size and complexity, real-world social networks contain a few wrinkles that are hard to capture in null models. One such wrinkle is that not everyone knows each other. If the enemy of your enemy lives overseas, for example, you might not even know they exist, never mind whether to count them as a friend. Another complicating factor is that some people are friendlier than others, so they will have more positive connections.

In their study, which they describe in Science Advances, Kovács and Hao created a new null model that preserves both the topology (that is, the structure of the connections) and the “signed node degree” (that is, the “friendliness” or otherwise of individual nodes) that characterize real-world networks. By comparing this model to three- and four-node mini-networks in their chosen datasets, they showed that real-world networks are indeed more balanced than would be expected based on the more accurate null model.

So the next time you have to choose between two squabbling friends, or decide whether to trust someone who dislikes the same people as you, take heart: you’re performing a simple mathematical operation, and the most likely outcome will be a social network with more balance. Problem solved!

The post In real-world social networks, your enemy’s enemy is indeed your friend, say physicists appeared first on Physics World.


Protecting phone screens with non-Newtonian fluids

3 mai 2024 à 14:21

New research shows that phones could be strengthened by adding a layer of material to the screen that fluidizes during an impact. In a paper published in PNAS, the team from the University of Edinburgh and Corning, a US-based materials company, developed a mathematical model of an object hitting a phone screen. Using modelling and experiments, they identified the optimal fluid properties for this application. Their results show that fluids that become runnier during impact are most effective at protecting the screen.

Despite the development of toughened glass, a smashed phone screen is a commonplace annoyance. James Richards, a postdoc in Edinburgh who led the research, explains that the aim was to design a fluid-based alternative that would sit under the glass and absorb impacts.

The suspension of a car uses a piston moving through hydraulic fluid to absorb bumps in the road. The resistance of the fluid increases the faster the piston moves, which allows the system to adapt to large and small shocks.

In this project, instead of mechanical components, the screen would be protected by a layer of fluid, like a mattress sitting below the glass. To build a system that would adapt to different impacts, the researchers turned to a class of materials called non-Newtonian fluids, whose viscosity changes depending on the force applied. A mixture of cornflour and water is an example of a shear-thickening fluid because it becomes more viscous the harder it is hit. It is also possible to have shear-thinning fluids that become runnier under impact – an example of this is paint.
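The shear-rate dependence described above is commonly captured by the power-law (Ostwald–de Waele) model, in which the apparent viscosity is η = K·γ̇^(n−1). This is the standard textbook description, used here for illustration; the paper's own constitutive model may differ.

```python
# Power-law model of a non-Newtonian fluid: eta = K * shear_rate**(n - 1).
# n < 1: shear-thinning (runnier under impact, like paint);
# n > 1: shear-thickening (stiffer under impact, like cornflour and water);
# n = 1: Newtonian (viscosity independent of shear rate).

def apparent_viscosity(shear_rate, K=1.0, n=0.5):
    return K * shear_rate ** (n - 1)

low, high = 1.0, 100.0
thinning = apparent_viscosity(high, n=0.5) < apparent_viscosity(low, n=0.5)
thickening = apparent_viscosity(high, n=1.5) > apparent_viscosity(low, n=1.5)
print(thinning, thickening)  # True True
```

The exponent n is the single dial that moves a fluid between the two regimes the researchers compared.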

Soaking Kevlar vests in shear-thickening fluid can make them more resistant to projectiles because the fabric can absorb the impact whilst remaining flexible when worn. As a result, Richards and colleagues suspected that a shear-thickening fluid could also be used to protect phone screen glass.

When an object exerts a force on a screen, the fluid resists the deformation, but the force on the glass itself depends on how much the screen has deformed. This feedback loop makes it difficult to predict how a given fluid will respond, particularly if the fluid is non-Newtonian. “The challenge here is we didn’t know where we were in a design space,” says Richards. “So we needed something much, much more general.”

The researchers wanted to perform an optimization that would test their theory that shear-thickening fluids are best at protecting the screen. This is challenging because the height of the bending screen varies continuously, so there are effectively an infinite number of variables to be optimized.

Simplified phone screen for design optimization

The team looked for a way to simplify the system whilst still capturing the essential physics. They identified that the problem would be a lot easier to solve if the screen was flat – meaning the height during impact would be the same everywhere. The quantity that determines whether the screen breaks would then just be the bending moment – the product of the diameter of the plate and the force on it.

The researchers argue that close to the impact, there will be some area of the plate that is effectively flat, with the size of this flat part becoming smaller the more the screen bends. By solving the equations of motion of the fluid under the plate, the researchers were able to reduce the problem of the flexible plate to a single flat plate whose diameter changes as it squeezes down.
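In this flat-plate picture, the failure criterion is just the bending moment, the product of the plate diameter and the force on it. A minimal sketch of that bookkeeping, with the diameter shrinking as the force grows during impact (the sample time histories below are invented for illustration, not data from the paper):

```python
# Bending moment in the effective flat-plate reduction: M = D * F,
# where D is the diameter of the shrinking flat region and F the force
# on it. The (diameter, force) histories below are made-up numbers.

def bending_moment(diameter_m, force_n):
    return diameter_m * force_n

# As the screen bends, the flat region shrinks while the force grows.
history = [(0.010, 1.0), (0.005, 4.0), (0.002, 12.0)]
moments = [round(bending_moment(d, f), 6) for d, f in history]
print(moments)  # [0.01, 0.02, 0.024]
```

Minimizing the peak of this single scalar over the impact is what the fluid optimization targets.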

With this simplified system, the team was able to factor in shear-thickening or shear-thinning fluid behaviour, allowing them to identify the fluid that minimized the bending moment. They were surprised to find that the optimal fluid was not shear-thickening but shear-thinning. “It turns out our initial thoughts were entirely wrong,” says Richards.

A tight squeeze causes an unexpected fluid response

They attribute this unexpected behaviour to the geometry of the system. During impact, the deformation of the screen squeezes the fluid through a smaller and smaller gap. It’s harder to push a shear-thickening fluid through a narrower space, so whilst it stops the impact, the glass experiences a large force. By contrast, if the fluid is shear-thinning, it will get easier to squeeze as the screen bends. This means the impact spreads out over a longer time, and provided the fluid never gets too runny, it is still possible to absorb the force whilst protecting the screen.

As proof of concept, the researchers tested transparent shear-thickening and shear-thinning fluids in an experiment that mimicked a phone screen. The fluid was sandwiched between a solid base and a sheet of glass, and the force on the glass was measured as a solid wedge pushed down on it. Their result confirms that the force on the glass increases more gradually during impact with the shear-thinning fluid, indicating that this class of fluids would be most effective as screen protectors.

The researchers say that one of their main motivations was to develop a shock absorber that could be used to build flexible phone screens. Their work establishes a framework to optimize the squeezing of non-Newtonian fluids, and they believe it could have applications such as in car windows or even to study how skin creams are applied.

The post Protecting phone screens with non-Newtonian fluids appeared first on Physics World.
