Physics World

Laser-driven accelerator benefits from clever use of light pulses

31 May 2024 at 15:35

Physicists in Germany say they have passed an important milestone in the development of laser-driven, plasma-based particle acceleration. Proton pulses with energies as high as 150 MeV were created by Tim Ziegler and colleagues at Helmholtz Centre Dresden–Rossendorf (HZDR). This is about 50% higher than the previous record for the technique, and was achieved by better exploiting the temporal profile of laser pulses.

Conventional particle accelerators use radio-frequency cavities to create the high voltages needed to drive particles to near the speed of light. These facilities tend to be big and energy hungry, and they often require expensive cryogenic cooling. This limits the number of facilities that can be built and where they can be located. If accelerators could be made smaller and less expensive, it would be a boon for applications as diverse as cancer therapies and materials science.

As a result, there is a growing interest in laser-driven plasma-based accelerators, which have the potential to be far more compact and energy efficient than conventional systems.

Ripping away electrons

These accelerators work by firing intense laser pulses into wafer-thin solid targets. The pulse rips away electrons from the target, leaving behind the positively charged atomic cores. This creates a very large voltage difference over a very small distance – which can be used to accelerate pulses of charged particles such as protons.

While these voltage gradients can be much larger than those in conventional accelerators, significant challenges must be overcome before this technique can be used in practical facilities.

“The adoption of plasma-based proton acceleration has been hampered by the slow progress in increasing ion energy,” Ziegler explains. One challenge is that today’s experiments are done at one of just a few high-power, ultrashort-pulse lasers around the world – including HZDR’s DRACO-PW facility. “Firing only a few shots per day, access and availability at these few facilities is constrained,” adds Ziegler.

One curious aspect of the ultrashort laser pulses from DRACO-PW is that some of the light precedes the main pulse. This means that the full power of the laser is not used to ionize the target. But now, Ziegler’s team has turned this shortcoming into an advantage.

Early arrival

“This preceding laser light modifies our particle source – a thin plastic foil – making it transparent to the main laser pulse,” Ziegler explains. “This allows the light of the main pulse to penetrate deeper into the foil and initiates a complex cascade of plasma acceleration mechanisms at ultra-relativistic intensities.”

The researchers tested this approach at DRACO-PW. When they had previously irradiated a solid foil target, the plasma accelerated protons to energies as high as 80 MeV.

In their latest experiment, they irradiated the target with a pulse energy of 22 J, and used the leading portion of the pulse to control the target’s transparency. This time, they accelerated a beam of protons to 150 MeV – almost doubling their previous record.

This accelerated proton beam had two distinct parts: a broadband component at proton energies lower than 70 MeV; and a high-energy component comprising protons travelling in a narrow and well-defined beam.

Linear scaling

“Notably, this high-energy component showed a linear scaling of maximum proton energy with increased laser energy, which is fundamentally different to the square-root scaling of the lower energy component,” Ziegler explains. The experiment also revealed that the degree of transparency in the solid target was strongly connected with its interaction with the laser – providing the team with tight control over the accelerator’s performance.
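As an illustration of the difference between the two regimes (an editorial sketch, not the team’s analysis – the prefactors are simply anchored to the reported 150 MeV at 22 J), the two scaling laws can be compared numerically:

```python
# Compare the two quoted scaling laws for maximum proton energy:
# linear (high-energy component) versus square-root (broadband component).
# The constants are illustrative, chosen only to pass through 150 MeV at 22 J.
import numpy as np

E_laser = np.linspace(5, 40, 8)            # laser pulse energy in joules
k_lin = 150 / 22                           # MeV per joule
k_sqrt = 150 / np.sqrt(22)                 # MeV per sqrt(joule)

for E in E_laser:
    print(f"{E:5.1f} J   linear: {k_lin * E:6.1f} MeV   sqrt: {k_sqrt * np.sqrt(E):6.1f} MeV")
```

Above 22 J the linear law pulls away rapidly, which is why this component is so promising for reaching higher energies.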

Ziegler believes the result could pave the way for smarter accelerator systems. “This observed sensitivity to subtle changes in the initial laser-plasma conditions makes this parameter ideal for future studies, which will aim for automated optimization of interaction parameters,” he says.

Now that they have boosted the efficiency of ion acceleration, the researchers are hopeful that laser-driven facilities could be built with a fraction of the space and energy requirements of conventional facilities.

This would be particularly transformative in medicine, says Ziegler. “Our breakthrough opens up new possibilities to investigate new radiobiological concepts for precise, gentle tumour treatments, as well as scientific studies in efficient neutron generation and advanced materials analysis.”

The research is described in Nature Physics.

Ask me anything: Daniel Hook – ‘The skills I learned as a researcher are applicable and helpful in any walk of life’

31 May 2024 at 12:00

What skills do you use every day in your job?

As the chief executive officer (CEO) of Digital Science – a company that improves the information and software tools for all stakeholders in the research ecosystem – I use a variety of skills every day. Many of these are exactly what most people would expect: managing people, reading financial statements – all the usual CEO activities. Thankfully for all concerned, I don’t program anymore. It’s more than a decade since my code was in a production environment.

However, perhaps surprisingly to some, I do a lot of data analysis. Digital Science’s core strength is our passion for understanding the research world as a route to offering better tools. For me, that means looking at what research is trending, understanding collaboration patterns, and gaining insight into how the scholarly record is changing. Not only are the data completely fascinating, but they are also the start of so many interesting discussions.

What do you like best and least about your job?

Let’s start with what I like least – which is travel, specifically the jet lag. While I do love spending time in different cultures, meeting people and seeing the beautiful nature and architecture in the places that I’m fortunate to visit, I find the jet lag to be very difficult and I’m constantly worried about my carbon footprint.

Last year I managed to do almost every trip in Europe by train and felt very good about it. But trips to Australia, New Zealand, Japan and the US still managed to make their way into my diary. This is one area where I’m hoping hybrid meetings will find their feet soon.

As for what I like best about my job – that’s easy.  Not only do I work with the most talented, kindest and most passionate team, but we also serve those who are the positive agents of change in our world.

What do you know today that you wish you knew when you were starting out in your career?

Like many people who started off working toward a research career, I defined my success very narrowly – specifically, in terms of being successful in a classically defined research setting. However, the skills that I learned as a researcher are all generally applicable and helpful skills in any walk of life.

They include having an entrepreneurial spirit, a willingness to try to solve a problem, the capacity to work hard and focus on that problem, and not give up when you don’t find a solution with the first approach that you take. Success looks different for everyone and the problems that we contribute to solving, in any context, have the capacity to make people’s lives better.

So, sometimes it’s not good to “buy in” to what we’re so often taught success should look like.

‘Cavendish-like’ experiment could reveal gravity’s quantum nature

31 May 2024 at 10:00
Cavendish-like: A schematic diagram of the proposed experiment on gravitational interaction between two torsion balances. Two torsion pendula are placed with their equilibrium orientations (dashed lines) in parallel and allowed to interact through gravity. An electromagnetic shield is placed between the two pendula to suppress electromagnetic interactions. The rotational degrees of freedom of each pendulum are monitored through their coupling to two cavity fields (red lines). (Courtesy: Ludovico Lami, Julen S Pedernales and Martin B Plenio, Phys. Rev. X 14 021022, https://doi.org/10.1103/PhysRevX.14.021022)

Mathematical physicists in the Netherlands and Germany have proposed a new “Cavendish-like” gravitation experiment that could offer an alternative means of determining whether gravity is a classical or quantum phenomenon. If built, the experiment might bring us closer to understanding whether the theory of gravity can be reconciled with quantum-mechanical descriptions of the other fundamental forces – a long sought-after goal in physics.

Gravity is one of the four known fundamental forces in nature. It is different from the others – the electromagnetic force and the weak and strong nuclear forces – because it describes a curvature in space-time rather than interactions between objects. This may be why we still do not understand whether it is classical (as Albert Einstein described it in his general theory of relativity) or governed by the laws of quantum mechanics and therefore unable to be fully described by a local classical field.

Many experiments that aim to resolve this long-standing mystery rely on creating quantum entanglement between two macroscopic objects placed a certain distance from each other. Entanglement is a phenomenon whereby the information contained in an ensemble of particles is encoded in correlations among them, and it is an essential feature of quantum mechanics – one that clearly distinguishes the quantum from the classical world.

The hypothesis, therefore, is that if massive objects prepared in spatially delocalized quantum states can be entangled via gravity, then gravity must be quantum.

Revealing gravity’s quantum nature without generating entanglement

The problem is that it is extremely difficult to make large objects behave as quantum particles. In fact, the bigger they get, the more likely they are to lose their quantum-ness and resort to behaving like classical objects.

Ludovico Lami of the University of Amsterdam, together with Martin Plenio and Julen Pedernales of the University of Ulm, have now thought up a new experiment that would reveal gravity’s quantum nature without having to generate entanglement. Their proposal – which is so far only a thought experiment – involves studying the correlations between two torsion pendula placed close to each other as they rotate back and forth with respect to each other, acting as massive harmonic oscillators (see figure).

This set-up is very similar to the one that Henry Cavendish employed in 1797 to measure the strength of the gravitational force, but its purpose is different. The idea, the team say, would be to uncover correlations generated by the whole gravity-driven dynamical process and show that they are not reproducible if one assumes the type of dynamics implied by a local, classical version of gravity. “In quantum information, we call this type of dynamics an ‘LOCC’ (from ‘local operations and classical communication’),” Lami says.

In their work, Lami continues, he and his colleagues “design and prove mathematically some ‘LOCC inequalities’ whose violation, if certified by an experiment, can falsify all LOCC models. It turns out that you can use them to rule out LOCC models also in cases where no entanglement is physically generated.”

An alternative pathway

The researchers, who detail their study in Physical Review X, say they decided to look into this problem because traditional experiments have well-known bottlenecks that are difficult to overcome. Most notably, they require the preparation of large delocalized states.

The new experiment, Lami says, is an alternative way of realizing experiments that can definitively indicate whether gravity is ultimately fully classical, as Einstein taught us, or somehow non-classical – and hence most likely quantum. “While we don’t claim that our method is completely and utterly better than the others, it is quite different and, depending on the experimental platform, may prove easier to practically set up,” he tells Physics World.

Lami, Plenio and Pedernales are now working to bring their analyses closer to real-world experiments by taking into account other interactions besides gravity. While doing so will complicate the picture and make their analyses more involved, they recognize that it will eventually be necessary for building a “bulletproof” experiment.

Plenio adds that the approach they are taking could also reveal other finer details about the nature of gravity. “In our work we describe how to decide whether gravity can be mimicked by local operations and classical communications or not,” he says. “There might be other models, however – for example, where gravity follows dynamics that do not obey LOCC, but still do not have to create entanglement either. This type of dynamics is called ‘separability preserving’. In principle we can also solve our equations for these.”

Simulations point to the existence of a charming and beautiful tetraquark

30 May 2024 at 17:28

Supercomputer simulations done by a trio of physicists in India provide strong evidence for the existence of a new type of tetraquark. Dubbed Tbc, the tetraquark comprises two heavy quarks (charm and beauty) and two light antiquarks (up and down). The simulations focused on the interaction of two mesons: one composed of a charm quark and a down antiquark, and the other made of a beauty quark and an up antiquark. A detailed analysis shows that the strong nuclear force should bind these mesons into a Tbc, which the trio believes could be discovered in accelerator experiments in the near future.

“Traditionally composite subatomic particles are categorized as mesons (comprising a quark and an antiquark) and baryons (comprising three quarks),” explain the researchers. “However, starting [in] 2003, there have been a large number of discoveries of exotic hadrons that defy the conventional picture of baryons and mesons, and call for a description beyond these two simplest categories.” These exotic hadrons include tetraquarks (comprising two quarks and two antiquarks) and pentaquarks (comprising four quarks and an antiquark).

One recent discovery is the Tcc+ tetraquark – two charm quarks, an up antiquark and a down antiquark – which was spotted by the LHCb collaboration. “Its bottom flavored cousin, designated Tbb, has long been hypothesized to be a strongly bound hadron, but finding it in experiments will be difficult in the near future because of its large mass,” say the researchers. However, the possible existence of the Tbc tetraquark had been unclear until now.

Lattice simulation

To find out whether the two mesons could combine to form Tbc, Padmanath Madanagopalan of the Homi Bhabha National Institute for Science Education, together with Archana Radhakrishnan and Nilmani Mathur of the Tata Institute of Fundamental Research, used a standard computational method for studying bound states of elementary particles. This involved simulating meson collisions in order to deduce how they could bind to each other to form a Tbc.

While this technique is ideal for studying electromagnetic and weak interactions, it requires enormous computing power to calculate the strong interactions involved in meson collisions. Such calculations involve approximating continuous space–time using a discrete lattice with a step size of a few hundredths of a femtometre – with quantum fields defined on its nodes and on the links between them.
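A back-of-envelope count (an editorial sketch with assumed but typical numbers) shows why such a discretization demands supercomputers:

```python
# Rough count of the degrees of freedom on a 4D lattice. Gauge fields live
# on the links between nodes; each link carries an SU(3) matrix
# (9 complex = 18 real numbers), with 4 links per site. The spacing and box
# size below are assumptions for illustration, not the paper's parameters.
a = 0.05     # lattice spacing in fm ("a few hundredths of a femtometre")
L = 3.0      # box size in fm, roughly enough to hold a pair of mesons

sites = int((L / a) ** 4)
reals = sites * 4 * 18      # real numbers in a single gauge configuration
print(f"{sites:.2e} sites, ~{reals:.2e} real numbers per configuration")
# ~1.3e7 sites and ~9e8 numbers -- and many thousands of statistically
# independent configurations must be generated and averaged over.
```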

“Calculation of the binding energies of tetraquarks is extremely challenging because of the nature of the strong interaction,” explains Tim Gershon of the UK’s University of Warwick, who was not involved in this latest study. “At high energy, strong interaction processes can be calculated ‘perturbatively’, which means considering exchange of one particle (the gluon – the mediator of the strong interaction) to be dominant, with small corrections from two or more gluon exchanges.”

Very sophisticated algorithms

“Unfortunately, this does not work at lower energies where the strong interaction is ‘non-perturbative’, which means that interactions involving very large numbers of gluons need to be considered,” Gershon adds. “Various approximate methods exist to solve this puzzle. However, complete calculations can only be done using what is called lattice QCD (where QCD stands for quantum chromodynamics – the theory of the strong interaction). This involves using supercomputers and very sophisticated algorithms.”

By performing lattice QCD calculations of the two mesons scattering, the trio deduced the strength of the interaction and found that it is attractive and about 100 times stronger than the attraction between mesons in Tcc+. This is a strong indication of the existence of the Tbc, and that the tetraquark is much more strongly bound and long-lived than Tcc+.

“In our work using a first principles method of lattice QCD we provide compelling evidence for the existence of this novel subatomic particle, and remove any earlier doubts,” conclude the trio. They add that their calculations should motivate experimentalists to search for the tetraquark. Indeed, the trio believes that there is a realistic prospect that the tetraquark could be discovered within the next 5–10 years.

The research is described in Physical Review Letters.

Baltimore bridge collapse: engineers explain how failures can be avoided

30 May 2024 at 16:34

Earlier this year, the Francis Scott Key Bridge in the US collapsed after being struck by a large container ship. Six people were killed in the disaster and many around the world were left wondering how such an important piece of infrastructure could collapse in such a catastrophic way.

We investigate in this episode of the Physics World Weekly podcast, which features Erin Bell and Martin Wosnick. They are both engineers at the University of New Hampshire (UNH) and they are in conversation with Physics World’s Margaret Harris.

Bell specializes in the structural design and dynamics of bridges; she explains why the bridge collapsed and discusses what can be done to avoid future catastrophes. Wosnick is an expert in fluid flow and, along with Bell, is involved in the UNH Living Bridge Project. They explain how the project has transformed a lift bridge into a living laboratory that investigates, among other things, how a bridge can be used to generate tidal energy.

They also talk about the Atlantic Marine Energy Center, which is developing new ways to extract useful energy from the motions of the oceans.

Are dusty quasars masquerading as Dyson sphere candidates?

30 May 2024 at 15:00

Seven candidate Dyson spheres found from their excess infrared radiation could be a case of mistaken identity, with evidence for dusty background galaxies spotted close to three of them.

The seven candidates were discovered by Project Hephaistos, which is coordinated by astronomers at Uppsala University in Sweden and Penn State University in the US.

A Dyson sphere is a hypothetical construct: a swarm of energy collectors capturing all of a star’s radiant energy to provide huge amounts of power for its builders. As these energy collectors – basically huge arrays of solar panels – absorb sunlight, they must emit waste heat as infrared radiation to avoid overheating. While a complete Dyson swarm would hide a star from view, this waste heat would still be detectable.
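The waste-heat signature follows from a standard blackbody argument (an editorial estimate, not Project Hephaistos’s modelling): a swarm at radius R must re-radiate its star’s entire luminosity, which fixes its temperature and hence the wavelength of the infrared excess.

```python
# Equilibrium temperature of a Dyson swarm around a Sun-like star, assuming
# it re-radiates the full stellar luminosity outward from a sphere of area
# 4*pi*R^2. The 1 au radius is an assumption for illustration.
import math

sigma = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_star = 3.828e26     # solar luminosity, W
R = 1.496e11          # swarm radius = 1 au, m

T = (L_star / (4 * math.pi * R**2 * sigma)) ** 0.25
wien_peak_um = 2898.0 / T     # Wien's displacement law, micrometres
print(f"T ≈ {T:.0f} K, emission peaking near {wien_peak_um:.1f} µm")
# ≈ 394 K, peaking around 7 µm -- mid-infrared, the regime probed by WISE.
```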

The caveat is that to build a complete Dyson swarm, a lot of raw material is required. In his 1960 paper describing the concept, Freeman Dyson calculated that dismantling a gas giant planet like Jupiter should do the trick.

Given that this is easier said than done, Project Hephaistos has been looking for incomplete Dyson swarms “that do not block all starlight, but a fraction of it,” says Matías Suazo of Uppsala University, who is leading the project.

Suazo’s team searched five million objects in archival data from NASA’s Wide-field Infrared Survey Explorer (WISE) and Two Micron All-Sky Survey (2MASS), and cross-checked them against photometry and distance data accrued by the European Space Agency’s Gaia mission. This resulted in seven candidates exhibiting a suspicious infrared excess, as reported in Monthly Notices of the Royal Astronomical Society.

All the candidates are M-dwarfs, which are the smallest, coolest and most common type of star in the Universe. The closest is located 466 light years away.

Debris discs and Hot DOGs

Yet questions have now arisen over the real nature of these candidates.

“They could be an astrophysical phenomenon such as extreme debris discs, or something more exotic,” says Ann Marie Cody, an astronomer at the SETI Institute in California who is not involved in Project Hephaistos, but has conducted her own Dyson swarm search. “However, the data published thus far cannot discriminate between these scenarios.”

Debris discs are the dusty remnants of planet formation, but while many M-dwarfs have been found to have planets, only a handful have been found with sizeable debris discs, leading Suazo to be sceptical of the debris disc explanation.

“Observationally, M-dwarf debris discs are really uncommon,” Suazo tells Physics World. “There are different theories about why, including observational biases, formation mechanisms and so on. Those few cases that have been confirmed are in the submillimetre/radio regime, which means they are way cooler than the temperature range of our models.”

Tongtian Ren and Michael Garrett of the Jodrell Bank Centre for Astrophysics at the University of Manchester, and Andrew Siemion of Breakthrough Listen and the University of Oxford, have proposed that the candidates have another explanation: background contamination from distant, dusty quasars.

They found strong radio sources very close in the sky to three of the candidates. Each radio source is attributed to an active supermassive black hole at the centre of a very distant, dusty quasar known as a “Hot DOG”, or hot dust-obscured galaxy. Because they are dusty, they radiate infrared and are large enough in the sky to extend behind the Dyson swarm candidates.

Although no coincident radio sources were found near the four remaining candidates, the density of Hot DOGs in the sky leads Ren, Garrett and Siemion to conclude that these are most probably also contamination from Hot DOGs, albeit radio-quiet ones.

But Garrett isn’t completely ruling out Dyson swarms.

“We still think the sources are worth pursuing with new observations across the electromagnetic spectrum to see which interpretation is correct,” he tells Physics World.

JWST to the rescue?

More bad news for the candidates comes from Cody’s Dyson swarm search, which used NASA’s Transiting Exoplanet Survey Satellite (TESS) to look for anomalous transits that could potentially signpost large, artificial structures.

“Our optical photometry pipeline did not classify the Dyson sphere candidates as having any anomalous variability,” says Cody. “In fact, I personally examined the available TESS data for each of the seven objects in this paper, and none of them appear to be significantly variable.”

To settle the matter, Suazo, Garrett and Cody all agree that spectroscopic observations are now vital. If dust is present, either in a disc or a background galaxy, it would produce specific absorption lines. Alternatively, spectroscopy could measure the energy distribution of a candidate star’s photosphere (its visible surface) so that a best-fit model can be applied to determine whether a candidate really is consistent with a Dyson swarm.

“James Webb Space Telescope data would be ideal, since it could quickly rule out or confirm the debris disc or the galaxy contamination explanations,” says Suazo.

If the Hot DOGs explanation turns out to be correct, it leaves the hunt for Dyson swarms in a difficult place. An infrared excess is a Dyson swarm’s calling card, but if contamination from background galaxies is the probable answer each time, how can one distinguish artificial megastructures from natural phenomena?

“It’s a good question,” says Garrett. “Hot DOGs would be detected in deep near-infrared observations. There might also be some subtle aspects of the data that would be a tell-tale sign of contamination – we will start looking at that now.”

Cody’s search continues: having already probed 60 million stars, her team is still vetting about a thousand events that are probably eclipsing binaries – but you never know.

For Cody, a multifaceted approach is essential in hunting for Dyson swarms. “With photometric data alone, it can be challenging to distinguish between rare astrophysical phenomena and Dyson swarms,” she says. “However, I believe that optical and infrared spectroscopic data may assist with the task.”

Early Earth’s magnetic field strength was similar to today’s

30 May 2024 at 10:00

Ancient organisms preserved in the Earth’s oldest fossils may have experienced a planetary magnetic field similar to the one we observe today. This finding, from a team of researchers at the University of Oxford in the UK and the Massachusetts Institute of Technology in the US, suggests that the planet’s magnetic field was relatively strong 3.7 billion years ago – a fact with important consequences for early microbial Earthlings.

“Our finding is interesting because the Sun was generating a much more intense solar wind in the Earth’s early history,” explains team leader Claire Nichols of Oxford’s Department of Earth Sciences. “This means that the same strength of magnetic field would have provided far less shielding (because the protective ‘bubble’ around Earth provided by the magnetosphere would have been much smaller) for life emerging at that time.”

Without the magnetosphere, which protects us from cosmic radiation as well as the solar wind, many scientists think that life as we know it would not have been possible. Until now, however, researchers weren’t sure when it first appeared or how strong it was.

Unique rock samples

In the new work, Nichols and colleagues analysed rocks from the northern edge of the Isua Supracrustal Belt in southwest Greenland. Billions of years ago, as these rocks were heated, crystals of magnetite formed, and iron oxide particles within them recorded the strength and direction of the ambient magnetic field.

While similar processes happened in many places and at many times during Earth’s history, the rocks in the northernmost part of Isua are extremely unusual. This is because their location atop a thick continental crust prevented their magnetic information from being “erased” by later geological activity.

Indeed, according to the researchers, this band of land experienced only three significant thermal events in its history. The first and hottest occurred 3.7 billion years ago and heated the rocks up to 550 °C. The two subsequent heating events were less intense, and because they did not heat the rocks to more than 400 °C, the 3.7-billion-year-old record of Earth’s magnetic field remains as it was after the first event locked it in.

Recovering a vector of magnetization

Collecting samples from Isua was challenging, Nichols says, because the sample site is so remote it can only be reached by helicopter. Once back in the laboratory, the team demagnetized the samples stepwise, either by gradually heating them or by applying increasingly strong alternating magnetic fields. “These processes allow us to slowly remove the magnetic signal from the samples,” Nichols explains, “and recover a vector of magnetization that tells us about the direction of the ancient magnetic field.”

To determine the strength of the ancient field, the researchers applied a known magnetic field and compared the vector of magnetization acquired in the lab to that recovered in the original demagnetization. They found that rocks dating from 3.7 billion years ago recorded a magnetic field strength of at least 15 microtesla (µT). To compare, Earth’s present-day magnetic field averages around 30 µT. These results constitute the oldest estimate of the Earth’s magnetic field strength ever recovered from bulk rock samples – a method that is more accurate and reliable than previous analyses of individual crystals.
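In essence this comparison is a Thellier-type ratio: because weak-field thermal remanence is proportional to the field that imprinted it, the ancient field follows from the known lab field scaled by a ratio of magnetizations. A minimal sketch with invented numbers:

```python
# Thellier-style paleointensity estimate. The magnetization values below are
# invented for illustration; only the ratio matters.
B_lab = 30e-6         # known field applied in the lab, tesla
NRM_lost = 1.2e-6     # natural remanence removed on demagnetization (arb. units)
TRM_gained = 2.4e-6   # remanence re-acquired in the lab field (same units)

B_ancient = B_lab * (NRM_lost / TRM_gained)
print(f"Estimated ancient field: {B_ancient * 1e6:.0f} µT")   # 15 µT
```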

Consequences for early life

The fact that the Earth’s magnetic field was already fairly strong 3.7 billion years ago has several implications. One is that, over time, as the solar wind decreased, life on Earth would have become progressively less likely to experience high levels of ionizing radiation. This may have allowed organisms to move onto land and leave the more protective environment of the deep oceans, Nichols says.

The results also suggest that Earth’s early magnetic dynamo was as efficient as the mechanism that generates our planet’s magnetic field today. This finding will help scientists better understand when the Earth’s inner, solid core began to form, potentially shedding light on processes such as mantle convection and plate tectonics.

Are magnetic fields a key criterion for habitability?

Perhaps the most profound implication, though, concerns the possibility of life on other planets. “Understanding the oldest record of Earth’s magnetic field is really important for figuring out whether magnetic fields are a key criteria for habitability,” Nichols tells Physics World. “I’m really interested to know why Earth appears so unique – and whether the magnetic field matters for its uniqueness.”

The technique developed in this study, which is detailed in the Journal of Geophysical Research, could be used to study other very old rocks, such as those found in Australia, Canada and South Africa. Nichols says her next big project will be to carry out similar studies on these rocks.

Blurred tomography fabricates custom microlenses with optically smooth surfaces

29 May 2024 at 16:00

Additive manufacturing, also known as 3D printing, has revolutionized many sectors with its speed, flexibility and unparalleled design freedom. But previous attempts to create high-quality optical components using additive manufacturing methods often came up against a range of obstacles. Now, researchers from the National Research Council of Canada have turned to blurred tomography – an extension of the tomographic volumetric additive manufacturing (VAM) method – to create customized optical components.

“3D printing is transforming every sector of manufacturing,” says lead author Daniel Webber. “I’ve always been interested in 3D printed optics due to their potential to revolutionize optical system design. I saw a post-doc position with the NRC where they wanted to do volumetric 3D printing of micro-optics, and the rest was history”.

Additive manufacturing challenges

In the past, techniques such as digital light processing, stereolithography, inkjet printing and two-photon polymerization (2PP) have been used to build optical components through a layer-by-layer approach. However, the fabrication process tends to be slow, it’s difficult to fabricate optical components with curvature – which many components need – and surfaces that aren’t parallel to the substrate have height steps defined by the layer thickness.

VAM has also had its challenges, with poor part quality (such as ridges on the surface called striations) due to the self-writing waveguide effect – in which the narrow writing beams used in VAM cause increased printing speed in planes parallel to the beams. Post-processing methods are usually required to increase the part quality and smooth the surfaces, but a direct VAM method that doesn’t require extra steps has been sought.

Overcoming challenges with blurred tomography

In their latest research, Webber and his team have accomplished such a direct VAM method, while maintaining the freedom of design that additive manufacturing offers for rapid prototyping.

Tomographic VAM uses projected light to solidify a photosensitive resin in specific regions, enabling parts to be built without support structures. While the pencil-like beams used in conventional tomographic VAM methods cause striations, the new technique can produce microlenses with commercial-grade quality. It is known as blurred tomography, because a large-etendue (more “spread out”) source is used to purposely blur the lines and reduce striations.

The blurring of the optical writing beam helped to generate a surface roughness in the sub-nanometre range – making the printed surface essentially molecularly smooth. By comparison, other VAM methods use well-collimated, low-etendue writing beams, which by design are not blurred.

Printing setup: A custom projection lens blurs the laser beams used to solidify a photosensitive resin, enabling printing of commercial-quality optics, such as the lens shown bottom left. (Courtesy: Daniel Webber, NRC)

By purposely blurring the beam, and coupling this with the astigmatism introduced by the cylindrical vial of photoresin (used without an index-matching bath), the researchers achieved blurring across the whole print volume. Alongside the rapid processing speed, the other defining feature of the blurred tomography method is that it doesn’t require additional processing and is therefore a direct method for producing smooth optical components.
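A toy one-dimensional calculation (an editorial illustration, not the authors’ optical model) captures the principle: a row of narrow pencil beams leaves a strongly rippled dose profile, whereas wider, blurred beams wash the ripple out.

```python
# Striations as ripple in the summed dose of regularly spaced Gaussian
# writing beams; widening each beam (blurring) suppresses the ripple.
import numpy as np

x = np.linspace(0, 10, 2000)
centres = np.arange(0.5, 10, 0.5)           # regularly spaced beam positions

def dose(width):
    return sum(np.exp(-((x - c) / width) ** 2) for c in centres)

interior = (x > 2) & (x < 8)                # ignore edge fall-off
for width, label in [(0.1, "pencil beams "), (0.4, "blurred beams")]:
    d = dose(width)[interior]
    print(f"{label}: peak-to-trough ripple {(d.max() - d.min()) / d.mean():.3f}")
# pencil beams ripple by order unity (striations); blurred beams by under 1%
```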

“The most significant finding of this work is that we can directly fabricate optically smooth surfaces and have freeform ready-to-use optical components in under 30 minutes,” says Webber.

While the overall process takes around 30 min, the actual printing of the lenses took less than a minute. This is similar to other VAM techniques (but without the need for extra surface-processing steps). In comparison, a previous study found that using 2PP to print a half-ball lens of similar size (2 versus 3 mm), curvature error (3.9% versus 5.4%) and surface roughness (2.9 versus 0.53 nm) took 23 h – showing that blurred tomography is much faster while producing finer surface features.

The research team showcased the potential of the new technique by making a millimetre-sized plano-convex optical lens with an imaging performance comparable to that of a commercial glass lens. The intrinsic design freedom that additive manufacturing offers also helped the researchers create a biconvex microlens array (with fabrication on both sides), as well as overprint a lens onto an optical fibre.

Like many areas of additive manufacturing, it’s thought that VAM could offer a way of producing low-cost and rapidly prototyped parts, in particular, freeform optical components. “We have demonstrated that blurred tomography is capable of rapidly fabricating a range of micro-optical components. Moving forward, we would like to extend these capabilities to larger part sizes and new materials,” Webber tells Physics World.

The research was published in Optica.

Ion therapy, mass spectrometry and the origins of life: Lily Ellis-Gibbings shares her passion for creating novel instrumentation

29 May 2024 at 13:36

How did you first become interested in working in the field of instrumentation?

When I was in the final year of my bachelor’s degree in Australia I was looking for an undergraduate project and the one I chose involved studying atomic and molecular collisions. This meant working with homemade vacuum equipment and instrumentation, which I really liked. I discovered that I really enjoy doing experiments that take an incredibly complicated physical system and break it down into smaller and smaller pieces – until your measurements reveal something fundamental about the system you are studying.

After I graduated, a friend and I went on a road trip to visit universities and chat with potential PhD supervisors. However, I couldn’t find a good fit, so I worked for a few years as a science communication officer.

That was a really great experience, but after a while I was looking for something more scientific and a little more challenging. The professor that I did the undergraduate project with had a colleague in Spain who was looking for someone to do a PhD in a similar field. So I was very lucky to be in the right place at the right time – and to have some experience in vacuum instrumentation.

Your PhD at the Autonomous University of Madrid involved the development of new ion sources for radiotherapy. What was the goal of that research?

The focus was on ion therapy, which is a type of cancer treatment that uses a high-energy beam of protons, or perhaps carbon ions, to destroy a tumour within the body. Ions are particularly good at this application, because, due to an effect called the Bragg peak, the beam can be adjusted to deposit most of its energy in the tumour – minimising damage to the healthy tissue that it must pass through to reach the tumour.
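The Bragg peak can be made concrete with a textbook power-law range model (an editorial toy calculation, not a clinical one): as a proton slows down its stopping power grows, so the dose spikes just before the end of its range.

```python
# Toy depth-dose curve for protons in water using the empirical range-energy
# fit R = alpha * E^p. The dose is taken as the stopping power -dE/dx, which
# diverges as the proton approaches the end of its range.
import numpy as np

alpha, p = 0.0022, 1.77         # common fit values: R in cm for E in MeV
E0 = 150.0                      # initial proton energy, MeV
R = alpha * E0**p               # range, roughly 15 cm in water

x = np.linspace(0, 0.999 * R, 12)                        # depths along the track
dose = (R - x) ** (1 / p - 1) / (p * alpha ** (1 / p))   # -dE/dx

for xi, d in zip(x, dose):
    print(f"depth {xi:5.1f} cm   relative dose {d / dose[0]:5.2f}")
# The dose rises slowly for most of the path, then spikes near x = R --
# which is why the beam can be tuned to dump most of its energy in a tumour.
```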

It turns out that it’s not just the high-energy ions that affect the tumour. Ion collisions create a cascade of lower energy particles such as electrons and free radicals. These can affect tumour cells in lots of different ways – sometimes even killing the cells.

We were trying to develop ion sources that could mimic the secondary particles created when an ion beam interacts with molecules in the body. In particular, we were interested in how these ions interact with DNA, proteins and other complex molecules. This information feeds into more complex computer models of ion therapy; by contrast, many conventional models treat the body as if it were 100% water.

After you completed your PhD, you worked in experimental astrochemistry at University College London. That must have been a fascinating field to be involved with.

Yes, astrochemistry is a super fun field and I really enjoyed it. It is the study of the chemistry that occurs in different parts of space. Although it sounds very different to my work in radiotherapy, there were similarities in terms of the instrumentation that we used – specifically ion and electron beams in vacuum systems. We focused on how these beams interacted with molecules of astrochemical interest. For example, if a molecule is detected in the ionosphere of Saturn’s moon Titan we could study how that molecule could break down in that environment and form new bonds. Could it turn into a singly charged or a doubly charged ion?

Have you studied molecules that may be involved in the emergence of life?

That is certainly something we were interested in. Abiogenesis, the process by which life emerged from non-living matter, is an important topic in astrochemistry. Indeed, one of the reasons we would choose certain molecules to investigate is their possible roles in the emergence of life. For example, I would work with molecules that have carbon–nitrogen bonds because these bonds are really important in how we believe life first appeared on Earth.

That sounds like an exciting field, did you enjoy it?

It was fun and I definitely learned a lot. The astronomy conferences were fascinating and I loved seeing the amazing images that were created from telescope data. The overlap between people doing astrochemistry and the observational astronomers was fairly small. But we did interact a lot with physicists who model what the concentrations of different atoms in different parts of the universe should be. It was fascinating speaking with them.

National Physical Laboratory: Ellis-Gibbings is a higher scientist at the National Centre of Excellence in Mass Spectrometry Imaging at the lab’s Teddington campus. (Courtesy: NPL)

You joined NPL in 2021, where you are a higher scientist. What does your job entail?

I joined the National Centre of Excellence in Mass Spectrometry Imaging (NiCE-MSI) at NPL in Teddington, London. We mostly work on the chemical imaging of surfaces and a lot of what we do is focused on biology. The measurements that we make are different from what I did before, but again, the instrumentation is similar.

In mass spectrometry imaging we use many different ion sources, some of which operate at high vacuum. But, the really cool thing that I am doing at the moment is the development of ambient ion sources. Conventional mass spectrometry imaging involves ionizing samples under vacuum conditions, often with a lot of sample preparation. However, some components or types of samples – particularly in delicate or unusual biological samples – cannot be studied fully in this way.

To address this problem, I’m developing ambient ionization techniques that allow us to do mass spectrometry on samples in air with no sample preparation. I particularly enjoy working with unusual sample types such as plant matter. It’s interesting work and there are always new challenges.

NPL is very involved in developing measurement standards, so our group ensures that mass spectrometry techniques are quantifiable and repeatable. It is important that we maintain the high level of scientific integrity that we have within the metrology (measurement science) community.

Do you work with researchers outside of NPL?

Yes, we have extensive collaborations across the UK. We recently partnered with Cancer Research UK on a project called Rosetta, which involves using mass spectrometry imaging to map cancer biology. We also collaborate with the Rosalind Franklin Institute in this field and we partner with researchers based at universities around the UK. We contribute our experimental expertise to these projects and we are also involved in analysing data gathered by the collaborations.

NPL also takes part in collaborations with other metrology institutes around the world, including the National Institute of Standards and Technology (NIST) in the US.

What advice do you have for a physics PhD student who would like a career in instrumentation?

Try to spend as much time in the lab as you can. Fix things that are broken, with the appropriate guidance. Work out how the electronic equipment in your lab works and be hands-on when it comes to tasks like putting together a vacuum system. It’s also important that you learn how to use computer-aided design (CAD) software and understand machine engineering basics – this will allow you to design, commission and build your own equipment.

One thing that I found very useful is learning how to assemble and disassemble delicate equipment using tiny tools while wearing gloves. Through hours of practice, I taught myself not to be clumsy and to organize parts and tools so I don’t lose them.

One thing that is important to realize is that building instrumentation as an academic scientist can be risky. If you work on a project that fails to deliver data and publications, it can be very difficult when applying for the next job or fellowship. It’s always good to have a side project that trundles along and also produces data – so you always have something you can publish.

What about a career outside of academia?

Scientific institutes, like us at NPL, often mix academic, commercial and measurement services. There are also lots of companies that produce scientific instrumentation and they employ some really amazing scientists. So, that is an excellent career path.

Quantum error correction produces better ‘magic’ states

29 May 2024 at 12:00

Humans like to build robust systems – ones that resist change and minimize unreliable or undesirable results. In quantum computing, this desire manifests itself as fault-tolerant quantum computing (FTQC), which aims to protect both the quantum state and the logic gates that manipulate it against interventions from the environment. Although quite a resource-intensive task, physicists at IBM Research recently showed that they could partially meet this requirement, thanks to some real-life quantum “magic”.

The process of cushioning quantum states against uncontrollable interventions is called quantum error correction (QEC). In this process, information that would ideally be stored in a few quantum bits, or qubits, is instead stored in many more. The resulting redundancy is used to detect and correct the enforced errors. In QEC jargon, the mapping of the ideal-case quantum state (the logical state) into the noise-protected state (the physical state) is termed encoding. There are multiple ways to perform this encoding, and each such scheme is called an error-correction code.
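The simplest illustration of this redundancy is the classical three-bit repetition code (a toy that protects only against bit flips; genuine quantum codes are larger and must also protect phase information):

```python
# Three-bit repetition code: one logical bit stored in three physical bits,
# with majority voting used to detect and correct a single flip.
from collections import Counter

def encode(bit):
    return [bit] * 3                 # logical 0 -> 000, logical 1 -> 111

def correct(physical):
    return Counter(physical).most_common(1)[0][0]   # majority vote

word = encode(1)
word[0] ^= 1                         # the environment flips one physical bit
print(correct(word))                 # -> 1: the logical bit survives
```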

FTQC begins with QEC as a first step and then uses appropriate encoded gates and measurements to robustly perform a computation. For it to succeed, the error rate must fall below a certain threshold. Achieving this low error rate remains a significant challenge due to the huge number of physical qubits required in the aforementioned encodings. However, researchers are making significant improvements to FTQC sub-routines.

A little bit of magic

One such improvement recently came from an IBM team led by Maika Takita and Benjamin J Brown. In this work, which is published in Nature, the team proposed and implemented an error-suppressing encoding circuit that prepares high-fidelity “magic states”. To understand what a magic state is, one must first recognize that some operations are much easier to implement in FTQC than others. These operations are known as the stabiliser or Clifford operations. But with these operations alone, we cannot perform useful computations. Does this render Clifford circuits useless? No! When a class of states called magic states is injected into these circuits, the circuits can do any computation that quantum theory permits. In other words, injecting magic states introduces universality into Clifford circuits.

The next important question becomes, how do you prepare a magic state? Until a few decades ago, the best techniques for doing so only created low-quality copies of these states. Then, two researchers, Sergey Bravyi and Alexei Kitaev, proposed a method called magic state distillation (MSD). If one starts with many qubits in noisy copies of magic states, MSD can be used to create a single, purified magic state. Crucially, this is possible using only Clifford operations and measurements, which are “easy” in FTQC. Bravyi and Kitaev’s insight created a leading FTQC model of computation called quantum computation via MSD.

Of course, one can always start with a circuit that allows for operations outside the Clifford set. Then no input state remains “magical”. But the big advantage of sticking to Clifford circuits supplemented by magic states is that its universality reduces the general fault-tolerance problem to that of performing fault-tolerant Clifford circuits only. Moreover, most known error correcting codes are defined in terms of Clifford operations, making this model a crucial one for near-future exploration.

High-fidelity states

While MSD requires a large number of attempts to prepare the intended state, the IBM researchers showed that these can be reduced by preparing the input magic states (to MSD) with high fidelity – meaning that the states prepared are very close to those required. In a proof-of-concept demonstration, the IBM team prepared an encoded magic state that is robust against any single-qubit error. Using four qubits of the 27-qubit IBM Falcon processor on the ibm_peekskill system to encode the controlled-Z (CZ) magic state, they achieved an encoded-state error equivalent to the square of the error they would have incurred without encoding the initial state.

The team also improved the encoded state’s yield – that is, the number of magic states produced over time – by introducing adaptivity, which is where mid-circuit measurement outputs are fed forward to choose the next operations needed to reach the required magic state. “Basically, we can create more magic states and less junk,” says team leader Benjamin Brown, who holds appointments at IBM Quantum’s T J Watson Research Center in New York, US and at IBM Denmark.  Both these observed advantages, in yield and fidelity, are due to quantum error correction that suppresses the noise accumulated during state preparation.

Verifying the states prepared

The team verified that they had created the intended magic state (and hence the claimed gain in the fidelity) via tomography experiments. In one experiment, the fault-tolerant circuits measure the physical state in such a way that only the information about the logical state is revealed. This is termed logical tomography. Meanwhile, the other experiment, called physical tomography, determines the exact physical state. As might be expected, logical tomography is more efficient and requires fewer resources than physical tomography. For example, in the case of the CZ state, the former required 7 while the latter required 81 measurement circuits.

As well as demonstrating the advantage of quantum error correction, the new work also opens up a new research pathway – invoking adaptivity to prepare high-fidelity, high-yield magic states with further important implications in developing creative fault-tolerant computing techniques. “This experiment sets us on the path to solving one of the most important challenges in quantum computing – running high-fidelity logical gates on error-corrected qubits,” Brown says.

Boson sampler uses atoms rather than photons

28 May 2024 at 18:26

A boson sampler that uses atoms rather than photons has been developed by researchers in the US. The team used its system to determine a complex quantum state more accurately than would be practicable using a conventional (classical) computer. Atoms interact much more strongly than photons, so the researchers believe their system is a promising platform for simulating condensed-matter systems. Looking further in the future, it could also be used for quantum computing.

A defining property of bosons is that an unlimited number can occupy the same state at any one time. This leads to strange behaviour such as the Hong-Ou-Mandel effect, in which two indistinguishable photons striking a 50:50 beamsplitter at the same instant will always come out of the same port. Similar effects involving multiple photons and beamsplitters are extremely difficult for classical computers to model. The best classical algorithms can only manage around 50 bosons.
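The Hong-Ou-Mandel cancellation takes only a few lines to verify (the standard textbook calculation, not specific to this experiment), and the matrix permanents that appear in these amplitudes are exactly what makes many-boson sampling so hard to simulate classically:

```python
# Two indistinguishable photons on a 50:50 beamsplitter. The amplitude for a
# given pair of output ports is a permanent built from the unitary's entries;
# the "one photon in each output" term cancels exactly.
import numpy as np

U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)     # 50:50 beamsplitter unitary

def amplitude(i, j):
    # permanent of the 2x2 submatrix for inputs (0, 1) and outputs (i, j)
    return U[i, 0] * U[j, 1] + U[j, 0] * U[i, 1]

print("both photons in port 0:", amplitude(0, 0) / np.sqrt(2))   # 1/sqrt(2)
print("both photons in port 1:", amplitude(1, 1) / np.sqrt(2))   # -1/sqrt(2)
print("one photon in each port:", amplitude(0, 1))               # 0: always bunched
```

(The extra 1/√2 on the bunched amplitudes is the bosonic normalization for doubly occupied modes; the two bunched outcomes each occur with probability 1/2.)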

Boson sampling machines are proto-quantum computers, or quantum simulators, that utilize the properties of bosons themselves. A specific quantum state is imprinted into the system at the input and the state is measured after a given time. However, photons are easily lost in a system, making it very challenging to achieve a reliable measurement.

Square optical lattice

In the new work, researchers in Adam Kaufman’s group at JILA in Boulder, Colorado implemented a boson sampler using atom optics. They placed 180 strontium-88 atoms into a 48×48 site square optical lattice potential. Key to their success, the researchers used an optical tweezer array to move atoms around. Aaron Young, then Kaufman’s PhD student, explains: “We worked hard in this experiment to engineer the tweezers to really address single lattice sites. Our tweezers are smaller than typical tweezers.”

Once the researchers had placed the atoms appropriately, they turned off the tweezers. They then laser cooled the atoms and imaged the initial quantum state. Next, they reduced the depth of the lattice potential, allowing atoms to tunnel between sites. After allowing the state to evolve for a fixed time, they increased the potential depth again, tightly confining the atoms and thereby allowing them to image the positions of the atoms using photons.

The next step was to test the system, explains Young: “Certification of a boson sampler is believed to be as hard as simulating a boson sampler in the first place.” Since it is not possible to classically simulate a boson sampler of 180 indistinguishable atoms in a reasonable amount of time, it is likewise not possible to check directly whether such a boson sampler produced the correct result. The researchers therefore turned to indirect certification by looking at cases in which the bosons were not indistinguishable. Such a state might arise in an experiment as a result of imperfect cooling, for example.

Distinguishable atoms

“As we make the atoms more and more distinguishable, we go from this problem that’s really hard to simulate closer to the case where you’re doing the one atom problem 180 times,” says Young. “And somewhere in the middle we cross the threshold where it’s once again possible to simulate our problem on a normal computer. We check two things: first, that as we turn this knob, things look well behaved and nothing dramatically goes wrong; second, that as things become sufficiently distinguishable to simulate, the experiment is in agreement with theory.”  The results suggested that the atoms were around 99.5% indistinguishable.

The team now intends to investigate how the system could be used as a platform for reprogrammable quantum logic. “In our system, we’re at this fine-tuned point where the atoms are, to a very good approximation, not interacting with each other, but it’s very easy to turn interactions back on,” says Young. This could allow the simulation of problems in condensed matter physics, for example. Beyond this, it could even provide a route to universal quantum computation. He points out that the optical tweezers can be used to shift the energies of lattice sites. “It turns out that the ability to just shift sites up and down like that gives you access to a universal set of controls,” says Young.

Atomic, molecular and optical physicist Cheng Chin at the University of Chicago is impressed with the research. He says that, thanks to the low loss observed compared with photons, Kaufman’s group has shown that atoms provide “much higher fidelity to the ideal boson sampling [that] the algorithm would require”. He adds, “As far as this specific problem is concerned I think the application of cold atoms is a very remarkable step to demonstrate the advantage of quantum information processing. Perhaps now with Adam’s approach he can control which way the bosons are going and introduce interactions between atoms, which is much, much easier than introducing interactions between photons. It does open a lot of new opportunities beyond what photons can do.”

The research is described in Nature.

Swift quakes and new podcast music inspired by the fine-structure constant

28 May 2024 at 14:37

Whether you’re a Swiftie, a devout metalhead, or a 1980s synth pop aficionado, there is something for every musical taste in this month’s Physics World Stories.

In part one, podcast host Andrew Glester is joined by Jacqueline Caplan-Auerbach, a geophysicist at Western Washington University, US. She has analysed “Swift quakes”, a seismological phenomenon during Taylor Swift’s Eras tour, answering two important questions. Are the quakes triggered by the music or the crowd? And how does their magnitude compare with similar events, like the 2011 “Beast quake” triggered by celebrations at an American football game between the Seattle Seahawks and the New Orleans Saints? It turns out that Swifties (dedicated Taylor Swift fans) are queuing up to share data for geophysics research.

Regular listeners will notice that this month’s episode has a new podcast jingle. In part two, Glester is joined by the song’s creator Philip Moriarty, a physicist and science communicator at the University of Nottingham, UK. Titled 137, the song is inspired by the fine-structure constant, and is packed with cheeky references to this dimensionless constant and the physicists closely associated with it. (Yes, you can expect bongos!) Moriarty reveals even more about the song in his article “H1dd3n variab7es: the fundamental constant on which the new Physics World podcast music is built“, where you can also listen to the tune in full.

The post Swift quakes and new podcast music inspired by the fine-structure constant appeared first on Physics World.



H1dd3n variab7es: the fundamental constant on which the new Physics World podcast music is built

28 May 2024 at 14:37
137 – composed and performed by Philip Moriarty

You may recall Ian Randall’s recent cryptic word search on the theme of quantum physics. As a scanning tunnelling microscopist, I particularly liked the clue “Ill gent, nun in a bad way? It’s barrier breaking (10)”. Sticking with the cryptic quantum theme, I’m now going to consider something else that’s hidden – this time in a piece of music I’ve composed as the soundtrack for Physics World (click the play button above to listen).

It’s a dimensionless constant – a “plain, beautiful number” as my colleague Laurence Eaves once memorably described it. It fascinated and infuriated Wolfgang Pauli, who loved its physical significance and its apparent mystical ramifications (surprising given his famously caustic and cynical demeanour). Richard Feynman, as was his wont, dubbed it “one of the greatest mysteries of physics – a magic number that comes to us with no understanding”.

I’m talking about 137 – the (approximate) inverse of “alpha”, the fine-structure constant. Pauli was so obsessed with 137 that he’d say that, when he died, his first question to the Devil would be: “What is the meaning of the fine-structure constant?” In a staggering coincidence, that terminal moment came in room number 137 of the hospital in which he was being treated for pancreatic cancer.

If you’re wondering why Pauli believed that 137, rather than 42, was the answer to life, the universe and everything, the historian Arthur Miller has written an entire book on the subject. Entitled 137: Jung, Pauli, and the Pursuit of a Scientific Obsession, it explores how psychoanalysis helped Pauli to understand his creative powers and cope with life. Be warned: the numerology and woo quotients are off the scale.

A beautiful number

So when I was recently asked if I’d like to write a physics-inspired jingle and theme tune for the Physics World podcast, that “plain, beautiful” number immediately sprang to mind. What better constant to encode in the notes, chords and arpeggios of a physics jingle than the number that has driven so many physicists to distraction since Arnold Sommerfeld introduced it in 1916?

It’s not the first time I’ve dabbled with fundamental constants as the basis of musical riffs and rhythms. As I described last year in “Shreddinger’s equation”, I once created a musical mashup of quantum physics and heavy metal. But this time I decided to forgo my natural inclination to turn everything up to 11 (and beyond) and instead write a piece of music – 137 – inspired by synthpop music from the 1980s, the decade in which Physics World was born.

At the musical core of 137, which runs to just over four minutes, is a very simple arpeggiated chord comprising the notes D, F and C. These are the first, third and seventh notes, respectively, of a C natural minor scale, if we assign C to the number 0. (Technically, and for those versed in music theory, it’s a D Locrian mode if we use the more conventional approach and label the first note in a scale/mode as 1.)
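To see the 1-3-7 mapping concretely, here is a minimal Python sketch – purely illustrative, nothing from the song’s actual production – that converts zero-indexed degrees of the C natural minor scale into note names, wrapping past the octave so that degree 7 lands back on C:

```python
# Map zero-indexed degrees of the C natural minor scale to note names,
# wrapping past the octave (an illustrative sketch, not production code).
C_NATURAL_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "Bb"]

def note(degree: int) -> str:
    """Zero-indexed scale degree -> note name; each octave up adds a tick."""
    octave_up, idx = divmod(degree, len(C_NATURAL_MINOR))
    return C_NATURAL_MINOR[idx] + "'" * octave_up

print([note(d) for d in (1, 3, 7)])   # ['D', 'F', "C'"] – the 137 motif
```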

Throughout the piece, the 137 motif appears repeatedly on different instruments. Although the primary instrument is the synthesizer, what appear to be synth sounds in 137 are sometimes instead heavily effected guitar harmonics. It’s a nod, if you like, to the standing wave resonances of the quantum-particle-in-a-box model so beloved of undergraduate physicists. Talking of which, the “whooshing” sound – at around 13 seconds in – is the evolution of a superposition state in an infinite potential well converted to sound.
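The article doesn’t spell out how that sonification was made, but the underlying physics is easy to sketch. In this toy version – with every audio choice invented for illustration – an equal superposition of the two lowest states of an infinite square well evolves in time, and the probability density at a fixed point becomes the waveform; because the energies scale as n², the density beats at a frequency proportional to E₂ − E₁:

```python
# Rough sketch of the idea (an illustration, not the actual sound design):
# evolve an equal superposition of the n = 1 and n = 2 states of an infinite
# square well and use the probability density at one point as an audio signal.
import numpy as np

Lw = 1.0                                # well width (arbitrary units)
x0 = 0.3 * Lw                           # fixed observation point in the well
rate, dur = 44100, 2.0                  # audio sample rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / rate)

def psi(n, x):
    """Stationary state of the infinite square well."""
    return np.sqrt(2 / Lw) * np.sin(n * np.pi * x / Lw)

w1 = 2 * np.pi * 220.0                  # map E_1 to 220 Hz (arbitrary choice)
# Phases evolve as exp(-i E_n t / hbar); with E_n proportional to n^2, E_2 = 4 E_1
wave = (psi(1, x0) * np.exp(-1j * w1 * t) +
        psi(2, x0) * np.exp(-1j * 4 * w1 * t)) / np.sqrt(2)
audio = np.abs(wave) ** 2               # density beats at 3 x 220 Hz = 660 Hz
audio /= audio.max()                    # normalize before writing to a WAV file
```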

Heavenly connections

Other incidences of 137 come in the drum pattern that appears for the first time at around 00:45, the tempo of the piece (137 beats per minute, naturally) and the fade-out at the end, which is a heavily processed bass guitar playing natural harmonics that follow the 1-3-7 pattern. I’m a big science-fiction fan and was keen for this to sound like a cryptic encoded message from an alien civilization.

Another sci-fi-inspired effect is the Shepard scale starting at 02:30 – the steadily rising pitch that seemingly never stops rising, extensively exploited by composer Hans Zimmer in his score for the 2014 movie Interstellar. I discuss just how this intriguing audio illusion works in the context of 137 in the May episode of the Physics World Stories podcast, which was the official premiere of the new music.
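For the curious, the textbook recipe for a Shepard tone – not necessarily Moriarty’s exact patch – superposes octave-spaced partials under a fixed spectral envelope and glides them all upward; each partial fades out as it climbs past the top of the envelope while a new one fades in at the bottom, so the pitch seems to rise forever. A minimal sketch:

```python
# Minimal Shepard-glide sketch (the textbook recipe, with invented numbers):
# octave-spaced partials rise together under a fixed log-frequency envelope,
# so each fades out at the top as a new one fades in at the bottom.
import numpy as np

rate, dur = 44100, 10.0
t = np.arange(0, dur, 1 / rate)
f0, n_oct = 27.5, 9                     # envelope spans 9 octaves above 27.5 Hz
glide = (t / dur) % 1.0                 # every partial climbs one octave per loop

signal = np.zeros_like(t)
for k in range(n_oct):
    pos = (k + glide) % n_oct           # partial's position (in octaves), wrapping
    freq = f0 * 2.0 ** pos
    env = np.sin(np.pi * pos / n_oct) ** 2          # quiet at the envelope edges
    phase = 2 * np.pi * np.cumsum(freq) / rate      # integrate f(t) for the glide
    signal += env * np.sin(phase)
signal /= np.abs(signal).max()          # normalized; write out or play as audio
```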

Of course, pedants will point out that the fine-structure constant isn’t exactly the inverse of 137. They’re right. So I’ll let Pauli have the last word. In a joke I once saw posted by the US physicist Chad Orzel, Pauli has died and gone to heaven, where he asks God why the fine-structure constant is only roughly equal to 1/137. “Ah, I’m glad you asked that,” God replies, creating a blackboard and writing equations to derive it.

After about 10 minutes, Pauli says “Oh, THERE’S Your mistake!”

The post H1dd3n variab7es: the fundamental constant on which the new Physics World podcast music is built appeared first on Physics World.


3D printing creates personalized pharmaceuticals

By Tami Freeman
28 May 2024 at 10:10

Personalized pills that release timed doses of medication tailored to an individual patient’s requirements could improve treatment effectiveness and increase patient compliance. With this goal, researchers at the University of Nottingham have used 3D printing of novel soluble inks to fabricate tablets that deliver drugs with a bespoke dose and release profile. The new technique, described in Materials Today Advances, paves the way for scalable batch production of customizable pills.

Additive manufacturing – or 3D printing – provides a means to fabricate structures with controlled drug release profiles that can’t be created using conventional manufacturing methods. In particular, multi-material inkjet 3D printing (MM-IJ3DP) offers promise for precise deposition of multiple components at production scales. When creating personalized pills, however, the restricted choice of processable materials means that most studies have relied on swelling and diffusion mechanisms for drug delivery, limiting the ability to perform timed release of active material.

Instead, Yinfeng He from the university’s Centre for Additive Manufacturing and colleagues are investigating an inkjet-printable material with a dissolution-based release mechanism. They developed a biocompatible water-soluble ink using acrylomorpholine (ACMO), the photo-polymerized form of which provides a model excipient to enable drug release, and co-printed poly-ACMO with another polymer to create personalized pharmaceutical tablets.

“This breakthrough not only highlights the potential of 3D printing in revolutionizing drug delivery, but also opens up new avenues for the development of next-generation personalized medicines,” says He in a press statement.

Drug delivery

To demonstrate the ability of MM-IJ3DP to create complex composites, the researchers 3D printed a series of 8 mm-diameter structures containing poly-ACMO and another functional polymer. In a single manufacturing step, they fabricated six designs: a disc; an annulus; an insoluble annulus with a soluble core; a soluble annulus with an insoluble core; insoluble cubes within a soluble matrix; and soluble cubes in an insoluble matrix. For each tablet, the soluble component dissolved in phosphate-buffered saline, leaving the insoluble component behind.

The team also fabricated various shaped poly-ACMO tablets with different surface areas, including discs, squares, hexagons, pentagons, pentagrams and four-pointed stars. As expected, the dissolution rate increased linearly with increasing surface area. This trait should enable the team to design structures in which the exposed surface area of the soluble material changes over time.
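That linear trend is what a simple surface-controlled dissolution law predicts: if the flux of material per unit exposed area is constant, the mass-loss rate is dm/dt = −kA(t). A toy sketch (rate constant and geometry invented for illustration):

```python
# Toy zero-order dissolution model (an illustration; numbers invented):
# mass-loss rate proportional to the exposed surface area, dm/dt = -k A(t),
# with the area held constant here for simplicity.
k, dt = 0.05, 0.1                       # mg/(min mm^2) and time step (min)

def dissolve(mass_mg, area_mm2, steps=1000):
    history = []
    for _ in range(steps):
        mass_mg = max(0.0, mass_mg - k * area_mm2 * dt)
        history.append(mass_mg)
        if mass_mg == 0.0:
            break
    return history

# Doubling the exposed area doubles the release rate, matching the linear
# surface-area dependence reported above.
print(dissolve(100.0, 50.0)[:3])        # [99.75, 99.5, 99.25]
print(dissolve(100.0, 100.0)[:3])       # [99.5, 99.0, 98.5]
```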

The next step was to incorporate an active pharmaceutical into the tablets. He and colleagues created poly-ACMO and aspirin-loaded (30 wt%) poly-ACMO inks and co-printed them into a 5 mm-diameter pill with a patterned aspirin-loaded region in the centre. In each layer, the inks were deposited sequentially and cured with UV light in between to prevent cross-contamination.

Raman spectroscopy confirmed successful incorporation of the aspirin in the pill, while NMR spectroscopy of the dissolved tablet revealed a total aspirin loading of 6.9 wt% and a drug loading deviation within typical pharmaceutical quality standards. Further tests showed that the aspirin retained an amorphous state in the printed pill and did not recrystallize after ink curing.

Timed doses

Finally, He and colleagues designed 8 mm-diameter tablets with different patterns of insoluble and soluble ink to create a range of release rates. They 3D printed four designs: an aspirin-loaded poly-ACMO tablet with a small insoluble core; a tablet capped top and bottom with insoluble surfaces; a capped and partially walled tablet with the wall at the outer radius; and a capped tablet with the partial wall nearer to the centre.

Controlled release The team designed and printed tablets with four different drug release behaviours. The photos show the residues after the poly-ACMO fully dissolved; scale bar: 4 mm. (Courtesy: CC BY 4.0/Mater. Today Adv. 10.1016/j.mtadv.2024.100493)

Measuring the release of poly-ACMO from tablets immersed in phosphate-buffered saline revealed that design 1 had the fastest release rate (4.07 mg/min). The insoluble barriers in design 2 reduced the release rate (to 2.17 mg/min), while the outer wall of design 3 reduced it further (to 0.98 mg/min). Design 4 showed an initial release comparable to design 2, then transitioned to a significantly lower release rate (0.70 mg/min) as the poly-ACMO front moved through the inner insoluble wall.

As the poly-ACMO dissolves, the tablets release aspirin with the same profiles: fast release for design 1, slow for designs 2 and 3, and a two-stage release profile for design 4. These findings demonstrate how co-printing soluble and insoluble structures can be used to programme drug release and introduce complexities, such as stepped drug release profiles, that can’t be achieved by traditional manufacturing methods.

The researchers note that the design flexibility of MM-IJ3DP holds potential for incorporating multiple pharmaceuticals in a single tablet, as well as manufacture of different tablet designs in a single production batch of 56 pills.

These unique capabilities could advance the field of personalized pharmaceuticals, the researchers say, enabling control of dose (by varying the quantity of drug-containing soluble material), release rate (governed by the surface area) and the time of release (controlled by the spatial distribution of the soluble component).

The team is now working to expand its database of formulations and design pills that maximize the technique’s capabilities. “We are currently investigating the use of AI technology to understand the drug release profiles of different 3D-printed pills,” He tells Physics World. “This approach will offer us a new toolset to help us design pills that meet the desired drug release performance specified by pharmacists for patients.”

The post 3D printing creates personalized pharmaceuticals appeared first on Physics World.


Ursula Le Guin: the pioneering author we should thank for popularizing Schrödinger’s cat

27 May 2024 at 12:00

The world’s most famous cat is everywhere. It appears on cartoons, T-shirts, board games, puzzle boxes and glow-in-the-dark coffee cups. There’s even a gin named after the celebrity animal. Boasting “lovely aromas of fresh mint and lemon zest”, with notes of basil, blueberries, cardamom and lemon-thyme – and “a strong backbone of juniper” – it’s yours for just £42.95 for 500 ml.

You know whom I’m talking about. But despite its current ubiquity, the fictitious animal only really entered wider public consciousness after the US science-fiction and fantasy writer Ursula K Le Guin published a short story called “Schrödinger’s cat” exactly 50 years ago. Le Guin, who died in 2018 at the age of 88, was a widely admired writer, who produced more than 20 novels and over 100 short stories.

Schrödinger originally invented the cat image as a gag. If true believers in quantum mechanics are right that the microworld’s uncertainties are dispelled only when we observe it, Schrödinger felt, this must also sometimes happen in the macroworld – and that’s ridiculous. Writing in a paper published in 1935 in the German-language journal Naturwissenschaften (23 807), he presented his famous cat-in-a-box image to show why such a notion is foolish.

For a while, few paid attention. According to an “Ngram” search of Google Books carried out by Steven French, a philosopher of science at the University of Leeds in the UK, there were no citations of the phrase “Schrödinger’s cat” in the literature for almost 20 years. As French describes in his 2023 book A Phenomenological Approach to Quantum Mechanics, the first reference appeared in a footnote to an essay by the philosopher Paul Feyerabend in the 1957 book Observation and Interpretation in the Philosophy of Physics edited by Stephan Körner.

The American philosopher and logician Hilary Putnam (1926–2016) first learned of Schrödinger’s image around 1960. “I always assumed the physics community was familiar with the idea,” Putnam later recalled, but he found few who were. In his 1965 paper “A philosopher looks at quantum mechanics” Putnam called it “absurd” to say that human observers determine what exists. But he was unable to refute the idea.

Invoking Schrödinger’s image, Putnam found that we are indeed unable to say “that the cat is either alive or dead, or for that matter that the cat is even a cat, as long as no-one is looking”. Putnam had another worry too. Quantum formalism required that if he looked at a quantum event, he himself would be thrown into a superposition. Putnam concluded that “no satisfactory interpretation of quantum mechanics exists today”.

Enter Le Guin

It was to be another decade before the cat and its bizarre implications jumped into popular culture. In 1974 Le Guin published The Dispossessed, an award-winning book about a physicist whose new, relativistic theory of time draws him into the politics of the pacifist-anarchist society in which he lives. According to Julie Phillips, who is writing a biography of Le Guin, she read up on relativity theory to make her character’s “theory of simultaneity” sound plausible.

“My best guess,” Phillips wrote in an e-mail to me, “is that she discovered Schrödinger’s cat while doing research for the novel.” Le Guin, it appears, seems to have read Putnam’s article in about 1972. “The Cat & the apparatus exist, & will be in State 0 or State 1, IF somebody looks,” Le Guin wrote in a note to herself. “But if he doesn’t look, we can’t say they’re in State 0, or State 1, or in fact exist at all.”

Unlike Putnam, Le Guin was entranced by the implied uncertainties and appreciated the fantastic nature of Schrödinger’s image. “If we can say nothing about the definite values of micro-observables, when not measuring them, except that they exist, then their existence depends on our observation & measurement.”

In “Schrödinger’s cat”, which Le Guin finished in September 1972 but didn’t publish for another two years, an unnamed narrator senses that “things appear to be coming to some sort of climax”. A yellow cat appears. The narrator grieves but doesn’t know why. A musical note makes her want to cry but she doesn’t know for what, and thinks the cat knows but is unable to tell her. She then remembers Michelangelo’s painting The Last Judgment, of a man dragged down to hell who clamps a hand over one eye in horror but keeps the other eye open and clear. The doorbell rings and in walks Rover, a dog.

Speculative genius Ursula K Le Guin in 1995. (CC BY-SA 2.0 Marian Wood Kolisch, Oregon State University)

Rover pulls a box out of his knapsack with a quantum-mechanical gadget that will either shoot or not shoot the cat once it gets inside and the lid is closed. Before we open the lid, Rover says, the cat is neither dead nor alive. “So it is beautifully demonstrated that if you desire certainty, any certainty, you must create it yourself.”

The narrator is not sure. Don’t we ourselves get “included in the system”; aren’t we still inside a yet bigger box? She’s reminded of the Greek legend of Pandora, who opens her box and lets out all its evil contents. She and Rover open the lid, but find the box empty.

The house roof flies off “just like the lid of a box” and “the unconscionable, inordinate light of the stars” shines down. The narrator finally identifies the note, whose tone is now much clearer once the stars are visible. The narrator wonders whether the cat knows what it was they lost.

Le Guin’s story was soon followed by other fictional and non-fictional treatments of quantum mechanics in which Schrödinger’s cat is a major figure. Examples include the Schrödinger’s Cat Trilogy (Robert Anton Wilson, 1979); Schrödinger’s Baby: A Novel (H R McGregor, 1999); Schrödinger’s Ball (Adam Felber, 2006); and Blueprints of the Afterlife (Ryan Boudinot, 2012). There have also been a number of short stories, including F Gwynplaine MacIntyre’s “Schrödinger’s cat-sitter” from 2001.

The critical point

Phillips called Le Guin’s “Schrödinger’s cat” a “slight, playful story with an undercurrent of sorrow”, and warned me not to overthink it. “You could think of it as ‘a fantasy writer looks at quantum mechanics’,” she explained, adding that Le Guin wrote in her journal that fantasy as a genre and physics as a science are approaches to reality that reject common sense. “I think,” Phillips concluded, “she may have been playing around with her sense, at that moment, that physics was another way of expressing the fantastic.”

If so, Le Guin unerringly found the right image.

The post Ursula Le Guin: the pioneering author we should thank for popularizing Schrödinger’s cat appeared first on Physics World.


The ORCA-Quest quantitative CMOS camera: a core building block for quantum systems

24 May 2024 at 14:52

There are incremental innovations (think continuous improvement of existing product lines) and then there are disruptive innovations (think platform technologies that rewrite the rulebook). A case study in the latter is the ORCA®-Quest, a scientific camera that has been turning heads since its commercial release in 2021, opening up cutting-edge imaging applications in disciplines as diverse as quantum computing, atomic physics, synchrotron science, Raman spectroscopy and super-resolution microscopy.

Developed by Hamamatsu Photonics, a specialist manufacturer of high-sensitivity, low-noise cameras for fundamental and applied research, the ORCA®-Quest is a quantitative CMOS (qCMOS) camera with unique “photon-number resolving” functionality – determining the number of photons incident on each pixel (in a 9.4 megapixel array) by accurately measuring the number of photoelectrons generated on a per-pixel basis.

Innovative design, advanced fabrication

It's this granular ability to count photoelectrons that underpins the ORCA-Quest’s game-changing performance as an imaging system. “To realize photon-number resolving, we had to make some modifications to the pixel structure itself,” says Brad Coyle, OEM camera product manager at Hamamatsu Photonics.

Specifically, that means fabricating deep-trench structures in the semiconductor layers between each pixel to ensure that a photon that impacts on a given pixel registers exclusively on that pixel. “The trench structure suppresses the flow of photoelectrons between neighbouring pixels,” Coyle adds, “so we get really high fidelity and linearity at the level of the individual pixels.”

Equally important in this regard is the work of the Hamamatsu development team to reduce the noise floor of the ORCA-Quest. Conventional scientific CMOS (sCMOS) cameras, for example, come with low readout noise – but it is still larger than the signal from a single photoelectron, which makes it difficult to count photoelectrons.

Brad Coyle “We get really high fidelity and linearity at the level of the individual pixels.” (Courtesy: Hamamatsu Photonics)

“What we found, theoretically, is that in order to quantify the number of photoelectrons generated per pixel, we have to reduce the noise floor below 0.3 electrons,” notes Coyle. In terms of practical implementation, this necessitated a redesign of the detection and readout circuitry of the ORCA-Quest to achieve 0.27 electron RMS read noise (while ensuring stable performance versus temperature and time as well as individual calibration and real-time correction of each pixel value).
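The 0.3-electron figure follows from simple statistics: if the read noise is Gaussian with RMS width σ (in electrons), rounding the measured charge to the nearest whole electron miscounts whenever the noise exceeds 0.5 e⁻. A back-of-envelope check (our own, not taken from Hamamatsu’s specifications):

```python
# Back-of-envelope check (not Hamamatsu's spec sheet): with Gaussian read
# noise of sigma electrons RMS, rounding the measured charge to the nearest
# integer miscounts whenever the noise exceeds 0.5 e-, which happens with
# probability erfc(0.5 / (sigma * sqrt(2))).
from math import erfc, sqrt

for sigma in (0.5, 0.3, 0.27, 0.15):
    p_err = erfc(0.5 / (sigma * sqrt(2)))
    print(f"sigma = {sigma:.2f} e-  ->  miscount probability ~ {p_err:.1%}")
# sigma = 0.50 e- -> ~31.7%; 0.30 e- -> ~9.6%; 0.27 e- -> ~6.4%; 0.15 e- -> ~0.1%
```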

Another notable feature of the ORCA-Quest is its high-speed readout – in other words, how many pixels the camera can read out per second (number of pixels × frame rate). In standard scan mode, for example, the ORCA-Quest offers a higher data rate (>1100 megapixel/s) and lower readout noise than conventional sCMOS cameras (approx. 400 megapixel/s). In ultraquiet scan mode, meanwhile, the camera offers photon-number resolving with a data rate roughly 10× faster (approx. 250 megapixel/s) than single-photon counting with electron-multiplying CCD cameras.
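Converting those throughput figures into frame rates is simple arithmetic, as this quick calculation for standard scan mode shows (rounded, for illustration):

```python
# Throughput = pixels per frame x frames per second; inverting the quoted
# standard-scan figure gives the implied full-frame rate (rounded arithmetic).
pixels = 4096 * 2304                  # full 9.4-megapixel array
fps = 1100e6 / pixels                 # >1100 megapixel/s in standard scan mode
print(f"standard scan: ~{fps:.0f} full frames/s")   # ~117 frames/s
```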

“With the full 9.4 megapixel array [4096 × 2304] and high-speed readout, users are able to image a large number of objects with exacting temporal requirements,” notes Coyle. “The high-bandwidth interface means it is also possible to extract real-time feedback from the camera system – which is mandatory for emerging R&D applications in quantum computing and quantum communications.”

Quantum imaging, quantum insights

Among the early-adopters of the ORCA-Quest in the quantum science community is Dolev Bluvstein, a PhD student and team member of Mikhail Lukin’s Quantum Optics Laboratory at Harvard University (Cambridge, MA). Within a diverse programme of theoretical and applied research, Bluvstein and colleagues are working on aspects of quantum computing and quantum simulation using arrays of individually trapped rubidium-87 (Rb) atoms.

At a schematic level, individual atoms are trapped independently in vacuum by optical tweezers, such that highly focused laser beams enable real-time control of each atomic position in space. “Once the atoms are prepared in their programmed positions and pumped into their ground electronic states,” says Bluvstein, “we introduce interactions among them by using lasers to excite them to their Rydberg states [in which an electron is excited into a very large orbital state].”

In this way, the Rb atoms are aligned one-by-one in an array to be utilized as qubits for quantum computing operations, while the qubit states are determined by observing the laser-induced fluorescence (or absence of fluorescence) from each atom. It’s here that the ORCA-Quest provides a core building block in Bluvstein’s experimental set-up, ensuring spatial diagnostics of the entire atom array as well as quantum-state detection for each atomic qubit – all while combining ultralow-noise measurements with high-speed readout (a frame every 100 μs).

“The camera is the only way to see where the Rb atoms are and to extract qubit information rapidly out of the quantum system,” says Bluvstein. “In a sense, the camera is the main input/output interface between the classical and quantum worlds.”

At the end of last year, Bluvstein and colleagues published a landmark paper in Nature, detailing work on a quantum processor architecture based on reconfigurable atom arrays. The laboratory testbed – which was developed as part of Bluvstein’s PhD work within a wider collaboration involving scientists at NIST/University of Maryland, Massachusetts Institute of Technology (Cambridge, MA) and QuEra Computing (Cambridge, MA) – features high-fidelity entangling gates, local qubit control, mid-circuit readout and any-to-any connectivity for hundreds of atomic qubits.

By grouping atomic qubits together to form error-corrected logical qubits, the team is exploring early fault-tolerant quantum computation with up to dozens of logical qubits and hundreds of logical entangling gates. The ultimate end-game: a neutral-atom, error-corrected quantum computer – at scale – with on the order of 10 million atom qubits imaged every 100 μs on a 24/7 basis while logging their individual quantum states.

“The ultrafast readout speed of the ORCA-Quest over large regions of interest is mission-critical for our research,” explains Bluvstein. “What’s also impressive is that when we installed the camera in our experimental set-up, the signal-to-noise of our atomic imaging improved by a factor of two. All of which advances the rate at which we can do this classical/quantum interfacing and, ultimately, the path to large-scale quantum computation.”

Continuous improvement

Disruptive innovation, inevitably, begets incremental innovation. Earlier this year, Hamamatsu Photonics unveiled the ORCA-Quest 2, offering end-users enhanced functionality along a couple of key coordinates. For starters, there’s a 5× improvement in frame rate when the camera is operated in ultraquiet scan mode for photon-number resolving (i.e. 25 frames/s for the full 4096 × 2304 array). The ORCA-Quest 2 also offers higher quantum efficiency in the UV region (around 280–400 nm) thanks to an advanced antireflection coating on the sensor window (with no change to the efficiency in the visible and near-IR regions).

It's early days, however, and the search is on for new applications and market segments for the ORCA-Quest 2. “We’re looking to hear from partners who may have a unique application for this camera,” concludes Coyle. “This is still a new technology and we’re trying to learn where the best fit is going to be across basic research and industry R&D.”

Further reading

D Bluvstein et al. 2023 Logical quantum processor based on reconfigurable atom arrays Nature 626 58

The post The ORCA-Quest quantitative CMOS camera: a core building block for quantum systems appeared first on Physics World.


Wild songbirds respond to mathematically synthesized song

By Tami Freeman
24 May 2024 at 14:00

Understanding how birds react to each other’s songs can shed light on their behaviour and communication in the wild. But such studies require a way to generate realistic-sounding synthetic birdsong. To achieve this, a team from the University of Buenos Aires has used mathematical modelling to synthesize a fake birdsong that’s credible enough to prompt wild birds to respond.

The researchers – Roberto Bistel, Ana Amador and Gabriel Mindlin – studied the rufous-collared sparrow, a highly territorial songbird. The sparrow’s song comprises a brief theme composed of 2–4 notes, followed by a trill, with the entire song usually lasting about 2 s. During the breeding season, the male sparrow sings roughly three times each minute, with each bird singing its own unique theme. If the sparrow hears song from another bird of the same species, it typically responds by increasing its rate of singing.

To create realistic artificial birdsong, the team used a mathematical model based on the physics of avian sound production. The model describes the dynamics of the two pairs of syringeal labia (located at the junctions between the bronchi and the trachea) that modulate airflow and create the sound waves. These sound waves are then filtered as they pass through the bird’s trachea, oro-oesophageal cavity and beak.

By defining the muscle activity and subsequent filtering during birdsong production, the low-dimensional model can generate sounds with temporal and acoustic properties that emulate a given song. For their study, the researchers used the model to generate 11 synthetic themes based on previous in-the-field recordings from rufous-collared sparrows.
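The study’s actual dynamical equations for the labia are not reproduced here, but the source–filter architecture it describes can be sketched schematically – a time-varying frequency “gesture” driving a sound source, followed by a resonant filter standing in for the vocal tract. Every waveform detail below is invented for illustration:

```python
# Schematic source-filter sketch (NOT the authors' labial-dynamics equations;
# gestures, frequencies and filter values are invented for illustration).
# A time-varying frequency "gesture" drives the source, and a resonant filter
# stands in for the filtering by the trachea, oro-oesophageal cavity and beak.
import numpy as np
from scipy.signal import iirpeak, lfilter

rate = 44100

def tone(f_start, f_end, dur):
    """A single swept note driven by a linear frequency gesture."""
    t = np.arange(0, dur, 1 / rate)
    freq = np.linspace(f_start, f_end, t.size)
    return np.sin(2 * np.pi * np.cumsum(freq) / rate)

# Theme of three downward-swept notes, then a rapid trill of short syllables
theme = np.concatenate([tone(6000, 4000, 0.15) for _ in range(3)])
trill = np.concatenate([tone(5000, 3000, 0.05) for _ in range(8)])
song = np.concatenate([theme, trill])

b, a = iirpeak(4500, Q=5, fs=rate)      # crude stand-in for a tract resonance
song = lfilter(b, a, song)
song /= np.abs(song).max()              # normalized; ready to write as audio
```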

In the field

Lab-based tests of songs generated by the model showed that the synthetic songs generated similar neural responses to those evoked by the bird's own song. But to fully assess the degree of realism, the team moved on to studying the behaviour of birds in the wild. This involved playing both synthetic and real birdsong to male sparrows and recording their behavioural response, classified as the number of songs that the auditory stimulus elicited.

Working in a roughly 0.4 km² area within Parque Pereyra Iraola (a UNESCO Biosphere reserve) in Buenos Aires, Argentina, the researchers first identified the locations of birds with moderate singing activity, studying a total of 26 individuals in 15 sites. During each 13 min test, they recorded natural sound for 2 min, repeatedly played a song for 1 min, and then continued the recording for 10 min. In total, the birds were exposed to 256 tests: 90 with real songs, 81 with synthetic songs and 85 with songs from three different bird species.

To quantify the birds’ responses, the team processed the recorded audio files with a noise reduction filter and a band-pass filter to focus only on the sparrow’s frequency range and then computed corresponding spectrograms. Two independent observers inspected the spectrograms to count the number of songs per minute produced by each individual. As every bird sings a unique song, the team could confirm each bird’s identity in the recordings.

At the start of each test, the birds sang three to four songs per minute. While the recording was played, the singing rate increased significantly. Once the playback stopped, the original singing rate gradually returned. The researchers note that there was no significant difference between the birds’ responses to real and synthetic songs in the field. Songs from other species, however, failed to evoke a response.

Bistel, Amador and Mindlin say that the model provides a valuable tool for investigating a wide range of biological questions. “This work paves the way for manipulating auditory stimuli in an interpretable way, allowing to address a series of questions that can greatly benefit from the flexibility in the generation of acoustic stimuli permitted by our physical model,” they write.

One potential application could be to test the performance hypothesis, which proposes that two attributes of singing (the frequency of syllable production and the spectral range of syllables) are reliable indicators of the quality of the singer. Another option is to study how wild birds can learn via playback, through the use of automatic audio players.

“The success of our approach instils confidence in the hypotheses underpinning the model and provides a valuable tool for investigating a wide range of biological questions,” the researchers conclude. They plan to continue their fieldwork, further evaluating behavioural responses to synthetic songs generated by the model.

“On one hand, we plan to modify certain parameters in the model to identify which acoustic characteristics make a song more intimidating to other males, and which make it more attractive to the females,” Mindlin tells Physics World. “We hope that the interpretability of the model will allow us to understand the anatomical or physiological features underlying these acoustic properties.”

The study is described in Physical Review E.

The post Wild songbirds respond to mathematically synthesized song appeared first on Physics World.


‘Hidden’ citations conceal the true impact of scientific research

24 May 2024 at 11:00

Papers introducing concepts that have since become common knowledge are often under-cited by researchers, skewing those articles’ true impact. That’s the conclusion of a new study using machine learning to identify “foundational” work in science that is often not properly cited. Being able to count such hidden citations could provide more accurate bibliometric measures of impact, the study says (PNAS Nexus 3 pgae155).

The number of times a paper is cited is widely seen as a marker of its scientific credibility. But some concepts or ideas are so well known that no one cites them. It would be unusual for an article on, say, general relativity to refer to Albert Einstein’s original 1915 paper on the subject. Xiangyi Meng, a physicist at Northwestern University in the US, who led the new study, calls such non-references “hidden citations”.

In their work, Meng and colleagues used a machine-learning model to analyse one million papers on the arXiv preprint server. It detected catchphrases that suggest specific discoveries and then linked each to at least one foundational paper. The researchers identified 343 topics in physics that accumulate hidden citations, each of which has at least one catchphrase.

The researchers found that the ratio of hidden citations – i.e. citations that should have been made but were not – to actual citations for foundational papers was, on average, 0.98:1, suggesting that papers usually acquire hidden citations at the same rate as explicit citations.

Some publications, however, acquire much higher rates of hidden citations. Alan Guth’s 1981 paper that introduced cosmological inflation theory, for example, has 8.8 times more hidden citations than actual citations.

In another example, their model predicts that the phrase “quantum discord” – a quantity that relates two subsystems of a quantum state – should in principle be accompanied by a reference to a 2001 paper by Harold Ollivier and Wojciech Zurek. The algorithm found that hidden citations account for 34.6% of all detectable credit for the “quantum discord” paper.
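In spirit, the detection step reduces to scanning papers for a topic’s catchphrase and checking whether the foundational reference appears in the bibliography. A toy version with invented data (a sketch of the idea, not the authors’ machine-learning pipeline):

```python
# Toy version of the detection idea (a sketch, not the authors' pipeline):
# a paper that uses a topic's catchphrase without citing the foundational
# reference contributes a hidden citation; one that cites it is explicit.
papers = [
    {"text": "we compute the quantum discord of the state", "refs": {"ollivier2001"}},
    {"text": "quantum discord captures nonclassical correlations", "refs": set()},
    {"text": "an unrelated measurement protocol", "refs": set()},
]
catchphrase, foundational = "quantum discord", "ollivier2001"

explicit = hidden = 0
for paper in papers:
    if catchphrase in paper["text"]:
        if foundational in paper["refs"]:
            explicit += 1
        else:
            hidden += 1
print(f"hidden:explicit = {hidden}:{explicit}")   # 1:1, close to the 0.98:1 average
```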

Foundational papers that acquire hidden citations are nevertheless still highly cited, with an average of 434 citations, compared with an average of 1.4 citations for all physics papers.

Meng adds that when hidden citations are counted, the order of the top 100 cited papers in physics changes. Many publications drop down the pecking order, such as Juan Maldacena’s 1999 work on anti-de Sitter/conformal field theory: top for explicit citations, it falls to second in the revised charts, mostly because it accrues relatively few hidden citations compared with other foundational papers.

A few papers with high numbers of hidden citations show significant increases. Guth’s 1981 paper, for example, jumps from eighth place to top spot, overtaking Maldacena’s paper. “Without hidden citations, citation ranks don’t really mean anything,” Meng adds.

Community acceptance

To explore the impact of hidden citations on authors, the researchers used Microsoft Academic Graph’s “author saliency” metric. It judges the academic impact of scientists using a range of measures, such as the connectivity of articles, authors and journals, as well as an author’s citation count.

The team found that authors with more hidden citations also have a higher author saliency, with this effect particularly notable for those with lower numbers of citations. In other words, while these authors have credibility and reputation, citation counts are not fully capturing the true impact of their work.

“Authors with more hidden citations actually have a higher impact, they appear to be more reputational than those authors with fewer hidden citations,” says Meng. “If you have hidden citations, it means that your concept, your work has been widely accepted by the community."

Meng explains that hidden citations are also inevitable, given that it is difficult for researchers to cite every paper or concept used in their work – which is why, he says, it is important that they are counted in some way.

The post ‘Hidden’ citations conceal the true impact of scientific research appeared first on Physics World.


A passion for building instrumentation, and a hint of dark matter in dwarf galaxies

23 May 2024 at 15:30

In this episode of the Physics World Weekly podcast we chat with Lily Ellis-Gibbings, who is a higher scientist at the UK’s National Physical Laboratory. She talks about her passion for building scientific instrumentation for fields as diverse as radiotherapy, astrochemistry and mass spectrometry. Ellis-Gibbings also shares her top tips for physics students who aspire to careers in instrumentation.

Also in this episode, the astrophysicist Alex McDaniel talks about a new study of dwarf galaxies. While at Clemson University in the US, McDaniel and colleagues observed evidence that dark-matter particles in the galaxies are annihilating to create gamma-rays. While well below the statistical threshold to be called a discovery, the observation provides a tantalizing hint about the nature of dark matter.

 


 

This podcast is sponsored by Thyracont Vacuum Instruments, which provides all types of vacuum metrology for a broad variety of applications ranging from laboratory research to coating and the semiconductor industry. Explore their sensors, handheld vacuum meters, digital and analogue transducers as well as vacuum accessories and components at thyracont-vacuum.com.

The post A passion for building instrumentation, and a hint of dark matter in dwarf galaxies appeared first on Physics World.


Bruno Touschek: the physicist who escaped the Nazi Holocaust to build particle colliders

23 May 2024 at 12:00
Man of many talents Bruno Touschek pictured in 1955, a decade after escaping death in Germany. By this time he was a successful theorist who had already proposed building the world’s first electron–positron collider. (Courtesy: CC-BY-3.0: https://cds.cern.ch/record/135949)

One sunny day in May 1966, I entered the grounds of the Frascati National Laboratory near Rome for the first time. I had just graduated with a degree in physics from the University of Rome and had a fellowship to work in Frascati’s theoretical-physics group. It was led by Bruno Touschek, who six years earlier had famously proposed building a new kind of particle accelerator that was to become a prototype for many future devices around the world.

His idea did not involve smashing particles into fixed targets or colliding electrons with each other. Instead, Touschek wanted to show you could store enough antimatter in the form of positrons and collide them head-on with electrons in a circular device, with the resulting annihilation revealing new secrets of the particle world. His dream became reality in 1963 when the Anello di Accumulazione (AdA), or “storage ring”, came online.

AdA was such an extraordinary accomplishment that similar electron–positron colliders were soon built elsewhere too. Now, in 1966, Touschek was overseeing construction of ADONE – an even more powerful and beautiful machine – that would collide electrons and positrons with a centre-of-mass energy higher than any other accelerator in the world. I can still remember the emotion I felt when Touschek took me to a large hall, in a round building across the Via Enrico Fermi, where an enormous crane was putting ADONE’s first magnets into position.

I was to spend the next year working in Touschek’s research group but neither I – nor most of his colleagues at the University of Rome – were aware of his dark and dramatic past. To most students, the Austrian-born Touschek was best known for the wonderfully clear lectures he gave on statistical mechanics, which he delivered carefully and precisely, using delightful turns of phrase and in a beautiful, neat script.

For me and many others, Touschek was a genius. Totally confident in his abilities as a physicist, he wasn’t arrogant but didn’t suffer fools gladly and liked his students to be smart and hard working. There was an aura about him that he richly deserved, having brought the AdA storage ring to fruition. The true story about Touschek’s turbulent early life only emerged years later, following his death in 1978.

I was shocked when I heard the news. It soon emerged that Touschek’s death, at the age of 57, had been caused by liver failure brought on by many years of excessive drinking. His addiction issues were well known to those around him, but it was not something that any of us really questioned. The reasons for them had only started to surface in the months before his death as Touschek began to open up about his early life to his friend and mentor, the physicist Edoardo Amaldi.

In the years that followed, much more was to come to light from his friends and colleagues, who spoke out in various articles, books, lectures and video documentaries. But the fullest story of his remarkable life only emerged in 2009 after the historian Luisa Bonolis and I came across a cache of letters that Touschek had written to his father (see “Bruno Touschek’s family letters” box).

Bruno Touschek’s family letters

Dramatic times Touschek’s drawing of a bombed building from a letter to his father in 1943. (Courtesy: Touschek family)

In the spring of 2009, the science historian Luisa Bonolis and I visited Bruno Touschek’s widow, Elspeth Yonge, who lived in a small villa perched in the hills outside Rome. Bonolis knew from an earlier visit that Touschek had written many letters to his father and asked if we could see them. Yonge came back with a large cardboard box. Amongst various photographs and yellowed newspaper cuttings, was a folder of thin typewritten letters.

The letters, which are currently in the possession of the Touschek family, are written in German and had been carefully dated and collected by Bruno’s father. Passed back to Bruno after his father’s death in 1971, these letters describe Bruno’s years in Germany in gripping detail, including his role in the betatron "death-ray" project, his imprisonment and escape from death in 1945.

Not yet published in full, the letters formed the basis of my book Bruno Touschek’s Extraordinary Journey: From Death Rays to Antimatter (Springer 2022) and the contents of this Physics World article. Bonolis, who is currently based at the Max Planck Institute for the History of Science in Berlin, Germany, has also written a paper with a full list of references to many of the articles, books, videos and lectures about Touschek’s life (arXiv:2111.00625).

The shocking truth was that despite being Jewish, Touschek had been made to work for the Nazis during the Second World War. Commandeered to help build a scientific device that could emit military-grade “death-rays”, his was an incredible story that is described in detail in my book Bruno Touschek’s Extraordinary Journey (Springer 2022). Touschek, who was later imprisoned and sent to a concentration camp, displayed immense courage under the worst of circumstances. Despite those traumas, he was to make vital fundamental contributions to particle physics, which he carried out with determination and vision.

Tragic times

Born on 3 February 1921 in Vienna, Touschek was the only son of the Jewish artist Camilla Weltmann and Franz Xaver Touschek – a Catholic officer in the Austrian army who had fought in Italy during the First World War. It was to be a childhood marred by tragedy. His mother died from the after-effects of "Spanish flu" when he was nine and then, in 1934, his maternal uncle killed himself following Hitler’s rise to power.

Life worsened when Austria was annexed by Nazi Germany in early 1938. Touschek was a pupil at the prestigious Piaristengymnasium school and was due to take his final exams the following year. Although his mother had converted to Catholicism to marry Bruno’s father, Touschek was regarded as a Jew and forbidden from sitting the exams with his fellow students. He had to switch to the Schottengymnasium – a private, Catholic school – where he passed his exams in February 1939.

With Europe heading towards war, Touschek now decided to go to Rome, where his maternal aunt Ada lived. There he attended a course on mathematical physics at the University of Rome, which was the first sign of his growing interest in theoretical physics. But Touschek’s time in the Italian capital was spent with “more enthusiasm than profit”, as Amaldi later wrote in a 1981 CERN report, The Bruno Touschek legacy.

Discouraged from continuing to study in Italy by the antisemitic racial laws enforced by Mussolini, Touschek instead applied to do chemistry at the University of Manchester in the UK. The reason for switching subjects isn’t clear but Touschek was probably drawn by the fact that Chaim Weizmann – later Israel’s first president – had been a lecturer in Manchester’s chemistry department. The city also had a strong Jewish community, which must have offered the prospect of a safe haven.

Wisdom and warfare Touschek’s initial interest in theoretical physics was formed at the University of Rome, where he studied as Europe headed towards conflict in 1939. (Courtesy: iStock/rarrarorro)

But for reasons that remain unknown, Touschek did not – or could not – take up his offer of a place in Britain. Instead, in September 1939, just as war was breaking out, he began studying physics back home at the University of Vienna, where he excelled in its famous school of theoretical physics. His professors there included Hans Thirring, best known for developing the “Lense–Thirring” frame-dragging effect of general relativity.

Touschek was aware of the dangers of staying in Vienna but, with the war now on, his options were limited. Despite his mixed Jewish/Catholic background, the Nazi authorities deemed Touschek to be a “first-class” [i.e. fully] non-Aryan and, at the end of his first year, he was suspended from the university. In January 1941 he was expelled entirely. Touschek’s chances of continuing to live and study in Vienna were disappearing fast.

To the heart of Germany

But Touschek then found protection and encouragement from the eminent German physicist Arnold Sommerfeld. Based at the University of Munich, Sommerfeld, then 72, was still an influential figure in the German physics community despite having been ostracized by the Nazi government for not complying with antisemitic policies. He had also refused to adhere to the notion of Deutsche Physik, such as denouncing relativity (which was deemed “un-German”).

Touschek had got in contact with Sommerfeld after writing to him to point out some errors he’d spotted in one of his books. The ensuing correspondence saw Touschek travel to Munich in November 1941 with Paul Urban, a physicist from Vienna who was giving a seminar there and who'd mentored Touschek following his suspension and then expulsion from the university. Won over by Touschek’s courage and determination, Sommerfeld crafted a plan for him to move to the University of Hamburg.

One of his former students would help Touschek continue his studies, with financial support from another ex-student, who now ran an electronics firm in the city. Moving to Germany might seem bizarre, but Hamburg was not as dangerous as Vienna, where his precarious status as a Mischlinge (mixed-race person) was well known. In any case, Austria was now effectively part of Germany and emigration – even to Italy – was not an option. Touschek simply hoped he could carry on with his physics, unnoticed.

Crucially for Touschek, there were scientists in Germany trying to protect their Jewish colleagues by hiring them for jobs in firms that were building equipment or devices for the Nazi military. Those scientists could claim that their Jewish friends’ activities were indispensable to the success of the war effort. Such a ruse would keep Jewish scientists away from the attention of the Gestapo and prevent them from being sent to concentration camps.

That at least was the hope. As it turns out, the Gestapo was fully aware of the employment of Jewish scientists. The Nazi authorities tolerated the practice, knowing that as soon as the projects were completed, those scientists would be arrested and dispensed with. Unaware at the time of those dangers, Touschek packed his bags and headed for Germany.

Berlin and the betatron

After visiting Sommerfeld in Munich and receiving his “blessings” for the journey, Touschek arrived in Hamburg on 1 March 1942. Immediately he contacted the company and scientific colleagues Sommerfeld had recommended, before looking for somewhere to live. Money was tight and his studies progressed, albeit slowly. Touschek was then distraught to learn that his grandmother had been taken to the Theresienstadt concentration camp where she died.

Depressed, and with Hamburg and other cities starting to be fire-bombed by Allied forces, in November 1942 Touschek was on the move once again, this time to Berlin. Closer than ever to the dark heart of the Nazi regime, he got a job with Löwe Opta, an electronics firm with links to the military. At Löwe, Touschek came to hear of a project to build a 15 MeV betatron – a machine that could accelerate electrons to high energies.

It was being commissioned by the Reich Ministry of Aviation, which had sought the help of the Norwegian physicist Rolf Widerøe, who in 1928 had invented the principle by which such accelerators operate. The Nazis hoped the device would be powerful enough to create “death-rays” – beams of electromagnetic radiation that could strike down enemy aircraft in military operations.

Devices to produce death-rays had first been proposed in the 1920s by several scientists – supposedly including even Guglielmo Marconi and Nikola Tesla – and betatrons had later been suggested as a possible source. In 1941 the US physicist Donald Kerst built the first betatron as a research tool at the University of Illinois – and Widerøe wanted his betatron to be as good, if not better. In their hearts, though, every member of Widerøe’s betatron project knew it was unlikely that a betatron could ever really be put to military use.

Touschek formally joined Widerøe’s team at the end of 1943, where his knowledge of theoretical physics made him a vital member of the project. Aware that he was under surveillance by the Gestapo, Touschek wrote to his father to say he had signed his own “death contract”. In early 1944 he was summoned to the all-powerful Todt Organization, which senior Nazi engineer Fritz Todt had set up to build Germany’s concentration camps and provide industry with forced labour.

Luckily, his colleagues successfully appealed his call-up to the organization, insisting that Touschek was indispensable to the betatron. Despite further summons following – the last being in November 1944, when he was asked to appear with “blankets and warm underwear” – in each case Touschek managed to remain on the project. In one case his colleagues even appealed directly to General Erhard Milch, a close associate of armaments minister Albert Speer.

A march towards death

The betatron was completed at the end of 1944. But as 1945 dawned, it started to become obvious that Germany was going to lose the war. Orders came for the country to save important infrastructure and facilities from the advancing Allied armies. The betatron – a prized device – could still be of use and a plan was hatched to move it from the factory in Hamburg where it had been built to Wrist, a small village about 30 km north of the city.

Touschek and Widerøe completed the task on 15 March 1945. The following day, Touschek returned to Hamburg, arriving at his flat at midnight. At 7 a.m. the next morning, he was awoken by the Gestapo, who took Touschek away to the infamous Fuhlsbüttel prison, where he was kept for four weeks, initially in such miserable conditions that he thought of suicide.

Colleagues from the betatron project came and briefly managed to improve Touschek’s situation, even bringing him some of his physics books. He was promised that a release would come soon. It did not. Instead on 15 April 1945, all 200 Fuhlsbüttel prisoners – Touschek among them – were ordered to march to the Kiel concentration camp, roughly 100 km north of Hamburg.

Unwell and weighed down by the physics books that he was carrying with him, Touschek fainted and collapsed on the road near Langenhorn on the outskirts of Hamburg. An SS officer accompanying the prisoners fired at Touschek, shooting him twice as he fell in a trench at the roadside. Blood pouring from his head, the officer and other prisoners continued their march, leaving Touschek for dead.

His wounds fortunately proved superficial. Touschek regained consciousness and was taken to hospital and then another prison, from which a betatron colleague had him released at the end of April 1945. Touschek would later tell his close friends this remarkable tale, which Amaldi also described in a letter from Widerøe who had visited him in prison. Lengthy descriptions appear as well in two letters Touschek wrote to his father in June and October 1945 (see Eur. Phys. J. H 36 1 for English translations).

Touschek never properly explained why he was arrested, offering different explanations to different people in the years that followed. In my view, he simply would not – or could not – account for his involvement with a classified project financed by the Minister of Aviation of the Reich. His work for the Nazi regime was not something that Touschek could ever easily come to terms with or forget.

Göttingen, Glasgow and Rome

After the war, the Allies permitted German science to restart under the guidance of Werner Heisenberg at the University of Göttingen, provided it was directed only for peaceful purposes. But with the Manhattan atomic-bomb project making particle accelerators a useful source of nuclear isotopes, Touschek's experience with Widerøe's betatron caught the eye of the British, who occupied the Hamburg region. Recognising his mix of theoretical and practical know-how, a plan was drawn up to bring him to the UK.

Peaceful progress After the Second World War Touschek (left) moved to the UK, gaining his PhD from the University of Glasgow in 1949, where he extended his growing knowledge of particle accelerators. He is seen here with Samuel Curran, a colleague from the newly established synchrotron group at Glasgow. (Courtesy: Touschek family)

Aware that Touschek’s formal education was lacking, he was first allowed to obtain his diploma (master’s) in physics at Göttingen, where he did a thesis on the theory of the betatron. In 1947, after a further six months in Heisenberg’s research group, Touschek moved to the University of Glasgow, where he did a PhD supervised by John Gunn, with Rudolph Peierls as external advisor. He then spent a further three years there as a Nuffield lecturer.

Touschek’s five years in Glasgow were fruitful both scientifically and personally. He extended his knowledge of particle accelerators by following the construction of the Glasgow 350 MeV synchrotron and advising UK groups in Birmingham and elsewhere who were building their own devices. On the theoretical physics side, he came to know Max Born, who had found refuge at the University of Edinburgh after leaving Germany in 1933.

Touschek collaborated with him on the second edition of Born’s famous Atomic Physics book and discussed various physics problems with him, sometimes even explaining Heisenberg’s newest papers. In this period Touschek began to work on the so-called “infrared catastrophe”. Involving low-frequency photons emitted by accelerated charged particles, it was a phenomenon that was later to be relevant to all high-energy particle accelerators.

Grounds for optimism Bruno Touschek proposed and successfully built AdA at the Frascati National Laboratory near Rome, where the original device is now on display to visitors. (Courtesy: A Srivastava, INFN)

His credentials as a physicist now firmly established, in 1952 Touschek accepted a job offer from Amaldi as a researcher at the University of Rome. Returning to the city he had visited many times before the war – and where his aunt Ada had built a villa in the Frascati hills – Touschek found a vibrant intellectual atmosphere in the university’s physics institute. It played host to numerous distinguished international visitors including the Nobel laureates Patrick Blackett and Wolfgang Pauli.

With the war now firmly in the past, numerous national and international physics projects were starting up. One was CERN, the European particle-accelerator centre near Geneva, which Amaldi strongly supported, serving as its first secretary-general. Rome was also home to two significant new Italian institutions – the National Institute for Nuclear Physics (INFN) and the Frascati lab – both of which were to play an important role in Touschek’s future.

Particle accelerators were fast becoming a fundamental research tool and were being used to discover a whole “zoo” of new particles. Touschek became interested in their symmetry properties and started studying neutrinos, proposing chiral symmetry transformations. At Rome, he worked closely with Wolfgang Pauli, who was trying to prove the charge–parity–time (CPT) theorem, according to which the laws of physics are unchanged if, all at once, particles are replaced by their antiparticles, spatial co-ordinates are reflected and time is reversed.
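In symbols – given here only as a schematic textbook statement, not notation drawn from Pauli’s own papers – the combined antiunitary operation \Theta = \mathcal{C}\mathcal{P}\mathcal{T} acts on a complex scalar field as

\Theta \, \phi(t, \vec{x}) \, \Theta^{-1} = \phi^{\dagger}(-t, -\vec{x})

and the theorem asserts that any local, Lorentz-invariant quantum field theory with a hermitian Hamiltonian is left unchanged by this combined operation, even though C, P and T can each be violated individually.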

Bruno Touschek and Italo Federico Quercia (left) and the AdA collider (right)
Collision course Left: Touschek in 1966 with Italo Federico Quercia, director of the Frascati National Laboratory, overseeing the construction of the ADONE electron–positron collider. ADONE was a higher-energy evolution of the AdA collider (pictured right), which Touschek had spearheaded and which was to become a prototype for many future devices around the world. (Courtesy: INFN–LNF)

Touschek’s understanding of CPT led him to realize that electron–positron colliders, which accelerate matter and antimatter along the same orbit but in opposite directions, would be vital for the future of physics. Convinced by the CPT theorem that electrons and positrons could be smashed into – and annihilate – each other, in 1960 he started leading a team of Frascati scientists to build a prototype. This was AdA, which began operations in February 1961.
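To get a feel for why head-on collisions matter, compare the centre-of-mass energy available in the two geometries. The short Python sketch below uses standard relativistic kinematics; AdA’s beam energy of 250 MeV is assumed here purely for illustration.

import math

M_E = 0.000511  # electron mass, GeV/c^2

def sqrt_s_collider(e_beam):
    # Two equal-energy beams colliding head-on: sqrt(s) = 2E
    return 2.0 * e_beam

def sqrt_s_fixed_target(e_beam, m_target):
    # Beam hitting a stationary target: sqrt(s) = sqrt(2*E*m + m^2),
    # neglecting the beam particle's own mass (valid when E >> m)
    return math.sqrt(2.0 * e_beam * m_target + m_target ** 2)

e_beam = 0.250  # assumed AdA beam energy, GeV (i.e. 250 MeV)
print(f"Collider:     sqrt(s) = {1000 * sqrt_s_collider(e_beam):.0f} MeV")
print(f"Fixed target: sqrt(s) = {1000 * sqrt_s_fixed_target(e_beam, M_E):.1f} MeV")

Run as written, this gives roughly 500 MeV against about 16 MeV: in the collider geometry essentially all of the beam energy can go into creating new states – exactly the gain Touschek was after.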

To prove its feasibility as a research device, the 1.3 m-diameter machine was transported to the Orsay laboratory near Paris, where the first electron–positron collisions were observed by a team of French and Italian researchers in late 1963. Key to AdA’s success was the exceptional cadre of young theoretical physicists in Rome and the technical and scientific staff both in Frascati and Orsay. Although it never recorded an annihilation event or produced novel particles, AdA was a testbed for a new breed of machines.

Lasting legacy

Bruno Touschek in 1970 with his cocker spaniel Lola
Down time Touschek relaxing at his home in Rome in 1970 with his cocker spaniel Lola. (Courtesy: Touschek family)

Touschek’s visionary thinking soon inspired other large physics labs in France, the Soviet Union and the US to build similar electron–positron colliders, opening the door to the discovery of new particles. AdA thus laid the foundations for the Standard Model of particle physics and changed the face of physics itself. Touschek lived to see some of these great events, such as multihadron production at ADONE and the discovery of the charm quark.

In 1977 he spent a year’s sabbatical at CERN, where the Super Proton–Antiproton Collider and the Large Electron–Positron collider (LEP) were to be built. Not a fan of big international enterprises, which he felt were becoming too bureaucratic and complex, Touschek nevertheless enjoyed keen discussions with Carlo Rubbia about stochastic cooling – a technique for shrinking the spread of a particle beam, making it possible to accumulate a stock of antiprotons that could be annihilated with protons to discover the carriers of the weak force.

However, in February 1978 Touschek’s health began to decline rapidly. After a number of hospitalizations, he asked CERN’s then director-general, Léon Van Hove, for a car to drive him to Innsbruck in Austria – the country of his birth and a place he had loved all his life. Touschek, who died on 25 May 1978, never got to witness the renaissance of particle physics – the experimental discovery of the W and Z bosons, the top quark and the Higgs boson – in the years and decades that followed.

But his legacy as a visionary scientist, who showed wisdom, stamina and perseverance – despite all the odds – lives on.


European Space Agency releases first batch of spectacular science images from its Euclid mission

23 May 2024 at 12:00

The European Space Agency (ESA) has released the first science results from its €1.4bn Euclid mission. Today, the agency published five spectacular images of the cosmos along with 10 scientific papers as part of Euclid’s early release observations.

Euclid was launched in July 2023 and currently sits at Lagrange Point 2 – a gravitational balance point some 1.5 million kilometres beyond the Earth’s orbit around the Sun. The Euclid Consortium comprises some 2600 members from more than 15 countries.
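That 1.5-million-kilometre figure can be checked with a standard approximation: for a small mass m orbiting a much larger mass M at distance a, the L1 and L2 points lie roughly a(m/3M)^(1/3) from the smaller body. Here is a quick back-of-the-envelope check in Python, using textbook values rather than anything Euclid-specific:

# Leading-order distance from Earth to the Sun–Earth L2 point
M_SUN   = 1.989e30   # mass of the Sun, kg
M_EARTH = 5.972e24   # mass of the Earth, kg
AU      = 1.496e11   # mean Earth–Sun distance, m

r_l2 = AU * (M_EARTH / (3.0 * M_SUN)) ** (1.0 / 3.0)
print(f"Earth–L2 distance ~ {r_l2 / 1e9:.2f} million km")

Running this gives about 1.50 million km, matching the quoted figure.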

Euclid has a 1.2 m-diameter telescope, a camera and a spectrometer that it uses to plot a 3D map of the distribution of more than two billion galaxies. The images it takes are about four times as sharp as those from current ground-based telescopes.

In November, following weeks of calibrations, Euclid released its first full-colour images of the cosmos. Then in early 2024 it began science operations, studying 17 astronomical objects including distant galaxies and nearby dust clouds.

The images and resulting science findings released today were produced with only a single day of observations. They reveal 11 million objects in visible light and a further five million in the infrared. Some of the new discoveries include free-floating newborn planets, newly identified extragalactic star clusters and new low-mass dwarf galaxies.

The five images include a breathtaking view of Messier 78 – a star nursery enveloped in interstellar dust that lies some 1300 light-years away in the Orion constellation. This marks the first time that the region has been imaged at such width and depth.

“It’s no exaggeration to say that the results we’re seeing from Euclid are unprecedented,” says ESA science director Carole Mundell. “Euclid’s first images, published in November, clearly illustrated the telescope’s vast potential to explore the dark Universe, and this second batch is no different.”

The other four images released today are shown below.

Galaxy cluster Abell 2390

Galaxy cluster Abell 2390
Galaxy cluster Abell 2390 (courtesy: ESA/Euclid/Euclid Consortium/NASA, image processing by J-C Cuillandre (CEA Paris-Saclay), G Anselmi; CC BY-SA 3.0 IGO)

This image of galaxy cluster Abell 2390, which lies 2.7 billion light-years away in the Pegasus constellation, reveals more than 50,000 galaxies. This giant conglomeration of galaxies contains a huge amount of mass, much of it in the form of dark matter. This makes the cluster an ideal place to study the dark universe.

Galaxy cluster Abell 2764

Galaxy cluster Abell 2764
Galaxy cluster Abell 2764 (courtesy: ESA/Euclid/Euclid Consortium/NASA, image processing by J-C Cuillandre (CEA Paris-Saclay), G Anselmi; CC BY-SA 3.0 IGO)

This image shows the galaxy cluster Abell 2764 (top right), which lies a billion light-years away in the direction of the Phoenix constellation. It comprises hundreds of galaxies within a vast halo of dark matter. Also seen here is a very bright foreground star, known as V*BP-Phoenicis, which lies within our own galaxy and is almost bright enough to be seen with the naked eye.

Galaxy NGC 6744

Galaxy NGC 6744
Galaxy NGC 6744 (courtesy: ESA/Euclid/Euclid Consortium/NASA, image processing by J-C Cuillandre (CEA Paris-Saclay), G Anselmi; CC BY-SA 3.0 IGO)

NGC 6744, which lies some 30 million light-years away, is a typical example of the kind of galaxy that is currently forming most of the stars in our local Universe. Euclid’s large field of view covers the entire galaxy, capturing not only its spiral structure on larger scales but also smaller details, such as lanes of dust emerging as “spurs” from the spiral arms. The dataset will allow Euclid to identify clusters of old stars and search for new dwarf galaxies.

Dorado galaxy group

Dorado galaxy group
Dorado galaxy group (courtesy: ESA/Euclid/Euclid Consortium/NASA, image processing by J-C Cuillandre (CEA Paris-Saclay), G Anselmi; CC BY-SA 3.0 IGO)

Euclid captures galaxies evolving and merging “in action” in the Dorado galaxy group, which lies some 62 million light-years away. As Dorado is much younger than many other galaxy groups, several of its constituent galaxies are still forming stars and interacting with each other. Scientists are using these images to study how galaxies evolve and collide over time.

Tackling the big questions

Over the coming six years, Euclid will continue to study the large-scale structure of the universe, creating the largest cosmic 3D map ever made, with the aim of understanding how the universe evolved following the Big Bang.

"This space telescope intends to tackle the biggest open questions in cosmology,” notes Euclid project scientist Valeria Pettorino. “And these early observations clearly demonstrate that Euclid is more than up to the task.”

Euclid’s next data release will focus on its primary science objectives and is currently slated for March 2025, with a wider data release scheduled for June 2026.
