Physics World — 27 September 2024

Positronium gas is laser-cooled to one degree above absolute zero

27 September 2024, 15:42
Matter and antimatter Artist’s impression of positronium being instantaneously cooled in a vacuum by a series of laser pulses with rapidly varying wavelengths. (Courtesy: 2024 Yoshioka et al./CC-BY-ND)

Researchers at the University of Tokyo have published a paper in Nature describing a new laser technique capable of cooling a gas of positronium atoms to temperatures as low as 1 K. Written by Kosuke Yoshioka and colleagues, the paper follows a publication earlier this year from the AEgIS team at CERN, which described how a different laser technique was used to cool positronium to 170 K.

Positronium comprises a single electron bound to its antimatter counterpart, the positron. Although electrons and positrons ultimately annihilate each other, they can briefly bind together to form an exotic atom. Because electrons and positrons are fundamental particles that are nearly point-like, positronium provides a very simple atomic system for experimental study. Indeed, this simplicity means that precision studies of positronium could reveal new physics beyond the Standard Model.

Quantum electrodynamics

One area of interest is the precise measurement of the energy required to excite positronium from its ground state to its first excited state. Such measurements could enable more rigorous experimental tests of quantum electrodynamics (QED). While QED has been confirmed to extraordinary precision, any tiny deviations could reveal new physics.

An important barrier to making precision measurements is the inherent motion of positronium atoms. “This large randomness of motion in positronium is caused by its short lifetime of 142 ns, combined with its small mass − 1000 times lighter than a hydrogen atom,” Yoshioka explains. “This makes precise studies challenging.”

In 1988, two researchers at Lawrence Livermore National Laboratory in the US published a theoretical exploration of how the challenge could be overcome by using laser cooling to slow positronium atoms to very low speeds. Laser cooling is routinely used to cool conventional atoms: the atoms absorb photons from a laser beam and re-emit them in random directions, and the net momentum transfer gradually slows them down.

Chirped pulse train

Building on this early work, Yoshioka’s team has developed a new laser system that is ideal for cooling positronium. Yoshioka explains that in the Tokyo setup, “the laser emits a chirped pulse train, with the frequency increasing at 500 GHz/μs, and lasting 100 ns. Unlike previous demonstrations, our approach is optimized to cool positronium to ultralow velocities.”

In a chirped pulse, the frequency of the laser light increases over the duration of the pulse. This allows the cooling system to respond to the slowing of the atoms: as the atoms decelerate, their Doppler shift changes, and the chirp keeps the photon absorption on resonance.
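The idea can be captured in a deliberately crude one-dimensional simulation. The sketch below uses arbitrary units and invented parameters (not the Tokyo team's actual laser settings): two counter-propagating beams scatter photons off a thermal gas while the common detuning is chirped towards resonance, so that fast atoms are addressed first and the resonance then "tracks" them down to low speed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D chirped Doppler cooling (arbitrary units, invented parameters)
hbar_k = 1.0      # photon recoil momentum
mass = 50.0       # atomic mass
gamma = 2.0       # transition linewidth
rate0 = 2.0       # peak scattering rate per time step

def scatter_rate(delta_eff):
    """Lorentzian scattering rate versus effective detuning."""
    return rate0 * gamma**2 / (gamma**2 + 4.0 * delta_eff**2)

v = rng.normal(0.0, 4.0, size=4000)   # initial thermal velocities
v0_spread = v.std()

delta = -25.0                          # start far below resonance
for _ in range(1200):
    delta += 0.02                      # the chirp: sweep towards resonance
    # Doppler shifts (k = 1) set each beam's effective detuning
    n_minus = rng.poisson(scatter_rate(delta + v))   # beam kicking in -x
    n_plus = rng.poisson(scatter_rate(delta - v))    # beam kicking in +x
    net_kick = (n_plus - n_minus) * hbar_k
    # spontaneous re-emission in random directions adds recoil diffusion
    recoil = rng.normal(0.0, 1.0, v.size) * hbar_k * np.sqrt(n_minus + n_plus)
    v += (net_kick + recoil) / mass

print(v0_spread, v.std())   # the velocity spread shrinks
```

Because an atom's deceleration can keep pace with the chirp, each atom is dragged along with the sweeping resonance rather than slipping out of it, which is what lets the scheme reach very low velocity spreads in a short time.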

Using this technique, Yoshioka’s team successfully cooled positronium atoms to temperatures around 1 K, all within just 100 ns. “This temperature is significantly lower than previously achieved, and simulations suggested that an even lower temperature in the 10 mK regime could be realized via a coherent mechanism,” Yoshioka says. Although the team’s current approach is still some distance from achieving this “recoil limit” temperature, the success of their initial demonstration has given them confidence that further improvements could bring them closer to this goal.

“This breakthrough could potentially lead to stringent tests of particle physics theories and investigations into matter-antimatter asymmetry,” Yoshioka predicts. “That might allow us to uncover major mysteries in physics, such as the reason why antimatter is almost absent in our universe.”

The post Positronium gas is laser-cooled to one degree above absolute zero appeared first on Physics World.


Knowledge grows step-by-step despite the exponential growth of papers, finds study

27 September 2024, 14:02

Scientific knowledge is growing at a linear rate despite an exponential increase in publications. That’s according to a study by physicists in China and the US, who say their finding points to a decline in overall scientific productivity. The study therefore contradicts the notion that productivity and knowledge grow hand in hand – but adds weight to the view that the rate of scientific discovery may be slowing or that “information fatigue” and the vast number of papers can drown out new discoveries.

Defining knowledge is complex, but it can be thought of as a network of interconnected beliefs and information. To measure it, the authors previously created a knowledge quantification index (KQI). This tool uses various scientific impact metrics to examine the network structures created by publications and their citations and quantifies how well publications reduce the uncertainty of the network, and thus knowledge.

The researchers claim the tool’s effectiveness has been validated through multiple approaches, including analysing the impact of work by Nobel laureates.

In the latest study, published on arXiv, the team analysed 213 million scientific papers, published between 1800 and 2020, as well as 7.6 million patents filed between 1976 and 2020. Using the data, they built annual snapshots of citation networks, which they then scrutinised with the KQI to observe changes in knowledge over time.

The researchers – based at Shanghai Jiao Tong University in Shanghai, the University of Minnesota in the US and the Institute of Geographic Sciences and Natural Resources Research in Beijing – found that while the number of publications has been increasing exponentially, knowledge has not.

Instead, their KQI suggests that knowledge has been growing in a linear fashion. Different scientific disciplines display varying rates of knowledge growth, but they all follow the same linear pattern. Patent growth was found to be much slower than publication growth, but it too shows linear growth in the KQI.

According to the authors, the analysis indicates “no significant change in the rate of human knowledge acquisition”, suggesting that our understanding of the world has been progressing at a steady pace.

If scientific productivity is defined as the number of papers required to grow knowledge, this signals a significant decline in productivity, the authors claim.
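The arithmetic behind that claim is easy to make concrete. In this back-of-the-envelope sketch (all numbers invented purely for illustration), an exponentially growing publication count is paired with a linearly growing knowledge index, and the number of papers "spent" per unit of new knowledge rises without bound:

```python
import numpy as np

# Invented series: exponential publication growth vs linear knowledge growth
years = np.arange(1950, 2021)
papers = 1e4 * np.exp(0.05 * (years - years[0]))   # cumulative papers
knowledge = 2.0 * (years - years[0]) + 100.0       # linear KQI-style index

# papers published per unit of knowledge gained, year on year
dpapers = np.diff(papers)
dknowledge = np.diff(knowledge)
papers_per_knowledge = dpapers / dknowledge

# productivity (knowledge gained per paper) falls steadily over time
print(papers_per_knowledge[0], papers_per_knowledge[-1])
```

Under this definition, the "cost" of each unit of knowledge itself grows exponentially, which is the sense in which the authors describe a significant productivity decline.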

The analysis also revealed inflection points associated with new discoveries, major breakthroughs and other important developments, with knowledge growing at different linear rates before and after.

Because growth rates change suddenly at these inflection points, they can create the illusion of exponential knowledge growth – which, the authors suggest, may have led previous studies to conclude that knowledge grows exponentially.

Research focus

“Research has shown that the disruptiveness of individual publications – a rough indicator of knowledge growth – has been declining over recent decades,” says Xiangyi Meng, a physicist at Northwestern University in the US, who works in network science but was not involved in the research. “This suggests that the rate of knowledge growth must be slower than the exponential rise in the number of publications.”

Meng adds, however, that the linear growth finding is “surprising” and “somewhat pessimistic” – and that further analysis is needed to confirm if knowledge growth is indeed linear or whether it “more likely, follows a near-linear polynomial pattern, considering that human civilization is accelerating on a much larger scale”.

Due to the significant variation in the quality of scientific publications, Meng says that article growth may “not be a reliable denominator for measuring scientific efficiency”. Instead, he suggests that analysing research funding and how it is allocated and evolves over time might be a better focus.

The post Knowledge grows step-by-step despite the exponential growth of papers, finds study appeared first on Physics World.


Field work – the physics of sheep, from phase transitions to collective motion

26 September 2024, 14:23

You’re probably familiar with the old joke about a physicist who, when asked to use science to help a dairy farmer, begins by approximating a spherical cow in a vacuum. But maybe it’s time to challenge this satire on how physics-based models can absurdly over-simplify systems as complex as farm animals. Sure, if you want to understand how a cow or a sheep works, approximating those creatures as spheres might not be such a good idea. But if you want to understand a herd or a flock, you can learn a lot by reducing individual animals to mere particles – if not spheres, then at least ovoids (or bovoids; see what I did there?).

By taking that approach, researchers over the past few years have not only shed new insight on the behaviour of sheep flocks but also begun to explain how shepherds do what they do – and might even be able to offer them new tips about controlling their flocks. Welcome to the emerging science of sheep physics.

“Boids” of a feather

Physics-based models of the group dynamics of living organisms go back a long way. In 1987 Craig Reynolds, a software engineer with the California-based computer company Symbolics, wrote an algorithm to try to mimic the flocking of birds. By watching blackbirds flock in a local cemetery, Reynolds intuited that each bird responds to the motions of its immediate neighbours according to some simple rules.

His simulated birds, which he called “boids” (a fusion of bird and droid), would each match their speed and orientation to those of others nearby, and would avoid collisions as if there were a repulsive force between them. Those rules alone were enough to generate group movements resembling the striking flocks or “murmurations” of real-life blackbirds and starlings, which swoop and fly together in seemingly perfect unison. Reynolds’ algorithms were adapted for film animations such as the herd of wildebeest in The Lion King.

Murmuration of starlings
Birds of a feather Physicists have been studying the collective motion of flocks of birds, such as groups of starlings – known as murmurations – that are seemingly governed by rules of physics. (Courtesy: iStock/georgeclerk)
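Reynolds' three rules are compact enough to restate in a few lines of code. The sketch below (all gains and radii are my own illustrative guesses, not Reynolds' original values) nudges each boid's velocity with alignment, cohesion and collision-avoidance terms, and checks that the group ends up flying more coherently than it started:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal 2D "boids": alignment, cohesion, separation (illustrative values)
N = 80
pos = rng.uniform(0, 10, (N, 2))
vel = rng.normal(0, 1, (N, 2))

def polar_order(v):
    """1 means all boids fly in the same direction; ~0 means disorder."""
    headings = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.linalg.norm(headings.mean(0))

order_start = polar_order(vel)

for _ in range(300):
    d = pos[:, None, :] - pos[None, :, :]      # d[i, j] = pos[i] - pos[j]
    r = np.linalg.norm(d, axis=-1) + 1e-12
    np.fill_diagonal(r, np.inf)                # a boid is not its own neighbour
    near = r < 2.0
    n_near = np.maximum(near.sum(1), 1)[:, None]

    # rule 1: steer towards the average heading of nearby boids
    align = (near[..., None] * vel[None]).sum(1) / n_near - vel
    # rule 2: steer towards the centre of nearby boids
    cohere = -(near[..., None] * d).sum(1) / n_near
    # rule 3: push away from boids that are too close
    separate = (((r < 0.5) / r**2)[..., None] * d / r[..., None]).sum(1)

    vel += 0.2 * align + 0.02 * cohere + 0.1 * separate
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > 2.0, vel * 2.0 / speed, vel)   # cap the speed
    pos += 0.1 * vel

print(order_start, polar_order(vel))   # the flock becomes more ordered
```

No individual is told where the group should go; the collective motion is entirely emergent from the local rules, which is the point Reynolds was making.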

Over the next two or three decades, these models were modified and extended by other researchers, including the future Nobel-prize-winning physicist Giorgio Parisi, to study collective motions of organisms ranging from birds to schooling fish and swarming bacteria. Those studies fed into the emerging science of active matter, in which particles – which could be simple colloids – move under their own propulsion. In the late 1990s physicist Tamás Vicsek and his student Andras Czirók, at Eötvös University in Budapest, revealed analogies between the collective movements of such self-propelled particles and the reorientation of magnetic spins in regular arrays, which also “feel” and respond to what their neighbours are doing (Phys. Rev. Lett. 82 209; J. Phys. A: Math. Gen. 30 1375).

In particular, the group motion can undergo abrupt phase transitions – global shifts in the pattern of behaviour, analogous to how matter can switch to a bulk magnetized state – as the factors governing individual motion, such as average velocity and strength of interactions, are varied. In this way, the collective movements can be summarized in phase diagrams, like those depicting the gaseous, liquid and solid states of matter as variables such as temperature and density are changed.
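A Vicsek-style model makes this phase transition easy to see numerically. In the sketch below (a minimal version with parameters chosen for illustration), each particle moves at constant speed and adopts the mean heading of its neighbours plus angular noise; sweeping the noise amplitude takes the system from an ordered, flocking phase to a disordered one:

```python
import numpy as np

rng = np.random.default_rng(3)

def polar_order(noise, N=300, L=5.0, R=1.0, v0=0.1, steps=400):
    """Run a minimal Vicsek-style model; return the final polar order."""
    pos = rng.uniform(0, L, (N, 2))
    theta = rng.uniform(-np.pi, np.pi, N)
    for _ in range(steps):
        # neighbours within radius R, with periodic boundaries (self included)
        d = pos[:, None, :] - pos[None, :, :]
        d -= L * np.round(d / L)
        near = (d**2).sum(-1) < R**2
        # adopt the mean heading of the neighbourhood, plus angular noise
        mean_sin = (near * np.sin(theta)[None]).sum(1)
        mean_cos = (near * np.cos(theta)[None]).sum(1)
        theta = np.arctan2(mean_sin, mean_cos)
        theta += noise * rng.uniform(-np.pi, np.pi, N)
        pos = (pos + v0 * np.column_stack([np.cos(theta), np.sin(theta)])) % L
    # order parameter: magnitude of the mean heading vector
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

low, high = polar_order(0.05), polar_order(0.9)
print(low, high)   # ordered (flocking) phase vs disordered phase
```

The order parameter plays the same role as the magnetization of a spin system, which is why the collective behaviour can be summarized in phase diagrams of the kind described above.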

Models like these have now been used to explore the dynamics not just of animals and bacteria, but also of road traffic and human pedestrians. They can predict the kinds of complex behaviours seen in the real world, such as stop-and-start waves in traffic congestion or the switch to a crowd panic state. And yet the way they represent the individual agents seems – for humans anyway – almost insultingly simple, as if we are nothing but featureless particles propelled by blind forces.

Follow the leader

If these models work for humans, you might imagine they’d be fine for sheep too – which, let’s face it, seem behaviourally and psychologically rather unsophisticated compared with us. But if that’s how you think of sheep, you’ve probably never had to shepherd them. Sheep are decidedly idiosyncratic particles.

“Why should birds, fish or sheep behave like magnetic spins?” asks Fernando Peruani of the University of Cergy Paris. “As physicists we may want that, but animals may have a different opinion.” To understand how flocks of sheep actually behave, Peruani and his colleagues first looked at the available data, and then tried to work out how to describe and explain the behaviours that they saw.

1 Are sheep like magnetic spins?

Sheep walking in a line
(Diagram courtesy: Nat. Phys. 18 1402. Photo courtesy: iStock/scottyh)

In a magnetic material, magnetic spins interact to promote their mutual alignment (or anti-alignment, depending on the material). In the model of collective sheep motion devised by Fernando Peruani of the University of Cergy Paris and colleagues, each sheep is similarly assumed to move in a direction determined by interactions with all the others, which depend on their distance apart and their relative angles of orientation. The model predicts that the sheep will fall into loose alignment and move in a line behind a leader, taking a more or less sinuous path over the terrain.

For one thing, says Peruani, “real flocks are not continuously on the move. Animals have to eat, rest, find new feeding areas and so on”. No existing model of collective animal motion could accommodate such intermittent switching between stationary and mobile phases. What’s more, bird murmurations don’t seem to involve any specific individual guiding the collective behaviour, but some animal groups do exhibit a hierarchy of roles.

Elephants, zebras and forest ponies, for example, tend to move in lines such that the animal at the front has a special status. An advantage of such hierarchies is that the groups can respond quickly to decisions made by the leaders, rather than having to come to some consensus within the whole group. On the other hand, it means the group is acting on less information than would be available by pooling that of everyone.

To develop their model of collective sheep behaviour, Peruani and colleagues took a minimalistic approach of watching tiny groups of Merino Arles sheep that consisted of “flocks” of just two to four individuals who were free to move around a large field. They found that the groups spend most of their time grazing but would every so often wander off collectively in a line, following the individual at the front (Nat. Phys. 18 1494).

They also saw that any member of the group is equally likely to take the lead in each of these excursions, selected seemingly at random. In other words, as George Orwell famously suggested for certain pigs, all sheep are equal but some are (temporarily) more equal than others. Peruani and colleagues suspected that this switching of leaders allows some information pooling without forcing the group to be constantly negotiating a decision.

The researchers then devised a simple model of the process in which each individual has some probability of switching from the grazing to the moving state and vice versa – rather like the transition probability for emission of a photon from an excited atom. The empirical data suggested that this probability depends on the group size, with the likelihood getting smaller as the group gets bigger. Once an individual sheep has triggered the onset of the “walking phase”, the others follow to maintain group cohesion.

In their model, each individual feels an attractive, cohesive force towards the others and, when moving, tends to align its orientation and velocity with those of its neighbour(s). Peruani and colleagues showed that the model produces episodic switching between a clustered “grazing mode” and collective motion in a line (figure 1). They could also quantify information exchange between the simulated sheep, and found that probabilistic swapping of the leader role does indeed enable the information available to each individual to be pooled efficiently between all.
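As a cartoon of that intermittent switching, the sketch below treats the whole flock as a two-state system – grazing or walking in a line – in which any one sheep can trigger a transition and the rest follow. The group-level trigger probability is modelled, as a pure assumption on my part, as a per-sheep rate divided by group size; the rates themselves are invented, not fitted to the Merino data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-state caricature of a flock: "grazing" vs "walking in a line".
# Assumed form: group-level switching probability p0 / n_sheep, so that
# bigger groups switch phase less often (rates invented for illustration).
def count_switches(n_sheep, p0=0.04, steps=20000):
    walking = False
    switches = 0
    p_switch = p0 / n_sheep
    for _ in range(steps):
        if rng.random() < p_switch:
            walking = not walking   # one sheep triggers; the rest follow
            switches += 1
    return switches

small, large = count_switches(2), count_switches(4)
print(small, large)   # the smaller group switches phase more often
```

This is the same qualitative picture as the photon-emission analogy in the article: each transition is a random event whose probability depends on the state and size of the group.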

Although the group size here was tiny, the team has video footage of large flocks of sheep adopting the same follow-my-leader formation, albeit in multiple lines at once. They are now conducting a range of experiments to get a better understanding of the behavioural rules – for example, using sirens to look at how sheep respond to external stimuli and studying herds composed of sheep of different ages (and thus proclivities) to probe the effects of variability.

The team is also investigating whether individual sheep trained to move between two points can “seed” that behaviour in an entire flock. But such experiments aren’t easy, Peruani says, because it’s hard to recruit shepherds. In Europe, they tend to live in isolation on low wages, and so aren’t the most forthcoming of scientific collaborators.

The good shepherd

Of course, shepherds don’t traditionally rely on trained sheep to move their flocks. Instead, they use sheepdogs that are trained for many months before being put to work in the field. If you’ve ever watched a sheepdog in action, it’s obvious they do an amazingly complex job – and surely one that physics can’t say much about? Yet mechanical engineer Lakshminarayanan Mahadevan at Harvard University in the US says that the sheepdog’s task is basically an exercise in control theory: finding a trajectory that will guide the flock to a particular destination efficiently and accurately.

Mahadevan and colleagues found that even this phenomenon can be described using a relatively simple model (arXiv:2211.04352). From watching YouTube videos of sheepdogs in action, he figured there were two key factors governing the response of the sheep. “Sheep like to stay together,” he says – the flock has cohesion. And second, sheep don’t like sheepdogs – there is repulsion between sheep and dog. “Is that enough – cohesion plus repulsion?” Mahadevan wondered.

Sheepdogs and a flock of sheep
Let’s stay together Harvard University researcher Lakshminarayanan Mahadevan studied the interactions between sheepdogs and a flock of sheep, to develop a model that describes how a flock reacts to different herding tactics employed by the dogs. They found that the size of the flock and how fast it moves between its initial and final positions are two main factors that determine the best herding strategy. (Courtesy: Shutterstock/Danica Chang)

The researchers wrote down differential equations to describe the animals’ trajectories and then applied standard optimization techniques to minimize a quantity that captures the desired outcome: moving the flock to a specific location without losing any sheep. Despite the apparent complexity of the dynamical problem, they found it all boiled down to a simple picture. It turns out there are two key parameters that determine the best herding strategy: the size of the flock and the speed with which it moves between initial and final positions.

Four possible outcomes emerged naturally from their model. One is simply that the herding fails: nothing a dog can do will get the flock coherently from point A to point B. This might be the case, for example, if the flock is just too big, or the dog too slow. But there are three shepherding strategies that do work.

One involves the dog continually running from one side of the flock to the other, channelling the sheep in the desired direction. This is the method known to shepherds as “droving”. If, however, the herd is relatively small and the dog is fast, there can be a better technique that the team called “mustering”. Here the dog propels the flock forward by running in corkscrews around it. In this case, the flock keeps changing its overall shape like a wobbly ellipse, first elongating and then contracting around the two orthogonal axes, as if breathing. Both strategies are observed in the field (figure 2).
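The "cohesion plus repulsion" picture is enough to reproduce droving in a toy simulation. In the sketch below (force laws and gains are illustrative guesses of mine, not the optimized trajectories of the Harvard model), each sheep is attracted to the flock's centre and repelled from the dog, and a dog sweeping side to side behind the flock pushes it steadily along one direction:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "droving": cohesion towards the flock centre, repulsion from the dog,
# and a dog that zigzags across the rear of the flock (invented parameters)
n = 30
sheep = rng.normal(0.0, 1.0, (n, 2))   # flock starts near the origin

for t in range(2000):
    centre = sheep.mean(0)
    # dog stays behind the flock (-x side) and sweeps side to side in y
    dog = centre + np.array([-2.0, 3.0 * np.sin(0.05 * t)])
    to_centre = centre - sheep                      # cohesion
    from_dog = sheep - dog
    dist = np.linalg.norm(from_dog, axis=1, keepdims=True)
    repulse = from_dog / dist**3                    # short-ranged aversion
    sheep += 0.01 * to_centre + 0.5 * repulse

print(sheep.mean(0))   # the flock has been pushed along +x
```

Even this crude version shows why droving works: the dog never needs to aim at any individual sheep, only to keep the repulsive "pressure" on the correct side of the cohesive group.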

But the final strategy the model generated, dubbed “driving”, is not a tactic that sheepdogs have been observed to use. In this case, if the flock is large enough, the dog can run into the middle of it and the sheep retreat but don’t scatter. Then the dog can push the flock forward from within, like a driver in a car. This approach will only work if the flock is very strongly cohesive, and it’s not clear that real flocks ever have such pronounced “stickiness”.

2 Shepherding strategies: the three types of herding

Diagram of herding patterns
(Courtesy: L Mahadevan, arXiv:2211.04352)

In the model of interactions between a sheepdog and its flock developed by Lakshminarayanan Mahadevan at Harvard University and coworkers, optimizing a mathematical function that describes how well the dog transports the flock results in three possible shepherding strategies, depending on the precise parameters in the model. In “droving”, the dog runs from side to side to steer the flock towards the target location. In “mustering”, the dog takes a helix-like trajectory, repeatedly encircling the flock. And in “driving”, the dog steers the flock from “inside” by the aversion – modelled as a repulsive force – of the sheep for the dog.

These three regimes, derived from agent-based models (ABM) and models based on ordinary differential equations (ODE), are plotted above. In the left column, the mean path of the flock (blue) over time is shown as it is driven by a shepherd on a separate path (red) towards a target (green square). Columns 2-4 show snapshots from column 1, with trajectories indicated in black, where fading indicates history. From left to right, snapshots represent the flock at later time points.

These herding scenarios can be plotted on a phase diagram, like the temperature–density diagram for states of matter, but with flock size and speed as the two axes. But do sheepdogs, or their trainers, have an implicit awareness of this phase diagram, even if they don’t think of it in those terms? Mahadevan suspects that herding techniques are in fact developed by trial and error – if one strategy doesn’t work, they will try another.

Mahadevan admits that he and his colleagues have neglected some potentially important aspects of the problem. In particular, they assumed that the animals can see in every direction around them. Sheep do have a wide field of vision because, like most prey-type animals, they have eyes on the sides of their heads. But dogs, like most predators, have eyes at the front and therefore a more limited field of view. Mahadevan suspects that incorporating these features of the agents’ vision will shift the phase boundaries, but not alter the phase diagram qualitatively.

Another confounding factor is that sheep might alter their behaviour in different circumstances. Chemical engineer Tuhin Chakrabortty of the Georgia Institute of Technology in Atlanta and biomolecular engineer Saad Bhamla have also used physics-based modelling to look at the shepherding problem. They say that sheep behave differently on their own from how they do in a flock. A lone sheep flees from a dog, but in a flock sheep employ a more “selfish” strategy, with those on the periphery trying to shove their way inside to be sheltered by the others.

3 Heavy and light: how flocks interact with sheepdogs

How flocks interact with sheepdogs
(Courtesy: T Chakrabortty and S Bhamla, arXiv:2406.06912)

In the agent-based model of the interaction between sheep and a sheepdog devised by Tuhin Chakrabortty and Saad Bhamla, sheep may respond to a nearby dog by reorienting themselves to face away from or at right angles to it. Different sheep might have different tendencies for this – “heavy” sheep ignore the dog unless they are facing towards it. The task of the dog could be to align the flock facing away from it (herding) or to divide the flock into differently aligned subgroups (shedding).

What’s more, says Chakrabortty, contrary to the stereotype, sheep can show considerable individual variation in how they respond to a dog. Essentially, the sheep have personalities. Some seem terrified and easily panicked by a dog while others might ignore – or even confront – it. Shepherds traditionally call the former sort of sheep “light”, and the latter “heavy” (figure 3).

In the agent-based model used by Chakrabortty and Bhamla, the outcomes differ depending on whether a herd is predominantly light or heavy (arXiv:2406.06912). When a simulated herd is subjected to the “pressure” of a shepherding dog, it might do one of three things: flee in a disorganized way, shedding panicked individuals; flock in a cohesive group; or just carry on grazing while reorienting to face at right angles to the dog, as if turning away from the threat.

Again these behaviours can be summarized in a 2D phase diagram, with axes representing the size of the herd and what the two researchers call the “specificity of the sheepdog stimulus” (figure 4). This factor depends on the ratio of the controlling stimulus (the strength of sheep–dog repulsion) and random noisiness in the sheep’s response. Chakrabortty and Bhamla say that sheepdog trials are conducted for herd sizes where all three possible outcomes are well represented, creating an exacting test of the dog’s ability to get the herd to do its bidding.

4 Fleeing, flocking and grazing: types of sheep movement

Graph showing types of sheep movement
(Courtesy: T Chakrabortty and S Bhamla, arXiv:2406.06912)

The outcomes of the shepherding model of Chakrabortty and Bhamla can be summarized in a phase diagram showing the different behavioural options – uncoordinated fleeing, controlled flocking, or indifferent grazing – as a function of two model parameters: the size of the flock Ns and the “specificity of stimulus”, which measures how strongly the sheep respond to the dog relative to their inherent randomness of action. Sheepdog trials are typically conducted for a flock size that allows for all three phases.

Into the wild

One of the key differences between the movements of sheep and those of fish or birds is that sheep are constrained to two dimensions. As condensed-matter physicists have come to recognize, the dimensionality of a problem can make a big difference to phase behaviour. Mahadevan says that dolphins make use of dimensionality when they are trying to shepherd schools of fish to feed on. To make them easier to catch, dolphins will often push the fish into shallow water first, converting a 3D problem to a 2D problem. Herders like sheepdogs might also exploit confinement effects to their benefit, for example using fences or topographic features to help contain the flock and simplify the control problem. Researchers haven’t yet explored these issues in their models.

Dolphins using herding tactics to drive a school of fish
Shoal of thought In nature, dolphins have been observed using a number of herding tactics to drive schools of fish into shallow water or even beach them, to make hunting easier and more efficient. (Courtesy: iStock/atese)

As the case of dolphins shows, herding is a challenge faced by many predators. Mahadevan says he has witnessed such behaviour himself in the wild while observing a pack of wild dogs trying to corral wildebeest. The problem is made more complicated if the prey themselves can deploy group strategies to confound their predator – for example, by breaking the group apart to create confusion or indecision in the attacker, a behaviour seemingly adopted by fish. Then the situation becomes game-theoretic, each side trying to second-guess and outwit the other.

Sheep seem capable of such smart and adaptive responses. Bhamla says they sometimes appear to identify the strategy that a farmer has signalled to the dog and adopt the appropriate behaviour even without much input from the dog itself. And sometimes splitting a flock can be part of the shepherding plan: this is actually a task dogs are set in some sheepdog competitions, and demands considerable skill. Because sheepdogs seem to have an instinct to keep the flock together, they can struggle to overcome that urge and have to be highly trained to split the group intentionally.

Iain Couzin of the Max Planck Institute of Animal Behavior in Konstanz, Germany, who has worked extensively on agent-based models of collective animal movement, cautions that even if physical models like these seem to reproduce some of the phenomena seen in real life, that doesn’t mean the model’s rules reflect what truly governs the animals’ behaviour. It’s tempting, he says, to get “allured by the beauty of statistical physics” at the expense of the biology. All the same, he adds that whether or not such models truly capture what is going on in the field, they might offer valuable lessons for how to control and guide collectives of agent-like entities.

In particular, the studies of shepherding might reveal strategies that one could program into artificial shepherding agents such as robots or drones. Bhamla and Chakrabortty have in fact suggested how one such swarm control algorithm might be implemented. But it could be harder than it sounds. “Dogs are extremely good at inferring and predicting the idiosyncrasies of individual sheep and of sheep–sheep interactions,” says Chakrabortty. This allows them to adapt their strategy on the fly. “Farmers laugh at the idea of drones or robots,” says Bhamla. “They don’t think the technology is ready yet. The dogs benefit from centuries of directed evolution and training.”

Perhaps the findings could be valuable for another kind of animal herding too. “Maybe this work could be applied to herding kids at a daycare,” Bhamla jokes. “One of us has small kids and recognizes the challenges of herding small toddlers from one room to another, especially at a party. Perhaps there is a lesson here.” As anyone who has ever tried to organize groups of small children might say: good luck with that.

The post Field work – the physics of sheep, from phase transitions to collective motion appeared first on Physics World.


Researchers exploit quantum entanglement to create hidden images

25 September 2024, 15:00
Encoding images in photon correlations
Encoding images in photon correlations Simplified experimental setup (a). A conventional intensity image (b) reveals no information about the object, while a correlation image acquired using an electron-multiplying CCD camera (c) reveals the hidden object. (Courtesy: Phys. Rev. Lett. 10.1103/PhysRevLett.133.093601)

Ever since the double-slit experiment was performed, physicists have known that light can be observed as either a wave or a stream of particles. For everyday imaging applications, it is the wave-like aspect of light that manifests, with receptors (natural or artificial) capturing the information contained within the light waves to “see” the scene being observed.

Now, Chloé Vernière and Hugo Defienne from the Paris Institute of Nanoscience at Sorbonne University have used quantum correlations to encode an image into light such that it only becomes visible when particles of light (photons) are observed by a single-photon sensitive camera – otherwise the image is hidden from view.

Encoding information in quantum correlations

In a study described in Physical Review Letters, Vernière and Defienne managed to hide an image of a cat from conventional light measurement devices by encoding the information in quantum entangled photons, known as a photon-pair correlation. To achieve this, they shaped spatial correlations between entangled photons – in the form of arbitrary amplitude and phase objects – to encode image information within the pair correlation. Once the information is encoded into the photon pairs, it is undetectable by conventional measurements. Instead, a single-photon detector known as an electron-multiplying charge-coupled device (EMCCD) camera is needed to “show” the hidden image.

“Quantum entanglement is a fascinating phenomenon, central to many quantum applications and a driving concept behind our research,” says Defienne. “In our previous work, we demonstrated that, in certain cases, quantum correlations between photons are more resistant to external disturbances, such as noise or optical scattering, than classical light. Inspired by this, we wondered how this resilience could be leveraged for imaging. We needed to use these correlations as a support – a ‘canvas’ – to imprint our image, which is exactly what we’ve achieved in this work.”

How to hide an image

To generate the entangled photons, the researchers used spontaneous parametric down-conversion (SPDC), a technique employed in many quantum optics experiments. SPDC is a nonlinear process in which a nonlinear crystal (NLC) splits a single high-energy photon from a pump beam into two lower-energy entangled photons. The properties of these lower-energy photons are governed by the geometry and type of the NLC and by the characteristics of the pump beam.
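The wavelengths involved follow from energy conservation: the pump photon’s energy must equal the sum of the two daughter photons’ energies, i.e. 1/λ_pump = 1/λ_signal + 1/λ_idler. A minimal check (a generic illustration, not the authors’ code):

```python
# Energy conservation in SPDC: 1/lambda_pump = 1/lambda_signal + 1/lambda_idler
def idler_wavelength(pump_nm, signal_nm):
    """Wavelength (nm) of the idler photon for a given pump and signal photon."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Degenerate case used here: a 405 nm pump photon splits into two 810 nm photons
print(round(idler_wavelength(405.0, 810.0), 6))  # 810.0
```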

In this study, the researchers used a continuous-wave laser producing a collimated beam of horizontally polarized 405 nm light to illuminate a cat-shaped mask, which was then Fourier-imaged onto an NLC using a lens. The spatially entangled near-infrared (810 nm) photons produced in the NLC were then collected by a second lens and detected with the EMCCD.

This SPDC process produces an encoded image of a cat. The image does not show up in a conventional intensity measurement and only becomes visible when the photons are counted one by one with the EMCCD. In this way the image of the cat is “hidden” in the light, unobservable by traditional cameras.
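Why counting photons in pairs reveals what the average intensity hides can be seen in a toy model. The sketch below is a deliberate idealization, not the experiment’s analysis pipeline: photon pairs land on the same pixel and are generated only inside the object region, while uncorrelated background light is added so that the mean intensity is flat. The object then appears only in the super-Poissonian excess of the photon-count statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
npix = 8
mask = np.zeros(npix)
mask[2:5] = 1.0                      # the hidden "object"
p = mask / mask.sum()                # photon pairs are born only inside the object

frames, pairs_per_frame = 20000, 2
counts = np.zeros((frames, npix))

# Each pair deposits TWO photons on the SAME pixel (idealized position correlation)
pos = rng.choice(npix, size=(frames, pairs_per_frame), p=p)
for k in range(pairs_per_frame):
    np.add.at(counts, (np.arange(frames), pos[:, k]), 2)

# Uncorrelated background light, tuned so the MEAN intensity is flat everywhere
lam = 2 * pairs_per_frame * (p.max() - p)
counts += rng.poisson(lam, size=(frames, npix))

intensity = counts.mean(axis=0)              # what a classical camera records
excess = counts.var(axis=0) - intensity      # super-Poissonian excess per pixel

print(np.round(intensity, 2))  # ~flat: the object is invisible classically
print(np.round(excess, 2))     # peaks on pixels 2-4: the object reappears
```

In the real experiment the correlations are measured between pixels across many EMCCD frames and full correlation images are reconstructed, but the principle is the same: the first moment of the light looks uniform, while the second moment carries the picture.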

“It is incredibly intriguing that an object’s image can be completely hidden when observed classically with a conventional camera, but then when you observe it ‘quantumly’ by counting the photons one by one and examining their correlations, you can actually see it,” says Vernière, a PhD student on the project. “For me, it is a completely new way of doing optical imaging, and I am hopeful that many powerful applications will emerge from it.”

What’s next?

This research extends the team’s previous work, and Defienne says that their next goal is to show that the new imaging method has practical applications and is not just a scientific curiosity. “We know that images encoded in quantum correlations are more resistant to external disturbances – such as noise or scattering – than classical light. We aim to leverage this resilience to improve imaging depth in scattering media.”

When asked about the applications that this development could impact, Defienne tells Physics World: “We hope to reduce sensitivity to scattering and achieve deeper imaging in biological tissues or longer-range communication through the atmosphere than traditional technologies allow. Even though we are still far from it, this could potentially improve medical diagnostics or long-range optical communications in the future.”

The post Researchers exploit quantum entanglement to create hidden images appeared first on Physics World.

  •  

We should treat our students the same way we would want our own children to be treated

Par : No Author
23 septembre 2024 à 12:00

“Thank goodness I don’t have to teach anymore.” These were the words of a senior colleague and former mentor on hearing that their grant application had been successful. They were someone I had respected. Such comments, however, reflect an attitude that persists across many UK higher-education (HE) science departments. Our students, our own children even, studying at HE institutions across the UK, deserve far better.

It is no secret in university science departments that lecturing, tutoring and lab supervision are perceived by some colleagues as mere distractions from what they consider their “real” work and purpose. These colleagues may quietly limit both their exposure to teaching and their commitment to delivering it to a high standard, for instance by focusing their time and attention solely on research activities or on being named on as many research grant applications as possible.

University workload models set time aside for funded research projects, as they should. Research grants provide universities with funding that contributes to their finances and are an undeniably important revenue stream. However, an aversion to – or flagrant avoidance of – teaching by some colleagues is encountered by many who have oversight and responsibility for the organization and provision of education within university science departments.

It is also a behaviour and mindset that students recognize, and one that damages their university experience. When avoidance of teaching is displayed, and sometimes privately endorsed, by senior or influential colleagues, it can shape a department’s culture, seep into its environment and compromise the quality of the education it delivers. Students certainly notice, and are affected by it.

The quality of physics students’ experiences depends on many factors. One is the likelihood of graduating with skills that make them employable and set them up for successful careers. Others include: the structure, organization and content of their programme; the quality of their modules and the enthusiasm and energy with which they are delivered; the quality of the resources to which they have access; and the extent to which their individual learning needs are supported.


In the UK, the quality of departments’ and institutions’ delivery of these and other components has been assessed since 2005 by the National Student Survey (NSS). Although imperfect and continuing to evolve, it is commissioned every year by the Office for Students on behalf of UK funding and regulatory bodies and is delivered independently by Ipsos.

The NSS can be a helpful tool to gather final-year students’ opinions and experiences about their institutions and degree programmes. Publication of the NSS datasets in July each year should, in principle, provide departments and institutions with the information they need to recognize their weaknesses and improve their subsequent students’ experiences. They would normally be motivated to do this because of the direct impact NSS outcomes have on institutions’ league table positions. These league tables can tangibly impact student recruitment and, therefore, an institution’s finances.

My sincerely held contention, however, communicated some years ago to a red-faced, finger-wagging senior manager during a fraught meeting, is this: we should ignore NSS outcomes. They don’t, and shouldn’t, matter. This is a bold statement; career-ending, even. We and all our colleagues should instead wholeheartedly strive to treat our students as we would want our own children, or our younger selves, to be treated, across every academic and learning-related component of their journey while they are with us. That would be the right and virtuous thing to do. And if we do it, positive NSS outcomes will take care of themselves.

Academic guardians

I have been on the frontline of university teaching, research, external examining and education leadership for close to 30 years. My heartfelt counsel, formed during this journey, is that our students’ positive experiences matter profoundly. They matter because, in joining our departments and committing three or more years and many tens of thousands of pounds to us, our students have placed their fragile and uncertain futures and aspirations into our hands.

We should feel privileged to hold this position and should respond to and collaborate with them positively, always supportively and with compassion, kindness and empathy. We should never be the traditionally tough and inflexible guardians of a discipline that is academically demanding, and which can, in a professional physics academic career, be competitively unyielding. That is not our job. Our roles, instead, should be as our students’ academic guardians, enthusiastically taking them with us across this astonishing scientific and mathematical world; teaching, supporting and enabling wherever we possibly can.

A narrative such as this sounds fantastical. It seems far removed from the rigours and tensions of the day-in, day-out delivery of lecture modules, teaching labs and multiple research targets. But the metaphor it represents has been the beating heart of the most effective, positive and inclusive learning environments I have encountered in UK and international HE departments during my long academic and professional journey.

I urge physics and science colleagues working in my own and other UK HE departments to remember and consider what it can be like to be an anxious or confused student, whose cognitive processes are still developing, whose self-confidence may be low and who may, separately, be facing other challenges to their circumstances. We should then behave appropriately. We should always be present and dispense empathy, compassion and a committed enthusiasm to support and enthral our students with our teaching. Ego has no place. We should show kindness, patience, and a willingness to engage them in a community of learning, framed by supportive and inclusive encouragement. We should treat our students the way we would want our own children to be treated.

The post We should treat our students the same way we would want our own children to be treated appeared first on Physics World.

  •  

Quantum hackathon makes new connections

Par : No Author
20 septembre 2024 à 10:40

It is said that success breeds success, and that’s certainly true of the UK’s Quantum Hackathon – an annual event organized by the National Quantum Computing Centre (NQCC) that was held in July at the University of Warwick. Now in its third year, the 2024 hackathon attracted 50% more participants from across the quantum ecosystem, who tackled 13 use cases set by industry mentors from the private and public sectors. Compared to last year’s event, participants were given access to a greater range of technology platforms, including software control systems as well as quantum annealers and physical processors, and had an additional day to perfect and present their solutions.

The variety of industry-relevant problems and the ingenuity of the quantum-enabled solutions were clearly evident in the presentations on the final day of the event. An open competition for organizations to submit their problems yielded use cases from across the public and private spectrum, including car manufacturing, healthcare and energy supply. While some industry partners were returning enthusiasts, such as BT and Rolls Royce, newcomers to the hackathon included chemicals firm Johnson Matthey, Aioi R&D Lab (a joint venture between Oxford University spin-out Mind Foundry and the global insurance brand Aioi Nissay Dowa) and the North Wales Police.

“We have a number of problems that are beyond the scope of standard artificial intelligence (AI) or neural networks, and we wanted to see whether a quantum approach might offer a solution,” says Alastair Hughes, lead for analytics and AI at North Wales Police. “The results we have achieved within just two days have proved the feasibility of the approach, and we will now be looking at ways to further develop the model by taking account of some additional constraints.”

The specific use case set by Hughes was to optimize the allocation of response vehicles across North Wales, which has small urban areas where incidents tend to cluster and large swathes of countryside where the crime rate is low. “Our challenge is to minimize response times without leaving some of our communities unprotected,” he explains. “At the moment we use a statistical process that needs some manual intervention to refine the configuration, which across the whole region can take a couple of months to complete. Through the hackathon we have seen that a quantum neural network can deliver a viable solution.”
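Stripped of its real-world constraints, the allocation task is a small combinatorial optimization: choose k base locations so as to minimize the worst-case response time to any incident hotspot. The brute-force classical toy below (invented numbers; neither the force’s actual model nor the quantum neural network the team used) shows the shape of the problem that was handed to the quantum approach.

```python
from itertools import combinations

# Toy stand-in for the allocation problem: place k response vehicles on
# candidate bases to minimize the worst-case travel time to any hotspot.
travel = {  # minutes from base -> hotspot (made-up numbers)
    "A": [4, 9, 12, 30],
    "B": [10, 3, 8, 25],
    "C": [28, 22, 9, 6],
}
hotspots = range(4)
k = 2

def worst_response(bases):
    """Worst-case response time if each hotspot is served by its nearest base."""
    return max(min(travel[b][h] for b in bases) for h in hotspots)

best = min(combinations(travel, k), key=worst_response)
print(best, worst_response(best))  # ('A', 'C') with a worst case of 9 minutes
```

Brute force works at this scale, but the number of placements grows combinatorially with bases and vehicles, which is what makes heuristic and quantum approaches attractive for the full-sized, constraint-laden version.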

Teamwork
Problem solving Each team brought together a diverse range of skills, knowledge and experience to foster learning and accelerate the development process. (Courtesy: NQCC)

While Hughes had no prior experience with using quantum processors, some of the other industry mentors are already investigating the potential benefits of quantum computing for their businesses. At Rolls Royce, for example, quantum scientist Jarred Smalley is working with colleagues to investigate novel approaches for simulating complex physical processes, such as those inside a jet engine. Smalley has mentored a team at all three hackathons, setting use cases that he believes could unlock a key bottleneck in the simulation process.


“Some of our crazy problems are almost intractable on a supercomputer, and from that we extract a specific set of processes where a quantum algorithm could make a real impact,” he says. “At Rolls Royce our research tends to be focused on what we could do in the future with a fault-tolerant quantum computer, and the hackathon offers a way for us to break into the current state of the technology and to see what can be done with today’s quantum processors.”

Since the first hackathon in 2022, Smalley says that there has been an improvement in the size and capabilities of the hardware platforms. But perhaps the biggest advance has been in the software and algorithms available to help the hackers write, test and debug their quantum code. Reflecting that trend in this year’s event was the inclusion of software-based technology providers, such as Q-CTRL’s Fire Opal and Classiq, that provide tools for error suppression and optimizing quantum algorithms. “There are many more software resources for the hackers to dive into, including algorithms that can even analyse the problems themselves,” Smalley says.

Cathy White, a research manager at BT who has mentored a team at all three hackathons, agrees that rapid innovation in hardware and software is now making it possible for the hackers to address real-world problems – which in her case was to find the optimal way to position fault-detecting sensors in optical networks. “I wanted to set a problem for which we could honestly say that our classical algorithms can’t always provide a good approximation,” she explained. “We saw some promising results within the time allowed, and I’m feeling very positive that quantum computers are becoming useful.”

Both White and Smalley could see a significant benefit from the extended format, which gave hackers an extra day to explore the problem and consider different solution pathways. The range of technology providers involved in the event also enabled the teams to test their solutions on different platforms, and to adapt their approach if they ran into a problem. “With the extra time my team was able to use D-Wave’s quantum annealer as well as a gate-model approach, and it was impressive to see the diversity of algorithms and approaches that the students were able to come up with,” White comments. “They also had more scope to explore different aspects of the problem, and to consolidate their results before deciding what they wanted to present.”

One clear outcome from the extended format was more opportunity to benchmark the quantum solutions against their classical counterparts. “The students don’t claim quantum advantage without proper evidence,” adds White. “Every year we see remarkable progress in the technology, but they can help us to see where there are still challenges to be overcome.”

According to Stasja Stanisic from Phasecraft, one of the four-strong judging panel, a robust approach to benchmarking was one of the stand-out factors for the winning team. Mentored by Aioi R&D Lab, the team investigated a risk aggregation problem, which involved modelling dynamic relationships between data such as insurance losses, stock market data and the occurrence of natural disasters. “The winning team took time to really understand the problem, which allowed them to adapt their algorithm to match their use-case scenario,” Stanisic explains. “They also had a thorough and structured approach to benchmarking their results against other possible solutions, which is an important comparison to make.”

Learning points Presentations on the final day of the event enabled each team to share their results with other participants and a four-strong judging panel. (Courtesy: NQCC)

Teams were judged on various criteria, including the creativity of the solution, its success in addressing the use case, and investigation of scaling and feasibility. The social impact and ethical considerations of each solution were also assessed. Using the NQCC’s Quantum STATES principles for responsible and ethical quantum computing (REQC), developed and piloted at the centre, the teams considered, for example, the potential impact of their innovation on different stakeholders and the explainability of their solution. They also proposed practical recommendations to maximize societal benefit. While many of their findings were specific to their use cases, one common theme was the need for open and transparent development processes to build trust among the wider community.

“Quantum computing is an emerging technology, and we have the opportunity right at the beginning to create an environment where ethical considerations are discussed and respected,” says Stanisic. “Some of the teams showed some real depth of thought, which was exciting to see, while the diverse use cases from both the public and private sectors allowed them to explore these ethical considerations from different perspectives.”

Also vital for participants was the chance to link with and learn from their peers. “The hackathon is a place where we can build and maintain relationships, whether with the individual hackers or with the technology partners who are also here,” says Smalley. For Hughes, meanwhile, the ability to engage with quantum practitioners has been a game changer. “Being in a room with lots of clever people who are all sparking off each other has opened my eyes to the power of quantum neural networks,” he says. “It’s been phenomenal, and I’m excited to see how we can take this forward at North Wales Police.”

  • To take part in the 2025 Quantum Hackathon – whether as a hacker, an industry mentor or technology provider – please e-mail the NQCC team at nqcchackathon@stfc.ac.uk

The post Quantum hackathon makes new connections appeared first on Physics World.

  •  

Rheo-electric measurements to predict battery performance from slurry processing

Par : No Author
20 septembre 2024 à 08:58

The market for lithium-ion batteries (LIBs) is expected to grow roughly 30-fold, to almost 9 TWh produced annually by 2040, driven by demand from electric vehicles and grid-scale storage. Producing these batteries requires high-yield coating processes using slurries of active material, conductive carbon and polymer binder applied to metal-foil current collectors. To better understand the connections between slurry formulation, coating conditions and composite-electrode performance, we apply new rheo-electric characterization tools to battery slurries. Rheo-electric measurements reveal differences in carbon-black structure in the slurry that go undetected by rheological measurements alone. These results are connected to the characterization of coated electrodes in LIBs in order to develop methods for predicting the performance of a battery system from the formulation and coating conditions of its composite electrode slurries.

Jeffrey Richards (left) and Jeffrey Lopez (right)

Jeffrey Richards is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on understanding the rheological and electrical properties of soft materials found in emergent energy technologies.

Jeffrey Lopez is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on using fundamental chemical engineering principles to study energy storage devices and design solutions to enable accelerated adoption of sustainable energy technologies.



The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

  •  

Simultaneous structural and chemical characterization with colocalized AFM-Raman

Par : No Author
19 septembre 2024 à 17:27

The combination of atomic force microscopy (AFM) and Raman spectroscopy provides deep insights into the complex properties of materials. Raman spectroscopy enables the chemical characterization of compounds, interfaces and complex matrices, offering crucial insights into molecular structures and compositions, including microscale contaminants and trace materials. AFM, for its part, provides essential data on topography and mechanical properties, such as surface texture, adhesion, roughness and stiffness, at the nanoscale.

Traditionally, users must rely on multiple instruments to gather such comprehensive analysis. HORIBA’s AFM-Raman system stands out as a uniquely multimodal tool, integrating an automated AFM with a Raman/photoluminescence spectrometer, providing precise pixel-to-pixel correlation between structural and chemical information in a single scan.

This colocalized approach is particularly valuable in applications such as polymer analysis, where both surface morphology and chemical composition are critical; in semiconductor manufacturing, for detecting defects and characterizing materials at the nanoscale; and in life sciences, for studying biological membranes, cells, and tissue samples. Additionally, it’s ideal for battery research, where understanding both the structural and chemical evolution of materials is key to improving performance.

João Lucas Rangel

João Lucas Rangel is the AFM & AFM-Raman global product manager at HORIBA and holds a PhD in biomedical engineering. Specializing in Raman, infrared and fluorescence spectroscopies, he focused his PhD research on biochemical changes in the skin dermis. He joined HORIBA Brazil in 2012 as a molecular spectroscopy consultant before moving into a full-time role as an application scientist and sales support across Latin America, where he oversaw applicative sales support and co-managed the region’s business activities. In 2022 he joined HORIBA France as a correlative microscopy – Raman application specialist, responsible for developing the correlative business globally by combining HORIBA’s existing technologies with complementary ones. In 2023 he was promoted to AFM & AFM-Raman global product manager, a role in which he oversees strategic initiatives aimed at the company’s business sustainability, continued success and future growth.

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.

  •  

A comprehensive method for assembly and design optimization of single-layer pouch cells

Par : No Author
18 septembre 2024 à 16:08

For academic researchers, the cell format used for testing lithium-ion batteries is often overlooked. However, choices in cell format and design can affect cell performance more than one might expect. Coin cells that use either a lithium-metal or a greatly oversized graphite negative electrode are common, but can give unrealistic results compared with commercial pouch-type cells. Single-layer pouch cells, by contrast, provide a format much closer to that used in industry while requiring only small amounts of active material. Moreover, their assembly process allows for better positive/negative electrode alignment, making it possible to build single-layer pouch cells without negative electrode overhang. This talk presents a comparison between coin, single-layer pouch and stacked pouch cells, and shows that single-layer pouch cells without negative electrode overhang perform best. A careful study of the detrimental effects of excess electrode material is also presented. The single-layer pouch format can additionally be used to measure pressure and volume in situ, something that is not possible in a coin cell. Finally, a guide to assembling reproducible single-layer pouch cells without negative electrode overhang is given.

An interactive Q&A session follows the presentation.

Matthew Garayt

Matthew D L Garayt is a PhD candidate in the Jeff Dahn, Michael Metzger and Chongyin Yang research groups at Dalhousie University. His work focuses on materials for lithium- and sodium-ion batteries, with an emphasis on increased energy density and lifetime. Before this, he worked at E-One Moli Energy, the first rechargeable lithium battery company in the world, where he worked on high-power lithium-ion batteries, and completed a summer research term in the Obrovac Research Group, also at Dalhousie. He received a BSc (Hons) in applied physics from Simon Fraser University.

The post A comprehensive method for assembly and design optimization of single-layer pouch cells appeared first on Physics World.

  •  

Adaptive deep brain stimulation reduces Parkinson’s disease symptoms

Par : No Author
18 septembre 2024 à 11:10

Deep brain stimulation (DBS) is an established treatment for patients with Parkinson’s disease who experience disabling tremors and slowness of movements. But because the therapy is delivered with constant stimulation parameters – which are unresponsive to a patient’s activities or variations in symptom severity throughout the day – it can cause breakthrough symptoms and unwanted side effects.

In their latest Parkinson’s disease initiative, researchers led by Philip Starr from the UCSF Weill Institute for Neurosciences have developed an adaptive DBS (aDBS) technique that may offer a radical improvement. In a feasibility study with four patients, they demonstrated that this intelligent “brain pacemaker” can reduce bothersome side effects by 50%.

The self-adjusting aDBS, described in Nature Medicine, monitors a patient’s brain activity in real time and adjusts the level of stimulation to curtail symptoms as they arise. Generating calibrated pulses of electricity, the intelligent aDBS pacemaker provides less stimulation when Parkinson’s medication is active, to ward off excessive movements, and increases stimulation to prevent slowness and stiffness as the drugs wear off.

Starr and colleagues conducted a blinded, randomized feasibility trial to identify neural biomarkers of motor signs during active stimulation, and to compare the effects of aDBS with optimized constant DBS (cDBS) during normal, unrestricted daily life.

The team recruited four male patients with Parkinson’s disease, ranging in age from 47 to 68 years, for the study. Although all participants had implanted DBS devices, they were still experiencing symptom fluctuations that were not resolved by either medication or cDBS therapy. They were asked to identify the most bothersome residual symptom that they experienced.

To perform aDBS, the researchers developed an individualized data-driven pipeline for each participant, which turns the recorded subthalamic or cortical field potentials into personalized algorithms that auto-adjust the stimulation amplitudes to alleviate residual motor fluctuations. They used both in-clinic and at-home neural recordings to provide the data.

“The at-home data streaming step was important to ensure that biomarkers identified in idealized, investigator-controlled conditions in the clinic could function in naturalistic settings,” the researchers write.
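The closed loop described above can be caricatured in a few lines. The following is a generic sketch, not UCSF’s algorithm: it assumes a single band-power biomarker extracted from the field potential and a simple dual-threshold rule that nudges the stimulation amplitude between safe limits.

```python
def update_stimulation(amp, biomarker, lo, hi, step=0.1,
                       amp_min=0.5, amp_max=3.5):
    """One step of a dual-threshold adaptive-DBS rule (illustrative only).

    amp        -- current stimulation amplitude (mA)
    biomarker  -- e.g. beta-band power from the subthalamic field potential
    lo, hi     -- personalized thresholds fitted from in-clinic/at-home data
    """
    if biomarker > hi:        # symptoms (e.g. slowness) re-emerging: stimulate more
        amp += step
    elif biomarker < lo:      # medication active: back off to avoid dyskinesia
        amp -= step
    return min(max(amp, amp_min), amp_max)   # clamp within safe limits

amp = 2.0
for b in [0.9, 0.9, 0.2, 0.2, 0.5]:   # fake biomarker samples; thresholds 0.3/0.7
    amp = update_stimulation(amp, b, lo=0.3, hi=0.7)
print(round(amp, 1))  # 2.0: two steps up, two down, then hold in the dead band
```

The real systems fit the biomarker and thresholds per patient from neural recordings, which is exactly why the individualized data-driven pipeline and the at-home streaming step matter.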

The four participants received aDBS alongside their existing DBS therapy. The team compared the treatments by alternating between cDBS and aDBS every two to seven days, with a cumulative period of one month per condition.

The researchers monitored motor symptoms using wearable devices plus symptom diaries completed daily by the participants. They evaluated the most bothersome symptoms, in most cases bradykinesia (slowness of movements), as well as stimulation-associated side effects such as dyskinesia (involuntary movements). To control for other unwanted side effects, participants also rated other common motor symptoms, their quality of sleep, and non-motor symptoms such as depression, anxiety, apathy and impulsivity.

The study revealed that aDBS improved each participant’s most bothersome symptom by roughly 50%. Three patients also reported improved quality-of-life using aDBS. This change was so obvious to these three participants that, even though they did not know which treatment was being delivered at any time, they could often correctly guess when they were receiving aDBS.

The researchers note that the study establishes the methodology for performing future trials in larger groups of males and females with Parkinson’s disease.

“There are three key pathways for future research,” lead author Carina Oehrn tells Physics World. “First, simplifying and automating the setup of these systems is essential for broader clinical implementation. Future work by Starr and Simon Little at UCSF, and Lauren Hammer (now at the Hospital of the University of Pennsylvania) will focus on automating this process to increase access to the technology. From a practicality standpoint, we think it necessary to develop an AI-driven smart device that can identify and auto-set treatment settings with a clinician-activated button.”

“Second, long-term monitoring for safety and sustained effectiveness is crucial,” Oehrn added. “Third, we need to expand these approaches to address non-motor symptoms in Parkinson’s disease, where treatment options are limited. I am studying aDBS for memory and mood in Parkinson’s at the University of California-Davis. Little is investigating aDBS for sleep disturbances and motivation.”

The post Adaptive deep brain stimulation reduces Parkinson’s disease symptoms appeared first on Physics World.

  •  

Dark-matter decay could have given ancient supermassive black holes a boost

Par : No Author
17 septembre 2024 à 17:19

The decay of dark matter could have played a crucial role in triggering the formation of supermassive black holes (SMBHs) in the early universe, according to a trio of astronomers in the US. Using a combination of gas-cloud simulations and theoretical dark matter calculations, Yifan Lu and colleagues at the University of California, Los Angeles, uncovered promising evidence that the decay of dark matter may have provided the radiation necessary to prevent primordial gas clouds from fragmenting as they collapsed.

SMBHs are thought to reside at the centres of most large galaxies, and can be hundreds of thousands to billions of times more massive than the Sun. For decades, astronomers puzzled over how such immense objects could have formed, and the mystery has deepened with recent observations by the James Webb Space Telescope (JWST).

Since 2023, JWST has detected SMBHs that existed less than one billion years after the birth of the universe. This is far too early for them to be the result of conventional stellar evolution, whereby smaller black holes coalesce to create an SMBH.

Fragmentation problem

An alternative explanation is that vast primordial gas clouds in the early universe collapsed directly into SMBHs. However, as Lu explains, this theory challenges our understanding of how matter behaves. “Detailed calculations show that, in the absence of any unusual radiation, the largest gas clouds tend to fragment and form a myriad of small halos, not a single supermassive black hole,” he says. “This is due to the formation of molecular hydrogen, which cools the rest of the gas by radiating away thermal energy.”

For SMBHs to form under these conditions, molecular hydrogen would have needed to be somehow suppressed, which would require an additional source of radiation from within these ancient clouds. Recent studies have proposed that this extra energy could have come from hypothetical dark-matter particles decaying into photons.

“This additional radiation could cause the dissociation of molecular hydrogen, preventing fragmentation of large gas clouds into smaller pieces,” Lu explains. “In this case, gravity forces the entire large cloud to collapse as a whole into a [SMBH].”

In several recent studies, researchers have used simulations and theoretical estimates to investigate this possibility. So far, however, most studies have either focused on the mechanics of collapsing gas clouds or on the emissions produced by decaying dark matter, with little overlap between the two.

Extra ingredient needed

“Computer simulations of clouds of gas that could directly collapse to black holes have been studied extensively by groups farther on the astrophysics side of things, and they had examined how additional sources of radiation are a necessary ingredient,” explains Lu’s colleague Zachary Picker.

“Simultaneously, people from the dark matter side had performed some theoretical estimations and found that it seemed unlikely that dark matter could be the source of this additional radiation,” adds Picker.

In their study, Lu, Picker, and Alexander Kusenko sought to bridge this gap by combining both approaches: simulating the collapse of a gas cloud when subjected to radiation produced by the decay of several different candidate dark-matter particles. As they predicted, some of these particles could indeed provide the missing radiation needed to dissociate molecular hydrogen, allowing the entire cloud to collapse into a single SMBH.

However, dark matter is a hypothetical substance that has never been detected directly. As a result, the trio acknowledges that there is currently no reliable way to verify their findings experimentally. For now, this means that their model will simply join a growing list of theories that aim to explain the formation of SMBHs. But if the situation changes in the future, the researchers hope their model could represent a significant step forward in understanding the early universe’s evolution.

“One day, hopefully in my lifetime, we’ll find out what the dark matter is, and then suddenly all of the papers written about that particular type will magically become ‘correct’,” Picker says. “All we can do until then is to keep trying new ideas and hope they uncover something interesting.”

The research is described in Physical Review Letters.

The post Dark-matter decay could have given ancient supermassive black holes a boost appeared first on Physics World.


NASA suffering from ageing infrastructure and inefficient management practices, finds report

Par : No Author
16 septembre 2024 à 15:34

NASA has been warned that it may need to sacrifice new missions in order to rebalance the space agency’s priorities and achieve its long-term objectives. That is the conclusion of a new report – NASA at a Crossroads: Maintaining Workforce, Infrastructure, and Technology Preeminence in the Coming Decades – that finds a space agency battling on many fronts including ageing infrastructure, China’s growing presence in space, and issues recruiting staff.

The report was requested by Congress and published by the National Academies of Sciences, Engineering, and Medicine. It was written by a 13-member committee, which included representatives from industry, academia and government, and was chaired by Norman Augustine, former chief executive of Lockheed Martin. Members visited all nine NASA centres and talked to about 400 employees to compile the report.

While the panel say that NASA had “motivat[ed] many of the nation’s youth to pursue careers in science and technology” and “been a source of inspiration and pride to all Americans”, they highlight a variety of problems at the agency. Those include out-of-date infrastructure, a pressure to prioritize short-term objectives, budget mismatches, inefficient management practices, and an unbalanced reliance on commercial partners. Yet according to Augustine, the agency’s main problem is “the more mundane tendency to focus on near-term accomplishments at the expense of long-term viability”.

As well as external challenges such as China’s growing role in space, the committee discovered that many of NASA’s problems are homegrown. It found that 83% of NASA’s facilities are past their design lifetimes. For example, the capacity of the Deep Space Network, which provides critical communications support for uncrewed missions, “is inadequate” to support future craft and even current missions such as the Artemis Moon programme “without disrupting other projects”.

There is also competition from private space firms in both technology development and recruitment. According to the report, NASA is constrained by strict hiring rules and the salaries it can offer. It takes 81 days, on average, from the initial interview to an offer of employment. During that period, a candidate will probably receive offers from private firms, not only in the space industry but also in the “digital world”, which pay higher salaries.

In addition, Augustine notes, the agency is giving its engineers less opportunity “to get their hands dirty” by carrying out their own research. Instead, they are increasingly managing outside contractors who are doing the development work. At the same time, the report identifies a “major reduction” over the past few decades in basic research that is financed by industry – a trend that the report says is “largely attributable to shareholders seeking near-term returns as opposed to laying groundwork for the future”.

Yet the committee also finds that NASA faces “internal and external pressure to prioritize short-term measures” without considering longer-term needs and implications. “If left unchecked these pressures are likely to result in a NASA that is incapable of satisfying national objectives in the longer term,” the report states. “The inevitable consequence of such a strategy is to erode those essential capabilities that led to the organization’s greatness in the first place and that underpin its future potential.”

Cash woes

Another concern is the US government budget process that operates year by year and is slowly reducing NASA’s proportional share of funding. The report finds that the budget is “often incompatible with the scope, complexity, and difficulty of [NASA’s] work” and the funding allocation “has degraded NASA’s capabilities to the point where agency sustainability is in question”. Indeed, during the agency’s lifetime, the proportion of the US budget devoted to government R&D has declined from 1.9% of gross domestic product to 0.7%. The panel also notes a trend of reducing investment in research and technology as a fraction of funds devoted to missions. “NASA is likely to face budgetary problems in the future that greatly exceed those we’ve seen in recent years,” Augustine told a briefing.

The panel now calls on NASA to work with Congress to establish “an annually replenished revolving fund – such as a working capital fund” to maintain and improve the agency’s infrastructure. It would be financed by the US government as well as users of NASA’s facilities and be “sufficiently capitalized to eliminate NASA’s current maintenance backlog over the next decade”. While it is unclear how the government and the agency will react to that proposal, as Augustine warned, for NASA, “this is not business as usual”.

The post NASA suffering from ageing infrastructure and inefficient management practices, finds report appeared first on Physics World.


Almost 70% of US students with an interest in physics leave the subject, finds survey

Par : No Author
11 septembre 2024 à 13:30

More than two-thirds of college students in the US who initially express an interest in studying physics drop out to pursue another degree. That is according to a five-year-long survey by the American Institute of Physics, which found that students often quit due to a lack of confidence in mathematics or poor experiences with physics departments and instructors. Most students, however, ended up in another science, technology, engineering and mathematics (STEM) field.

Carried out by AIP Statistical Research, the survey initially followed almost 4000 students in their first year of high school or college who were doing an introductory physics course at four large, predominantly white universities.

Students highlighted “learning about the universe”, “applying their problem-solving and maths skills”, “succeeding in a challenging subject” and “pursuing a satisfying career” as reasons why they choose to study physics.

Anne Marie Porter and her colleagues Raymond Chu and Rachel Ivie concentrated on the 745 students who had expressed interest in pursuing physics, following them for five academic years.

Over that period, only 31% graduated with a physics degree, with most of those switching to another degree during their first or second year. Under-represented groups, including women, African-American and Hispanic students, were the most likely to avoid physics degree courses.

Pull and push

While many who quit physics enjoyed their experience, they left due to “issues with poor teaching quality and large class sizes” as well as “negative perceptions that physics employment consists only of academic positions and desk jobs”. Self-appraisal played a role in the decision to leave too. “They may feel unable to succeed because they lack the necessary skills in physics,” Porter says. “That’s a reason for concern.”

Porter adds that early intervention in college is essential to retaining physics students, with introductory physics courses being “incredibly important”. Indeed, the survey comes at a time when the number of bachelor’s degrees in physics awarded by US universities is growing more slowly than in other STEM fields.

Meanwhile, a separate report published by the National Academies of Science, Engineering, and Medicine has called on the US government to adopt a new strategy to recruit and retain talent in STEM subjects. In particular, the report urges Congress to smooth the path to permanent residency and US citizenship for foreign-born individuals working in STEM fields.

The post Almost 70% of US students with an interest in physics leave the subject, finds survey appeared first on Physics World.


Vacuum for physics research

Par : No Author
11 septembre 2024 à 09:28

Your research can’t happen without vacuum! If you’re pushing the boundaries of science or technology, you know that creating a near-perfect empty space is crucial. Whether you’re exploring the mysteries of subatomic particles, simulating the harsh conditions of outer space, or developing advanced materials, mastering ultra-high vacuum (UHV) and extreme-high vacuum (XHV) is necessary.

In this live webinar:

  • You will learn how vacuum enables physics research, from quantum computing, to fusion, to the fundamental nature of the universe.
  • You will discover why ultra-low-pressure environments directly impact the success of your experiments.
  • We will dive into the latest techniques and technologies for creating and maintaining UHV and XHV.

Join us to gain practical insights and stay ahead in your field – because in your research, vacuum isn’t just important; it’s critical.

John Screech

John Screech graduated in 1986 with a BA in physics and has worked in analytical instrumentation ever since. His career has spanned general mass spectrometry, vacuum system development, and contraband detection. John joined Agilent in 2011 and currently leads training and education programmes for the Vacuum Products division. He also assists Agilent’s sales force and end-users with pre- and post-sales applications support. He is based near Toronto, Canada.

The post Vacuum for physics research appeared first on Physics World.


Flagship journal Reports on Progress in Physics marks 90th anniversary with two-day celebration

Par : No Author
10 septembre 2024 à 18:05

When the British physicist Edward Andrade wrote a review paper on the structure of the atom in the first volume of the journal Reports on Progress in Physics (ROPP) in 1934, he faced a problem familiar to anyone seeking to summarize the latest developments in a field. So much exciting research had happened in atomic physics that Andrade was finding it hard to cram everything in. “It is obvious, in view of the appalling number of papers that have appeared,” he wrote, “that only a small fraction can receive reference.”

Review articles are the ideal way to get up to speed with developments and offer a gateway into the scientific literature

Apologizing that “many elegant pieces of work have been deliberately omitted” due to a lack of space, Andrade pleaded that he had “honestly tried to maintain a just balance between the different schools [of thought]”. Nine decades on, Andrade’s struggles will be familiar to anyone who has ever tried to write a review paper, especially of a fast-moving area of physics. Readers, however, appreciate the efforts authors put in because review articles are the ideal way to get up to speed with developments and offer a gateway into the scientific literature.

Writing review papers also benefits authors because such articles are usually widely read and cited by other scientists – much more, in fact, than papers containing new research findings. As a result, most review journals have an extraordinarily high “impact factor” – the mean number of citations received in a given year by articles the journal published over the previous two years. ROPP, for example, has an impact factor of 19.0. While there are flaws with using impact factor to judge the quality of a journal, it’s still a well-respected metric in many parts of the world. And who wouldn’t want to appear in a journal with that much influence?
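As a ratio, the impact-factor calculation is simple enough to sketch in a few lines of Python. The citation and article counts below are hypothetical, chosen only to reproduce the 19.0 figure quoted above:

```python
def impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """Impact factor: citations received in a given year to articles the
    journal published over the previous two years, divided by the number
    of those articles."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical counts: 1900 citations this year to 100 articles
# published in the journal during the previous two years.
print(impact_factor(1900, 100))  # 19.0
```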

New dawn for ROPP

Celebrating its 90th anniversary this year, ROPP is the flagship journal of IOP Publishing, which also publishes Physics World. As a learned-society publisher, IOP Publishing does not have shareholders, with any financial surplus ploughed back into the Institute of Physics (IOP) to support everyone from physics students to physics teachers. In contrast to journals owned by commercial publishers, therefore, ROPP has the international physics community at its heart.

Over the last nine decades, ROPP has published over 2500 review papers. There have been more than 20 articles by Nobel-prize-winning physicists, including famous figures from the past such as Hans Bethe (stellar evolution), Lawrence Bragg (protein crystallography) and Abdus Salam (field theory). More recently, ROPP has published papers by still-active Nobel laureates including Konstantin Novoselov (2D materials), Ferenc Krausz (attosecond physics) and Isamu Akasaki (blue LEDs) – see the box below for a full list.

Subir Sachdev
New directions Subir Sachdev from Harvard University in the US is the current editor-in-chief of Reports on Progress in Physics. (Courtesy: Subir Sachdev)

But the journal isn’t resting on its laurels. ROPP has recently started accepting articles containing new scientific findings for the first time, with the plan being to publish 150–200 very-high-quality primary-research papers each year. They will be in addition to the usual output of 50 or so review papers, most of which will still be commissioned by ROPP’s active editorial board. IOP Publishing hopes the move will cement the journal’s place at the pinnacle of its publishing portfolio.

“ROPP will continue as before,” says Subir Sachdev, a condensed-matter physicist from Harvard University, who has been editor-in-chief of the journal since 2022. “There’s no change to the review format, but what we’re doing is really more of an expansion. We’re adding a new section containing original research articles.” The journal is also offering an open-access option for the first time, thereby increasing the impact of the work. In addition, authors have the option to submit their papers for “double anonymous” and transparent peer review.

Maintaining high standards

Those two new initiatives – publishing primary research and offering an open-access option – are probably the biggest changes in the journal’s 90-year history. But Sachdev is confident the journal can cope. “Of course, we want to maintain our high standards,” he says. “ROPP has over the years acquired a strong reputation for very-high-quality articles. With the strong editorial board and the support we have from referees, we hope we will be able to maintain that.”

Early signs are promising. Among the first primary-research papers in ROPP are CERN’s measurement of the speed of sound in a quark–gluon plasma (87 077801), a study into flaws in the Earth’s gravitational field (87 078301), and an investigation into whether supersymmetry could be seen in 2D materials (10.1088/1361-6633/ad77f0). A further paper looks into creating an overarching equation of state for liquids based on phonon theory (87 098001).

The idea is to publish a relatively small number of papers but ensure they’re the best of what’s going on in physics and provide a really good cross section of what the physics community is doing

David Gevaux

David Gevaux, ROPP’s chief editor, who is in charge of the day-to-day running of the journal, is pleased with the quality and variety of primary research published so far. “The idea is to publish a relatively small number of papers – no more than 200 max – but ensure they’re the best of what’s going on in physics and provide a really good cross section of what the physics community is doing,” he says. “Our first papers have covered a broad range of physics, from condensed matter to astronomy.”

Another benefit of ROPP only publishing a select number of papers is that each article can have, as Gevaux explains, “a little bit more love” put into it. “Traditionally, publishers were all about printing journals and sending them around the world – it was all about distribution,” he says. “But with the Internet, everything’s immediately available and researchers almost have too many papers to trawl through. As a flagship journal, ROPP gives its published authors extra visibility, potentially through a press release or coverage in Physics World.”

Nobel laureates who have published in ROPP

Change of focus Home for the last 90 years to top-quality review articles, Reports on Progress in Physics now also accepts primary research papers for the first time. (Courtesy: IOP Publishing)

Since its launch in 1934, Reports on Progress in Physics has published papers by numerous top scientists, including more than 20 current or future Nobel-prize-winning physicists. A selection of those papers written or co-authored by Nobel laureates over the journal’s first 90 years is given chronologically below. For brevity, papers by multiple authors list only the contributing Nobel winner.

Nevill Mott 1938 “Recent theories of the liquid state” (5 46) and 1939 “Reactions in solids” (6 186)
Hans Bethe 1939 “The physics of stellar interiors and stellar evolution” (6 1)
Max Born 1942 “Theoretical investigations on the relation between crystal dynamics and x-ray scattering” (9 294)
Martin Ryle 1950 “Radio astronomy” (13 184)
Willis Lamb 1951 “Anomalous fine structure of hydrogen and singly ionized helium” (14 19)
Abdus Salam 1955 “A survey of field theory” (18 423)
Alexei Abrikosov 1959 “The theory of a fermi liquid” (22 329)
David Thouless 1964 “Green functions in low-energy nuclear physics” (27 53)
Lawrence Bragg 1965 “First stages in the x-ray analysis of proteins” (28 1)
Melvin Schwartz 1965 “Neutrino physics” (28 61)
Pierre-Gilles de Gennes 1969 “Some conformation problems for long macromolecules” (32 187)
Dennis Gabor 1969 “Progress in holography” (32 395)
John Clauser 1978 “Bell’s theorem. Experimental tests and implications” (41 1881)
Norman Ramsey 1982 “Electric-dipole moments of elementary particles” (45 95)
Martin Perl 1992 “The tau lepton” (55 653)
Charles Townes 1994 “The nucleus of our galaxy” (57 417)
Pierre Agostini 2004 “The physics of attosecond light pulses” (67 813)
Takaaki Kajita 2006 “Discovery of neutrino oscillations” (69 1607)
Konstantin Novoselov 2011 “New directions in science and technology: two-dimensional crystals” (74 082501)
John Michael Kosterlitz 2016 “Kosterlitz–Thouless physics: a review of key issues” (79 026001)
Anthony Leggett 2016 “Liquid helium-3: a strongly correlated but well understood Fermi liquid” (79 054501)
Ferenc Krausz 2017 “Attosecond physics at the nanoscale” (80 054401)
Isamu Akasaki 2018 “GaN-based vertical-cavity surface-emitting lasers with AlInN/GaN distributed Bragg reflectors” (82 012502)

An event for the community

As another reminder of its place in the physics community, ROPP is hosting a two-day event at the IOP’s headquarters in London and online. Taking place on 9–10 October 2024, the hybrid event will present the latest cutting-edge condensed-matter research, from fundamental work to applications in superconductivity, topological insulators, superfluids, spintronics and beyond. Confirmed speakers at Progress in Physics 2024 include Piers Coleman (Rutgers University), Susannah Speller (University of Oxford), Nandini Trivedi (Ohio State University) and many more.

artist's impression of a superconducting cube levitating
Keep up with the action The latest advances in superconductivity are among the hot topics to be discussed at a hybrid meeting on 9–10 October 2024 online and in London to mark the 90th anniversary of IOP Publishing’s flagship journal Reports on Progress in Physics. (Courtesy: iStock/koto_feja)

“We’re taking the journal out into the community,” says Gevaux. “IOP Publishing is very heavily associated with the IOP and of course the IOP has a large membership of physicists in the UK, Ireland and beyond. With the meeting, the idea is to bring that community and the journal together. This first meeting will focus on condensed-matter physics, with some of the ROPP board members giving plenary talks along with lectures from invited, external scientists and a poster session too.”

Longer-term, IOP Publishing plans to put ROPP at the top of a wider series of journals under the “Progress in” brand. The first of those journals is Progress in Energy, which was launched in 2019 and – like ROPP – has now also expanded its remit to include primary-research papers. Other similar spin-off journals in different topic areas will be launched over the next few years, giving IOP Publishing what it hopes is a series of journals to match the best in the world.

For Sachdev, publishing with ROPP is all about having “the stamp of approval” from the academic community. “So if you think your field has now reached a point where a scholarly assessment of recent advances is called for, then please consider ROPP,” he says. “We have a very strong editorial board to help you produce a high-quality, impactful article, now with the option of open access and publishing really high-quality primary research papers too.”

The post Flagship journal <em>Reports on Progress in Physics</em> marks 90th anniversary with two-day celebration appeared first on Physics World.


Quantum growth drives investment in diverse skillsets

Par : No Author
10 septembre 2024 à 15:20

The meteoric rise of quantum technologies from research curiosity to commercial reality is creating all the right conditions for a future skills shortage, while the ongoing pursuit of novel materials continues to drive demand for specialist scientists and engineers. Within the quantum sector alone, headline figures from McKinsey & Company suggest that less than half of available quantum jobs will be filled by 2025, with global demand being driven by the burgeoning start-up sector as well as enterprise firms that are assembling their own teams to explore the potential of quantum technologies for transforming their businesses.

While such topline numbers focus on the expertise that will be needed to design, build and operate quantum systems, a myriad of other skilled professionals will be needed to enable the quantum sector to grow and thrive. One case in point is the diverse workforce of systems engineers, measurement scientists, service engineers and maintenance technicians who will be tasked with building and installing the highly specialized equipment and instrumentation that is needed to operate and monitor quantum systems.

“Quantum is an incredibly exciting space right now, and we need to prepare for the time when it really takes off and explodes,” says Matt Martin, Managing Director of Oxford Instruments NanoScience, a UK-based company that manufactures high-performance cryogenics systems and superconducting magnets. “But for equipment makers like us the challenge is not just about quantum, since we are also seeing increased demand from both academia and industry for emerging applications in scientific measurement and condensed-matter physics.”

Martin points out that Oxford Instruments already works hard to identify and nurture new talent. Within the UK the company has for many years sponsored doctoral students to foster a deeper understanding of physics in the ultracold regime, and it also offers placements to undergraduates to spark an early interest in the technology space. The firm is also dialled into the country’s apprenticeship scheme, which offers an effective way to train young people in the engineering skills needed to manufacture and maintain complex scientific instruments.

Despite these initiatives, Martin acknowledges that NanoScience faces the same challenges as other organizations when it comes to recruiting high-calibre technical talent. In the past, he says, a skilled scientist would have been involved in all stages of the development process, but now the complexity of the systems and depth of focus required to drive innovation across multiple areas of science and engineering has led to the need for greater specialization. While collaboration with partners and sister companies can help, the onus remains on each business to develop a core multidisciplinary team.

Building ultracold and measurement expertise

The key challenge for companies like Oxford Instruments NanoScience is finding physicists and engineers who can create the ultracold environments that are needed to study both quantum behaviour and the properties of novel materials. Compounding that issue is the growing trend towards providing the scientific community with more automated solutions, which has made it much easier for researchers to configure and conduct experiments at ultralow temperatures.

Harriet van der Vliet
Quantum focus Harriet van der Vliet, the product manager for quantum technologies at Oxford Instruments NanoScience, with one of the company’s dilution refrigerators. (Courtesy: Oxford Instruments NanoScience)

“In the past PhD students might have spent a significant amount of time building their experiments and the hardware needed for their measurements,” explains Martin. “With today’s push-button solutions they can focus more on the science, but that changes their knowledge because there’s no need for them to understand what’s inside the box. Today’s measurement scientists are increasingly skilled in Python and integration, but perhaps less so in hardware.”

Developing such comprehensive solutions demands a broader range of technical specialists, such as software programmers and systems engineers, that are in short supply across all technology-focused industries. With many other enticing sectors vying for their attention, such as the green economy, energy and life sciences, and the rise of AI-enabled robotics, Martin understands the importance of inspiring young people to devote their energies to the technologies that underpin the quantum ecosystem. “We’ve got to be able to tell our story, to show why this new and emerging market is so exciting,” he says. “We want them to know that they could be part of something that will transform the future.”

To raise that awareness Oxford Instruments has been working to establish a series of applications centres in Japan, the US and the UK. One focus for the centres will be to provide training that helps users to get the most out of the company’s instruments, particularly for those without direct experience of building and configuring an ultracold system. But another key objective is to expose university-level students to research-grade technology, which in turn should help to highlight future career options within the instrumentation sector.

To build on this initiative Oxford Instruments is now actively discussing opportunities to collaborate with other companies on skills development and training in the US. “We all want to provide some hands-on learning for students as they progress through their university education, and we all want to find ways to work with government programmes to stimulate this training,” says Martin. “It’s better for us to work together to deliver something more substantial rather than doing things in a piecemeal way.”

That collaboration is likely to centre around an initiative launched by US firm Quantum Design back in 2015. Under the scheme, now badged Discovery Teaching Labs, the company has donated its PPMS VersaLab, a commercial system for low-temperature materials analysis, to several university departments in the US. As part of the initiative the course professors are also asked to create experimental modules that enable undergraduate students to use this state-of-the-art technology to explore key concepts in condensed-matter physics.

“Our initial goal was to partner with universities to develop a teaching curriculum that uses hands-on learning to inspire students to become more interested in physics,” says Quantum Design’s Barak Green, who has been a passionate advocate for the scheme. “By enabling students to become confident with using these advanced scientific instruments, we have also found that we have equipped them with vital technical skills that can open up new career paths for them.”

One of the most successful partnerships has been with California State University San Marcos (CSUSM), a small college that mainly attracts students from communities with no prior tradition of pursuing a university education. “There is no way that the students at CSUSM would have been able to access this type of equipment in their undergraduate training, but now they have a year-long experimental programme that enhances their scientific learning and makes them much more comfortable with using such an advanced system,” says Green. “Many of these students can’t afford to stay in school to study for a PhD, and this programme has given them the knowledge and experience they need to get a good job.”

California State University San Marcos (CSUSM)
Teaching and discovery To build knowledge and skills among physics students, Quantum Design has developed an initiative for donating research-grade equipment to undergraduate teaching labs. (Courtesy: Quantum Design)

Indeed, Quantum Design has already hired around 20 students from CSUSM and other local programmes. “We didn’t start the initiative with that in mind, but over the years we discovered that we had all these highly skilled people who could come and work for us,” Green continues. “Students who only do theory are often very nervous around these machines, but the CSUSM graduates bring a whole extra layer of experience and know-how. Not everyone needs to have a PhD in quantum physics; we also need people who can go into the workforce and build the systems that the scientists rely on.”

This overwhelming success has given greater impetus to the programme, with Quantum Design now seeking to bring in other partners to extend its reach and impact. LakeShore Cryotronics, a long-time collaborator that designs and builds low-temperature measurement systems that can be integrated into the VersaLab, was the first company to make the commitment. In 2023 the US-based firm donated one of its M91 FastHall measurement platforms to join the VersaLab already installed at CSUSM, and the two partners are now working together to establish an undergraduate teaching lab at Stony Brook University in New York.

“It’s an opportunity for like-minded scientific companies to give something back to the community, since most of our products are not affordable for undergraduate programmes,” says Lake Shore’s Chuck Cimino, who has now joined the board of advisors for the Discovery Teaching Labs programme. “Putting world-class equipment into the hands of students can influence their decisions to continue in the field, and in the long term will help to build a future workforce of skilled scientists and engineers.”

Conversations with other equipment makers at the 2024 APS March Meeting also generated significant interest, potentially paving the way for Oxford Instruments to join the scheme. “It’s a great model to build on, and we are now working to see how we might be able to commit some of our instruments to those training centres,” says Martin, who points out that the company’s Proteox S platform offers the ideal entry-level system for teaching students how to manage a cold space for experiments with qubits and condensed-matter systems. “We’ve developed a lot of training on the hardware and the physicality of how the systems work, and in that spirit of sharing there’s lots of useful things we could do.”

While those discussions continue, Martin is also looking to a future when quantum-powered processors become a practical reality in compute-intensive settings such as data centres. “At that point there will be huge demand for ultracold systems that are capable of hosting and operating large-scale quantum computers, and we will suddenly need lots of people who can install and service those sorts of systems,” he says. “We are already thinking about ways to set up training centres to develop that future workforce, which will primarily be focused around service engineers and maintenance technicians.”

Martin believes that partnering with government labs could offer a solution, particularly in the US where various initiatives are already in place to teach technical skills to college-level students. “It’s about taking that forward view,” he says. “We have already built a product that can be used for training purposes, and we have started discussions with US government agencies to explore how we could work together to build the workforce that will be needed to support the big industrial players.”

The post Quantum growth drives investment in diverse skillsets appeared first on Physics World.

Fusion’s public-relations drive is obscuring the challenges that lie ahead

By: No Author
9 September 2024, 12:00

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” So stated the Nobel laureate Richard Feynman during a commission hearing into NASA’s Challenger space shuttle disaster in 1986, which killed all seven astronauts onboard.

Those famous words have since been applied to many technologies, but they are becoming especially apt to nuclear fusion where public relations currently appears to have the upper hand. Fusion has recently been successful in attracting public and private investment and, with help from the private sector, it is claimed that fusion power can be delivered in time to tackle climate change in the coming decades.

Yet this rosy picture hides the complexity of the novel nuclear technology and plasma physics involved. As John Evans – a physicist who has worked at the Atomic Energy Research Establishment in Harwell, UK – recently highlighted in Physics World, there is a lack of proven solutions for the fusion fuel cycle, which involves breeding and reprocessing unprecedented quantities of radioactive tritium with extremely low emissions.

Unfortunately, this is just the tip of the iceberg. Another stubborn roadblock lies in instabilities in the plasma itself – for example, so-called Edge Localised Modes (ELMs), which originate in the outer regions of tokamak plasmas and are akin to solar flares. If not strongly suppressed they could vaporize areas of the tokamak wall, causing fusion reactions to fizzle out. ELMs can also trigger larger plasma instabilities, known as disruptions, that can rapidly dump the entire plasma energy and apply huge electromagnetic forces that could be catastrophic for the walls of a fusion power plant.

In a fusion power plant, the total thermal energy stored in the plasma needs to be about 50 times greater than that achieved in the world’s largest machine, the Joint European Torus (JET). JET operated at the Culham Centre for Fusion Energy in Oxfordshire, UK, until it was shut down in late 2023. I was responsible for upgrading JET’s wall to tungsten/beryllium and subsequently chaired the wall protection expert group.

JET was an extremely impressive device, and just before it ceased operation it set a new world record for controlled fusion energy production of 69 MJ. While this was a scientific and technical tour de force, in absolute terms the fusion energy created and plasma duration achieved at JET were minuscule. A power plant with a sustained fusion power of 1 GW would produce 86 million MJ of fusion energy every day. Furthermore, large ELMs and disruptions were a routine feature of JET’s operation and occasionally caused local melting. Such behaviour would render a power plant inoperable, yet these instabilities remain to be reliably tamed.
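The scale gap quoted above is easy to verify with back-of-the-envelope arithmetic, using only the figures given in the article:

```python
# Back-of-the-envelope check of the energy-scale gap described above.
SECONDS_PER_DAY = 24 * 60 * 60           # 86,400 s

plant_power_W = 1e9                       # 1 GW of sustained fusion power
daily_energy_MJ = plant_power_W * SECONDS_PER_DAY / 1e6   # joules -> MJ

jet_record_MJ = 69                        # JET's 2023 world record

print(f"Daily plant output: {daily_energy_MJ:.3g} MJ")    # ~8.64e7 MJ, i.e. ~86 million MJ
print(f"Ratio to JET record: {daily_energy_MJ / jet_record_MJ:.2e}")
```

A 1 GW plant would therefore produce roughly a million times JET’s record-setting pulse energy every single day, which is the sense in which JET’s output was “minuscule” in absolute terms.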

Complex issues

Fusion is complex – solutions to one problem often exacerbate other problems. Furthermore, many of the physics and technology features that are essential for fusion power plants, and that require substantial development and testing in a fusion environment, were not present in JET. One example is the technology to drive the plasma current sustainably using microwaves. The purpose of the international ITER project, which is currently being built in Cadarache, France, is to address such issues.

ITER, which is modelled on JET, is a “low duty cycle” physics and engineering experiment. Delays and cost increases are the norm for large nuclear projects and ITER is no exception. It is now expected to start scientific operation in 2034, but the first experiments using “burning” fusion fuel – a mixture of deuterium and tritium (D–T) – are only set to begin in 2039. ITER, which is equipped with many plasma diagnostics that would not be feasible in a power plant, will carry out an extensive research programme that includes testing tritium-breeding technologies on a small scale, ELM suppression using resonant magnetic perturbation coils and plasma-disruption mitigation systems.

Yet the challenges ahead cannot be overstated. For fusion to become commercially viable with an acceptably low output of nuclear waste, several generations of power-plant-sized devices could be needed following any successful first demonstration of substantial fusion-energy production. Indeed, EUROfusion’s Research Roadmap, which the UK co-authored when it was still a member of EUROfusion, sees fusion as only making a significant contribution to global energy production in the course of the 22nd century. This may be politically unpalatable, but it is a realistic conclusion.

The current UK strategy is to construct a fusion power plant – the Spherical Tokamak for Energy Production (STEP) – at West Burton, Nottinghamshire, by 2040 without awaiting results from intermediate experiments such as ITER. This strategy would appear to be a consequence of post-Brexit politics. However, it looks unrealistic scientifically, technically and economically. The total thermal energy of the STEP plasma needs to be about 5000 times greater than has so far been achieved in the UK’s MAST-U spherical tokamak experiment. This will entail an extreme, and unprecedented, extrapolation in physics and technology. Furthermore, the compact STEP geometry means that during plasma disruptions its walls would be exposed to far higher energy loads than ITER, where the wall protection systems are already approaching physical limits.

I expect that the complexity inherent in fusion will continue to provide its advocates, both in the public and private sphere, with ample means to obscure both the severity of the many issues that lie ahead and the timescales required. Returning to Feynman’s remarks, sooner or later reality will catch up with the public relations narrative that currently surrounds fusion. Nature cannot be fooled.

The post Fusion’s public-relations drive is obscuring the challenges that lie ahead appeared first on Physics World.

Taking the leap – how to prepare for your future in the quantum workforce

By: No Author
6 September 2024, 17:16

It’s official: after endorsement from 57 countries and the support of international physics societies, the United Nations has officially declared that 2025 is the International Year of Quantum Science and Technology (IYQ).

The year has been chosen as it marks the centenary of Werner Heisenberg laying out the foundations of quantum mechanics – a discovery that would earn him the Nobel Prize for Physics in 1932. As well as marking one of the most significant breakthroughs in modern science, the IYQ also reflects the recent quantum renaissance. Applications that use the quantum properties of matter are transforming the way we obtain, process and transmit information, and physics graduates are uniquely positioned to make their mark on the industry.

It’s certainly big business these days. According to estimates from McKinsey, in 2023 global quantum investments were valued at $42bn. Whether you want to build a quantum computer, an unbreakable encryption algorithm or a high-precision microscope, the sector is full of exciting opportunities. With so much going on, however, it can be hard to make the right choices for your career.

To make the quantum landscape easier to navigate as a jobseeker, Physics World has spoken to Abbie Bray, Araceli Venegas-Gomez and Mark Elo – three experts in the quantum sector, from academia and industry. They give us their exclusive perspectives and advice on the future of the quantum marketplace; job interviews; choosing the right PhD programme; and managing risk and reward in this emerging industry.

Quantum going mainstream: Abbie Bray

According to Abbie Bray, lecturer in quantum technologies at University College London (UCL) in the UK, the second quantum revolution has broadened opportunities for graduates. Until recently, there was only one way to work in the quantum sector – by completing a PhD followed by a job in academia. Now, however, more and more graduates are pursuing research in industry, where established companies such as Google, Microsoft and BT – as well as numerous start-ups like Rigetti and Universal Quantum – are racing to commercialize the technology.

Abbie Bray
Abbie Bray “Theorists and experimentalists need to move at the same time.” (Courtesy: Henry Bennie)

While a PhD is generally needed for research, Bray is seeing more jobs for bachelor’s and master’s graduates as quantum goes mainstream. “If you’re an undergrad who’s loving quantum but maybe not loving the research or some of the really high technical skills, there’s other ways to still participate within the quantum sphere,” says Bray. With so many career options in industry, government, consulting or teaching, Bray is keen to encourage physics graduates to consider these as well as a more traditional academic route.

She adds that it’s important to have physicists involved in all parts of the industry. “If you’re having people create policies who maybe haven’t quite understood the principles or impact or the effort and time that goes into research collaboration, then you’re lacking that real understanding of the fundamentals. You can’t have that right now because it’s a complex science, but it’s a complex science that is impacting society.”

So whether you’re a PhD student or an undergraduate, there are pathways into the quantum sector, but how can you make yourself stand out from the crowd? Bray has noticed that quantum physics is not taught in the same way across universities, with some students getting more exposure to the practical applications of the field than others. If you find yourself in an environment that isn’t saturated with quantum technology, don’t panic – but do consider getting additional experience outside your course. Bray highlights PennyLane, a Python library for programming quantum computers that also provides learning resources.
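For readers who have not yet met quantum programming, even a hand-rolled simulation conveys the kind of calculation that libraries such as PennyLane automate. The sketch below is plain Python (deliberately not PennyLane’s actual API): it applies a single-qubit RX rotation to the |0⟩ state and recovers the textbook measurement probability P(|1⟩) = sin²(θ/2):

```python
import math
import cmath

def rx_on_zero(theta):
    """Apply the RX(theta) gate to |0> and return the amplitudes (a0, a1).

    RX(theta) = [[cos(t/2), -i*sin(t/2)],
                 [-i*sin(t/2), cos(t/2)]]   with t = theta.
    """
    t = theta / 2
    return (cmath.cos(t), -1j * cmath.sin(t))

def prob_one(theta):
    """Probability of measuring |1> after RX(theta) acts on |0>."""
    _, a1 = rx_on_zero(theta)
    return abs(a1) ** 2

# Analytically P(|1>) = sin^2(theta/2); a half rotation gives an equal split:
print(prob_one(math.pi / 2))   # ≈ 0.5
```

Quantum-programming libraries wrap exactly this kind of linear algebra in a device-and-circuit abstraction, so that the same code can later target real hardware.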

Consider your options

Something else to be aware of, particularly for those contemplating a PhD, is that “quantum technologies” is a broad umbrella term, and while there is some crossover between, say, sensing and computing, switching between disciplines can be a challenge. It’s therefore important to consider all your options before committing to a project and Bray thinks that Centres for Doctoral Training (CDTs) are a step in the right direction. UCL has recently launched a quantum computing and quantum communications CDT where students will undergo a six-month training period before writing their project proposal. She thinks this enables them to get the most out of their research, particularly if they haven’t covered some topics in their undergraduate degree. “It’s very important that during a PhD you do the research that you want to do,” Bray says.

When it comes to securing a job, PhD position or postdoc, non-technical skills can be just as valuable as quantum know-how. Bray says it’s important to demonstrate that you’re passionate and deeply knowledgeable about your favourite quantum topic, but graduates also need to be flexible and able to work in an interdisciplinary team. “If you think you’re a theorist, understand that it also does sometimes mean looking at and working with experimental data and computation. And if you’re an experimentalist, you’ve got to understand that you need to have a rigorous understanding of the theory before you can make any judgements on your experimentation.” As Bray summarises: “theorists and experimentalists need to move at the same time”.

The ability to communicate technical concepts effectively is also vital. You might need to pitch to potential investors, apply for grants or even communicate with the HR department so that they shortlist the best candidates. Bray adds that in her experience, physicists are conditioned to communicate their research very directly, which can be detrimental in interviews where panels want to hear narratives about how certain skills were demonstrated. “They want to know how you identified a situation, then you identified the action, then the resolution. I think that’s something that every single student, every single person right now should focus on developing.”

The quantum industry is still finding its feet and earlier this year it was reported that investment has fallen by 50% since a high in 2022. However, Bray argues that “if there has been a de-investment, there’s still plenty of money to go around” and she thinks that even if some quantum technologies don’t pan out, the sector will continue to provide valuable skills for graduates. “No matter what you do in quantum, there are certain skills and experiences that can cross over into other parts of tech, other parts of science, other parts of business.”

In addition, quantum research is advancing everything from software to materials science and Bray thinks this could kick-start completely new fields of research and technology. “In any race, there are horses that will not cross the finish line, but they might run off and cross some other finish line that we didn’t know existed,” she says.

Building the quantum workforce: Araceli Venegas-Gomez

While working in industry as an aerospace engineer, Araceli Venegas-Gomez was looking for a new challenge and decided to pursue her passion for physics, getting her master’s degree in medical physics alongside her other duties. Upon completing that degree in 2016, she decided to take on a second master’s followed by a PhD in quantum optics and simulation at the University of Strathclyde, UK. By the time the COVID-19 pandemic hit in 2020, she had defended her thesis, registered her company, and joined the University of Bristol Quantum Technology Enterprise Centre as an executive fellow.

Araceli Venegas-Gomez
Araceli Venegas-Gomez “If you have a background in physics and business, everyone is looking for you.” (Courtesy: Qureca)

It was during her studies at Strathclyde that Venegas-Gomez decided to use her vast experience across industry and academia, as well as her quantum knowledge. Thanks to a fellowship from the Optica Foundation, she was able to launch QURECA (Quantum Resources and Careers). Today, it’s a global company that helps to train and recruit individuals, while also providing business development advice for both individuals and companies in the quantum sphere. As founder and chief executive of the firm, her aims were to link the different stakeholders in the quantum ecosystem and to raise the quantum awareness of the general public. Crucially, she also wanted to ease the skills bottleneck in the quantum workforce and to bring newcomers into the quantum ecosystem.

As Venegas-Gomez points out, there is a significant scarcity of skilled quantum professionals for the many roles that need filling. This shortage is exacerbated by the competition between academia and industry for the same pool of talent. “Five or ten years ago, it was difficult enough to find graduate students who would like to pursue a career in quantum science, and that was just in academia,” explains Venegas-Gomez. “With the quantum market booming, industry is also looking to hire from the same pool of candidates, so you have more competition, for pretty much the same number of people.”

Slow progress

Venegas-Gomez highlights that the quantum arena is very broad. “You can have a career in research, or work in industry, but there are so many different quantum technologies that are coming onto the market, at different stages of development. You can work on software or hardware or engineering; you can do communications; you can work on developing the business side; or perhaps even in patent law.” While some of these jobs are highly technical and would require a master’s or a PhD in that specific area of quantum tech, there are plenty of roles that would accept graduates with only an MSc in physics or even a more interdisciplinary experience. “If you have a background in physics and business, everyone is looking for you,” she adds.

From what she sees in the quantum recruitment market today, there is no job shortage for physicists – instead there is a dearth of physicists with the right skills for a specific role. Venegas-Gomez explains that graduates with a physics degree in many fields have transferable skills that allow them to work in “absolutely any sector that you could imagine”. But depending on the specific area of academia or industry within the quantum marketplace that you might be interested in, you will likely require some specific competences.

As Bray also stated, Venegas-Gomez acknowledges that the skills and knowledge that physicists pick up can vary significantly between universities – making it challenging for employers to find the right candidates. To avoid picking the wrong course for you, Venegas-Gomez recommends that potential master’s and PhD students speak to a number of alumni from any given institute to find out more about the course, and see what areas they work in today. This can also be a great networking strategy, especially as some cohorts can have as few as 10–15 students, all keen to work with these companies or university departments in the future.

Despite the interest and investment in the quantum industry, new recruits should note that it is still in its early stages. This slow progress can lead to high expectations that are not met, causing frustration for both employers and potential employees. “Only today, we had an employer approach us (QURECA) saying that they wanted someone with three to four years’ experience in Python, and a bachelor’s or master’s degree – it didn’t have to be quantum or even physics specifically,” reveals Venegas-Gomez. “This means that [to get this particular job] you could have a background in computer science or software engineering. Having an MSc in quantum per se is not going to guarantee that you get a job in quantum technologies, unless that is something very specific that employer is looking for.”

So what specific competencies are employers across the board looking for? If a company isn’t looking for a specific technical qualification, what happens if they get two similar CVs for the same role? Do they look at an applicant’s research output and publications, or are they looking for something different? “What I find is that employers are looking for candidates who can show that, alongside their academic achievements, they have been doing outreach and communication activities,” says Venegas-Gomez. “Maybe you took on a business internship and have a good idea of how the industry works beyond university – this is what will really stand out.”

She adds that so-called soft skills – such as demonstrating good leadership, teamwork and excellent communication – are highly valued. “This is an industry where highly skilled technical people need to be able to work with people vastly beyond their area of expertise. You need to be able to explain Hamiltonians or error corrections to someone who is not quantum-literate and explain the value of what you are working on.”

Venegas-Gomez is also keen that job-seekers realize that the chances of finding a role at a large firm such as Google, IBM or Microsoft are still slim-to-none for most quantum graduates. “I have seen a lot of people complete their master’s in a quantum field and think that they will immediately find the perfect job. The reality is that they likely need to be patient and get some more experience in the field before they get that dream job.” Her main advice to students is to clearly define their career goals, within the context of the booming and ever-growing quantum market, before pursuing a specific degree. The skills you acquire with a quantum degree are also highly transferable to other fields, meaning there are lots of alternatives out there even if you can’t find the right job in the quantum sphere. For example, experience in data science or software development can complement quantum expertise, making you a versatile and coveted contender in today’s job market.

Approaching “quantum advantage”: Mark Elo

Last year, IBM broke records by building the first quantum chip with more than 1000 qubits. The project represents millions of dollars of investment and the company is competing with the likes of Intel and Google to achieve “quantum advantage”, which refers to a quantum computer that can solve problems that are out of reach for classical machines.

Despite the hype, there is work to be done before the technology becomes widespread – a commercial quantum computer needs millions of qubits, and challenges in error correction and algorithm efficiency must be addressed.

Mark Elo
Mark Elo “There are some geniuses in the world, but if they can’t communicate it’s no good in an industrial environment.”

“We’re trying to move it away from a science experiment to something that’s more an industrial product,” says Mark Elo, chief marketing officer at Tabor Electronics. Tabor has been building electronic signal equipment for over 50 years and recently started applying this technology to quantum computing. The company’s focus is on control systems – classical electronic signals that interact with quantum states. At the 2024 APS March Meeting, Tabor, alongside its partners FormFactor and QuantWare, unveiled the first stage of the Echo-5Q project, a five-qubit quantum computer.

Elo describes the five years he’s worked on quantum computing as a period of significant change. Whereas researchers once relied on “disparate pieces of equipment” to build experiments, he says that the industry has changed such that “there are [now] products designed specifically for quantum computing”.

The ultimate goal of companies like Tabor is a “full-stack” solution where software and hardware are integrated into a single platform. However, the practicalities of commercializing quantum computing require a workforce with the right skills. Two years ago the consultancy company McKinsey reported that companies were already struggling to recruit, and it predicted that by 2025 half of the jobs in quantum computing would not be filled. Like many in the industry, Elo sees skills gaps in the sector that must be addressed to realize the potential of quantum technology.

Elo’s background is in solid-state electronics, and he worked for nearly three decades on radio-frequency engineering for companies including HP and Keithley. Most quantum-computing control systems use radio waves to interface with the qubits, so when he moved to Tabor in 2019, Elo saw his career come “full circle”, combining the knowledge from his degree with his industry experience. “It’s been like a fusion of two technologies,” he says.

It’s at this interface between physics and electronic engineering where Elo sees a skills shortage developing. “You need some level of electrical engineering and radio-frequency knowledge to lay out a quantum chip,” he explains. “The most common qubit is a transmon, and that is all driven by radio waves. Deep knowledge of how radio waves propagate through cables, through connectors, through the sub-assemblies and the amplifiers in the refrigeration unit is very important.” Elo encourages physics students interested in quantum computing to consider adding engineering – specifically radio-frequency electronics – courses to their curricula.

Transferable skills

The Tabor team brings together engineers and physicists, but there are some universal skills it looks for when recruiting. People skills, for example, are a must. “There are some geniuses in the world, but if they can’t communicate it’s no good in an industrial environment,” says Elo.

Elo describes his work as “super exciting” and says “I feel lucky in the career and the technology I’ve been involved in because I got to ride the wave of the cellular revolution all the way up to 5G and now I’m on to the next new technology.” However, because quantum is an emerging field, he thinks that graduates need to be comfortable with some risk before embarking on a career. He explains that companies don’t always make money right now in the quantum sector – “you spend a lot to make a very small amount”. But, as Elo’s own career shows, the right technical skills will always allow you to switch industries if needed.

Like many others, Elo is motivated by the excitement of competing to commercialize this new technology. “It’s still a market that’s full of ideas and people marketing their ideas to raise money,” he says. “The real measure of success is to be able to look at when those ideas become profitable. And that’s when we know we’ve crossed a threshold.”

The post Taking the leap – how to prepare for your future in the quantum workforce appeared first on Physics World.

Researchers with a large network of unique collaborators have longer careers, finds study

By: No Author
5 September 2024, 17:00

Are you keen to advance your scientific career? If so, it helps to have a big network of colleagues and a broad range of unique collaborators, according to a new analysis of physicists’ publication data. The study also finds that female scientists tend to work in more tightly connected groups than men, which can hamper their career progression.

The study was carried out by a team led by Mingrong She, a data analyst at Maastricht University in the Netherlands. It examined the article history of more than 23,000 researchers who had published at least three papers in American Physical Society (APS) journals. Each scientist’s last paper had been published before 2015, suggesting their research career had ended (arXiv:2408.02482).

To measure “collaboration behaviour”, the study noted the size of each scientist’s collaborative network, the reoccurrence of collaborations, the “interconnectivity” of the co-authors and the average number of co-authors per publication. Physicists with larger networks and a greater number of unique collaborators were found to have had longer careers and been more likely to become principal investigators, as given by their position in the author list.

On the other hand, publishing repeatedly with the same highly interconnected co-authors is associated with shorter careers and a lower chance of achieving principal investigator status, as is having a larger average number of co-authors.

The team also found that the more that physicists publish with the same co-authors, the more interconnected their networks become. Conversely, as network size increases, networks tended to be less dense and repeat collaboration less frequent.
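The metrics described above can be computed directly from an author’s publication history. A minimal sketch, using only the standard library and a hypothetical toy data set (the study itself used APS journal records):

```python
from itertools import combinations

# Hypothetical publication history: each entry is one paper's author list.
papers = [
    ("alice", "bob", "carol"),
    ("alice", "bob"),
    ("alice", "dave", "erin"),
]

def collaboration_stats(papers, author):
    """Network size, repeat-collaboration rate and co-author interconnectivity
    for one author, in the spirit of the metrics used in the study."""
    collabs = []                          # every co-authorship event, with repeats
    for authors in papers:
        if author in authors:
            collabs.extend(a for a in authors if a != author)
    network = set(collabs)                # unique collaborators = network size
    repeat_rate = 1 - len(network) / len(collabs)

    # Interconnectivity: the fraction of collaborator pairs who also publish
    # together themselves (a simple network-density proxy).
    linked = sum(
        any(u in p and v in p for p in papers)
        for u, v in combinations(sorted(network), 2)
    )
    pairs = len(network) * (len(network) - 1) // 2
    density = linked / pairs if pairs else 0.0
    return len(network), repeat_rate, density

print(collaboration_stats(papers, "alice"))
```

On this toy data, “alice” has four unique collaborators, a 20% repeat-collaboration rate, and a sparse network (only two of the six collaborator pairs are themselves linked) – the profile the study associates with longer careers.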

Close-knit collaboration

In terms of gender, the study finds that women have more interconnected networks and a higher average number of co-authors than men. Female physicists are also more likely to publish repeatedly with the same co-authors, with women therefore being less likely than men to become principal investigators. Male scientists also have longer overall careers and stay in science longer after achieving principal investigator status than women, the study finds.

Collaborating with experts from diverse backgrounds introduces novel perspectives and opportunities

Mingrong She

“Collaborating with experts from diverse backgrounds introduces novel perspectives and opportunities [and] increases the probability of establishing connections with prominent researchers and institutions,” She told Physics World. Diverse collaboration also “mitigates the risk of being confined to a narrow niche and enhances adaptability”, she adds, “both of which are indispensable for long-term career growth”.

Close-knit collaboration networks can be good for fostering professional support, the study authors state, but they reduce opportunities for female researchers to form new professional connections and lower their visibility within the broader scientific community. Similarly, larger numbers of co-authors dilute individual contributions, making it harder for female researchers to stand out.

She says the study “highlights how the structure of collaboration networks can reinforce existing inequalities, potentially limiting opportunities for women to achieve career longevity and progression”. Such issues could be improved with policies that help scientists to engage a wider array of collaborators, rewarding and encouraging small-team publications and diverse collaboration. Policies could include adjustments to performance evaluations and grant applications, and targeted training programmes.

The study also highlights lower mobility as a major obstacle for female scientists, suggesting that better childcare support, hybrid working and financial incentives could help improve the mobility and network size of female scientists.

The post Researchers with a large network of unique collaborators have longer careers, finds study appeared first on Physics World.

Quark distribution in light–heavy mesons is mapped using innovative calculations

By: No Author
4 September 2024, 09:27

The distribution of quarks inside flavour-asymmetric mesons has been mapped by Yin-Zhen Xu of the University of Huelva and Pablo de Olavide University in Spain. These mesons are strongly interacting particles composed of a quark and an antiquark, one heavy and one light.

Xu employed the Dyson–Schwinger/Bethe–Salpeter equation technique to calculate the heavy–light meson electromagnetic form factors, which can be measured in collider experiments. These form factors provide invaluable information about the properties of the strong interactions as described by quantum chromodynamics.

“The electromagnetic form factors, which describe the response of composite particles to electromagnetic probes, provide an important tool for understanding the structure of bound states in quantum chromodynamics,” explains Xu. “In particular, they can be directly related to the charge distribution inside hadrons.”
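In the simplest textbook picture (a non-relativistic approximation, not the full covariant treatment used in the paper), the relation Xu describes can be written as a Fourier transform of the charge density, with the slope at zero momentum transfer giving the charge radius:

```latex
F(\mathbf{q}^2) = \int \rho(\mathbf{r})\, e^{i\mathbf{q}\cdot\mathbf{r}}\, \mathrm{d}^3 r
\;\approx\; 1 - \frac{\mathbf{q}^2}{6}\,\langle r^2 \rangle + \dots,
\qquad
\langle r^2 \rangle = -6\,\frac{\mathrm{d}F}{\mathrm{d}\mathbf{q}^2}\bigg|_{\mathbf{q}^2=0}
```

Here $\rho(\mathbf{r})$ is the charge density of the bound state, normalized so that $F(0)$ equals the total charge in units of $e$.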

From numerous experiments, we know that particles that interact via the strong force (such as protons and neutrons) consist of quarks bound together by gluons. This is similar to how nuclei and electrons are bound into atoms through the exchange of photons, as described by quantum electrodynamics. However, doing precise calculations in quantum chromodynamics is nearly impossible, and this makes predicting the internal structure of hadrons extremely challenging.

Approximation techniques

To address this challenge, scientists have developed several approximation techniques. One such method is the lattice approach, which replaces the infinite number of points in real space with a finite grid, making calculations more manageable. Another effective method involves solving the Dyson–Schwinger/Bethe–Salpeter equations, which ignore certain subtle effects in the strong interactions of quarks with gluons, as well as the virtual quark–antiquark pairs that are constantly appearing and disappearing in the vacuum.

Xu’s new study, described in the Journal of High Energy Physics, utilized the Dyson–Schwinger/Bethe–Salpeter approach to investigate the properties of hadrons made of quarks and antiquarks of different types (or flavours) with significant mass differences. For instance, K-mesons are composed of a strange antiquark with a mass of around 100 MeV and an up or down quark with a mass of only a few megaelectronvolts. The substantial difference in quark masses simplifies their interaction, which allowed Xu to extract more information about the structure of flavour-asymmetric mesons.

Xu began his study by calculating the masses of mesons and comparing the results with experimental data. He found that the Dyson–Schwinger/Bethe–Salpeter method produced results comparable to the best previously used methods, validating his approach.

Deducing quark distributions

Xu’s next step was to deduce the distribution of quarks within the mesons. Quantum effects prevent particles from being localized in space, so he calculated the probability of their presence in certain regions, whose size depends on the properties of the quarks and their interactions with surrounding particles.

Xu discovered that the heavier the quark, the more localized it is within the meson, with distribution ranges differing by more than a factor of ten. For instance, in B-mesons the distribution range of the bottom antiquark (0.07 fm) is much smaller than that of the much lighter up or down quark (0.80 fm). In contrast, the distribution ranges of the two light quarks inside π-mesons are almost equal.

Using these quark distributions, Xu then computed the electromagnetic form factors, which encode the details of charge and current distribution within the mesons. The values he obtained closely matched the available experimental data.

In his work, Xu has shown that the Dyson–Schwinger/Bethe–Salpeter technique is particularly well suited for studying heavy–light mesons, often surpassing even the most sophisticated and resource-intensive methods used previously.

Room for refinement

Although Xu’s results are promising, he admits that there is room for refinement. On the experimental side, measuring some currently unknown form factors could allow comparisons with his computed values to further verify the method’s consistency.

From a theoretical perspective, more details about strong interactions within mesons could be incorporated into the Dyson–Schwinger/Bethe–Salpeter method to enhance computational accuracy. Additionally, other meson parameters can be computed using this approach, allowing more extensive comparisons with experimental data.

“Based on the theoretical framework applied in this work, other properties of heavy–light mesons, such as various decay rates, can be further investigated,” concludes Xu.

The study also provides a powerful tool for exploring the intricate world of strongly interacting subatomic particles, potentially opening new avenues in particle physics research.

The calculations are described in The Journal of High Energy Physics.

The post Quark distribution in light–heavy mesons is mapped using innovative calculations appeared first on Physics World.


Akiko Nakayama: the Japanese artist skilled in fluid mechanics

By: No Author
3 September 2024, 12:00

Any artist who paints is intuitively an expert in empirical fluid mechanics, manipulating liquid and pigment for aesthetic effect. The paint is usually brushed onto a surface material, although it can also be splattered onto a horizontal canvas in a technique made famous by Jackson Pollock or even layered on with a palette knife, as in the works of Paul Cézanne or Henri Matisse. But however the paint is delivered, once it dries, the result is always a fixed, static image.

Japanese artist Akiko Nakayama is different. Based in Tokyo, she makes the dynamic real-time flow of paint, ink and other liquids the centre of her work. Using a variety of colours, she encourages the fluids to move and mix, creating gorgeous, intricate patterns that transmute into unexpected forms and shades.

What also sets Nakayama apart is that she doesn’t work in private. Instead, she performs public “Alive painting” sessions, projecting her creations onto large surfaces, to the accompaniment of music. Audiences see the walls of the venue covered with coloured shapes that arise from natural processes modified by her intervention. The forms look abstract, but in their mutations often resemble living creatures in motion.

Inspired by ink

Born in 1988, Nakayama was trained in conventional techniques of Eastern and Western painting, earning degrees in fine art from Tokyo Zokei University in 2012 and 2014. Her interest in dynamic art goes back to a childhood calligraphy class, where she found herself enthralled by the beauty of the ink flowing in the water while washing her brush.

“It was more beautiful than the characters [I had written],” she recalls, finding herself “fascinated by the freedom of the ink”. Later, while learning to draw, she always preferred to capture a “moment of time” in her sketches. Eventually, Nakayama taught herself how to make patterns from moving fluids, motivated by Johann Wolfgang von Goethe’s treatise Theory of Colours (1810).

Best known as a writer, Goethe also took a keen interest in science and his book critiques Isaac Newton’s work on the physical properties of light. Goethe instead offered his own more subjective insights into his experiments with colour and the beauty they produce. Despite its flaws as a physical theory of light, reading the book encouraged Nakayama to develop methods to pour and agitate various paints in Petri dishes, and to project the results in real time using a camera designed for close-up viewing.

Akiko Nakayama stands bottom right of a large screen that displays the artwork she is creating on stage
Wonderful stuff Appearing at the Dao Xuan Festival in Hanoi, Vietnam, in 2019, Akiko Nakayama is shown here creating a “dendritic” painting by applying colourful droplets of acrylic ink mixed with alcohol onto a flat surface coated with a layer of acrylic paint. Audience members watch as beautiful fractal structures appear in front of their eyes. (Courtesy: Akiko Nakayama)

She started learning about liquids, reading research papers and even began examining the behaviour of water droplets under strobe lights. Nakayama also looked into studies by JAXA, the Japanese space agency, of the behaviour of liquids in zero gravity. After finding a 10 ml sample of ferrofluid – a nanoscale ferromagnetic colloidal liquid – in a student science kit, she started using the material in her presentations, manipulating it with a small permanent magnet.

Nakayama’s art has an unexpected link with space science because ferrofluids were invented in 1963 by NASA engineer Steve Papell, who sought a way to pump liquid rocket fuel in microgravity environments. By putting tiny iron oxide particles into the fuel, he found that the liquid could be drawn into the rocket engine by an electromagnet. Ferrofluids were never used by NASA, but they have many applications in industry, medicine and consumer products.

Secret science of painting

Over the last decade Nakayama has presented dozens of live performances, exhibitions and commissioned works in Japan and internationally, and other scientific connections have emerged along the way. She has, for example, mixed acrylic ink with alcohol, dropping the fluid onto a thin layer of acrylic paint to create wonderfully intricate branched, tree-like dendritic forms.

In 2023 her painting caught the attention of materials scientists San To Chan and Eliot Fried at the Okinawa Institute of Science and Technology in Japan. They ended up working with Nakayama to analyse dendritic spreading in terms of the interplay of the different viscosities and surface tensions of the fluids (figure 1).

1 Magic mixtures

Images of 15 ink blots that have spread different amounts
(CC BY PNAS Nexus 3 59)

When pure ink is dropped onto an acrylic resin substrate 400 µm thick, it remains fairly static over time (top). But if isopropanol (IPA) is mixed into the ink, the combined droplet spreads out to yield intricate, tree-like dendritic patterns. Shown here are drops with IPA at two different volume concentrations: 16.7% (middle) and 50% (bottom).

Chan and Fried published their findings, concluding that the structures have a fractal dimension of 1.68, which is characteristic of “diffusion-limited aggregation” – a process that involves particles clustering together as they diffuse through a medium (PNAS Nexus 3 59).

The two researchers also investigated the liquid parameters so that an experimentalist or artist could tune them to vary the dendritic results. Nakayama calls this result a “map” that allows her to purposefully create varied artistic patterns rather than “going on an adventure blindly”. Chan and Fried have even drawn up a list of practical instructions so that anyone so inclined can make their own dendritic paintings at home.
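The “diffusion-limited aggregation” growth mentioned above is easy to sketch in code. The following is a toy lattice model of the general process – not Chan and Fried’s fluid system, and all function names and parameters here are illustrative. Random walkers are released near a central seed and freeze on first contact with the cluster; a mass–radius fit then recovers a fractal dimension well below 2:

```python
import math
import random

def simulate_dla(n_particles=300, size=101, seed=1):
    """Grow a diffusion-limited aggregation cluster on a square grid.

    A seed site sits at the centre; each new particle random-walks in
    from a launch circle just outside the cluster and sticks the first
    time it lands next to an occupied site."""
    random.seed(seed)
    c = size // 2
    grid = [[False] * size for _ in range(size)]
    grid[c][c] = True
    max_r = 1  # upper bound on the current cluster radius
    for _ in range(n_particles):
        r_launch = min(max_r + 5, c - 1)
        theta = random.uniform(0.0, 2.0 * math.pi)
        x = c + int(r_launch * math.cos(theta))
        y = c + int(r_launch * math.sin(theta))
        while True:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
            if (x - c) ** 2 + (y - c) ** 2 > (c - 1) ** 2:
                # wandered off the playing field: relaunch the walker
                theta = random.uniform(0.0, 2.0 * math.pi)
                x = c + int(r_launch * math.cos(theta))
                y = c + int(r_launch * math.sin(theta))
                continue
            if any(grid[y + q][x + p]
                   for p, q in ((1, 0), (-1, 0), (0, 1), (0, -1))):
                grid[y][x] = True  # stick on first contact
                max_r = max(max_r, int(math.hypot(x - c, y - c)) + 1)
                break
    return grid

def mass_radius_dimension(grid, radii=(2, 4, 8, 16)):
    """Estimate the fractal dimension D from N(r) ~ r**D via a
    least-squares fit of log N against log r."""
    c = len(grid) // 2
    pts = [(x, y) for y, row in enumerate(grid)
           for x, occ in enumerate(row) if occ]
    logs = [(math.log(r),
             math.log(sum(1 for x, y in pts
                          if (x - c) ** 2 + (y - c) ** 2 <= r * r)))
            for r in radii]
    mx = sum(a for a, _ in logs) / len(logs)
    my = sum(b for _, b in logs) / len(logs)
    return (sum((a - mx) * (b - my) for a, b in logs)
            / sum((a - mx) ** 2 for a, _ in logs))

cluster = simulate_dla()
d_est = mass_radius_dimension(cluster)
```

With a few hundred particles the fitted dimension typically lands somewhere between roughly 1.5 and 1.8 – the same ballpark as the 1.68 reported for the dendritic paintings, which is what makes the diffusion-limited-aggregation signature recognisable.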

Another researcher who has also delved into the connection between fluid dynamics and art is Roberto Zenit, a mechanical engineer at Brown University in the US. Zenit has shown that Jackson Pollock created his famous abstract compositions by carefully controlling the motion of viscous filaments (Phys. Rev. Fluids 4 110507). Pollock also avoided hydrodynamic instabilities that would have otherwise made the paint break up before it hit the canvas (PLOS One 14 e0223706).

Deeper meanings

Although Nakayama likes to explore the science behind her artworks, she has not lost sight of the deeper meanings in art. She told me, for example, that the bubbles that sometimes arise as she creates liquid shapes have a connection with the so-called “Vanitas” tradition in art that emerged in western Europe in the 16th and 17th centuries.

Derived from the Latin word for “vanity”, this kind of art was not about having an over-inflated belief in oneself as the word might suggest. Instead, these still-life paintings, largely by Dutch artists, would often have symbols and images that indicate the transience and fragility of life, such as snuffed-out candles with wisps of smoke, or fragile soap bubbles blown from a pipe.

A large screen showing a bubble in a field of blue
Dramatic effect A close-up of one of the bubbles in Nakayama’s live paintings created at the Mutek festival in Tokyo in December 2020, with the artist on the right and musical accompanists on the left. (Courtesy: Haruka Akagi)

The real bubbles in Nakayama’s artworks always stay spherical thanks to their strong surface tension, thereby displaying – in her mind – a human-like mixture of strength and vulnerability. It’s not quite the same as the fragility of the Vanitas paintings, but for Nakayama – who acknowledges that she’s not a scientist – her works are all about creating “a visual conversation between an artist and science”.

Asked about her future directions in art, however, Nakayama’s response makes immediate sense to any scientist. “Finding universal forms of natural phenomena in paintings is a joy and discovery for me,” she says. “I would be happy to continue to learn about the physics and science that make up this world, and to use visual expression to say ‘the world is beautiful’.”

The post Akiko Nakayama: the Japanese artist skilled in fluid mechanics appeared first on Physics World.


Heavy exotic antinucleus gives up no secrets about antimatter asymmetry

By: No Author
29 August 2024, 15:08

An antihyperhydrogen-4 nucleus – the heaviest antinucleus ever produced – has been observed in heavy ion collisions by the STAR Collaboration at Brookhaven National Laboratory in the US. The antihypernucleus contains a strange quark, making it a heavier cousin of antihydrogen-4. Physicists hope that studying such antimatter particles could shed light on why there is much more matter than antimatter in the visible universe – however in this case, nothing new beyond the Standard Model of particle physics was observed.

In the first millionth of a second after the Big Bang, the universe is thought to have been too hot for quarks to have been bound into hadrons. Instead it comprised a strongly interacting fluid called a quark–gluon plasma. As the universe expanded and cooled, bound baryons and mesons were created.

The Standard Model forbids the creation of matter without the simultaneous creation of antimatter, and yet the universe appears to be made entirely of matter. While antimatter is created by nuclear processes – both naturally and in experiments – it is swiftly annihilated on contact with matter.

The Standard Model also says that matter and antimatter should be identical after charge, parity and time are reversed. Therefore, finding even tiny asymmetries in how matter and antimatter behave could provide important information about physics beyond the Standard Model.

Colliding heavy ions

One way forward is to create quark–gluon plasma in the laboratory and study particle–antiparticle creation. Quark–gluon plasma is made by smashing together heavy ions such as lead or gold. A variety of exotic particles and antiparticles emerge from these collisions. Many of them decay almost immediately, but their decay products can be detected and compared with theoretical predictions.

Quark–gluon plasma can include hypernuclei, which are nuclei containing one or more hyperons. Hyperons are baryons containing one or more strange quarks, making hyperons the heavier cousins of protons and neutrons. These hypernuclei are thought to have been present in the high-energy conditions of the early universe, so physicists are keen to see if they exhibit any matter/antimatter asymmetries.

In 2010, the STAR collaboration unveiled the first evidence of an antihypernucleus, which was created by smashing gold nuclei together at 200 GeV. This was the antihypertriton, which is the antimatter version of an exotic counterpart to tritium in which one of the down quarks in one of the neutrons is replaced by a strange quark.

Now, STAR physicists have created a heavier antihypernucleus. They recorded over 6 billion collisions using pairs of uranium, ruthenium, zirconium and gold ions moving at more than 99.9% of the speed of light. In the resulting quark–gluon plasma, the researchers found evidence of antihyperhydrogen-4 (an antihypertriton with an extra antineutron). Antihyperhydrogen-4 decays almost immediately by emitting a pion, producing antihelium-4 – a nucleus the collaboration had already detected in 2011. The researchers therefore knew what to look for among the debris of their collisions.

Sifting through the collisions

Sifting through the collision data, the researchers found 22 events that appeared to be antihyperhydrogen-4 decays. After subtracting the expected background, they were left with approximately 16 signal events – a statistically significant excess, allowing them to claim the observation of antihyperhydrogen-4.
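A back-of-envelope calculation shows why a handful of events can constitute an observation. The numbers below are inferred from the text (22 candidates minus ~16 signal implies a background of roughly 6); this is only an illustrative counting-experiment estimate, not STAR’s actual statistical treatment:

```python
import math

# Numbers inferred from the text (illustrative, not STAR's analysis)
n_observed = 22   # candidate antihyperhydrogen-4 decays
b_expected = 6.0  # expected background: 22 candidates minus ~16 signal

def poisson_tail(n, mu):
    """P(N >= n) for N ~ Poisson(mu): the chance that background alone
    fluctuates up to n or more events."""
    return 1.0 - sum(math.exp(-mu) * mu ** k / math.factorial(k)
                     for k in range(n))

# Probability that a background of ~6 fakes 22 or more events
p_value = poisson_tail(n_observed, b_expected)

# Crude signal-to-background significance, s / sqrt(b), for comparison
naive_sigma = (n_observed - b_expected) / math.sqrt(b_expected)
```

The tail probability comes out well below one in a million – even this crude estimate makes clear that a background of ~6 events essentially never fluctuates up to 22.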

The researchers also observed evidence of the decays of hyperhydrogen-4, antihypertriton and hypertriton. In all cases, the results were consistent with the predictions of charge–parity–time (CPT) symmetry. This is a central tenet of modern physics that says that if the charge and internal quantum numbers of a particle are reversed, the spatial co-ordinates are reversed and the direction of time is reversed, the outcome of an experiment will be identical.

STAR member Hao Qiu of the Institute of Modern Physics at the Chinese Academy of Sciences says that, in his view, the most important feature of the work is the observation of the hyperhydrogen-4. “In terms of the CPT test, it’s just that we’re able to do it…The uncertainty is not very small compared with some other tests.”

Qiu says that he, personally, hopes the latest research may provide some insight into violation of charge–parity symmetry (i.e. without flipping the direction of time). This has already been shown to occur in some systems. “Ultimately, though, we’re experimentalists – we look at all approaches as hard as we can,” he says, “but if we see CPT symmetry breaking we have to throw out an awful lot of current physics.”

“I really do think it’s an incredibly impressive bit of experimental science,” says theoretical nuclear physicist Thomas Cohen of the University of Maryland, College Park. “The idea that they make thousands of particles each collision, find one of these in only a tiny fraction of these events, and yet they’re able to identify this in all this really complicated background – truly amazing!”

He notes, however, that “this is not the place to look for CPT violation…Making precision measurements on the positron mass versus the electron mass or that of the proton versus the antiproton is a much more promising direction simply because we have so many more of them that we can actually do precision measurements.”    

The research is described in Nature.

The post Heavy exotic antinucleus gives up no secrets about antimatter asymmetry appeared first on Physics World.
