
Physicists reveal the mechanics of tea scum

If you have ever brewed a cup of black tea with hard water you will be familiar with the oily film that can form on the surface of the tea after just a few minutes.

Known as “tea scum”, the film consists of calcium carbonate crystals within an organic matrix. Yet it can be easily broken apart with a quick stir of a teaspoon.

Physicists in France and the UK have now examined how this film forms and also what happens when it breaks apart through stirring.

They did so by first sprinkling graphite powder into a water tank. Thanks to capillary forces, the particles gradually clump together to form rafts. The researchers then generated waves in the tank that broke apart the rafts and filmed the process with a camera.

Through these experiments and theoretical modelling, they found that the rafts break up when diagonal cracks form at the raft’s centre. This causes them to fracture into large chunks before the waves eventually erode them away.

They found that the polygonal shapes created when the rafts split up are the same as those seen in ice floes.

Despite the visual similarities, however, sea ice and tea scum break up through different physical mechanisms. While ice is brittle, bending and snapping under the weight of crushing waves, the graphite rafts come apart when the viscous stress exerted by the waves overcomes the capillary forces that hold the individual particles together.

Buoyed by their findings, the researchers now plan to use their model to explain the behaviour of other thin biofilms, such as pond scum.

The post Physicists reveal the mechanics of tea scum appeared first on Physics World.

Positronium gas is laser-cooled to one degree above absolute zero

Matter and antimatter Artist’s impression of positronium being instantaneously cooled in a vacuum by a series of laser pulses with rapidly varying wavelengths. (Courtesy: 2024 Yoshioka et al./CC-BY-ND)

Researchers at the University of Tokyo have published a paper in the journal Nature that describes a new laser technique capable of cooling a gas of positronium atoms to temperatures as low as 1 K. Written by Kosuke Yoshioka and colleagues, the paper follows on from a publication earlier this year from the AEgIS team at CERN, who described how a different laser technique was used to cool positronium to 170 K.

Positronium comprises a single electron bound to its antimatter counterpart, the positron. Although electrons and positrons will ultimately annihilate each other, they can briefly bind together to form an exotic atom. Electrons and positrons are fundamental particles that are nearly point-like, so positronium provides a very simple atomic system for experimental study. Indeed, this simplicity means that precision studies of positronium could reveal new physics beyond the Standard Model.

Quantum electrodynamics

One area of interest is the precise measurement of the energy required to excite positronium from its ground state to its first excited state. Such measurements could enable more rigorous experimental tests of quantum electrodynamics (QED). While QED has been confirmed to extraordinary precision, any tiny deviations could reveal new physics.
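
The relevant energy scale follows from a one-line reduced-mass argument: because the positron has the same mass as the electron, positronium's reduced mass is half that of hydrogen, so every Bohr energy level is halved. A quick sketch using standard textbook constants (not figures quoted from the paper):

```python
# Positronium energy scale from the reduced-mass argument.
# Standard textbook values; not parameters from the Nature paper.
RYDBERG_H = 13.606   # hydrogen ground-state binding energy, eV
E_1s = RYDBERG_H / 2                 # positronium: reduced mass is m_e/2
E_1s2p = E_1s * (1 - 1 / 4)          # n=1 to n=2 interval, eV
wavelength_nm = 1239.84 / E_1s2p     # photon wavelength via hc = 1239.84 eV nm
print(f"positronium 1S-2P: {E_1s2p:.2f} eV, {wavelength_nm:.0f} nm")
```

The interval lands near 243 nm in the ultraviolet, twice the wavelength of hydrogen's 121.6 nm Lyman-alpha line, which is why lasers addressing positronium sit in the UV.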

An important barrier to making precision measurements is the inherent motion of positronium atoms. “This large randomness of motion in positronium is caused by its short lifetime of 142 ns, combined with its small mass − 1000 times lighter than a hydrogen atom,” Yoshioka explains. “This makes precise studies challenging.”

In 1988, two researchers at Lawrence Livermore National Laboratory in the US published a theoretical exploration of how the challenge could be overcome by using laser cooling to slow positronium atoms to very low speeds. Laser cooling is routinely used to cool conventional atoms and involves having the atoms absorb photons and then re-emit them in random directions.

Chirped pulse train

Building on this early work, Yoshioka’s team has developed a new laser system that is ideal for cooling positronium. Yoshioka explains that in the Tokyo setup, “the laser emits a chirped pulse train, with the frequency increasing at 500 GHz/μs, and lasting 100 ns. Unlike previous demonstrations, our approach is optimized to cool positronium to ultralow velocities.”

In a chirped pulse, the frequency of the laser light increases over the duration of the pulse. This allows the cooling system to respond to the slowing of the atoms by keeping the photon absorption on resonance.
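
Some rough numbers illustrate why a short pulse train can be enough. The sketch below assumes the standard 243 nm 1S–2P cooling transition and a room-temperature starting gas; neither value is quoted from the paper, and the estimate ignores the details of the chirp.

```python
# Back-of-envelope estimate for laser cooling positronium.
# Assumptions (not from the paper): 243 nm 1S-2P transition, 300 K start.
h = 6.626e-34          # Planck constant, J s
kB = 1.381e-23         # Boltzmann constant, J/K
m_ps = 2 * 9.109e-31   # positronium mass: electron + positron, kg
lam = 243e-9           # 1S-2P transition wavelength, m

v_recoil = h / (m_ps * lam)          # speed change per absorbed photon
v_start = (kB * 300 / m_ps) ** 0.5   # 1D thermal speed at 300 K
v_end = (kB * 1.0 / m_ps) ** 0.5     # 1D thermal speed at 1 K

# Only a few tens of scattering events are needed, which is why cooling
# can fit inside a ~100 ns pulse train despite the 142 ns lifetime.
n_scatter = (v_start - v_end) / v_recoil
print(f"recoil velocity: {v_recoil:.0f} m/s")
print(f"scattering events needed: {n_scatter:.0f}")
```

The tiny positronium mass makes the recoil per photon enormous (about 1.5 km/s) compared with ordinary atoms, so each absorption removes a large fraction of the thermal speed.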

Using this technique, Yoshioka’s team successfully cooled positronium atoms to temperatures around 1 K, all within just 100 ns. “This temperature is significantly lower than previously achieved, and simulations suggested that an even lower temperature in the 10 mK regime could be realized via a coherent mechanism,” Yoshioka says. Although the team’s current approach is still some distance from achieving this “recoil limit” temperature, the success of their initial demonstration has given them confidence that further improvements could bring them closer to this goal.

“This breakthrough could potentially lead to stringent tests of particle physics theories and investigations into matter-antimatter asymmetry,” Yoshioka predicts. “That might allow us to uncover major mysteries in physics, such as the reason why antimatter is almost absent in our universe.”

The post Positronium gas is laser-cooled to one degree above absolute zero appeared first on Physics World.

Ask me anything: Fatima Gunning – ‘Thinking outside the box is a winner when it comes to problem solving’

What skills do you use every day in your job?

I am fortunate to have several different roles, and problem-solving is a skill I use in each. As physicists, we’re constantly solving problems in different ways, and, as researchers, we are always trying to question the unknown. To understand the physical world more, we need to be curious and willing to reformulate our questions when they are challenged.

Researchers need to keep asking ‘Why?’ Trying to understand a problem or challenge – listening and considering other views – is essential.

In everyday work such as administration, research, teaching and mentoring, I also find that thinking outside the box is a winner when it comes to problem solving. I try not to just go along with whatever the team or the group is thinking. Instead, I try to consider different points of view.

Another critical skill I use is communication. In my work, I need to be able to listen, speak and write a lot. It could be to convey why our research is important and why it should be funded. It could be to craft new policies, mediate conflict or share research findings clearly with colleagues, students, managers and members of the public. So communication is definitely key.

What do you like best and least about your job?

I graduated about 30 years ago and, during that time, the things I like best or least have never stayed the same. At the moment, the best part of my job is working with research students – not just at master’s and PhD level, but final-year undergraduates who might be getting hands-on experience in a lab for the first time. There’s great satisfaction and a sense of “job well done” whenever I demonstrate a concept they’ve known for several years but have never “seen” in action. When they shout “Ah, I get it!”, it’s a great feeling. It’s also really rewarding to receive similar reactions from my education and public engagement work, such as when I visit primary and secondary schools.

At the moment, my least favourite part of my job is the lack of time. I’m not very good at time management, and I find it hard to say “no” to people in need, especially if I know how to help them. It’s difficult to juggle work, mentoring, volunteering activities and home life. During the COVID-19 pandemic, I realized that taking time off to pursue a hobby is vital – not only for my wellbeing but also to give me clarity in decision making.

What do you know today that you wish you knew when you were starting out in your career?

I wish I had realized the importance of mentorship sooner. Throughout my career, I’ve had people who’ve supported me along the way. It might just have been a brief conversation in the corridor, help with a grant application or a serendipitous chat at a conference, although at other times it might have been through in-depth discussion of my work. I only started to regard the help as “mentorship” when I did a leadership course that included mentor/mentee training. Looking back, those encounters really boosted my confidence and helped me make rational choices.

There are so many opportunities to meet people in your field and people are always happy to share their experiences

Once you realize what mentors can do, you can plan to speak to people strategically. These conversations can help you make decisions and introduce you to new contacts. They can also help you understand what career paths are available – it’s okay to take your time to explore career options or even to change direction. Students and young professionals should also engage with professional societies, such as the Institute of Physics. We need to come out of our “shy” shells and talk to people, no matter how senior and famous they are. That’s certainly the message I’d have given myself 30 years ago.

The post Ask me anything: Fatima Gunning – ‘Thinking outside the box is a winner when it comes to problem solving’ appeared first on Physics World.

Knowledge grows step-by-step despite the exponential growth of papers, finds study


Scientific knowledge is growing at a linear rate despite an exponential increase in publications. That’s according to a study by physicists in China and the US, who say their finding points to a decline in overall scientific productivity. The study therefore contradicts the notion that productivity and knowledge grow hand in hand – but adds weight to the view that the rate of scientific discovery may be slowing or that “information fatigue” and the vast number of papers can drown out new discoveries.

Defining knowledge is complex, but it can be thought of as a network of interconnected beliefs and information. To measure it, the authors previously created a knowledge quantification index (KQI). This tool uses various scientific impact metrics to examine the network structures created by publications and their citations, and quantifies how much each publication reduces the uncertainty of the network – and thus how much it adds to knowledge.

The researchers claim the tool’s effectiveness has been validated through multiple approaches, including analysing the impact of work by Nobel laureates.

In the latest study, published on arXiv, the team analysed 213 million scientific papers, published between 1800 and 2020, as well as 7.6 million patents filed between 1976 and 2020. Using the data, they built annual snapshots of citation networks, which they then scrutinised with the KQI to observe changes in knowledge over time.

The researchers – based at Shanghai Jiao Tong University in Shanghai, the University of Minnesota in the US and the Institute of Geographic Sciences and Natural Resources Research in Beijing – found that while the number of publications has been increasing exponentially, knowledge has not.

Instead, their KQI suggests that knowledge has been growing in a linear fashion. Different scientific disciplines do display varying rates of knowledge growth, but they all follow the same linear growth pattern. Patent growth was found to be much slower than publication growth, but it too shows linear growth in the KQI.

According to the authors, the analysis indicates “no significant change in the rate of human knowledge acquisition”, suggesting that our understanding of the world has been progressing at a steady pace.

If scientific productivity is defined as the number of papers required to grow knowledge, this signals a significant decline in productivity, the authors claim.
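
That definition of declining productivity can be illustrated with synthetic data: if publications grow exponentially while a knowledge index grows linearly, the number of papers required per unit of new knowledge rises decade on decade. The growth rates below are invented for illustration and are not the study’s fitted values.

```python
# Illustration of the productivity-decline argument with synthetic series.
# Rates are invented; they are not the authors' fitted KQI data.
import math

years = range(71)                               # stand-in for 1950..2020
papers = [100 * math.exp(0.05 * t) for t in years]   # exponential growth
knowledge = [10 + 0.8 * t for t in years]            # linear growth

ratios = []
for decade in (0, 30, 60):                      # the 1950s, 1980s, 2010s
    dp = papers[decade + 10] - papers[decade]   # papers added that decade
    dk = knowledge[decade + 10] - knowledge[decade]  # knowledge added
    ratios.append(dp / dk)
    print(f"{1950 + decade}s: {dp / dk:.0f} papers per unit of knowledge")
```

Each unit of knowledge costs ever more papers, which is exactly the “significant decline in productivity” the authors describe.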

The analysis also revealed inflection points associated with new discoveries, major breakthroughs and other important developments, with knowledge growing at different linear rates before and after.

Such inflection points create the illusion of exponential knowledge growth due to the sudden alteration in growth rates, which may, according to the study authors, have led previous studies to conclude that knowledge is growing exponentially.

Research focus

“Research has shown that the disruptiveness of individual publications – a rough indicator of knowledge growth – has been declining over recent decades,” says Xiangyi Meng, a physicist at Northwestern University in the US, who works in network science but was not involved in the research. “This suggests that the rate of knowledge growth must be slower than the exponential rise in the number of publications.”

Meng adds, however, that the linear growth finding is “surprising” and “somewhat pessimistic” – and that further analysis is needed to confirm if knowledge growth is indeed linear or whether it “more likely, follows a near-linear polynomial pattern, considering that human civilization is accelerating on a much larger scale”.

Due to the significant variation in the quality of scientific publications, Meng says that article growth may “not be a reliable denominator for measuring scientific efficiency”. Instead, he suggests that analysing research funding and how it is allocated and evolves over time might be a better focus.

The post Knowledge grows step-by-step despite the exponential growth of papers, finds study appeared first on Physics World.

Genetically engineered bacteria solve computational problems

By Tami Freeman

Cell-based biocomputing is a novel technique that uses cellular processes to perform computations. Such micron-scale biocomputers could overcome many of the energy, cost and technological limitations of conventional microprocessor-based computers, but the technology is still very much in its infancy. One of the key challenges is the creation of cell-based systems that can solve complex computational problems.

Now a research team from the Saha Institute of Nuclear Physics in India has used genetically modified bacteria to create a cell-based biocomputer with problem-solving capabilities. The researchers created 14 engineered bacterial cells, each of which functioned as a modular and configurable system. They demonstrated that by mixing and matching appropriate modules, the resulting multicellular system could solve nine yes/no computational decision problems and one optimization problem.

The cellular system, described in Nature Chemical Biology, can identify prime numbers, check whether a given letter is a vowel, and even determine the maximum number of pizza or pie slices obtained from a specific number of straight cuts. Here, senior author Sangram Bagh explains the study’s aims and findings.

How does cell-based computing work?

Living cells use computation to carry out biological tasks. For instance, our brain’s neurons communicate and compute to make decisions; and in the event of an external attack, our immune cells collaborate, compute and make judgements. The development of synthetic biology opens up new avenues for engineering live cells to carry out human-designed computation.

The fusion of biology and computer science has resulted in the development of living cell-based biocomputers to solve computational problems. Here, living cells are engineered for use as circuits and components to build biocomputers. Lately, researchers have been manipulating living cells to find solutions for maze and graph colouring puzzles.

Why did you employ bacteria to perform the computations?

Bacteria are single-cell organisms, 2–5 µm in size, with fast replication times (about 30 min). They can survive in many conditions and require minimum energy, thus they provide an ideal chassis for building micron-scale computer technology. We chose to use Escherichia coli, as it has been studied in detail and is easy to manipulate, making it a logical choice to build a biocomputer.

How did you engineer the bacteria to solve problems?

We built synthetic gene regulatory networks in bacteria in such a way that each bacterium worked as an artificial neuro-synapse. In this way, 14 genetically engineered bacteria were created, each acting like an artificial neuron, which we named “bactoneurons”. When these bactoneurons are mixed in a liquid culture in a test tube, they create an artificial neural network that can solve computational problems. The “LEGO-like” system incorporates 14 engineered cells (the “LEGO blocks”) that you can mix and match to build one of 12 specific problem solvers on demand.

How do the bacteria report their answers?

We pose problems to the bacteria in a chemical space using a binary system. The bacteria were questioned by adding (“one”) or not adding (“zero”) four specific chemicals. The bacterial artificial neural network analysed the data and responded by producing different fluorescent proteins. For example, when we asked if three is a prime number, the bacteria glowed green to print “yes”. Similarly, when we asked if four is a prime number, the bacteria glowed red and said “no”.
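
The input/output convention is easy to mock up in software. The sketch below stands in for the gene circuits with ordinary code: four 0/1 “chemical” inputs encode a number in binary, and the answer comes back as a green or red “fluorescence”. The function names are hypothetical and the logic is plain Python, not a model of the actual regulatory networks.

```python
# Toy mock-up of the bactoneuron input/output convention: four chemicals
# present (1) or absent (0) encode a number, and the culture "answers"
# with green (yes) or red (no) fluorescence. Hypothetical names; plain
# code stands in for the engineered gene circuits.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def bactoneuron_prime_solver(chemicals):
    """chemicals: tuple of four 0/1 inputs, most significant bit first."""
    n = int("".join(map(str, chemicals)), 2)
    return "green" if is_prime(n) else "red"

print(bactoneuron_prime_solver((0, 0, 1, 1)))  # 3 is prime  -> green
print(bactoneuron_prime_solver((0, 1, 0, 0)))  # 4 is not    -> red
```

Swapping `is_prime` for a vowel test (with letters encoded as numbers) gives the vowel-checking solver in the same modular spirit.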

How could such a biocomputer be used in real-world applications?

Bacteria are tiny organisms, about one-twentieth the diameter of a human hair. It is not possible to make a silicon computer so small. Making such a small computer with bacteria will open a new horizon in microscale computer technology. Its use will extend from new medical technology and material technology to space technology.

For example, one may imagine a set of engineered bacteria or other cells within the human body taking decisions and acting upon a particular disease state, based on multiple biochemical and physiological cues.

Scientists have proposed using synthetically engineered organisms to help in situ resource utilization to build a human research base on Mars. However, it may not be possible to instruct each of the organisms remotely to perform a specific task based on local conditions. Now, one can imagine the tiny engineered organisms working as a biocomputer, interacting with each other and taking autonomous decisions without any human intervention.

The importance of this work in basic science is also immense. Until now, recognizing prime numbers or vowels was something only humans or computers could do – yet genetically engineered bacteria are now doing the same. Such observations raise new questions about the meaning of “intelligence” and offer some insight into the biochemical nature and the origin of intelligence.

What are you planning to do next?

We would like to build more complex biocomputers to perform more complex computation tasks with multitasking capability. The ultimate goal is to build artificially intelligent bacteria.

The post Genetically engineered bacteria solve computational problems appeared first on Physics World.

Field work – the physics of sheep, from phase transitions to collective motion


You’re probably familiar with the old joke about a physicist who, when asked to use science to help a dairy farmer, begins by approximating a spherical cow in a vacuum. But maybe it’s time to challenge this satire on how physics-based models can absurdly over-simplify systems as complex as farm animals. Sure, if you want to understand how a cow or a sheep works, approximating those creatures as spheres might not be such a good idea. But if you want to understand a herd or a flock, you can learn a lot by reducing individual animals to mere particles – if not spheres, then at least ovoids (or bovoids; see what I did there?).

By taking that approach, researchers over the past few years have not only shed new light on the behaviour of sheep flocks but also begun to explain how shepherds do what they do – and might even be able to offer them new tips about controlling their flocks. Welcome to the emerging science of sheep physics.

“Boids” of a feather

Physics-based models of the group dynamics of living organisms go back a long way. In 1987 Craig Reynolds, a software engineer with the California-based computer company Symbolics, wrote an algorithm to try to mimic the flocking of birds. By watching blackbirds flock in a local cemetery, Reynolds intuited that each bird responds to the motions of its immediate neighbours according to some simple rules.

His simulated birds, which he called “boids” (a fusion of bird and droid), would each match their speed and orientation to those of others nearby, and would avoid collisions as if there were a repulsive force between them. Those rules alone were enough to generate group movements resembling the striking flocks or “murmurations” of real-life blackbirds and starlings that swoop and fly together in seemingly perfect unison. Reynolds’ algorithms were adapted for film animations such as the herd of wildebeest in The Lion King.
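
Reynolds’ three rules are simple enough to sketch in a few dozen lines. The toy below treats each boid’s 2D position and velocity as a complex number and applies alignment, cohesion and separation to neighbours within a fixed radius; the weights, radii and speed cap are arbitrary illustrative choices, not Reynolds’ published parameters.

```python
# Minimal "boids" sketch: alignment, cohesion and collision avoidance,
# with complex numbers as 2D vectors. All parameters are illustrative.
import random

random.seed(1)
N, STEPS, RADIUS = 30, 200, 5.0
pos = [complex(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(N)]
vel = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

def limit(v, s=1.0):
    """Cap a velocity vector at speed s."""
    return v / abs(v) * s if abs(v) > s else v

for _ in range(STEPS):
    new_vel = []
    for i in range(N):
        nbrs = [j for j in range(N) if j != i and abs(pos[j] - pos[i]) < RADIUS]
        v = vel[i]
        if nbrs:
            v += 0.05 * (sum(vel[j] for j in nbrs) / len(nbrs) - vel[i])  # align
            v += 0.01 * (sum(pos[j] for j in nbrs) / len(nbrs) - pos[i])  # cohere
            v += 0.05 * sum(pos[i] - pos[j] for j in nbrs
                            if abs(pos[j] - pos[i]) < 1)                  # separate
        new_vel.append(limit(v))
    vel = new_vel
    pos = [p + v * 0.1 for p, v in zip(pos, vel)]

# Polarization: magnitude of the mean heading, 1 for a perfectly aligned flock.
polarization = abs(sum(v / abs(v) for v in vel)) / N
print(f"polarization: {polarization:.2f}")
```

Starting from random headings, the alignment term drives connected clusters toward a common direction, which is the essence of the murmuration-like behaviour described above.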

Murmuration of starlings
Birds of a feather Physicists have been studying the collective motion of flocks of birds, such as groups of starlings – known as murmurations – that are seemingly governed by rules of physics. (Courtesy: iStock/georgeclerk)

Over the next two or three decades, these models were modified and extended by other researchers, including the future Nobel-prize-winning physicist Giorgio Parisi, to study collective motions of organisms ranging from birds to schooling fish and swarming bacteria. Those studies fed into the emerging science of active matter, in which particles – which could be simple colloids – move under their own propulsion. In the late 1990s physicist Tamás Vicsek and his student Andras Czirók, at Eötvös University in Budapest, revealed analogies between the collective movements of such self-propelled particles and the reorientation of magnetic spins in regular arrays, which also “feel” and respond to what their neighbours are doing (Phys. Rev. Lett. 82 209; J. Phys. A: Math. Gen. 30 1375).

In particular, the group motion can undergo abrupt phase transitions – global shifts in the pattern of behaviour, analogous to how matter can switch to a bulk magnetized state – as the factors governing individual motion, such as average velocity and strength of interactions, are varied. In this way, the collective movements can be summarized in phase diagrams, like those depicting the gaseous, liquid and solid states of matter as variables such as temperature and density are changed.
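
This kind of noise-driven phase transition can be seen in a minimal Vicsek-style simulation: self-propelled particles in a periodic box adopt the mean heading of their neighbours plus a random kick, and sweeping the noise amplitude moves the system between an ordered (flocking) phase and a disordered one. The parameters below are illustrative, not those of the original papers.

```python
# Vicsek-style model: constant-speed particles align with neighbours
# within radius r, plus angular noise. Illustrative parameters only.
import math, random

def vicsek_order(noise, N=100, L=5.0, r=1.0, v0=0.03, steps=150, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(0, L) for _ in range(N)]
    y = [rng.uniform(0, L) for _ in range(N)]
    th = [rng.uniform(-math.pi, math.pi) for _ in range(N)]
    for _ in range(steps):
        new_th = []
        for i in range(N):
            sx = sy = 0.0
            for j in range(N):
                dx = (x[j] - x[i] + L / 2) % L - L / 2   # periodic distances
                dy = (y[j] - y[i] + L / 2) % L - L / 2
                if dx * dx + dy * dy < r * r:            # includes self
                    sx += math.cos(th[j]); sy += math.sin(th[j])
            new_th.append(math.atan2(sy, sx) + rng.uniform(-noise / 2, noise / 2))
        th = new_th
        x = [(xi + v0 * math.cos(t)) % L for xi, t in zip(x, th)]
        y = [(yi + v0 * math.sin(t)) % L for yi, t in zip(y, th)]
    # Order parameter: magnitude of the mean heading vector (1 = aligned).
    return math.hypot(sum(map(math.cos, th)), sum(map(math.sin, th))) / N

low = vicsek_order(0.3)    # weak noise: ordered, collectively moving phase
high = vicsek_order(5.0)   # strong noise: disordered phase
print(f"low noise: {low:.2f}, high noise: {high:.2f}")
```

Plotting the order parameter against noise (or density) traces out exactly the kind of phase diagram described in the text, with an abrupt drop at a critical noise strength.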

Models like these have now been used to explore the dynamics not just of animals and bacteria, but also of road traffic and human pedestrians. They can predict the kinds of complex behaviours seen in the real world, such as stop-and-start waves in traffic congestion or the switch to a crowd panic state. And yet the way they represent the individual agents seems – for humans anyway – almost insultingly simple, as if we are nothing but featureless particles propelled by blind forces.

Follow the leader

If these models work for humans, you might imagine they’d be fine for sheep too – which, let’s face it, seem behaviourally and psychologically rather unsophisticated compared with us. But if that’s how you think of sheep, you’ve probably never had to shepherd them. Sheep are decidedly idiosyncratic particles.

“Why should birds, fish or sheep behave like magnetic spins?” asks Fernando Peruani of the University of Cergy Paris. “As physicists we may want that, but animals may have a different opinion.” To understand how flocks of sheep actually behave, Peruani and his colleagues first looked at the available data, and then tried to work out how to describe and explain the behaviours that they saw.

1 Are sheep like magnetic spins?

Sheep walking in a line
(Diagram courtesy: Nat. Phys. 18 1402. Photo courtesy: iStock/scottyh)

In a magnetic material, magnetic spins interact to promote their mutual alignment (or anti-alignment, depending on the material). In the model of collective sheep motion devised by Fernando Peruani from the University of Cergy Paris, and colleagues, each sheep is similarly assumed to move in a direction determined by interactions with all the others that depend on their distance apart and their relative angles of orientation. The model predicts the sheep will fall into loose alignment and move in a line, following a leader, that takes a more or less sinuous path over the terrain.

For one thing, says Peruani, “real flocks are not continuously on the move. Animals have to eat, rest, find new feeding areas and so on”. No existing model of collective animal motion could accommodate such intermittent switching between stationary and mobile phases. What’s more, bird murmurations don’t seem to involve any specific individual guiding the collective behaviour, but some animal groups do exhibit a hierarchy of roles.

Elephants, zebras and forest ponies, for example, tend to move in lines such that the animal at the front has a special status. An advantage of such hierarchies is that the groups can respond quickly to decisions made by the leaders, rather than having to come to some consensus within the whole group. On the other hand, it means the group is acting on less information than would be available by pooling that of everyone.

To develop their model of collective sheep behaviour, Peruani and colleagues took a minimalistic approach of watching tiny groups of Merino Arles sheep that consisted of “flocks” of just two to four individuals who were free to move around a large field. They found that the groups spend most of their time grazing but would every so often wander off collectively in a line, following the individual at the front (Nat. Phys. 18 1494).

They also saw that any member of the group is equally likely to take the lead in each of these excursions, selected seemingly at random. In other words, as George Orwell famously suggested for certain pigs, all sheep are equal but some are (temporarily) more equal than others. Peruani and colleagues suspected that this switching of leaders allows some information pooling without forcing the group to be constantly negotiating a decision.

The researchers then devised a simple model of the process in which each individual has some probability of switching from the grazing to the moving state and vice versa – rather like the transition probability for emission of a photon from an excited atom. The empirical data suggested that this probability depends on the group size, with the likelihood getting smaller as the group gets bigger. Once an individual sheep has triggered the onset of the “walking phase”, the others follow to maintain group cohesion.

In their model, each individual feels an attractive, cohesive force towards the others and, when moving, tends to align its orientation and velocity with those of its neighbour(s). Peruani and colleagues showed that the model produces episodic switching between a clustered “grazing mode” and collective motion in a line (figure 1). They could also quantify information exchange between the simulated sheep, and found that probabilistic swapping of the leader role does indeed enable the information available to each individual to be pooled efficiently between all.
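
The switching statistics are easy to explore in a toy simulation. Below, a grazing group of n sheep starts walking as soon as any one member triggers the transition; the per-sheep trigger probability is assumed to fall as 1/n², an illustrative stand-in for the group-size dependence the team measured, not the actual fitted law.

```python
# Toy grazing-to-walking switching: any one sheep triggering the walk
# sets the whole group moving. The 1/n^2 scaling of the per-sheep
# trigger probability is an assumption for illustration.
import random

def mean_grazing_bout(n, p0=0.02, trials=2000, seed=0):
    """Mean number of time steps before some sheep triggers a walk."""
    rng = random.Random(seed)
    p = p0 / n ** 2                  # assumed per-sheep trigger probability
    total = 0
    for _ in range(trials):
        t = 1
        while not any(rng.random() < p for _ in range(n)):
            t += 1
        total += t
    return total / trials

bouts = {n: mean_grazing_bout(n) for n in (2, 3, 4)}
for n, b in bouts.items():
    print(f"group of {n}: mean grazing bout ~ {b:.0f} steps")
```

Under this scaling the group-level rate goes as n × p ∝ 1/n, so larger groups graze for longer before setting off, one simple way an individual-level probability can shape collective timing.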

Although the group size here was tiny, the team has video footage of large flocks of sheep adopting the same follow-my-leader formation, albeit in multiple lines at once. They are now conducting a range of experiments to get a better understanding of the behavioural rules – for example, using sirens to look at how sheep respond to external stimuli and studying herds composed of sheep of different ages (and thus proclivities) to probe the effects of variability.

The team is also investigating whether individual sheep trained to move between two points can “seed” that behaviour in an entire flock. But such experiments aren’t easy, Peruani says, because it’s hard to recruit shepherds. In Europe, they tend to live in isolation on low wages, and so aren’t the most forthcoming of scientific collaborators.

The good shepherd

Of course, shepherds don’t traditionally rely on trained sheep to move their flocks. Instead, they use sheepdogs that are trained for many months before being put to work in the field. If you’ve ever watched a sheepdog in action, it’s obvious they do an amazingly complex job – and surely one that physics can’t say much about? Yet mechanical engineer Lakshminarayanan Mahadevan at Harvard University in the US says that the sheepdog’s task is basically an exercise in control theory: finding a trajectory that will guide the flock to a particular destination efficiently and accurately.

Mahadevan and colleagues found that even this phenomenon can be described using a relatively simple model (arXiv:2211.04352). From watching YouTube videos of sheepdogs in action, he figured there were two key factors governing the response of the sheep. “Sheep like to stay together,” he says – the flock has cohesion. And second, sheep don’t like sheepdogs – there is repulsion between sheep and dog. “Is that enough – cohesion plus repulsion?” Mahadevan wondered.

Sheepdogs and a flock of sheep
Let’s stay together Harvard University researcher Lakshminarayanan Mahadevan studied the interactions between sheepdogs and a flock of sheep, to develop a model that describes how a flock reacts to different herding tactics employed by the dogs. They found that the size of the flock and how fast it moves between its initial and final positions are two main factors that determine the best herding strategy. (Courtesy: Shutterstock/Danica Chang)

The researchers wrote down differential equations to describe the animals’ trajectories and then applied standard optimization techniques to minimize a quantity that captures the desired outcome: moving the flock to a specific location without losing any sheep. Despite the apparent complexity of the dynamical problem, they found it all boiled down to a simple picture. It turns out there are two key parameters that determine the best herding strategy: the size of the flock and the speed with which it moves between initial and final positions.
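
A stripped-down version of the cohesion-plus-repulsion picture already produces recognizable herding. In the sketch below, sheep feel an attraction to the flock centroid and an inverse-square repulsion from the dog, while the dog simply repositions itself behind the flock relative to the target each step. The gains, ranges and dog strategy are invented for illustration; this is not the paper’s optimized controller.

```python
# Cohesion + repulsion herding sketch with complex numbers as 2D vectors.
# Gains, ranges and the dog's "sit behind the flock" strategy are all
# illustrative assumptions, not the optimized trajectories of the paper.
import random

random.seed(2)
N, TARGET = 20, complex(10, 0)
sheep = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

def step(sheep, dog, k_coh=0.05, k_rep=0.5):
    centroid = sum(sheep) / len(sheep)
    moved = []
    for s in sheep:
        f = k_coh * (centroid - s)               # cohesion with the flock
        d = s - dog
        f += k_rep * d / max(abs(d), 0.1) ** 2   # repulsion from the dog
        moved.append(s + f)
    return moved

for _ in range(400):
    centroid = sum(sheep) / N
    to_target = TARGET - centroid
    if abs(to_target) < 0.5:
        break                                    # close enough: stop driving
    dog = centroid - 2.0 * to_target / abs(to_target)  # sit behind the flock
    sheep = step(sheep, dog)

final_dist = abs(sum(sheep) / N - TARGET)
print(f"flock centroid is {final_dist:.2f} from the target")
```

Even this crude controller moves the flock to the goal; the paper’s contribution is to optimize the dog’s trajectory properly, which is what separates droving, mustering and driving.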

Four possible outcomes emerged naturally from their model. One is simply that the herding fails: nothing a dog can do will get the flock coherently from point A to point B. This might be the case, for example, if the flock is just too big, or the dog too slow. But there are three shepherding strategies that do work.

One involves the dog continually running from one side of the flock to the other, channelling the sheep in the desired direction. This is the method known to shepherds as “droving”. If, however, the herd is relatively small and the dog is fast, there can be a better technique that the team called “mustering”. Here the dog propels the flock forward by running in corkscrews around it. In this case, the flock keeps changing its overall shape like a wobbly ellipse, first elongating and then contracting around the two orthogonal axes, as if breathing. Both strategies are observed in the field (figure 2).

But the final strategy the model generated, dubbed “driving”, is not a tactic that sheepdogs have been observed to use. In this case, if the flock is large enough, the dog can run into the middle of it and the sheep retreat but don’t scatter. Then the dog can push the flock forward from within, like a driver in a car. This approach will only work if the flock is very strongly cohesive, and it’s not clear that real flocks ever have such pronounced “stickiness”.

2 Shepherding strategies: the three types of herding

Diagram of herding patterns
(Courtesy: L Mahadevan, arXiv:2211.04352)

In the model of interactions between a sheepdog and its flock developed by Lakshminarayanan Mahadevan at Harvard University and coworkers, optimizing a mathematical function that describes how well the dog transports the flock results in three possible shepherding strategies, depending on the precise parameters in the model. In “droving”, the dog runs from side to side to steer the flock towards the target location. In “mustering”, the dog takes a helix-like trajectory, repeatedly encircling the flock. And in “driving”, the dog steers the flock from “inside” by the aversion – modelled as a repulsive force – of the sheep for the dog.

These three regimes, derived from agent-based models (ABM) and models based on ordinary differential equations (ODE), are plotted above. In the left column, the mean path of the flock (blue) over time is shown as it is driven by a shepherd on a separate path (red) towards a target (green square). Columns 2-4 show snapshots from column 1, with trajectories indicated in black, where fading indicates history. From left to right, snapshots represent the flock at later time points.

These herding scenarios can be plotted on a phase diagram, like the temperature–density diagram for the states of matter, but with flock size and speed as the two axes. But do sheepdogs, or their trainers, have an implicit awareness of this phase diagram, even if they do not think of it in those terms? Mahadevan suspects that herding techniques are in fact developed by trial and error: if one strategy doesn't work, the dog and its handler will try another.

Mahadevan admits that he and his colleagues have neglected some potentially important aspects of the problem. In particular, they assumed that the animals can see in every direction around them. Sheep do have a wide field of vision because, like most prey-type animals, they have eyes on the sides of their heads. But dogs, like most predators, have eyes at the front and therefore a more limited field of view. Mahadevan suspects that incorporating these features of the agents’ vision will shift the phase boundaries, but not alter the phase diagram qualitatively.

Another confounding factor is that sheep might alter their behaviour in different circumstances. Chemical engineer Tuhin Chakrabortty of the Georgia Institute of Technology in Atlanta, together with biomolecular engineer Saad Bhamla, has also used physics-based modelling to look at the shepherding problem. They say that sheep behave differently on their own from how they do in a flock: a lone sheep flees from a dog, but in a flock sheep employ a more "selfish" strategy, with those on the periphery trying to shove their way inside to be sheltered by the others.

3 Heavy and light: how flocks interact with sheepdogs

How flocks interact with sheepdogs
(Courtesy: T Chakrabortty and S Bhamla, arXiv:2406.06912)

In the agent-based model of the interaction between sheep and a sheepdog devised by Tuhin Chakrabortty and Saad Bhamla, sheep may respond to a nearby dog by reorienting themselves to face away from or at right angles to it. Different sheep might have different tendencies for this – “heavy” sheep ignore the dog unless they are facing towards it. The task of the dog could be to align the flock facing away from it (herding) or to divide the flock into differently aligned subgroups (shedding).

What’s more, says Chakrabortty, contrary to the stereotype, sheep can show considerable individual variation in how they respond to a dog. Essentially, the sheep have personalities. Some seem terrified and easily panicked by a dog while others might ignore – or even confront – it. Shepherds traditionally call the former sort of sheep “light”, and the latter “heavy” (figure 3).

In the agent-based model used by Chakrabortty and Bhamla, the outcomes differ depending on whether a herd is predominantly light or heavy (arXiv:2406.06912). When a simulated herd is subjected to the “pressure” of a shepherding dog, it might do one of three things: flee in a disorganized way, shedding panicked individuals; flock in a cohesive group; or just carry on grazing while reorienting to face at right angles to the dog, as if turning away from the threat.

Again these behaviours can be summarized in a 2D phase diagram, with axes representing the size of the herd and what the two researchers call the “specificity of the sheepdog stimulus” (figure 4). This factor depends on the ratio of the controlling stimulus (the strength of sheep–dog repulsion) and random noisiness in the sheep’s response. Chakrabortty and Bhamla say that sheepdog trials are conducted for herd sizes where all three possible outcomes are well represented, creating an exacting test of the dog’s ability to get the herd to do its bidding.

4 Fleeing, flocking and grazing: types of sheep movement

Graph showing types of sheep movement
(Courtesy: T Chakrabortty and S Bhamla, arXiv:2406.06912)

The outcomes of the shepherding model of Chakrabortty and Bhamla can be summarized in a phase diagram showing the different behavioural options – uncoordinated fleeing, controlled flocking, or indifferent grazing – as a function of two model parameters: the size of the flock Ns and the “specificity of stimulus”, which measures how strongly the sheep respond to the dog relative to their inherent randomness of action. Sheepdog trials are typically conducted for a flock size that allows for all three phases.

Into the wild

One of the key differences between the movements of sheep and those of fish or birds is that sheep are constrained to two dimensions. As condensed-matter physicists have come to recognize, the dimensionality of a problem can make a big difference to phase behaviour. Mahadevan says that dolphins make use of dimensionality when they are trying to shepherd schools of fish to feed on. To make them easier to catch, dolphins will often push the fish into shallow water first, converting a 3D problem to a 2D problem. Herders like sheepdogs might also exploit confinement effects to their benefit, for example using fences or topographic features to help contain the flock and simplify the control problem. Researchers haven’t yet explored these issues in their models.

Dolphins using herding tactics to drive a school of fish
Shoal of thought In nature, dolphins have been observed using a number of herding tactics to drive schools of fish into shallow water or even beach them, to make hunting easier and more efficient. (Courtesy: iStock/atese)

As the case of dolphins shows, herding is a challenge faced by many predators. Mahadevan says he has witnessed such behaviour himself in the wild while observing a pack of wild dogs trying to corral wildebeest. The problem is made more complicated if the prey themselves can deploy group strategies to confound their predator – for example, by breaking the group apart to create confusion or indecision in the attacker, a behaviour seemingly adopted by fish. Then the situation becomes game-theoretic, each side trying to second-guess and outwit the other.

Sheep seem capable of such smart and adaptive responses. Bhamla says they sometimes appear to identify the strategy that a farmer has signalled to the dog and adopt the appropriate behaviour even without much input from the dog itself. And sometimes splitting a flock can be part of the shepherding plan: this is actually a task dogs are set in some sheepdog competitions, and demands considerable skill. Because sheepdogs seem to have an instinct to keep the flock together, they can struggle to overcome that urge and have to be highly trained to split the group intentionally.

Iain Couzin of the Max Planck Institute of Animal Behavior in Konstanz, Germany, who has worked extensively on agent-based models of collective animal movement, cautions that even if physical models like these seem to reproduce some of the phenomena seen in real life, that doesn’t mean the model’s rules reflect what truly governs the animals’ behaviour. It’s tempting, he says, to get “allured by the beauty of statistical physics” at the expense of the biology. All the same, he adds that whether or not such models truly capture what is going on in the field, they might offer valuable lessons for how to control and guide collectives of agent-like entities.

In particular, the studies of shepherding might reveal strategies that one could program into artificial shepherding agents such as robots or drones. Bhamla and Chakrabortty have in fact suggested how one such swarm control algorithm might be implemented. But it could be harder than it sounds. “Dogs are extremely good at inferring and predicting the idiosyncrasies of individual sheep and of sheep–sheep interactions,” says Chakrabortty. This allows them to adapt their strategy on the fly. “Farmers laugh at the idea of drones or robots,” says Bhamla. “They don’t think the technology is ready yet. The dogs benefit from centuries of directed evolution and training.”

Perhaps the findings could be valuable for another kind of animal herding too. “Maybe this work could be applied to herding kids at a daycare,” Bhamla jokes. “One of us has small kids and recognizes the challenges of herding small toddlers from one room to another, especially at a party. Perhaps there is a lesson here.” As anyone who has ever tried to organize groups of small children might say: good luck with that.

The post Field work – the physics of sheep, from phase transitions to collective motion appeared first on Physics World.

New on-chip laser fills long sought-after green gap

A series of visible-light colours generated by a microring resonator
Closing the green gap Series of visible-light colours generated by a microring resonator. (Courtesy: S Kelley/NIST)

On-chip lasers that emit green light are notoriously difficult to make. But researchers at the National Institute of Standards and Technology (NIST) and the NIST/University of Maryland Joint Quantum Institute may now have found a way to do just this, using a modified optical component known as a ring-shaped microresonator. Green lasers are important for applications including quantum sensing and computing, medicine and underwater communications.

In the new work, a research team led by Kartik Srinivasan modified a silicon nitride microresonator such that it was able to convert infrared laser light into yellow and green light. The researchers had already succeeded in using this structure to convert infrared laser light into red, orange and yellow wavelengths, as well as a wavelength of 560 nm, which lies at the edge between yellow and green light. Previously, however, they were not able to produce the full range of yellow and green colours to fill the much sought-after “green gap”.

More than 150 distinct green-gap wavelengths

To overcome this problem, the researchers made two modifications to their resonator. The first was to thicken it by 100 nm so that it could more easily generate green light with wavelengths down to 532 nm. Being able to produce such a short wavelength means that the entire green wavelength range is now covered, they say. In parallel, they modified the cladding surrounding the microresonator by etching away part of the silicon dioxide layer that it was fabricated on. This alteration made the output colours less sensitive to the dimension of the microring.

These changes meant that the team could produce more than 150 distinct green-gap wavelengths and could fine-tune these too. "Previously, we could make big changes – red to orange to yellow to green – in the laser colours we could generate with OPO [optical parametric oscillation], but it was hard to make small adjustments within each of these colour bands," says Srinivasan.

Like the previous microresonator, the new device works thanks to a process known as nonlinear wave mixing. Here, infrared light that is pumped into the ring-shaped structure is confined and guided within it. “This infrared light circulates around the ring hundreds of times due to its low loss, resulting in a build-up of intensity,” explains Srinivasan. “This high intensity enables the conversion of pump light to other wavelengths.”

Third-order optical parametric oscillation

“The purpose of the microring is to enable relatively modest, input continuous-wave laser light to build up in intensity to the point that nonlinear optical effects, which are often thought of as weak, become very significant,” says team member Xiyuan Lu.

The specific nonlinear optical process the researchers use is third-order optical parametric oscillation. "This works by taking light at a pump frequency νp and creating one beam of light that's higher in frequency (called the signal, at a frequency νs) and one beam that's lower in frequency (called the idler, at a frequency νi)," explains first author Yi Sun. "There is a basic energy conservation requirement that 2νp = νs + νi."

Simply put, this means that for every two pump photons that are used to excite the system, one signal photon and one idler photon are created, he tells Physics World.
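In wavelength terms the same energy-conservation condition reads 2/λp = 1/λs + 1/λi, so choosing a signal in the green gap pins down the corresponding infrared idler. The short script below illustrates this with made-up wavelengths (a 780 nm pump is an assumption for illustration, not the device's actual operating point):

```python
C = 299_792_458  # speed of light, m/s

def idler_wavelength(pump_nm, signal_nm):
    """Idler wavelength (nm) implied by energy conservation 2*nu_p = nu_s + nu_i."""
    nu_p = C / (pump_nm * 1e-9)
    nu_s = C / (signal_nm * 1e-9)
    nu_i = 2 * nu_p - nu_s
    return C / nu_i * 1e9

# e.g. a hypothetical 780 nm infrared pump with a 532 nm green signal:
print(f"{idler_wavelength(780, 532):.0f} nm")  # → 1461 nm
```

Note that the signal must be at a higher frequency (shorter wavelength) than the pump, with the idler making up the energy balance in the infrared.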

Towards higher power and a broader range of colours

The NIST/University of Maryland team has been working on optical parametric oscillation as a way to convert near-infrared laser light to visible laser light for several years now. One of their main objectives was to fill the green gap in laser technology and fabricate frequency-converted lasers for quantum, biology and display applications.

“Some of the major applications we are ultimately targeting are high-end lasers, continuous-wave single-mode lasers covering the green gap or even a wider range of frequencies,” reveals team member Jordan Stone. “Applications include lasers for quantum optics, biology and spectroscopy, and perhaps laser/hologram display technologies.”

For now, the researchers are focusing on achieving higher power and a broader range of colours (perhaps even down to blue wavelengths). They also hope to make devices that can be better controlled and tuned. “We are also interested in laser injection locking with frequency-converted lasers, or using other techniques to further enhance the coherence of our lasers,” says Stone.

The work is detailed in Light: Science & Applications.

The post New on-chip laser fills long sought-after green gap appeared first on Physics World.

Researchers exploit quantum entanglement to create hidden images

Encoding images in photon correlations
Encoding images in photon correlations Simplified experimental setup (a). A conventional intensity image (b) reveals no information about the object, while a correlation image acquired using an electron-multiplying CCD camera (c) reveals the hidden object. (Courtesy: Phys. Rev. Lett. 10.1103/PhysRevLett.133.093601)

Ever since the double-slit experiment was performed, physicists have known that light can be observed as either a wave or a stream of particles. For everyday imaging applications, it is the wave-like aspect of light that manifests, with receptors (natural or artificial) capturing the information contained within the light waves to “see” the scene being observed.

Now, Chloé Vernière and Hugo Defienne from the Paris Institute of Nanoscience at Sorbonne University have used quantum correlations to encode an image into light such that it only becomes visible when particles of light (photons) are observed by a single-photon sensitive camera – otherwise the image is hidden from view.

Encoding information in quantum correlations

In a study described in Physical Review Letters, Vernière and Defienne hid an image of a cat from conventional light-measurement devices by encoding the information in the correlations between entangled photon pairs. To achieve this, they shaped the spatial correlations between entangled photons – in the form of arbitrary amplitude and phase objects – to encode the image information within the pair correlation. Once encoded, the information is undetectable by conventional measurements. Instead, a single-photon-sensitive camera known as an electron-multiplying charge-coupled device (EMCCD) is needed to "show" the hidden image.

“Quantum entanglement is a fascinating phenomenon, central to many quantum applications and a driving concept behind our research,” says Defienne. “In our previous work, we demonstrated that, in certain cases, quantum correlations between photons are more resistant to external disturbances, such as noise or optical scattering, than classical light. Inspired by this, we wondered how this resilience could be leveraged for imaging. We needed to use these correlations as a support – a ‘canvas’ – to imprint our image, which is exactly what we’ve achieved in this work.”

How to hide an image

The researchers used a technique known as spontaneous parametric down-conversion (SPDC), which is used in many quantum optics experiments, to generate the entangled photons. SPDC is a nonlinear process that uses a nonlinear crystal (NLC) to split a single high-energy photon from a pump beam into two lower energy entangled photons. The properties of the lower energy photons are governed by the geometry and type of the NLC and the characteristics of the pump beam.

In this study, the researchers used a continuous-wave laser producing a collimated beam of horizontally polarized 405 nm light to illuminate a mask shaped like a standing cat, which was then Fourier-imaged onto an NLC using a lens. The spatially entangled near-infrared (810 nm) photons produced after passing through the NLC were then detected using another lens and the EMCCD.

This SPDC process produces an encoded image of a cat. The image is invisible to a conventional camera and only becomes visible when the photons are counted one by one using the EMCCD. This allowed the image of the cat to be "hidden" in light, unobservable by traditional cameras.

“It is incredibly intriguing that an object’s image can be completely hidden when observed classically with a conventional camera, but then when you observe it ‘quantumly’ by counting the photons one by one and examining their correlations, you can actually see it,” says Vernière, a PhD student on the project. “For me, it is a completely new way of doing optical imaging, and I am hopeful that many powerful applications will emerge from it.”

What’s next?

This research extends previous work, and Defienne says that the team's next goal is to show that this new method of imaging has practical applications and is not just a scientific curiosity. "We know that images encoded in quantum correlations are more resistant to external disturbances – such as noise or scattering – than classical light. We aim to leverage this resilience to improve imaging depth in scattering media."

When asked about the applications that this development could impact, Defienne tells Physics World: “We hope to reduce sensitivity to scattering and achieve deeper imaging in biological tissues or longer-range communication through the atmosphere than traditional technologies allow. Even though we are still far from it, this could potentially improve medical diagnostics or long-range optical communications in the future.”

The post Researchers exploit quantum entanglement to create hidden images appeared first on Physics World.

Ambipolar electric field helps shape Earth’s ionosphere

A drop in electric potential of just 0.55 V, measured at altitudes between 250 and 768 km above the Earth's North and South poles, could be the first direct measurement of our planet's long sought-after electrostatic field. The measurements, from NASA's Endurance mission, reveal that this field is important for driving how ions escape into space and for shaping the upper layer of the atmosphere, known as the ionosphere.

Researchers first predicted the existence of the ambipolar electric field in the 1960s, when the first spacecraft flying over the Earth's poles detected charged particles (including positively charged hydrogen and oxygen ions) flowing out from the atmosphere. The theory of a planet-wide electric field was developed to explain this "polar wind", but the effects of the field were thought to be too weak to detect. Indeed, if the ambipolar field were the only mechanism driving the electrostatic field of Earth, the resulting electric potential drop across the exobase transition region (which lies at an altitude of between 200–780 km) could be as low as about 0.4 V.

A team of researchers led by Glyn Collinson at NASA's Goddard Space Flight Center in Greenbelt, Maryland, has now succeeded in measuring this field for the first time thanks to a new instrument they developed, called a photoelectron spectrometer. The device was mounted on the Endurance rocket, which was launched from Svalbard in the Norwegian Arctic in May 2022. "Svalbard is the only rocket range in the world where you can fly through the polar wind and make the measurements we needed," says team member Suzie Imber, a space physicist at the University of Leicester, UK.

Just the “right amount”

The rocket reached a peak altitude of 768.03 km, and during its 19-minute flight the onboard spectrometer measured the energies of electrons every 10 seconds. It recorded a drop in electric potential of 0.55 ± 0.09 V over an altitude range of 258–769 km. While tiny, this is just the "right amount" to explain the polar wind without any other atmospheric effects, says Collinson.

The researchers showed that the ambipolar field, which is generated exclusively by the outward pressure of ionospheric electrons, increases the "scale height" of the ionosphere to 271% of its field-free value (from 77.0 km to 208.9 km). This part of the atmosphere therefore remains denser to greater heights than it would if the field did not exist. The field also increases the supply of cold oxygen ions (O+) to the magnetosphere (that is, near the peak at 768 km) by more than 3.8%, counteracting the effects of other mechanisms (such as wave–particle interactions) that can heat and accelerate these particles to velocities high enough for them to escape into space. The field also probably explains why the magnetosphere is made up primarily of cold hydrogen ions (H+).
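A back-of-the-envelope calculation shows why such a tiny potential matters so much more for hydrogen than for oxygen. The numbers below are rough (the mean gravitational acceleration over the altitude range is an approximation), but they illustrate that the measured 0.55 V drop gives a hydrogen ion roughly ten times the energy needed to climb through the measurement range against gravity, while for the roughly 16-times-heavier oxygen ion it falls short, lifting but not fully liberating the O+ population:

```python
e = 1.602e-19        # elementary charge, C
m_H = 1.67e-27       # hydrogen ion mass, kg
g = 8.8              # rough mean gravitational acceleration at 250-770 km, m/s^2
dV = 0.55            # measured potential drop, V
dh = (769 - 258) * 1e3   # altitude range of the measurement, m

E_field = e * dV               # energy an ion gains falling through the potential
E_grav_H = m_H * g * dh        # energy to lift H+ through the same range
E_grav_O = 16 * E_grav_H       # O+ is ~16 times heavier

print(E_field / E_grav_H)  # ~12: the field dominates gravity for H+
print(E_field / E_grav_O)  # ~0.7: the field boosts, but does not free, O+
```

This asymmetry between the light and heavy ions is consistent with the picture of a hydrogen-dominated polar wind and a merely "puffed-up" oxygen population.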

The ambipolar field could be as fundamental for our planet as its gravity and magnetic fields, says Collinson, and it may even have helped shape how the atmosphere evolved. Similar fields might also exist on other planets in the solar system with an atmosphere, including Venus and Mars. “Understanding the forces that cause Earth’s atmosphere to slowly leak to space may be important for revealing what makes Earth habitable and why we’re all here,” he tells Physics World. “It’s also crucial to accurately forecast the impact of geomagnetic storms and ‘space weather’.”

Looking forward, the scientists say they would like to make further measurements of the Earth’s ambipolar field in the future. Happily, they recently received endorsement for a follow-up rocket – called Resolute – to do just this.

The post Ambipolar electric field helps shape Earth’s ionosphere appeared first on Physics World.

Light-absorbing dye turns skin of a live mouse transparent

One of the difficulties when trying to image biological tissue using optical techniques is that tissue scatters light, which makes it opaque. This scattering occurs because the different components of tissue, such as water and lipids, have different refractive indices, and it limits the depth at which light can penetrate.

A team of researchers at Stanford University in the US has now found that a common water-soluble yellow dye that strongly absorbs near-ultraviolet and blue light – one of several dye molecules they screened – can make biological tissue transparent in just a few minutes, allowing light to penetrate more deeply. In tests on mouse skin, muscle and connective tissue, the team used the technique to observe a wide range of deep-seated structures and biological activity.

In their work, the research team – led by Zihao Ou (now at The University of Texas at Dallas), Mark Brongersma and Guosong Hong – rubbed the common food dye tartrazine, which is yellow/red in colour, onto the abdomen, scalp and hindlimbs of live mice. By absorbing light in the blue part of the spectrum, the dye altered the refractive index of the water in the treated areas at red-light wavelengths, such that it more closely matched that of lipids in this part of the spectrum. This effectively reduced the refractive-index contrast between the water and the lipids and allowed the biological tissue to appear more transparent at this wavelength, albeit tinged with red.
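The essence of this trick can be captured with a one-line scaling argument: the power scattered by a small inclusion grows roughly as the square of the refractive-index mismatch between the inclusion and its surroundings. The indices below are generic textbook-style values for water and lipid, not the measured values from the paper, but they show how a modest dye-induced index shift can slash scattering:

```python
def relative_scattering(n_medium, n_inclusion):
    """Scattered power scales roughly as the square of the index contrast."""
    return (n_inclusion - n_medium) ** 2

# Illustrative refractive indices at red wavelengths (assumed, not measured):
before = relative_scattering(1.33, 1.45)  # plain water vs lipid
after = relative_scattering(1.42, 1.45)   # dye-shifted water vs lipid

print(after / before)  # ~0.06, i.e. roughly 16 times less scattering
```

Closing three quarters of the index gap thus cuts scattering by an order of magnitude, which is why the treated tissue turns from opaque to translucent.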

In this way, the researchers were able to visualize internal organs, such as the liver, small intestine and bladder, through the skin without requiring any surgery. They were even able to observe fluorescent protein-labelled enteric neurons in the abdomen and monitor the movements of these nerve cells. This enabled them to generate maps showing different movement patterns in the gut during digestion. They were also able to visualize blood flow in the rodents’ brains and the fine structure of muscle sarcomere fibres in their hind limbs.

Reversible effect

The skin becomes transparent in just a few minutes and the effect can be reversed by simply rinsing off the dye.

So far, this “optical clearing” study has only been conducted on animals. But if extended to humans, it could offer a variety of benefits in biology, diagnostics and even cosmetics, says Hong. Indeed, the technique could help make some types of invasive biopsies a thing of the past.

“For example, doctors might be able to diagnose deep-seated tumours by simply examining a person’s tissue without the need for invasive surgical removal. It could potentially make blood draws less painful by helping phlebotomists easily locate veins under the skin and could also enhance procedures like laser tattoo removal by allowing more precise targeting of the pigment beneath the skin,” Hong explains. “If we could just look at what’s going on under the skin instead of cutting into it, or using radiation to get a less than clear look, we could change the way we see the human body.”

Hong tells Physics World that the collaboration originated from a casual conversation he had with Brongersma, at a café on Stanford’s campus during the summer of 2021. “Mark’s lab specializes in nanophotonics while my lab focuses on new strategies for enhancing deep-tissue imaging of neural activity and light delivery for optogenetics. At the time, one of my graduate students, Nick Rommelfanger (third author of the current paper), was working on applying the ‘Kramers-Kronig’ relations to investigate microwave–brain interactions. Meanwhile, my postdoc Zihao Ou (first author of this paper) had been systematically screening a variety of dye molecules to understand their interactions with light.”

Tartrazine emerged as the leading candidate, says Hong. “This dye showed intense absorption in the near-ultraviolet/blue spectrum (and thus strong enhancement of refractive index in the red spectrum), minimal absorption beyond 600 nm, high water solubility and excellent biocompatibility, as it is an FD&C-approved food dye.”

“We realized that the Kramers-Kronig relations could be applied to the resonance absorption of dye molecules, which led me to ask Mark about the feasibility of matching the refractive index in biological tissues, with the aim of reducing light scattering,” Hong explains. “Over the past three years, both our labs have had numerous productive discussions, with exciting results far exceeding our initial expectations.”

The researchers say they are now focusing on identifying other dye molecules with greater efficiency in achieving tissue transparency. “Additionally, we are exploring methods for cells to express intensely absorbing molecules endogenously, enabling genetically encoded tissue transparency in live animals,” reveals Hong.

The study is detailed in Science.

The post Light-absorbing dye turns skin of a live mouse transparent appeared first on Physics World.

Science thrives on constructive and respectful peer review

It is Peer Review Week and celebrations are well under way at IOP Publishing (IOPP), which brings you the Physics World Weekly podcast.

Reviewer feedback to authors plays a crucial role in the peer-review process, boosting the quality of published papers to the benefit of authors and the wider scientific community. But sometimes authors receive very unhelpful or outright rude feedback about their work. These inappropriate comments can shake the confidence of early career researchers, and even dissuade them from pursuing careers in science.

Our guest in this episode is Laura Feetham-Walker, who is reviewer engagement manager at IOPP. She explains how the publisher is raising awareness of the importance of constructive and respectful peer review feedback and how innovations can help to create a positive peer review culture.

As part of the campaign, IOPP asked some leading physicists to recount the worst reviewer comments that they have received – and Feetham-Walker shares some real shockers in the podcast.

IOPP has created a video called “Unprofessional peer reviews can harm science” in which leading scientists share inappropriate reviews that they have received.

The publisher also offers a Peer Review Excellence training and certification programme, which equips early-career researchers in the physical sciences with the skills to provide constructive feedback.

The post Science thrives on constructive and respectful peer review appeared first on Physics World.

Convection enhances heat transport in sea ice

The thermal conductivity of sea ice can significantly increase when convective flow is present within the ice. This new result, from researchers at Macquarie University, Australia, and the University of Utah and Dartmouth College, both in the US, could allow for more accurate climate models – especially since current global models only account for temperature and salinity and not convective flow.

Around 15% of the ocean's surface is covered with sea ice at some point during the year. This thin layer separates the atmosphere from the ocean, and it regulates heat exchange between the two in the polar regions of our planet. The thermal conductivity of sea ice is a key parameter in climate models. It has proved difficult to measure, however, because of the ice's complex structure, which is made up of ice, air bubbles and brine inclusions that form as the ice freezes from the ocean surface downwards. Indeed, sea ice can be thought of as a porous composite material and is therefore very sensitive to changes in temperature and salinity.

The salty liquid within the brine inclusions is heavier than fresh ocean water. This results in convective flow within the ice, creating channels through which liquid can flow out, explains applied mathematician Noa Kraitzman at Macquarie, who led this new research effort. “Our new framework characterizes enhanced thermal transport in porous sea ice by combining advection-diffusion processes with homogenization theory, which simplifies complex physical properties into an effective bulk coefficient.”

Thermal conductivity of sea ice can increase by a factor of two to three

The new work builds on a 2001 study in which researchers observed an increase in thermal conductivity in sea ice at warmer temperatures. “In our calculations, we had to derive new bounds on the effective thermal conductivity, while also accounting for complex, two-dimensional convective fluid flow and developing a theoretical model that could be directly compared with experimental measurements in the field,” explains Kraitzman. “We employed Padé approximations to obtain the required bounds and parametrized the Péclet number specifically for sea ice, considering it as a saturated rock.”

Padé approximations are routinely used to approximate a function by a rational function of a given order, while the Péclet number is a dimensionless parameter defined as the ratio of the rate of advection to the rate of diffusion.
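As a rough numerical illustration (the flow speed, length scale and diffusivity below are invented for the example, not values from the study), the Péclet number can be estimated as:

```python
def peclet_number(flow_speed, length_scale, thermal_diffusivity):
    """Pe = rate of advection / rate of diffusion = v * L / D."""
    return flow_speed * length_scale / thermal_diffusivity

# Illustrative values only: a slow brine flow (1 micron/s) through a
# 10 cm ice layer, with a thermal diffusivity of order 1e-7 m^2/s
pe = peclet_number(flow_speed=1e-6, length_scale=0.1, thermal_diffusivity=1e-7)
print(pe)  # 1.0: advection and diffusion comparable
```

A Péclet number well above one signals that convective (advective) transport dominates over diffusion.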

The results suggest that the effective thermal conductivity of sea ice can increase by a factor of two to three because of convective flow, especially in the lower, warmer sections of the ice, where temperature and the ice’s permeability favour convection, Kraitzman tells Physics World. “This enhancement is mainly confined to the bottom 10 cm during the freezing season, when convective flows are present within the sea ice. Incorporating these bounds into global climate models could improve their ability to predict thermal transport through sea ice, resulting in more accurate predictions of sea ice melt rates.”

Looking forward, Kraitzman and colleagues say they now hope to acquire additional field measurements to refine and validate their model. They also want to extend their mathematical framework to include more general 3D flows and incorporate the complex fluid exchange processes that exist between ocean and sea ice. “By addressing these different areas, we aim to improve the accuracy and applicability of our model, particularly in ocean-sea ice interaction models, aiming for a better understanding of polar heat exchange processes and their global impacts,” says Kraitzman.

The present work is detailed in Proceedings of the Royal Society A.

The post Convection enhances heat transport in sea ice appeared first on Physics World.

Short-range order always appears in new type of alloy

Short-range order plays an important role in defining the properties and performance of “multi-principal element alloys” (MPEAs), but the way in which this order develops is little understood, making it difficult to control. In a surprising new discovery, a US-based research collaboration has found that this order exists regardless of how MPEAs are processed. The finding will help scientists develop more effective ways to improve the properties of these materials and even tune them for specific applications, especially those involving demanding conditions.

MPEAs are a relatively new type of alloy and consist of three or more components in nearly equal proportions. This makes them very different to conventional alloys, which are made from just one or two principal elements with trace elements added to improve their performance.

In recent years, MPEAs have spurred a flurry of interest thanks to their high strength, hardness and toughness over temperature ranges at which traditional alloys, such as steel, can fail. They could also be more resistant to corrosion, making them promising for use in extreme conditions such as power plants and aerospace and automotive technologies.

Ubiquitous short-range order

MPEAs were originally thought of as being random solid solutions with the constituent elements being haphazardly dispersed, but recent experiments have shown that this is not the case.

The researchers – from Penn State University, the University of California, Irvine, the University of Massachusetts, Amherst, and Brookhaven National Laboratory – studied the cobalt/chromium/nickel (CoCrNi) alloy, one of the best-known examples of an MPEA. This face-centred cubic (FCC) alloy boasts the highest fracture toughness ever recorded for an alloy at liquid-helium temperatures.

Using an improved transmission electron microscopy characterization technique combined with advanced three-dimensional printing and atomistic modelling, the team found that short-range order, which occurs when atoms are arranged in a non-random way over short distances, appears in three CoCrNi-based FCC MPEAs under a variety of processing and thermal treatment conditions.

Their computational modelling calculations also revealed that local chemical order forms in the liquid–solid interface when the alloys are rapidly cooled, even at a rate of 100 billion °C/s. This effect comes from the rapid atomic diffusion in the supercooled liquid, at rates equal to or even greater than the rate of solidification. Short-range order is therefore an inherent characteristic of FCC MPEAs, the researchers say.

The new findings contrast with the previous notion that the elements in MPEAs arrange themselves randomly in the crystal lattice if they cool rapidly during solidification. They also refute the idea that short-range order develops mainly during annealing (a process in which heating and slow cooling are used to improve material properties such as strength, hardness and ductility).

Short-range order can affect MPEA properties, such as strength or resistance to radiation damage. The researchers, who report their work in Nature Communications, say they now plan to explore how corrosion and radiation damage affect the short-range order in MPEAs.

“MPEAs hold promise for structural applications in extreme environments. However, to facilitate their eventual use in industry, we need to have a more fundamental understanding of the structural origins that give rise to their superior properties,” says team co-lead Yang Yang, who works in the engineering science and mechanics department at Penn State.

The post Short-range order always appears in new type of alloy appeared first on Physics World.

We should treat our students the same way we would want our own children to be treated

“Thank goodness I don’t have to teach anymore.” These were the words spoken by a senior colleague and former mentor upon hearing about the success of their grant application. They were someone I had respected. Such comments, however, reflect an attitude that persists across many UK higher-education (HE) science departments. The students in our departments – our own children, even – studying at HE institutions across the UK deserve far better.

It is no secret in university science departments that lecturing, tutoring and lab supervision are perceived by some colleagues to be mere distractions from what they consider their “real” work and purpose to be. These colleagues may evasively try to limit their exposure to teaching, and their commitment to its high-quality delivery. This may involve focusing time and attention solely on research activities or being named on as many research grant applications as possible.

University workload models set time aside for funded research projects, as they should. Research grants provide universities with funding that contributes to their finances and are an undeniably important revenue stream. However, an aversion to – or flagrant avoidance of – teaching by some colleagues is encountered by many who have oversight and responsibility for the organization and provision of education within university science departments.

It is also a behaviour and mindset that students recognize, and one that negatively impacts their university experience. Avoidance of teaching displayed, and sometimes privately endorsed, by senior or influential colleagues can shape a department’s culture and compromise the quality of the education it delivers. Such attitudes diffuse through a department’s environment, and students certainly notice and are affected by them.

The quality of physics students’ experiences depends on many factors. One is the likelihood of graduating with skills that make them employable and have successful careers. Others include: the structure, organization and content of their programme; the quality of their modules and the enthusiasm and energy with which they are delivered; the quality of the resources to which they have access; and the extent to which their individual learning needs are supported.

In the UK, the quality of departments’ and institutions’ delivery of these and other components has been assessed since 2005 by the National Student Survey (NSS). Although imperfect and continuing to evolve, it is commissioned every year by the Office for Students on behalf of UK funding and regulatory bodies and is delivered independently by Ipsos.

The NSS can be a helpful tool to gather final-year students’ opinions and experiences about their institutions and degree programmes. Publication of the NSS datasets in July each year should, in principle, provide departments and institutions with the information they need to recognize their weaknesses and improve their subsequent students’ experiences. They would normally be motivated to do this because of the direct impact NSS outcomes have on institutions’ league table positions. These league tables can tangibly impact student recruitment and, therefore, an institution’s finances.

My sincerely held contention, however, communicated some years ago to a red-faced, finger-wagging senior manager during a fraught meeting, is this. We should ignore NSS outcomes. They don’t, and shouldn’t, matter. This is a bold statement; career-ending, even. I articulated that we and all our colleagues should instead wholeheartedly strive to treat our students as we would want our own children, or our younger selves, to be treated, across every academic aspect and learning-related component of their journey while they are with us. This would be the right and virtuous thing to do. In fact, if we do this, the positive NSS outcomes would take care of themselves.

Academic guardians

I have been on the frontline of university teaching, research, external examining and education leadership for close to 30 years. My heartfelt counsel, formed during this journey, is that our students’ positive experiences matter profoundly. They matter because, in joining our departments and committing three or more years and many tens of thousands of pounds to us, our students have placed their fragile and uncertain futures and aspirations into our hands.

We should feel privileged to hold this position and should respond to and collaborate with them positively, always supportively and with compassion, kindness and empathy. We should never be the traditionally tough and inflexible guardians of a discipline that is academically demanding, and which can, in a professional physics academic career, be competitively unyielding. That is not our job. Our roles, instead, should be as our students’ academic guardians, enthusiastically taking them with us across this astonishing scientific and mathematical world; teaching, supporting and enabling wherever we possibly can.

A narrative such as this sounds fantastical. It seems far removed from the rigours and tensions of day-in, day-out delivery of lecture modules, teaching labs and multiple research targets. But the metaphor it represents has been the beating heart of the most successfully effective, positive and inclusive learning environments I have encountered in UK and international HE departments during my long academic and professional journey.

I urge physics and science colleagues working in my own and other UK HE departments to remember and consider what it can be like to be an anxious or confused student, whose cognitive processes are still developing, whose self-confidence may be low and who may, separately, be facing other challenges to their circumstances. We should then behave appropriately. We should always be present and dispense empathy, compassion and a committed enthusiasm to support and enthral our students with our teaching. Ego has no place. We should show kindness, patience, and a willingness to engage them in a community of learning, framed by supportive and inclusive encouragement. We should treat our students the way we would want our own children to be treated.

The post We should treat our students the same way we would want our own children to be treated appeared first on Physics World.

Working in quantum tech: where are the opportunities for success?

The quantum industry is booming. An estimated $42bn was invested in the sector in 2023, and this figure is projected to rise to $106bn by 2040. In this episode of Physics World Stories, two experts from the quantum industry share their experiences and give advice on how to enter this blossoming sector. Quantum technologies – including computing, communications and sensing – could vastly outperform today’s technology for certain applications, such as efficient and scalable artificial intelligence.

Our first guest is Matthew Hutchings, chief product officer and co-founder of SEEQC. Based in New York and with facilities in Europe, SEEQC is developing a digital quantum computing platform with a broad industrial market due to its combination of classical and quantum technologies. Hutchings speaks about the increasing need for engineering positions in a sector that to date has been dominated by workers with a PhD in quantum information science.

The second guest is Araceli Venegas-Gomez, founder and CEO of QURECA, which helps to train and recruit individuals, while also providing business development services. Venegas-Gomez’s journey into the sector began with her reading about quantum mechanics as a hobby while working in aerospace engineering. In launching QURECA, she realized there was an important gap to be filled between quantum information science and business – two communities that have tended to speak entirely different languages.

Get even more tips and advice in the recent feature article ‘Taking the leap – how to prepare for your future in the quantum workforce’.

The post Working in quantum tech: where are the opportunities for success? appeared first on Physics World.

Thermal dissipation decoheres qubits

How does a Josephson junction, which is the basic component of a superconducting quantum bit (or qubit), release its energy into the environment? It is radiated as photons, according to new experiments by researchers at Aalto University in Finland, in collaboration with colleagues from Spain and the US, who used a thermal radiation detector known as a bolometer to measure this radiation directly in the electrical circuits holding the qubits. The work will allow for a better understanding of the loss and decoherence mechanisms in qubits that can disrupt and destroy quantum information, they say.

Quantum computers make use of qubits to store and process information. The most advanced quantum computers to date – including those being developed by IT giants Google and IBM – use qubits made from superconducting electronic circuits operating at very low temperatures. To further improve qubits, researchers need to better understand how they dissipate heat, says Bayan Karimi, who is the first author of a paper describing the new study. This heat transfer is a form of decoherence – a phenomenon by which the quantum states in qubits revert to behaving like classical 0s and 1s and lose the precious quantum information they contain.

“An understanding of dissipation in a single Josephson junction coupled to an environment remains strikingly incomplete, however,” she explains. “Today, a junction can be modelled and characterized without a detailed knowledge of, for instance, where energy is dissipated in a circuit. But improving design and performance will require a more complete picture.”

Physical environment is important

In the new work, Karimi and colleagues used a nano-bolometer to measure the very weak radiation emitted from a Josephson junction over a broad range of frequencies up to 100 GHz. The researchers identified several operation regimes depending on the junction bias, each with a dominant dissipation mechanism. “The whole frequency-dependent power and shape of the current-voltage characteristics can be attributed to the physical environment of the junction,” says Jukka Pekola, who led this new research effort.

The thermal detector works by converting radiation into heat and is composed of an absorber (made of copper), the temperature of which changes when it detects the radiation. The researchers measure this variation using a sensitive thermometer, comprising a tunnel junction between the copper absorber and a superconductor.

“Our work will help us better understand the nature of heat dissipation of qubits that can disrupt and destroy quantum information and how these coherence losses can be directly measured as thermal losses in the electrical circuit holding the qubits,” Karimi tells Physics World.

In the current study, which is detailed in Nature Nanotechnology, the researchers say they measured continuous energy release from a Josephson junction when it was biased by a voltage. They now aim to find out how their detector can sense single heat loss events when the Josephson junction or qubit releases energy. “At best, we will be able to count single photons,” says Pekola.

The post Thermal dissipation decoheres qubits appeared first on Physics World.

New superconductor has record-breaking current density

A superconducting wire segment based on rare-earth barium copper oxide (REBCO) is the highest performing yet in terms of current density, carrying 190 MA/cm2 in the absence of any external magnetic field at a temperature of 4.2 K. At warmer temperatures of 20 K (which is the proposed application temperature for magnets used in commercial nuclear fusion reactors), the wires can still carry over 150 MA/cm2. These figures mean that the wire, despite being only 0.2 micron thick, can carry a current comparable to that of commercial superconducting wires that are almost 10 times thicker, according to its developers at the University at Buffalo in the US.
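To put these figures in context: the total current a film can carry is its critical current density multiplied by its cross-sectional area. A back-of-the-envelope sketch, in which the 4 mm tape width is an assumed typical value rather than a figure from the study:

```python
def critical_current(jc_ma_per_cm2, thickness_m, width_m):
    """Total current (A) = Jc * cross-section; Jc converted from MA/cm^2 to A/m^2."""
    jc_a_per_m2 = jc_ma_per_cm2 * 1e6 / 1e-4  # 1 MA/cm^2 = 1e10 A/m^2
    return jc_a_per_m2 * thickness_m * width_m

# 190 MA/cm^2 through a 0.2 micron-thick film on an assumed 4 mm-wide tape
i = critical_current(190, 0.2e-6, 4e-3)
print(round(i))  # 1520 A
```

The same total current in a film ten times thicker would require only a tenth of the current density, which is the sense in which the thin wire matches much thicker commercial conductors.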

High-temperature superconducting (HTS) wires could be employed in a host of applications, including energy generation, storage and transmission, transportation, and in the defence and medical sectors. They might also be used in commercial nuclear fusion, offering the possibility of limitless clean energy. Indeed, if successful, this niche application could help address the world’s energy supply issues, says Amit Goyal of the University at Buffalo’s School of Engineering and Applied Science, who co-led this new study.

Record-breaking critical current density and pinning force

Before such large-scale applications see the light of day, however, the performance of HTS wires must be improved – and their cost reduced. Goyal and colleagues’ new HTS wire has the highest values of critical current density reported to date. This is particularly true at lower operating temperatures ranging from 4.2–30 K, which is of interest for the fusion application. While still extremely cold, these are much higher than the absolute zero temperatures that traditional superconductors function at, says Goyal.

And that is not all: the wires also have the highest pinning force (that is, the ability to hold magnetic vortices) ever reported for such wires – around 6.4 TN/m3 at 4.2 K and about 4.2 TN/m3 at 20 K, both under a 7 T applied magnetic field.

“Prior to this work, we did not know if such levels of critical current density and pinning were possible to achieve,” says Goyal.

The researchers made their wire using a technique called pulsed laser deposition. Here, a laser beam impinges on a target material and ablates material that is deposited as a film on the substrate, explains Goyal. “This technique is employed by a majority of HTS wire manufacturers. In our experiment, the high critical current density was made possible thanks to a combination of pinning effects from rare-earth doping, oxygen-point defects and insulating barium zirconate nanocolumns as well as optimization of deposition conditions.”

This is a very exciting time for the HTS field, he tells Physics World. “We have a very important niche large-scale application – commercial nuclear fusion. Indeed, one company, Commonwealth Fusion, has invested $1.8bn in series B funding. And within the last 5 years, almost 20 new companies have been founded around the world to commercialize this fusion technology.”

Goyal adds that his group’s work is just the beginning and that “significant performance enhancements are still possible”. “If HTS wire manufacturers work on optimizing the conditions under which the wires are deposited, they should be able to achieve a much higher critical current density, which will result in much better price/performance metric for the wires and enable applications. Not just in fusion, but all other large-scale applications as well.”

The researchers say they now want to further enhance the critical current density and pinning force of their 0.2 micron-thick wires. “We also want to demonstrate thicker films that can carry much higher current,” says Goyal.

They describe their HTS wires in Nature Communications.

The post New superconductor has record-breaking current density appeared first on Physics World.

The physics of cycling’s ‘Everesting’ challenge revealed

“Everesting” involves a cyclist riding up and down a given hill multiple times until the ascent totals the elevation of Mount Everest – or 8848 m.

The challenge became popular during the COVID-19 lockdowns and in 2021 the Irish cyclist Ronan McLaughlin was reported to have set a new “Everesting” record of 6:40:54. This was almost 20 minutes faster than the previous world record of 6:59:38 set by the US’s Sean Gardner in 2020.

Yet a debate soon ensued on social media concerning the significant tailwind of 5.5 metres per second that day, which some claimed would have helped McLaughlin climb the hill multiple times.

But did it? To investigate, Martin Bier, a physicist at East Carolina University in North Carolina, has now analysed what effect air resistance might have when cycling up and down a hill.

“Cycling uses ‘rolling’, which is much smoother and faster, and more efficient [than running],” notes Bier. “All of the work is purely against gravity and friction.”

Bier calculated that a tailwind does help slightly when going uphill, but most of the effort when climbing goes into overcoming gravity rather than air resistance.

When coming downhill, however, any headwind becomes significant, given that the force of air resistance increases with the square of the cyclist’s airspeed. The headwind can then cause a substantial reduction in speed.

So, while a tailwind going up is negligible the headwind coming down certainly won’t be. “There are no easy tricks,” Bier adds. “If you want to be a better Everester, you need to lose weight and generate more [power]. This is what matters — there’s no way around it.”
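Bier’s argument can be sketched with a simple power-balance model. All the parameter values below (mass, drag area, grade and speeds) are illustrative assumptions, not figures from his analysis:

```python
# Illustrative parameters (assumptions, not values from Bier's study)
rho, CdA = 1.2, 0.3      # air density (kg/m^3), drag area Cd*A (m^2)
m, g = 75.0, 9.81        # rider + bike mass (kg), gravitational acceleration

def climb_power(v_ground, grade):
    """Power against gravity on a slope (small-angle approximation)."""
    return m * g * grade * v_ground

def drag_power(v_ground, wind):
    """Aerodynamic power: drag force scales with airspeed squared (positive wind = tailwind)."""
    airspeed = v_ground - wind
    return 0.5 * rho * CdA * airspeed * abs(airspeed) * v_ground

print(climb_power(4, 0.10))   # ~294 W against gravity, uphill at 4 m/s on a 10% grade
print(drag_power(4, 0))       # ~12 W of drag at the same speed in still air
print(drag_power(4, 5.5))     # slightly negative: a 5.5 m/s tailwind gives a small push
print(drag_power(15, 0))      # ~608 W of drag on a 15 m/s descent
```

With these assumed numbers, gravity dominates the climb while drag is a few percent of the effort; on the descent, drag power alone exceeds the whole climbing power, which is why the headwind coming down matters far more than the tailwind going up.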

The post The physics of cycling’s ‘Everesting’ challenge revealed appeared first on Physics World.

Air-powered computers make a comeback

A device containing a pneumatic logic circuit made from 21 microfluidic valves could be used as a new type of air-powered computer that does not require any electronic components. The device could help make a wide range of important air-powered systems safer and less expensive, according to its developers at the University of California, Riverside.

Electronic computers rely on transistors to control the flow of electricity. But in the new air-powered computer, the researchers use tiny valves instead of transistors to control the flow of air rather than electricity. “These air-powered computers are an example of microfluidics, a decades-old field that studies the flow of fluids (usually liquids but sometimes gases) through tiny networks of channels and valves,” explains team leader William Grover, a bioengineer at UC Riverside.

By combining multiple microfluidic valves, the researchers were able to make air-powered versions of standard logic gates. For example, they combined two valves in a row to make a Boolean AND gate. This gate works because air will flow through the two valves only if both are open. Similarly, two valves connected in parallel make a Boolean OR gate. Here, air will flow if either one or the other of the valves is open.
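The series/parallel valve arrangements map directly onto Boolean logic, which can be sketched by treating each valve as a boolean flag (True = open):

```python
def series_valves(a, b):
    """Two valves in a row: air flows only if both are open (AND gate)."""
    return a and b

def parallel_valves(a, b):
    """Two valves side by side: air flows if either is open (OR gate)."""
    return a or b

# Print the truth tables for both gate types
for a in (False, True):
    for b in (False, True):
        print(a, b, "AND:", series_valves(a, b), "OR:", parallel_valves(a, b))
```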

Complex logic circuits

Combining an increasing number of microfluidic valves enables the creation of complex air-powered logic circuits. In the new study, detailed in Device, Grover and colleagues made a device that uses 21 microfluidic valves to perform a parity bit calculation – an important calculation employed by many electronic computers to detect errors and other problems.

The novel air-powered computer detects differences in air pressure flowing through the valves to count the number of bits. If there is an error, it outputs an error signal by blowing a whistle. As a proof-of-concept, the researchers used their device to detect anomalies in an intermittent pneumatic compression (IPC) device – a leg sleeve that fills with air and regularly squeezes a patient’s legs to increase blood flow, with the aim of preventing blood clots that could lead to strokes. Normally, these machines are monitored using electronic equipment.
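The parity check itself is simple to state in conventional code. The sketch below shows the logic only (the pneumatic implementation with valves and a whistle is, of course, quite different):

```python
def parity_bit(bits):
    """Parity bit: 1 if an odd number of bits are set, 0 otherwise."""
    return sum(bits) % 2

def error_detected(bits, stored_parity):
    """The device's whistle fires when recomputed parity mismatches the stored bit."""
    return parity_bit(bits) != stored_parity

word = [1, 0, 1, 1]
p = parity_bit(word)            # 1, since three bits are set
print(error_detected(word, p))  # False: no error
word[2] ^= 1                    # flip one bit to simulate a fault
print(error_detected(word, p))  # True: error detected
```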

“IPC devices can save lives, but they aren’t as widely employed as they could be,” says Grover. “In part, this is because they’re so expensive. We wanted to see if we could reduce their cost by replacing some of their electronic hardware with pneumatic logic.”

Air’s viscosity is important

Air-powered computers behave very similarly, but not quite identically to electronic computers, Grover adds. “For example, we can often take an existing electronic circuit and make an air-powered version of it and it’ll work just fine, but at other times the air-powered device will behave completely differently and we have to tweak the design to make it function.”

The variations between the two types of computers come down to one important physical difference between electricity and air, he explains: electricity does not have viscosity, but air does. “There are also lots of little design details that are of little consequence in electronic circuits but which become important in pneumatic circuits because of air’s viscosity. This makes our job a bit harder, but it also means we can do things with pneumatic logic that aren’t possible – or are much harder to do – with electronic logic.”

In this work, the researchers focused on biomedical applications for their air-powered computer, but they say that this is just the “tip of the iceberg” for this technology. Air-powered systems are ubiquitous, from the brakes on a train, to assembly-line robots and medical ventilators, to name but three. “By using air-powered computers to operate and monitor these systems, we could make these important systems more affordable, more reliable and safer,” says Grover.

“I have been developing air-powered logic for around 20 years now, and we’re always looking for new applications,” he tells Physics World. “What is more, there are areas in which they have advantages over conventional electronic computers.”

One specific application of interest is moving grain inside silos, he says. These enormous structures hold grain and other agricultural products and people often have to climb inside to spread out the grain – an extremely dangerous task because they can become trapped and suffocate.

“Robots could take the place of humans here, but conventional electronic robots could generate electronic sparks that could create flammable dust inside the silo,” Grover explains. “An air-powered robot, on the other hand, would work inside the silo without this risk. We are thus working on an air-powered ‘brain’ for such a robot to keep people out of harm’s way.”

Air-powered computers aren’t a new idea, he adds. Decades ago, there was a multitude of devices being designed that ran on water or air to perform calculations. Air-powered computers fell out of favour, however, when transistors and integrated circuits made electronic computers feasible. “We’ve therefore largely forgotten the history of computers that ran on things other than electricity. Hopefully, our new work will encourage more researchers to explore new applications for these devices.”

The post Air-powered computers make a comeback appeared first on Physics World.

Quantum hackathon makes new connections

It is said that success breeds success, and that’s certainly true of the UK’s Quantum Hackathon – an annual event organized by the National Quantum Computing Centre (NQCC) that was held in July at the University of Warwick. Now in its third year, the 2024 hackathon attracted 50% more participants from across the quantum ecosystem, who tackled 13 use cases set by industry mentors from the private and public sectors. Compared to last year’s event, participants were given access to a greater range of technology platforms, including software control systems as well as quantum annealers and physical processors, and had an additional day to perfect and present their solutions.

The variety of industry-relevant problems and the ingenuity of the quantum-enabled solutions were clearly evident in the presentations on the final day of the event. An open competition for organizations to submit their problems yielded use cases from across the public and private spectrum, including car manufacturing, healthcare and energy supply. While some industry partners were returning enthusiasts, such as BT and Rolls Royce, newcomers to the hackathon included chemicals firm Johnson Matthey, Aioi R&D Lab (a joint venture between Oxford University spin-out Mind Foundry and the global insurance brand Aioi Nissay Dowa) and the North Wales Police.

“We have a number of problems that are beyond the scope of standard artificial intelligence (AI) or neural networks, and we wanted to see whether a quantum approach might offer a solution,” says Alastair Hughes, lead for analytics and AI at North Wales Police. “The results we have achieved within just two days have proved the feasibility of the approach, and we will now be looking at ways to further develop the model by taking account of some additional constraints.”

The specific use case set by Hughes was to optimize the allocation of response vehicles across North Wales, which has small urban areas where incidents tend to cluster and large swathes of countryside where the crime rate is low. “Our challenge is to minimize response times without leaving some of our communities unprotected,” he explains. “At the moment we use a statistical process that needs some manual intervention to refine the configuration, which across the whole region can take a couple of months to complete. Through the hackathon we have seen that a quantum neural network can deliver a viable solution.”

Teamwork
Problem solving Each team brought together a diverse range of skills, knowledge and experience to foster learning and accelerate the development process. (Courtesy: NQCC)

While Hughes had no prior experience with using quantum processors, some of the other industry mentors are already investigating the potential benefits of quantum computing for their businesses. At Rolls Royce, for example, quantum scientist Jarred Smalley is working with colleagues to investigate novel approaches for simulating complex physical processes, such as those inside a jet engine. Smalley has mentored a team at all three hackathons, setting use cases that he believes could unlock a key bottleneck in the simulation process.

“Some of our crazy problems are almost intractable on a supercomputer, and from that we extract a specific set of processes where a quantum algorithm could make a real impact,” he says. “At Rolls Royce our research tends to be focused on what we could do in the future with a fault-tolerant quantum computer, and the hackathon offers a way for us to break into the current state of the technology and to see what can be done with today’s quantum processors.”

Since the first hackathon in 2022, Smalley says that there has been an improvement in the size and capabilities of the hardware platforms. But perhaps the biggest advance has been in the software and algorithms available to help the hackers write, test and debug their quantum code. Reflecting that trend in this year’s event was the inclusion of software-based technology providers, such as Q-CTRL’s Fire Opal and Classiq, that provide tools for error suppression and optimizing quantum algorithms. “There are many more software resources for the hackers to dive into, including algorithms that can even analyse the problems themselves,” Smalley says.

Cathy White, a research manager at BT who has mentored a team at all three hackathons, agrees that rapid innovation in hardware and software is now making it possible for the hackers to address real-world problems – which in her case was to find the optimal way to position fault-detecting sensors in optical networks. “I wanted to set a problem for which we could honestly say that our classical algorithms can’t always provide a good approximation,” she explains. “We saw some promising results within the time allowed, and I’m feeling very positive that quantum computers are becoming useful.”

Both White and Smalley could see a significant benefit from the extended format, which gave hackers an extra day to explore the problem and consider different solution pathways. The range of technology providers involved in the event also enabled the teams to test their solutions on different platforms, and to adapt their approach if they ran into a problem. “With the extra time my team was able to use D-Wave’s quantum annealer as well as a gate-model approach, and it was impressive to see the diversity of algorithms and approaches that the students were able to come up with,” White comments. “They also had more scope to explore different aspects of the problem, and to consolidate their results before deciding what they wanted to present.”
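Annealers such as D-Wave’s take problems expressed in QUBO form (quadratic unconstrained binary optimization): minimize xᵀQx over binary variables. As an illustration of the form only – a toy max-cut instance solved by exhaustive search, not the actual sensor-placement or hackathon problem:

```python
import itertools

# Toy QUBO: max-cut on a 4-node graph, written as minimizing
# x^T Q x over binary x. (Illustrative only.)
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
n = 4

# Each edge (i, j) contributes -(x_i + x_j - 2*x_i*x_j), which is
# -1 when the edge is cut (x_i != x_j) and 0 otherwise.
Q = [[0.0] * n for _ in range(n)]
for i, j in edges:
    Q[i][i] -= 1
    Q[j][j] -= 1
    Q[i][j] += 2

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# An annealer samples low-energy states; classically we can enumerate.
best = min(itertools.product([0, 1], repeat=n), key=energy)
print(best, energy(best))  # lowest energy = -(size of the max cut)
```

On real hardware the same Q matrix would be handed to the sampler rather than enumerated, which is what makes the formulation step – not the solve step – the hackers’ main work.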

One clear outcome from the extended format was more opportunity to benchmark the quantum solutions against their classical counterparts. “The students don’t claim quantum advantage without proper evidence,” adds White. “Every year we see remarkable progress in the technology, but they can help us to see where there are still challenges to be overcome.”

According to Stasja Stanisic from Phasecraft, one of the four-strong judging panel, a robust approach to benchmarking was one of the stand-out factors for the winning team. Mentored by Aioi R&D Lab, the team investigated a risk aggregation problem, which involved modelling dynamic relationships between variables such as insurance losses, stock-market movements and the occurrence of natural disasters. “The winning team took time to really understand the problem, which allowed them to adapt their algorithm to match their use-case scenario,” Stanisic explains. “They also had a thorough and structured approach to benchmarking their results against other possible solutions, which is an important comparison to make.”
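The article gives no details of the winning team’s model, but the classical baseline against which such a solution would be benchmarked is typically a Monte Carlo simulation over correlated loss drivers. A minimal sketch, with an invented correlation matrix and lognormal marginals standing in for insurance, market and catastrophe losses:

```python
import numpy as np

# Classical Monte Carlo sketch of risk aggregation across three
# correlated loss drivers. Correlations and marginals are invented.
rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.3, 0.5],
                 [0.3, 1.0, 0.1],
                 [0.5, 0.1, 1.0]])
L = np.linalg.cholesky(corr)  # induces the desired correlations

n = 100_000
z = rng.standard_normal((n, 3)) @ L.T   # correlated standard normals
losses = np.exp(0.5 * z)                # lognormal marginal losses
total = losses.sum(axis=1)              # aggregate loss per scenario

# Tail risk: 99th-percentile aggregate loss (Value-at-Risk style).
var99 = np.quantile(total, 0.99)
print(round(float(var99), 2))
```

Benchmarking a quantum algorithm against a baseline like this – same inputs, same tail statistic – is the kind of like-for-like comparison the judges highlighted.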

Learning points Presentations on the final day of the event enabled each team to share their results with other participants and a four-strong judging panel. (Courtesy: NQCC)

Teams were judged on various criteria, including the creativity of the solution, its success in addressing the use case, and investigation of scaling and feasibility. The social impact and ethical considerations of each solution were also assessed. Using the NQCC’s Quantum STATES principles for responsible and ethical quantum computing (REQC), which were developed and piloted at the centre, the teams considered, for example, the potential impact of their innovation on different stakeholders and the explainability of their solution. They also proposed practical recommendations to maximize societal benefit. While many of their findings were specific to their use cases, one common theme was the need for open and transparent development processes to build trust among the wider community.

“Quantum computing is an emerging technology, and we have the opportunity right at the beginning to create an environment where ethical considerations are discussed and respected,” says Stanisic. “Some of the teams showed some real depth of thought, which was exciting to see, while the diverse use cases from both the public and private sectors allowed them to explore these ethical considerations from different perspectives.”

Also vital for participants was the chance to link with and learn from their peers. “The hackathon is a place where we can build and maintain relationships, whether with the individual hackers or with the technology partners who are also here,” says Smalley. For Hughes, meanwhile, the ability to engage with quantum practitioners has been a game changer. “Being in a room with lots of clever people who are all sparking off each other has opened my eyes to the power of quantum neural networks,” he says. “It’s been phenomenal, and I’m excited to see how we can take this forward at North Wales Police.”

  • To take part in the 2025 Quantum Hackathon – whether as a hacker, an industry mentor or technology provider – please e-mail the NQCC team at nqcchackathon@stfc.ac.uk

The post Quantum hackathon makes new connections appeared first on Physics World.

Rheo-electric measurements to predict battery performance from slurry processing

The market for lithium-ion batteries (LIBs) is expected to grow ~30x to almost 9 TWh produced annually by 2040, driven by demand from electric vehicles and grid-scale storage. Production of these batteries requires high-yield coating processes using slurries of active material, conductive carbon, and polymer binder applied to metal-foil current collectors. To better understand the connections between slurry formulation, coating conditions, and composite electrode performance, we apply new rheo-electric characterization tools to battery slurries. Rheo-electric measurements reveal differences in carbon-black structure in the slurry that go undetected by rheological measurements alone. These results are connected to the characterization of coated electrodes in LIBs in order to develop methods for predicting the performance of a battery system from the formulation and coating conditions of the composite electrode slurries.

Jeffrey Richards (left) and Jeffrey Lopez (right)

Jeffrey Richards is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on understanding the rheological and electrical properties of soft materials found in emergent energy technologies.

Jeffrey Lopez is an assistant professor of chemical and biological engineering at Northwestern University. His research is focused on using fundamental chemical engineering principles to study energy storage devices and design solutions to enable accelerated adoption of sustainable energy technologies.



The post Rheo-electric measurements to predict battery performance from slurry processing appeared first on Physics World.

Simultaneous structural and chemical characterization with colocalized AFM-Raman

The combination of Atomic Force Microscopy (AFM) and Raman spectroscopy provides deep insights into the complex properties of various materials. Raman spectroscopy facilitates the chemical characterization of compounds, interfaces and complex matrices, offering crucial insights into molecular structures and compositions, including microscale contaminants and trace materials. AFM, meanwhile, provides essential data on topography and mechanical properties, such as surface texture, adhesion, roughness, and stiffness at the nanoscale.

Traditionally, users have had to rely on multiple instruments to carry out such comprehensive analysis. HORIBA’s AFM-Raman system stands out as a uniquely multimodal tool, integrating an automated AFM with a Raman/photoluminescence spectrometer, providing precise pixel-to-pixel correlation between structural and chemical information in a single scan.

This colocalized approach is particularly valuable in applications such as polymer analysis, where both surface morphology and chemical composition are critical; in semiconductor manufacturing, for detecting defects and characterizing materials at the nanoscale; and in life sciences, for studying biological membranes, cells, and tissue samples. Additionally, it’s ideal for battery research, where understanding both the structural and chemical evolution of materials is key to improving performance.

João Lucas Rangel

João Lucas Rangel currently serves as the AFM & AFM-Raman global product manager at HORIBA and holds a PhD in biomedical engineering. Specializing in Raman, infrared and fluorescence spectroscopies, his PhD research focused on biochemical changes in the skin dermis. João joined HORIBA Brazil in 2012 as a molecular spectroscopy consultant before moving into a full-time role as an application scientist providing sales support across Latin America; his responsibilities later expanded to include oversight of applicative sales support and co-management of business activities within the region. In 2022 he joined HORIBA France as a correlative microscopy – Raman application specialist, responsible for developing the correlative business globally by combining HORIBA’s existing technologies with complementary techniques. In 2023 he was promoted to his current role as AFM & AFM-Raman global product manager, in which he oversees strategic initiatives aimed at the company’s business sustainability and continued growth.

The post Simultaneous structural and chemical characterization with colocalized AFM-Raman appeared first on Physics World.
