
Return to Helgoland: celebrating 100 years of quantum mechanics

1 December 2024 at 12:00
Sunset on the island of Helgoland
A new dawn It was on the island of Helgoland off the coast of Germany in June 1925 that Werner Heisenberg created matrix mechanics. (Courtesy: iStock/Iurii Buriak)

At 3 a.m. one morning in June 1925, an exhausted, allergy-ridden 23-year-old climbed a rock at the edge of a small island off the coast of Germany in the North Sea. Werner Heisenberg, who was an unknown physics postdoc at the time, had just cobbled together, in crude and unfamiliar mathematics, a framework that would shortly become what we know as “matrix mechanics”. If we insist on pegging the birth of quantum mechanics to a particular place and time, Helgoland in June 1925 it is.

Heisenberg’s work a century ago is the reason why the United Nations has proclaimed 2025 to be the International Year of Quantum Science and Technology. It’s a global initiative to raise the public’s awareness of quantum science and its applications, with numerous activities in the works throughout the year. One of the most significant events for physicists will be a workshop running from 9–14 June on Helgoland, exactly 100 years on from the very place where quantum mechanics supposedly began.

Entitled “Helgoland 2025”, the event is designed to honour Heisenberg’s development of matrix mechanics, which organizers have dubbed “the first formulation of quantum theory”. The workshop, they say, will explore “the increasingly fruitful intersection between the foundations of quantum mechanics and the application of these foundations in real-world settings”. But why was Heisenberg’s work so vital to the development of quantum mechanics? Was it really as definitive as we like to think? And is the oft-repeated Helgoland story really true?

How it all began

The events leading up to Heisenberg’s trip can be traced back to the work of Max Planck in 1900. Planck was trying to produce a formula for how certain kinds of materials absorb and emit light at different energies. In what he later referred to as an “act of sheer desperation”, Planck found himself having to use the idea of the “quantum”, which implied that electromagnetic radiation is not continuous but can be absorbed and emitted only in discrete chunks.

Standing out as a smudge on the beautiful design of classical physics, the idea of quantization appeared of limited use. Some physicists called it “ugly”, “grotesque” and “distasteful”; it was surely a theoretical sticking plaster that could soon be peeled off. But the quantum proved indispensable, cropping up in more and more branches of physics, including the structure of the hydrogen atom, thermodynamics and solid-state physics. It was like an obnoxious visitor whom you try to expel from your house but can’t. Worse, its presence seemed to grow. The quantum, remarked one scientist at the time, was a “lusty infant”.

‘Quantum theory’ was like having instructions for how to get from place A to place B. What you really wanted was a ‘quantum mechanics’ – a map that showed you how to go from any place to any other.

Robert P Crease, Stony Brook University

Attempts to domesticate that infant in the first quarter of the 20th century were made not only by Planck but other physicists too, such as Wolfgang Pauli, Max Born, Niels Bohr and Ralph Kronig. They succeeded only in producing rules for calculating certain phenomena that started with classical theory and imposed conditions. “Quantum theory” was like having instructions for how to get from place A to place B. What you really wanted was a “quantum mechanics” – a map that, working with one set of rules, showed you how to go from any place to any other.

Werner Heisenberg (1901-1976). Portrait of the German theoretical physicist, c.1927.
Delicate figure Werner Heisenberg was said to be sensitive, good looking and talented at music but vulnerable to allergies. (Courtesy: IanDagnall Computing/Alamy Stock Photo)

Heisenberg was a young crusader in this effort. Born on 5 December 1901 – the year after Planck’s revolutionary discovery – Heisenberg had the character often associated with artists, with dashing looks, good musicianship and a physical frailty that included a severe vulnerability to allergies. In the summer of 1923 he had just finished his PhD under Arnold Sommerfeld at the Ludwig Maximilian University in Munich and was starting a postdoc with Born at the University of Göttingen.

Like others, Heisenberg was stymied in his attempts to develop a mathematical framework for the frequencies, amplitudes, orbitals, positions and momenta of quantum phenomena. Maybe, he wondered, the trouble was trying to cast these phenomena in a Newtonian-like visualizable form. Instead of treating them as classical properties with specific values, he decided to look at them in purely mathematical terms as operators acting on functions. It was then that an “unfortunate personal setback” occurred.

Destination Helgoland

Referring to a bout of hay fever that had wiped him out, Heisenberg asked Born for a two-week leave of absence from Göttingen and took a boat to Helgoland. The island, which lies some 50 km off Germany’s mainland, is barely 1 km² in size. However, its strategic military location had given it an outsized history that saw it swapped several times between different European powers. Part of Denmark from 1714, the island was occupied by Britain in 1807 before coming under Germany’s control in 1890.

During the First World War, Germany turned the island into a military base and evacuated all its residents. By the time Heisenberg arrived, the soldiers had long gone and Helgoland was starting to recover its reputation as a centre for commercial fishing and a bracing tourist destination. Most importantly for Heisenberg, it had fresh winds and was remote from allergen producers.

Colourful lobster huts on the offshore island Helgoland
Site for sore eyes Helgoland is a popular tourist destination with fresh and bracing North Sea winds that gave Werner Heisenberg relief from a severe bout of hay fever, letting him focus on his seminal work on quantum mechanics. (Courtesy: iStock/Sabine Wagner)

Heisenberg arrived at Helgoland on Saturday 6 June 1925 coughing and sneezing, and with such a swollen face that his landlady decided he had been in a fight. She installed him in a quiet room on the second floor of her Gasthaus that overlooked the beach and the North Sea. But he didn’t stop working. “What exactly happened on that barren, grassless island during the next ten days has been the subject of much speculation and no little romanticism,” wrote historian David Cassidy in his definitive 1992 book Uncertainty: The Life and Science of Werner Heisenberg.

In Heisenberg’s telling, decades later, he kept turning over all he knew and began to construct equations of observables – of frequencies and amplitudes – in what he called “quantum-mechanical series”. He outlined a rough mathematical scheme, but one so awkward and clumsy that he wasn’t even sure it obeyed the conservation of energy, as it surely must. One night Heisenberg turned to that issue.

“When the first terms seemed to accord with the energy principle, I became rather excited,” he wrote much later in his 1971 book Physics and Beyond. But he was still so tired that he began to stumble over the maths. “As a result, it was almost three o’clock in the morning before the final result of my computations lay before me.” The work seemed at once finished and incomplete – it gave him a glimpse of a new world, though not one worked out in detail – and his emotions were weighted with fear and longing.

“I was deeply alarmed,” Heisenberg continued. “I had the feeling that, through the surface of atomic phenomena, I was looking at a strangely beautiful interior, and felt almost giddy at the thought that I now had to probe this wealth of mathematical structure nature had so generously spread out before me. I was far too excited to sleep and so, as a new day dawned, I made for the southern tip of the island, where I had been longing to climb a rock jutting out into the sea. I now did so without too much trouble, and waited for the sun to rise.”

What happened on Helgoland?

Historians are suspicious of Heisenberg’s account. In their 2023 book Constructing Quantum Mechanics Volume 2: The Arch 1923–1927, Anthony Duncan and Michael Janssen suggest that Heisenberg made “somewhat less progress in his visit to Helgoland in June 1925 than later hagiographical accounts of this episode claim”. They believe that Heisenberg, in Physics and Beyond, may “have misremembered exactly how much he accomplished in Helgoland four decades earlier”.

What’s more – as Cassidy wondered in Uncertainty – how could Heisenberg have been so sure that the result agreed with the conservation of energy without having carted all his reference books along to the island, which he surely had not? Could it really be, Cassidy speculated sceptically, that Heisenberg had memorized the relevant data?

Alexei Kojevnikov – another historian – even doubts that Heisenberg was entirely candid about the reasons behind his inspiration. In his 2020 book The Copenhagen Network: The Birth of Quantum Mechanics from a Postdoctoral Perspective, Kojevnikov notes that fleeing from strong-willed mentors such as Bohr, Born, Kronig, Pauli and Sommerfeld was key to Heisenberg’s creativity. “In order to accomplish his most daring intellectual breakthrough,” Kojevnikov writes, “Heisenberg had to escape from the authority of his academic supervisors into the temporary loneliness and freedom on a small island in the North Sea.”

Whatever did occur on the island, one thing is clear. “Heisenberg had his breakthrough,” decides Cassidy in his book. He left Helgoland 10 days after he arrived, returned to Göttingen, and dashed off a paper that was published in Zeitschrift für Physik in September 1925 (33 879). In the article, Heisenberg wrote that “it is not possible to assign a point in space that is a function of time to an electron by means of observable quantities.” He then suggested that “it seems more advisable to give up completely on any hope of an observation of the hitherto-unobservable quantities (such as the position and orbital period of the electron).”

To modern ears, Heisenberg’s comments may seem unremarkable, but his proposition would have been nearly unthinkable to those steeped in Newtonian mechanics. Of course, the idea of completely abandoning the observability of those quantities didn’t quite hold up: under certain conditions, it can make sense to speak of observing them. But his remarks certainly captured the direction he was taking.

The only trouble was that his scheme, with its “quantum-mechanical relations”, produced formulae that were “noncommutative” – a distressing asymmetry that was surely an incorrect feature in a physical theory. Heisenberg all but shoved this feature under the rug in his Zeitschrift für Physik article, where he relegated the point to a single sentence.

Abstract image of quantum ideas
Strange world We might not fully understand quantum physics, but novel experimental techniques are helping us to make progress, while applications in areas such as quantum computing and cryptography are booming. (Courtesy: iStock/agsandrew)

The more mathematically trained Born, on the other hand, sensed something familiar about the maths and soon recognized that Heisenberg’s bizarre “quantum-mechanical relations” with their strange tables were what mathematicians called matrices. Heisenberg was unhappy with that particular name for his work, and considered returning to what he had called “quantum-mechanical series”.

Fortunately, he didn’t, for it would have made the rationale for the Helgoland 2025 conference clunkier to describe. Born was delighted with the connection to traditional mathematics. In particular he found that when the matrix p associated with momentum and the matrix q associated with position are multiplied in different orders, the difference between the two products is proportional to Planck’s constant, h.

As Born wrote in his 1956 book Physics in My Generation: “I shall never forget the thrill I experienced when I succeeded in condensing Heisenberg’s ideas on quantum conditions in the mysterious equation pq – qp = h/2πi, which is the centre of the new mechanics and was later found to imply the uncertainty relations”. In February 1926 Born, Heisenberg and Pascual Jordan published a landmark paper that worked out the implications of this equation (Zeit. Phys. 35 557). At last, physicists had a map of the quantum domain.
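In modern notation, Born’s quantum condition is usually written with the identity matrix made explicit. The following is a standard reconstruction of the relation quoted above, not a quotation from Born’s book:

```latex
% Born's quantum condition in matrix form: p and q are infinite matrices,
% \mathbf{1} is the identity matrix.
\[
  pq - qp = \frac{h}{2\pi i}\,\mathbf{1}
  \qquad\Longleftrightarrow\qquad
  qp - pq = i\hbar\,\mathbf{1},
  \quad \hbar \equiv \frac{h}{2\pi}.
\]
```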

Almost four decades later in an interview with the historian Thomas Kuhn, Heisenberg recalled Pauli’s “extremely enthusiastic” reaction to the developments. “[Pauli] said something like ‘Morgenröte einer Neuzeit’,” Heisenberg told Kuhn. “The dawn of a new era.” But it wasn’t entirely smooth sailing after that dawn. Some physicists were unenthusiastic about Heisenberg’s new mechanics, while others were outright sceptical.

Werner Heisenberg and Erwin Schrödinger
That winning feeling Werner Heisenberg (right) won the 1932 Nobel Prize for Physics “for the creation of quantum mechanics”. He was given the prize in 1933, with that year’s award shared by Paul Dirac and Erwin Schrödinger, shown here (left) with Sweden’s King Gustav V (middle) at the Nobel ceremony in Stockholm in December 1933. (Max-Planck-Institute, courtesy of AIP Emilio Segrè Visual Archives)

Yet successful applications kept coming. Pauli applied the new mechanics to the light emitted by the hydrogen atom and derived the Balmer formula, a rule that had been known empirically since the mid-1880s. Then, in one of the most startling coincidences in the history of science, the Austrian physicist Erwin Schrödinger produced a complete map of the quantum domain starting from a much more familiar mathematical basis, which became known as “wave mechanics”. Crucially, Heisenberg’s matrix mechanics and Schrödinger’s wave mechanics turned out to be equivalent.
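For reference, the Balmer formula that Pauli recovered from the new mechanics gives the wavelengths of hydrogen’s visible spectral lines; in the standard Rydberg form it reads:

```latex
\[
  \frac{1}{\lambda} = R_{\mathrm{H}}\left(\frac{1}{2^{2}} - \frac{1}{n^{2}}\right),
  \qquad n = 3, 4, 5, \ldots
\]
```

where R_H ≈ 1.097 × 10⁷ m⁻¹ is the Rydberg constant.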

Even more fundamental implications followed. In an article published in Naturwissenschaften (14 899) in September 1926, Heisenberg wrote that our “ordinary intuition” does not work in the subatomic realm. “Because the electron and the atom possess not any degree of physical reality as the objects of our daily experience,” he said, “investigation of the type of physical reality which is proper to electrons and atoms is precisely the subject of quantum mechanics.”

Quantum mechanics, alarmingly, was upending reality itself, for the uncertainty it introduced was not only mathematical but “ontological” – meaning it had to do with the fundamental features of the universe. Early the next year, Heisenberg, in correspondence with Pauli, derived the equation ΔpΔq ≥ h/4π, the “uncertainty principle”, which became the touchstone of quantum mechanics. The birth complications, however, persisted. Some even got worse.
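In modern notation the uncertainty relation is written as a product of uncertainties, and it follows directly from the non-commutativity of p and q. The general (Robertson) form shown here is the later textbook statement rather than Heisenberg’s original 1927 argument:

```latex
\[
  \Delta q\,\Delta p \;\ge\; \tfrac{1}{2}\,\bigl|\langle\, qp - pq \,\rangle\bigr|
  \;=\; \frac{\hbar}{2} \;=\; \frac{h}{4\pi}.
\]
```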

Catalytic conference

A century on from Heisenberg’s visit to Helgoland, quantum mechanics still has physicists scratching their heads. “I think most people agree that we are still trying to make sense of even basic non-relativistic quantum mechanics,” admits Jack Harris, a quantum physicist at Yale University who is co-organizing Helgoland 2025 with Časlav Brukner, Steven Girvin and Florian Marquardt.

We really don’t fully understand the quantum world yet. We apply it, we generalize it, we develop quantum field theories and so on, but still a lot of it is uncharted territory.

Igor Pikovsky, Stevens Institute, New Jersey

“We really don’t fully understand the quantum world yet,” adds Igor Pikovsky from the Stevens Institute in New Jersey, who works in gravitational phenomena and quantum optics. “We apply it, we generalize it, we develop quantum field theories and so on, but still a lot of it is uncharted territory.” Philosophers and quantum physicists with strong opinions have debated interpretations and foundational issues for a long time, he points out, but the results of those discussions have been unclear.

Helgoland 2025 hopes to change all that. Advances in experimental techniques let us ask new kinds of fundamental questions about quantum mechanics. “You have new opportunities for studying quantum physics at completely different scales,” says Pikovsky. “You can make macroscopic, Schrödinger-cat-like systems, or very massive quantum systems to test. You don’t need to debate philosophically about whether there’s a measurement problem or a classical-quantum barrier – you can start studying these questions experimentally.”

One phenomenon fundamental to the puzzle of quantum mechanics is entanglement, which prevents the quantum state of a system from being described independently of the state of others. Thanks to the Einstein–Podolsky–Rosen (EPR) paper of 1935 (Phys. Rev. 47 777), Chien-Shiung Wu and Irving Shaknov’s experimental demonstration of entanglement in extended systems in 1949, and John Bell’s theorem in 1964 (Physics 1 195), physicists know that entanglement in extended systems is a large part of what’s so weird about quantum mechanics.
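A minimal illustration of that non-separability is the two-qubit singlet state used in later Bell tests (shown here only to illustrate the concept; it is not the specific system of the 1949 Wu–Shaknov experiment):

```latex
\[
  |\Psi^{-}\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|0\rangle_{A}|1\rangle_{B} - |1\rangle_{A}|0\rangle_{B}\bigr)
  \;\neq\; |\psi\rangle_{A}\otimes|\phi\rangle_{B}
  \quad\text{for any single-particle states } |\psi\rangle_{A},\,|\phi\rangle_{B}.
\]
```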

Understanding all that entanglement entails, in turn, has led physicists to realize that information is a fundamental physical concept in quantum mechanics. “Even a basic physical quantum system behaves differently depending on how information about it is stored in other systems,” Harris says. “That’s a starting point both for deep insights into what quantum mechanics tells us about the world, and also for applying it.”

Helgoland 2025: have you packed your tent?

Red and white striped lighthouse on sand dunes at coast of island
Quantum superposition Helgoland is so small that delegates to the workshop have to share double rooms in the few hotels and guesthouses on the island, while others are being invited to camp on the beach. (Courtesy: iStock/Frederick Doerschem)

Running from 9–14 June 2025 on the island where Werner Heisenberg did his pioneering work on quantum mechanics, the Helgoland 2025 workshop is a who’s who of quantum physics. Five Nobel laureates in the field of quantum foundations are coming. David Wineland and Serge Haroche, who won in 2012 for measuring and manipulating individual quantum systems, will be there. So too will Alain Aspect, John Clauser and Anton Zeilinger, who were honoured in 2022 for their work on quantum-information science.

There’ll be Charles Bennett and Gilles Brassard, who pioneered quantum cryptography, quantum teleportation and other applications, as well as quantum-sensing guru Carlton Caves. Researchers from industry intend to be present too, including Krysta Svore, who is vice-president of Microsoft Quantum.

Other attendees are from the intersection of foundations and applications. There will be researchers working on gravitation, mostly from quantum-gravity phenomenology, where the aim is to seek experimental signatures of quantum effects in gravity. Others work on quantum clocks, quantum cryptography and innovative ways of controlling light, such as the squeezed light used at LIGO to detect gravitational waves.

Helgoland speakers
Entangled minds Helgoland 2025 boasts a who’s who of quantum physics including (clockwise from top right) Serge Haroche, Krysta Svore, Carlo Rovelli, Anton Zeilinger, Ana Maria Rey and Jian-Wei Pan. (Courtesy: Haroche, CC BY-SA 4.0; Svore, Microsoft Corp; Rovelli, Fronteiras do Pensamento/Greg Salibian, CC BY-SA 2.0; Zeilinger, Austrian Academy of Sciences, CC BY-SA 2.5; Rey, NIST, Public domain; Pan, CC BY-SA 4.0)

The programme starts in Hamburg on 9 June with a banquet and a few talks. Attendees will then take a ferry to Helgoland the following morning for a week of lectures, panel discussions and poster sessions. All talks are plenary, but in the evenings panels of a half-dozen or so people will address bigger questions familiar to every quantum physicist but rarely discussed in research papers. What is it about quantum mechanics, for instance, that makes it so compatible with so many interpretations?

If you’re thinking of going, you’re almost certainly out of luck. Registration closed in April 2024, and hotel rooms and Airbnb and Booking.com listings are nearly exhausted. Participants are having to share double rooms or are being invited to camp on the beaches – with their own gear.

Helgoland 2025 will therefore focus on the two-way street between foundations and applications in what promises to be a unique event. “The conference is intended to be a bit catalytic,” Harris adds. “[There will be] people who didn’t realize that others were working on similar issues in different fields, and a lot of people who will never have met each other”. The disciplinary diversity will be augmented by the presence of students as well as poster sessions, which tend to bring in an even broader variety of research topics.

There will be people [at Helgoland] who work on black holes whose work is familiar to me but who I haven’t met yet.

Ana Maria Rey, University of Colorado, Boulder

One of those looking forward to such encounters is Ana Maria Rey – a theoretical physicist at the University of Colorado, Boulder, and a JILA fellow who studies quantum phenomena in ways that have improved atomic clocks and quantum computing. “There will be people who work on black holes whose work is familiar to me but who I haven’t met yet,” she says. Finding people should be easy: Helgoland is tiny and only a hand-picked group of people have been invited to attend (see box above).

What’s also unusual about Helgoland 2025 is that it has as many practically-minded as theoretically-minded participants. But that doesn’t faze Magdalena Zych, a physicist from Stockholm University in Sweden. “I’m biased because academically I grew up in Vienna, where Anton Zeilinger’s group always had people working on theory and applications,” she says.

Zych’s group has, for example, recently discovered a way to use the uncertainty principle to get a better understanding of the semi-classical space–time trajectories of composite particles. She plans to talk about this research at Helgoland, finding it appropriate given that it relies on Heisenberg’s principle, is a product of specific theoretical work and is valid more generally. “It relates to the arch of the conference, looking both backwards and forwards, and from theory to applications.”

Nathalie de Leon: heading for Helgoland

Nathalie de Leon
Precision thinker Nathalie de Leon from Princeton University is one of the researchers invited to the Helgoland meeting in June 2025. (Courtesy: Princeton University/Sameer A Khan)

In June 2022, Nathalie de Leon, a physicist at Princeton University working on quantum computing and quantum metrology, was startled to receive an invitation to the Helgoland conference. “It’s not often you get [one] a full three years in advance,” says de Leon, who also found it unusual that participants had to attend for the entire six days. But she was not surprised at the composition of the conference with its mix of theorists, experimentalists and people applying what she calls the “weirder” aspects of quantum theory.

“When I was a graduate student [in the late 2000s], it was still the case that quantum theorists and researchers who built things like quantum computers were well aware of each other but they didn’t talk to each other much,” she recalls. “In their grant proposals, the physicists had to show they knew what the computer scientists were doing, and the computer scientists had to justify their work with appeals to physics. But they didn’t often collaborate.” De Leon points out that over the last five or 10 years, however, more and more opportunities for these groups to collaborate have emerged. “Companies like IBM, Google, QuEra and Quantinuum now have theorists and academics trying to develop the hardware to make quantum tech a practical reality,” she says.

Some quantum applications have even cropped up in highly sophisticated technical devices, such as the huge Laser Interferometer Gravitational-Wave Observatory (LIGO). “A crazy amount of classical engineering was used to build this giant interferometer,” says de Leon, “which got all the way down to a minuscule sensitivity. Then as a last step the scientists injected something called squeezed light, which is a direct consequence of quantum mechanics and quantum measurement.” According to de Leon, that squeezing let us see something like eight times more of the universe. “It’s one of the few places where we get a real tangible advantage out of the strangeness of quantum mechanics,” she adds.

Other, more practical benefits are also bound to emerge from quantum information theory and quantum measurement. “We don’t yet have quantum technologies on the open consumer market in the same way we have lasers you can buy on Amazon for $15,” de Leon says. But groups gathering in Helgoland will give us a better sense of where everything is heading. “Things,” she adds, “are moving so fast.”

Sadly, participants will not be able to visit Heisenberg’s Gasthaus, nor any other building where he might have been. During the Second World War, Germany again relocated Helgoland’s inhabitants and turned the island into a military base. After the war, the Allies piled up unexploded ordnance on the island and set it off, in what is said to be one of the biggest conventional explosions in history. The razed island was then given back to its inhabitants.

We will not be 300 Heisenbergs going for hikes. [Attendees] certainly won’t be trying to get away from each other.

Jack Harris, Yale University

Helgoland still has rocky outcroppings at its southern end, one of which may or may not be the site of Heisenberg’s early morning climb and vision. But despite the powerful mythology of his story, participants at Helgoland 2025 are not being asked to herald another dawn. “We will not,” says Harris, “be 300 Heisenbergs going for hikes. They certainly won’t be trying to get away from each other.”

The historian of science Mario Biagioli once wrote an article entitled “The scientific revolution is undead”, underlining how arbitrary it is to pin key developments in science – no matter how influential or long-lasting – to specific beginnings and endings, for each new generation of scientists finds ever more to mine in the radical discoveries of predecessors. With so many people working on so many foundational issues set to be at Helgoland 2025, new light is bound to emerge. A century on, the quantum revolution is alive and well.

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the next 12 months for more coverage of the IYQ.


Delayed Big Bang for dark matter could be detected in gravitational waves

30 November 2024 at 15:02

New constraints on a theory that says dark matter was created just after the Big Bang – rather than at the Big Bang – have been determined by Richard Casey and Cosmin Ilie at Colgate University in the US. The duo calculated the full range of parameters in which a “Dark Big Bang” could fit into the observed history of the universe. They say that evidence of this delayed creation could be found in gravitational waves.

Dark matter is a hypothetical substance that is believed to play an important role in the structure and dynamics of the universe. It appears to account for about 27% of the mass–energy in the cosmos and is part of the Standard Model of cosmology. However, dark matter particles have never been observed directly.

The Standard Model also says that the entire contents of the universe emerged nearly 14 billion years ago in the Big Bang. Yet in 2023, Katherine Freese and Martin Winkler at the University of Texas at Austin introduced a captivating new theory, which suggests that the universe’s dark matter may have been created after the Big Bang.

Evidence comes later on

Freese and Winkler pointed out that the presence of photons and normal matter (mostly protons and neutrons) can be inferred from almost immediately after the Big Bang. However, the earliest evidence for dark matter comes from later on, when it began to exert its gravitational influence on normal matter. As a result, the duo proposed that dark matter may have appeared in a second event called the Dark Big Bang.

“In Freese and Winkler’s model, dark matter particles can be produced as late as one month after the birth of our universe,” Ilie explains. “Moreover, dark matter particles produced via a Dark Big Bang do not interact with regular matter except via gravity. Thus, this model could explain why all attempts at detecting dark matter – either directly, indirectly, or via particle production – have failed.”

According to this theory, dark matter particles are generated by a certain type of scalar field. This is an energy field that has a single value at every point in space and time (a familiar example is the field describing gravitational potential energy). Initially, each point of this scalar field would have occupied a local minimum in its energy potential. However, these points could have then transitioned to lower-energy minima via quantum tunnelling. During this transition, the energy difference between the two minima would be released, producing particles of dark matter.
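As an illustration of this false-vacuum picture, a textbook toy potential with two nearly degenerate minima can be written as follows (a generic sketch for orientation only; it is not the potential used by Freese and Winkler or by Casey and Ilie):

```latex
\[
  V(\phi) \;=\; \lambda\bigl(\phi^{2} - v^{2}\bigr)^{2} + \epsilon\,\phi,
  \qquad 0 < \epsilon v \ll \lambda v^{4}.
\]
```

Here the field is initially trapped in the higher, “false” minimum near φ ≈ +v; tunnelling to the true minimum near φ ≈ −v releases an energy density of roughly 2εv, which in a Dark Big Bang scenario would be converted into dark matter particles.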

Consistent with observations

Building on this idea, Casey and Ilie looked at how predictions of the Dark Big Bang model could be consistent with astronomers’ observations of the early universe.

“By focusing on the tunnelling potentials that lead to the Dark Big Bang, we were able to exhaust the parameter space of possible cases while still allowing for many different types of dark matter candidates to be produced from this transition,” Casey explains. “Aside from some very generous mass limits, the only major constraint on dark matter in the Dark Big Bang model is that it interacts with everyday particles through gravity alone.” This is encouraging because this limited interaction is what physicists expect of dark matter.

For now, the duo’s results suggest that the Dark Big Bang is far less constrained by past observations than Freese and Winkler originally anticipated. As Ilie explains, their constraints could soon be put to the test.

“We examined two Dark Big Bang scenarios in this newly found parameter space that produce gravitational wave signals in the sensitivity ranges of existing and upcoming surveys,” he says. “In combination with those considered in Freese and Winkler’s paper, these cases could form a benchmark for gravitational wave researchers as they search for evidence of a Dark Big Bang in the early universe.”

Subtle imprint on space–time

If a Dark Big Bang happened, then the gravitational waves it produced would have left a subtle imprint on the fabric of space–time. With this clearer outline of the Dark Big Bang’s parameter space, several soon-to-be active observational programmes will be well equipped to search for these characteristic imprints.

“For certain benchmark scenarios, we show that those gravitational waves could be detected by ongoing or upcoming experiments such as the International Pulsar Timing Array (IPTA) or the Square Kilometre Array Observatory (SKAO). In fact, the evidence of background gravitational waves reported in 2023 by the NANOGrav experiment – part of the IPTA – could be attributed to a Dark Big Bang realization,” Casey says.

If these studies find conclusive evidence for Freese and Winkler’s original theory, Casey and Ilie’s analysis could ultimately bring us a step closer to a breakthrough in our understanding of the ever-elusive origins of dark matter.

The research is described in Physical Review D.


The mechanics of squirting cucumbers revealed

29 November 2024 at 17:00

The plant kingdom is full of intriguing ways to distribute seeds, from the dandelion pappus effortlessly drifting on air currents to the ballistic nature of fern sporangia.

Not to be outdone, the squirting cucumber (Ecballium elaterium), which is native to the Mediterranean and is often regarded as a weed, has its own unique way of ejecting seeds.

When ripe, the ovoid-shaped fruits detach from the stem and, as they do so, explosively eject their seeds in a high-pressure jet of mucilage.

The process, which lasts just 30 milliseconds, launches the seeds at more than 20 metres per second with some landing 10 metres away.

Researchers in the UK have, for the first time, revealed the mechanism behind the squirt by carrying out high-speed videography, computed tomography scans and mathematical modelling.

“The first time we inspected this plant in the Botanic Garden, the seed launch was so fast that we weren’t sure it had happened,” recalls Oxford University mathematical biologist Derek Moulton. “It was very exciting to dig in and uncover the mechanism of this unique plant.”

The researchers found that in the weeks leading up to the ejection, fluid builds up inside the fruits so they become pressurised. Then just before seed dispersal, some of this fluid moves from the fruit to the stem, making it longer and stiffer.

This process crucially causes the fruit to rotate from being vertical to an angle of about 45 degrees, improving the launch angle for the seeds.

During the first milliseconds of ejection, the tip of the stem holding the fruit then recoils away causing the fruit to counter-rotate and detach. As it does so, the pressure inside the fruit causes the seeds to eject at high speed.

Changing certain parameters in the model, such as the stiffness of the stem, reveals that the mechanism has been fine-tuned to ensure optimal seed dispersal. For example, a thicker or stiffer stem would result in the seeds being launched horizontally and distributed over a narrower area.
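A drag-free ballistic estimate shows why that roughly 45-degree tilt matters. The sketch below (plain Python, using the 20 m/s launch speed quoted above and assuming launch and landing at the same height, with no air resistance) is an idealization; drag on the light, mucilage-coated seeds is one reason real seeds land nearer 10 m than the ideal 40 m.

```python
import math

def ballistic_range(speed, angle_deg, g=9.81):
    """Drag-free range of a projectile launched and landing at the same height."""
    theta = math.radians(angle_deg)
    return speed**2 * math.sin(2 * theta) / g

v0 = 20.0  # launch speed reported for the squirting cucumber, in m/s
for angle in (15, 30, 45, 60, 75):
    print(f"launch at {angle:2d} deg -> ideal range {ballistic_range(v0, angle):5.1f} m")

# 45 deg maximizes the drag-free range (about 41 m here); a shallower launch,
# as the model predicts for a thicker or stiffer stem, covers less ground and
# spreads the seeds over a narrower area.
```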

According to Manchester University physicist Finn Box, the findings could be used for more effective drug delivery systems “where directional release is crucial”.


From the blackboard to the boardroom: why university is a great place to become an entrepreneur

29 November 2024 at 12:00

What does an idea need to change the world? Physics drives scientific advancements in healthcare, green energy, sustainable materials and many other applications. However, to bridge the gap between research and real-world applications, physicists need to be equipped with entrepreneurship skills.

Many students dream of using their knowledge and passion for physics to change the world, but when it comes to developing your own product, it can be hard to know where to start. That’s where my job comes in – I have been teaching scientists and engineers entrepreneurship for more than 20 years.

Several of the world’s most successful companies, including Sony, Texas Instruments, Intel and Tesla Motors, were founded by physicists, and there are many contemporary examples too. For example, Unitary, an AI company that identifies misinformation and deepfakes, was founded by Sasha Haco, who has a PhD in theoretical physics. In materials science, Aruna Zhuma is the co-founder of Global Graphene Group, which manufactures single layers of graphene oxide for use in electronics. Zhuma has nearly 500 patents, the second largest number of any inventor in the field.

In the last decade quantum technology, which encompasses computing, sensing and communications, has spawned hundreds of start-ups, often spun out from university research. This includes cybersecurity firm ID Quantique, super sensitive detectors from Single Quantum, and quantum computing from D-Wave. Overall, about 8–9% of students in the UK start businesses straight after they graduate, with just over half (58%) of these graduate entrepreneurs founding firms in their subject area.

However, even if you aren’t planning to set up your own business, entrepreneurship skills will be important no matter what you do with your degree. If you work in industry you will need to spot trends, understand customers’ needs and contribute to products and services. In universities, promotion often requires candidates to demonstrate “knowledge transfer”, which means working with partners outside academia.

Taking your ideas to the next level

The first step in kick-starting your entrepreneurship journey is to evaluate your existing experience and goals. Do you already have an idea that you want to take forward, or do you just want to develop skills that will broaden your career options?

If you’re exploring the possibilities of entrepreneurship you should look for curricular modules at your university. These are normally tailored to those with no previous experience and cover topics such as opportunity spotting, market research, basic finance, team building and intellectual property. In addition, in the UK at least, many postgraduate centres for doctoral training (CDTs) now offer modules in business and entrepreneurship as part of their training programmes. These courses sometimes give students the opportunity to take part in live company projects, which are a great way to gain skills.

You should also look out for extracurricular opportunities, from speaker events and workshops to more intensive bootcamps, competitions and start-up weekends. There is no mark or grade for these events, so they allow students to take risks and experiment.

Like any kind of research, commercializing physics requires resources such as equipment and laboratory space. For early-stage founders, access to business incubators – organizations that provide shared facilities – is invaluable. You would use an incubator at a relatively early stage to finalize your product, and they can be found in many universities.

Accelerator programmes, which aim to fast-track your idea once you have a product ready and usually run for a defined length of time, can also be beneficial. For example, the University of Southampton has the Future Worlds Programme based in the physical sciences faculty. Outside academia, the European Space Agency has incubators for space technology ideas at locations throughout Europe, and the Institute of Physics also has workspace and an accelerator programme for recently graduated physicists and especially welcomes quantum technology businesses. The Science and Technology Facilities Council (STFC) CERN Business Incubation Centre focuses on high-energy physics ideas and grants access to equipment that would be otherwise unaffordable for a new start-up.

More accelerator programmes supporting physics ideas include Duality, which is a Chicago-based 12-month accelerator programme for quantum ideas; Quantum Delta NL, based in the Netherlands, which provides programmes and shared facilities for quantum research; and Techstars Industries of the Future, which has locations worldwide.

Securing your future

It’s the multimillion-pound deals that make headlines but to get to that stage you will need to gain investors’ confidence, securing smaller funds to take your idea forward step-by-step. This could be used to protect your intellectual property with a patent, make a prototype or road test your technology.

Since early-stage businesses are high risk, this money is likely to come from grants and awards, with commercial investors such as venture capital firms or banks holding back until they see the idea can succeed. Funding can come from government agencies like the STFC in the UK or the US government scheme America’s Seed Fund. These grants are meant to encourage innovation and applied research and to find disruptive new technology, and no return is expected. Early-stage commercial funding might come from organizations such as Seedcamp, and some accelerator programmes offer funding, or at least organize a “demo day” on completion where you can showcase your venture to potential investors.

Group of students sat at a round table with large sheets of paper and Post-it notes
Science meets business Researchers at the University of Manchester participating in an entrepreneurship training event. (Courtesy: Robert Phillips)

While you’re a student, you can take advantage of the venture competitions that run at many universities, where students pitch an idea to a panel of judges. The prizes can be significant, ranging from £10k to £100k, and often come with extra support such as lab space, mentoring and help filing patents. Some of these programmes are physics-specific – for example, the Eli and Britt Harari Enterprise Award at the University of Manchester, sponsored by physics graduate Eli Harari (founder of SanDisk), awards funding for graphene-related ideas.

Finally, remember that physics innovations don’t always happen in the lab. Theoretical physicist Stephen Wolfram founded Wolfram Research in 1988, which makes computational technology including the answer engine Wolfram Alpha.

Making the grade

There are many examples of students and recent graduates making a success of entrepreneurship. Wai Lau is a Manchester physics graduate who also has a Master of Enterprise degree. He started a business focused on digital energy management, identifying energy waste, while learning about entrepreneurship. His business Cloud Enterprise has now branched out to a wider range of digital products and services.

Computational physics graduate Gregory Mead at Imperial College London started Musicmetric, which uses complex data analytics to keep track of and rank music artists and is used by music labels and artists. He was able to get funding from Imperial Innovations after making a prototype and Musicmetric was eventually bought by Apple.

AssetCool’s thermal metaphotonics technology cools overhead power lines using novel coatings, reducing power losses. It entered the Venture Further competition at the University of Manchester and has since received a £2.25m investment from Gritstone Capital.

Entrepreneurship skills are being increasingly recognized as necessary for physics graduates. In the UK, the IOP Degree Accreditation Framework, the standard for physics degrees, expects students to have “business awareness, intellectual property, digital media and entrepreneurship skills”.

Thinking about taking the leap into business can be daunting, but university is the ideal time to think about entrepreneurship. You have nothing to lose and plenty of support available.


Astronomers can play an important role in explaining the causes and consequences of climate change, says astrophysicist

28 November 2024 at 16:09

Climate science and astronomy have much in common, and this has inspired the astrophysicist Travis Rector to call on astronomers to educate themselves, their students and the wider public about climate change. In this episode of the Physics World Weekly podcast, Rector explains why astronomers should listen to the concerns of the public when engaging about the science of global warming. And, he says the positive outlook of some of his students at the University of Alaska Anchorage makes him believe that a climate solution is possible.

Rector says that some astronomers are reluctant to talk to the public about climate change because they have not mastered the intricacies of the science. Indeed, one aspect of atmospheric physics that has challenged scientists is the role that clouds play in global warming. My second guest this week is the science journalist Michael Allen, who has written a feature article for Physics World called “Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change”. He talks about climate feedback mechanisms that involve clouds and how aerosols affect clouds and the climate.


Optimization algorithm gives laser fusion a boost

28 November 2024 at 12:26

A new algorithmic technique could enhance the output of fusion reactors by smoothing out the laser pulses used to compress hydrogen to fusion densities. Developed by physicists at the University of Bordeaux, France, a simulated version of the new technique has already been applied to conditions at the US National Ignition Facility (NIF) and could also prove useful at other laser fusion experiments.

A major challenge in fusion energy is keeping the fuel – a mixture of the hydrogen isotopes deuterium and tritium – hot and dense enough for fusion reactions to occur. The two main approaches to doing this confine the fuel with strong magnetic fields or intense laser light and are known respectively as magnetic confinement fusion and inertial confinement fusion (ICF). In either case, when the pressure and temperature become high enough, the hydrogen nuclei fuse into helium. Since the energy released in this fusion reaction is, in principle, greater than the energy needed to get it going, fusion has long been viewed as a promising future energy source.

In 2022, scientists at NIF became the first to demonstrate “energy gain” from fusion, meaning that the fusion reactions produced more energy than was delivered to the fuel target via the facility’s system of super-intense lasers. The method they used was somewhat indirect. Instead of compressing the fuel itself, NIF’s lasers heated a gold container known as a hohlraum with the fuel capsule inside. The appeal of this so-called indirect-drive ICF is that it is less sensitive to inhomogeneities in the laser’s illumination. These inhomogeneities arise from interactions between the laser beams and the highly compressed plasma produced during fusion, and they are hard to get rid of.

In principle, though, direct-drive ICF is a stronger candidate for a fusion reactor, explains Duncan Barlow, a postdoctoral researcher at Bordeaux who led the latest research effort. This is because it couples more energy into the target, meaning it can deliver more fusion energy per unit of laser energy.

Reducing computing cost and saving time

To work out which laser configurations are the most homogeneous, researchers typically use iterative radiation-hydrodynamic simulations. These are time-consuming and computationally expensive (requiring around 1 million CPU hours per evaluation). “This expense means that only a few evaluations were run, and each step was best performed by an expert who could use her or his experience and the data obtained to pick the next configurations of beams to test the illumination uniformity,” Barlow says.

The new approach, he explains, relies on approximating some of the laser beam–plasma interactions by considering isotropic plasma profiles. This means that each iteration uses less than 1000 CPU hours, so thousands of configurations can be evaluated for the cost of a single simulation using the old method. Barlow and his colleagues also created an automated method to quantify improvements and select the most promising step forward for the process.
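Schematically, the kind of loop Barlow describes might look like the sketch below. Everything here is hypothetical – the function names, the candidate-generation step and the stand-in “non-uniformity” metric – and merely illustrates the pattern of many cheap surrogate evaluations followed by an automated selection step; it is not the Bordeaux group’s code or their reduced physics model.

```python
import random

def propose_candidates(config, n=50, jitter=0.01):
    """Hypothetical step: perturb beam powers/pointings around the current best."""
    return [[b + random.gauss(0, jitter) for b in config] for _ in range(n)]

def illumination_nonuniformity(config):
    """Hypothetical cheap surrogate: rms spread of beam values, standing in for
    the reduced (isotropic-profile) estimate of illumination non-uniformity."""
    mean = sum(config) / len(config)
    return (sum((b - mean) ** 2 for b in config) / len(config)) ** 0.5

def optimize(config, steps=20):
    """Greedy loop: evaluate many cheap candidates per step and keep the best."""
    best, best_score = config, illumination_nonuniformity(config)
    for _ in range(steps):
        for cand in propose_candidates(best):
            score = illumination_nonuniformity(cand)
            if score < best_score:
                best, best_score = cand, score
    return best, best_score

# Toy starting point: 192 beams (as at NIF) with slightly unequal powers
beams = [1.0 + random.gauss(0, 0.05) for _ in range(192)]
print("final non-uniformity:", optimize(beams)[1])
```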

The researchers demonstrated their technique using simulations of a spherical target at NIF. These simulations showed that the optimized configuration should produce convergent shocks in the fuel target, resulting in pressures three times higher (and densities almost two times higher) than in the original experiment. Although their simulations focused on NIF, they say it could also apply to other pellet geometries and other facilities.

Developing tools

The study builds on work by Barlow’s supervisor, Arnaud Colaïtis, who developed a tool for simulating laser–plasma interactions that incorporates a phenomenon known as cross-beam energy transfer (CBET), which contributes to inhomogeneities. Even with this and other such tools, however, Barlow explains that fusion scientists have long struggled to define optimal illumination configurations when the system deviates from a simple mathematical description. “My supervisor recognized the need for a new solution, but it took us a year of further development to identify such a methodology,” he says. “Initially, we were hoping to apply neural networks – similar to image recognition – to speed up the technique, but we realized that this required prohibitively large training data.”

As well as working on this project, Barlow is also involved in a French project called Taranis that aims to use ICF to produce energy – an approach known as inertial fusion energy (IFE). “I am applying the methodology from my ICF work in a new way to ensure the robust, uniform drive of targets with the aim of creating a new IFE facility and eventually a power plant,” he tells Physics World.

A broader physics application, he adds, would be to incorporate more laser–plasma instabilities beyond CBET that are non-linear and normally too expensive to model accurately with radiation-hydrodynamic simulations. Some examples include stimulated Brillouin scattering, stimulated Raman scattering and two-plasmon decay. “The method presented in our work, which is detailed in Physical Review Letters, is a great accelerated scheme for better evaluating these laser-plasma instabilities, their impact for illumination configurations and post-shot analysis,” he says.


Mark Thomson and Jun Cao: a changing of the guard in particle physics

27 November 2024 at 17:02

All eyes were on the election of Donald Trump as US president earlier this month, and his win overshadowed two big appointments in physics. First, the particle physicist Jun Cao took over as director of China’s Institute of High Energy Physics (IHEP) in October, succeeding Yifang Wang, who had held the job since 2011.

Over the last decade, IHEP has emerged as an important force in particle physics, with plans to build a huge 100 km-circumference machine called the Circular Electron Positron Collider (CEPC). Acting as a “Higgs factory”, such a machine would be hundreds of times bigger and pricier than any project IHEP has ever attempted.

But China is serious about its intentions, aiming to present a full CEPC proposal to the Chinese government next year, with construction starting two years later and the facility opening in 2035. If the CEPC opens as planned, China could leapfrog the rest of the particle-physics community.

China’s intentions will be one pressing issue facing the British particle physicist Mark Thomson, 58, who was named as the 17th director-general at CERN earlier this month. He will take over in January 2026 from current CERN boss Fabiola Gianotti, who will finish her second term next year. Thomson will have a decisive hand in the question of what – and where – the next particle-physics facility should be.

CERN is currently backing the 91 km-circumference Future Circular Collider (FCC), several times bigger than the Large Hadron Collider (LHC). An electron–positron collider designed to study the Higgs boson in unprecedented detail, it could later be upgraded to a hadron collider, dubbed FCC-hh. But with Germany already objecting to the FCC’s steep £12bn price tag, Thomson will have a tough job eking out extra cash for it from CERN member states. He’ll also be busy ensuring the upgraded LHC, known as the High-Luminosity LHC, is ready as planned by 2030.

I wouldn’t dare tell Thomson how to do his job, but Physics World did once ask previous CERN directors-general what skills are needed as lab boss. Crucial, they said, were people management, delegation, communication and the ability to speak multiple languages. Physical stamina was deemed a vital attribute too, with extensive international travel and late-night working required.

One former CERN director-general even cited the need to “eat two lunches the same day to satisfy important visitors”. Squeezing double dinners in will probably be the least of Thomson’s worries.

Fortunately, I bumped into Thomson at an Institute of Physics meeting in London earlier this week, where he agreed to do an interview with Physics World. So you can be sure we’ll get Thomson to put his aims and priorities as the next CERN boss on record. Stay tuned…


New imaging technique could change how we look at certain objects in space

27 November 2024 at 13:00

A new imaging technique that takes standard two-dimensional (2D) radio images and reconstructs them as three-dimensional (3D) ones could tell us more about structures such as the jet-like features streaming out of galactic black holes. According to the technique’s developers, it could even call into question physical models of how radio galaxies formed in the first place.

“We will now be able to obtain information about the 3D structures in polarized radio sources whereas currently we only see their 2D structures as they appear in the plane of the sky,” explains Lawrence Rudnick, an observational astrophysicist at the University of Minnesota, US, who led the study. “The analysis technique we have developed can be performed not only on the many new maps to be made with powerful telescopes such as the Square Kilometre Array and its precursors, but also from decades of polarized maps in the literature.”

Analysis of data from the MeerKAT radio telescope array

In their new work, Rudnick and colleagues in Australia, Mexico, the UK and the US studied polarized light data from the MeerKAT radio telescope array at the South African Radio Astronomy Observatory. They exploited an effect called Faraday rotation, which rotates the polarization angle of radiation as it travels through a magnetized, ionized region. By measuring the amount of rotation for each pixel in an image, they can determine how much magnetized material the radiation passed through.

In the simplest case of a uniform medium, says Rudnick, this information tells us the relative distance between us and the emitting region for that pixel. “This allows us to reconstruct the 3D structure of the radiating plasma,” he explains.
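The relation underpinning this is the standard Faraday rotation law – textbook radio astrophysics rather than anything specific to this paper’s pipeline: the observed polarization angle varies linearly with the square of the wavelength, with a slope (the rotation measure) set by the electron density and the line-of-sight magnetic field along the path.

```latex
\[
  \chi_{\mathrm{obs}}(\lambda) \;=\; \chi_{0} + \mathrm{RM}\,\lambda^{2},
  \qquad
  \mathrm{RM} \;\approx\; 0.81 \int n_{e}\, B_{\parallel}\, \mathrm{d}l \;\;\mathrm{rad\,m^{-2}},
\]
```

with the electron density n_e in cm⁻³, the line-of-sight field B_∥ in microgauss and the path length dl in parsecs.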

An indication of the position of the emitting region

The new study builds on a previous effort that focused on a specific cluster of galaxies for which the researchers already had cubes of data representing its 2D appearance in the sky, plus a third axis given by the amount of Faraday rotation. In the latest work, they decided to look at this data in a new way, viewing the cubes from different angles.

“We realized that the third axis was actually giving us an indication of the position of the emitting region,” Rudnick says. “We therefore extended the technique to situations where we didn’t have cubes to start with, but could re-create them from a pair of 2D images.”

There is a problem, however, in that polarization angle can also rotate as the radiation travels through regions of space that are anything but uniform, including our own Milky Way galaxy and other intervening media. “In that case, the amount of radiation doesn’t tell us anything about the actual 3D structure of the emitting source,” Rudnick adds. “Separating out this information from the rest of the data is perhaps the most difficult aspect of our work.”

Shapes of structures are very different in 3D

Using this technique, Rudnick and colleagues were able to determine the line-of-sight orientation of active galactic nuclei (AGN) jets as they are expelled from a massive black hole at the centre of the Fornax A galaxy. They were also able to observe how the materials in these jets interact with “cosmic winds” (essentially larger-scale versions of the magnetized solar wind streaming from our own Sun) and other space weather, and to analyse the structures of magnetic fields inside the jets from the M87 galaxy’s black hole.

The team found that the shapes of structures as inferred from 2D radio images were sometimes very different from those that appear in the 3D reconstructions. Rudnick notes that some of the mental “pictures” we have in our heads of the 3D structure of radio sources will likely turn out to be wrong after they are re-analysed using the new method. One good example in this study was a radio source that, in 2D, looks like a tangled string of filaments filling a large volume. When viewed in 3D, it turns out that these filamentary structures are in fact confined to a band on the surface of the source. “This could change the physical models of how radio galaxies are formed, basically how the jets from the black holes in their centres interact with the surrounding medium,” Rudnick tells Physics World.

The work is detailed in the Monthly Notices of the Royal Astronomical Society.

The post New imaging technique could change how we look at certain objects in space appeared first on Physics World.

  •  

Millions of smartphones monitor Earth’s ever-changing ionosphere

By: No Author
27 November 2024 at 09:35

A plan to use millions of smartphones to map out real-time variations in Earth’s ionosphere has been tested by researchers in the US. Developed by Brian Williams and colleagues at Google Research in California, the system could improve the accuracy of global navigation satellite systems (GNSSs) such as GPS and provide new insights into the ionosphere.

A GNSS uses a network of satellites to broadcast radio signals to ground-based receivers. Each receiver calculates its position based on the arrival times of signals from several satellites. These signals first pass through Earth’s ionosphere, which is a layer of weakly-ionized plasma about 50–1500 km above Earth’s surface. As a GNSS signal travels through the ionosphere, it interacts with free electrons and this slows down the signals slightly – an effect that depends on the frequency of the signal.

The problem is that the free electron density is not constant in either time or space. It can spike dramatically during solar storms and it can also be affected by geographical factors such as distance from the equator. The upshot is that variations in free electron density can lead to significant location errors if not accounted for properly.

To deal with this problem, navigation satellites send out two separate signals at different frequencies. These are received by dedicated monitoring stations on Earth’s surface, and the difference between the arrival times of the two frequencies is used to create real-time maps of the free electron density of the ionosphere. Such maps can then be used to correct location errors. However, these monitoring stations are expensive to install and tend to be concentrated in wealthier regions of the world. This results in large gaps in ionosphere maps.
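
To make the correction concrete, here is a minimal sketch of the standard dual-frequency combination (assumed numbers, not Google’s pipeline): the first-order ionospheric group delay is roughly I = 40.3·TEC/f² metres, so differencing the ranges measured at two frequencies cancels the geometry and clock terms and leaves the total electron content (TEC) along the signal path.

```python
# Minimal sketch (not Google's pipeline): estimating total electron content (TEC)
# from dual-frequency pseudoranges, using the standard first-order ionospheric
# delay model I = 40.3 * TEC / f^2 (delay in metres, TEC in electrons/m^2).

F1 = 1575.42e6   # GPS L1 frequency, Hz
F2 = 1227.60e6   # GPS L2 frequency, Hz

def tec_from_pseudoranges(p1_m: float, p2_m: float) -> float:
    """Return slant TEC (electrons/m^2) from pseudoranges on two frequencies.

    The geometric range and clock terms cancel in the difference, leaving only
    the frequency-dependent ionospheric delay.
    """
    return (p2_m - p1_m) * (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))

# Example with assumed numbers: a 5.0 m extra delay on L2 relative to L1
tec = tec_from_pseudoranges(20_000_000.0, 20_000_005.0)
print(f"slant TEC ~ {tec / 1e16:.1f} TECU")   # 1 TECU = 1e16 electrons/m^2
```

Real pipelines must also calibrate inter-frequency hardware biases in each receiver, which is part of what the phone-network approach described below estimates jointly across many devices.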

Dual-frequency sensors

In their study, Williams’ team took advantage of the fact that many modern mobile phones have sensors that detect GNSS signals at two different frequencies. “Instead of thinking of the ionosphere as interfering with GPS positioning, we can flip this on its head and think of the GPS receiver as an instrument to measure the ionosphere,” Williams explains. “By combining the sensor measurements from millions of phones, we create a detailed view of the ionosphere that wouldn’t otherwise be possible.”

This is not a simple task, however, because individual smartphones are not designed for mapping the ionosphere. Their antennas are much less efficient than those of dedicated monitoring stations and the signals that smartphones receive are often distorted by surrounding buildings – and even users’ bodies. Also, these measurements are affected by the design of the phone and its GNSS hardware.

The big benefit of using smartphones is that their ownership is ubiquitous across the globe – including in developing regions such as India, Africa, and Southeast Asia. “In these parts of the world, there are still very few dedicated scientific monitoring stations that are being used by scientists to generate ionosphere maps,” says Williams. “Phone measurements provide a view of parts of the ionosphere that isn’t otherwise possible.”

The team’s proposal involves creating a worldwide network comprising millions of smartphones that will each carry out error correction measurements using the dual-frequency signals from GNSS satellites. Although each individual measurement will be relatively poor, the large number of measurements can be used to improve the overall accuracy of the map.

Simultaneous calibration

“By combining measurements from many phones, we can simultaneously calibrate the individual sensors and produce a map of ionosphere conditions, leading to improved location accuracy, and a better understanding of this important part of the Earth’s atmosphere,” Williams explains.

In their initial tests of the system, the researchers aggregated ionosphere measurements from millions of Android devices around the world. Crucially, there was no need to identify individual devices contributing to the study – ensuring the privacy and security of users.

Williams’ team was able to map a diverse array of variations in Earth’s ionosphere. These included plasma bubbles over India and South America; the effects of a small solar storm over North America; and a depletion in free electron density over Europe. These observations doubled the coverage area of existing maps and boosted resolution compared with maps made using data from monitoring stations.

If such a smartphone-based network is rolled out, ionosphere-related location errors could be reduced by several metres – which would be a significant advantage to smartphone users.

“For example, devices could differentiate between a highway and a parallel rugged frontage road,” Williams predicts. “This could ensure that dispatchers send the appropriate first responders to the correct place and provide help more quickly.”

The research is described in Nature.

The post Millions of smartphones monitor Earth’s ever-changing ionosphere appeared first on Physics World.

  •  

Electromagnetic waves solve partial differential equations

26 November 2024 at 17:00

Waveguide-based structures can solve partial differential equations by mimicking elements in standard electronic circuits. This novel approach, developed by researchers at Newcastle University in the UK, could boost efforts to use analogue computers to investigate complex mathematical problems.

Many physical phenomena – including heat transfer, fluid flow and electromagnetic wave propagation, to name just three – can be described using partial differential equations (PDEs). Apart from a few simple cases, these equations are hard to solve analytically, and sometimes even impossible. Mathematicians have developed numerical techniques such as finite difference or finite-element methods to solve more complex PDEs. However, these numerical techniques require a lot of conventional computing power, even after using methods such as mesh refinement and parallelization to reduce calculation time.

Alternatives to numerical computing

To address this, researchers have been investigating alternatives to numerical computing. One possibility is electromagnetic (EM)-based analogue computing, where calculations are performed by controlling the propagation of EM signals through a materials-based processor. These processors are typically made up of optical elements such as Bragg gratings, diffractive networks and interferometers as well as optical metamaterials, and the systems that use them are termed “metatronic” by analogy with more familiar electronic circuit elements.

The advantage of such systems is that because they use EM waves, computing can take place literally at light speeds within the processors. Systems of this type have previously been used to solve ordinary differential equations, and to perform operations such as integration, differentiation and matrix multiplication.

Some mathematical operations can also be computed with electronic systems – for example, with grid-like arrays of “lumped” circuit elements (that is, components such as resistors, inductors and capacitors that produce a predictable output from a given input). Importantly, these grids can emulate the mesh elements that feature in the finite-element method of solving various types of PDEs numerically.

Recently, researchers demonstrated that this emulation principle also applies to photonic computing systems. They did this using the splitting and superposition of EM signals within an engineered network of dielectric waveguide junctions known as photonic Kirchhoff nodes. At these nodes, a combination of photonics structures, such as ring resonators and X-junctions, can similarly imitate lumped circuit elements.

Interconnected metatronic elements

In the latest work, Victor Pacheco-Peña of Newcastle’s School of Mathematics, Statistics and Physics and colleagues showed that such waveguide-based structures can be used to calculate solutions to PDEs that take the form of the Helmholtz equation ∇²f(x,y) + k²f(x,y) = 0. This equation is used to model many physical processes, including the propagation, scattering and diffraction of light and sound as well as the interactions of light and sound with resonators.

Unlike in previous setups, however, Pacheco-Peña’s team exploited a grid-like network of parallel plate waveguides filled with dielectric materials. This structure behaves like a network of interconnected T-circuits, or metatronic elements, with the waveguide junctions acting as sampling points for the PDE solution, Pacheco-Peña explains. “By carefully manipulating the impedances of the metatronic circuits connecting these points, we can fully control the parameters of the PDE to be solved,” he says.

The researchers used this structure to solve various boundary value problems by inputting signals to the network edges. Such problems frequently crop up in situations where information from the edges of a structure is used to infer details of physical processes in its interior. For example, by measuring the electric potential at the edge of a semiconductor, one can calculate the distribution of electric potential near its centre.
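
As a point of comparison for what the analogue network computes, the sketch below solves a closed Helmholtz boundary-value problem numerically with a five-point finite-difference stencil (a minimal illustration with assumed boundary values, not the team’s method); the interior grid points stand in for the waveguide junctions that sample the solution.

```python
import numpy as np

# Minimal sketch: solve the Helmholtz equation  ∇²f + k²f = 0  on a unit square
# with prescribed (Dirichlet) boundary values, using a five-point finite-difference
# stencil. The grid points play the role of the sampling points that the waveguide
# junctions provide in the metatronic network; here we simply invert the linear system.

n = 41                      # grid points per side
h = 1.0 / (n - 1)
k = 3.0                     # wavenumber (chosen away from a cavity resonance)

idx = lambda i, j: i * n + j
A = np.zeros((n * n, n * n))
b = np.zeros(n * n)

for i in range(n):
    for j in range(n):
        row = idx(i, j)
        if i in (0, n - 1) or j in (0, n - 1):
            # Dirichlet boundary: f = sin(pi*x) on one edge, 0 elsewhere (assumed values)
            A[row, row] = 1.0
            b[row] = np.sin(np.pi * j * h) if i == 0 else 0.0
        else:
            A[row, row] = -4.0 / h**2 + k**2
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                A[row, idx(i + di, j + dj)] = 1.0 / h**2

f = np.linalg.solve(A, b).reshape(n, n)
print("field at the centre of the domain:", f[n // 2, n // 2])
```

In the analogue approach, the equivalent of this matrix inversion is performed by the physics of the waveguide network itself, with the answer appearing as field values at the junctions.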

Pacheco-Peña says the new technique can be applied to “open” boundary problems, such as calculating how light focuses and scatters, as well as “closed” ones, like sound waves reflecting within a room. However, he acknowledges that the method is not yet perfect because some undesired reflections at the boundary of the waveguide network distort the calculated PDE solution. “We have identified the origin of these reflections and proposed a method to reduce them,” he says.

In this work, which is detailed in Advanced Photonics Nexus, the researchers numerically simulated the PDE solving scheme at microwave frequencies. In the next stages of their work, they aim to extend their technique to higher frequency ranges. “Previous works have demonstrated metatronic elements working in these frequency ranges, so we believe this should be possible,” Pacheco-Peña tells Physics World. “This might also allow the waveguide-based structure to be integrated with silicon photonics or plasmonic devices.”

The post Electromagnetic waves solve partial differential equations appeared first on Physics World.

  •  

Institute of Physics says physics ‘deep tech’ missing out on £4.5bn of extra investment

26 November 2024 at 14:40

UK physics “deep tech” could be missing out on almost £1bn of investment each year. That is according to a new report by the Institute of Physics (IOP), which publishes Physics World. It finds that venture capital investors often struggle to invest in high-innovation physics industries given the lack of the “one-size-fits-all” commercialisation pathway seen in other areas such as biotech.

According to the report, physics-based businesses add about £230bn to the UK economy each year and employ more than 2.7 million full-time employees. The UK also has one of the largest venture-capital markets in Europe and the highest rates of spin-out activity, especially in biotech.

Despite this, however, venture capital investment in “deep tech” physics – start-ups whose business model is based on high-tech innovation or significant scientific advances – remains low, attracting £7.4bn or 30% of UK science venture-capital investment.

To find out the reasons for this discrepancy, the IOP interviewed science-led businesses as well as 32 leading venture capital investors. These discussions revealed that many investors are unsure about certain aspects of physics-based start-ups, which often do not follow the familiar development lifecycle seen in other areas such as biotech.

Physics businesses are not, for example, always able to transition from being tech focussed to being product-led in the early stages of development, which prevents venture capitalists from committing large amounts of money. Another issue is that venture capitalists are less familiar with the technologies, timescales and “returns profile” of physics deep tech.

The IOP report estimates that if the full investment potential of physics deep tech is unlocked then it could result in an extra £4.5bn of additional funding over the next five years. In a foreword to the report, Hermann Hauser, the tech entrepreneur and founder of Acorn Computers, highlights “uncovered issues within the system that are holding back UK venture capital investment” into physics-based tech. “Physics deep-tech businesses generate huge value and have unique characteristics – so our national approach to finance for these businesses must be articulated in ways that recognise their needs,” writes Hauser.

Physics deep tech is central to the UK’s future prosperity

Tom Grinyer

At the same time, investors see a lot of opportunity in subjects such as quantum and semiconductor physics, as well as in artificial intelligence and nuclear fusion. Jo Slota-Newson, a managing partner at Almanac Ventures who co-wrote the report, says there is “huge potential” for physics deep-tech businesses but “venture capital funds are being held back from raising and deploying capital to support this crucial sector”.

The IOP is now calling for a coordinated effort from government, investors, and the business and science communities to develop “investment pathways” that address the issues raised in the report. For example, the UK government should ensure grant and debt-financing options are available to support physics tech at “all stages of development”.

Slota-Newson, who has a background in science including a PhD in chemistry from the University of Cambridge, says that such moves should be “at the heart” of the UK’s government’s plans for growth. “Investors, innovators and government need to work together to deliver an environment where at every stage in their development there are opportunities for our deep tech entrepreneurs to access funding and support,” adds Slota-Newson. “If we achieve that we can build the science-driven, innovative economy, which will provide a sustainable future of growth, security and prosperity.”

The report also says that the IOP should play a role by continuing to highlight successful physics deep-tech businesses and to help them attract investment from both the UK and international venture-capital firms. Indeed, Tom Grinyer, group chief executive officer of the IOP, says that getting the model right could “supercharge the UK economy as a global leader in the technologies that will define the next industrial revolution”.

“Physics deep tech is central to the UK’s future prosperity — the growth industries of the future lean very heavily on physics and will help both generate economic growth and help move us to a lower carbon, more sustainable economy,” says Grinyer. “By leveraging government support, sharing information better and designing our financial support of this key sector in a more intelligent way we can unlock billions in extra investment.”

That view is backed by Hauser. “Increased investment, economic growth, and solutions to some of our biggest societal challenges [will move] us towards a better world for future generations,” he writes. “The prize is too big to miss”.

The post Institute of Physics says physics ‘deep tech’ missing out on £4.5bn of extra investment appeared first on Physics World.

  •  

Triboelectric device reduces noise pollution

By: No Author
26 November 2024 at 13:00
Sound-absorbing mechanism of triboelectric fibrous composite foam
Sound absorption Incoming acoustic waves induce relative movements among the fibres, initiating the triboelectric effect within overlapping regions. The generated charges are dissipated through conductive elements and eventually transformed into heat. (Courtesy: Nat. Commun. 10.1038/s41467-024-53847-5)

Noise pollution is becoming increasingly common in society today, impacting both humans and wildlife. Loud noise can be an inconvenience, but when exposure is regular it can have adverse effects on human health that go well beyond mild irritation.

As noise pollution gets worse, researchers are working to mitigate its impact through new sound absorption materials. A team headed up by the Agency for Science, Technology and Research (A*STAR) in Singapore has now developed a new approach to tackling the problem by absorbing sound waves using the triboelectric effect.

The World Health Organization defines noise pollution as noise levels above 65 dB, with one in five Europeans being regularly exposed to levels considered harmful to their health. “The adverse impacts of airborne noise on human health are a growing concern, including disturbing sleep, elevating stress hormone levels, inciting inflammation and even increasing the risk of cardiovascular diseases,” says Kui Yao, senior author on the study.

Passive provides the best route

Mitigating noise requires conversion of the mechanical energy in acoustic waves into another form. For this, passive sound absorbers are a better option than active versions because they require less maintenance and consume no power (so don’t require a lot of extra components to work).

Previous efforts from Yao’s research group have shown that the piezoelectric effect – the process of creating a current when a material undergoes mechanical stress – can convert mechanical energy into electricity and could be used for passive sound absorption. However, the researchers postulated that the triboelectric effect – the process of electrical charge transfer when two surfaces contact each other – could be more effective for absorbing low-frequency noise.

The triboelectric effect is more commonly applied for harvesting mechanical energy, including acoustic energy. But unlike when used for energy harvesting, the use of the triboelectric effect in noise mitigation applications is not limited by the electronics around the material, which can cause impedance mismatching and electrical leakage. For sound absorbers, therefore, there’s potential to create a device with close to 100% efficient triboelectric conversion of energy.

Exploiting the triboelectric effect

Yao and colleagues developed a fibrous polypropylene/polyethylene terephthalate (PP/PET) composite foam that uses the triboelectric effect and in situ electrical energy dissipation to absorb low-frequency sound waves. In this foam, sound is converted into electricity through embedded electrically conductive elements, and this electricity is then dissipated into heat and removed from the material.

The energy dissipation mechanism requires triboelectric pairing materials with a large difference in charge affinity (the tendency to gain or lose charge from/to the other material). The larger the difference between the two fibre materials in the foam, the better the acoustic absorption performance due to the larger triboelectric effect.

To understand the effectiveness of different foam compositions for absorbing and converting sound waves, the researchers designed an acoustic impedance model to analyse the underlying sound absorption mechanisms. “Our theoretical analysis and experimental results show superior sound absorption performance of triboelectric energy dissipator-enabled composite foams over common acoustic absorbing products,” explains Yao.

The researchers first tested the fibrous PP/PET composite foam theoretically and experimentally and found that it had a high noise reduction coefficient (NRC) of 0.66 (over a broad low-frequency range). This translates to a 24.5% improvement in sound absorption performance compared with sound absorption foams that don’t utilize the triboelectric effect.

On the back of this result, the researchers validated their process further by testing other material combinations. This included: a PP/polyvinylidene fluoride (PVDF) foam with an NRC of 0.67 and 22.6% improvement in sound absorption performance; a glass wool/PVDF foam with an NRC of 0.71 and 50.6% improvement in sound absorption performance; and a polyurethane/PVDF foam with an NRC of 0.79 and 43.6% improvement in sound absorption performance.

All the improvements are based on a comparison against their non-triboelectric counterparts – where the sound absorption performance varies from composition to composition, hence the non-linear relationship between percentage values and NRC values. The foams also showed a sound absorption performance of 0.8 NRC at 800 Hz and around 1.00 NRC with sound waves above 1.4 kHz, compared with commercially available counterpart absorber materials.
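
For context, an NRC is conventionally the arithmetic mean of a material’s sound absorption coefficients at 250, 500, 1000 and 2000 Hz, rounded to the nearest 0.05 (as in the ASTM C423 standard); the sketch below shows that calculation with made-up absorption values, not the paper’s measurements.

```python
# Minimal sketch of how a noise reduction coefficient (NRC) is conventionally
# computed: the mean of the sound absorption coefficients at 250, 500, 1000 and
# 2000 Hz, rounded to the nearest 0.05. The coefficients below are hypothetical
# illustrative values, not the measurements reported in the study.

def nrc(absorption_by_freq: dict[int, float]) -> float:
    bands = (250, 500, 1000, 2000)
    mean = sum(absorption_by_freq[f] for f in bands) / len(bands)
    return round(mean / 0.05) * 0.05   # round to the nearest 0.05

example = {250: 0.45, 500: 0.62, 1000: 0.78, 2000: 0.81}   # hypothetical foam
print(f"NRC = {nrc(example):.2f}")
```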

When asked about the future of the sound absorbers, Yao tells Physics World: “We are continuing to improve the performance properties and seeking collaborations for adoption in practical applications”.

The research is published in Nature Communications.

The post Triboelectric device reduces noise pollution appeared first on Physics World.

  •  

Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change

By: No Author
26 November 2024 at 12:00

For all of us concerned about climate change, 2023 was a grim year. According to the World Meteorological Organisation (WMO), it was the warmest year documented so far, with records broken – and in some cases smashed – for ocean heat, sea-level rise, Antarctic sea-ice loss and glacier retreat.

Capping off the warmest 10-year period on record, global average near-surface temperature hit 1.45 °C above pre-industrial levels. “Never have we been so close – albeit on a temporary basis at the moment – to the 1.5 °C lower limit of the Paris Agreement on climate change,” said WMO secretary-general Celeste Saulo in a statement earlier this year.

The heatwaves, floods, droughts and wildfires of 2023 are clear signs of the increasing dangers of the climate crisis. As we look to the future and wonder how much the world will warm, accurate climate models are vital.

For the physicists who build and run these models, one major challenge is figuring out how clouds are changing as the world warms, and how those changes will impact the climate system. According to the Intergovernmental Panel on Climate Change (IPCC), these feedbacks create the biggest uncertainties in predicting future climate change. 

Cloud cover, high and low

Clouds play a key role in the climate system, as they have a profound impact on the Earth’s radiation budget. That is the balance between the amount of energy coming in from solar radiation, and the amount of energy going back out to space, which is both the reflected (shortwave) and thermal (longwave) energy radiated from the Earth.

According to NASA, about 29% of solar energy that hits Earth’s atmosphere is reflected back into space, primarily by clouds (figure 1). And clouds also have a greenhouse effect, warming the planet by absorbing and trapping the outgoing thermal radiation.

1 Earth’s energy budget

Diagram of energy flowing into and out of Earth's atmosphere
(Courtesy: NASA LaRC)

How energy flows into and away from the Earth. Based on data from multiple sources including NASA’s CERES satellite instrument, which measures reflected solar and emitted infrared radiation fluxes. All values are fluxes in watts per square metre and are average values based on 10 years of data. First published in 2014.

“Even a subtle change in global cloud properties could be enough to have a noticeable effect on the global energy budget and therefore the amount of warming,” explains climate scientist Paulo Ceppi of Imperial College London, who is an expert on the impact of clouds on global climate.

A key factor in this dynamic is “cloud fraction” – a measurement that climate scientists use to determine the percentage of the Earth covered by clouds at a given time. More specifically, it’s the portion of the Earth’s surface covered by cloud, relative to the total surface area. Cloud fraction is determined via satellite imagery and is the portion of each pixel (in a 1-km-resolution cloud mask) in an image that is covered by clouds (figure 2).
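
As a minimal illustration (synthetic data, not a MODIS product), cloud fraction can be computed directly from a binary cloud mask; for global averages it is common to weight each latitude row by cos(latitude), because grid cells near the poles cover less area.

```python
import numpy as np

# Minimal sketch: cloud fraction from a gridded binary cloud mask. Each pixel is
# 1 (cloudy) or 0 (clear); the cloud fraction of a region is the cloudy share of
# its pixels. The mask below is random, for illustration only.

rng = np.random.default_rng(42)
lat = np.linspace(-89.5, 89.5, 180)            # cell-centre latitudes, degrees
cloud_mask = rng.random((180, 360)) < 0.67     # ~67% cloudy, roughly Earth-like

simple_fraction = cloud_mask.mean()
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, 360))
weighted_fraction = np.average(cloud_mask, weights=weights)

print(f"unweighted cloud fraction:    {simple_fraction:.3f}")
print(f"area-weighted cloud fraction: {weighted_fraction:.3f}")
```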

Apart from the amount of cover, what also matter are the altitude of clouds and their optical thickness. Higher, cooler clouds absorb more thermal energy originating from the Earth’s surface, and therefore have a greater greenhouse warming effect than low clouds. They also tend to be thinner, so they let more sunlight through and overall have a net warming effect. Low clouds, on the other hand, have a weak greenhouse effect, but tend to be thicker and reflect more solar radiation. They generally have a net cooling effect.

2 Cloud fraction

These maps show what fraction of an area was cloudy on average each month, according to measurements collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. MODIS collects information in gridded boxes, or pixels. Cloud fraction is the portion of each pixel that is covered by clouds. Colours range from blue (no clouds) to white (totally cloudy).

The band of persistent clouds around the equator is the Intertropical Convergence Zone – where the easterly trade winds in the Northern and Southern Hemispheres meet, pushing warm, moist air high into the atmosphere. The air expands and cools, and the water vapour condenses into clouds and rain. The cloud band shifts slightly north and south of the equator with the seasons. In tropical countries, this shifting of the zone is what causes rainy and dry seasons.

Video and data courtesy: NASA Earth Observations

As the climate warms, cloud properties are changing, altering the radiation budget and influencing the amount of warming. Indeed, there are two key changes: rising cloud tops and a reduction in low cloud amount.

The most understood effect, Ceppi explains, is that as global temperatures increase, clouds rise higher into the troposphere, which is the lowermost atmospheric layer. This is because the troposphere expands as it warms, extending to greater altitudes. Over the last 40 years the top of the troposphere, known as the tropopause, has risen by about 50 metres per decade (Sci. Adv. 10.1126/sciadv.abi8065).

“You are left with clouds that rise higher up on average, so have a greater greenhouse warming effect,” Ceppi says. He adds that modelling data and satellite observations support the idea that cloud tops are rising.

Conversely, coverage of low clouds, which reflect sunlight and cool the Earth’s surface, is decreasing with warming. This reduction is mainly in marine low clouds over tropical and subtropical regions. “We are talking a few per cent, so not something that you would necessarily notice with your bare eyes, but it’s enough to have an effect of amplifying global warming,” he adds.

These changes in low clouds are partly responsible for some of the extreme ocean heatwaves seen in recent years (figure 3). While the mechanisms behind these events are complex, one known driver is this reduction in low cloud cover, which allows more solar radiation to hit the ocean (Science 325 460).

“It’s cloud feedback on a more local scale,” Ceppi says. “So, the ocean surface warms locally and that prompts low cloud dissipation, which leads to more solar radiation being absorbed at the surface, which prompts further warming and therefore amplifies and sustains those events.”

3 Ocean heat

Heat map of the Earth
(Data source: ERA5. Courtesy: Copernicus Climate Change Service/ECMWF)

Sea surface temperature anomaly (°C) for the month of June 2023, relative to the 1991–2020 reference period. The global ocean experienced an average daily marine heatwave coverage of 32%, well above the previous record of 23% in 2016. At the end of 2023, most of the global ocean between 20° S and 20° N had been in heatwave conditions since early November.

Despite these insights, several questions remain unanswered. For example, Ceppi explains that while we know that low cloud changes will amplify warming, the strength of these effects needs further investigation, to reduce the uncertainty range.

Also, as high clouds move higher, there may be other important changes, such as shifts in optical thickness, which is a measure of how much light is scattered or absorbed by cloud droplets, instead of passing through the atmosphere. “We are a little less certain about what else happens to [high clouds],” says Ceppi.

Diurnal changes

It’s not just the spatial distribution of clouds that impacts climate. Recent research has found an increasing asymmetry in cloud-cover changes between day and night. Simply put, daytime clouds tend to cool Earth’s surface by reflecting solar radiation, while at night clouds trap thermal radiation and have a warming effect. This shift in diurnal distribution could create a feedback loop that amplifies global warming.

The new study was led by theoretical meteorologist Johannes Quaas at Leipzig University, together with Hao Luo and Yong Han from Sun Yat-sen University in China, who found that as the climate warms, cloud cover – especially in the lower atmosphere – decreases more during the day than at night (Sci. Adv. 10.1126/sciadv.ado5179).

By analysing satellite observations and data from the sixth phase of the Coupled Model Intercomparison Project (CMIP6) – which incorporates historical data collected between 1970 and 2014 as well as projections up to the year 2100 – the researchers concluded that this diurnal asymmetry is largely due to rising concentrations of greenhouse gases that make the lower troposphere more stable, which in turn increases the overall heating.

Fewer clouds form during the day, thereby reducing the amount of shortwave radiation that is reflected away. Night-time clouds are more stable, which in turn increases the longwave greenhouse effect. “Our study shows that this asymmetry causes a positive feedback loop that amplifies global warming,” says Quaas. This growing asymmetry is mainly driven by a daytime increase in turbulence in the lower troposphere as the climate warms, meaning that clouds are less likely to form and remain stable during the day.

Mixed-phase clouds

Climate models are affected by more than just the distribution of clouds in space. What also matters is the distribution of liquid water and ice within clouds. In fact, researchers have found that the way in which models simulate this effect influences their predictions of warming in response to greenhouse gas emissions.

So-called “mixed-phase” clouds are those that contain water vapour, ice particles and supercooled liquid droplets, and exist in a three-phase colloidal system. Such clouds are ubiquitous in the troposphere. These clouds are found at all latitudes from the polar regions to the tropics and they play an important role in the climate system.

As the atmosphere warms, mixed-phase clouds tend to shift from ice to liquid water. This transition makes these clouds more reflective, enhancing their cooling effect on the Earth’s surface – a negative feedback that dampens global warming.

In 2016 Trude Storelvmo, an atmospheric scientist at the University of Oslo in Norway, and her colleagues made an important discovery: many climate models overestimate this negative feedback (Geophys. Res. Lett. 10.1029/2023GL105053). Indeed, the models often simulate clouds with too much ice and not enough liquid water. This error exaggerates the cooling effect from the phase transition. Essentially, the clouds in these simulations have too much ice to lose, causing the models to overestimate the increase in their reflectiveness as they warm.

One problem is that these models oversimplify cloud structure, failing to capture the true heterogeneity of mixed-phase clouds. Satellite, balloon and aircraft observations reveal that these clouds are not uniformly mixed, either vertically or horizontally. Instead, they contain pockets of ice and liquid water, leading to complex interactions that are inadequately represented in the simulations. As a result, they overestimate ice formation and underestimate liquid cloud development.

Storelvmo’s work also found that initially, increased cloud reflectivity has a strong effect that helps mitigate global warming. But as the atmosphere continues to warm, the increase in reflectiveness slows. This shift is intuitive: as the clouds become more liquid, they have less ice to lose. At some point they become predominantly liquid, eliminating the phase transition. The clouds cannot become any more liquid – and thus reflective – and warming accelerates.

Liquid cloud tops

Earlier this year, Storelvmo and colleagues carried out a new study, using satellite data to study the vertical composition of mixed-phase clouds. The team discovered that globally, these clouds are more liquid at the top (Commun. Earth Environ. 5 390).

Storelvmo explains that this top cloud layer is important as “it is the first part of the cloud that radiation interacts with”. When the researchers adjusted climate models to correctly capture this vertical composition, it had a significant impact, triggering an additional degree of warming in a “high-carbon emissions” scenario by the end of this century, compared with current climate projections.

“It is not inconceivable that we will reach temperatures where most of [the negative feedback from clouds] is lost, with current CO2 emissions,” says Storelvmo. The point at which this happens is unclear, but is something that scientists are actively working on.

The study also revealed that while changes to mixed-phase clouds in the northern mid-to-high latitudes mainly influence the climate in the northern hemisphere, changes to clouds at the same southern latitudes have global implications.

“When we modify clouds in the southern extratropics that’s communicated all the way to the Arctic – it’s actually influencing warming in the Arctic,” says Storelvmo. The reasons for this are not fully understood, but Storelvmo says other studies have seen this effect too.

“It’s an open and active area of research, but it seems that the atmospheric circulation helps pass on perturbations from the Southern Ocean much more efficiently than northern perturbations,” she explains.

The aerosol problem

As well as generating the greenhouse gases that drive the climate crisis, fossil fuel burning also produces aerosols. The resulting aerosol pollution is a huge public health issue. The recent “State of Global Air Report 2024” from the Health Effects Institute found that globally eight million people died because of air pollution in 2021. Dirty air is also now the second-leading cause of death in children under five, after malnutrition.

To tackle these health implications, many countries and organizations have introduced air-quality clean-up policies. But cleaning up air pollution has an unfortunate side-effect: it exacerbates the climate crisis. Indeed, a recent study has even warned that aggressive aerosol mitigation policies will hinder our chances of keeping global warming below 2 °C (Earth’s Future 10.1029/2023EF004233).

Smog in Lahore
Deadly conundrum According to some measures, Lahore in Pakistan is the city with the worst air pollution in the world. Air pollution is responsible for tens of millions of deaths every year. But improving air quality can actually exacerbate the climate crisis, as it decreases the small particles in clouds, which are key to reflecting radiation. (Courtesy: Shutterstock/A M Syed)

Jim Haywood, an atmospheric scientist at the University of Exeter, says that aerosols have two major cooling impacts on climate. The first is through the direct scattering of sunlight back out to space. The second is via the changes they induce in clouds.

When you add small pollution particles to clouds, explains Haywood, it creates “clouds that are made up of a larger number of small cloud droplets and those clouds are more reflective”. The shrinking in cloud droplet size can also reduce precipitation, keeping more liquid water in the clouds. The clouds therefore last longer, cover a greater area and become more reflective.

But if atmospheric aerosol concentrations are reduced, so too are these reflective, planet-cooling effects. “This masking effect by the aerosols is taken out and we unveil more and more of the full greenhouse warming,” says Quaas.

A good example of this is recent policy aimed at cleaning up shipping fuels by lowering sulphur concentrations. At the start of 2020 the International Maritime Organisation introduced regulations that slashed the limit on sulphur content in fuels from 3.5% to 0.5%.

Haywood explains that this has reduced the additional reflectivity that this pollution created in clouds and caused a sharp increase in global warming rates. “We’ve done some simulations with climate models, and they seem to be suggestive of at least three to four years acceleration of global warming,” he adds.

Overall, models suggest that if we remove all the world’s polluting aerosols, we can expect to see around 0.4 °C of additional warming, says Quaas. He acknowledges that we must improve air quality “because we cannot just accept people dying and ecosystems deteriorating”. But in doing so, we must also be prepared for this additional warming. More work is needed, “because the current uncertainty is too large”, he continues. Uncertainty in the figures is around 50%, according to Quaas, which means that slashing aerosol pollution could cause anywhere from 0.2 to 0.6 °C of additional warming.

Haywood says that while current models do a relatively good job of representing how aerosols reduce cloud droplet size and increase cloud brightness, they do a poor job of showing how aerosols affect cloud fraction.

Cloud manipulation

The fact that aerosols cool the planet by brightening clouds opens an obvious question: could we use aerosols to deliberately manipulate cloud properties to mitigate climate change?

“There are more recent proposals to combat the impacts, or the worst of the impacts of global warming, through either stratospheric aerosol injection or marine cloud brightening, but they are really in their infancy and need to be understood an awful lot better before any kind of deployment can even be considered,” says Haywood. “You need to know not just how the aerosols might interact with clouds, but also how the cloud then interacts with the climate system and the [atmospheric] teleconnections that changing cloud properties can induce.”

Haywood recently co-authored a position paper, together with a group of atmospheric scientists in the US and Europe, arguing that a programme of physical science research is needed to evaluate the viability and risks of marine cloud brightening (Sci. Adv. 10 eadi8594).

A proposed form of solar radiation management, known as marine cloud brightening, would involve injecting aerosol particles into low-level, liquid marine clouds – mainly those covering large areas of subtropical oceans – to increase their reflectiveness (figure 4).

Most marine cloud-brightening proposals suggest using saltwater spray as the aerosol. In theory, when sprayed into the air the saltwater would evaporate to produce fine haze particles, which would then be transported by air currents into cloud. Once in the clouds, these particles would increase the number of cloud droplets, and so increase cloud brightness.
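
The underlying physics can be sketched with the Twomey relation: at fixed liquid water path the cloud optical depth scales roughly as the cube root of the droplet number concentration, and a simple two-stream estimate gives a cloud albedo of roughly τ/(τ + 7.7) for a typical droplet asymmetry parameter of about 0.85. The numbers in the sketch below are illustrative assumptions, not values from the position paper.

```python
# Hedged sketch of the Twomey effect behind marine cloud brightening: for a
# fixed liquid water path, cloud optical depth scales roughly as the cube root
# of the droplet number concentration N_d, and a simple two-stream estimate of
# cloud albedo is A ≈ τ / (τ + 7.7). Numbers below are illustrative assumptions.

def albedo(tau: float) -> float:
    return tau / (tau + 7.7)

tau_clean = 10.0                   # assumed optical depth of an unperturbed marine cloud
n_clean, n_seeded = 50.0, 150.0    # droplet concentrations, cm^-3 (assumed)

# Twomey scaling at fixed liquid water: tau ∝ N_d^(1/3)
tau_seeded = tau_clean * (n_seeded / n_clean) ** (1.0 / 3.0)

print(f"albedo: {albedo(tau_clean):.2f} -> {albedo(tau_seeded):.2f}")
```

Even this toy calculation shows why the adjustments Feingold describes matter: the albedo gain assumes the liquid water path and cloud cover stay fixed, which in real clouds they often do not.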

4 Marine cloud brightening

Diagram of cloud brightening
CC BY-NC Sci. Adv. 10.1126/sciadv.adi8594

In this proposal, ship-based generators would ingest seawater and produce fine aerosol haze droplets with an equivalent dry diameter of approximately 50 nm. In optimal conditions, many of these haze droplets would be lofted into the cloud by updrafts, where they would modify cloud microphysics processes, such as increasing droplet number concentrations, suppressing rain formation, and extending the coverage and lifetime of the clouds. At the cloud scale, the degree of cloud brightening and surface cooling would depend on how effectively the droplet number concentrations can be increased, droplet sizes reduced, and cloud amount and lifetime increased.

Graham Feingold, a research scientist at NOAA’s Chemical Sciences Laboratory in Boulder, Colorado, says that there are still unanswered questions on everything from particle generation to their interactions with clouds, and the overall impact on cloud brightness and atmospheric systems.

Feingold, an author on the position paper, says that a key challenge lies in predicting how additional particles will affect cloud properties. For instance, while more haze droplets might theoretically brighten clouds, it could also lead to unintended effects like increased evaporation or rain, which could even reduce cloud coverage.

Another difficult challenge is the inconstancy of cloud response to aerosols. “Ship traffic is really regular,” explains Feingold, “but if you look at satellite imagery on a daily basis in a certain area, sometimes you see really clear, beautiful ship tracks and other times you don’t – and the ship traffic hasn’t changed but the meteorology has.” This variability depends on cloud susceptibility to aerosols, which is influenced by meteorological conditions.

And even if cloud systems that respond well to marine cloud brightening are identified, it would not be sensible to repeatedly target them. “Seeding the same area persistently could have some really serious knock-on effects on regional temperature and rainfall,” says Feingold.

Essentially, aerosol injections into the same area day after day would create localized radiative cooling, which would impact regional climate patterns. This highlights the ethical concerns with cloud brightening, as such effects could benefit some regions while negatively impacting others.

Addressing many of these questions requires significant advances in current climate models, so that the entire process – from the effects of aerosols on cloud microphysics through to the larger impact on clouds and then global climate circulations – can be accurately simulated. Bridging these knowledge gaps will require controlled field experiments, such as aerosol releases from point sources in areas of interest, while taking observational data using tools like drones, airplanes and satellites. Such experiments would help scientists get a “handle on this connection between emitted particles and brightening”, says Feingold.

But physicists can only do so much. “We are not trying to push marine cloud brightening, we are trying to understand it,” says Feingold. He argues that a parallel effort to discuss the governance of marine cloud brightening is also needed.

In recent years, much progress has been made in determining the impact of clouds, when it comes to regulating our planet’s climate, and their importance in climate modelling. “While major advances in the understanding of cloud processes have increased the level of confidence and decreased the uncertainty range for the cloud feedback by about 50% compared to AR5 [IPCC report], clouds remain the largest contribution to overall uncertainty in climate feedbacks (high confidence),” states the IPCC’s latest Assessment Report (AR6), published in 2021. Physicists and atmospheric scientists will continue to study how cloud systems will respond to our ever-changing climate and planet, but ultimately, it is wider society that needs to decide the way forward.

The post Cloudy with a chance of warming: how physicists are studying the dynamical impact of clouds on climate change appeared first on Physics World.

  •  

Cascaded crystals move towards ultralow-dose X-ray imaging

By: Tami Freeman
25 November 2024 at 14:30
Single-crystal and cascade-connected devices under X-ray irradiation
Working principle Illustration of the single-crystal (a) and cascade-connected two-crystal (b) devices under X-ray irradiation. (c) Time-resolved photocurrent responses of the two devices. (Courtesy: CC BY 4.0/ACS Cent. Sci. 10.1021/acscentsci.4c01296)

X-ray imaging plays an indispensable role in diagnosing and staging disease. Nevertheless, exposure to high doses of X-rays has potential for harm, and much effort is focused towards reducing radiation exposure while maintaining diagnostic function. With this aim, researchers at the King Abdullah University of Science and Technology (KAUST) have shown how interconnecting single-crystal devices can create an X-ray detector with an ultralow detection threshold.

The team created devices using lab-grown single crystals of methylammonium lead bromide (MAPbBr3), a perovskite material that exhibits considerable stability, minimal ion migration and a high X-ray absorption cross-section – making it ideal for X-ray detection. To improve performance further, they used cascade engineering to connect two or more crystals together in series, reporting their findings in ACS Central Science.

X-rays incident upon a semiconductor crystal detector generate a photocurrent via the creation of electron–hole pairs. When exposed to the same X-ray dose, cascade-connected crystals should exhibit the same photocurrent as a single-crystal device (as they generate equal net concentrations of electron–hole pairs). The cascade configuration, however, has a higher resistivity and should thus have a much lower dark current, improving the signal-to-noise ratio and enhancing the detection performance of the cascade device.
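
The argument can be made concrete with an idealized ohmic sketch (assumed resistance and photocurrent values, not the measured ones): the photocurrent is unchanged while the dark current at a fixed bias falls roughly as 1/N for N crystals in series.

```python
# Minimal sketch of the cascade argument under an idealized ohmic assumption:
# the X-ray photocurrent is the same for one crystal or several in series, but
# series resistances add, so at a fixed bias the dark current falls roughly as
# 1/N. Real devices deviate from this (see the measured values in the text);
# the numbers here are assumptions for illustration.

BIAS_V = 2.0
R_SINGLE = 150e6          # assumed dark resistance of one crystal, ohms
I_PHOTO = 40e-9           # assumed X-ray photocurrent, amps

for n_crystals in (1, 2, 3, 4):
    i_dark = BIAS_V / (n_crystals * R_SINGLE)
    snr = I_PHOTO / i_dark
    print(f"{n_crystals} crystal(s): dark current {i_dark * 1e9:5.1f} nA, "
          f"signal-to-dark ratio {snr:4.1f}")
```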

To test this premise, senior author Omar Mohammed and colleagues grew single crystals of MAPbBr3. They first selected four identical crystals to evaluate (SC1, SC2, SC3 and SC4), each 3 x 3 mm in area and approximately 2 mm thick. Measuring various optical and electrical properties revealed high consistency across the four samples.

“The synthesis process allows for reproducible production of MAPbBr3 single crystals, underscoring their strong potential for commercial applications,” says Mohammed.

Optimizing detector performance

Mohammed and colleagues fabricated X-ray detectors containing a single MAPbBr3 perovskite crystal (SC1) and detectors with two, three and four crystals connected in series (SC1−2, SC1−3 and SC1−4). To compare the dark currents of the devices they irradiated each one with X-rays under a constant 2 V bias voltage. The cascade-connected SC1–2 exhibited a dark current of 7.04 nA, roughly half that generated by SC1 (13.4 nA). SC1–3 and SC1–4 reduced the dark current further, to 4 and 3 nA, respectively.

The researchers also measured the dark current for the four devices as the bias voltage changed from 0 to -10 V. They found that SC1 reached the highest dark current of 547 nA, while SC1–2, SC1–3 and SC1–4 showed progressively decreasing dark currents of 134, 90 and 50 nA, respectively. “These findings highlight the effectiveness of cascade engineering in reducing dark current levels,” Mohammed notes.

Next, the team assessed the current stability of the devices under continuous X-ray irradiation for 450 s. SC1–2 exhibited a stable current response, with a skewness value of just 0.09, while SC1, SC1–3 and SC1–4 had larger skewness values of 0.75, 0.45 and 0.76, respectively.

The researchers point out that while connecting more single crystals in series reduced the dark current, increasing the number of connections also lowered the stability of the device. The two-crystal SC1–2 represents the optimal balance.

Low-dose imaging

One key component required for low-dose X-ray imaging is a low detection threshold. The conventional single-crystal SC1 showed a detection limit of 590 nGy/s under a 2 V bias. SC1–2 decreased this limit to 100 nGy/s – the lowest of all four devices and surpassing the existing record achieved by MAPbBr3 perovskite devices under near-identical conditions.

Spatial resolution is another important consideration. To assess this, the researchers estimated the modulation transfer function (the level of original contrast maintained by the detector) for each of the four devices. They found that SC1–2 exhibited the best spatial resolution of 8.5 line pairs/mm, compared with 5.6, 5.4 and 4 line pairs/mm for SC1, SC1–3 and SC1–4, respectively.
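
For readers unfamiliar with the metric, one common way to estimate an MTF (a hedged sketch, not necessarily the procedure used in this study) is to image a sharp edge, differentiate the edge-spread function to obtain the line-spread function, and take the normalized magnitude of its Fourier transform; the resolution is then often quoted as the spatial frequency at which the MTF drops below a threshold such as 10%.

```python
import numpy as np

# Hedged sketch: estimating a detector's modulation transfer function (MTF)
# from an image of a sharp edge. The edge-spread function (ESF) is differentiated
# to give the line-spread function (LSF); the normalized magnitude of its Fourier
# transform is the MTF. All numbers are assumptions for illustration.

pixel_mm = 0.05                                    # assumed pixel pitch, mm
x = np.arange(256) * pixel_mm
esf = 0.5 * (1 + np.tanh((x - x.mean()) / 0.1))    # synthetic blurred edge, ~0.1 mm wide

lsf = np.gradient(esf, pixel_mm)                   # line-spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                                      # normalize to 1 at zero frequency
freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)      # spatial frequency, line pairs/mm

resolution = freqs[np.argmax(mtf < 0.1)]
print(f"10% MTF frequency ~ {resolution:.1f} lp/mm")
```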

X-ray images of a key and a raspberry with a needle
Optimal imaging Actual and X-ray images of a key and a raspberry with a needle obtained by the SC1 to SC1–4 devices. (Courtesy: CC BY 4.0/ACS Cent. Sci. 10.1021/acscentsci.4c01296)

Finally, the researchers performed low-dose X-ray imaging experiments using the four devices, first imaging a key at a dose rate of 3.1 μGy/s. SC1 exhibited an unclear image due to the unstable current affecting its resolution. Devices SC1–2 to SC1–4 produced clearer images of the key, with SC1–2 showing the best image contrast.

They also imaged a USB port at a dose rate of 2.3 μGy/s, a metal needle piercing a raspberry at 1.9 μGy/s and an earring at 750 nGy/s. In all cases, SC1–2 exhibited the highest quality image.

The researchers conclude that the cascade-engineered configuration represents a significant shift in low-dose X-ray detection, with potential to advance applications that require minimal radiation exposure combined with excellent image quality. They also note that the approach works with different materials, demonstrating X-ray detection using cascaded cadmium telluride (CdTe) single crystals.

Mohammed says that the team is now investigating the application of the cascade structure in other perovskite single crystals, such as FAPbI3 and MAPbI3, with the goal of reducing their detection limits. “Moreover, efforts are underway to enhance the packaging of MAPbBr3 cascade single crystals to facilitate their use in dosimeter detection for real-world applications,” he tells Physics World.

The post Cascaded crystals move towards ultralow-dose X-ray imaging appeared first on Physics World.

  •  

Why academia should be funded by governments, not students

By: No Author
25 November 2024 at 12:00

In an e-mail to staff in September 2024, Christopher Day, the vice-chancellor of Newcastle University in the UK, announced a £35m shortfall in its finances for 2024. Unfortunately, Newcastle is not alone in facing financial difficulties. The problem is largely due to UK universities obtaining much of their funding by charging international students exorbitant tuition fees of tens of thousands of pounds per year. In 2022 international students made up 26% of the total student population. But with the number of international students coming to the UK recently falling and tuition fees for domestic students having increased by less than 6% over the last decade, the income from students is no longer enough to keep our universities afloat.

Both Day and Universities UK (UUK) – the advocacy organization for universities in the UK – pushed for the UK government to allow universities to increase fees for both international and domestic students. They suggested raising the cap on tuition fees for UK students to £13,000 per year, much more than the new cap that was set earlier this month at £9535. Increasing tuition fees further, however, would be a disaster for our education system.

The introduction of student fees was sold to universities in the late 1990s as a way to get more money, and sold to the wider public as a way to allow “market fairness” to improve the quality of education given by universities. In truth, it was never about either of these things.

Tuition fees were about making sure that the UK government would not have to worry about universities pressuring them to increase funding. Universities instead would have to rationalize higher fees with the students themselves. But it is far easier to argue that “we need more money from you, the government, to continue the social good we do” than it is to say “we need more money from you, the students, to keep giving you the same piece of paper”.

Degree-level education in the UK is now treated as a private commodity, to be sold by universities and bought by students, with domestic students taking out a loan from the government that they pay back once they earn above a certain threshold. But this implies that it is only students who profit from the education and that the only benefit for them of a degree is a high-paid job.

Education ends up reduced to an initial financial outlay for a potential future financial gain, with employers looking for job applicants with a degree regardless of what it is in. We might as well just sell students pieces of paper boasting about how much money they have “invested” in themselves.

Yet going to university brings so much more to students than just a boost to their future earnings. Just look, for example, at the high student satisfaction for arts and humanities degrees compared to business or engineering degrees. University education also brings huge social, cultural and economic benefits to the wider community at a local, regional and national level.

UUK estimates that for every £1 of public money invested in the higher-education sector across the UK, £14 is put back into the economy – totalling £265bn per year. Few other areas of government spending give such large economic returns for the UK. No wonder, then, that other countries continue to fund their universities centrally through taxes rather than fees. (Countries such as Germany that do levy fees charge only a nominal amount, as the UK once did.)

Some might say that the public should not pay for students to go to university. But that argument doesn’t stack up. We all pay for roads, schools and hospitals from general taxation whether we use those services or not, so the same should apply for university education. Students from Scotland who study in the country have their fees paid by the state, for example.

Up in arms

Thankfully, some subsidy still remains in the system, mainly for technical degrees such as the sciences and medicine. These courses on average cost more to run than humanities and social sciences courses due to the cost of practical work and equipment. However, as budgets tighten, even this is being threatened.

In 2004 Newcastle closed its physics degree programme due to its costs. While the university soon reversed the mistake, it lives long in the memories of those who today still talk about the incalculable damage this and similar cuts did to UK physics. Indeed, I worry whether this renewed focus on profitability, which over the last few years has led to many humanities programmes and departments closing at UK universities, could again lead to closures in the sciences. Without additional funding, it seems inevitable.

University leaders should have been up in arms when student fees were introduced in the early 2000s. Instead, most went along with them, and are now reaping what they sowed. University vice-chancellors shouldn’t be asking the government to allow universities to charge ever higher fees – they should be telling the government that we need more money to keep doing the good we do for this country. They should not view universities as private businesses and instead lobby the government to reinstate a no-fee system and to support universities again as being social institutions.

If this doesn’t happen, then the UK academic system will fall. Even if we do manage to somehow cut costs in the short term by around £35m per university, it will only prolong the inevitable. I hope vice chancellors and the UK government wake up to this fact before it is too late.

The post Why academia should be funded by governments, not students appeared first on Physics World.

  •  

Ultrafast electron entanglement could be studied using helium photoemission

By: No Author
23 November 2024 at 14:28

The effect of quantum entanglement on the emission time of photoelectrons has been calculated by physicists in China and Austria. Their result includes several counter-intuitive predictions that could be testable with improved free-electron lasers.

The photoelectric effect involves quantum particles of light (photons) interacting with electrons in atoms, molecules and solids. This can result in the emission of an electron (called a photoelectron), but only if the photon energy is greater than the binding energy of the electron.

“Typically when people calculate the photoelectric effect they assume it’s a very weak perturbation on an otherwise inert atom or solid surface and most of the time does not suffer anything from these other atoms or photons coming in,” explains Wei-Chao Jiang of Shenzhen University in China. In very intense radiation fields, however, the atom may simultaneously absorb multiple photons, and these can give rise to multiple emission pathways.

Jiang and colleagues have done a theoretical study of the ionization of a helium atom from its ground state by intense pulses of extreme ultraviolet (XUV) light. At sufficient photon intensities, there are two possible pathways by which a photoelectron can be produced. In the first, called direct single ionization, one of the ground-state electrons simply absorbs a photon and escapes the atom’s potential well. The second is a two-photon pathway called excitation ionization, in which both of the helium electrons absorb a photon from the same light pulse. One of them subsequently escapes, while the other remains in a higher energy level in the residual ion.
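
Energy conservation keeps the two pathways distinct in the photoelectron spectrum. Writing \(I_p\) for helium’s ionization potential and \(E^*\) for the excitation energy of the residual He+ ion (generic symbols, not values taken from the paper), the photoelectron energies are, schematically,

\[
E_k^{\text{direct}} = \hbar\omega - I_p, \qquad
E_k^{\text{excitation}} = 2\hbar\omega - I_p - E^*,
\]

so the two channels leave the emitted electron with different energies.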

Distinct pathways

The two photoemission pathways are distinct, so making a measurement of the emitted electron reveals information about the state of the bound electron that was left behind. The light pulse therefore creates an entangled state in which the two electrons are described by the same quantum wavefunction. To better understand the system, the researchers modelled the emission time for an electron undergoing excitation ionization relative to an electron undergoing direct single ionization.
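
Schematically – in simplified notation of our own, not the paper’s – the state created by the pulse can be written as a superposition in which the photoelectron’s energy is correlated with the state of the ion left behind:

\[
|\Psi\rangle \approx c_1\,|\epsilon_1\rangle\,|\mathrm{He}^+(1s)\rangle + c_2\,|\epsilon_2\rangle\,|\mathrm{He}^+(n{=}2)\rangle,
\]

so measuring the free electron’s energy reveals which level the bound electron occupies – the hallmark of entanglement.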

“The naïve expectation is that, if I have a process that takes two photons, that process will take longer than one where one photon does the whole thing,” says team member Joachim Burgdörfer of the Vienna University of Technology. What the researchers calculated, however, is that photoelectrons emitted by excitation ionization were most likely to be detected about 200 as earlier than those emitted by direct single ionization. This can be explained semi-classically: the photoionization event that creates the helium ion (He+) must occur early enough in the pulse for the second, excitation step to follow. Excitation ionization therefore favours earlier photoemission.

The researchers believe that, in principle, it should be possible to test their model using attosecond streaking or RABBITT (reconstruction of attosecond beating by interference of two-photon transitions). These are special types of pump-probe spectroscopy that can observe interactions at ultrashort timescales. “Naïve thinking would say that, using a 500 as pulse as a pump and a 10 fs pulse as a probe, there is no way you can get time resolution down to say, 10 as,” says Burgdörfer. “This is where recently developed techniques such as streaking or RABBITT come in. You no longer try to keep the pump and probe pulses apart, instead you want overlap between the pump and probe and you extract the time information from the phase information.”
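
In the standard RABBITT scheme (quoted here as background; the paper’s own analysis may differ in detail), the measured sideband signal oscillates with the pump–probe delay \(\tau\) as

\[
S(\tau) \propto \cos\!\bigl(2\omega_{\mathrm{IR}}\tau - \Delta\phi\bigr), \qquad \Delta t = \frac{\Delta\phi}{2\omega_{\mathrm{IR}}},
\]

where \(\omega_{\mathrm{IR}}\) is the infrared probe frequency – which is how a phase measurement can deliver attosecond timing information from pulses that are themselves far longer.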

Simulated streaking

The team also did numerical simulations of the expected streaking patterns at one energy and found that they were consistent with an analytical calculation based on their intuitive picture. “Within a theory paper, we can only check for mutual consistency,” says Burgdörfer.

The principal hurdle to actual experiments lies in generating the required XUV pulses. Pulses from high-harmonic generation may not be strong enough to drive the two-photon process. Free-electron laser pulses can be extremely powerful, but are prone to phase noise. However, the researchers note that entanglement between a photoelectron and an ion has recently been demonstrated at the FERMI free-electron laser facility in Italy.

“Testing these predictions employing experimentally realizable pulse shapes should certainly be the next important step,” says Burgdörfer. Beyond this, the researchers intend to study entanglement in more complex systems such as multi-electron atoms or simple molecules.

Paul Corkum at Canada’s University of Ottawa is intrigued by the research. “If all we’re going to do with attosecond science is measure single electron processes, probably we understood them before, and it would be disappointing if we didn’t do something more,” he says. “It would be nice to learn about atoms, and this is beginning to go into an atom or at least its theory thereof.” He cautions, however, that “If you want to do an experiment this way, it is hard.”

The research is described in Physical Review Letters.  

The post Ultrafast electron entanglement could be studied using helium photoemission appeared first on Physics World.

  •  

Noodles of fun as UK researchers create the world’s thinnest spaghetti

22 November 2024 at 16:30

While spaghetti might have a diameter of a couple of millimetres and capelli d’angelo (angel hair) is around 0.8 mm, the thinnest known pasta to date is thought to be su filindeu (threads of God), which is made by hand in Sardinia, Italy, and is about 0.4 mm in diameter.

That was, however, until researchers in the UK created spaghetti measuring a mind-boggling 372 nanometres (0.000372 mm) across (Nanoscale Adv. 10.1039/D4NA00601A).

About 200 times thinner than a human hair, the “nanopasta” is made using a technique called electrospinning, in which threads of flour and liquid are pulled through the tip of a needle by an electric charge.
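
The “200 times thinner” comparison checks out if one assumes a typical hair diameter of roughly 75 µm (hair thickness varies considerably):

\[
\frac{75\,000\ \text{nm}}{372\ \text{nm}} \approx 200.
\]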

“To make spaghetti, you push a mixture of water and flour through metal holes,” notes Adam Clancy from University College London (UCL). “In our study, we did the same except we pulled our flour mixture through with an electrical charge. It’s literally spaghetti but much smaller.”

While each individual strand is too thin to see directly with the human eye or with a visible light microscope, the team used the threads to form a mat of nanofibres about two centimetres across, creating in effect a mini lasagne sheet.

The researchers are now investigating how the starch-based nanofibres could be used for medical purposes such as wound dressing, for scaffolds in tissue regrowth and even in drug delivery. “We want to know, for instance, how quickly it disintegrates, how it interacts with cells, and if you could produce it at scale,” says UCL materials scientist Gareth Williams.

But don’t expect to see nanopasta hitting the supermarket shelves anytime soon. “I don’t think it’s useful as pasta, sadly, as it would overcook in less than a second, before you could take it out of the pan,” adds Williams. And no-one likes rubbery pasta.

The post Noodles of fun as UK researchers create the world’s thinnest spaghetti appeared first on Physics World.

  •  

Lens breakthrough paves the way for ultrathin cameras

By: No Author
22 November 2024 at 13:00

A research team headed up at Seoul National University has pioneered an innovative metasurface-based folded lens system, paving the way for a new generation of slimline cameras for use in smartphones and augmented/virtual reality devices.

Traditional lens modules, built from vertically stacked refractive lenses, have fundamental thickness limitations, mainly due to the need for space between lenses and the intrinsic volume of each individual lens. In an effort to overcome these restrictions, the researchers – also at Stanford University and the Korea Institute of Science and Technology – have developed a lens system using metasurface folded optics. The approach enables unprecedented manipulation of light with exceptional control of intensity, phase and polarization – all while maintaining thicknesses of less than a millimetre.

Folding the light path

As part of the research – detailed in Science Advances – the team placed metasurface optics horizontally on a glass wafer. These metasurfaces direct light through multiple folded diagonal paths within the substrate, optimizing space usage and demonstrating the feasibility of a 0.7 mm-thick lens module for ultrathin cameras.

“Most prior research has focused on understanding and developing single metasurface elements. I saw the next step as integrating and co-designing multiple metasurfaces to create entirely new optical systems, leveraging each metasurface’s unique capabilities. This was the main motivation for our paper,” says co-author Youngjin Kim, a PhD candidate in the Optical Engineering and Quantum Electronics Laboratory at Seoul National University.

According to Kim, creating a metasurface folded lens system requires a wide range of interdisciplinary expertise: a grounding in conventional imaging systems, such as ray-optics-based lens module design; familiarity with point-spread-function and modulation-transfer-function analysis and with imaging simulations, both used in imaging and optics to describe the performance of imaging systems; and a deep awareness of the physical principles behind designing metasurfaces, together with the nanofabrication techniques for constructing them.

“In this work, we adapted traditional imaging system design techniques, using the commercial tool Zemax, for metasurface systems,” Kim adds. “We then used nanoscale simulations to design the metasurface nanostructures and, finally, we employed lithography-based nanofabrication to create a prototype sample.”

Smoothing the “camera bump”

The researchers evaluated their proposed lens system by illuminating it with an 852 nm laser, observing that it could achieve near-diffraction-limited imaging quality. The folding of the optical path length reduced the lens module thickness to half of the effective focal length (1.4 mm), overcoming inherent limitations of conventional optical systems.
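
Putting the quoted numbers together – this is simply the relationship stated above, not an independent derivation – the folded design needs only half the effective focal length in physical thickness:

\[
t \approx \frac{f_{\text{eff}}}{2} = \frac{1.4\ \text{mm}}{2} = 0.7\ \text{mm},
\]

matching the 0.7 mm-thick module demonstrated on the glass wafer.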

“Potential applications include fully integrated, miniaturized, lightweight camera systems for augmented reality glasses, as well as solutions to the ‘camera bump’ issue in smartphones and miniaturized microscopes for in vivo imaging of live animals,” Kim explains.

Kim also highlights some more general advantages of folded lens systems over existing approaches in devices such as compact cameras, smartphones and augmented/virtual reality headsets, including their ultraslim, lightweight form factor and the potential for mass production using standard semiconductor fabrication processes.

When it comes to further research and practical applications in this area over the next few years, Kim points out that metasurface folded optics “offer a powerful platform for light modulation” within an ultrathin form factor, particularly since the system’s thickness remains constant regardless of the number of metasurfaces used.

“Recently, there has been growing interest in co-designing hardware-based optical elements with software-based AI-based image processing for end-to-end optimization, which maximizes device functionality for specific applications,” he says. “Future research may focus on combining metasurface folded optics with end-to-end optimization to harness the strengths of both advanced hardware and AI.”

The post Lens breakthrough paves the way for ultrathin cameras appeared first on Physics World.

  •  

Martin Rees, Carlo Rovelli and Steven Weinberg tackle big questions to mark Oxford anniversary

22 November 2024 at 10:00

If you want to read about controversies in physics, a (brief) history of the speed of light or the quest for dark matter, then make sure to check out this collection of papers to mark the 10th anniversary of the St Cross Centre for the History and Philosophy of Physics (HAPP).

HAPP was co-founded in 2014 by Jo Ashbourn and James Dodd and since then the centre has run a series of one-day conferences as well as standalone lectures and seminars about big topics in physics and philosophy.

Based on these contributions, HAPP has now published a 10th anniversary commemorative volume in the open-access Journal of Physics: Conference Series, which is published by IOP Publishing.

The volume is structured around four themes: physicists across history; space and astronomy; philosophical perspectives; and concepts in physics.

The big names in physics to write for the volume include Martin Rees on the search for extraterrestrial intelligence across a century; Carlo Rovelli on scientific thinking across the centuries; and the late Steven Weinberg on the greatest physics discoveries of the 20th century.

I was delighted to also contribute to the volume based on a talk I gave in February 2020 for a one-day HAPP meeting about big science in physics.

The conference covered the past, present and future of big science and I spoke about the coming decade of new facilities in physics and the possible science that may result. I also included my “top 10 facilities to watch” for the coming decade.

In a preface to the volume, Ashbourn writes that HAPP was founded to provide “a forum in which the philosophy and methodologies that inform how current research in physics is undertaken would be included alongside the history of the discipline in an accessible way that could engage the general public as well as scientists, historians and philosophers,” adding that she is “looking forward” to HAPP’s second decade.

The post Martin Rees, Carlo Rovelli and Steven Weinberg tackle big questions to mark Oxford anniversary appeared first on Physics World.

  •  

Top-cited authors from North America share their tips for boosting research impact

By: No Author
21 November 2024 at 22:00

More than 80 papers from North America have been recognized with a Top Cited Paper award for 2024 from IOP Publishing, which publishes Physics World. The prize is given to corresponding authors of papers published in IOP Publishing’s own journals and those of its partners from 2021 to 2023 that are in the top 1% of the most-cited papers.

Among the awardees are astrophysicists Sarah Vigeland and Stephen Taylor, who are co-authors of the winning article examining the gravitational-wave background using NANOGrav data. “This is an incredible validation of the hard work of the entire NANOGrav collaboration, who persisted over more than 15 years in the search for gravitational wave signals at wavelengths of lightyears,” say Vigeland and Taylor in a joint e-mail.

They add that the article has sparked unexpected “interest and engagement” from the high-energy theory and cosmology communities and that the award is a “welcome surprise”.

While citations give broader visibility, the authors say that research is not impactful because of its citations alone, but rather it attracts citations because of its impact and importance.

“Nevertheless, a high citation count does signal to others that a paper is relevant and worth reading, which will attract broader audiences and new attention,” they explain, adding that a paper often becomes highly cited because it tackles “an interesting problem” that intersects a variety of different disciplines. “Such work will attract a broad readership and make it more likely for researchers to cite a paper,” they say.

Aiming for impact

Another top-cited award winner from North America is bio-inspired engineer Carl White, first author of the winning article about a tuna-inspired robot called Tunabot Flex. “In our paper, we designed and tested a research platform based on tuna to close the performance gap between robotic and biological systems,” says White. “Using this platform, termed Tunabot Flex, we demonstrated the role of body flexibility in high-performance swimming.”

White notes that the interdisciplinary nature of the work between engineers and biologists led to researchers from a variety of fields citing it. “Our paper is just one example of the many studies benefitting from the rich cross-pollination of ideas to new contexts,” says White, adding that the IOP Publishing award is a “great honour”.

White states that scientific knowledge grows in “irregular and interconnected” ways and tracing citations from one paper to another “provides transparency into the origins of ideas and their development”.

“My advice to researchers looking to maximize their work’s impact is to focus on a novel idea that addresses a significant need,” says White. “Innovative work fills gaps in existing literature, so you must identify a gap and then characterize its presence. Show how your work is groundbreaking by thoroughly placing it within the context of your field.”

  • For the full list of top-cited papers from North America for 2024, see here. To read the award-winning research click here and here.
  • For the full in-depth interviews with White, Vigeland and Taylor, see here.

The post Top-cited authors from North America share their tips for boosting research impact appeared first on Physics World.

  •  

Quantum error correction research yields unexpected quantum gravity insights

By: Han Le
21 November 2024 at 17:00

In computing, quantum mechanics is a double-edged sword. While computers that use quantum bits, or qubits, can perform certain operations much faster than their classical counterparts, these qubits only maintain their quantum nature – their superpositions and entanglement – for a limited time. Beyond this so-called coherence time, interactions with the environment, or noise, lead to loss of information and errors. Worse, because quantum states cannot be copied – a consequence of quantum mechanics known as the no-cloning theorem – or directly observed without collapsing the state, correcting these errors requires more sophisticated strategies than the simple duplications used in classical computing.
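
The no-cloning theorem itself follows from a short, standard argument (sketched here for context; it is not specific to this work). Suppose a single unitary U could copy two arbitrary states:

\[
U\bigl(|\psi\rangle|0\rangle\bigr) = |\psi\rangle|\psi\rangle, \qquad
U\bigl(|\varphi\rangle|0\rangle\bigr) = |\varphi\rangle|\varphi\rangle.
\]

Because unitaries preserve inner products, taking the overlap of the two equations gives \(\langle\varphi|\psi\rangle = \langle\varphi|\psi\rangle^2\), so \(\langle\varphi|\psi\rangle\) must be 0 or 1: only mutually orthogonal states can be copied, never arbitrary superpositions.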

One such strategy is known as an approximate quantum error correction (AQEC) code. Unlike exact QEC codes, which aim for perfect error correction, AQEC codes help quantum computers return to almost, though not exactly, their intended state. “When we can allow mild degrees of approximation, the code can be much more efficient,” explains Zi-Wen Liu, a theoretical physicist who studies quantum information and computation at China’s Tsinghua University. “This is a very worthwhile trade-off.”

The problem is that the performance and characteristics of AQEC codes are poorly understood. For instance, AQEC conventionally entails the expectation that errors will become negligible as system size increases. For random local noise, this can in fact be achieved trivially, simply by appending a series of redundant qubits to the logical state: the likelihood of the logical information being affected is then vanishingly small, yet such a code is ultimately unhelpful. This raises the questions: what separates good (that is, non-trivial) codes from bad ones? And is this dividing line universal?

Establishing a new boundary

So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu; Daniel Gottesman of the University of Maryland, US; Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics; and Weicheng Ye of the University of British Columbia, Canada, to develop a framework for doing so.

To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity.

Circuit complexity, an important concept in both computer science and physics, represents the optimal cost of a computational process. This cost can be assessed in many ways, with the most intuitive metrics being the minimum time or the “size” of computation required to prepare a quantum state using local gate operations. For instance, how long does it take to link up the individual qubits to create the desired quantum states or transformations needed to complete a computational task?

The researchers found that if the subsystem variance falls below a certain threshold, any code within this regime is a non-trivial AQEC code and is subject to a lower bound on circuit complexity. This finding is highly general and does not depend on the specific structure of the system. Hence, by establishing this boundary, the researchers gained a more unified framework for evaluating and using AQEC codes, allowing them to explore the broader error-correction schemes essential for building reliable quantum computers.

A quantum leap

But that wasn’t all. The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature.

One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. These conditions include long-range entanglement, which is a circuit complexity condition, and topological entanglement entropy, which quantifies the extent of long-range entanglement. The new framework clarifies the connection between these entanglement conditions and topological quantum order, allowing researchers to better understand these exotic phases of matter.

A more surprising connection, though, concerns one of the deepest questions in modern physics: how do we reconcile quantum mechanics with Einstein’s general theory of relativity? While quantum mechanics governs the behaviour of particles at the smallest scales, general relativity accounts for gravity and space-time on a cosmic scale. The two theories do not fit together seamlessly, which creates challenges when applying quantum mechanics to strongly gravitating systems.

In the 1990s a mathematical framework called the anti-de Sitter/conformal field theory correspondence (AdS/CFT) emerged as a way of studying quantum gravity using a CFT, even though the CFT itself does not incorporate gravity. As it turns out, the way quantum information is encoded in a CFT has conceptual ties to QEC. Indeed, these ties have driven recent advances in our understanding of quantum gravity.

By studying CFT systems at low energies and identifying connections between code properties and intrinsic CFT features, the researchers discovered that the CFT codes that pass their AQEC threshold might be useful for probing certain symmetries in quantum gravity. New insights from AQEC codes could even lead to new approaches to spacetime and gravity, helping to bridge the divide between quantum mechanics and general relativity.

Some big questions remain unanswered, though. One of these concerns the line between trivial and non-trivial codes. For instance, what happens to codes that live close to the boundary? The researchers plan to investigate scenarios where AQEC codes could outperform exact codes, and to explore ways to make the implications for quantum gravity more rigorous. They hope their study will inspire further explorations of AQEC’s applications to other interesting physical systems.

The research is described in Nature Physics.

The post Quantum error correction research yields unexpected quantum gravity insights appeared first on Physics World.

  •  

Mechanical qubit could be used in quantum sensors and quantum memories

By: No Author
21 November 2024 at 14:08

Researchers in Switzerland have created a mechanical qubit using an acoustic wave resonator, marking a significant step forward in quantum acoustodynamics. The qubit is not good enough for quantum logic operations, but researchers hope that further efforts could lead to applications in quantum sensing and quantum memories.

Contemporary quantum computing platforms such as trapped ions and superconducting qubits operate according to the principles of quantum electrodynamics. In such systems, quantum information is held in electromagnetic states and transmitted using photons. In quantum acoustodynamics, however, the quantum information is stored in the quantum states of mechanical resonators. These devices interact with their surroundings via quantized vibrations (phonons), which cannot propagate through a vacuum. As a result, isolated mechanical resonators can have much longer lifetimes than their electromagnetic counterparts. This could be particularly useful for creating quantum memories.

John Teufel of the US National Institute of Standards and Technology (NIST) and his team shared Physics World’s 2021 Breakthrough of the Year award for using light to achieve the quantum entanglement of two mechanical resonators. “If you entangle two drums, you know that their motion is correlated beyond vacuum fluctuations,” explains Teufel. “You can do very quantum things, but what you’d really want is for these things to be nonlinear at the single-photon level – that’s more like a bit, holding one and only one excitation – if you want to do things like quantum computing. In my work that’s not a regime we’re usually ever in.”

Hitherto impossible

Several groups such as Yiwen Chu’s at ETH Zurich have interfaced electromagnetic qubits with mechanical resonators and used qubits to induce quantized mechanical excitations. Actually producing a mechanical qubit had proved hitherto impossible, however. A good qubit must have two energy levels, akin to the 1 and 0 states of a classical bit. It can then be placed (or initialized) in one of those levels and remain in a coherent superposition of the two without other levels interfering.

This is possible if the system has unevenly spaced energy levels – which is true in an atom or ion, and can be engineered in a superconducting qubit. Driving a qubit using photons with the exact transition energy then excites Rabi oscillations, in which the population of the upper level rises and falls periodically. However, acoustic resonators are harmonic oscillators, and the energy levels of a harmonic oscillator are evenly spaced. “Every time we would prepare a phonon mode into a harmonic oscillator we would jump by one energy level,” says Igor Kladarić, who is a PhD student in Chu’s group.
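
The obstacle can be written in one line – a textbook result, not something specific to this experiment. A harmonic oscillator of frequency \(\omega\) has energy levels

\[
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad E_{n+1} - E_n = \hbar\omega \ \text{for every } n,
\]

so a drive resonant with the 0–1 transition is equally resonant with 1–2, 2–3 and so on: it climbs the whole ladder rather than addressing just two levels.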

In the new work, Kladarić and colleagues used a superconducting transmon qubit coupled to an acoustic resonator on a sapphire chip. The frequency of the superconducting qubit was slightly off-resonance with that of the mechanical resonator. Without being driven in any way, the superconducting qubit coupled to the mechanical resonator and created a shift in the frequencies of the ground state and first excited state of the resonator. This created the desired two-level system in the resonator.
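
A minimal way to picture this – a simplified model, not the authors’ full treatment – is that the off-resonant qubit lends the resonator an effective Kerr-type anharmonicity K, so the level spacings are no longer equal:

\[
H/\hbar = \omega_m a^\dagger a + \tfrac{K}{2}\,a^{\dagger}a^{\dagger}aa
\;\;\Rightarrow\;\;
E_1 - E_0 = \hbar\omega_m, \quad E_2 - E_1 = \hbar(\omega_m + K).
\]

A drive at \(\omega_m\) then addresses only the lowest two levels, which is exactly the two-level behaviour a qubit requires.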

Swapping excitations

The researchers then injected microwave signals at the frequency of the mechanical resonator, converting them into acoustic signals using piezoelectric aluminium nitride. “The way we did the measurement is the way we did it beforehand,” says Kladarić. “We would simply put our superconducting qubit on resonance with our mechanical qubit to swap an excitation back into the superconducting qubit and then simply read out the superconducting qubit itself.”

The researchers confirmed that the mechanical resonator undergoes Rabi oscillations between its ground and first excited states, with less than 10% probability of leakage into the second excited state, and is therefore a true mechanical qubit.

The team is now working to improve the qubit to the point where it could be useful in quantum information processing. They are also interested in the possibility of using the qubit in quantum sensing. “These mechanical systems are very massive and so…they can couple to degrees of freedom that single atoms or superconducting qubits cannot, such as gravitational forces,” explains Kladarić.

Teufel is impressed by the Swiss team’s accomplishment: “There are a very short list of strong nonlinearities in nature that are also clean and not lossy…The hard thing for any technology is to make something that’s simultaneously super-nonlinear and super-long lived, and if you do that, you’ve made a very good qubit.” He adds, “This is really the first mechanical resonator that is nonlinear at the single quantum level…It’s not a spectacular qubit yet, but the heart of this work is demonstrating that this is yet another of a very small number of technologies that can behave like a qubit.”

Warwick Bowen of Australia’s University of Queensland told Physics World, “the creation of a mechanical qubit has been a dream in the quantum community for many decades – taking the most classical of systems – a macroscopic pendulum – and converting it to the most quantum of systems, effectively an atom.”

The mechanical qubit is described in Science.

The post Mechanical qubit could be used in quantum sensors and quantum memories appeared first on Physics World.

  •  