NASA’s Jet Propulsion Lab lays off a further 10% of staff

NASA’s Jet Propulsion Laboratory (JPL) is to lay off some 550 employees as part of a restructuring that began in July. The action affects about 11% of JPL’s employees and represents the lab’s third downsizing in the past 20 months. When the layoffs are complete by the end of the year, the lab will have roughly 4500 employees, down from about 6500 at the start of 2024. A further 4000 employees have already left NASA during the past six months via sacking, retirement or voluntary buyouts.

Managed by the California Institute of Technology in Pasadena, JPL oversees scientific missions such as the Psyche asteroid probe, the Europa Clipper and the Perseverance rover on Mars. The lab also operates the Deep Space Network that keeps Earth in communication with unmanned space missions. JPL bosses already laid off about 530 staff – and 140 contractors – in February 2024, followed by another 325 people that November.

JPL director Dave Gallagher insists, however, that the new layoffs are not related to the current US government shutdown that began on 1 October. “[They are] essential to securing JPL’s future by creating a leaner infrastructure, focusing on our core technical capabilities, maintaining fiscal discipline, and positioning us to compete in the evolving space ecosystem,” he says in a message to employees.

Judy Chu, Democratic Congresswoman for the constituency that includes JPL, is less optimistic. “Every layoff devastates the highly skilled and uniquely talented workforce that has made these accomplishments possible,” she says. “Together with last year’s layoffs, this will result in an untold loss of scientific knowledge and expertise that threatens the very future of American leadership in space exploration and scientific discovery.”

John Logsdon, professor emeritus at George Washington University and founder of the university’s Space Policy Institute, says that the cuts are a direct result of the Trump administration’s approach to science and technology. “The administration gives low priority to robotic science and exploration, and has made draconic cuts to the science budget; that budget supports JPL’s work,” he told Physics World. “With these cuts, there is not enough money to support a JPL workforce sized for more ambitious activities. Ergo, staff cuts.”

  •  

How to solve the ‘future of physics’ problem

I hugely enjoyed physics when I was a youngster. I had the opportunity both at home and school to create my own projects, which saw me make electronic circuits, crazy flying models like delta-wings and autogiros, and even a gas chromatograph with a home-made chart recorder. Eventually, this experience made me good enough to repair TV sets, and work in an R&D lab in the holidays devising new electronic flow controls.

That enjoyment continued beyond school. I ended up doing a physics degree at the University of Oxford before working on the discovery of the gluon at the DESY lab in Hamburg for my PhD. Since then I have used physics in industry – first with British Oxygen/Linde and later with Air Products & Chemicals – to solve all sorts of different problems, build innovative devices and file patents.

While some students have a similarly positive school experience and subsequent career path, not enough do. Quite simply, physics at school is the key to so many important, useful developments, both within and beyond physics. But we have a physics education problem, or to put it another way – a “future of physics” problem.

There are just not enough school students enjoying and learning physics. On top of that there are not enough teachers enjoying physics and not enough students doing practical physics. The education problem is bad for physics and for many other subjects that draw on physics. Alas, it’s not a new problem but one that has been developing for years.

Problem solving

Many good points about the future of physics learning were made by the Institute of Physics in its 2024 report Fundamentals of 11 to 19 Physics. The report called for more physics lessons to have a practical element and encouraged more 16-year-old students in England, Wales and Northern Ireland to take AS-level physics at 17 so that they carry their GCSE learning at least one step further.

Doing so would furnish students who are aiming to study another science or a technical subject with the necessary skills and give them the option to take physics A-level. Another recommendation is to link physics more closely to T-levels – two-year vocational courses in England for 16–19 year olds that are equivalent to A-levels – so that students following that path get a background in key aspects of physics, for example in engineering, construction, design and health.

But do all these suggestions solve the problem? I don’t think they are enough and we need to go further. The key change to fix the problem, I believe, is to have student groups invent, build and test their own projects. Ideally this should happen before GCSE level so that students have the enthusiasm and background knowledge to carry them happily forward into A-level physics. They will benefit from “pull learning” – pulling in knowledge and active learning that they will remember for life. And they will acquire wider life skills too.

Developing skillsets

During my time in industry, I did outreach work with schools every few weeks and gave talks with demonstrations at the Royal Institution and the Franklin Institute. For many years I also ran a Saturday Science club in Guildford, Surrey, for pupils aged 8–15.

Based on this, I wrote four Saturday Science books about the many playful and original demonstrations and projects that came out of it. Then, as a visiting professor at the University of Surrey, I had small teams of final-year students who devised extraordinary engineering projects – superguns for space launches, 3D printers for full-size buildings and volcanic power plants, inter alia. A bonus was that other staff working with the students got more adventurous too.

But that was working with students already committed to a scientific path. So lately I’ve been working with teachers to get students to devise and build their own innovative projects. We’ve had 14–15-year-old state-school students in groups of three or four, brainstorming projects, sketching possible designs, and gathering background information. We help them, and we also bring in A-level students – who gain teaching experience in the process – to assist. Students not only learn physics better but also pick up important life skills like brainstorming, team-working, practical work, analysis and presentations.

We’ve seen lots of ingenuity and some great projects, such as an ultrasonic scanner that senses the wetness of cloth; a system that teaches guitar by lighting up LEDs along the guitar neck; and a device that measures breathing using light passing through a band of Lycra worn below the ribs. We’ve also seen the value of failure, from simple mistakes to genuine technical problems.

Best of all, we’ve also noticed what might be dubbed the “combination bonus” – students having to think about how they combine their knowledge of one area of physics with another. A project involving a sensor, for example, will often involve electronics as well as the physics of the sensor itself, so students’ knowledge of both areas is enhanced.

Some teachers may question how you mark such projects. The answer is don’t mark them! Project work and especially group work is difficult to mark fairly and accurately, and the enthusiasm and increased learning by students working on innovative projects will feed through into standard school exam results.

Not trying to grade such projects will mean more students go on to study physics further, potentially to do a physics-related extended project qualification – equivalent to half an A-level where students research a topic to university level – and do it well. Long term, more students will take physics with them into the world of work, from physics to engineering or medicine, from research to design or teaching.

Such projects are often fun for students and teachers. Teachers are often intrigued and amazed by students’ ideas and ingenuity. So, let’s choose to do student-invented project work at school and let’s finally solve the future of physics problem.

  •  

A recipe for quantum chaos

The control of large, strongly coupled, multi-component quantum systems with complex dynamics is a challenging task.

It is, however, an essential prerequisite for the design of quantum computing platforms and for the benchmarking of quantum simulators.

A key concept here is that of quantum ergodicity. This is because quantum ergodic dynamics can be harnessed to generate highly entangled quantum states.

In classical statistical mechanics, an ergodic system evolving over time will explore all possible microstates uniformly. Mathematically, this means that a sufficiently large collection of random samples from an ergodic process can represent the average statistical properties of the entire process.

Quantum ergodicity is simply the extension of this concept to the quantum realm.

Closely related to this is the idea of chaos. A chaotic system is one that is very sensitive to its initial conditions: small changes are amplified over time, causing large differences in the system’s future behaviour.

The ideas of chaos and ergodicity are intrinsically linked as chaotic dynamics often enable ergodicity.

Until now, it has been very challenging to predict which experimentally preparable initial states will trigger quantum chaos and ergodic dynamics over a reasonable time scale.

In a new paper published in Reports on Progress in Physics, a team of researchers have proposed an ingenious solution to this problem using the Bose–Hubbard Hamiltonian.

They took as an example ultracold atoms in an optical lattice (a typical choice for experiments in this field) to benchmark their method.
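
For context, the standard Bose–Hubbard Hamiltonian for bosons hopping on a lattice – written here in conventional textbook notation, which may differ from the notation used in the paper – is

\[ \hat{H} \;=\; -J \sum_{\langle i,j \rangle} \left( \hat{b}_i^{\dagger} \hat{b}_j + \hat{b}_j^{\dagger} \hat{b}_i \right) \;+\; \frac{U}{2} \sum_i \hat{n}_i \left( \hat{n}_i - 1 \right), \]

where \(J\) is the tunnelling amplitude between neighbouring lattice sites, \(U\) is the on-site interaction energy and \(\hat{n}_i\) is the number operator counting the atoms on site \(i\). The ratio \(U/J\) sets how strongly the atoms interact relative to how easily they hop, which is why this model is such a natural testbed for the onset of chaotic, ergodic dynamics.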

The results show that there are certain tangible threshold values which must be crossed in order to ensure the onset of quantum chaos.

These results will be invaluable for experimentalists working across a wide range of quantum sciences.

  •  

Neural simulation-based inference techniques at the LHC

Precision measurements of theoretical parameters are a core element of the scientific program of experiments at the Large Hadron Collider (LHC) as well as other particle colliders. 

These are often performed using statistical techniques such as the method of maximum likelihood. However, given the size of datasets generated, reduction techniques, such as grouping data into bins, are often necessary. 

These can lead to a loss of sensitivity, particularly in non-linear cases like off-shell Higgs boson production and effective field theory measurements.  The non-linearity in these cases comes from quantum interference and traditional methods are unable to optimally distinguish the signal from background.

In this paper, the ATLAS collaboration pioneered the use of a neural network based technique called neural simulation-based inference (NSBI) to combat these issues. 

A neural network is a machine learning model originally inspired by how the human brain works. It’s made up of layers of interconnected units called neurons, which process information and learn patterns from data. Each neuron receives input, performs a simple calculation, and passes the result to other neurons. 

NSBI uses these neural networks to analyse each particle collision event individually, preserving more information and improving accuracy.
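
A common way to see how a neural network can extract information from individual events is the “likelihood-ratio trick” that underlies much of simulation-based inference: a classifier trained to separate events simulated under a test hypothesis from events simulated under a reference hypothesis can have its output converted, event by event, into an estimate of the likelihood ratio. The sketch below is a toy illustration of that general idea under simplified assumptions – a one-dimensional Gaussian stands in for a real event generator, and this is not the ATLAS implementation.

```python
# Toy illustration of the likelihood-ratio trick behind simulation-based
# inference (not the ATLAS code). A classifier trained to separate events
# generated at a test parameter value from a reference sample gives an
# output s(x) that can be converted into a per-event likelihood-ratio
# estimate r(x) = p(x|theta) / p(x|theta_ref) ~ s / (1 - s).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta, n=20000):
    """Stand-in 'simulator': a 1D Gaussian whose mean plays the role of theta."""
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

theta_ref, theta_test = 0.0, 0.5
x_ref, x_test = simulate(theta_ref), simulate(theta_test)

# Label reference events 0 and test-hypothesis events 1, then train.
X = np.vstack([x_ref, x_test])
y = np.concatenate([np.zeros(len(x_ref)), np.ones(len(x_test))])
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)

# Per-event likelihood-ratio estimates for a handful of observed events.
x_obs = simulate(theta_test, n=5)
s = clf.predict_proba(x_obs)[:, 1]
r_hat = s / (1.0 - s)
print("estimated per-event likelihood ratios:", np.round(r_hat, 2))
```

Summing the logarithms of such per-event ratios over a dataset yields a test statistic for the parameter of interest, which is the sense in which an NSBI analysis uses every collision event individually rather than lumping events into bins.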

The framework developed here can handle many sources of uncertainty and includes tools to measure how confident scientists can be in their results.

The researchers benchmarked their method by using it to measure the Higgs boson signal strength and compared its performance with that of previous methods, with impressive results.

The greatly improved sensitivity gained from using this method will be invaluable in the search for physics beyond the Standard Model in future experiments at ATLAS and beyond.

Read the full article: “An implementation of neural simulation-based inference for parameter estimation in ATLAS”, The ATLAS Collaboration 2025 Rep. Prog. Phys. 88 067801.

  •  

‘Science needs all perspectives – male, female and everything in-between’: Brazilian astronomer Thaisa Storchi Bergmann

As a teenager in her native Rio Grande do Sul, a state in Southern Brazil, Thaisa Storchi Bergmann enjoyed experimenting in an improvised laboratory her parents built in their attic. They didn’t come from a science background – her father was an accountant, her mother a primary school teacher – but they encouraged her to do what she enjoyed. With a friend from school, Storchi Bergmann spent hours looking at insects with a microscope and running experiments from a chemistry toy kit. “We christened the lab Thasi-Cruz after a combination of our names,” she chuckles.

At the time, Storchi Bergmann could not have imagined that one day this path would lead to cosmic discoveries and international recognition at the frontiers of astrophysics. “I always had the curiosity inside me,” she recalls. “It was something I carried since adolescence.”

That curiosity was almost lost to another discipline. By the time Storchi Bergmann was about to enter university, she had been swayed by a cousin living with her family who was passionate about architecture, and in 1974 she began studying architecture at the Federal University of Rio Grande do Sul (UFRGS). “But I didn’t really like technical drawing. My favourite part of the course was the physics classes,” she says. Within a semester, she switched to physics.

There she met Edemundo da Rocha Vieira, the first astrophysicist UFRGS ever hired, who later went on to build up the university’s astronomy department. He nurtured Storchi Bergmann’s growing fascination with the universe and introduced her to research.

In 1977, newly married after graduation, Storchi Bergmann followed her husband to Rio de Janeiro, where she did a master’s degree and worked with William Kunkel, an American astronomer who was in Rio to help establish Brazil’s National Astrophysics Laboratory. She began working on data from a photometric system to measure star radiation. “But Kunkel said galaxies were a lot more interesting to study, and that stuck in my head,” she says.

Three years after moving to Rio, she returned to Porto Alegre, in Rio Grande do Sul, to start her doctoral research and teach at UFRGS. Vital to her career was her decision to join the group of Miriani Pastoriza, one of the pioneers of extragalactic astrophysics in Latin America. “She came from Argentina, where [in the late 1970s and early 1980s] scientists were being strongly persecuted [by the country’s military dictatorship] at the time,” she recalls. Pastoriza studied galaxies with “peculiar nuclei” – objects later known to harbour supermassive black holes. Under Pastoriza’s guidance, she moved from stars to galaxies, laying the foundation for her career.

Between 1986 and 1987, Storchi Bergmann often travelled to Chile to make observations and gather data for her PhD, using some of the largest telescopes available at the time. Then came a transformative period – a postdoc fellowship in Maryland, US, just as the Hubble Space Telescope was launched in 1990. “Each Thursday, I would drive to Baltimore for informal bag-lunch talks at the Space Telescope Science Institute, absorbing new results on active galactic nuclei (AGN) and supermassive black holes,” Storchi Bergmann recalls.

Discoveries and insights

In 1991, during an observing campaign, she and a collaborator saw something extraordinary in the galaxy NGC 1097: gas moving at immense speeds, captured by the galaxy’s central black hole. The work, published in 1993, became one of the earliest documented cases of what are now called “tidal disruption events”, in which a star or cloud gets too close to a black hole and is torn apart.

Her research also contributed to one of the defining insights of the Hubble era: that every massive galaxy hosts a central black hole. “At first, we didn’t know if they were rare,” she explains. “But gradually it became clear: these objects are fundamental to galaxy evolution.”

Another collaboration brought her into contact with Daniela Calzetti, whose work on the effects of interstellar dust led to the formulation of the widely used “Calzetti law”. These and other contributions placed Storchi Bergmann among the most cited scientists worldwide, recognition of which came in 2015 when she received the L’Oréal-UNESCO Award for Women in Science.

Her scientific achievements, however, unfolded against personal and structural obstacles. As a young mother, she often brought her baby to observatories and conferences so she could breastfeed. Many women in science are no strangers to this kind of juggling.

“It was never easy,” Storchi Bergmann reflects. “I was always running, trying to do 20 things at once.” The lack of childcare infrastructure in universities compounded the challenge. She recalls colleagues who succeeded by giving up on family life altogether. “That is not sustainable,” she insists. “Science needs all perspectives – male, female and everything in-between. Otherwise, we lose richness in our vision of the universe.”

When she attended conferences early in her career, she was often the only woman in the room. Today, she says, the situation has greatly improved, even if true equality remains distant.

Now a tenured professor at UFRGS and a member of the Brazilian Academy of Sciences, Storchi Bergmann continues to push at the cosmic frontier. Her current focus is the Legacy Survey of Space and Time (LSST), about to begin at the Vera Rubin Observatory in Chile.

Her group is part of the AGN science collaboration, developing methods to analyse the characteristic flickering of accreting black holes. With students, she is experimenting with automated pipelines and artificial intelligence to make sense of and manage the massive amounts of data.

Challenges ahead

Yet this frontier science is not guaranteed. Storchi Bergmann is frustrated by the recent collapse in research scholarships. Historically, her postgraduate programme enjoyed a strong balance of grants from both of Brazil’s federal research funding agencies, CNPq (from the Ministry of Science) and CAPES (from the Ministry of Education). But cuts at CNPq, she says, have left students without support, and CAPES has not filled the gap.

“The result is heartbreaking,” she says. “I have brilliant students ready to start, including one from Piauí (a state in north-eastern Brazil), but without a grant, they simply cannot continue. Others are forced to work elsewhere to support themselves, leaving no time for research.”

She is especially critical of the policy of redistributing scarce funds away from top-rated programmes to newer ones without expanding the overall budget. “You cannot build excellence by dismantling what already exists,” she argues.

For her, the consequences go beyond personal frustration. They risk undermining decades of investment that placed Brazil on the international astrophysics map. Despite these challenges, Storchi Bergmann remains driven and continues to mentor master’s and PhD students, determined to prepare them for the LSST era.

At the heart of her research is a question as grand as any in cosmology: which came first – the galaxy or its central black hole? The answer, she believes, will reshape our understanding of how the universe came to be. And it will carry with it the fingerprint of her work: the persistence of a Brazilian scientist who followed her curiosity from a home-made lab to the centres of galaxies, overcoming obstacles along the way.

  •  

Chip-integrated nanoantenna efficiently harvests light from diamond defects

When diamond defects emit light, how much of that light can be captured and used for quantum technology applications? According to researchers at the Hebrew University of Jerusalem, Israel and Humboldt Universität of Berlin, Germany, the answer is “nearly all of it”. Their technique, which relies on positioning a nanoscale diamond at an optimal location within a chip-integrated nanoantenna, could lead to improvements in quantum communication and quantum sensing.

Guided light: Illustration showing photon emission from a nanodiamond and light directed by a bullseye antenna. (Courtesy: Boaz Lubotzky)

Nitrogen-vacancy (NV) centres are point defects that occur when one carbon atom in diamond’s lattice structure is replaced by a nitrogen atom next to an empty lattice site (a vacancy). Together, this nitrogen atom and its adjacent vacancy behave like a negatively charged entity with an intrinsic quantum spin.

When excited with laser light, an electron in an NV centre can be promoted into an excited state. As the electron decays back to the ground state, it emits light. The exact absorption-and-emission process is complicated by the fact that both the ground state and the excited state of the NV centre have three sublevels (spin triplet states). However, by exciting an individual NV centre repeatedly and collecting the photons it emits, it is possible to determine the spin state of the centre.

The problem, explains Boaz Lubotzky, who co-led this research effort together with his colleague Ronen Rapaport, is that NV centres radiate over a wide range of angles. Hence, without an efficient collection interface, much of the light they emit is lost.

Standard optics capture around 80% of the light

Lubotzky and colleagues say they have now solved this problem thanks to a hybrid nanostructure made from a PMMA dielectric layer above a silver grating. This grating is arranged in a precise bullseye pattern that accurately guides light in a well-defined direction thanks to constructive interference. Using a nanometre-accurate positioning technique, the researchers placed the nanodiamond containing the NV centres exactly at the optimal location for light collection: right at the centre of the bullseye.

For standard optics with a numerical aperture (NA) of about 0.5, the team found that the system captures around 80% of the light emitted from the NV centres. When NA > 0.7, this value exceeds 90%, while for NA > 0.8, Lubotzky says it approaches unity.
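
To appreciate why directing the emission matters, it helps to compare these figures with a bare emitter. For a source radiating isotropically in air – a rough back-of-the-envelope assumption, not a number from the paper – a lens of numerical aperture NA only collects the fraction of light falling within its acceptance cone:

```python
# Back-of-the-envelope comparison (not from the paper): fraction of light from
# an isotropic emitter in air collected by a lens of numerical aperture NA.
# A cone of half-angle theta subtends a solid angle 2*pi*(1 - cos(theta)),
# so the collected fraction of the full 4*pi steradians is (1 - cos(theta)) / 2.
import math

def isotropic_collection_fraction(na, n_medium=1.0):
    theta = math.asin(na / n_medium)   # acceptance half-angle of the lens
    return 0.5 * (1.0 - math.cos(theta))

for na in (0.5, 0.7, 0.8):
    frac = isotropic_collection_fraction(na)
    print(f"NA = {na}: ~{frac:.0%} collected without an antenna")
```

Under this simple assumption, an NA 0.5 lens collects only about 7% of the emission, rising to roughly 20% at NA 0.8, which puts the 80–100% figures achieved with the bullseye antenna into perspective.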

“The device provides a chip-based, room-temperature interface that makes NV emission far more directional, so a larger fraction of photons can be captured by standard lenses or coupled into fibres and photonic chips,” he tells Physics World. “Collecting more photons translates into faster measurements, higher sensitivity and lower power, thereby turning NV centres into compact precision sensors and also into brighter, easier-to-use single-photon sources for secure quantum communication.”

The researchers say their next priority is to transition their prototype into a plug-and-play, room-temperature module – one that is fully packaged and directly coupled to fibres or photonic chips – with wafer-level deterministic placement for arrays. “In parallel, we will be leveraging the enhanced collection for NV-based magnetometry, aiming for faster, lower-power measurements with improved readout fidelity,” says Lubotzky. “This is important because it will allow us to avoid repeated averaging and enable fast, reliable operation in quantum sensors and processors.”

They detail their present work in APL Quantum.

  •  

Illuminating quantum worlds: a Diwali conversation with Rupamanjari Ghosh

Homes and cities around the world are this week celebrating Diwali or Deepavali – the Indian “festival of lights”. For Indian physicist Rupamanjari Ghosh, who is the former vice chancellor of Shiv Nadar University Delhi-NCR, this festival sheds light on the quantum world. Known for her work on nonlinear optics and entangled photons, Ghosh finds a deep resonance between the symbolism of Diwali and the ongoing revolution in quantum science.

“Diwali comes from Deepavali, meaning a ‘row of lights’. It marks the triumph of light over dark; good over evil; and knowledge over ignorance,” Ghosh explains. “In science too, every discovery is a Diwali –  a victory of knowledge over ignorance.”

With 2025 being marked by the International Year of Quantum Science and Technology, a victory of knowledge over ignorance couldn’t ring truer. “It has taken us a hundred years since the birth of quantum mechanics to arrive at this point, where quantum technologies are poised to transform our lives,” says Ghosh.

Ghosh has another reason to celebrate, having been named as this year’s Institute of Physics (IOP) Homi Bhabha lecturer. The IOP and the Indian Physical Association (IPA) jointly host the Homi Bhabha and Cockcroft Walton bilateral exchange of lecturers. Running since 1998, these international programmes aim to promote dialogue on global challenges through physics and provide physicists with invaluable opportunities for global exposure and professional growth. Ghosh’s online lecture, entitled “Illuminating quantum frontiers: from photons to emerging technologies”, will be aired at 3 p.m. GMT on Wednesday 22 October.

From quantum twins to quantum networks

Ghosh’s career in physics took off in the mid-1980s, when she and American physicist Leonard Mandel – who is often referred to as one of the founding fathers of quantum optics – demonstrated a new quantum source of twin photons through spontaneous parametric down-conversion: a process where a high-energy photon splits into two lower-energy, correlated photons (Phys. Rev. A 34 3962).
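
In spontaneous parametric down-conversion, energy and momentum conservation (the phase-matching conditions) tie the two daughter photons together:

\[ \omega_{\mathrm{p}} = \omega_{\mathrm{s}} + \omega_{\mathrm{i}}, \qquad \mathbf{k}_{\mathrm{p}} = \mathbf{k}_{\mathrm{s}} + \mathbf{k}_{\mathrm{i}}, \]

where the subscripts label the pump photon and the two daughter “signal” and “idler” photons. Detecting one photon therefore tells you the energy and direction of its twin, which is the kind of nonclassical correlation the experiment revealed.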

“Before that,” she recalls, “no-one was looking for quantum effects in this nonlinear optical process. The correlations between the photons defied classical explanation. It was an elegant early verification of quantum nonlocality.”

Those entangled photon pairs are now the building blocks of quantum communication and computation. “We’re living through another Diwali of light,” she says, “where theoretical understanding and experimental innovation illuminate each other.”

Entangled light

During Diwali, lamps unite households in a shimmering network of connection – and so too does the entanglement of photons. “Quantum entanglement reminds us that connection transcends locality,” Ghosh says. “In the same way, the lights of Diwali connect us across borders and cultures through shared histories.”

Her own research extends that metaphor further. Ghosh’s team has worked on mapping quantum states of light onto collective atomic excitations. These “slow-light” techniques –  using electromagnetically induced transparency or Raman interactions –  allow photons to be stored and retrieved, forming the backbone of long-distance quantum communication (Opt. Lett. 36 1551).

“Symbolically,” she adds, “it’s like passing the flame from one diya (lamp) to another. We’re not just spreading light –  we’re preserving, encoding and transmitting it. Success comes through connection and collaboration.”

Beyond the shadows: Ghosh calls for the bright light of inclusivity in science. (Courtesy: Rupamanjari Ghosh)

The dark side of light

Ghosh is quick to note that in quantum physics, “darkness” is far from empty. “In quantum optics, even the vacuum is rich –  with fluctuations that are essential to our understanding of the universe.”

Her group studies the transition from quantum to classical systems, using techniques such as error correction, shielding and coherence-preserving materials. “Decoherence –  the loss of quantum behaviour through environmental interaction –  is a constant threat. To build reliable quantum technologies, we must engineer around this fragility,” Ghosh explains.

There are also human-engineered shadows: some weaknesses in quantum communication devices aren’t due to the science itself – they come from mistakes or flaws in how humans built them. Hackers can exploit these “side channels” to get around security. “Security,” she warns, “is only as strong as the weakest engineering link.”

Beyond the lab, Ghosh finds poetic meaning in these challenges. “Decoherence isn’t just a technical problem –  it helps us understand the arrows of time, why the universe evolves irreversibly. The dark side has its own lessons.”

Lighting every corner

For Ghosh, Diwali’s illumination is also a call for inclusivity in science. “No corner should remain dark,” she says. “Science thrives on diversity. Diverse teams ask broader questions and imagine richer answers. It’s not just morally right – it’s good for science.”

She argues that equity is not sameness but recognition of uniqueness. “Innovation doesn’t come from conformity. Gender diversity, for example, brings varied cognitive and collaborative styles – essential in a field like quantum science, where intuition is constantly stretched.”

The shadows she worries most about are not in the lab, but in academia itself. “Unconscious biases in mentorship or gatekeeping in opportunity can accumulate to limit visibility. Institutions must name and dismantle these hidden shadows through structural and cultural change.”

Her vision of inclusion extends beyond gender. “We shouldn’t think of work and life as opposing realms to ‘balance’,” she says. “It’s about creating harmony among all dimensions of life – work, family, learning, rejuvenation. That’s where true brilliance comes from.”

As the rows of diyas are lit this Diwali, Ghosh’s reflections remind us that light –  whether classical or quantum –  is both a physical and moral force: it connects, illuminates and endures. “Each advance in quantum science,” she concludes, “is another step in the age-old journey from darkness to light.”

This article forms part of Physics World’s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

  •  

Influential theoretical physicist and Nobel laureate Chen-Ning Yang dies aged 103

The Chinese particle physicist Chen-Ning Yang died on 18 October at the age of 103. Yang shared half of the 1957 Nobel Prize for Physics with Tsung-Dao Lee for their theoretical work that overturned the notion that parity is conserved in the weak force – one of the four fundamental forces of nature.

Born on 22 September 1922 in Hefei, China, Yang completed a BSc at the National Southwest Associated University in Kunming in 1942. After finishing an MSc in statistical physics at Tsinghua University two years later, in 1945 he moved to the University of Chicago in the US as part of a government-sponsored programme. He received his PhD in physics in 1948, working under the guidance of Edward Teller.

In 1949 Yang moved to the Institute for Advanced Study in Princeton, where he made pioneering contributions to quantum field theory, working together with Robert Mills. In 1953 they proposed the Yang–Mills theory, which became a cornerstone of the Standard Model of particle physics.

The ‘Wu experiment’

It was also at Princeton where Yang began a fruitful collaboration with Lee, who died last year aged 97. Their work on parity – a property of elementary particles that expresses their behaviour upon reflection in a mirror – led to the duo winning the Nobel prize.

In the early 1950s, physicists had been puzzled by the decays of two subatomic particles, known as tau and theta, which are identical except that the tau decays into three pions with a net parity of -1, while a theta particle decays into two pions with a net parity of +1.
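
These parities follow from the pion’s intrinsic parity of −1: in the simplest accounting, which ignores orbital angular momentum contributions, the parity of the final state is just the product of the intrinsic parities,

\[ P(\pi\pi) = (-1)^2 = +1, \qquad P(\pi\pi\pi) = (-1)^3 = -1 . \]

If parity were conserved in the decay, a single particle could not decay both ways – which is why the seemingly identical tau and theta were such a puzzle.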

There were two possible explanations: either the tau and theta were different particles, or parity is not conserved in the weak interaction. Yang and Lee proposed various ways to test the latter idea (Phys. Rev. 104 254).

This “parity violation” was later proved experimentally by, among others, Chien-Shiung Wu at Columbia University. She carried out an experiment based on the radioactive decay of unstable cobalt-60 nuclei into nickel-60 – what became known as the “Wu experiment”. For their work, Yang, who was 35 at the time, shared the 1957 Nobel Prize for Physics with Lee.

Influential physicist

In 1965 Yang moved to Stony Brook University, becoming the first director of the newly founded Institute for Theoretical Physics, which is now known as the C N Yang Institute for Theoretical Physics. During this time he also contributed to advancing science and education in China, setting up the Committee on Educational Exchange with China – a programme that has sponsored some 100 Chinese scholars to study in the US.

In 1997, Yang returned to Beijing where he became an honorary director of the Centre for Advanced Study at Tsinghua University. He then retired from Stony Brook in 1999, becoming a professor at Tsinghua University. During his time in the US, Yang obtained US citizenship, but renounced it in 2015.

More recently, Yang was involved in debates over whether China should build the Circular Electron Positron Collider (CEPC) – a huge 100 km-circumference underground collider that would study the Higgs boson in unprecedented detail and be a successor to CERN’s Large Hadron Collider. Yang took a sceptical view, calling it “inappropriate” for a developing country that is still struggling with “more acute issues like economic development and environment protection”.

Yang also expressed concern that the science to be performed on the CEPC would be just “guess” work, without guaranteed results. “I am not against the future of high-energy physics, but the timing is really bad for China to build such a super collider,” he noted in 2016. “Even if they see something with the machine, it’s not going to benefit the life of Chinese people any sooner.”

Lasting legacy

As well as the Nobel prize, Yang won many other awards such as the US National Medal of Science in 1986, the Einstein Medal in 1995, which is presented by the Albert Einstein Society in Bern, and the American Physical Society’s Lars Onsager Prize in 1990.

“The world has lost one of the most influential physicists of the modern era,” noted Stony Brook president Andrea Goldsmith in a statement. “His legacy will continue through his transformational impact on the field of physics and through the many colleagues and students influenced by his teaching, scholarship and mentorship.”

  •  

Precision sensing experiment manipulates Heisenberg’s uncertainty principle

Physicists in Australia and the UK have found a new way to manipulate Heisenberg’s uncertainty principle in experiments on the vibrational mode of a trapped ion. Although still at the laboratory stage, the work, which uses tools developed for error correction in quantum computing, could lead to improvements in ultra-precise sensor technologies like those used in navigation, medicine and even astronomy.

“Heisenberg’s principle says that if two operators – for example, position x and momentum p – do not commute, then one cannot simultaneously measure both of them to absolute precision,” explains team leader Ting Rei Tan of the University of Sydney’s Nano Institute. “Our result shows that one can instead construct new operators – namely ‘modular position’ x̂ and ‘modular momentum’ p̂. These operators can be made to commute, meaning that we can circumvent the usual limitation imposed by the uncertainty principle.”

The modular measurements, he says, give the true displacements in position and momentum of the particle provided those displacements are smaller than a specific length l, known as the modular length. In the new work, the team measured x̂ = x mod lx and p̂ = p mod lp, where lx and lp are the modular lengths in position and momentum.

“Since the two modular operators x̂ and p̂ commute, this means that they are now bounded by an uncertainty principle in which the product of their uncertainties is greater than or equal to 0 (instead of the usual ℏ/2),” adds team member Christophe Valahu. “This is how we can use them to sense position and momentum below the standard quantum limit. The catch, however, is that this scheme only works if the signal being measured is within the sensing range defined by the modular lengths.”
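
In symbols, and as a schematic restatement of the quotes above rather than the paper’s own formulation, the trade-off reads

\[ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} \quad \text{(ordinary observables)}, \qquad \Delta \hat{x} \, \Delta \hat{p} \;\geq\; 0 \quad \text{(modular observables)}, \]

with x̂ = x mod lx and p̂ = p mod lp. The price is that any displacement larger than the modular lengths “wraps around” and is misread, which is why the scheme only works when the signal is known to lie within the sensing range.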

The researchers stress that Heisenberg’s uncertainty principle is in no way “broken” by this approach, but it does mean that when observables associated with these new operators are measured, the precision of these measurements is not limited by this principle. “What we did was to simply push the uncertainty to a sensing range that is relatively unimportant for our measurement to obtain a better precision at finer details,” Valahu tells Physics World.

This concept, Tan explains, is related to an older method known as quantum squeezing that also works by shifting uncertainties around. The difference is that in squeezing, one reshapes the probability, reducing the spread in position at the cost of enlarging the spread of momentum, or vice versa. “In our scheme, we instead redistribute the probability, reducing the uncertainties of position and momentum within a defined sensing range, at the cost of an increased uncertainty if the signal is not guaranteed to lie within this range,” Tan explains. “We effectively push the unavoidable quantum uncertainty to places we don’t care about (that is, big, coarse jumps in position and momentum) so the fine details we do care about can be measured more precisely.

“Thus, as long as we know the signal is small (which is almost always the case for precision measurements), modular measurements give us the correct answer.”

Repurposed ideas and techniques

The particle being measured in Tan and colleagues’ experiment was a ¹⁷¹Yb⁺ ion trapped in a so-called grid state, which is a subclass of error-correctable logical states for quantum bits, or qubits. The researchers then used a quantum phase estimation protocol to measure the signal they imprinted onto this state, which acts as a sensor.

This measurement scheme is similar to one that is commonly used to measure small errors in the logical qubit state of a quantum computer. “The difference is that in this case, the ‘error’ corresponds to a signal that we want to estimate, which displaces the ion in position and momentum,” says Tan. “This idea was first proposed in a theoretical study.”

Towards ultra-precise quantum sensors

The Sydney researchers hope their result will motivate the development of next-generation precision quantum sensors. Being able to detect extremely small changes is important for many applications of quantum sensing, including navigating environments where GPS isn’t effective (such as on submarines, underground or in space). It could also be useful for biological and medical imaging, materials analysis and gravitational systems.

Their immediate goal, however, is to further improve the sensitivity of their sensor, which is currently about 14 × 10⁻²⁴ N/√Hz, and calculate its limit. “It would be interesting if we could push that to the 10⁻²⁷ N level (which, admittedly, will not be easy) since this level of sensitivity could be relevant in areas like the search for dark matter,” Tan says.

Another direction for future research, he adds, is to extend the scheme to other pairs of observables. “Indeed, we have already taken some steps towards this: in the latter part of our present study, which is published in Science Advances, we constructed a modular number operator and a modular phase operator to demonstrate that the strategy can be extended beyond position and momentum.”

  •  

Eye implant restores vision to patients with incurable sight loss

A tiny wireless implant inserted under the retina can restore central vision to patients with sight loss due to age-related macular degeneration (AMD). In an international clinical trial, the PRIMA (photovoltaic retina implant microarray) system restored the ability to read in 27 of 32 participants followed up after a year.

AMD is the most common cause of incurable blindness in older adults. In its advanced stage, known as geographic atrophy, AMD can cause progressive, irreversible death of light-sensitive photoreceptors in the centre of the retina. This loss of photoreceptors means that light is not transduced into electrical signals, causing profound vision loss.

The PRIMA system works by replacing these lost photoreceptors. It has two parts: the implant itself – a 2 × 2 mm array of 378 photovoltaic pixels – and PRIMA glasses containing a video camera that captures images and, after processing, projects them onto the implant using near-infrared light. The pixels in the implant convert this light into electrical pulses, restoring the flow of visual information to the brain. Patients can use the glasses to focus and zoom the image that they see.
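
As a rough back-of-the-envelope estimate – not a specification quoted in the trial – dividing the 2 × 2 mm array among its 378 pixels gives a pixel pitch of the order of 100 μm, which sets the granularity of the restored central vision:

```python
# Back-of-the-envelope estimate (not a quoted specification): approximate
# pixel pitch of a 2 x 2 mm array containing 378 photovoltaic pixels,
# assuming the pixels tile the array roughly uniformly.
import math

array_side_um = 2000.0   # 2 mm expressed in micrometres
n_pixels = 378

pitch_um = math.sqrt(array_side_um**2 / n_pixels)
print(f"approximate pixel pitch: {pitch_um:.0f} micrometres")  # ~103 um
```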

The clinical study, led by Frank Holz of the University of Bonn in Germany, enrolled 38 participants at 17 hospital sites in five European countries. All participants had geographic atrophy due to AMD in both eyes, as well as loss of central sight in the study eye over a region larger than the implant (more than 2.4 mm in diameter), leaving only limited peripheral vision.

Around one month after surgical insertion of the 30 μm-thick PRIMA array into one eye, the patients began using the glasses. All underwent training to learn to interpret the visual signals from the implant, with their vision improving over months of training.

The PRIMA implant: Representative fundus and OCT images obtained before and after implantation of the array in a patient’s eye. (Courtesy: Science Corporation)

After one year, 27 of the 32 patients who completed the trial could read letters and words (with some able to read pages in a book) and 26 demonstrated clinically meaningful improvement in visual acuity (the ability to read at least two extra lines on a standard eye chart). On average, participants could read an extra five lines, with one person able to read an additional 12 lines.

Nineteen of the participants experienced side-effects from the surgical procedure, with 95% of adverse events resolving within two months. Importantly, their peripheral vision was not impacted by PRIMA implantation. The researchers note that the infrared light used by the implant is not visible to remaining photoreceptors outside the affected region, allowing patients to combine their natural peripheral vision with the prosthetic central vision.

“Before receiving the implant, it was like having two black discs in my eyes, with the outside distorted,” Sheila Irvine, a trial patient treated at Moorfields Eye Hospital in the UK, says in a press statement. “I was an avid bookworm, and I wanted that back. There was no pain during the operation, but you’re still aware of what’s happening. It’s a new way of looking through your eyes, and it was dead exciting when I began seeing a letter. It’s not simple, learning to read again, but the more hours I put in, the more I pick up. It’s made a big difference.”

The PRIMA system – originally designed by Daniel Palanker at Stanford University – is being developed and manufactured by Science Corporation. Based on these latest results, reported in the New England Journal of Medicine, the company has applied for clinical use authorization in Europe and the United States.

  •  

Single-phonon coupler brings different quantum technologies together

Researchers in the Netherlands have demonstrated the first chip-based device capable of splitting phonons, which are quanta of mechanical vibrations. Known as a single-phonon directional coupler, or more simply as a phonon splitter, the new device could make it easier for different types of quantum technologies to “talk” to each other. For example, it could be used to transfer quantum information from spins, which offer advantages for data storage, to superconducting circuits, which may be better for data processing.

“One of the main advantages of phonons over photons is they interact with a lot of different things,” explains team leader Simon Gröblacher of the Kavli Institute of Nanoscience at Delft University of Technology. “So it’s very easy to make them interface with systems.”

There are, however, a few elements still missing from the phononic circuitry developer’s toolkit. One such element is a reversible beam splitter that can either combine two phonon channels (which might be carrying quantum information transferred from different media) or split one channel into two, depending on its orientation.

While several research groups have already investigated designs for such phonon splitters, these works largely focused on surface acoustic waves. This approach has some advantages, as waves of this type have already been widely explored and exploited commercially. Mobile phones, for example, use surface acoustic waves as filters for microwave signals. The problem is that these unconfined mechanical excitations are prone to substantial losses as phonons leak into the rest of the chip.

Mimicking photonic beam splitters

Gröblacher and his collaborators chose instead to mimic the design of beam splitters used in photonic chips. They used a strip of thin silicon to fashion a waveguide for phonons that confined them in all dimensions but one, giving additional control and reducing loss. They then brought two waveguides into contact with each other so that one waveguide could “feel” the mechanical excitations in the other. This allowed phonon modes to be coupled between the waveguides – something the team demonstrated down to the single-phonon level. The researchers also showed they could tune the coupling between the two waveguides by altering the contact length.
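
In the textbook coupled-mode picture of a directional coupler – a simplified description, not the team’s full finite-element model – the fraction of power (or the probability for a single phonon) transferred between two identical, phase-matched waveguides oscillates with the length L of the coupling region:

\[ P_{\text{transfer}}(L) = \sin^2(\kappa L), \]

where κ is a coupling rate set by how strongly the evanescent mechanical fields of the two waveguides overlap. Choosing κL = π/4 gives a 50:50 splitter, while κL = π/2 transfers the excitation completely, which is why tuning the contact length tunes the coupling.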

Although this is the first demonstration of single-mode phonon coupling in this kind of waveguide, the finite element method simulations Gröblacher and his colleagues ran beforehand made him pretty confident it would work from the outset. “I’m not surprised that it worked. I’m always surprised how hard it is to get it to work,” he tells Physics World. “Making it to look and do exactly what you design it to do – that’s the really hard part.”

Prospects for integrated quantum phononics

According to A T Charlie Johnson, a physicist at the University of Pennsylvania, US, whose research focuses on this area, that hard work paid off. “These very exciting new results further advance the prospects for phonon-based qubits in quantum technology,” says Johnson, who was not directly involved in the demonstration. “Integrated quantum phononics is one significant step closer.”

As well as switching between different quantum media, the new single-phonon coupler could also be useful for frequency shifting. For instance, microwave frequencies are close to the frequencies of ambient heat, which makes signals at these frequencies much more prone to thermal noise. Gröblacher already has a company working on transducers to transform quantum information from microwave to optical frequencies with this challenge in mind, and he says a single-phonon coupler could be handy.

One remaining challenge to overcome is dispersion, which occurs when phonon modes couple to other unwanted modes. This is usually due to imperfections in the nanofabricated device, which are hard to avoid. However, Gröblacher also has other aspirations. “I think the one component that’s missing for us to have the similar level of control over phonons as people have with photons is a phonon phase shifter,” he tells Physics World. This, he says, would allow on-chip interferometry to route phonons to different parts of a chip, and perform advanced quantum experiments with phonons.

The study is reported in Optica.

  •  

This jumping roundworm uses static electricity to attach to flying insects

Researchers in the US have discovered that a tiny jumping worm uses static electricity to increase the chances of attaching to its unsuspecting prey.

The parasitic roundworm Steinernema carpocapsae, which lives in soil, is already known to leap some 25 times its body length into the air. It does this by curling into a loop and springing into the air, rotating hundreds of times a second.

If the nematode lands successfully, it releases bacteria that kill the insect within a couple of days; the worm then feasts on the insect and lays its eggs. If it fails to attach to a host, however, the worm itself faces death.

While static electricity is known to play a role in how some non-parasitic nematodes detach from large insects, little is known about whether it helps their parasitic counterparts attach to an insect.

To investigate, researchers at Emory University and the University of California, Berkeley, conducted a series of experiments in which they used high-speed microscopy techniques to film the worms as they leapt onto a fruit fly.

They did this by tethering a fly with a copper wire that was connected to a high-voltage power supply.

They found that a potential of a few hundred volts – similar to that generated in the wild by an insect’s wings rubbing against ions in the air – induces a negative charge on the worm, creating an attractive force between it and the positively charged fly.

Carrying out simulations of the worm jumps, they found that without any electrostatics, only 1 in 19 worm trajectories successfully reached their target. The greater the voltage, however, the greater the chance of landing. For 880 V, for example, the probability was 80%.

The team also carried out experiments using a wind tunnel, finding that the presence of wind helped the nematodes drift and this also increased their chances of attaching to the insect.

“Using physics, we learned something new and interesting about an adaptive strategy in an organism,” notes Emory physicist Ranjiangshang Ran. “We’re helping to pioneer the emerging field of electrostatic ecology.”

  •  

Wearable UVA sensor warns about overexposure to sunlight

Transparent healthcare: Illustration of the fully transparent sensor that reacts to sunlight and allows real-time monitoring of UVA exposure on the skin. The device could be integrated into wearable items, such as glasses or patches. (Courtesy: Jnnovation Studio)

A flexible and wearable sensor that allows the user to monitor their exposure to ultraviolet (UV) radiation has been unveiled by researchers in South Korea. Based on a heterostructure of four different oxide semiconductors, the sensor’s flexible, transparent design could vastly improve the real-time monitoring of skin health.

UV light in the A band has wavelengths of 315–400 nm and comprises about 95% of UV radiation that reaches the surface of the earth. Because of its relatively long wavelength, UVA can penetrate deep into the skin. There it can alter biological molecules, damaging tissue and even causing cancer.

While covering up with clothing and using sunscreen are effective at reducing UVA exposure, researchers are keen on developing wearable sensors that can monitor UVA levels in real time. These can alert users when their UVA exposure reaches a certain level. So far, the most promising advances towards these designs have come from oxide semiconductors.

Many challenges

“For the past two decades, these materials have been widely explored for displays and thin-film transistors because of their high mobility and optical transparency,” explains Seong Jun Kang at Soongsil University, who led the research. “However, their application to transparent ultraviolet photodetectors has been limited by high persistent photocurrent, poor UV–visible discrimination, and instability under sunlight.”

While these problems can be avoided in more traditional UV sensors, such as those based on gallium nitride and zinc oxide, those materials are opaque and rigid – making them completely unsuitable for use in wearable sensors.

In their study, Kang’s team addressed these challenges by introducing a multi-junction heterostructure, made by stacking multiple ultrathin layers of different oxide semiconductors. The four semiconductors they selected each had wide bandgaps, which made them more transparent in the visible spectrum but responsive to UV light.
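
A quick photon-energy estimate – a standard conversion, not a figure from the paper – shows why this works: using E [eV] ≈ 1240/λ [nm], UVA photons carry roughly 3.1–3.9 eV while visible photons carry only about 1.8–3.1 eV, so oxides with bandgaps just above 3 eV can absorb much of the UVA band while transmitting most of the visible spectrum.

```python
# Standard photon-energy conversion, E [eV] ~ 1240 / wavelength [nm], used as
# a rough check (not a figure from the paper) of why wide-bandgap oxides
# absorb UVA but remain transparent to most visible light.
def photon_energy_ev(wavelength_nm):
    return 1240.0 / wavelength_nm

for label, wl in [("UVA edge (315 nm)", 315),
                  ("UVA edge (400 nm)", 400),
                  ("green light (550 nm)", 550)]:
    print(f"{label}: ~{photon_energy_ev(wl):.2f} eV")
```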

The structure included zinc and tin oxide layers as n-type semiconductors (doped with electron-donating atoms) and cobalt and hafnium oxide layers as p-type semiconductors (doped with electron-accepting atoms) – creating positively charged holes. Within the heterostructure, this selection created three types of interface: p–n junctions between hafnium and tin oxide; n–n junctions between tin and zinc oxide; and p–p junctions between cobalt and hafnium oxide.

Efficient transport

When the team illuminated their heterostructure with UVA photons, the electron–hole charge separation was enhanced by the p–n junction, while the n–n and p–p junctions allowed for more efficient transport of electrons and holes respectively, improving the design’s response speed. When the illumination was removed, the photogenerated electron–hole pairs quickly decayed, avoiding the persistent photocurrent that can cause false detections.

To test their design’s performance, the researchers integrated their heterostructure into a wearable detector. “In collaboration with UVision Lab, we developed an integrated Bluetooth circuit and smartphone application, enabling real-time display of UVA intensity and warning alerts when an individual’s exposure reaches the skin-type-specific minimal erythema dose (MED),” Kang describes. “When connected to the Bluetooth circuit and smartphone application, it successfully tracked real-time UVA variations and issued alerts corresponding to MED limits for various skin types.”

As well as maintaining over 80% transparency, the sensor proved highly stable and responsive, even in direct outdoor sunlight and across repeated exposure cycles. Based on this performance, the team is now confident that their design could push the capabilities of oxide semiconductors beyond their typical use in displays and into the fast-growing field of smart personal health monitoring.

“The proposed architecture establishes a design principle for high-performance transparent optoelectronics, and the integrated UVA-alert system paves the way for next-generation wearable and Internet-of-things-based environmental sensors,” Kang predicts.

The research is described in Science Advances.

  •  

Astronauts could soon benefit from dissolvable eye insert

Spending time in space has a big impact on the human body and can cause a range of health issues. Many astronauts develop vision problems because microgravity causes body fluids to redistribute towards the head. This can lead to swelling in the eye and compression of the optic nerve.

While eye conditions can generally be treated with medication, delivering drugs in space is not a straightforward task. Eye drops simply don’t work without gravity, for example. To address this problem, researchers in Hungary are developing a tiny dissolvable eye insert that could deliver medication directly to the eye. The size of a grain of rice, the insert has now been tested by an astronaut on the International Space Station.

This episode of the Physics World Weekly podcast features two of those researchers – Diána Balogh-Weiser of Budapest University of Technology and Economics and Zoltán Nagy of Semmelweis University – who talk about their work with Physics World’s Tami Freeman.

The post Astronauts could soon benefit from dissolvable eye insert appeared first on Physics World.

  •  

Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids

Researchers in Japan and Taiwan have captured three-dimensional images of an entire geothermal system deep in the Earth’s crust for the first time. By mapping the underground distribution of phenomena such as fracture zones and phase transitions associated with seismic activity, they say their work could lead to improvements in earthquake early warning models. It could also help researchers develop next-generation versions of geothermal power – a technology that study leader Takeshi Tsuji of the University of Tokyo says has enormous potential for clean, large-scale energy production.

“With a clear three-dimensional image of where supercritical fluids are located and how they move, we can identify promising drilling targets and design safer and more efficient development plans,” Tsuji says. “This could have direct implications for expanding geothermal power generation, reducing dependence on fossil fuels, and contributing to carbon neutrality and energy security in Japan and globally.”

In their study, Tsuji and colleagues focused on a region known as the brittle-ductile transition zone, which is where rocks go from being seismically active to mostly inactive. This zone is important for understanding volcanic activity and geothermal processes because it lies near an impermeable sealing band that allows fluids such as water to accumulate in a high-pressure, supercritical state. When these fluids undergo phase transitions, earthquakes may follow. At the same time, these fluids could yield more geothermal energy than conventional systems, making it doubly important to pin down where they are.

A high-resolution “digital map”

Many previous electromagnetic and magnetotelluric surveys suffered from low spatial resolution and were limited to regions relatively close to the Earth’s surface. In contrast, the techniques used in the latest study enabled Tsuji and colleagues to create a clear high-resolution “digital map” of deep geothermal reservoirs – something that has never been achieved before.

To make their map, the researchers used three-dimensional multichannel seismic surveys to image geothermal structures in the Kuju volcanic group, which is located on the Japanese island of Kyushu. They then analysed these images using a method they developed known as extended Common Reflection Surface (CRS) stacking. This allowed them to visualize deeper underground features such as magma-related structures, fracture-controlled fluid pathways and rock layers that “seal in” supercritical fluids.

“In addition to this, we applied advanced seismic tomography and machine-learning based analyses to determine the seismic velocity of specific structures and earthquake mechanisms with high accuracy,” explains Tsuji. “It was this integrated approach that allowed us to image a deep geothermal system in unprecedented detail.” He adds that the new technique is also better suited to mountainous geothermal regions where limited road access makes it hard to deploy the seismic sources and receivers used in conventional surveys.

A promising site for future supercritical geothermal energy production

Tsuji and colleagues chose to study the Kuju area because it is home to several volcanoes that were active roughly 1600 years ago and have erupted intermittently in recent years. The region also hosts two major geothermal power plants, Hatchobaru and Otake. The former has a capacity of 110 MW and is the largest geothermal facility in Japan.

The heat source for both plants is thought to be located beneath Mt Kuroiwa and Mt Sensui, and the region is considered a promising site for supercritical geothermal energy production. Its geothermal reservoir appears to consist of water that initially fell as precipitation (so-called meteoric water) and was heated underground before migrating westward through the fault system. Until now, though, no detailed images of the magmatic structures and fluid pathways had been obtained.

Tsuji says he has long wondered why geothermal power is not more widely used in Japan, despite the country’s abundant volcanic and thermal resources. “Our results now provide the scientific and technical foundation for next-generation supercritical geothermal power,” he tells Physics World.

The researchers now plan to try out their technique using portable seismic sources and sensors deployed in mountainous areas (not just along roads) to image the shallower parts of geothermal systems in greater detail as well. “We also plan to extend our surveys to other geothermal fields to test the general applicability of our method,” Tsuji says. “Ultimately, our goal is to provide a reliable scientific basis for the large-scale deployment of supercritical geothermal power as a sustainable energy source.”

The present work is detailed in Communications Earth & Environment.

The post Scientists obtain detailed maps of earthquake-triggering high-pressure subsurface fluids appeared first on Physics World.

  •  

Researchers visualize blood flow in pulsating artificial heart

A research team in Sweden has used real-time imaging technology to visualize the way that blood pumps around a pulsating artificial heart – moving medicine one step closer to the safe use of such devices in people waiting for donor transplants.

The Linköping University (LiU) team used 4D flow MRI to examine the internal processes of a mechanical heart prototype created by Västerås-based technology company Scandinavian Real Heart. The researchers evaluated blood flow patterns and compared them with similar measurements taken in a native human heart, outlining their results in Scientific Reports.

“As the pulsatile total artificial heart contains metal parts, like the motor, we used 3D printing [to replace most metal parts] and a physiological flow loop so we could run it in the MRI scanner under representable conditions,” says first author Twan Bakker, a PhD student at the Center for Medical Image Science and Visualization at LiU.

No elevated risk

According to Bakker, this is the first time that a 3D-printed MRI-compatible artificial heart has been built and successfully evaluated using 4D flow MRI. The team was pleased to discover that the results corroborate the findings of previous computational fluid dynamics simulations indicating “low shear stress and low stagnation”. Overall flow patterns also suggest there is no elevated risk of blood complications compared with hearts in healthy humans and in patients with valvular disease.

“[The] patterns of low blood flow, a risk for thrombosis, were in the same range as for healthy native human hearts. Patterns of turbulent flow, a risk for activation of blood platelets, which can contribute to thrombosis, were lower than those found in patients with valvular disease,” says Bakker.

“4D flow MRI allows us to measure the flow field without altering the function of the total artificial heart, which is therefore a valuable tool to complement computer simulations and blood testing during the development of the device. Our measurements provided valuable information to the design team that could improve the artificial heart prototype further,” he adds.

Improved diagnostics

A key advantage of 4D flow MRI over alternative measurement techniques – such as particle image velocimetry and laser Doppler anemometry – is that it doesn’t require the creation of a fully transparent model. This is an important distinction for Bakker, since some components in the artificial heart are made with materials possessing unique mechanical properties, meaning that replication in a see-through version would be extremely challenging.

Visualizing blood flow The central image shows a representation of the full cardiac cycle in the artificial heart, with circulating flow patterns in various locations highlighted at specified time points. (Courtesy: CC BY 4.0/Sci. Rep. 10.1038/s41598-025-18422-y)

“With 4D flow MRI we had to move the motor away from the scanner bore, but the material in contact with the blood and the motion of the device remained as the original design,” says Bakker.

According to Bakker, the velocity measurements can also be used to visualize and analyse hemodynamic parameters such as turbulent kinetic energy and wall shear stress, both within the heart and in the larger vessels of the body.

“By studying the flow dynamics in patients and healthy subjects, we can better understand its role in health and disease, which can then support improved diagnostics, interventions and surgical therapies,” he explains.
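To give a flavour of how such quantities are obtained from measured velocities – a minimal sketch using the textbook definition of turbulent kinetic energy, not the team’s actual analysis pipeline, and with made-up numbers – one can estimate the turbulent kinetic energy per unit volume from the variances of the velocity fluctuations at a voxel:

```python
import numpy as np

# Minimal sketch only: estimate turbulent kinetic energy (TKE) per unit volume
# at a single voxel from repeated velocity measurements, using the standard
# definition TKE = (rho/2) * (var(u') + var(v') + var(w')). The velocity
# samples below are hypothetical, not data from the study.

rho_blood = 1060.0  # kg/m^3, a typical value for blood density

# Hypothetical velocity components (m/s) sampled over the cardiac cycle
u = np.array([0.52, 0.55, 0.49, 0.53, 0.51])
v = np.array([0.10, 0.12, 0.09, 0.11, 0.10])
w = np.array([0.02, 0.03, 0.01, 0.02, 0.02])

tke = 0.5 * rho_blood * (u.var() + v.var() + w.var())  # J/m^3
print(f"Estimated TKE at this voxel: {tke:.3f} J/m^3")
```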

Moving forward, Bakker says that the research team will continue to evaluate the improved heart design, which was recently granted designation as a Humanitarian Use Device (HUD) by the US Food and Drug Administration (FDA).

“This makes it possible to apply for designation as a Humanitarian Device Exemption (HDE) – which may grant the device limited marketing rights and paves the way for the pre-clinical and clinical studies,” he says.

“In addition, we are currently developing tools to compute blood flow using simulations. This may provide us with a deeper understanding of the mechanisms that cause the formation of thrombosis and haemolysis,” he tells Physics World.

The post Researchers visualize blood flow in pulsating artificial heart appeared first on Physics World.

  •  

Evo CT-Linac eases access to online adaptive radiation therapy

Adaptive radiation therapy (ART) is a personalized cancer treatment in which a patient’s treatment plan can be updated throughout their radiotherapy course to account for any anatomical variations – either between fractions (offline ART) or immediately prior to dose delivery (online ART). Using high-fidelity images to enable precision tumour targeting, ART improves outcomes while reducing side effects by minimizing healthy tissue dose.

Elekta, the company behind the Unity MR-Linac, believes that in time, all radiation treatments will incorporate ART as standard. Towards this goal, it brings its broad knowledge base from the MR-Linac to the new Elekta Evo, a next-generation CT-Linac designed to improve access to ART. Evo incorporates AI-enhanced cone-beam CT (CBCT), known as Iris, to provide high-definition imaging, while its Elekta ONE Online software automates the entire workflow, including auto-contouring, plan adaptation and end-to-end quality assurance.

A world first

In February of this year, Matthias Lampe and his team at the private centre DTZ Radiotherapy in Berlin, Germany became the first in the world to treat patients with online ART (delivering daily plan updates while the patient is on the treatment couch) using Evo. “To provide proper tumour control you must be sure to hit the target – for that, you need online ART,” Lampe tells Physics World.

Initiating online ART The team at DTZ Radiotherapy in Berlin treated the first patient in the world using Evo. (Courtesy: Elekta)

The ability to visualize and adapt to daily anatomy enables reduction of the planning target volume, increasing safety for nearby organs-at-risk (OARs). “It is highly beneficial for all treatments in the abdomen and pelvis,” says Lampe. “My patients with prostate cancer report hardly any side effects.”

Lampe selected Evo to exploit the full flexibility of its C-arm design. He notes that for the increasingly prevalent hypofractionated treatments, a C-arm configuration is essential. “CT-based treatment planning and AI contouring opened up a new world for radiation oncologists,” he explains. “When Elekta designed Evo, they enabled this in an achievable way with an extremely reliable machine. The C-arm linac is the primary workhorse in radiotherapy, so you have the best of everything.”

Time considerations

While online ART can take longer than conventional treatments, Evo’s use of automation and AI limits the additional time requirement to just five minutes – increasing the overall workflow time from 12 to 17 minutes and remaining within the clinic’s standard time slots.

Elekta Evo Evo is a next-generation CT-Linac designed to improve access to adaptive radiotherapy. (Courtesy: Elekta)

The workflow begins with patient positioning and CBCT imaging, with Evo’s AI-enhanced Iris imaging significantly improving image quality, crucial when performing ART. The radiation therapist then matches the cone-beam and planning CTs and performs any necessary couch shift.

Simultaneously, Elekta ONE Online performs AI auto-contouring of OARs, which are reviewed by the physician, and the target volume is copied in. The physicist then simulates the dose distribution on the new contours, followed by a plan review. “Then you can decide whether to adapt or not,” says Lampe. “This is an outstanding feature.” The final stage is treatment delivery and online dosimetry.

When DTZ Berlin first began clinical treatments with Evo, some of Lampe’s colleagues were apprehensive as they were attached to the conventional workflow. “But now, with CBCT providing the chance to see what will be treated, every doctor on my team has embraced the shift and wouldn’t go back,” he says.

The first treatments were for prostate cancer, a common indication that’s relatively easy to treat. “I also thought that if the Elekta ONE workflow struggled, I could contour this on my own in a minute,” says Lampe. “But this was never necessary, the process is very solid. Now we also treat prostate cancer patients with lymph node metastases and those with relapse after radiotherapy. It’s a real success story.”

Lampe says that older and frailer patients may benefit the most from online ART, pointing out that while published studies often include relatively young, healthy patients, “our patients are old, they have chronic heart disease, they’re short of breath”.

For prostate cancer, for example, patients are instructed to arrive with a full bladder and an empty rectum. “But if a patient is in his eighties, he may not be able to do this and the volumes will be different every day,” Lampe explains. “With online adaptive, you can tell patients: ‘if this is not possible, we will handle it, don’t stress yourself’. They are very thankful.”

Making ART available to all

At UMC Utrecht in the Netherlands, the radiotherapy team has also added CT-Linac online adaptive to its clinical toolkit.

UMC Utrecht is renowned for its development of MR-guided radiotherapy, with physicists Bas Raaymakers and Jan Lagendijk pioneering the development of a hybrid MR-Linac. “We come from the world of MR-guidance, so we know that ART makes sense,” says Raaymakers. “But if we only offer MR-guided radiotherapy, we miss out on a lot of patients. We wanted to bring it to the wider community.”

ART for all The radiotherapy team at UMC Utrecht in the Netherlands has added CT-Linac online adaptive to its clinical toolkit. (Courtesy: UMC Utrecht)

At the time of speaking to Physics World, the team was treating its second patient with CBCT-guided ART, and had delivered about 30 fractions. Both patients were treated for bladder cancer, with future indications to explore including prostate, lung and breast cancers and bone metastases.

“We believe in ART for all patients,” says medical physicist Anette Houweling. “If you have MR and CT, you should be able to choose the optimal treatment modality based on image quality. For below the diaphragm, this is probably MR, while for the thorax, CT might be better.”

Ten-minute target for online ART

Houweling says that ART delivery has taken 19 minutes on average. “We record the CBCT, perform image fusion and then the table is moved, that’s all standard,” she explains. “Then the adaptive part comes in: delineation on the CBCT and creating a new plan with Elekta ONE Planning as part of Elekta One Online.”

When the team chooses to adapt, generating a new clinical-grade volumetric-modulated arc therapy (VMAT) plan takes roughly four minutes. With the next-generation optimizer that is soon to be installed, it is expected to take less than one minute to generate a VMAT plan.

“As you start with the regular workflow, you can still decide not to choose adaptive treatment, and do a simple couch shift, up until the last second,” says Raaymakers. “It’s very close to the existing workflow, which makes adoption easier. Also, the treatment slots are comparable to standard slots. Now with CBCT it takes 19 minutes and we believe we can get towards 10. That’s one of the drivers for cone-beam adaptive.”

Shorter treatment times will impact the decision as to which patients receive ART. If fully automated adaptive treatment is deliverable in a 10-minute time slot, it could be available to all patients. “From the physics side, our goal is to have no technological limitations to delivering ART. Then it’s up to the radiation oncologists to decide which patients might benefit,” Raaymakers explains.

Future gazing

Looking to the future, Raaymakers predicts that simulation-free radiotherapy will be adopted for certain standard treatments. “Why do you need days of preparation if you can condense the whole process to the moment when the patient is on the table?” he says. “That would be very much helped by online ART.”

“Scroll forward a few years and I expect that ART will be automated and fast such that the user will just sign off the autocontours and plan in one, maybe tune a little, and then go ahead,” adds Houweling. “That will be the ultimate goal of ART. Then there’s no reason to perform radiotherapy the traditional way.”

The post Evo CT-Linac eases access to online adaptive radiation therapy appeared first on Physics World.

  •  

Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour?

Imagine you had a bad breakup in college. Your ex-partner is furious and self-publishes a book that names you in its title. You’re so humiliated that you only dimly remember this ex, though the book’s details and anecdotes ring true.

According to the book, you used to be inventive, perceptive and dashing. Then you started hanging out with the wrong crowd, and became competitive, self-involved and incapable of true friendship. Your ex struggles to turn you around; failing, they leave. The book, though, is so over-the-top that by the end you stop cringing and find it a hoot.

That’s how I think most Physics World readers will react to The Ant Mill: How Theoretical High-energy Physics Descended into Groupthink, Tribalism and Mass Production of Research. Its author and self-publisher is the Danish mathematician-physicist Jesper Grimstrup, whose previous book was Shell Beach: the Search for the Final Theory.

After receiving his PhD in theoretical physics at the Technical University of Vienna in 2002, Grimstrup writes, he was “one of the young rebels” embarking on “a completely unexplored area” of theoretical physics, combining elements of loop quantum gravity and noncommutative geometry. But there followed a decade of rejected articles and lack of opportunities.

Grimstrup became “disillusioned, disheartened, and indignant” and in 2012 left the field, selling his flat in Copenhagen to finance his work. Grimstrup says he is now a “self-employed researcher and writer” who lives somewhere near the Danish capital. You can support him either through Ko-fi or PayPal.

Fomenting fear

The Ant Mill opens with a copy of the first page of the letter that Grimstrup’s fellow Dane Niels Bohr sent in 1917 to the University of Copenhagen successfully requesting a four-storey building for his physics institute. Grimstrup juxtaposes this incident with the rejection of his funding request, almost a century later, by the Danish Council for Independent Research.

Today, he writes, theoretical physics faces a situation “like the one it faced at the time of Niels Bohr”, but structural and cultural factors have severely hampered it, making it impossible to pursue promising new ideas. These include Grimstrup’s own “quantum holonomy theory, which is a candidate for a fundamental theory”. The Ant Mill is his diagnosis of how this came about.


A major culprit, in Grimstrup’s eyes, was the Standard Model of particle physics. Its completion finished the structure that theorists had been trained to build, and should have opened the way for a flourishing of new theoretical ideas. Instead, it had the opposite effect. The field, according to Grimstrup, is now dominated by influential groups that squeeze out other approaches.

The biggest and most powerful is string theory, with loop quantum gravity its chief rival. Neither member of the coterie can make testable predictions, yet because they control jobs, publications and grants they intimidate young researchers and create what Grimstrup calls an “undercurrent of fear”. (I leave assessment of this claim to young theorists.)

Half the chapters begin with an anecdote in which Grimstrup describes an instance of rejection by a colleague, editor or funding agency. In the book’s longest chapter Grimstrup talks about his various rejections – by the Carlsberg Foundation, The European Physics Journal C, International Journal of Modern Physics A, Classical and Quantum Gravity, Reports on Mathematical Physics, Journal of Geometry and Physics, and the Journal of Noncommutative Geometry.

Grimstrup says that the reviewers and editors of these journals told him that his papers variously lacked concrete physical results, were exercises in mathematics, seemed the same as other papers, or lacked “relevance and significance”. Grimstrup sees this as the coterie’s handiwork, for such journals are full of string theory papers open to the same criticism.

“Science is many things,” Grimstrup writes at the end. “[S]imultaneously boring and scary, it is both Indiana Jones and anonymous bureaucrats, and it is precisely this diversity that is missing in the modern version of science”. What the field needs, he argues, is “courage…hunger…ambition…unwillingness to compromise…anarchy”.

Grimstrup hopes that his book will have an impact, helping to inspire young researchers to revolt, and to make all the scientific bureaucrats and apparatchiks and bookkeepers and accountants “wake up and remember who they truly are”.

The critical point

The Ant Mill is an example of what I have called “rant literature” or rant-lit. Evangelical, convinced that exposing the truth will make sinners come to their senses and change their evil ways, rant-lit can be fun to read, for it is passionate and full of florid metaphors.

Theoretical physicists, Grimstrup writes, have become “obedient idiots” and “technicians”. He slams theoretical physics for becoming a “kingdom”, a “cult”, a “hamster wheel” and an “ant mill”, in which the ants march around in a pre-programmed “death spiral”.


An attentive reader, however, may come away with a different lesson. Grimstrup calls falsifiability the “crown jewel of the natural sciences” and hammers away at theories lacking it. But his vehemence invites you to ask: “Is falsifiability really the sole criterion for deciding whether to accept or fail to pursue a theory?”

In his 2013 book String Theory and the Scientific Method, for instance, the Stockholm University philosopher of science Richard Dawid suggested rescuing the scientific status of string theory by adding non-empirical criteria – such as clarity, coherence and a lack of alternatives – to the evaluation of theories. It’s a move that both rescues the formalistic account of the scientific method and undermines it.

Dawid, you see, is making the formalism follow the practice rather than the other way around. In other words, he is able to reformulate how we make theories because he already knows how theorizing works – not because he only truly knows what it is to theorize after he gets the formalism right.

Grimstrup’s rant, too, might remind you of the birth of Yang–Mills theory in 1954. Developed by Chen Ning Yang and Robert Mills, it was a theory of nuclear binding that integrated much of what was known about elementary particles but implied the existence of massless force-carrying particles that, at the time, had never been observed. In fact, at one seminar Wolfgang Pauli unleashed a tirade against Yang for proposing so obviously flawed a theory.

The theory, however, became central to theoretical physics two decades later, after theorists learned more about the structure of the world. The Yang–Mills story, in other words, reveals that theory-making does not always conform to formal strictures and does not always require a testable prediction. Sometimes it just articulates the best way to make sense of the world apart from proof or evidence.

The lesson I draw is that becoming the target of a rant might not always make you feel repentant and ashamed. It might inspire deep reflection on who you are, in a way that is insightful and vindicating. It might even make you more, rather than less, confident about why you’re doing what you’re doing.

Your ex, of course, would be horrified.

The post Jesper Grimstrup’s The Ant Mill: could his anti-string-theory rant do string theorists a favour? appeared first on Physics World.

  •  

Further evidence for evolving dark energy?

Dark energy – a term first used in 1998 – is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe, an observation that was recognized with the 2011 Nobel Prize in Physics.

Dark energy is now a well established concept and forms a key part of the standard model of Big Bang cosmology, the Lambda-CDM model.

The trouble is, we’ve never really been able to explain exactly what dark energy is, or why it has the value that it does.

Even worse, new data acquired by cutting-edge telescopes have suggested that dark energy might not even exist as we had imagined it.

This is where the new work by Mukherjee and Sen comes in. They combined two of these datasets, while making as few assumptions as possible, to understand what’s going on.

The first of these datasets came from baryon acoustic oscillations: patterns in the large-scale distribution of matter that were imprinted by sound waves travelling through the early universe.

The second dataset is based on a survey of supernova data from the past five years. Both sets of data can be used to track the expansion history of the universe by measuring distances at different snapshots in time.
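To see how such distance measurements constrain an evolving dark energy component, it helps to write the expansion rate explicitly. The expression below uses the widely adopted CPL parametrization, in which the equation of state is w(a) = w0 + wa(1 − a); whether Mukherjee and Sen adopt exactly this form is an assumption here, and the standard Lambda-CDM case is recovered for w0 = −1, wa = 0.

```latex
% Expansion rate for a flat universe with matter plus an evolving dark-energy
% component in the CPL parametrization w(a) = w_0 + w_a(1 - a).
% Lambda-CDM is recovered for w_0 = -1, w_a = 0.
H^2(z) = H_0^2\left[\,\Omega_m (1+z)^3
  + \Omega_{\rm DE}\,(1+z)^{3(1+w_0+w_a)}\,e^{-3 w_a z/(1+z)}\right]
```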

The team’s results are in tension with the Lambda-CDM model at low redshifts. Put simply, the results disagree with the current model at recent times. This provides further evidence for the idea that dark energy, previously considered to have a constant value, is evolving over time.

Evolving dark energy The tension in the expansion rate is most evident at low redshifts. (Courtesy: P. Mukherjee)

This is far from the end of the story for dark energy. New observational data, and new analyses such as this one, are urgently needed to provide a clearer picture.

However, where there’s uncertainty, there’s opportunity. Understanding dark energy could hold the key to understanding quantum gravity, the Big Bang and the ultimate fate of the universe.


The post Further evidence for evolving dark energy? appeared first on Physics World.

  •  

Searching for dark matter particles

Dark matter is a hypothesised form of matter that does not emit, absorb or reflect light, making it invisible to electromagnetic observations. Although it has never been detected directly, its existence is inferred from its gravitational effects on visible matter and on the large-scale structure of the universe.

The Standard Model of particle physics contains no dark matter particles, but several proposed extensions describe how they might be included. Some of these predict very low-mass particles such as the axion or the sterile neutrino.

Detecting these hypothesised particles is very challenging, however, due to the extreme sensitivity required.

Electromagnetic resonant systems, such as cavities and LC circuits, are widely used for this purpose, as well as to detect high-frequency gravitational waves.

When an external signal matches one of these systems’ resonant frequencies, the system responds with a large amplitude, making the signal possible to detect. However, there is always a trade-off between the sensitivity of the detector and the range of frequencies it is able to detect (its bandwidth).

A natural way to overcome this compromise is to consider multi-mode resonators, which can be viewed as coupled networks of harmonic oscillators. Their scan efficiency can be significantly enhanced beyond the standard quantum limit of simple single-mode resonators.

In a recent paper, the researchers demonstrated how multi-mode resonators can combine sensitive and broadband detection. By connecting adjacent modes inside the resonant cavity and tuning these couplings to comparable magnitudes, off-resonant (i.e. unwanted) frequency shifts are effectively cancelled, increasing the overall response of the system.
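As a toy illustration of the idea – a sketch under simple assumptions, not the authors’ actual detector design – consider a chain of identical modes with nearest-neighbour coupling. The normal-mode frequencies of the coupled network spread over a band whose width is set by the coupling strength, which is what lets a multi-mode detector respond over a wider frequency range than a single resonance; the cancellation of off-resonant shifts described in the paper additionally requires the carefully tuned interactions and is not captured by this simple model.

```python
import numpy as np

# Toy model only: a chain of N identical resonant modes with angular frequency
# w0, each coupled to its nearest neighbours with strength g. The normal-mode
# frequencies spread over a band of width up to ~4g around w0, illustrating how
# a coupled network covers a wider frequency range than a single resonance.
# All numbers are hypothetical.

N = 5                     # number of coupled modes
w0 = 2 * np.pi * 1.0e9    # bare resonance frequency (1 GHz)
g = 2 * np.pi * 10.0e6    # nearest-neighbour coupling (10 MHz)

# Frequency matrix: w0 on the diagonal, g on the first off-diagonals
M = np.diag(np.full(N, w0)) + g * (np.eye(N, k=1) + np.eye(N, k=-1))

normal_modes = np.linalg.eigvalsh(M) / (2 * np.pi)  # back to ordinary frequency
print("Normal-mode frequencies (GHz):", np.round(normal_modes / 1e9, 4))
```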

Their method allows us to search for these elusive dark matter particles in a faster, more efficient way.

Dark matter detection circuit A multi-mode detector design, where the first mode couples to dark matter and the last mode is read out. (Courtesy: Y. Chen)

The post Searching for dark matter particles appeared first on Physics World.

  •  

Physicists explain why some fast-moving droplets stick to hydrophobic surfaces

What happens when a microscopic drop of water lands on a water-repelling surface? The answer is important for many everyday situations, including pesticides being sprayed on crops and the spread of disease-causing aerosols. Naively, one might expect it to depend on the droplet’s speed, with faster-moving droplets bouncing off the surface and slower ones sticking to it. However, according to new experiments, theoretical work and simulations by researchers in the UK and the Netherlands, it’s more complicated than that.

“If the droplet moves too slowly, it sticks,” explains Jamie McLauchlan, a PhD student at the University of Bath, UK who led the new research effort with Bath’s Adam Squires and Anton Souslov of the University of Cambridge. “Too fast, and it sticks again. Only in between is bouncing possible, where there is enough momentum to detach from the surface but not so much that it collapses back onto it.”

As well as this new velocity-dependent condition, the researchers also discovered a size effect in which droplets that are too small cannot bounce, no matter what their speed. This size limit, they say, is set by the droplets’ viscosity, which prevents the tiniest droplets from leaving the surface once they land on it.

Smaller-sized, faster-moving droplets

While academic researchers and industrialists have long studied single-droplet impacts, McLauchlan says that much of this earlier work focused on millimetre-sized drops that took place on millisecond timescales. “We wanted to push this knowledge to smaller sizes of micrometre droplets and faster speeds, where higher surface-to-volume ratios make interfacial effects critical,” he says. “We were motivated even further during the COVID-19 pandemic, when studying how small airborne respiratory droplets interact with surfaces became a significant concern.”

Working at such small sizes and fast timescales is no easy task, however. To record the outcome of each droplet landing, McLauchlan and colleagues needed a high-speed camera that effectively slowed down motion by a factor of 100 000. To produce the droplets, they needed piezoelectric droplet generators capable of dispensing fluid via tiny 30-micron nozzles. “These dispensers are highly temperamental,” McLauchlan notes. “They can become blocked easily by dust and fibres and fail to work if the fluid viscosity is too high, making experiments delicate to plan and run. The generators are also easy to break and expensive.”

Droplet modelled as a tiny spring

The researchers used this experimental set-up to create and image droplets 30–50 µm in diameter as they struck water-repelling surfaces at speeds of 1–10 m/s. They then compared their findings with calculations based on a simple mathematical model that treats the droplet like a tiny spring, taking into account three main parameters in addition to its speed: the stickiness of the surface, the viscosity of the droplet liquid and the droplet’s surface tension.
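The competition the spring model captures can be summarized with standard dimensionless groups. The short sketch below is illustrative only – the paper’s exact bounce criterion is not quoted here – and simply evaluates the Weber number (impact inertia versus surface tension) and the Ohnesorge number (viscous damping versus inertia and capillarity) for droplets in the size and speed range studied:

```python
import numpy as np

# Illustrative only: standard dimensionless groups for a water droplet hitting
# a surface. Whether the bounce criterion in the study is written in exactly
# these terms is an assumption; the fluid properties are textbook values for water.

rho = 1000.0    # density, kg/m^3
sigma = 0.072   # surface tension, N/m
mu = 1.0e-3     # dynamic viscosity, Pa*s

D = 40e-6       # droplet diameter, m (mid-range of the 30-50 micron droplets studied)
for U in (1.0, 5.0, 10.0):               # impact speeds spanning the 1-10 m/s range
    We = rho * U**2 * D / sigma          # impact inertia vs surface tension
    Oh = mu / np.sqrt(rho * sigma * D)   # viscous damping vs inertia + capillarity
    print(f"U = {U:4.1f} m/s: We = {We:6.2f}, Oh = {Oh:.3f}")
```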

Previous research had shown that on perfectly non-wetting surfaces, bouncing does not depend on velocity. Other studies showed that on very smooth surfaces, droplets can bounce on a thin air layer. “Our work has explored a broader range of hydrophobic surfaces, showing that bouncing occurs due to a delicate balance of kinetic energy, viscous dissipation and interfacial energies,” McLauchlan tells Physics World.

This is exciting, he adds, because it reveals a previously unexplored regime for bounce behaviour: droplets that are too small, or too slow, will always stick, while sufficiently fast droplets can rebound. “This finding provides a general framework that explains bouncing at the micron scale, which is directly relevant for aerosol science,” he says.

A novel framework for engineering microdroplet processes

McLauchlan thinks that by linking bouncing to droplet velocity, size and surface properties, the new framework could make it easier to engineer microdroplets for specific purposes. “In agriculture, for example, understanding how spray velocities interact with plant surfaces with different hydrophobicity could help determine when droplets deposit fully versus when they bounce away, improving the efficiency of crop spraying,” he says.

Such a framework could also be beneficial in the study of airborne diseases, since exhaled droplets frequently bump into surfaces while floating around indoors. While droplets that stick are removed from the air, and can no longer transmit disease via that route, those that bounce are not. Quantifying these processes in typical indoor environments will provide better models of airborne pathogen concentrations and therefore disease spread, McLauchlan says. For example, in healthcare settings, coatings could be designed to inhibit or promote bouncing, ensuring that high-velocity respiratory droplets from sneezes either stick to hospital surfaces or recoil from them, depending on which mode of potential transmission (airborne or contact-based) is being targeted.

The researchers now plan to expand their work on aqueous droplets to droplets with more complex soft-matter properties. “This will include adding surfactants, which introduce time-dependent surface tensions, and polymers, which give droplets viscoelastic properties similar to those found in biological fluids,” McLauchlan reveals. “These studies will present significant experimental challenges, but we hope they broaden the relevance of our findings to an even wider range of fields.”

The present work is detailed in PNAS.

The post Physicists explain why some fast-moving droplets stick to hydrophobic surfaces appeared first on Physics World.

  •  

Quantum computing on the verge: a look at the quantum marketplace of today

“I’d be amazed if quantum computing produces anything technologically useful in ten years, twenty years, even longer.” So wrote University of Oxford physicist David Deutsch – often considered the father of the theory of quantum computing – in 2004. But, as he added in a caveat, “I’ve been amazed before.”

We don’t know how amazed Deutsch, a pioneer of quantum computing, would have been had he attended a meeting at the Royal Society in London in February on “the future of quantum information”. But it was tempting to conclude from the event that quantum computing has now well and truly arrived, with working machines that harness quantum mechanics to perform computations being commercially produced and shipped to clients. Serving as the UK launch of the International Year of Quantum Science and Technology (IYQ) 2025, it brought together some of the key figures of the field to spend two days discussing quantum computing as something like a mature industry, even if one in its early days.

Werner Heisenberg – who worked out the first proper theory of quantum mechanics 100 years ago – would surely have been amazed to find that the formalism he and his peers developed to understand the fundamental behaviour of tiny particles had generated new ways of manipulating information to solve real-world problems in computation. So far, quantum computing – which exploits phenomena such as superposition and entanglement to potentially achieve greater computational power than the best classical computers can muster – hasn’t tackled any practical problems that can’t be solved classically.

Although the fundamental quantum principles are well-established and proven to work, there remain many hurdles that quantum information technologies have to clear before this industry can routinely deliver resources with transformative capabilities. But many researchers think that moment of “practical quantum advantage” is fast approaching, and an entire industry is readying itself for that day.

Entangled marketplace

So what are the current capabilities and near-term prospects for quantum computing?

The first thing to acknowledge is that a booming quantum-computing market exists. Devices are being produced for commercial use by a number of tech firms, ranging from the likes of IBM, Google, Canada-based D-Wave and Rigetti, which have been in the field for a decade or more, to relative newcomers such as Nord Quantique (Canada), IQM (Finland), Quantinuum (UK and US), Orca (UK), PsiQuantum (US) and Silicon Quantum Computing (Australia).

The global quantum ecosystem

Map showing the investments globally into quantum computing. (Courtesy: QURECA)

We are on the cusp of a second quantum revolution, with quantum science and technologies growing rapidly across the globe. These technologies include quantum computers; quantum sensing (ultra-high-precision clocks, sensors for medical diagnostics); and quantum communications (a quantum internet). Indeed, according to the State of Quantum 2024 report, a total of 33 countries around the world currently have government initiatives in quantum technology, of which more than 20 have national strategies with large-scale funding. As of this year, worldwide investments in quantum tech – by governments and industry – exceed $55.7 billion, and the market is projected to reach $106 billion by 2040. With the multitude of ground-breaking capabilities that quantum technologies bring globally, it’s unsurprising that governments all over the world are eager to invest in the industry.

With data from a number of international reports and studies, quantum education and skills firm QURECA has summarized key programmes and efforts around the world. These include total government funding spent through 2025, as well as future commitments spanning 2–10 year programmes, varying by country. These initiatives generally represent government agencies’ funding announcements, related to their countries’ advancements in quantum technologies, excluding any private investments and revenues.

A supply chain is also developing organically, including manufacturers of specific hardware components, such as Oxford Instruments and Quantum Machines, and software developers like Riverlane, based in Cambridge, UK, and QC Ware in Palo Alto, California. Supplying the last link in this chain are a range of eager end-users, from finance companies such as J P Morgan and Goldman Sachs to pharmaceutical companies such as AstraZeneca and engineering firms like Airbus. Quantum computing is already big business, with around 400 active companies and current global investment estimated at around $2 billion.

But the immediate future of all this buzz is hard to assess. When the chief executive of computer giant Nvidia announced at the start of 2025 that “truly useful” quantum computers were still two decades away, the previously burgeoning share prices of some leading quantum-computing companies plummeted. They have since recovered somewhat, but such volatility reflects the fact that quantum computing has yet to prove its commercial worth.

The field is still new and firms need to manage expectations and avoid hype while also promoting an optimistic enough picture to keep investment flowing in. “Really amazing breakthroughs are being made,” says physicist Winfried Hensinger of the University of Sussex, “but we need to get away from the expectancy that [truly useful] quantum computers will be available tomorrow.”

The current state of play is often called the “noisy intermediate-scale quantum” (NISQ) era. That’s because the “noisy” quantum bits (qubits) in today’s devices are prone to errors for which no general and simple correction process exists. Current quantum computers can’t therefore carry out practically useful computations that could not be done on classical high-performance computing (HPC) machines. It’s not just a matter of better engineering either; the basic science is far from done.

Building up Quantum computing behemoth IBM says that by 2029, its fault-tolerant system should accurately run 100 million gates on 200 logical qubits, thereby truly achieving quantum advantage. (Courtesy: IBM)

“We are right on the cusp of scientific quantum advantage – solving certain scientific problems better than the world’s best classical methods can,” says Ashley Montanaro, a physicist at the University of Bristol who co-founded the quantum software company Phasecraft. “But we haven’t yet got to the stage of practical quantum advantage, where quantum computers solve commercially important and practically relevant problems such as discovering the next lithium-ion battery.” It’s no longer if or how, but when that will happen.

Pick your platform

As the quantum-computing business is such an emerging area, today’s devices use wildly different types of physical systems for their qubits. There is still no clear sign as to which of these platforms, if any, will emerge as the winner. Indeed many researchers believe that no single qubit type will ever dominate.

The top-performing quantum computers, like those made by Google (with its 105-qubit Willow chip) and IBM (which has made the 1121-qubit Condor), use qubits in which information is encoded in the quantum state of a superconducting circuit. Until recently, the strongest competing platform seemed to be trapped ions, where the qubits are individual ions held in electromagnetic traps – a technology being developed into working devices by the US company IonQ, spun out from the University of Maryland, among others.

But over the past few years, neutral trapped atoms have emerged as a major contender, thanks to advances in controlling the positions and states of these qubits. Here the atoms are prepared in highly excited electronic states called Rydberg atoms, which can be entangled with one another over a few microns. A Harvard startup called QuEra is developing this technology, as is the French start-up Pasqal. In September a team from the California Institute of Technology announced a 6100-qubit array made from neutral atoms. “Ten years ago I would not have included [neutral-atom] methods if I were hedging bets on the future of quantum computing,” says Deutsch’s Oxford colleague, the quantum information theorist Andrew Steane. But like many, he thinks differently now.

Some researchers believe that optical quantum computing, using photons as qubits, will also be an important platform. One advantage here is that photonic signals travelling to or from the processing units need no complex conversion to pass through existing telecommunications networks, which is also handy for photonic interconnections between chips. What’s more, photonic circuits can work at room temperature, whereas trapped ions and superconducting qubits need to be cooled. Photonic quantum computing is being developed by firms like PsiQuantum, Orca and Xanadu.

Other efforts, for example at Intel and Silicon Quantum Computing in Australia, make qubits from either quantum dots (Intel) or precision-placed phosphorus atoms (SQC), both in good old silicon, which benefits from a very mature manufacturing base. “Small qubits based on ions and atoms yield the highest quality processors”, says Michelle Simmons of the University of New South Wales, who is the founder and CEO of SQC. “But only atom-based systems in silicon combine this quality with manufacturability.”

Spinning around Intel’s silicon spin qubits are now being manufactured on an industrial scale. (Courtesy: Intel Corporation)

And it’s not impossible that entirely new quantum computing platforms might yet arrive. At the start of 2025, researchers at Microsoft’s laboratories in Washington State caused a stir when they announced that they had made topological qubits from semiconducting and superconducting devices, which are less error-prone than those currently in use. The announcement left some scientists disgruntled because it was not accompanied by a peer-reviewed paper providing the evidence for these long-sought entities. But in any event, most researchers think it would take a decade or more for topological quantum computing to catch up with the platforms already out there.

Each of these quantum technologies has its own strengths and weaknesses. “My personal view is that there will not be a single architecture that ‘wins’, certainly not in the foreseeable future,” says Michael Cuthbert, founding director of the UK’s National Quantum Computing Centre (NQCC), which aims to facilitate the transition of quantum computing from basic research to an industrial concern. Cuthbert thinks the best platform will differ for different types of computation: cold neutral atoms might be good for quantum simulations of molecules, materials and exotic quantum states, say, while superconducting and trapped-ion qubits might be best for problems involving machine learning or optimization.

Measures and metrics

Given these pros and cons of different hardware platforms, one difficulty in assessing their merits is finding meaningful metrics for making comparisons. Should we be comparing error rates, coherence times (basically how long qubits remain entangled), gate speeds (how fast a single computational step can be conducted), circuit depth (how many steps a single computation can sustain), number of qubits in a processor, or what? “The metrics and measures that have been put forward so far tend to suit one or other platform more than others,” says Cuthbert, “such that it becomes almost a marketing exercise rather than a scientific benchmarking exercise as to which quantum computer is better.”

The NQCC evaluates the performance of devices using a factor known as the “quantum operation” (QuOp). This is simply the number of quantum operations that can be carried out in a single computation, before the qubits lose their coherence and the computation dissolves into noise. “If you want to run a computation, the number of coherent operations you can run consecutively is an objective measure,” Cuthbert says. If we want to get beyond the NISQ era, he adds, “we need to progress to the point where we can do about a million coherent operations in a single computation. We’re now at the level of maybe a few thousand. So we’ve got a long way to go before we can run large-scale computations.”
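As a very rough sketch of what that gap means in practice – illustrative numbers only, not the NQCC’s formal benchmark – dividing a qubit’s coherence time by a typical gate time gives an order-of-magnitude ceiling on the number of coherent operations per computation:

```python
# Rough order-of-magnitude sketch only; the numbers are hypothetical and this
# is not the NQCC's formal QuOp benchmark, which involves more than raw timing.

t_coherence = 100e-6   # assumed qubit coherence time: 100 microseconds
t_gate = 50e-9         # assumed two-qubit gate time: 50 nanoseconds

max_coherent_ops = t_coherence / t_gate
print(f"Roughly {max_coherent_ops:.0f} coherent operations per computation")  # ~2000
```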

One important issue is how amenable the platforms are to making larger quantum circuits. Cuthbert contrasts the issue of scaling up – putting more qubits on a chip – with “scaling out”, whereby chips of a given size are linked in modular fashion. Many researchers think it unlikely that individual quantum chips will have millions of qubits like the silicon chips of today’s machines. Rather, they will be modular arrays of relatively small chips linked at their edges by quantum interconnects.

Having made the Condor, IBM now plans to focus on modular architectures (scaling out) – a necessity anyway, since superconducting qubits are micron-sized, so a chip with millions of them would be “bigger than your dining room table”, says Cuthbert. But superconducting qubits are not easy to scale out because microwave frequencies that control and read out the qubits have to be converted into optical frequencies for photonic interconnects. Cold atoms are easier to scale up, as the qubits are small, while photonic quantum computing is easiest to scale out because it already speaks the same language as the interconnects.

To build so-called “fault-tolerant” quantum computers, quantum platforms must solve the issue of error correction, which will enable more extensive computations without the results degrading into mere noise.

In part two of this feature, we will explore how this is being achieved and meet the various firms developing quantum software. We will also look into the potential high-value commercial uses for robust quantum computers – once such devices exist.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

The post Quantum computing on the verge: a look at the quantum marketplace of today appeared first on Physics World.

  •  

Physicists achieve first entangled measurement of W states

Imagine two particles so interconnected that measuring one immediately reveals information about the other, even if the particles are light-years apart. This phenomenon, known as quantum entanglement, is the foundation of a variety of technologies such as quantum cryptography and quantum computing. However, entangled states are notoriously difficult to control. Now, for the first time, a team of physicists in Japan has performed a collective quantum measurement on a W state comprising three entangled photons. This allowed them to analyse the three entangled photons at once rather than one at a time. This achievement, reported in Science Advances, marks a significant step towards the practical development of quantum technologies.

Physicists usually measure entangled particles using a technique known as quantum tomography. In this method, many identical copies of a particle are prepared, and each copy is measured at a different angle. The results of these measurements are then combined to reconstruct its full quantum state. To visualize this, imagine being asked to take a family photo. Instead of taking one group picture, you have to photograph each family member individually and then combine all the photos into a single portrait. An entangled measurement is like taking the photo properly: one photograph of the entire family, with all the particles measured simultaneously rather than separately. This approach allows for significantly faster and more efficient measurements.

So far, for three-particle systems, entangled measurements have only been performed on Greenberger–Horne–Zeilinger (GHZ) states – superpositions in which all the qubits (the quantum bits of the system) are in one state or all are in the other. Until now, no one had carried out an entangled measurement on a more complicated set of states known as W states, which do not share this all-or-nothing property. In their experiment, the researchers at Kyoto University and Hiroshima University used the simplest type of W state, made up of three photons, with each photon’s polarization (horizontal or vertical) representing one qubit.

“In a GHZ state, if you measure one qubit, the whole superposition collapses. But in a W state, even if you measure one particle, entanglement still remains,” explains Shigeki Takeuchi, corresponding author of the paper describing the study. This robustness makes the W state particularly appealing for quantum technologies.

Fourier transformations

The team took advantage of the fact that different W states look almost identical but differ by a tiny phase shift, which acts as a hidden label distinguishing one state from another. Using a tool called a discrete Fourier transform (DFT) circuit, the researchers were able to “decode” this phase and tell the states apart.

The DFT exploits a special type of symmetry inherent to W states. Since the method relies on symmetry, in principle it can be extended to systems containing any number of photons. The researchers prepared photons in controlled polarization states and ran them through the DFT, which revealed each state’s phase label. The photons were then sent through polarizing beam splitters that separated them into vertically and horizontally polarized groups. By counting both sets of photons, and combining this with information from the DFT, the team could identify the W state.
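In the usual single-excitation convention – the paper’s exact notation is not quoted here, so treat this as illustrative – the three W states differ only by discrete relative phases, and it is this phase index that the DFT circuit reads out:

```latex
% Illustrative convention for the three-photon, single-excitation W states;
% the phase index k is the "hidden label" that a discrete Fourier transform
% circuit can read out.
|W_k\rangle = \frac{1}{\sqrt{3}}\Big(\,|100\rangle + \omega^{k}\,|010\rangle
  + \omega^{2k}\,|001\rangle\Big),
\qquad \omega = e^{2\pi i/3},\quad k = 0, 1, 2
```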

The experiment identified the correct W state about 87% of the time, well above the 15% success rate typically achieved using tomography-based measurements. Maintaining this level of performance was a challenge, as tiny fluctuations in optical paths or photon loss can easily destroy the fragile interference pattern. The fact that the team could maintain stable performance long enough to collect statistically reliable data marks an important technical milestone.

Scalable to larger systems

“Our device is not just a single-shot measurement: it works with 100% efficiency,” Takeuchi adds. “Most linear optical protocols are probabilistic, but here the success probability is unity.” Although demonstrated with three photons, this procedure is directly scalable to larger systems, as the key insight is the symmetry that the DFT can detect.

“In terms of applications, quantum communication seems the most promising,” says Takeuchi. “Because our device is highly efficient, our protocol could be used for robust communication between quantum computer chips. The next step is to build all of this on a tiny photonic chip, which would reduce errors and photon loss and help make this technology practical for real quantum computers and communication networks.”

The post Physicists achieve first entangled measurement of W states appeared first on Physics World.

  •  

Physicists apply quantum squeezing to a nanoparticle for the first time

Physicists at the University of Tokyo, Japan have performed quantum mechanical squeezing on a nanoparticle for the first time. The feat, which they achieved by levitating the particle and rapidly varying the frequency at which it oscillates, could allow us to better understand how very small particles transition between classical and quantum behaviours. It could also lead to improvements in quantum sensors.

Oscillating objects that are smaller than a few microns in diameter have applications in many areas of quantum technology. These include optical clocks and superconducting devices as well as quantum sensors. Such objects are small enough to be affected by Heisenberg’s uncertainty principle, which places a limit on how precisely we can simultaneously measure the position and momentum of a quantum object. More specifically, the product of the measurement uncertainties in the position and momentum of such an object must be greater than or equal to ħ/2, where ħ is the reduced Planck constant.

In these circumstances, the only way to decrease the uncertainty in one variable – for example, the position – is to boost the uncertainty in the other. This process has no classical equivalent and is called squeezing because reducing uncertainty along one axis of position-momentum space creates a “bulge” in the other, like squeezing a balloon.
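Written out in the standard notation (conventions for the quadratures vary slightly between papers), the constraint and the squeezing condition are:

```latex
% Heisenberg bound and the defining condition for squeezing, where
% \Delta x_{\rm zpf} and \Delta p_{\rm zpf} are the ground-state (zero-point)
% uncertainties. Standard convention; shown here for illustration.
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\text{squeezing:}\;\; \Delta p < \Delta p_{\rm zpf}
\;\;\text{while}\;\; \Delta x > \Delta x_{\rm zpf}
```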

A charge-neutral nanoparticle levitated in an optical lattice

In the new work, which is detailed in Science, a team led by Kiyotaka Aikawa studied a single, charge-neutral nanoparticle levitating in a periodic intensity pattern formed by the interference of criss-crossed laser beams. Such patterns are known as optical lattices, and they are ideal for testing the quantum mechanical behaviour of small-scale objects because they can levitate the object. This keeps it isolated from other particles and allows it to sustain its fragile quantum state.

After levitating the particle and cooling it to its motional ground state, the team rapidly varied the intensity of the lattice laser. This had the effect of changing the particle’s oscillation frequency, which in turn changed the uncertainty in its momentum. To measure this change (and prove they had demonstrated quantum squeezing), the researchers then released the nanoparticle from the trap and let it propagate for a short time before measuring its velocity. By repeating these time-of-flight measurements many times, they were able to obtain the particle’s velocity distribution.

The telltale sign of quantum squeezing, the physicists say, is that the velocity distribution they measured for the nanoparticle was narrower than the uncertainty in velocity for the nanoparticle at its lowest energy level. Indeed, the measured velocity variance was narrower than that of the ground state by 4.9 dB, which they say is comparable to the largest mechanical quantum squeezing obtained thus far.
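For reference, converting the quoted squeezing level into a variance ratio with the standard decibel convention (a quick check, not a number taken from the paper) gives:

```latex
% 4.9 dB of squeezing expressed as a ratio of velocity variances
\frac{\langle\Delta v^2\rangle_{\rm squeezed}}{\langle\Delta v^2\rangle_{\rm ground}}
  = 10^{-4.9/10} \approx 0.32
```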

“Our system will enable us to realize further exotic quantum states of motions and to elucidate how quantum mechanics should behave at macroscopic scales and become classical,” Aikawa tells Physics World. “This could allow us to develop new kinds of quantum devices in the future.”

The post Physicists apply quantum squeezing to a nanoparticle for the first time appeared first on Physics World.

  •