Whether creating a contaminant-free environment for depositing material or minimizing unwanted collisions in spectrometers and accelerators, vacuum environments are a crucial element of many scientific endeavours. Creating and maintaining very low pressures requires a holistic approach to system design that includes material selection, preparation, and optimization of the vacuum chamber and connection volumes. Measurement strategies also need to be considered across the full range of vacuum to ensure consistent performance and deliver the expected outcomes from the experiment or process.
Developing a vacuum system that achieves the optimal low-pressure conditions for each application, while also controlling the cost and footprint of the system, is a complex balancing act that benefits from specialized expertise in vacuum science and engineering. A committed technology partner with extensive experience of working with customers to design vacuum systems, including those for physics research, can help to define the optimum technologies that will produce the best solution for each application.
Over many years, the technology experts at Agilent have assisted countless customers with configuring and enhancing their vacuum processes. “Our best successes come from collaborations where we take the time to understand the customer’s needs, offer them guidance, and work together to create innovative solutions,” comments John Screech, senior applications engineer at Agilent. “We strive to be a trusted partner rather than just a commercial vendor, ensuring our customers not only have the right tools for their needs, but also the information they need to achieve their goals.”
In his role Screech works with customers from the initial design phase all the way through to installation and troubleshooting. “Many of our customers know they need vacuum, but they don’t have the time or resources to really understand the individual components and how they should be put together,” he says. “We are available to provide full support to help customers create a complete system that performs reliably and meets the requirements of their application.”
In one instance, Screech was able to assist a customer who had been using an older technology to create an ultrahigh vacuum environment. “Their system was able to produce the vacuum they needed, but it was unreliable and difficult to operate,” he remembers. By identifying the problem and supporting the migration to a modern, simpler technology, Screech helped his customer to achieve the required vacuum conditions, improve uptime and increase throughput.
Agilent collaborates with various systems integrators to create custom vacuum solutions for scientific instruments and processes. Such customized designs must be compact enough to be integrated within the system, while also delivering the required vacuum performance at a cost-effective price point. “Customers trust us to find a practical and reliable solution, and realize that we will be a committed partner over the long term,” says Screech.
Expert partnership yields success
The company also partners with leading space agencies and particle physics laboratories to create customized vacuum solutions for the most demanding applications. For many years, Agilent has supplied high-performance vacuum pumps to CERN, which created the world’s largest vacuum system to prevent unwanted collisions between accelerated particles and residual gas molecules in the Large Hadron Collider.
When engineering a vacuum solution that meets the exact specifications of the facility, one key consideration is the physical footprint of the equipment. Another is ensuring that the required pumping performance is achieved without introducing any unwanted effects – such as stray magnetic fields – into the highly controlled environment. Agilent vacuum experts have the experience and knowledge to engineer innovative solutions that meet such a complex set of criteria. “These large organizations already have highly skilled vacuum engineers who understand the unique parameters of their system, but even they can benefit from our expertise to transform their requirements into a workable solution,” says Screech.
Agilent also shares its knowledge and experience through various educational opportunities in vacuum technologies, including online webinars and dedicated training courses. The practical aspects of vacuum can be challenging to learn online, so in-person classes emphasize a hands-on approach that allows participants to assemble and characterize rough- and high-vacuum systems. “In our live sessions everyone has the opportunity to bolt a system together, test which configuration will pump down faster, and gain insights into leak detection,” says Screech. “We have students from industry and academia in the classes, and they are always able to share tips and techniques with one another.” Additionally, the company maintains a vacuum community as an online resource, where questions can be posed to experts, and collaboration among users is encouraged.
Agilent recognizes that vacuum is an enabler for scientific research and that creating the ideal vacuum system can be challenging. “Customers can trust Agilent as a technology partner,” says Screech. “We can share our experience and help them create the optimal vacuum system for their needs.”
You can learn more about vacuum and leak detection technologies from Agilent through the company’s website. Alternatively, visit the partnership webpage to chat with an expert, find training, and explore the benefits of partnering with Agilent.
Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.
Today, the world’s most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye’s group at JILA in Boulder, Colorado. These are accurate to within a second in the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.
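As a rough back-of-envelope check (a sketch, not a figure from the article: the 13.8-billion-year age of the universe is an assumption used here), “one second over the age of the universe” corresponds to a fractional accuracy of a few parts in 10¹⁸:

```python
# Back-of-envelope: one second of drift over the age of the universe,
# expressed as a fractional (dimensionless) clock accuracy.
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # Julian year in seconds
AGE_OF_UNIVERSE_S = 13.8e9 * SECONDS_PER_YEAR  # roughly 4.4e17 s

fractional_accuracy = 1.0 / AGE_OF_UNIVERSE_S
print(f"{fractional_accuracy:.1e}")  # ~2.3e-18
```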
Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.
Sensitive to new physics
Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.
Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
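A quick conversion (a sketch, using the standard relation λ = hc/E and the rounded constant hc ≈ 1239.84 eV·nm) shows why an 8.4 eV transition lands in the vacuum ultraviolet:

```python
# Convert a photon energy in eV to a wavelength in nm via lambda = h*c / E.
HC_EV_NM = 1239.84193  # Planck constant times speed of light, in eV·nm

def energy_to_wavelength_nm(energy_ev: float) -> float:
    """Photon wavelength in nanometres for a given energy in electron volts."""
    return HC_EV_NM / energy_ev

lam = energy_to_wavelength_nm(8.4)
print(f"{lam:.1f} nm")  # ~147.6 nm, well inside the vacuum-ultraviolet band
```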
That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.
In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”
Layers of thorium fluoride
In the new work, the researchers collaborated with colleagues in Eric Hudson’s group at the University of California, Los Angeles and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, which is a well-established technique that evaporates a material from a hot crucible before condensing it onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but had a comparable number of thorium atoms per unit area.
The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than observed in the crystal. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.
More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.
Flambaum, who was not involved in the research, says that it marks “a very significant technical advance” in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine structure constant. “As a standard of frequency a solid state clock is not very good because it’s affected by the environment,” he says. “As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet.”
Metamaterials are artificial 3D structures that can provide all sorts of properties not available with “normal” materials. Pioneered around a quarter of a century ago by physicists such as John Pendry and David Smith, metamaterials can now be found in a growing number of commercial products.
Claire Dancer and Alastair Hibbins, who are joint leads of the UK Metamaterials Network, recently talked to Matin Durrani about the power and potential of these “meta-atom” structures. Dancer is an associate professor and a 125th anniversary fellow at the University of Birmingham, UK, while Hibbins is a professor and director of the Centre of Metamaterials Research and Innovation at the University of Exeter, UK.
Let’s start with the basics: what are metamaterials?
Alastair Hibbins (AH): If you want to describe a metamaterial in just one sentence, it’s all about adding functionality through structure. But it’s not a brand new concept. Take the stained-glass windows in cathedrals, which have essentially got plasmonic metal nanoparticles embedded in them. The colour of the glass is dictated by the size and the shape of those particles, which is what a metamaterial is all about. It’s a material where the properties we see or hear or feel depend on the structure of its building blocks.
Physicists have been at the forefront of much recent work on metamaterials, haven’t they?
AH: Yes, the work was reignited just before the turn of the century – in the late 1990s – when the theoretical physicist John Pendry kind of recrystallized this idea (see box “Metamaterials and John Pendry”). Based at Imperial College, London, he and others were looking at artificial materials, such as metallic meshes, which had properties that were really different from the metal of which they were comprised.
In terms of applications, why are metamaterials so exciting?
Claire Dancer (CD): Materials can do lots of fantastic things, but metamaterials add a new functionality on top. That could be cloaking or it might be mechanically bending and flexing in a way that its constituent materials wouldn’t. You can, for example, have “auxetic metamaterials” with a honeycomb structure that gets wider – not thinner – when stretched. There are also nanoscale photonic metamaterials, which interact with light in unusual ways.
John Pendry: metamaterial pioneer
Metamaterials are fast becoming commercial reality, but they have their roots in physics – in particular, a landmark paper published in 2000 by theoretical physicist John Pendry at Imperial College, London (Phys. Rev. Lett. 85 3966). In the paper, Pendry described how a metamaterial could be created with a negative index of refraction for microwave radiation, calculating that it could be used to make a “perfect” lens that would focus an image with a resolution not restricted by the wavelength of light (Physics World September 2001 pp47–51).
A metamaterial using copper rings deposited on an electronic circuit board was built the following year by the US physicist David Smith and colleagues at the University of California, San Diego (Science 292 77). Pendry later teamed up with Smith and others to use negative-index metamaterials to create a blueprint for an invisibility cloak – the idea being that the metamaterial would guide light around an object to be hidden (Science 312 1780). While the mathematics describing how electromagnetic radiation interacts with metamaterials can be complicated, Pendry realized that it could be described elegantly by borrowing ideas from Einstein’s general theory of relativity.
Matin Durrani
What sorts of possible applications can metamaterials have?
CD: There are lots, including some exciting innovations in body armour and protective equipment for sport – imagine customized “auxetic helmets” and protective devices for contact sports like rugby. Metamaterials can also be used in communications, exploiting available frequencies in an efficient, discrete and distinct way. In the optical range, we can create “artificial colour”, which is leading to interesting work on different kinds of glitter and decorative substances. There are also loads of applications in acoustics, where metamaterials can absorb some of the incidental noise that plagues our world.
Have any metamaterials reached the commercial market yet?
AH: Yes. The UK firm Sonnobex won a Business Innovation Award from the Institute of Physics (IOP) in 2018 for its metamaterials that can reduce traffic noise or the annoying “buzz” from electrical power transformers. Another British firm – Metasonnix – won an IOP business award last year for its lightweight soundproofing metamaterial panels. They let air pass through so they could be great as window blinds – cutting noise and providing ventilation at the same time.
High-end audio manufacturers, such as KEF, are using metamaterials as part of the baffle behind the main loudspeaker. There’s also Metahelios, which was spun out from the University of Glasgow in 2022. It’s making on-chip, multi-wavelength pixelated cameras that are also polarization-sensitive and could have applications in defence and aerospace.
The UK has a big presence in metamaterials, but the US is strong too, isn’t it?
AH: Perhaps the most famous metamaterial company is Metalenz, which makes flat conformal lenses for mobile phones – enabling amazing optical performance in a compact device. It was spun off in 2021 from the work of Federico Capasso at Harvard University. You can already find its products in Apple and Samsung phones and they’re coming to Google’s devices too.
Other US companies include Kymeta, which makes metamaterial-based antennas, and Lumotive, which is involved in solid-state LIDAR systems for autonomous vehicles and drones. There’s also Echodyne and Pivotal Commware. Those US firms have all received a huge amount of start-up and venture funding, and are doing really well at showing how metamaterials can make money and sell products.
What are the aims of the UK Metamaterials Network, which you both lead?
CD: One important aim is to capitalize on all the work done in this country, supporting fundamental discovery science but driving commercialization too. We’ve been going since 2021 and have grown to a community of about 900 members – largely UK academics but with industry and overseas researchers too. We want to provide outsiders with a single source of access to the community and – as we move towards commercialization – develop ways to standardize and regulate metamaterials.
As well as providing an official definition of metamaterials (see box “Metamaterials: the official definition”), we also have a focus on talent and skills, trying to get the next generation into the field and show them it’s a good place to work.
How is the UK Metamaterials Network helping get products onto the market?
CD: The network wants to support the beginning of the commercialization process, namely working with start-ups and getting industry engaged, hopefully with government backing. We’ve also got various special-interest groups, focusing on the commercial potential of acoustic, microwave and photonics materials. And we’ve set up four key challenge areas that cut across different areas of metamaterials research: manufacturing; space and aviation; health; and sustainability.
Metamaterials: the official definition
One of the really big things the UK Metamaterials Network has done is to crowdsource the definition of a metamaterial, which has long been a topic of debate. A metamaterial, we have concluded, is “a 3D structure with a response or function due to collective effects of their building blocks (or meta-atoms) that is not possible to achieve conventionally with any individual constituent material”.
A huge amount of work went into this definition. We talked with the community and there was lots of debate about what should be in and what should be out. But I think we’ve emerged with a really nice definition there that’s going to stay in place for many years to come. It might seem a little trivial but it’s one of our great achievements.
Alastair Hibbins
What practical support can you give academics?
CD: The UK Metamaterials Network has been funded by the Engineering and Physical Sciences Research Council to set up a Metamaterials Network Plus programme. It aims to develop more research in these areas so that metamaterials can contribute to national and global priorities by, for example, being sustainable and ensuring we have the infrastructure for testing and manufacturing metamaterials on a large scale. In particular, we now have “pump prime” funding that we can distribute to academics who want to explore new applications of – and other research into – metamaterials.
What are the challenges of commercializing metamaterials?
CD: Commercializing any new scientific idea is difficult and metamaterials are no exception. But one issue with metamaterials is to ensure industry can manufacture them in big volumes. Currently, a lot of metamaterials are made in research labs by 3D printing or by manually sticking and gluing things together, which is fine if you just want to prove some interesting physics. But to make metamaterials in industry, we need techniques that are scalable – and that, in turn, requires resources, funding, infrastructure and a supply of talented, skilled workers. The intellectual property also needs to be carefully managed as much of the underlying work is done in collaborations with universities. If there are too many barriers, companies will give up and not bother trying.
Looking ahead, where do you think metamaterials will be a decade from now?
AH: If we really want to fulfil their potential, we’d ideally fund metamaterials as a national UK programme, just as we do with quantum technology. Defence has been one of the leaders in funding metamaterials because of their use in communications, but we want industry more widely to adopt metamaterials, embedding them in everyday devices. They offer game-changing control and I can see metamaterials in healthcare, such as for artificial limbs or medical imaging. Metamaterials could also provide alternatives in the energy sector, where we want to reduce the use of rare-earth and other minerals. In space and aerospace, they could function as incredibly lightweight, but really strong, blast-resistant materials for satellites and satellite communications, developing more capacity to send information around the world.
How are you working with the IOP to promote metamaterials?
CD: We’ve so far run three annual industry events showcasing the applications of metamaterials. The first two were at the National Physical Laboratory in Teddington, and in Leeds, with last year’s held at the IOP in December. It included a panel discussion about how to overcome barriers to commercialization along with demonstrations of various technologies, and presentations from academics and industrialists about their innovations. We also discussed the pathfinder project with the IOP as we’ll need the community’s help to exploit the power of metamaterials.
What’s the future of the UK Metamaterials Network?
AH: It’s an exciting year ahead working with the IOP and we want to involve as many new sectors as possible. We’re also likely to hit a thousand members of our network: we’ll have a little celebration when we reach that milestone. We’ll be running a 2025 showcase event as well so there’s a lot to look forward to.
This article is an edited version of an interview on the Physics World Weekly podcast of 5 December 2024
As I write this [and don’t tell the Physics World editors, please] I’m half-watching out of the corner of my eye the quirky French-made, video-game spin-off series Rabbids Invasion. The mad and moronic bunnies (or, in a nod to the original French, Les Lapins Crétins) are presently making another attempt to reach the Moon – a recurring yet never-explained motif in the cartoon – by stacking up a vast pile of junk; charming chaos ensues.
As explained in LUNAR: A History of the Moon in Myths, Maps + Matter – the exquisite new Thames & Hudson book that presents the stunning Apollo-era Lunar Atlas alongside a collection of charming essays – madness has long been associated with the Moon. One suspects there was a good kind of mania behind the drawing up of the Lunar Atlas, a series of geological maps plotting the rock formations on the Moon’s surface that are as much art as they are a visualization of data. And having drooled over LUNAR, truly the crème de la crème of coffee table books, one cannot help but become a little mad for the Moon too.
Many faces of the Moon
As well as an exploration of the Moon’s connections (both etymologically and philosophically) to lunacy by science writer Kate Golembiewski, the varied and captivating essays of 20 authors collected in LUNAR run the gamut from the Moon’s role in ancient times (did you know that the Greeks believed that the souls of the dead gather around the Moon?) through to natural philosophy, eclipses, the space race and the Artemis Programme. My favourite essays were the more off-beat ones: the Moon in silent cinema, for example, or its fascinating influence on “cartes de visite”, the short-lived 19th-century miniature images whose popularity was boosted by Queen Victoria and Prince Albert. (I, for one, am now quite resolved to have my portrait taken with a giant, stylised, crescent moon prop.)
The pulse of LUNAR, however, is the breathtaking reproductions of all 44 of the exquisitely hand-drawn 1:1,000,000 scale maps – or “quadrangles” – that make up the US Geological Survey (USGS)/NASA Lunar Atlas (see header image).
Drawn up between 1962 and 1974 by a team of 24 cartographers, illustrators, geographers and geologists, the astonishing Lunar Atlas captures the entirety of the Moon’s near side, every crater and lava-filled mare (“sea”), every terra (highland) and volcanic dome. The work began as a way to guide the robotic and human exploration of the Moon’s surface and was soon augmented with images and rock samples from the missions themselves.
One would be hard-pushed to sum it up better than the American science writer Dava Sobel, who pens the book’s foreword: “I’ve been to the Moon, of course. Everyone has, at least vicariously, visited its stark landscapes, driven over its unmarked roads. Even so, I’ve never seen the Moon quite the way it appears here – a black-and-white world rendered in a riot of gorgeous colours.”
Many moons ago
Having been trained in geology, the sections of the book covering the history of the Lunar Atlas piqued my particular interest. The Lunar Atlas was not the first attempt to map the surface of the Moon; one of the reproductions in the book shows an earlier effort from 1961 drawn up by USGS geologists Robert Hackman and Eugene Shoemaker.
Hackman and Shoemaker’s map shows the Moon’s Copernicus region, named after its central crater, which in turn honours the Renaissance-era Polish polymath Nicolaus Copernicus. It served as the first demonstration that the geological principles of stratigraphy (the study of rock layers) as developed on the Earth could also be applied to other bodies. The duo started with the law of superposition; this is the principle that when one finds multiple layers of rock, unless they have been substantially deformed, the older layer will be at the bottom and the youngest at the top.
“The chronology of the Moon’s geologic history is one of violent alteration,” explains science historian Matthew Shindell in LUNAR’s second essay. “What [Hackman and Shoemaker] saw around Copernicus were multiple overlapping layers, including the lava plains of the maria […], craters displaying varying degrees of degradations, and materials and features related to the explosive impacts that had created the craters.”
From these the pair developed a basic geological timeline, unpicking the recent history of the Moon one overlapping feature at a time. They identified five eras, with the Copernican, named after the crater and beginning 1.1 billion years ago, being the most recent.
Considering it was based on observations of just one small region of the Moon, their timescale was remarkably accurate, Shindell explains, although subsequent observations have redefined its stratigraphic units – for example by adding the Pre-Nectarian as the earliest era (predating the formation of Nectaris, the oldest basin), whose rocks can still be found broken up and mixed into the lunar highlands.
Accordingly, the different quadrangles of the atlas very much represent an evolving work, developing as lunar exploration progressed. Later maps tended to be more detailed, reflecting a more nuanced understanding of the Moon’s geological history.
New moon
Parts of the Lunar Atlas have recently found new life in the development of the first-ever complete map of the lunar surface, the “Unified Geologic Map of the Moon”. The new digital map combines the Apollo-era data with that from more recent satellite missions, including the Japan Aerospace Exploration Agency (JAXA)’s SELENE orbiter.
As former USGS Director and NASA astronaut Jim Reilly said when the unified map was first published back in 2020: “People have always been fascinated by the Moon and when we might return. So, it’s wonderful to see USGS create a resource that can help NASA with their planning for future missions.”
I might not be planning a Moon mission (whether by rocket or teetering tower of clutter), but I am planning to give the stunning LUNAR pride of place on my coffee table next time I have guests over – that’s how much it’s left me, ahem, “over the Moon”.
Researchers at the University of Strathclyde, UK, have developed a new method to recycle the valuable semiconductor colloidal quantum dots used to fabricate supraparticle lasers. The recovered particles can be reused to build new lasers with a photoluminescence quantum yield almost as high as lasers made from new particles.
Supraparticle lasers are a relatively new class of micro-scale lasers that show much promise in applications such as photocatalysis, environmental sensing, integrated photonics and biomedicine. The active media in these lasers – the supraparticles – are made by assembling and densely packing colloidal quantum dots (CQDs) in the microbubbles formed in a surfactant-stabilized oil-and-water emulsion. The underlying mechanism is similar to the way that dish soap, cooking oil and water mix when we do the washing up, explains Dillon H Downie, a physics PhD student at Strathclyde and a member of the research team led by Nicolas Laurand.
Supraparticles have a high refractive index compared to their surrounding medium. Thanks to this difference, light at the interface between them experiences total internal reflection. This means that when the diameter of the supraparticles is an integer multiple of the wavelength of the incident light, so-called whispering gallery modes (resonant light waves that travel around a concave boundary) form within the supraparticles.
“The supraparticles are therefore microresonators made of an optical gain material (the quantum dots),” explains Downie, “and individual supraparticles can be made to lase by optically pumping them.”
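To put a rough number on those resonances, here is a minimal sketch (not from the paper) using the textbook first-order whispering-gallery condition, which fits an integer number m of wavelengths around the sphere’s circumference: m·λ ≈ π·D·n, where D is the diameter and n the refractive index. The 5 µm diameter and index of 1.8 below are illustrative assumptions, not values from the study:

```python
import math

def wgm_wavelengths_nm(diameter_um: float, n_index: float, modes=range(40, 46)):
    """Approximate free-space whispering-gallery resonances (in nm) for a
    dielectric microsphere, using the first-order condition m*lambda = pi*D*n."""
    optical_path_nm = math.pi * diameter_um * 1e3 * n_index  # optical circumference
    return {m: optical_path_nm / m for m in modes}

# Hypothetical 5 um supraparticle with an effective refractive index of 1.8
for m, lam in wgm_wavelengths_nm(5.0, 1.8).items():
    print(f"m = {m}: {lam:.0f} nm")
```

For these illustrative numbers the m = 45 mode lands near 628 nm, i.e. in the red emission band typical of many colloidal quantum dots, which is why micron-scale supraparticles can lase at visible wavelengths.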
The problem is that many CQDs are made from expensive and sometimes toxic elements. Demand for these increasingly scarce elements will likely outstrip supply before the end of this decade, but at present, only 2% of quantum dots made from these rare-earth elements are recycled. While researchers have been exploring ways of recovering them from electronic waste, the techniques employed often require specialized instruments, complex bio-metallurgical absorbents and hazardous acid-leaching processes. A more environmentally friendly approach is thus sorely needed.
Exceptional recycling potential
In the new work, Laurand, Downie and colleagues recycled supraparticle lasers by first disassembling the CQDs in them. They did this by suspending the dots in an oil phase and applying ultrasonic high-frequency sound waves and heat. They then added water to separate out the dots. Finally, they filtered and purified the disassembled CQDs and tested their fluorescence efficiency before reassembling them into a new laser configuration.
Using this process, the researchers were able to recover 85% of the quantum dots from the initial supraparticle batch. They also found that the recycled quantum dots boasted a photoluminescence quantum yield of 83 ± 16%, which is comparable to the 86 ± 9% for the original particles.
“By testing the lasers’ performance both before and after this process we confirmed their exceptional recycling potential,” Downie says.
Simple, practical technique
Downie describes the team’s technique as simple and practical even for research labs that lack specialized equipment such as centrifuges and scrubbers. He adds that it could also be applied to other self-assembled nanocomposites.
“As we expect nanoparticle aggregates in everything from wearable medical devices to ultrabright LEDs in the future, it is, therefore, not inconceivable that some of these could be sent back for specialized recycling in the same way we do with commercial batteries today,” he tells Physics World. “We may even see a future where rare-earth or some semiconductor elements become critically scarce, necessitating the recycling for any and all devices containing such valuable nanoparticles.”
By proving that supraparticles are reusable, Downie adds, the team’s method provides “ample justification” to anyone wishing to incorporate supraparticle technology into their devices. “This is seen as especially relevant if they are to be used in biomedical applications such as targeted drug delivery systems, which would otherwise be limited to single-use,” he says.
With work on colloidal quantum dots and supraparticle lasers maturing at an incredible rate, Downie adds that it is “fantastic to be able to mature the process of their recycling alongside this progress, especially at such an early stage in the field”.
An international team of physicists has used the principle of entanglement entropy to examine how particles are produced in high-energy electron–proton collisions. Led by Kong Tu at Brookhaven National Laboratory in the US, the researchers showed that quarks and gluons in protons are deeply entangled and approach a state of maximum entanglement when they take part in high-energy collisions.
While particle physicists have made significant progress in understanding the inner structures of protons, neutrons, and other hadrons, there is still much to learn. Quantum chromodynamics (QCD) says that the proton and other hadrons comprise quarks, which are tightly bound together via exchanges of gluons – mediators of the strong force. However, using QCD to calculate the properties of hadrons is notoriously difficult except under certain special circumstances.
Calculations can be simplified by describing the quarks and gluons as partons in a model that was developed in the late 1960s by James Bjorken, Richard Feynman, Vladimir Gribov and others. “Here, all the partons within a proton appear ‘frozen’ when the proton is moving very fast relative to an observer, such as in high-energy particle colliders,” explains Tu.
Dynamic and deeply complex interactions
While the parton model is useful for interpreting the results of particle collisions, it cannot fully capture the dynamic and deeply complex interactions between quarks and gluons within protons and other hadrons. These interactions are quantum in nature and therefore involve entanglement. This is a purely quantum phenomenon whereby a group of particles can be more highly correlated than is possible in classical physics.
“To analyse this concept of entanglement, we utilize a tool from quantum information science named entanglement entropy, which quantifies the degree of entanglement within a system,” Tu explains.
In physics, entropy is used to quantify the degree of randomness and disorder in a system. However, it can also be used in information theory to measure the degree of uncertainty within a set of possible outcomes.
“In terms of information theory, entropy measures the minimum amount of information required to describe a system,” Tu says. “The higher the entropy, the more information is needed to describe the system, meaning there is more uncertainty in the system. This provides a dynamic picture of a complex proton structure at high energy.”
Deeply entangled
In this context, particles in a system with high entanglement entropy will be deeply entangled – whereas those in a system with low entanglement entropy will be mostly uncorrelated.
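For a flavour of what this quantity measures, here is a generic two-qubit textbook example (not the team’s QCD calculation): the entanglement entropy of a pure two-qubit state has a simple closed form in terms of the concurrence.

```python
import math

def entanglement_entropy(a, b, c, d):
    """Entanglement entropy (in bits) of the two-qubit pure state
    a|00> + b|01> + c|10> + d|11>, assumed normalized.
    The concurrence is C = 2|ad - bc|, and the reduced density
    matrix of one qubit has eigenvalues (1 +/- sqrt(1 - C^2))/2."""
    C = 2 * abs(a * d - b * c)
    lam_plus = (1 + math.sqrt(max(0.0, 1 - C * C))) / 2
    lam_minus = 1 - lam_plus
    S = 0.0
    for lam in (lam_plus, lam_minus):
        if lam > 1e-12:  # skip zero eigenvalues (0 log 0 = 0)
            S -= lam * math.log2(lam)
    return S

# Maximally entangled Bell state (|00> + |11>)/sqrt(2)
s = 1 / math.sqrt(2)
print(entanglement_entropy(s, 0, 0, s))  # -> 1.0 (maximum for a qubit pair)
# Product state |00>
print(entanglement_entropy(1, 0, 0, 0))  # -> 0.0 (no entanglement)
```

The Bell state saturates the one-bit maximum while the product state gives zero, mirroring the distinction between deeply entangled and mostly uncorrelated systems.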
In recent studies, entanglement entropy has been used to describe how hadrons are produced through deep inelastic scattering interactions – such as when an electron or neutrino collides with a hadron at high energy. However, the evolution with energy of entanglement entropy within protons had gone largely unexplored. “Before we did this work, no one had looked at entanglement inside of a proton in experimental high-energy collision data,” says Tu.
Now, Tu’s team has investigated how entanglement entropy varies with the speed of the proton – and how this relationship relates to the hadrons created during inelastic collisions.
Matching experimental data
Their study revealed that the equations of QCD can accurately predict the evolution of entanglement entropy – with their results closely matching experimental collision data. Perhaps most strikingly, they discovered that as entanglement entropy increases at high energies, it can approach a state of maximum entanglement under certain conditions. This high degree of entropy is evident in the large numbers of particles that are produced in electron–proton collisions.
The researchers are now confident that their approach could lead to further insights about QCD. “This method serves as a powerful tool for studying not only the structure of the proton, but also those of the nucleons within atomic nuclei,” Tu explains. “It is particularly useful for investigating the underlying mechanisms by which nucleons are modified in the nuclear environment.”
In the future, Tu and colleagues hope that their model could boost our understanding of processes such as the formation and fragmentation of hadrons within the high-energy jets created in particle collisions, and the resulting shift in parton distributions within atomic nuclei. Ultimately, this could lead to a fresh perspective on the inner workings of QCD.
A new foldable “bottlebrush” polymer network is both stiff and stretchy – two properties that have been difficult to combine in polymers until now. The material, which has a Young’s modulus of 30 kPa even when stretched up to 800% of its original length, could be used in biomedical devices, wearable electronics and soft robotics systems, according to its developers at the University of Virginia School of Engineering and Applied Science in the US.
Polymers are made by linking together building blocks of monomers into chains. To make polymers elastic, these chains are crosslinked by covalent chemical bonds. The crosslinks connect the polymer chains so that when a force is applied to stretch the polymer, it recovers its shape when the force is removed.
A polymer can be made stiffer by adding more crosslinks, which shortens the polymer chains. The stiffness increases because the crosslinks suppress the thermal fluctuations of network strands, but this also makes the material brittle. This limitation has held back the development of materials that need both stiffness and stretchability, says materials scientist and engineer Liheng Cai, who led this new research effort.
Foldable bottlebrush polymers
In their new work, the researchers hypothesized that foldable bottlebrush-like polymers might not suffer from this problem. These polymers consist of many densely packed linear side chains randomly separated by small spacer monomers. There is a prerequisite, however: the side chains need to have a relatively high molecular weight (MW) and a low glass transition temperature (Tg) while the spacer monomer needs to be low MW and incompatible with the side chains. Achieving this requires control over the incompatibility between backbones and side chain chemistries, explains Baiqiang Huang, who is a PhD student in Cai’s group.
The researchers discovered that two materials, poly(dimethyl siloxane) (PDMS) and benzyl methacrylate (BnMA), fit the bill here. PDMS is used as the side chain material and BnMA as the spacer monomer. The two are highly incompatible and have very different Tg values of −100°C and 54°C, respectively.
When stretched, the collapsed backbone in the polymer unfolds to release the stored length, so allowing it to be “remarkably extensible”, write the researchers in Science Advances. In contrast, the stiffness of the material changes little thanks to the molecular properties of the side chains in the polymer, says Huang. “Indeed, in our experiments, we demonstrated a significant enhancement in mechanical performance, achieving a constant Young’s modulus of 30 kPa and a tensile breaking strain that increased 40-fold, from 20% to 800%, compared to standard polymers.”
And that is not all: the design of the new foldable bottlebrush polymer means that stiffness and stretchability can be controlled independently in a material for the first time.
Potential applications
The work will be important for when it comes to developing next-generation materials with tailored mechanical properties. According to the researchers, potential applications include durable and flexible prosthetics, high-performance wearable electronics and stretchable materials for soft robotics and medical implants.
Looking forward, the researchers say they will now be focusing on optimizing the molecular structure of their polymer network to fine-tune its mechanical properties for specific applications. They also aim to incorporate functional metallic nanoparticles into the networks, so creating multifunctional materials with specific electrical, magnetic or optical properties. “These efforts will extend the utility of foldable bottlebrush polymer networks to a broader range of applications,” says Cai.
Popularized in the late 1950s as a child’s toy, the hula hoop is undergoing renewed interest as a fitness activity and performance art. But have you ever wondered how a hula hoop stays aloft against the pull of gravity?
Wonder no more. A team of researchers at New York University have investigated the forces involved as a hoop rotates around a gyrating body, aiming to explain the physics and mathematics of hula hooping.
To determine the conditions required for successful hula hoop levitation, Leif Ristroph and colleagues conducted robotic experiments with hoops twirling around various shapes – including cones, cylinders and hourglass shapes. The 3D-printed shapes had rubberized surfaces to achieve high friction with a thin, rigid plastic hoop, and were driven to gyrate by a motor. The researchers launched the hoops onto the gyrating bodies by hand and recorded the resulting motion using high-speed videography and motion tracking algorithms.
They found that successful hula hooping is dependent on meeting two conditions. Firstly, the hoop orbit must be synchronized with the body gyration. This requires the hoop to be launched at sufficient speed and in the same direction as the gyration, after which the outward pull of centrifugal action and the damping due to rolling friction result in stable twirling.
This process, however, does not necessarily keep the hoop elevated at a stable height – any perturbations could cause it to climb or fall away. The team found that maintaining hoop levitation requires the gyrating body to have a particular “body type”, including an appropriately angled or sloped surface – the “hips” – plus an hourglass-shaped profile with a sufficiently curved “waist”.
Indeed, in the robotic experiments, an hourglass-shaped body enabled steady-state hula hooping, while the cylinders and cones failed to successfully hula hoop.
The researchers also derived dynamical models that relate the motion and shape of the hoop and body to the contact forces generated. They note that their findings can be generalized to a wide range of different shapes and types of motion, and could be used in “robotic applications for transforming motions, extracting energy from vibrations, and controlling and manipulating objects without gripping”.
“We were surprised that an activity as popular, fun and healthy as hula hooping wasn’t understood even at a basic physics level,” says Ristroph in a press statement. “As we made progress on the research, we realized that the maths and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving in robotic positioners and movers used in industrial processing and manufacturing.”
Our guest in this episode of the Physics World Weekly podcast is the Turkish quantum physicist Mete Atatüre, who heads up the Cavendish Laboratory at the UK’s University of Cambridge.
In a conversation with Physics World’s Katherine Skipper, Atatüre talks about hosting Quantour, the quantum light source that is IYQ’s version of the Olympic torch. He also talks about his group’s research on quantum sensors and quantum networks.
Some of our understanding of Uranus may be false, say physicists at NASA’s Jet Propulsion Laboratory who have revisited Voyager 2 data before and after its 1986 flyby of this ice-giant planet. The new analyses could shed more light on some of the mysterious and hitherto unexplainable measurements made by the spacecraft. For example, why did it register a strongly asymmetric, plasma-free magnetosphere – something that is unheard of for planets in our solar system – and belts of highly energetic electrons?
Voyager 2 reached Uranus, the seventh planet in our solar system, 38 years ago. The spacecraft gathered its data in just five days and the discoveries from this one and, so far, only flyby provide most of our understanding of this ice giant. Two major findings that delighted astronomers were its 10 new moons and two rings. Other observations perplexed researchers, however.
One of these, explains Jamie Jasinski, who led this new study, was the observation of the second most intense electron radiation belt after Jupiter’s. How such a belt could be maintained or even exist at Uranus lacked an explanation until now. “The other mystery was that the magnetosphere did not have any plasma,” he says. “Indeed, we have been calling the Uranian magnetosphere a ‘vacuum magnetosphere’ because of how empty it is.”
Unrepresentative conditions
These observations, however, may not be representative of the conditions that usually prevail at Uranus, Jasinski explains, because they were simply made during an anomalous period. Indeed, just before the flyby, unusual solar activity squashed the planet’s magnetosphere down to about 20% of its original volume. Such a situation exists only very rarely and was likely responsible for creating a plasma-free magnetosphere with the observed highly excited electron radiation belts.
Jasinski and colleagues came to their conclusions by analysing Voyager 2 data of the solar wind (a stream of charged particles emanating from the Sun) upstream of Uranus for the few days before the flyby started. They saw that the dynamic pressure of the solar wind increased by a factor of 20, meaning that it dramatically compressed the magnetosphere of Uranus. They then looked at eight months of solar wind data obtained by the spacecraft at Uranus’ orbit and found that the solar wind conditions present during the flyby only occur 4% of the time.
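The reported compression is consistent with the standard pressure-balance scaling for a dipole magnetosphere, in which the magnetopause standoff distance varies as the solar wind dynamic pressure to the −1/6 power. The sketch below uses that textbook scaling, not the team’s detailed analysis:

```python
# Pressure-balance estimate of magnetospheric compression (illustrative
# sketch only). For a dipole field, the magnetopause standoff distance
# scales as r ~ P_dyn**(-1/6), so the enclosed volume scales as
# r**3 ~ P_dyn**(-1/2).
pressure_increase = 20  # factor reported just before the Voyager 2 flyby

standoff_ratio = pressure_increase ** (-1 / 6)  # new r / old r
volume_ratio = pressure_increase ** (-1 / 2)    # new V / old V

print(f"standoff shrinks to {standoff_ratio:.0%} of its original distance")
print(f"volume shrinks to {volume_ratio:.0%} of its original size")
```

The volume ratio comes out at roughly 22% – in line with the “about 20% of its original volume” figure quoted above.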
“The flyby therefore occurred during the maximum peak solar wind intensity in that entire eight-month period,” explains Jasinski.
The scientific picture we have of Uranus since the Voyager 2 flyby is that it has an extreme magnetospheric environment, he says. But maybe the flyby just happened to occur during some strange activity rather than it being like that generally.
The timing was just wrong
Jasinski previously worked on NASA’s MESSENGER mission to Mercury. Out of the thousands of orbits made by this spacecraft around the planet over a four-year period, there were occasional times where activity from the Sun completely eroded the entire magnetic field. “That really highlighted for me that if we had made an observation during one of those events, we would have a very different idea of Mercury.”
Following this line of thought, Jasinski asked himself whether we had simply observed Uranus during a similar anomalous time. “The Voyager 2 flyby lasted just five days, so we may have observed Uranus at just the ‘wrong time’,” he says.
One of the most important take-home messages from this study is that we can’t take the results from just one flyby as being a good representation of the Uranus system, he tells Physics World. Future missions must therefore be designed so that a spacecraft remains in orbit for a few years, enabling variations to be observed over long time periods.
Why we need to go back to Uranus
One of the reasons that we need to go back to Uranus, Jasinski says, is to find out whether any of its moons have subsurface liquid oceans. To observe such oceans with a spacecraft, the moons need to be inside the magnetosphere. This is because the magnetosphere, as it rotates, provides a predictable, steadily varying magnetic field at the moon. This field can then induce a magnetic field response from the ocean that can be measured by the spacecraft. The conductivity of the ocean – and therefore the magnetic signal from the moon – will vary with the depth, thickness and salinity of the ocean.
If the moon is outside the magnetosphere, this steady and predictable external field does not exist and it can no longer drive the induction response. We cannot, therefore, detect a magnetic field from the ocean if the moon is outside the magnetosphere.
Before these latest results, researchers thought that the outermost moons, Titania and Oberon, would spend a significant part of their orbit around the planet outside of the magnetosphere, Jasinski explains. This is because we thought that Uranus’s magnetosphere was generally small. However, in light of the new findings, this is probably not true and both moons will orbit inside the magnetosphere since it is much larger than previously thought.
Titania and Oberon are the most likely candidates for harbouring oceans, he adds, because they are slightly larger than the other moons. This means that they can retain heat better and therefore be warmer and less likely to be completely frozen.
“A future mission to Uranus is critical in collecting the scientific measurements to answer some of the most intriguing science questions in our solar system,” says Jasinski. “Only by going back to Uranus and orbiting the planet can we really gain an understanding of this curious planet.”
Happily, in 2022 the US National Academies recommended a Uranus Orbiter and Probe as a flagship mission for NASA to prioritize in the coming decade. Such a mission would help us unravel the nature of Uranus’s magnetosphere and its interaction with the planet’s atmosphere, moons and rings, and with the solar wind. “Of course, modern instrumentation would also revolutionize the type of discoveries we would make compared to previous missions,” says Jasinski.
From squirting cucumbers to cosmic stamps, physics has had its fair share of quirky stories this year. Here is our pick of the best 10, not in any particular order.
Escape from quantum physics
Staff at the clunkily titled Dresden-Würzburg Cluster of Excellence for Complexity and Topology in Quantum Matter (ct.qmat) had already created a mobile phone app “escape room” to teach children about quantum mechanics. But this year the app became reality at Dresden’s science museum. Billed as “Germany’s first quantum physics escape room”, the Kitty Q Escape Room has four separate rooms and 17 puzzles that offer visitors a multisensory experience exploring the quirky world of quantum mechanics. The goal for participants is to discover whether Kitty Q – an imaginary being that embodies the spirit of Schrödinger’s cat – is dead or alive. Described as “perfect for family outings, children’s birthday parties and school field trips”, the escape room “embraces modern gamification techniques”, according to ct.qmat physicist Matthias Vojta. “We ensure that learning happens in an engaging and subtle way,” he says. “The best part [is] you don’t need to be a maths or physics expert to enjoy the game.”
Corking research
Coffee might be the drink of choice for physicists, but when it comes to studying the fascinating physics of liquids, champagne is hard to beat. That’s mostly because of the huge pressures inside the bottle and the explosion of bubbles that are released once the cork is removed. Experiments have already examined the expanding gas jet that propels the cork stopper out of a just-opened bottle and the shock waves that radiate up the neck. Now physicists in Austria have looked at the theory of how these supersonic waves move. The “Mach disc” that forms just outside the bottle opening is, they found, convex and travels away from the bottle opening before moving back towards it. A second Mach disc then forms when the first disc moves back, although it’s not clear if this splits from the first or is a distinct disc. Measuring the distance of the Mach disc from the bottle also provides a way to determine the gas pressure or temperature in the champagne bottle.
Cosmic stamps
We love a good physics or astronomy stamp here at Physics World and this year’s offering from the US Postal Service didn’t disappoint. In January, they released two stamps to mark the success of NASA’s James Webb Space Telescope (JWST), which took off in 2021. The first features an image taken by the JWST’s Near-Infrared Camera of the “Cosmic Cliffs” in the Carina Nebula, located about 7600 light-years from Earth. The other stamp has an image of the iconic Pillars of Creation within the vast Eagle Nebula, which lies 6500 light-years away, captured by the JWST’s Mid-Infrared Instrument. “With these stamps, people across the country can have their own snapshot of Webb’s captivating images at their fingertips,” noted NASA’s head of science, the British-born physicist Nicola Fox.
Record-breaking cicadas
This year marked the first time in more than 200 years that two broods belonging to two species of cicadas emerged at the same time. And the cacophony that the insects are famous for wasn’t the only aspect to watch out for. Researchers at Georgia Tech in the US examined another strange aspect of these creatures – how they wee. We know that most insects urinate via droplets as this is more energy efficient than emitting a stream of liquid. But cicadas are such voracious eaters of tree sap that individually flicking each drop away would be too taxing. To get around this problem, cicadas (just as we do) eject the pee via a jet, which the Georgia Tech scientists looked at for the first time. “Previously, it was understood that if a small animal wants to eject jets of water, then this [is] challenging, because the animal expends more energy to force the fluid’s exit at a higher speed,” says Elio Challita, who is based at Harvard University. “This is due to surface tension and viscous forces. But a larger animal can rely on gravity and inertial forces to pee.” According to the team, cicadas are the smallest animal to create such high-speed jets – a finding that could, say the researchers, lead to the design of better nozzles and robots.
Raising the bar
Machine learning was a big topic this year thanks to the 2024 Nobel prizes for both physics and chemistry. Not to be outdone, scientists from Belgium announced they had used machine-learning algorithms to predict the taste and quality of beer and what compounds brewers could use to improve the flavour of certain tipples. Kevin Verstrepen from KU Leuven and colleagues spent five years characterizing over 200 chemical properties from 250 Belgian commercial beers across 22 different styles, such as Blond and Tripel beers. They also gathered tasting notes from a panel of 15 people and from the RateBeer online beer review database. A machine-learning model that was trained on the data could predict the flavour and score of the beers using just the beverages’ chemical profile. By adding certain aromas predicted by the model, the team was even able to boost the quality – as determined by blind tasting – of existing commercial Belgian ale. The scientists hope the findings could be used to improve alcohol-free beer. Yet KU Leuven researcher Michiel Schreurs admits that they did celebrate the work “with the alcohol-containing variants”.
Beetling away
Whirligig beetles can reach speeds of up to 1 m/s – or 100 body lengths per second – as they skim across the water. Scientists thought the animals did this using their oar-like hind legs to generate “drag-based” thrust, a bit like how a rodent swims. To do so, however, the beetle would need to move its legs faster than its swimming speed, which in turn would require pushing against the water at unrealistic speeds. To solve this bugging problem, researchers at Cornell University used high-speed cameras to film the whirligigs as they swam. They found that the beetles instead use lift-based thrust, which has been documented in whales, dolphins and sea lions. The thrusting motion is perpendicular to the water surface and the researchers calculate that the forces generated by the beetle in this way can explain their speedy movements in the water. According to Cornell’s Yukun Sun, that makes whirligig beetles “by far the smallest organism to use lift-based thrust for swimming”.
Pistachio packing problem
It sounds like a question you might get in an exam: if you have a full bowl of N pistachios, what size container do you need for the leftover 2N non-edible shells? Given that pistachios come in different shapes and sizes and the shells are non-symmetric, the problem’s a tougher nut to crack than you might think. Thankfully, the secret of pistachio-packing was revealed in a series of experiments by physicists Ruben Zakine and Michael Benzaquen from École Polytechnique in Palaiseau, France. After placing 613 pistachios in a two-litre cylinder, they found that the container holding the shells needs to be just over half the size of the original pistachio bowl for well-packed nuts and three-quarters for loosely packed pistachios. Zakine and Benzaquen say that numerical simulations could be carried out to compare with the experimental findings and that the work extends beyond just nuts. “Our analysis can be relevant in other situations, for instance to determine the optimal container needed [for] mussel or oyster shells after a Pantagruelian seafood dinner,” they claim.
The physics of paper cuts
If you’ve ever been on the receiving end of a paper cut, you’ll know how painful it can be. To find out why paper is able to slice through skin so well, Kaare Jensen – a physicist from the Technical University of Denmark – and colleagues carried out a series of experiments using paper with a range of thicknesses to make incisions into a piece of gelatine at various angles. When combined with modelling, they discovered that paper cuts are a competition between slicing and “buckling”. Thin paper with a thickness of about 30 microns doesn’t cut skin so well because it buckles – a mechanical instability that happens when a slender object like paper is compressed. But thick paper (above about 200 microns) is poor at making an incision because it distributes the load over a greater area, resulting in only small indentations. The team discovered, however, that there is a paper cut “sweet spot” at around 65 microns, which just happens to be close to the paper thickness used in print magazines. The researchers have now put their findings to use, creating a 3D-printed scalpel that uses scrap paper for the cutting edge. Dubbed a “papermachete”, it can slice through apple, banana peel, cucumber and even chicken. “Studying the physics of paper cuts has revealed a surprising potential use for paper in the digital age: not as a means of information dissemination and storage, but rather as a tool of destruction,” the researchers write.
Squirting cucumbers
The plant kingdom is full of intriguing ways to distribute seeds, such as the dandelion pappus drifting effortlessly on air currents. Not to be outdone, the squirting cucumber (Ecballium elaterium), which is native to the Mediterranean and is often regarded as a weed, has its own unique way of ejecting seeds. When ripe, the ovoid-shaped fruit detaches from the stem and, as it does so, explosively ejects its seeds in a high-pressure jet of mucilage. The process, which lasts just 30 ms, launches the seeds at more than 20 m/s, with some landing 10 m away. Researchers in the UK revealed the mechanism behind the squirt for the first time using high-speed imaging and mathematical modelling. They found that in the weeks leading up to the ejection, fluid builds up inside the fruit so that it becomes pressurized. Then, just before seed dispersal, some of this fluid moves from the fruit to the stem, making it longer and stiffer. This process crucially causes the fruit to rotate from being vertical to an angle of close to 45°, improving the launch angle for the seeds. During the first milliseconds of ejection, the tip of the stem holding the fruit recoils away, causing the pod to counter-rotate and detach. As it does so, the pressure inside the fruit causes the seeds to eject at high speed. Changing parameters in the model, such as the stiffness of the stem, reveals that the mechanism has been fine-tuned to ensure optimal seed dispersal.
Chimp Shakespeare
And finally, according to the infinite monkeys theorem, a monkey randomly pressing keys on a typewriter for an infinite amount of time would eventually type out the complete works of William Shakespeare purely by chance. Yet analysis by two mathematicians in Australia found that even a troop might not have time to do so within the supposed lifespan of the universe. The researchers came to their conclusion after creating a computational model that assumed a constant chimpanzee population of 200 000, each typing at one key per second until the end of the universe in about 10¹⁰⁰ years. Under those assumptions, there’d be only a 5% chance that a single monkey would type “bananas” within its own lifetime of just over 30 years. And even all the chimps feverishly typing away couldn’t produce Shakespeare’s entire works (coming in at over 850 000 words) before the universe ends. “It is not plausible that, even with improved typing speeds or an increase in chimpanzee populations, monkey labour will ever be a viable tool for developing non-trivial written works,” the authors conclude, adding that while the infinite monkeys theorem is true, it is also “somewhat misleading” – or, in reality, it’s “not to be”.
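A back-of-the-envelope version of the “bananas” estimate takes only a few lines. The keyboard size, typing rate and lifetime below are plausible assumptions, not necessarily the paper’s exact values:

```python
import math

# Rough estimate of one monkey typing "bananas" in its lifetime.
# Assumptions (illustrative, not the paper's exact model): a 30-key
# keyboard, one keystroke per second, a ~30-year working lifetime.
keys = 30
word_len = len("bananas")             # 7 letters
p_word = (1 / keys) ** word_len       # chance of the word at a given spot
keystrokes = 30 * 365.25 * 24 * 3600  # seconds (= keystrokes) in 30 years

expected_hits = keystrokes * p_word
p_at_least_once = 1 - math.exp(-expected_hits)  # Poisson approximation

print(f"P(types 'bananas' in a lifetime) ~ {p_at_least_once:.1%}")
```

This lands at roughly 4% – the same ballpark as the 5% figure quoted from the study.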
You can be sure that next year will throw up its fair share of quirky stories from the world of physics. See you next year!
The past few years have seen several missions to the Moon and that continued in 2024. Yet things didn’t get off to a perfect start. In 2023, the Japanese Space Agency, JAXA, launched its Smart Lander for Investigating Moon (SLIM) mission to the Moon. Yet when it landed in January, it did so upside down. Despite that slight mishap, Japan still became the fifth nation to successfully soft land a craft on the Moon, following the US, Soviet Union, China and India.
In February, meanwhile, US firm Intuitive Machines achieved a significant milestone when it became the first private mission to soft land on the Moon. Its Odysseus mission touched down on the Moon’s Malapert A region, a small crater about 300 km from the lunar south pole. In doing so it also became the first US mission to make a soft landing on the Moon since Apollo 17 in December 1972.
Astronomy is unique in having a significant amateur community, and while radio astronomy emerged from amateur beginnings, it is now the focus of large international consortia. In this fascinating feature, astrophysicist and amateur radio astronomer Emma Chapman from the University of Nottingham, UK, outlined how the subject developed and why it needs to strike a fine balance between its science and engineering roots. And also make sure not to miss Chapman discussing the history of radio astronomy on the Physics World Stories podcast.
Hidden stories
Still on the podcast front, this Physics World Stories podcast from this year features a fascinating chat with astronaut Eileen Collins, who shared her extraordinary journey as the first woman to pilot and command a spacecraft. In the process, she broke several barriers in space exploration and inspired generations with her courage and commitment to discovery.
Euclid’s spectacular images
Astronomy and spectacular images go hand in hand and this year didn’t disappoint. While the James Webb Space Telescope continued to amaze, in May the European Space Agency released five spectacular images of the cosmos along with 10 scientific papers as part of Euclid’s early release observations. Euclid’s next data release will focus on its primary science objectives and is currently slated for March 2025, so keep an eye out for those next year.
What makes a physics story popular? The answer is partly hidden in the depths of Internet search algorithms, but it’s possible to discern a few trends in this list of the 10 most read stories published on the Physics World website in 2024. Well, one trend, at least: it seems that many of you really, really like stories about quantum physics. Happily, we’ll be publishing lots more of them in 2025, the International Year of Quantum Science and Technology. But in the meantime, here are 2024’s most popular stories – quantum and otherwise.
As main characters in quantum thought experiments go, Wigner’s friend isn’t nearly as well-known as Schrödinger’s cat. While the alive-or-dead feline was popularized in the mid-20th century by the science fiction and fantasy writer Ursula K Le Guin, Wigner and his best mate remain relatively obscure, and unlikely to appear in an image created with entangled light (more on this later). Still, there’s plenty to ponder in this lesser-known thought experiment, which provocatively suggests that in the quantum world, what’s true may depend, quite literally, on where you stand: with Wigner’s friend inside a lab performing the quantum experiment, or with Wigner outside it awaiting the results.
Popularity isn’t everything. This story focused on a paper about a high-temperature superconducting wire that appeared to have a current density 10 times higher than any previously reported. Unfortunately, the paper’s authors made an error when converting the magnetic units they used to calculate current density. This error – which the authors acknowledged, leading to the paper’s retraction – meant that the current density was too high by… well, by a factor of 10, actually.
Surprisingly, this wasn’t the most blatant factor-of-10 flop to enter the scientific literature this year. That dubious honour belongs to a team of environmental chemists who multiplied 60 kg x 7000 nanograms/kg to calculate the maximum daily dose of potentially harmful chemicals, and got an answer of…42 000 nanograms. Oops.
Remember the entangled-light Schrödinger’s cat image? Well, here it is again, this time in its original context. In an experiment that made it onto our list of the top 10 breakthroughs of 2024, researchers in France used quantum correlations to encode an image into light such that the image only becomes visible when particles of light (photons) are observed by a single-photon sensitive camera. Otherwise, the image is hidden from view. It’s a neat result, and we’re glad you agree it’s worth reading about.
At this time of year, some of us in the Northern Hemisphere feel like we’re inhabiting an icy exoplanet already, and some of you experiencing Southern Hemisphere heat waves probably wish you were. Sadly, none of us is ever going to live on (or even visit) the temperate exoplanet LHS 1140 b, which is located 49 light-years from Earth and has a mass 5.6 times greater than our planet’s. Still, astronomers think this watery, icy world could be only the third planet (after Earth and Mars) in its star’s habitable zone known to have an atmosphere, and that was enough to pique readers’ interest.
An electromagnetic vortex cannon might sound like an accessory from Star Trek. In fact, it’s a real object made from a device called a horn microwave antenna. It gets its name because it generates an electromagnetic field in free space that rotates around the propagation direction of the wave structure, similar to how an air cannon blows out smoke rings. According to its inventors, the electromagnetic vortex cannon could be used to develop communication, sensing, detection and metrology systems that overcome the limitations of existing wireless applications.
Returning to the quantum theme, the fifth-most-read story of 2024 concerned an experiment that demonstrated a new violation of the Leggett-Garg inequality (LGI). While the better-known Bell’s inequality describes how the behaviour of one object relates to that of another object with which it is entangled, the conceptually similar LGI describes how the state of a single object varies at different points in time. If either inequality is violated, the world is quantum. Previous experiments had already observed LGI violations in several quantum systems, but this one showed, for the first time, that neutrons in a beam must be in a coherent superposition of states – a fundamental property of quantum mechanics.
When a scientific paper introduces a concept that goes on to become common knowledge, you might expect later researchers to cite the living daylights out of it – and you would be wrong. According to the study described in this article, the ideas in many such papers become so well known that the opposite happens: no-one bothers to cite them anymore.
This means that purely citation-based metrics of research “impact” tend to underestimate the importance of seminal works such as Alan Guth’s 1981 paper that introduced the theory of cosmic inflation. So if your amazing paper isn’t getting the citation love it deserves, take heart: maybe it’s too foundational for its own good.
Physicists have been trying to produce a theory that incorporates both gravity and quantum mechanics for almost a century now. One of the sticking points is that we don’t really know what a quantum theory of gravity might look like. Presumably, it would have to combine the world of gravity (where space and time warp in the presence of massive objects) with the world of quantum mechanics (which assumes that space and time are fixed) – but how?
For the University College London theorist Jonathan Oppenheim, this is the wrong question. As this article explains, Oppenheim has developed a new theoretical framework that aims to unify quantum mechanics and classical gravity – but, crucially, without the need to define a theory of quantum gravity first.
Can a quantum system remain maximally entangled in a noisy environment? According to Julio I de Vicente from the Universidad Carlos III de Madrid, Spain, the answer is “no”. While the question and its answer might seem rather esoteric, this article explains that the implications extend beyond theoretical physics, with so-called “maximally entangled mixed states” having the potential to revolutionize our approach to other problems in quantum mechanics.
The science fiction writer Arthur C Clarke famously said that “Any sufficiently advanced technology is indistinguishable from magic.” Sadly for Clarke fans, the magic in this article doesn’t involve physicists chanting incantations or waving wands over their experiments. Instead, it refers to quantum states that are especially hard to simulate on classical machines. These so-called “magic” states are a resource for quantum computers, and the amount of them available is a measure of a system’s quantum computational power. Indeed, certain error-correcting codes can improve the quality of magic states in a system, which makes a pleasing connection between this, the most-read article of 2024 on the Physics World website, and our pick for 2024’s “Breakthrough of the year.” See you in 2025!
This year marked the 70th anniversary of the European Council for Nuclear Research, which is known universally as CERN. To celebrate, we have published a bumper crop of articles on particle and nuclear physics in 2024. Many focus on people and my favourite articles have definitely skewed in that direction. So let’s start with the remarkable life of accelerator pioneer Bruno Touschek.
Born in Vienna in 1921 to a Jewish mother, Bruno Touschek’s life changed when Nazi Germany annexed Austria in 1938. After suffering antisemitism in his hometown and then in Rome, he inexplicably turned down an offer to study in the UK and settled in Germany. There he worked on a “death ray” for the military but was eventually imprisoned by the German secret police. He was then left for dead during a forced march to a concentration camp in 1945. When the war ended a few weeks later, Touschek’s expertise came to the attention of the British, who occupied north-western Germany. He went on to become a leading accelerator physicist and you can read much more about the extraordinary life of Touschek in this article by the physicist and biographer Giulia Pancheri.
Today, the best atomic clocks would only be off by about 10 ms after running for the current age of the universe. But could these timekeepers soon be upstaged by clocks that use a nuclear, rather than an atomic, transition? Such nuclear clocks could rival their atomic cousins when it comes to precision and accuracy. They also promise to be fully solid-state, which means that they could be used in a wide range of commercial applications. This year saw physicists make new measurements and develop new technologies that could soon make nuclear clocks a reality. Click on the headline above to discover how physicists in the US have fabricated all of the components needed to create a nuclear clock made from thorium-229. Also, earlier this year physicists in Germany and Austria showed that they can put nuclei of the isotope into a low-lying metastable state that could be used in a nuclear clock. You can find out more here: “Excitation of thorium-229 brings a working nuclear clock closer”.
In 2024 we launched our Physics World Live series of panel discussions. In September, we explored the future of particle physics with Tara Shears of the UK’s University of Liverpool, Phil Burrows at the University of Oxford in the UK and Tulika Bose at the University of Wisconsin–Madison in the US. Moderated by Physics World’s Michael Banks, the discussion focussed on next-generation particle colliders and how they could unravel the mysteries of the Higgs boson and probe beyond the Standard Model of particle physics. You can watch a video of the event by clicking on the above headline (free registration) or read an article based on the discussion here: “How a next-generation particle collider could unravel the mysteries of the Higgs boson”.
Neutrinos do not fit in nicely with the Standard Model of particle physics because of their non-zero masses. As a result some physicists believe that they offer a unique opportunity to do experiments that could reveal new physics. In a wide-ranging interview, the particle physicist Juan Pedro Ochoa-Ricoux explains why he has devoted much of his career to the study of these elusive subatomic particles. He also looks forward to two big future experiments – JUNO and DUNE – which could change our understanding of the universe.
“Children decide quite early in their life, as early as primary school, if science is for them or not,” explains Çiğdem İşsever, who leads the particle physics group at DESY in Hamburg and the experimental high-energy physics group at the Humboldt University of Berlin. İşsever has joined forces with physicists Steven Worm and Becky Parker to create ATLAScraft, a virtual version of CERN’s ATLAS detector built in the hugely popular computer game Minecraft. In this profile, the science writer Rob Lea talks to İşsever about her passion for outreach and how she dispels gender stereotypes in science by talking to school children as young as five about her career in physics. İşsever also looks forward to the future of particle physics and what could eventually replace the Large Hadron Collider as the world’s premier particle-physics experiment.
This year marked the 70th anniversary of the world’s most famous physics laboratory, so the last two items in my list celebrate that iconic facility nestled between the Alps and the Jura mountains. Formed in the aftermath of the Second World War, which devastated much of Europe, CERN came into being on 29 September 1954. That year also saw the start of construction of the Geneva-based lab’s proton synchrotron, which fired up in 1959 with an energy of 24 GeV, becoming the world’s highest-energy particle accelerator. The original CERN had 12 member states; that number has since doubled to 24, with an additional 10 associate members. The lab has been associated with a number of Nobel laureates and is a shining example of how science can bring nations together after the trauma of war. Read more about the anniversary here.
When former physicist James Gillies sat down for dinner in 2009 with actors Tom Hanks and Ayelet Zurer, joined by legendary director Ron Howard, he could scarcely believe the turn of events. Gillies was the head of communications at CERN, and the Hollywood trio were in town for the launch of Angels & Demons. The blockbuster film is partly set at CERN with antimatter central to its plot, and is based on the Dan Brown novel. In this Physics World Stories podcast, Gillies looks back on those heady days. Gillies has also written a feature article for us about his Hollywood experience: “Angels & Demons, Tom Hanks and Peter Higgs: how CERN sold its story to the world”.
With so much fascinating research going on in quantum science and technology, it’s hard to pick just a handful of highlights. Fun, but hard. Research on entanglement-based imaging and quantum error correction both appear in Physics World’s list of 2024’s top 10 breakthroughs, but beyond that, here are a few other achievements worth remembering as we head into 2025 – the International Year of Quantum Science and Technology.
Quantum sensing
In July, physicists at Germany’s Forschungszentrum Jülich and Korea’s IBS Center for Quantum Nanoscience (QNS) reported that they had fabricated a quantum sensor that can detect the electric and magnetic fields of individual atoms. The sensor consists of a molecule containing an unpaired electron (a molecular spin) that the physicists attached to the tip of a scanning-tunnelling microscope. They then used it to measure the magnetic and electric dipole fields emanating from a single iron atom and a silver dimer on a gold substrate.
Not to be outdone, an international team led by researchers at the University of Melbourne, Australia, announced in August that they had created a quantum sensor that detects magnetic fields in any direction. The new omnidirectional sensor is based on a recently-discovered carbon-based defect in a two-dimensional material, hexagonal boron nitride (hBN). This same material also contains a boron vacancy defect that enables the sensor to detect temperature changes, too.
Quantum communications
One of the challenges with transmitting quantum information is that pretty much any medium you send it through – including high-spec fibre optic cables and even the Earth’s atmosphere – is at least somewhat good at absorbing photons and preventing them from reaching their intended destination.
In July, a team at the University of Chicago, the California Institute of Technology and Stanford University proposed a novel solution. A continent-scale network of vacuum-sealed tubes, they suggested, could transmit quantum information at rates as high as 10¹³ qubits per second. This would exceed currently available quantum channels based on satellites or optical fibres by at least four orders of magnitude. Whether anyone will actually build such a network is, of course, yet to be determined – but you have to admire the ambition behind it.
Quantum fundamentals
Speaking of ambition, this year saw a remarkable flurry of ideas for using quantum devices and quantum principles to study gravity. One innovative proposal involves looking for the gravitational equivalent of the photoelectric effect in a system of resonant bars that have been cooled and tuned to vibrate when they absorb a graviton from an incoming gravitational wave. The idea is that absorbing a graviton would change the quantum state of the bar, and this change of state would, in principle, be detectable.
Another quantum gravity proposal takes its inspiration from an even older experiment: the Cavendish torsion balance. The quantum version of this 18th-century classic would involve studying the correlations between two torsion pendula placed close together as they rotate back and forth like massive harmonic oscillators. If correlations appear that can’t be accounted for within a classical theory of gravity, this could imply that gravity is not, in fact, classical.
Perhaps the most exciting development in this space, though, is a new experimental technique for measuring the pull of gravity on a micron-scale particle. Objects of this size are just above the limit where quantum effects start to become apparent, and the Leiden and Southampton University researchers who performed the experiment have ideas for how to push their system further towards this exciting regime. Definitely one to keep an eye on.
The best of the rest
It wouldn’t be quantum if it wasn’t at least a little bit weird, so here are a few head-scratchers for you to puzzle over.
From tumour-killing quantum dots to proton therapy firsts, this year has seen the traditional plethora of exciting advances in physics-based therapeutic and diagnostic imaging techniques, plus all manner of innovative bio-devices and biotechnologies for improving healthcare. Indeed, the Physics World Top 10 Breakthroughs for 2024 included a computational model designed to improve radiotherapy outcomes for patients with lung cancer by modelling the interaction of radiation with lung cells, as well as a method to make the skin of live mice temporarily transparent to enable optical imaging studies. Here are just a few more of the research highlights that caught our eye.
Marvellous MRI machines
This year we reported on some important developments in the field of magnetic resonance imaging (MRI) technology, not least of which was the introduction of a 0.05 T whole-body MRI scanner that can produce diagnostic quality images. The ultralow-field scanner, invented at the University of Hong Kong’s BISP Lab, operates from a standard wall power outlet and does not require shielding cages. The simplified design makes it easier to operate and significantly lower in cost than current clinical MRI systems. As such, the BISP Lab researchers hope that their scanner could help close the global gap in MRI availability.
Moving from ultralow- to ultrahigh-field instrumentation, a team headed up by David Feinberg at UC Berkeley created an ultrahigh-resolution 7 T MRI scanner for imaging the human brain. The system can generate functional brain images with 10 times better spatial resolution than current 7 T scanners, revealing features as small as 0.35 mm, as well as offering higher spatial resolution in diffusion, physiological and structural MR imaging. The researchers plan to use their new NexGen 7 T scanner to study underlying changes in brain circuitry in degenerative diseases, schizophrenia and disorders such as autism.
Meanwhile, researchers at Massachusetts Institute of Technology and Harvard University developed a portable magnetic resonance-based sensor for imaging at the bedside. The low-field single-sided MR sensor is designed for point-of-care evaluation of skeletal muscle tissue, removing the need to transport patients to a centralized MRI facility. The portable sensor, which weighs just 11 kg, uses a permanent magnet array and surface RF coil to provide low operational power and minimal shielding requirements.
Proton therapy progress
Alongside advances in diagnostic imaging, 2024 also saw a couple of firsts in the field of proton therapy. At the start of the year, OncoRay – the National Center for Radiation Research in Oncology in Dresden – launched the world’s first whole-body MRI-guided proton therapy system. The prototype device combines a horizontal proton beamline with a whole-body MRI scanner that rotates around the patient, a geometry that enables treatments with patients either lying down or in an upright position. Ultimately, the system could enable real-time MRI monitoring of patients during cancer treatments and significantly improve the targeting accuracy of proton therapy.
Also aiming to enhance proton therapy outcomes, a team at the PSI Center for Proton Therapy performed the first clinical implementation of an online daily adaptive proton therapy (DAPT) workflow. Online plan adaptation, where the patient remains on the couch throughout the replanning process, could help address uncertainties arising from anatomical changes during treatments. In five adults with tumours in rigid body regions treated using DAPT, the daily adapted plans provided target coverage to within 1.1% of the planned dose and, in over 90% of treatments, improved dose metrics to the targets and/or organs-at-risk. Importantly, the adaptive approach took just a few minutes longer than a non-adaptive treatment, remaining within the 30-min time slot allocated for a proton therapy session.
Bots and dots
Last but certainly not least, this year saw several research teams demonstrate the use of tiny devices for cancer treatment. In a study conducted at the Institute for Bioengineering of Catalonia, for instance, researchers used self-propelling nanoparticles containing radioactive iodine to shrink bladder tumours.
Upon injection into the body, these “nanobots” search for and accumulate inside cancerous tissue, delivering radionuclide therapy directly to the target. Mice receiving a single dose of the nanobots experienced a 90% reduction in the size of bladder tumours compared with untreated animals.
At the Chinese Academy of Sciences’ Hefei Institutes of Physical Science, a team pioneered the use of metal-free graphene quantum dots for chemodynamic therapy. Studies in cancer cells and tumour-bearing mice showed that the quantum dots caused cell death and inhibition of tumour growth, respectively, with no off-target toxicity in the animals.
Finally, scientists at Huazhong University of Science and Technology developed novel magnetic coiling “microfibrebots” and used them to stem arterial bleeding in a rabbit – paving the way for a range of controllable and less invasive treatments for aneurysms and brain tumours.
December might be dark and chilly here in the northern hemisphere, but it’s summer south of the equator – and for many people that means eating ice cream.
It turns out that the physics of ice cream is rather remarkable – as I discovered when I travelled to Canada’s University of Guelph to interview the food scientist Douglas Goff. He is a leading expert on the science of frozen desserts and in this podcast he talks about the unique material properties of ice cream, the analytical tools he uses to study it, and why ice cream goes off when it is left in the freezer for too long.
Each year, the International Association of Physics Students organizes a physics competition for bachelor’s and master’s students from across the world. Known as the Physics League Across Numerous Countries for Kick-ass Students (PLANCKS), it’s a three-day event where teams of three to four students compete to answer challenging physics questions.
In the UK and Ireland, teams compete in a preliminary competition to be sent to the final. Here are some fiendish questions from past PLANCKS UK and Ireland preliminaries and the 2024 final in Dublin, written by Anthony Quinlan and Sam Carr, for you to try this holiday season.
Question 1: 4D Sun
Imagine you have been transported to another universe with four spatial dimensions. What would the colour of the Sun be in this four-dimensional universe? You may assume that the surface temperature of the Sun is the same as in our universe and is approximately T = 6 × 10³ K. [10 marks]
Boltzmann constant, kB = 1.38 × 10⁻²³ J K⁻¹
Speed of light, c = 3 × 10⁸ m s⁻¹
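If you fancy checking your intuition numerically, here is a rough Python sketch of one possible approach – ours, not the official solution. It assumes that in d spatial dimensions the photon density of states scales as ν^(d−1), so the spectral density in wavelength goes as λ^−(d+2) and Wien’s displacement condition becomes x = (d+2)(1 − e^−x), with x = hc/λkBT:

```python
import math

def wien_x(n):
    """Solve x = n*(1 - exp(-x)) by fixed-point iteration."""
    x = float(n)
    for _ in range(100):
        x = n * (1.0 - math.exp(-x))
    return x

T = 6000.0        # K, the Sun's surface temperature (given)
b3 = 2.898e-3     # m K, the familiar 3D Wien displacement constant

x3 = wien_x(5)    # 3 spatial dimensions: u_lambda ~ lambda^-5
x4 = wien_x(6)    # 4 spatial dimensions: u_lambda ~ lambda^-6

lam3 = b3 / T             # peak wavelength in our universe
lam4 = lam3 * (x3 / x4)   # the peak shifts blueward in 4D
print(f"3D peak: {lam3*1e9:.0f} nm, 4D peak: {lam4*1e9:.0f} nm")
```

Under these assumptions the peak moves from roughly 480 nm to about 400 nm – towards the violet end of the visible spectrum.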
Question 2: Heavy stuff
In a parallel universe, two point masses, each of 1 kg, start at rest a distance of 1 m apart. The only force on them is their mutual gravitational attraction, F = −Gm₁m₂/r². If it takes 26 hours and 42 minutes for the two masses to meet in the middle, calculate the value of the gravitational constant G in this universe. [10 marks]
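Fancy a sanity check? One approach (ours, not the official solution) treats the masses’ relative motion as radial free fall towards their combined mass – half of a degenerate Kepler orbit with semi-major axis r0/2 – which gives a closed-form infall time that can be inverted for G:

```python
import math

m1 = m2 = 1.0              # kg
r0 = 1.0                   # m, initial separation
t = 26 * 3600 + 42 * 60    # 26 h 42 min = 96 120 s

# Relative motion obeys r'' = -G*(m1 + m2)/r**2, so the infall time is half
# the period of a degenerate Kepler orbit of semi-major axis r0/2:
#   t = pi * sqrt(r0**3 / (8 * G * (m1 + m2)))
# Inverting for G:
G = math.pi**2 * r0**3 / (8 * (m1 + m2) * t**2)
print(f"G ≈ {G:.2e} m^3 kg^-1 s^-2")
```

The number that comes out should look rather familiar.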
Question 3: Just like clockwork
Consider a pendulum clock that is accurate on the Earth’s surface. Figure 1 shows a simplified view of this mechanism.
A pendulum clock runs on the gravitational potential energy from a hanging mass (1). The other components of the clock mechanism regulate the speed at which the mass falls so that it releases its gravitational potential energy over the course of a day. This is achieved using a swinging pendulum of length l (2), whose period is given by

T = 2π√(l/g)

where g is the acceleration due to gravity.
Each time the pendulum swings, it rocks a mechanism called an “escapement” (3). When the escapement moves, the gear attached to the mass (4) is released. The mass falls freely until the pendulum swings back and the escapement catches the gear again. The motion of the falling mass transfers energy to the escapement, which gives a “kick” to the pendulum that keeps it moving throughout the day.
Radius of the Earth, R = 6.3781 × 10⁶ m
Period of one Earth day, τ0 = 8.64 × 10⁴ s
How slow will the clock be over the course of a day if it is lifted to the hundredth floor of a skyscraper? Assume the height of each storey is 3 m. [4 marks]
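As a numerical warm-up (our sketch, not the official solution, and assuming the hundredth floor sits at h = 100 × 3 m): g falls off as 1/r², so the pendulum’s period stretches by a factor (R + h)/R at height h and the clock ticks correspondingly more slowly.

```python
R = 6.3781e6     # m, radius of the Earth (given)
tau0 = 8.64e4    # s, one Earth day (given)
h = 100 * 3.0    # m, a hundred storeys at 3 m each

# At height h, g is weaker by a factor (R/(R+h))**2, so the period grows by
# (R+h)/R and the clock ticks at a rate R/(R+h). Over one day it loses:
lost = tau0 * h / (R + h)   # ~ tau0 * h / R to first order in h/R
print(f"clock runs slow by about {lost:.2f} s per day")
```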
Question 4: Quantum stick
Imagine an infinitely thin stick of length 1 m and mass 1 kg that is balanced on its end. Classically this is an unstable equilibrium, although the stick will stay there forever if it is perfectly balanced. However, in quantum mechanics there is no such thing as perfectly balanced due to the uncertainty principle – you cannot have the stick perfectly upright and not moving at the same time. One could argue that the quantum mechanical effects of the uncertainty principle on the system are overpowered by others, such as air molecules and photons hitting it or the thermal excitation of the stick. Therefore, to investigate we would need ideal conditions such as a dark vacuum, and cooling to a few millikelvins, so the stick is in its ground state.
Moment of inertia for a rod rotating about its end, I = ml²/3
where m is the mass and l is the length.
Uncertainty principle, ΔxΔp ≥ ℏ/2
There are several possible approximations and simplifications you could make in solving this problem, including:
sinθ ≈ θ for small θ
and cosh x ≈ eˣ/2 for large x
Calculate the maximum time it would take such a stick to fall over and hit the ground if it is placed in a state compatible with the uncertainty principle. Assume that you are on the Earth’s surface. [10 marks]
Hint: Consider the two possible initial conditions that arise from the uncertainty principle.
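For the curious, here is how a rough estimate might go (our sketch, not the official solution). Linearizing the inverted pendulum gives exponentially growing tilt, θ(t) ≈ θ0·e^(ωt) with ω = √(3g/2l); the uncertainty principle sets a floor on how small the initial angle and angular momentum can simultaneously be, and the longest-lived state splits the uncertainty evenly between them:

```python
import math

hbar = 1.055e-34           # J s
m, l, g = 1.0, 1.0, 9.81   # kg, m, m s^-2
I = m * l**2 / 3.0         # moment of inertia of the stick about its pivot

# Linearized equation of motion: theta'' = (3g/2l) * theta, so small tilts
# grow exponentially at rate omega:
omega = math.sqrt(3.0 * g / (2.0 * l))

# Uncertainty floor: theta0 * (I * thetadot0) >= hbar/2. The amplitude of the
# growing mode, theta0 + thetadot0/omega, is minimized when
# theta0 = thetadot0/omega, giving:
theta0 = math.sqrt(hbar / (2.0 * I * omega))

# The stick has "fallen over" once theta reaches pi/2:
t_fall = math.log((math.pi / 2.0) / theta0) / omega
print(f"maximum balancing time ≈ {t_fall:.0f} s")
```

Even in a perfectly isolated, millikelvin-cold vacuum, this estimate says quantum uncertainty topples the stick in a matter of seconds.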
Answers will be posted here on the Physics World website next month. There are no prizes.
If you’re a student who wants to sign up for the 2025 edition of PLANCKS UK and Ireland, entries are now open at plancks.uk
A reusable and biodegradable fibrous foam developed by researchers at Wuhan University in China can remove up to 99.8% of microplastics from polluted water. The foam, which is made from a self-assembled network of chitin and cellulose obtained from biomass wastes, has been successfully field-tested in four natural aquatic environments.
The amount of plastic waste in the environment has reached staggering levels and is now estimated at several billion metric tons. This plastic degrades extremely slowly and poses a hazard for ecosystems throughout its lifetime. Aquatic life is particularly vulnerable, as micron-sized plastic particles can combine with other pollutants in water and be ingested by a wide range of organisms. Removing these microplastic particles would help limit the damage, but standard filtration technologies are ineffective as the particles are so small.
A highly porous interconnected structure
The new adsorbent developed by Wuhan’s Hongbing Deng and colleagues consists of intertwined beta-chitin nanofibre sheets (obtained from squid bone) with protonated amines and suspended cellulose fibres (obtained from cotton). The structure contains a number of functional groups, including –OH, –NH₃⁺ and –NHCO–, that allow it to self-assemble into a highly porous interconnected network.
This self-assembly is important, Deng explains, because it means the foam does not require “complex processing (no cross-linking and minimal use of chemical reagents) or adulteration with toxic or expensive substances,” he tells Physics World.
The functional groups make the surface of the foam rough and positively charged, providing numerous sites that can interact with and adsorb plastic particles ranging in size from less than 100 nm to over 1000 microns. Deng explains that multiple mechanisms are at work during this process, including physical interception, electrostatic attraction and intermolecular interactions. The latter group includes hydrogen bonding, van der Waals forces and weak hydrogen-bonding interactions (between OH and CH groups, for example).
The researchers tested their foam in lake water, coastal water, still water (a small pond) and water used for agricultural irrigation. They also combined these systematic adsorption experiments with molecular dynamics (MD) simulations and Hirshfeld partition (IGMH) calculations to better understand how the foam was working.
They found that the foam can adsorb a variety of nanoplastics and microplastics, including the polystyrene, polymethyl methacrylate, polypropylene and polyethylene terephthalate found in everyday objects such as electronic components, food packaging and textiles. Importantly, the foam can adsorb these plastics even in water bodies polluted with toxic metals such as lead and chemical dyes. It adsorbed nearly 100% of the particles in its first cycle and around 96-98% of the particles over the following five cycles.
“The great potential of biomass”
Because the raw materials needed to make the foam are readily available, and the fabrication process is straightforward, Deng thinks it could be produced on a large scale. “Other microplastic removal materials made from biomass feedstocks have been reported in recent years, but some of these needed to be functionalized with other chemicals,” he says. “Such treatments can increase costs or hinder their large-scale production.”
Deng and his team have applied for a patent on the material and are now looking for industrial partners to help them produce it. In the meantime, he hopes the work will help draw attention to the microplastic problem and convince more scientists to work on it. “We believe that the great potential of biomass will be recognized and that the use of biomass resources will become more diverse and thorough,” he says.
Lithium iron phosphate (LFP) battery cells are ubiquitous in electric vehicles and stationary energy storage because they are cheap and have a long lifetime. This webinar will present our studies comparing 240 mAh LFP/graphite pouch cells undergoing charge–discharge cycles over five state of charge (SOC) windows (0%–25%, 0%–60%, 0%–80%, 0%–100% and 75%–100%). To accelerate the degradation, elevated temperatures of 40 °C and 55 °C were used; at more realistic operating temperatures, LFP cells are expected to perform better, with longer lifetimes. In this study, we found that cycling LFP cells across a lower average SOC results in less capacity fade than cycling across a higher average SOC, regardless of depth of discharge. The primary capacity fade mechanism is lithium inventory loss due to two processes: reactivity of lithiated graphite with the electrolyte, which increases incrementally with SOC; and lithium alkoxide species causing iron dissolution and deposition on the negative electrode at high SOC, which further accelerates lithium inventory loss. Our results show that even low-voltage LFP systems (3.65 V) have a trade-off between average SOC and lifetime. Operating LFP cells at lower average SOC could extend their lifetime substantially in both EV and grid storage applications.
Eniko Zsoldos is a 5th year PhD candidate in chemistry at Dalhousie University in the Jeff Dahn research group. Her current research focuses on understanding degradation mechanisms in a variety of lithium-ion cell chemistries (NMC, LFP, LMO) using techniques such as isothermal microcalorimetry and electrolyte analysis. Eniko received her undergraduate degree in nanotechnology engineering from the University of Waterloo. During her undergrad, she was a member of the Waterloo Formula Electric team, building an electric race car for FSAE student competitions. She has completed internships at Sila Nanotechnologies working on silicon-based anodes for batteries, and at Tesla working on dry electrode processing in Fremont, CA.
In my previous article, I highlighted some of the quantum and green-energy companies that won Business Innovation Awards from the Institute of Physics in 2024. But imaging and medical-physics firms did well too. Having sat on the judging panel for the awards, I saw some fantastic entries – and picking winners wasn’t easy. Let me start, though, with Geoptic, which is one of an elite group of firms to win a second IOP business award, adding a Business Innovation Award to its start-up prize in 2020.
Geoptic is a spin-out from three collaborating groups of physicists at the universities of Durham, Sheffield and St Mary’s Twickenham. The company uses cosmic-ray muon radiography and tomography to study large engineering structures. In particular, it was honoured by the IOP for using the technique to ensure the safety of tunnels on the UK’s railway network.
Many of the railway tunnels in the UK date back to the mid-19th century. To speed up construction, temporary shafts were bored vertically down below the ground, allowing workers to dig at multiple points along the route of the tunnel. When a tunnel was complete, the shafts would be sealed, but their precise number and locations are often unclear today.
The shafts are a major hazard to a tunnel’s integrity, which is not great for Network Rail – the state-owned body that’s responsible for the UK’s rail infrastructure. Geoptic has, however, been working with Network Rail to provide its engineers with a clear structural view of the dangers that lurk along its routes. In my view, it’s a really innovative imaging company, solving challenging real-world problems.
Another winner is Silveray, which was spun off from the University of Surrey. It’s picked up an IOP Business Start-up Award for creating flexible, “colour” X-ray detectors based on proprietary semiconductor materials. Traditional X-ray images are black and white, but what Silveray has done is to develop a nanoparticle semiconductor ink that can be coated on to any surface and work at multiple wavelengths.
The X-ray detectors, which are flexible, can simply be wrapped around pipes and other structures that need to be imaged. Traditionally, this has been done using analogue X-ray film that has to be developed in an off-site dark room. That’s costly and time-consuming – especially if images fail to come out properly. Silveray’s detectors instead provide digital X-ray images in real time, making it an exciting and innovative technology that could transform the $5bn X-ray detector market.
Phlux Technology, meanwhile, has won an IOP Business Start-up Award for developing patented semiconductor technology for infrared light sensors that are 12 times more sensitive than the best existing devices, making them ideal for fast, accurate 3D imaging. Set up by researchers at the University of Sheffield, Phlux’s devices have many potential applications especially in light detection and ranging (LIDAR), laser range finders, optical-fibre test instruments and optical and quantum communications networks.
In LIDAR, Phlux’s sensors can deliver 12 times greater image resolution for a given transmitter power. They could also make vehicles much safer by enabling higher-resolution images to be created over longer distances, making safety systems more effective. The company’s first volume market is likely to be in communications, where a >10 dB increase in detector sensitivity is going to be well received.
Given the number of markets that will benefit from an “over an order of magnitude” improvement, Phlux is one to watch for a future Business Innovation Award too.
Medical marvel
Let me finish by mentioning Crainio, a medical technology spin-off company from City, University of London, which has won the 2024 Lee Lucas award. This award honours promising start-up firms in the medical and healthcare sector, thanks to a generous donation by Mike and Ann Lee (née Lucas). These companies need all the support, time and money they can get, given the many challenging regulatory requirements in the medical sector.
Crainio’s technology allows healthcare workers to measure intracranial pressure (ICP), a vital indicator of brain health after a head injury. Currently, the only way to measure ICP directly is for a neurosurgeon to drill a hole in a patient’s skull and place an expensive probe in the brain. It’s a highly invasive procedure that can’t easily be carried out in the “golden hours” immediately after an accident, requiring access to scarce and expensive neurosurgery resources. The procedure is also medically risky, leading to potential infection, bleeding and other complications.
Crainio’s technology eliminates these risks, enabling direct measurement of ICP through a simple non-invasive probe applied to the forehead. The technology – using infrared photoplethysmography (PPG) combined with machine learning – is based on years of research and development work conducted by Panicos Kyriacou and his team of biomedical engineers at City.
Good levels of accuracy have been demonstrated in clinical studies conducted at the Royal London Hospital. It certainly seems a much better plan than drilling a hole in your head, as I’m sure you’ll agree. That makes Crainio a worthy winner, and its non-invasive technology should have a positive impact on patients globally. I hope the regulatory hurdles can be quickly cleared so the company can start helping patients as soon as possible.
As I have mentioned before, all physics-based firms require time and energy to develop products and become globally significant. There’s also the perennial difficulty of explaining a product idea, which is often quite specialized, to potential investors who have little or no science background. An IOP start-up award can therefore show that your technology has won approval from judges with solid physics and business experience.
I hope, therefore, that your company, if you have one, will be inspired to apply. Also remember that the IOP offers three other awards (Katharine Burr Blodgett, Denis Gabor and Clifford Paterson) for individuals or teams who have been involved in innovative physics with a commercial angle. Good luck – and remember, you have to be in it to win it. Award entries for 2025 will be open in February 2025.
Right now, I spend 95% of my time being a dean, and in that job the skill I use every day is problem-solving. That’s one of the first things we learn as physicists: it’s not enough just to know the technical background, you have to be able to apply it. I find myself looking at everything as systems of equations – this person wants this, this thing needs to go there, we need money to do that thing – and thinking about how to put them together. We do a really good job in physics of teaching people how to think, so they can take a broad look at things and make them work.
What do you like best and least about your job?
The thing I like best is the opportunity to have a wide impact, not just on the faculty who are doing amazing research, but also on students – our next generation of scientific leaders – and people in the wider community. We do a lot of public service outreach at UChicago PME. Outreach has had a big impact on me so it’s incredibly satisfying that, as dean, I can provide those opportunities at various levels for others.
The thing I like least is that because we have so much to do, figuring out who can do what, and how – within what are always limited resources – often feels like trying to solve a giant jigsaw puzzle. Half the time, it feels like the puzzle board is bigger than the number of pieces, so I’m figuring out how to make things work in ways that sometimes stretch people thin, which can be very frustrating for everybody. We all want to do the best job we can, but we need to understand that we sometimes have limits.
What do you know today that you wish you’d known at the start of your career?
I feel a little guilty saying this because I’m going to label myself as a true “in the lab” scientist, but I wish I’d known how much relationships matter. Early on, when I was a junior faculty member, I was focused on research; focused on training my students; focused on just getting the work done. But it didn’t take long for me to realize that of course, students aren’t just workers. They are twenty-somethings with lives and aspirations and goals.
Thankfully, I figured that out pretty quickly, but at every step along the way, as I try to focus on the problem to solve, I have to remind myself that people aren’t problems. People are people, and you have to work with them to solve problems in ways that work for everybody. I sometimes wish there was more personnel training for faculty, rather than a narrow focus on papers and products, because it really is about people at the end of the day.