Vacuum expertise enables physics research
Whether creating a contaminant-free environment for depositing material or minimizing unwanted collisions in spectrometers and accelerators, vacuum environments are a crucial element of many scientific endeavours. Creating and maintaining very low pressures requires a holistic approach to system design that includes material selection, preparation, and optimization of the vacuum chamber and connection volumes. Measurement strategies also need to be considered across the full range of vacuum to ensure consistent performance and deliver the expected outcomes from the experiment or process.
Developing a vacuum system that achieves the optimal low-pressure conditions for each application, while also controlling the cost and footprint of the system, is a complex balancing act that benefits from specialized expertise in vacuum science and engineering. A committed technology partner with extensive experience of working with customers to design vacuum systems, including those for physics research, can help to define the optimum technologies that will produce the best solution for each application.
Over many years, the technology experts at Agilent have assisted countless customers with configuring and enhancing their vacuum processes. “Our best successes come from collaborations where we take the time to understand the customer’s needs, offer them guidance, and work together to create innovative solutions,” comments John Screech, senior applications engineer at Agilent. “We strive to be a trusted partner rather than just a commercial vendor, ensuring our customers not only have the right tools for their needs, but also the information they need to achieve their goals.”
In his role Screech works with customers from the initial design phase all the way through to installation and troubleshooting. “Many of our customers know they need vacuum, but they don’t have the time or resources to really understand the individual components and how they should be put together,” he says. “We are available to provide full support to help customers create a complete system that performs reliably and meets the requirements of their application.”
In one instance, Screech was able to assist a customer who had been using an older technology to create an ultrahigh vacuum environment. “Their system was able to produce the vacuum they needed, but it was unreliable and difficult to operate,” he remembers. By identifying the problem and supporting the migration to a modern, simpler technology, Screech helped his customer to achieve the required vacuum conditions, improve uptime and increase throughput.
Agilent collaborates with various systems integrators to create custom vacuum solutions for scientific instruments and processes. Such customized designs must be compact enough to be integrated within the system, while also delivering the required vacuum performance at a cost-effective price point. “Customers trust us to find a practical and reliable solution, and realize that we will be a committed partner over the long term,” says Screech.
Expert partnership yields success
The company also partners with leading space agencies and particle physics laboratories to create customized vacuum solutions for the most demanding applications. For many years, Agilent has supplied high-performance vacuum pumps to CERN, which created the world’s largest vacuum system to prevent unwanted collisions between accelerated particles and residual gas molecules in the Large Hadron Collider.
When engineering a vacuum solution that meets the exact specifications of the facility, one key consideration is the physical footprint of the equipment. Another is ensuring that the required pumping performance is achieved without introducing any unwanted effects – such as stray magnetic fields – into the highly controlled environment. Agilent vacuum experts have the experience and knowledge to engineer innovative solutions that meet such a complex set of criteria. “These large organizations already have highly skilled vacuum engineers who understand the unique parameters of their system, but even they can benefit from our expertise to transform their requirements into a workable solution,” says Screech.
Agilent also shares its knowledge and experience through various educational opportunities in vacuum technologies, including online webinars and dedicated training courses. The practical aspects of vacuum can be challenging to learn online, so in-person classes emphasize a hands-on approach that allows participants to assemble and characterize rough- and high-vacuum systems. “In our live sessions everyone has the opportunity to bolt a system together, test which configuration will pump down faster, and gain insights into leak detection,” says Screech. “We have students from industry and academia in the classes, and they are always able to share tips and techniques with one another.” Additionally, the company maintains a vacuum community as an online resource, where questions can be posed to experts, and collaboration among users is encouraged.
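The pump-down comparison in those hands-on classes follows from a textbook rough-vacuum relation, t = (V/S) ln(p0/p1), where V is the chamber volume and S the (assumed constant) pump speed. Here is a minimal Python sketch under those assumptions; the chamber volume and pump speeds are hypothetical, and real systems deviate once outgassing, leaks and falling pump speed start to matter:

```python
import math

def pumpdown_time(volume_m3, pump_speed_m3s, p_start_pa, p_end_pa):
    """Rough-vacuum pump-down time, t = (V/S) * ln(p_start/p_end).

    Assumes constant pump speed and viscous flow; ignores outgassing
    and leaks, which dominate as the pressure drops.
    """
    return (volume_m3 / pump_speed_m3s) * math.log(p_start_pa / p_end_pa)

# Compare two hypothetical configurations evacuating a 50-litre chamber
# from atmosphere (~1e5 Pa) down to 1 Pa:
for speed_ls in (5.0, 20.0):  # pump speeds in litres per second
    t = pumpdown_time(0.05, speed_ls / 1000.0, 1e5, 1.0)
    print(f"S = {speed_ls:4.0f} L/s -> t ~ {t:4.0f} s")
```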
Agilent recognizes that vacuum is an enabler for scientific research and that creating the ideal vacuum system can be challenging. “Customers can trust Agilent as a technology partner,” says Screech. “We can share our experience and help them create the optimal vacuum system for their needs.”
- A new webinar from Agilent, Vacuum for Physics Research, can now be viewed on demand via physicsworld.com. Further resources are available through the Agilent vacuum webinar library, or contact vacuum_training@agilent.com for more information about education and training opportunities.
- You can learn more about vacuum and leak detection technologies from Agilent through the company’s website. Alternatively, visit the partnership webpage to chat with an expert, find training, and explore the benefits of partnering with Agilent.
Solid-state nuclear clocks brought closer by physical vapour deposition
Physicists in the US have taken an important step towards a practical nuclear clock by showing that the physical vapour deposition (PVD) of thorium-229 could reduce the amount of this expensive and radioactive isotope needed to make a timekeeper. The research could usher in an era of robust and extremely accurate solid-state clocks that could be used in a wide range of commercial and scientific applications.
Today, the world’s most precise atomic clocks are the strontium optical lattice clocks created by Jun Ye’s group at JILA in Boulder, Colorado. These are accurate to within a second in the age of the universe. However, because these clocks use an atomic transition between electron energy levels, they can easily be disrupted by external electromagnetic fields. This means that the clocks must be operated in isolation in a stable lab environment. While other types of atomic clock are much more robust – some are deployed on satellites – they are nowhere near as accurate as optical lattice clocks.
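To put “a second in the age of the universe” in numbers (a back-of-the-envelope estimate, taking the universe to be roughly 13.8 billion years old):

\[
\frac{\Delta t}{t} \approx \frac{1\ \mathrm{s}}{13.8\times 10^{9}\ \mathrm{yr}\times 3.16\times 10^{7}\ \mathrm{s\,yr^{-1}}} \approx 2\times 10^{-18},
\]

a fractional accuracy of about two parts in 10^18.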
Some physicists believe that transitions between energy levels in atomic nuclei could offer a way to make robust, portable clocks that deliver very high accuracy. As well as being very small and governed by the strong force, nuclei are shielded from external electromagnetic fields by their own electrons. And unlike optical atomic clocks, which use a very small number of delicately trapped atoms or ions, many more nuclei can be embedded in a crystal without significantly affecting the clock transition. Such a crystal could be integrated on-chip to create highly robust and highly accurate solid-state timekeepers.
Sensitive to new physics
Nuclear clocks would also be much more sensitive to new physics beyond the Standard Model – allowing physicists to explore hypothetical concepts such as dark matter. “The nuclear energy scale is millions of electron volts; the atomic energy scale is electron volts; so the effects of new physics are also much stronger,” explains Victor Flambaum of Australia’s University of New South Wales.
Normally, a nuclear clock would require a laser that produces coherent gamma rays – something that does not exist. By exquisite good fortune, however, there is a single transition between the ground and excited states of one nucleus in which the potential energy changes due to the strong nuclear force and the electromagnetic interaction almost exactly cancel, leaving an energy difference of just 8.4 eV. This corresponds to vacuum ultraviolet light, which can be created by a laser.
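A quick consistency check on that figure: a photon of energy 8.4 eV has a wavelength of

\[
\lambda = \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\,nm}}{8.4\ \mathrm{eV}} \approx 148\ \mathrm{nm},
\]

which sits squarely in the vacuum-ultraviolet band.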
That nucleus is thorium-229, but as Ye’s postgraduate student Chuankun Zhang explains, it is very expensive. “We bought about 700 µg for $85,000, and as I understand it the price has been going up”.
In September, Zhang and colleagues at JILA measured the frequency of the thorium-229 transition with unprecedented precision using their strontium-87 clock as a reference. They used thorium-doped calcium fluoride crystals. “Doping thorium into a different crystal creates a kind of defect in the crystal,” says Zhang. “The defects’ orientations are sort of random, which may introduce unwanted quenching or limit our ability to pick out specific atoms using, say, polarization of the light.”
Layers of thorium fluoride
In the new work, the researchers collaborated with colleagues in Eric Hudson’s group at University of California, Los Angeles and others to form layers of thorium fluoride between 30 nm and 100 nm thick on crystalline substrates such as magnesium fluoride. They used PVD, which is a well-established technique that evaporates a material from a hot crucible before condensing it onto a substrate. The resulting samples contained three orders of magnitude less thorium-229 than the crystals used in the September experiment, but had a comparable number of thorium atoms per unit area.
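The arithmetic behind that comparison is worth making explicit: a pure thorium fluoride film is thin but thorium-dense, whereas a doped crystal is thick but dilute, so the areal densities can be similar even though the total inventories differ by orders of magnitude. A minimal sketch with assumed, order-of-magnitude numbers (none of them values from the paper):

```python
AVOGADRO = 6.022e23

# Hypothetical pure ThF4 film: density ~6.1 g/cm^3, molar mass ~308 g/mol,
# 100 nm thick. All values here are illustrative assumptions.
film_th_per_cm3 = 6.1 / 308.0 * AVOGADRO   # Th atoms per cm^3
film_areal = film_th_per_cm3 * 100e-7       # 100 nm expressed in cm

# Hypothetical Th-doped CaF2 crystal: ~1e18 Th/cm^3 through ~1 mm.
crystal_areal = 1e18 * 0.1                  # 1 mm expressed in cm

print(f"film   : {film_areal:.1e} Th per cm^2")
print(f"crystal: {crystal_areal:.1e} Th per cm^2")  # comparable order
```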
The JILA team sent the samples to Hudson’s lab for interrogation by a custom-built vacuum ultraviolet laser. Researchers led by Hudson’s student Richard Elwell observed clear signatures of the nuclear transition and found the lifetime of the excited state to be about four times shorter than that observed in the doped crystals. While the discrepancy is not understood, the researchers say this might not be problematic in a clock.
More significant challenges lie in the surprisingly small fraction of thorium nuclei participating in the clock operation – with the measured signal about 1% of the expected value, according to Zhang. “There could be many reasons. One possibility is because the vapour deposition process isn’t controlled super well such that we have a lot of defect states that quench away the excited states.” Beyond this, he says, designing a mobile clock will entail miniaturizing the laser.
Flambaum, who was not involved in the research, says that it marks “a very significant technical advance” in the quest to build a solid-state nuclear clock – something that he believes could be useful for sensing everything from oil to variations in the fine-structure constant. “As a standard of frequency a solid-state clock is not very good because it’s affected by the environment,” he says. “As soon as we know the frequency very accurately we will do it with [trapped] ions, but that has not been done yet.”
The research is described in Nature.
Metamaterials hit the market: how the UK Metamaterials Network is turning research into reality
Metamaterials are artificial 3D structures that can provide all sorts of properties not available with “normal” materials. Pioneered around a quarter of a century ago by physicists such as John Pendry and David Smith, metamaterials can now be found in a growing number of commercial products.
Claire Dancer and Alastair Hibbins, who are joint leads of the UK Metamaterials Network, recently talked to Matin Durrani about the power and potential of these “meta-atom” structures. Dancer is an associate professor and a 125th anniversary fellow at the University of Birmingham, UK, while Hibbins is a professor and director of the Centre of Metamaterials Research and Innovation at the University of Exeter, UK.
Let’s start with the basics: what are metamaterials?
Alastair Hibbins (AH): If you want to describe a metamaterial in just one sentence, it’s all about adding functionality through structure. But it’s not a brand new concept. Take the stained-glass windows in cathedrals, which have essentially got plasmonic metal nanoparticles embedded in them. The colour of the glass is dictated by the size and the shape of those particles, which is what a metamaterial is all about. It’s a material where the properties we see or hear or feel depend on the structure of its building blocks.
Physicists have been at the forefront of much recent work on metamaterials, haven’t they?
AH: Yes, the work was reignited just before the turn of the century – in the late 1990s – when the theoretical physicist John Pendry kind of recrystallized this idea (see box “John Pendry: metamaterial pioneer”). Based at Imperial College, London, he and others were looking at artificial materials, such as metallic meshes, which had properties that were really different from the metal of which they were made.
In terms of applications, why are metamaterials so exciting?
Claire Dancer (CD): Materials can do lots of fantastic things, but metamaterials add a new functionality on top. That could be cloaking or it might be mechanically bending and flexing in a way that its constituent materials wouldn’t. You can, for example, have “auxetic metamaterials” with a honeycomb structure that gets wider – not thinner – when stretched. There are also nanoscale photonic metamaterials, which interact with light in unusual ways.
John Pendry: metamaterial pioneer
Metamaterials are fast becoming commercial reality, but they have their roots in physics – in particular, a landmark paper published in 2000 by theoretical physicist John Pendry at Imperial College, London (Phys. Rev. Lett. 85 3966). In the paper, Pendry described how a metamaterial could be created with a negative index of refraction for microwave radiation, calculating that it could be used to make a “perfect” lens that would focus an image with a resolution not restricted by the wavelength of light (Physics World September 2001 pp47–51).
A metamaterial using copper rings deposited on an electronic circuit board was built the following year by the US physicist David Smith and colleagues at the University of California, San Diego (Science 292 77). Pendry later teamed up with Smith and others to use negative-index metamaterials to create a blueprint for an invisibility cloak – the idea being that the metamaterial would guide light around an object to be hidden (Science 312 1780). While the mathematics describing how electromagnetic radiation interacts with metamaterials can be complicated, Pendry realized that it could be described elegantly by borrowing ideas from Einstein’s general theory of relativity.
Matin Durrani
What sorts of possible applications can metamaterials have?
CD: There are lots, including some exciting innovations in body armour and protective equipment for sport – imagine customized “auxetic helmets” and protective devices for contact sports like rugby. Metamaterials can also be used in communications, exploiting available frequencies in an efficient, discrete and distinct way. In the optical range, we can create “artificial colour”, which is leading to interesting work on different kinds of glitter and decorative substances. There are also loads of applications in acoustics, where metamaterials can absorb some of the incidental noise that plagues our world.
Have any metamaterials reached the commercial market yet?
AH: Yes. The UK firm Sonnobex won a Business Innovation Award from the Institute of Physics (IOP) in 2018 for its metamaterials that can reduce traffic noise or the annoying “buzz” from electrical power transformers. Another British firm – Metasonnix – won an IOP business award last year for its lightweight soundproofing metamaterial panels. They let air pass through, so they could be great as window blinds – cutting noise and providing ventilation at the same time.
High-end audio manufacturers, such as KEF, are using metamaterials as part of the baffle behind the main loudspeaker. There’s also Metahelios, which was spun out from the University of Glasgow in 2022. It’s making on-chip, multi-wavelength pixelated cameras that are also polarization-sensitive and could have applications in defence and aerospace.
The UK has a big presence in metamaterials but the US is strong too, isn’t it?
AH: Perhaps the most famous metamaterial company is Metalenz, which makes flat conformal lenses for mobile phones – enabling amazing optical performance in a compact device. It was spun off in 2021 from the work of Federico Capasso at Harvard University. You can already find its products in Apple and Samsung phones and they’re coming to Google’s devices too.
Other US companies include Kymeta, which makes metamaterial-based antennas, and Lumotive, which is involved in solid-state LIDAR systems for autonomous vehicles and drones. There’s also Echodyne and Pivotal Commware. Those US firms have all received a huge amount of start-up and venture funding, and are doing really well at showing how metamaterials can make money and sell products.
What are the aims of the UK Metamaterials Network?
CD: One important aim is to capitalize on all the work done in this country, supporting fundamental discovery science but driving commercialization too. We’ve been going since 2021 and have grown to a community of about 900 members – largely UK academics but with industry and overseas researchers too. We want to provide outsiders with a single source of access to the community and – as we move towards commercialization – develop ways to standardize and regulate metamaterials.
As well as providing an official definition of metamaterials (see box “Metamaterials: the official definition”), we also have a focus on talent and skills, trying to get the next generation into the field and show them it’s a good place to work.
How is the UK Metamaterials Network helping get products onto the market?
CD: The network wants to support the beginning of the commercialization process, namely working with start-ups and getting industry engaged, hopefully with government backing. We’ve also got various special-interest groups, focusing on the commercial potential of acoustic, microwave and photonics materials. And we’ve set up four key challenge areas that cut across different areas of metamaterials research: manufacturing; space and aviation; health; and sustainability.
Metamaterials: the official definition
One of the really big things the UK Metamaterials Network has done is to crowdsource the definition of a metamaterial, which has long been a topic of debate. A metamaterial, we have concluded, is “a 3D structure with a response or function due to collective effects of their building blocks (or meta-atoms) that is not possible to achieve conventionally with any individual constituent material”.
A huge amount of work went into this definition. We talked with the community and there was lots of debate about what should be in and what should be out. But I think we’ve emerged with a really nice definition there that’s going to stay in place for many years to come. It might seem a little trivial but it’s one of our great achievements.
Alastair Hibbins
What practical support can you give academics?
CD: The UK Metamaterials Network has been funded by the Engineering and Physical Sciences Research Council to set up a Metamaterials Network Plus programme. It aims to develop more research in these areas so that metamaterials can contribute to national and global priorities by, for example, being sustainable and ensuring we have the infrastructure for testing and manufacturing metamaterials on a large scale. In particular, we now have “pump prime” funding that we can distribute to academics who want to explore new applications of – and other research into – metamaterials.
What are the challenges of commercializing metamaterials?
CD: Commercializing any new scientific idea is difficult and metamaterials are no exception. But one issue with metamaterials is ensuring that industry can manufacture them in large volumes. Currently, a lot of metamaterials are made in research labs by 3D printing or by manually sticking and gluing things together, which is fine if you just want to prove some interesting physics. But to make metamaterials in industry, we need techniques that are scalable – and that, in turn, requires resources, funding, infrastructure and a supply of talented, skilled workers. The intellectual property also needs to be carefully managed as much of the underlying work is done in collaborations with universities. If there are too many barriers, companies will give up and not bother trying.
Looking ahead, where do you think metamaterials will be a decade from now?
AH: If we really want to fulfil their potential, we’d ideally fund metamaterials as a national UK programme, just as we do with quantum technology. Defence has been one of the leaders in funding metamaterials because of their use in communications, but we want industry more widely to adopt metamaterials, embedding them in everyday devices. They offer game-changing control and I can see metamaterials in healthcare, such as for artificial limbs or medical imaging. Metamaterials could also provide alternatives in the energy sector, where we want to reduce the use of rare-earth and other minerals. In space and aerospace, they could function as incredibly lightweight, but really strong, blast-resistant materials for satellites and satellite communications, developing more capacity to send information around the world.
How are you working with the IOP to promote metamaterials?
AH: The IOP has an ongoing programme of “impact projects”, informed by the physics community in the UK and Ireland. Having already covered semiconductors, quantum tech and the green economy through such projects, the IOP is now collaborating with the UK Metamaterials Network on a “pathfinder” impact project. It will examine the commercialization and exploitation of metamaterials in ICT, sustainability, health, defence and security.
Have you been able to interact with the research community?
CD: We’ve so far run three annual industry events showcasing the applications of metamaterials. The first two were at the National Physical Laboratory in Teddington, and in Leeds, with last year’s held at the IOP in December. It included a panel discussion about how to overcome barriers to commercialization along with demonstrations of various technologies, and presentations from academics and industrialists about their innovations. We also discussed the pathfinder project with the IOP as we’ll need the community’s help to exploit the power of metamaterials.
What’s the future of the UK Metamaterials Network?
AH: It’s an exciting year ahead working with the IOP and we want to involve as many new sectors as possible. We’re also likely to hit a thousand members of our network: we’ll have a little celebration when we reach that milestone. We’ll be running a 2025 showcase event as well so there’s a lot to look forward to.
- This article is an edited version of an interview on the Physics World Weekly podcast of 5 December 2024
Moonstruck: art and science collide in stunning collection of lunar maps and essays
As I write this [and don’t tell the Physics World editors, please] I’m half-watching out of the corner of my eye the quirky French-made, video-game spin-off series Rabbids Invasion. The mad and moronic bunnies (or, in a nod to the original French, Les Lapins Crétins) are presently making another attempt to reach the Moon – a recurring yet never-explained motif in the cartoon – by stacking up a vast pile of junk; charming chaos ensues.
As explained in LUNAR: A History of the Moon in Myths, Maps + Matter – the exquisite new Thames & Hudson book that presents the stunning Apollo-era Lunar Atlas alongside a collection of charming essays – madness has long been associated with the Moon. One suspects there was a good kind of mania behind the drawing up of the Lunar Atlas, a series of geological maps plotting the rock formations on the Moon’s surface that are as much art as they are a visualization of data. And having drooled over LUNAR, truly the crème de la crème of coffee table books, one cannot help but become a little mad for the Moon too.
Many faces of the Moon
As well as an exploration of the Moon’s connections (both etymologically and philosophically) to lunacy by science writer Kate Golembiewski, the varied and captivating essays of 20 authors collected in LUNAR run the gamut from the Moon’s role in ancient times (did you know that the Greeks believed that the souls of the dead gather around the Moon?) through to natural philosophy, eclipses, the space race and the Artemis Programme. My favourite essays were the more off-beat ones: the Moon in silent cinema, for example, or its fascinating influence on “cartes de visite”, the short-lived 19th-century miniature images whose popularity was boosted by Queen Victoria and Prince Albert. (I, for one, am now quite resolved to have my portrait taken with a giant, stylised, crescent moon prop.)
The pulse of LUNAR, however, comes from the breathtaking reproductions of all 44 of the exquisitely hand-drawn 1:1,000,000 scale maps – or “quadrangles” – that make up the US Geological Survey (USGS)/NASA Lunar Atlas (see header image).
Drawn up between 1962 and 1974 by a team of 24 cartographers, illustrators, geographers and geologists, the astonishing Lunar Atlas captures the entirety of the Moon’s near side, every crater and lava-filled mare (“sea”), every terra (highland) and volcanic dome. The work began as a way to guide the robotic and human exploration of the Moon’s surface and was soon augmented with images and rock samples from the missions themselves.
One would be hard-pushed to sum it up better than the American science writer Dava Sobel, who pens the book’s foreword: “I’ve been to the Moon, of course. Everyone has, at least vicariously, visited its stark landscapes, driven over its unmarked roads. Even so, I’ve never seen the Moon quite the way it appears here – a black-and-white world rendered in a riot of gorgeous colours.”
Many moons ago
As someone trained in geology, I found that the sections of the book covering the history of the Lunar Atlas piqued my particular interest. The Lunar Atlas was not the first attempt to map the surface of the Moon; one of the reproductions in the book shows an earlier effort from 1961 drawn up by USGS geologists Robert Hackman and Eugene Shoemaker.
Hackman and Shoemaker’s map shows the Moon’s Copernicus region, named after its central crater, which in turn honours the Renaissance-era Polish polymath Nicolaus Copernicus. It served as the first demonstration that the geological principles of stratigraphy (the study of rock layers) as developed on the Earth could also be applied to other bodies. The duo started with the law of superposition; this is the principle that when one finds multiple layers of rock, unless they have been substantially deformed, the oldest layer will be at the bottom and the youngest at the top.
“The chronology of the Moon’s geologic history is one of violent alteration,” explains science historian Matthew Shindell in LUNAR’s second essay. “What [Hackman and Shoemaker] saw around Copernicus were multiple overlapping layers, including the lava plains of the maria […], craters displaying varying degrees of degradations, and materials and features related to the explosive impacts that had created the craters.”
From these the pair developed a basic geological timeline, unpicking the recent history of the Moon one overlapping feature at a time. They identified five eras, with the Copernican, named after the crater and beginning 1.1 billion years ago, being the most recent.
Considering it was based on observations of just one small region of the Moon, their timescale was remarkably accurate, Shindell explains, although subsequent observations have redefined its stratigraphic units – for example by adding the Pre-Nectarian as the earliest era (predating the formation of Nectaris, the oldest basin), whose rocks can still be found broken up and mixed into the lunar highlands.
Accordingly, the different quadrangles of the atlas very much represent an evolving work, developing as lunar exploration progressed. Later maps tended to be more detailed, reflecting a more nuanced understanding of the Moon’s geological history.
New moon
Parts of the Lunar Atlas have recently found new life in the development of the first-ever complete map of the lunar surface, the “Unified Geologic Map of the Moon”. The new digital map combines the Apollo-era data with that from more recent satellite missions, including the Japan Aerospace Exploration Agency (JAXA)’s SELENE orbiter.
As former USGS Director and NASA astronaut Jim Reilly said when the unified map was first published back in 2020: “People have always been fascinated by the Moon and when we might return. So, it’s wonderful to see USGS create a resource that can help NASA with their planning for future missions.”
I might not be planning a Moon mission (whether by rocket or teetering tower of clutter), but I am planning to give the stunning LUNAR pride of place on my coffee table next time I have guests over – that’s how much it’s left me, ahem, “over the Moon”.
- 2024 Thames & Hudson 256pp £50.00
New method recycles quantum dots used in microscopic lasers
Researchers at the University of Strathclyde, UK, have developed a new method to recycle the valuable semiconductor colloidal quantum dots used to fabricate supraparticle lasers. The recovered particles can be reused to build new lasers with a photoluminescence quantum yield almost as high as that of lasers made from new particles.
Supraparticle lasers are a relatively new class of micro-scale lasers that show much promise in applications such as photocatalysis, environmental sensing, integrated photonics and biomedicine. The active media in these lasers – the supraparticles – are made by assembling and densely packing colloidal quantum dots (CQDs) in the microbubbles formed in a surfactant-stabilized oil-and-water emulsion. The underlying mechanism is similar to the way that dish soap, cooking oil and water mix when we do the washing up, explains Dillon H Downie, a physics PhD student at Strathclyde and a member of the research team led by Nicolas Laurand.
Supraparticles have a high refractive index compared to their surrounding medium. Thanks to this difference, light at the interface between them experiences total internal reflection. This means that when the diameter of the supraparticles is an integer multiple of the wavelength of the incident light, so-called whispering gallery modes (resonant light waves that travel around a concave boundary) form within the supraparticles.
“The supraparticles are therefore microresonators made of an optical gain material (the quantum dots),” explains Downie, “and individual supraparticles can be made to lase by optically pumping them.”
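A minimal numerical sketch of that resonance condition: whispering gallery modes appear when the optical path around the supraparticle circumference holds a whole number of wavelengths, m λ ≈ n π D. The effective refractive index and diameter below are illustrative assumptions, not values from the Strathclyde work:

```python
import math

# Whispering-gallery resonance condition: the optical path around the
# supraparticle circumference holds an integer number of wavelengths,
#   m * lambda ~ n_eff * pi * D.
# Both numbers below are assumed for illustration only.
n_eff = 1.8          # assumed effective index of the packed quantum dots
diameter_um = 8.0    # assumed supraparticle diameter, in microns

optical_path_um = n_eff * math.pi * diameter_um
for m in range(70, 74):  # a few neighbouring mode numbers
    wavelength_nm = optical_path_um / m * 1000.0
    print(f"m = {m}: lambda ~ {wavelength_nm:.0f} nm")
```

Neighbouring mode numbers give closely spaced resonances in the visible, which is why a micron-scale supraparticle can act as a compact laser cavity.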
The problem is that many CQDs are made from expensive and sometimes toxic elements. Demand for these increasingly scarce elements will likely outstrip supply before the end of this decade, but at present, only 2% of quantum dots made from these rare-earth elements are recycled. While researchers have been exploring ways of recovering them from electronic waste, the techniques employed often require specialized instruments, complex bio-metallurgical absorbents and hazardous acid-leaching processes. A more environmentally friendly approach is thus sorely needed.
Exceptional recycling potential
In the new work, Laurand, Downie and colleagues recycled supraparticle lasers by first disassembling the CQDs in them. They did this by suspending the dots in an oil phase and applying high-frequency ultrasound and heat. They then added water to separate out the dots. Finally, they filtered and purified the disassembled CQDs and tested their fluorescence efficiency before reassembling them into a new laser configuration.
Using this process, the researchers were able to recover 85% of the quantum dots from the initial supraparticle batch. They also found that the recycled quantum dots boasted a photoluminescence quantum yield of 83 ± 16%, which is comparable to the 86 ± 9% for the original particles.
“By testing the lasers’ performance both before and after this process we confirmed their exceptional recycling potential,” Downie says.
Simple, practical technique
Downie describes the team’s technique as simple and practical even for research labs that lack specialized equipment such as centrifuges and scrubbers. He adds that it could also be applied to other self-assembled nanocomposites.
“As we expect nanoparticle aggregates in everything from wearable medical devices to ultrabright LEDs in the future, it is, therefore, not inconceivable that some of these could be sent back for specialized recycling in the same way we do with commercial batteries today,” he tells Physics World. “We may even see a future where rare-earth or some semiconductor elements become critically scarce, necessitating the recycling for any and all devices containing such valuable nanoparticles.”
By proving that supraparticles are reusable, Downie adds, the team’s method provides “ample justification” to anyone wishing to incorporate supraparticle technology into their devices. “This is seen as especially relevant if they are to be used in biomedical applications such as targeted drug delivery systems, which would otherwise be limited to single-use,” he says.
With work on colloidal quantum dots and supraparticle lasers maturing at an incredible rate, Downie adds that it is “fantastic to be able to mature the process of their recycling alongside this progress, especially at such an early stage in the field”.
The study is detailed in Optical Materials Express.
Entanglement entropy in protons affects high-energy collisions, calculations reveal
An international team of physicists has used the principle of entanglement entropy to examine how particles are produced in high-energy electron–proton collisions. Led by Kong Tu at Brookhaven National Laboratory in the US, the researchers showed that quarks and gluons in protons are deeply entangled and approach a state of maximum entanglement when they take part in high-energy collisions.
While particle physicists have made significant progress in understanding the inner structures of protons, neutrons, and other hadrons, there is still much to learn. Quantum chromodynamics (QCD) says that the proton and other hadrons comprise quarks, which are tightly bound together via exchanges of gluons – mediators of the strong force. However, using QCD to calculate the properties of hadrons is notoriously difficult except under certain special circumstances.
Calculations can be simplified by describing the quarks and gluons as partons in a model that was developed in the late 1960s by James Bjorken, Richard Feynman, Vladimir Gribov and others. “Here, all the partons within a proton appear ‘frozen’ when the proton is moving very fast relative to an observer, such as in high-energy particle colliders,” explains Tu.
Dynamic and deeply complex interactions
While the parton model is useful for interpreting the results of particle collisions, it cannot fully capture the dynamic and deeply complex interactions between quarks and gluons within protons and other hadrons. These interactions are quantum in nature and therefore involve entanglement. This is a purely quantum phenomenon whereby a group of particles can be more highly correlated than is possible in classical physics.
“To analyse this concept of entanglement, we utilize a tool from quantum information science named entanglement entropy, which quantifies the degree of entanglement within a system,” Tu explains.
In physics, entropy is used to quantify the degree of randomness and disorder in a system. However, it can also be used in information theory to measure the degree of uncertainty within a set of possible outcomes.
“In terms of information theory, entropy measures the minimum amount of information required to describe a system,” Tu says. “The higher the entropy, the more information is needed to describe the system, meaning there is more uncertainty in the system. This provides a dynamic picture of a complex proton structure at high energy.”
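As a minimal illustration of that entropy measure, here is the classical Shannon entropy in natural units, with made-up distributions: a sharply peaked distribution needs little information to describe, while a uniform one saturates the maximum H = ln N – the classical analogue of the “maximum entanglement” limit discussed below.

```python
import math

def shannon_entropy(probs):
    """H = -sum(p * ln p): the minimum information, in nats, needed
    to describe a system drawn from this distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A sharply peaked (highly ordered) distribution carries little entropy...
print(shannon_entropy([0.97, 0.01, 0.01, 0.01]))      # ~0.17 nats
# ...while a uniform (maximally uncertain) one saturates H_max = ln N,
# the classical analogue of maximum entanglement over N states.
N = 4
print(shannon_entropy([1.0 / N] * N), "vs ln N =", math.log(N))  # both ~1.39
```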
Deeply entangled
In this context, particles in a system with high entanglement entropy will be deeply entangled – whereas those in a system with low entanglement entropy will be mostly uncorrelated.
In recent studies, entanglement entropy has been used to describe how hadrons are produced through deep inelastic scattering interactions – such as when an electron or neutrino collides with a hadron at high energy. However, the evolution with energy of entanglement entropy within protons had gone largely unexplored. “Before we did this work, no one had looked at entanglement inside of a proton in experimental high-energy collision data,” says Tu.
Now, Tu’s team has investigated how entanglement entropy varies with the speed of the proton – and how this relationship relates to the hadrons created during inelastic collisions.
Matching experimental data
Their study revealed that the equations of QCD can accurately predict the evolution of entanglement entropy – with their results closely matching experimental collision data. Perhaps most strikingly, they discovered that as the collision energy increases, the entanglement entropy may approach a state of maximum entanglement under certain conditions. This high degree of entropy is evident in the large numbers of particles that are produced in electron–proton collisions.
The researchers are now confident that their approach could lead to further insights about QCD. “This method serves as a powerful tool for studying not only the structure of the proton, but also those of the nucleons within atomic nuclei,” Tu explains. “It is particularly useful for investigating the underlying mechanisms by which nucleons are modified in the nuclear environment.”
In the future, Tu and colleagues hope that their model could boost our understanding of processes such as the formation and fragmentation of hadrons within the high-energy jets created in particle collisions, and the resulting shift in parton distributions within atomic nuclei. Ultimately, this could lead to a fresh perspective on the inner workings of QCD.
The research is described in Reports on Progress in Physics.