India’s space agency says a valve failure prevented a navigation spacecraft launched more than a year ago from raising its orbit.
The post Valve malfunction blamed for failure of Indian satellite to raise its orbit appeared first on SpaceNews.
Researchers in China have distributed device-independent quantum cryptographic keys over city-scale distances for the first time – a significant improvement on the previous record of a few hundred metres. Led by Jian-Wei Pan at the University of Science and Technology of China (USTC), part of the Chinese Academy of Sciences (CAS), the researchers say the achievement brings the world a step closer to a completely quantum-secure Internet.
Many of us use Internet encryption almost daily, for example when transferring sensitive information such as bank details. Today’s encryption techniques use keys based on mathematical algorithms, and classical supercomputers cannot crack them in any practical amount of time. Powerful quantum computers could change this, however, which has driven researchers to explore potential alternatives.
One such alternative, known as quantum key distribution (QKD), encrypts information by exploiting the quantum properties of photons. The appeal of this approach is that when quantum-entangled photons transmit a key between two parties, any attempted hack by a third party will be easy to detect because their intervention will disturb the entanglement.
While the basic form of QKD enables information to be transmitted securely, it does have some weak points. One of them is that a malicious third party could steal the key by hacking the devices the sender and/or receiver is using.
A more advanced version of QKD is device-independent QKD (DI-QKD). As its name suggests, this version does not depend on the state of a device. Instead, it derives its security key directly from fundamental quantum phenomena – namely, the violation of conditions known as Bell’s inequalities. Establishing this violation ensures that a third party has not interfered with the process employed to generate the secure key.
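The article does not spell out what a Bell-inequality violation looks like, so for orientation here is the canonical CHSH form – the textbook example, not necessarily the exact test the USTC team performed:

```latex
% CHSH form of Bell's inequality: E(a,b) is the correlation of
% measurement outcomes for Alice's setting a and Bob's setting b.
\[
  S \;=\; \bigl|\, E(a,b) - E(a,b') + E(a',b) + E(a',b') \,\bigr| \;\le\; 2
\]
% Any local hidden-variable model obeys S <= 2, while quantum mechanics
% allows violations up to the Tsirelson bound:
\[
  S_{\mathrm{quantum}} \;\le\; 2\sqrt{2}
\]
% A measured S > 2 therefore certifies entanglement without trusting
% the internal workings of the devices -- the basis of DI-QKD security.
```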
The main drawback of DI-QKD is that it is extremely technically demanding, requiring high-quality entanglement and an efficient means of detecting it. “Until now, this has only been possible over short distances – 700 m at best – and in laboratory-based proof-of-principle experiments,” says Pan.
In the latest work, Pan and colleagues constructed two quantum nodes consisting of single trapped atoms. Each node was equipped with four high-numerical-aperture lenses to efficiently collect single photons emitted by the atoms. These photons have a wavelength of 780 nm, which is not optimal for transmission through optical fibres. The team therefore used a process known as quantum frequency conversion to shift the emitted photons to a longer wavelength of 1315 nm, which is less prone to optical loss in fibres.
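To see why that wavelength shift matters, a back-of-envelope comparison helps. The attenuation figures below are typical textbook values for silica fibre, not numbers taken from the paper:

```python
# Rough illustration of fibre loss at the two wavelengths.
# Assumed attenuation: ~4 dB/km near 780 nm, ~0.3 dB/km near 1310 nm.
def transmission(length_km: float, atten_db_per_km: float) -> float:
    """Fraction of photons surviving a fibre of the given length."""
    return 10 ** (-atten_db_per_km * length_km / 10)

for wavelength_nm, atten in [(780, 4.0), (1315, 0.3)]:
    print(f"{wavelength_nm} nm over 11 km: {transmission(11, atten):.2e}")
# At 780 nm only ~4e-5 of photons survive 11 km; at 1315 nm, ~0.47 do --
# several orders of magnitude more, which is why the conversion pays off.
```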
By interfering and detecting a single photon, the team was able to generate what’s known as heralded entanglement between the two quantum nodes – something Pan describes as “an essential resource” for DI-QKD. While significant progress has been made in extending the entangling distance for qubits of this type, Pan notes that these advances have been hampered by low fidelities and low entangling rates.
To address this, Pan and his colleagues employed a single-photon-based entangling scheme that boosts remote entangling probability by more than two orders of magnitude. They also placed their atoms in highly excited Rydberg states to generate single photons with high purity and low noise. “It is these innovations that allow us to achieve high-fidelity and high-rate entanglement over a long distance,” Pan explains.
Using this setup, the researchers explored the feasibility of performing DI-QKD between two entangled atoms linked by optical fibres up to 100 km in length. In this study, which is detailed in Science, they demonstrated practical DI-QKD under finite-key security over 11 km of fibre.
Based on the technologies they developed, Pan thinks it could now be possible to implement DI-QKD over metropolitan scales with existing optical fibres. Such a system could provide encrypted communication with the highest level of physical security, but Pan notes that it could also have other applications. For example, high-fidelity entanglement could also serve as a fundamental building block for constructing quantum repeaters and scaling up quantum networks.
Carlos Sabín, a physicist at the Autonomous University of Madrid (UAM), Spain, who was not involved in the study, says that while the work is an important step, there is still a long way to go before we are able to perform completely secure and error-free quantum key distribution on an inter-city scale. “This is because quantum entanglement is an inherently fragile property,” Sabín explains. “As light travels through the fibre, small losses accumulate and the entanglement generated is of poorer quality, which translates into higher error rates in the cryptographic keys generated. Indeed, the results of the experiment show that errors in the key range from 3% when the distance is 11 km to more than 7% for 100 km.”
Pan and colleagues now plan to add more atoms to each node and to use techniques like tweezer arrays to further enhance both the entangling rate and the secure key rate over longer distances. “We are aiming for 1000 km, over which we hope to incorporate quantum repeaters,” Pan tells Physics World. “By using processes like ‘entanglement swapping’ to connect a series of such two-node entanglement, we anticipate that we will be able to maintain a similar entangling rate for much longer distances.”
The post Quantum-secure Internet expands to citywide scale appeared first on Physics World.

A recent SpaceNews opinion article argued that it is time to “take astronomy off Earth.” The suggestion is straightforward: If satellite constellations and commercial space activity threaten ground-based astronomy, perhaps astronomers should simply move their work into space. As current, incoming and past presidents of the American Astronomical Society, we feel impelled to respond. As […]
The post The future of astronomy is both on Earth and in space appeared first on SpaceNews.
Todd McNutt is a radiation oncology physicist at Johns Hopkins University in the US and the co-founder of Oncospace, where he led the development of an artificial intelligence (AI)-powered tool that simultaneously accelerates radiation planning and elevates plan quality and consistency. The software, now rebranded as Plan AI and available from US manufacturer Sun Nuclear, draws upon data from thousands of previous radiotherapy treatments to predict the lowest possible dose to healthy tissues for each new patient. Treatment planners then use this information to define goals that streamline and automate the creation of a best achievable plan.
Physics World’s Tami Freeman spoke with McNutt about the evolution of Oncospace and the benefits that Plan AI brings to radiotherapy patients and cancer treatment centres.
Back in 2007, several groups were discussing how we could better use clinical data for discovery and knowledge generation. I had several meetings with folks at Johns Hopkins, including Alex Szalay who helped develop the Sloan Digital Sky Survey. He built a large database of galaxies and stars and it became a huge research platform for both amateur and professional astronomers.
From that discussion, and other initiatives, we looked at moving towards structured data collection for patients in the clinical environment. By marrying these data with radiation treatment plans we could study how dose distributions across the anatomy affect patient outcomes. And we took that opportunity to build a database for radiotherapy.
After populating the database with data from many patients, we could examine which anatomic features impact our ability to generate a plan that minimizes radiation dose to normal tissues while treating target volumes as well as possible. We came up with a feature set that characterized the relationships between normal anatomy and targets, as well as target complexity.
This early work allowed us to predict expected doses from these shape-relationship features, and it worked well. At that point, we knew we could tap into this database and generate a prediction that could help create treatment plans for new patients. We thought of this as personalized medicine: for the first time, we could see the level of treatment plan quality that we could achieve for a specific patient.
I thought that this was useful commercially and that we should get it out to other clinics. Praveen Sinha, who I’d known from my previous work at Philips and now leads Sun Nuclear’s software business line, asked if I wanted to create a startup. The timing was right for both of us and I had a team here ready to go, so we went ahead and did it. With his knowledge of startups and my knowledge of what we wanted to achieve, we had perfect timing and a perfect group to work with.
The idea behind predictive planning is that, for a given patient, I can predict the expected dose that I should be able to achieve for them.
Treatment planning involves specifying dosimetric objectives to the planning system and asking it to optimize radiation delivery to meet these. But nobody really knows what the right objectives even are – it is just a trial-and-error process. Plan AI’s prediction provides a rational set of objectives for plan optimization, allowing the planning system’s algorithm to move towards a good solution and making treatment planning an easier problem to solve.
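As a rough illustration of what a dosimetric objective looks like to an optimizer, here is a minimal sketch of a standard quadratic overdose penalty. The structure, limit and weight are invented for the example; Plan AI's actual objectives are of course more sophisticated:

```python
import numpy as np

def overdose_penalty(dose: np.ndarray, limit_gy: float, weight: float = 1.0) -> float:
    """Quadratic penalty on voxels exceeding a dose limit -- a common
    textbook form of planning objective, not Plan AI's actual one."""
    excess = np.maximum(0.0, dose - limit_gy)
    return weight * float(np.mean(excess ** 2))

# Hypothetical example: penalize an organ-at-risk above a predicted limit.
oar_dose = np.random.default_rng(0).uniform(0, 30, size=10_000)  # Gy per voxel
print(overdose_penalty(oar_dose, limit_gy=20.0))
```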
Peer review involves a peer physician looking at every treatment plan to evaluate it for quality and safety. But again, people don’t really know the level of quality you can generate; it depends on the patient’s anatomy. Providing a predicted dose with clinical dose goals enables a rapid review to see whether it is a high-quality plan or not.
In the past we looked at simple things like whether a contour is missing slices or contains discontinuities, and Plan AI checks for this, but you can do far more with AI. For example, you could look at all the contoured rectums in the system and predict whether a new contour extends too far into the sigmoid colon; if it does, it may be mis-contoured. We have research software that can flag such potential anomalies so they don’t get overlooked.
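A minimal sketch of the simpler kind of check mentioned above, flagging a contour with missing slices, might look like this (the slice spacing and tolerance are assumed values):

```python
def missing_slices(z_positions: list[float], spacing_mm: float = 3.0,
                   tol: float = 0.1) -> list[float]:
    """Return z-locations where a contour's slice stack has a gap.
    A gap is any jump between consecutive slices larger than the
    nominal spacing (spacing and tolerance here are assumptions)."""
    zs = sorted(z_positions)
    return [z for z, z_next in zip(zs, zs[1:]) if (z_next - z) > spacing_mm * (1 + tol)]

# Example: a contour drawn at 3 mm spacing with one slice skipped.
print(missing_slices([0.0, 3.0, 6.0, 12.0, 15.0]))  # -> [6.0]
```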
When we first started, we developed a large SQL database containing all the shape-relationship features and dosimetry features. The SQL language is ideal for being able to query and sift through the data, but when the company was formed, we recognized that there was some age to that technology.
So for the Plan AI data lake, we extracted all the different shape-relationship and shape-complexity features and put them into a Parquet database in the cloud. This made the data lake much more amenable to applying machine learning algorithms to it. The SQL data lake at Johns Hopkins is maintained separately and primarily used to investigate toxicity predictions and spatial dose patterns. But for Plan AI, the models are fixed and streamlined for the specific task of dose prediction.
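A minimal sketch of that kind of feature store, using pandas with the pyarrow engine; the column names are invented for illustration:

```python
import pandas as pd

# Hypothetical shape-relationship features extracted from past plans.
features = pd.DataFrame({
    "patient_id": ["p001", "p002", "p003"],
    "site": ["head_neck", "prostate", "head_neck"],
    "oar_to_target_min_dist_mm": [4.2, 11.7, 2.9],
    "target_volume_cc": [88.0, 41.5, 120.3],
})

# Columnar Parquet storage is what makes bulk ML-style scans cheap.
features.to_parquet("features.parquet", engine="pyarrow", index=False)

# Reading back only the columns a model needs avoids full-table scans.
cols = ["oar_to_target_min_dist_mm", "target_volume_cc"]
X = pd.read_parquet("features.parquet", columns=cols)
print(X.head())
```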
One of the first tasks was to curate the data, using the AAPM’s standardized structure-naming model. Our data scientist Julie Shade wrote some tools for automatic name mapping and target identification; that helped us process much larger amounts of data for the model.
Once we had all the shape-relationship and shape-complexity features and all the doses, we trained the models by anatomical region. We have FDA-approved models for the male and female pelvis, thorax, abdomen and head-and-neck. For each of these, we predict the doses for every organ-at-risk. Then we used five-fold cross-validation to make sure that the predictions were good on an internal data set.
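For reference, five-fold cross-validation in the generic scikit-learn style looks like this; the sketch below uses synthetic data and is not Oncospace's pipeline:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))                                  # stand-in shape features
y = X @ rng.normal(size=8) + rng.normal(scale=0.3, size=500)   # stand-in doses

# Each fold holds out 20% of patients; the model never sees its test fold.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    err = mean_absolute_error(y[test_idx], model.predict(X[test_idx]))
    print(f"fold {fold}: MAE = {err:.3f}")
```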
We also performed external validation at institutions including Johns Hopkins and Montefiore hospitals. We created predicted plans from recent treatment plans that had been evaluated by physicians. For almost all cases, both plan quality and plan efficiency were improved with Plan AI.
One aspect of this training is that whenever we drive optimization via predictive planning we want to push towards the best achievable dose. Regular machine learning predicts an expected, or average, dose across all patients. But you never want to drive a treatment plan towards the average dose, because then every plan you generate will be happy being average. Our model predicts both the average and the best achievable dose, and drives plan optimization towards the best achievable.
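One generic way to capture "best achievable" rather than average is quantile regression, which predicts a low percentile of the historical dose distribution instead of its mean. The article does not say this is Oncospace's actual method, so treat the sketch below as illustrative only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                     # stand-in anatomy features
y = 20 + 3 * X[:, 0] + rng.exponential(2, 1000)    # stand-in OAR mean doses (Gy)

# loss="quantile" with alpha=0.1 targets the 10th percentile: roughly
# the dose that the best 10% of comparable historical plans achieved.
best_achievable = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, y)
average = GradientBoostingRegressor(loss="squared_error").fit(X, y)

x_new = rng.normal(size=(1, 5))
print("average prediction:        ", average.predict(x_new)[0])
print("best-achievable prediction:", best_achievable.predict(x_new)[0])
```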
Radiation therapy is protocol-driven: we know what technique we’re going to use to treat and what our clinical dose goals are for different structures. What we don’t know is the patient-specific part of that. So for each anatomical region, we built models out of a wide range of treatment protocols, with many different types of patients, to ensure that the same prediction model works for any protocol. This means a user can use any protocol for treatment and the predictions will still work; they don’t have to retrain anything. It’s ready to go out of the box: there’s a library of protocols to start with, and you can change protocols as you need for your own clinic.
The other part of being clinic-ready is aligning with the way that planning is currently performed, which is using dose–volume histograms. Treatment plans are optimized by manipulating these dose objectives, and that’s exactly what we predict. So users aren’t changing the whole paradigm of how planners operate. They still use their treatment planning system (TPS) – we just put the objectives in there. Basically, a TPS script sends the patient’s CT and contours to the cloud, where Plan AI makes the predictions. The TPS then pulls back in the objectives built from the models, based on this specific patient’s anatomy. The TPS runs the optimization and, as a last step, can send the plan back to Plan AI to check that it fits within the best achievable predictions.
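A purely illustrative sketch of that round trip is below. The endpoint URL, payload fields and response schema are hypothetical placeholders, not Sun Nuclear's actual API:

```python
import requests

def fetch_predicted_objectives(ct_path: str, contours_path: str) -> dict:
    """Send anonymized CT + contours to the cloud service and pull back
    per-organ dose objectives for the TPS optimizer (schema hypothetical)."""
    with open(ct_path, "rb") as ct, open(contours_path, "rb") as rs:
        resp = requests.post(
            "https://example-plan-ai.cloud/predict",  # hypothetical endpoint
            files={"ct": ct, "contours": rs},
            timeout=300,
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"parotid_L": {"mean_gy": 18.2}, ...}
```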
Interestingly, the challenges aren’t technical, they are more human-related. One of the more systemic challenges is data security when using medical data for training. A nice thing about our system is that the features we generate from treatment plans are just mathematical shape-relationship features and don’t involve a lot of identifiable information.
AI has been used in radiation therapy for image contouring and auto-segmentation, and early efforts were not so good. So, there’s always a good, healthy scepticism. But once you show people that it works and works well, this can be overcome. I have seen some people worried about job security and AI taking over. We are medical professionals designing a treatment plan to care for a patient and there’s a lot of pride and art in that – if you automate that, it takes away some of this pride and art.
I tell people that if we automate the easier things, then they can spend their quality time on the more difficult and challenging cases, because that’s where their talent might be needed more.
Introduce it as an assistant, not as a solution. You want people that already know what they’re doing to be able to use their knowledge more efficiently. We want to make their jobs easier and show them that it also improves quality.
Dosimetrists, for example, create a plan and work hard getting the dose down – and then the physician looks at it and suggests that they can do better. Predictive planning gives them confidence that they are right and takes the uncertainty out of the physician review process. And once you’ve gained that level of confidence, you can start using it for adaptive planning or other technologies.
Right now, a lot of data has been collected, but we want that pool of data to keep growing and the models to keep learning. Having multiple centres adding to this pool of knowledge and being able to continually update those models from new, broader data sets could be of huge value.
In terms of patient outcomes, we’ve done a lot of the work looking at how the spatial pattern of dose impacts toxicity and outcomes. This is part of the research being performed at Johns Hopkins and still in discovery mode. But down the road, some of these predictions of normal tissue outcomes could be fed into the planning process to help reduce toxicity at the patient level.
During my prior experience building treatment planning systems, the biggest problem was always that nobody knew what the objective was. Nobody knew how to tell the system: “this is the dose I expect to receive, now optimize to get it for me”, because you didn’t know what you could do. For any given patient, you could ask for too much or too little. Now, for the first time, I argue that we actually know what our objective is in our treatment planning.
This levels the playing field between different environments, different countries, or even different dosimetrists with different levels of experience. The Plan AI tool brings all this to a consistent state and enables high quality, efficient planning everywhere. We can provide this predictive planning tool to clinics around the world. Now we just have to get everybody using it.
The post Todd McNutt: how an AI software solution enables creation of the best possible radiation treatment plans appeared first on Physics World.
In his opening remarks to the 4th International Symposium on the History of Particle Physics, Chris Llewellyn Smith – who was a director-general of CERN in the 1990s – suggested participants should speak about “what’s not written in the journals”, including “mistakes, dead-ends and problems with getting funding”. Doing so, he said, would “provide insight into the way science really progresses”.
The symposium was not your usual science conference. Held last November at CERN, it took place inside the lab’s 400-seat main auditorium, which has been the venue for many historic announcements, including the discovery of the Higgs boson. Its brown-beige walls are covered with lively designs by the Finnish artist Ilona Rista, suggesting to me the aftermath of a collision of high-energy bar codes.
The focus of the meeting was the development of particle physics in the 1980s and 1990s – a period that saw the construction and operation of various important accelerators and detectors. At CERN, these included the UA1 and UA2 experiments at the Super Proton Synchrotron, where the W and Z bosons were discovered. Later, there was the Large Electron-Positron Collider (LEP), which came online in 1989, and the Large Hadron Collider (LHC), approved five years later.
Delegates also heard about the opening of various accelerators in the US during those two decades, including two at the Stanford Linear Accelerator Center – the Positron-Electron Project in 1980 and the Stanford Linear Collider in 1989. Most famous of all was the start-up of the Tevatron at Fermilab in 1983. Over at Dubna in the former Soviet Union, meanwhile, scientists built the Nuclotron, a superconducting synchrotron, which opened in 1992.
Conference speakers covered unfinished machines of the era as well. The US cancelled two proton–proton facilities – ISABELLE in 1983 and the Superconducting Super Collider (SSC) a decade later. The Soviet Union, meanwhile, abandoned the multi-TeV proton–proton collider UNK a few years later, though news has recently emerged that Russia might revive the project.
Several speakers recounted the discovery of the W and Z particles at CERN in 1983 and the discovery of the top quark at Fermilab in 1995. Others addressed the strange fact that fewer neutrinos from the Sun had been detected than theory suggested. The “solar-neutrino problem”, as it was known, was finally resolved by the discovery of neutrino oscillations, for which Takaaki Kajita and Art McDonald shared the 2015 Nobel Prize for Physics.
The conference also addressed unsuccessful searches for proton decay, axions, magnetic monopoles, the Higgs boson, supersymmetric particles and other targets. Other speakers described projects with highly positive outcomes, such as the advent of particle cosmology, or what some have jokingly dubbed “the heavenly lab”. The development of string theory, grand unified theories and perturbative quantum chromodynamics was tackled too.
In an exchange in the question-and-answer session after one talk, the Greek physicist Kostas Gavroglu referred to many such quests as “failures”. That remark prompted the Australian-born US theoretical physicist Helen Quinn to say she preferred the term “falling forward”; such failures, she said, were instances of “I tried this, and it didn’t work so I tried that”.
In relating his work on detecting gravitational waves, the US Nobel-prize-winning physicist Barry Barish said he felt his charge was neither to celebrate the importance of his discoveries nor the ingenuity of the route he took. Instead, Barish explained, his job was to answer the much more informal question: “What made me do what?”.
His point was illustrated by the US theorist Alan Guth, who described the very human and serendipitous path he took to working on cosmic inflation – the super-fast expansion of the universe just after the Big Bang. When he started, Guth said, “all the ingredients were already invented”. But the startling idea of inflation hinged on accidental meetings, chance conversations, unexpected visits, a restricted word count for Physical Review Letters, competitions, insecurities and “spectacular realizations” coalescing.
Another theme that arose in the conference was that science does not unfold inside its own bubble but can have extensive and immediate impacts on the world around it. Two speakers, for instance, recounted the invention of the World Wide Web at CERN in the late 1980s. It’s fair to say that no other invention by a single individual – Tim Berners-Lee – has so radically and quickly transformed the world.
The growing role of international politics in promoting and protecting projects was mentioned too, with various speakers explaining how high-level political negotiations enabled physicists to work at institutions and experiments in other nations. The Polish physicist Agnieszka Zalewska, for example, described her country’s path to membership in CERN, while Russian-born US physicist Vladimir Shiltsev spoke about the “diaspora” of Russian particle physicists after the fall of the Soviet Union in 1991.
Sometimes politics created destructive interference. The US physicist, historian and author Michael Riordan described how the US’s determination to “go it alone” to outcompete Europe in high-energy physics was a major factor in bringing about the opposite: the termination of the SSC in 1993. As a result of that project’s controversial closure, the centre of gravity of high-energy physics shifted to Europe.
Indeed, contemporary politics occasionally hit the conference itself in incongruous and ironic ways. Two US physicists, for example, were denied permission to attend because budgets had been cut and travel restrictions increased. In the end, one took personal time off and paid his own way, leaving his affiliation off the programme.
Before the conference, some people complained that the organizers hadn’t paid enough attention to physicists who’d worked in the Soviet Union but were from occupied republics. Several speakers addressed this shortcoming by mentioning people like Gersh Budker (1918–1977). A Ukrainian-born physicist who worked and died in the Soviet Union, Budker was nominated for a Nobel Prize (1957) and even has a street named after him at CERN. Unmentioned, though, was that Budker was Jewish and that his father was killed by Ukrainian nationalists in a pogrom.
On the final day of the conference, which just happened to be World Science Day for Peace and Development, CERN mounted a public screening of the 2025 documentary film The Peace Particle. Directed by Alex Kiehl, much of it was about CERN’s internationalism, with a poster for the film describing the lab as “Mankind’s biggest experiment…science for peace in a divided world”.
But in the Q&A afterwards some audience members criticized CERN for allegedly whitewashing Russia for its invasion of Ukraine and Israel for genocide. Those onstage defended CERN on the grounds of its desire to promote internationalism.
The keynote speaker of the conference was John Krige, a science historian from Georgia Tech who has worked on a three-volume history of CERN. Those who launched the lab, Krige reminded the audience, had radical “scientific, political and cultural aspirations” for the institution. Their dream was that CERN wouldn’t just revive European science and promote regional collaboration after the Second World War, but could also potentially improve the global world order.
Krige went on to quote one CERN founder, who’d said that international science facilities such as CERN would be “one of the best ways of saving Western civilization”. Recent events, however, have shown just how fragile those ambitions are. Alluding to CERN’s Future Circular Collider and other possible projects, Llewellyn Smith ended his closing remarks with a warning.
“The perennial hope that the next big high-energy project will be genuinely global,” he said, “seems to be receding over the horizon due to the polarization of world politics”.
The post The future of particle physics: what can the past teach us? appeared first on Physics World.

As the number of satellites in orbit grows, one emerging challenge is the difficulty some satellite operators have contacting counterparts to avoid potential collisions.
The post In space traffic coordination, the biggest challenge may be coordination appeared first on SpaceNews.

Europe’s investment arm is lending Luxembourg-based OQ Technology 25 million euros ($30 million) to expand its direct-to-device constellation, bolstering the continent’s push to compete with U.S.-led efforts to connect smartphones from space.
The post OQ Technology secures $30 million from Europe for satellite-to-smartphone expansion appeared first on SpaceNews.
Attempts to understand quantum phase transitions in open systems usually rely on real‑time Lindbladian evolution, which tracks how a quantum state changes as it relaxes toward a steady state. This approach is powerful for studying decoherence, dissipation and long‑time behaviour, but it often fails to reveal the deeper structure of the system, including the phase transitions, critical points and hidden quantum order that define its underlying physics.
In this work, the researchers introduce a new framework called imaginary‑time Lindbladian evolution, which allows them to define and classify quantum phases in open systems using the spectrum of an imaginary‑Liouville superoperator. This approach works not only for pure ground states but also for finite‑temperature Gibbs states of stabilizer Hamiltonians, showing its relevance for realistic, mixed‑state conditions.
A key diagnostic in their method is the imaginary‑Liouville gap, the spectral gap between the lowest and next‑lowest decay modes. When this gap closes, the system undergoes a phase transition, a change that is accompanied by diverging correlation lengths and nonanalytic shifts in physical observables. The closing of this gap also coincides with the divergence of the Markov length, a recently proposed indicator of criticality in open quantum systems.
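Schematically (the notation here is a standard convention, not copied from the paper), if the imaginary-Liouville superoperator has eigenvalues ordered by real part, the gap is

```latex
% Imaginary-Liouville gap, schematic definition for a superoperator
% with spectrum \{\lambda_0, \lambda_1, \dots\} ordered by real part:
\[
  \Delta \;=\; \operatorname{Re}\lambda_{1} \;-\; \operatorname{Re}\lambda_{0},
\]
% with a phase transition signalled by \Delta \to 0 in the thermodynamic
% limit, accompanied by a diverging correlation length.
```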
To demonstrate the power of their framework, the researchers map out phase diagrams for symmetric systems, capturing both spontaneous symmetry breaking and average symmetry‑protected topological phases. Their method reveals universal critical behaviour that real‑time Lindbladian steady states fail to detect, highlighting why imaginary‑time evolution fills a missing piece in the theory of open‑system phases.
Importantly, the authors emphasise that real‑time Lindbladians remain essential for modelling dissipation in practical settings. Their new framework complements this conventional approach, offering a systematic way to study phase transitions in open systems. They also outline how phase diagrams can be constructed using both bottom‑up (state‑based) and top‑down (Hamiltonian‑based) strategies, illustrating the method with a dissipative transverse‑field Ising model.
Overall, this work provides a unified and versatile way to understand quantum phases in open systems, revealing critical behaviour and topological structure that were previously inaccessible. It opens new directions for studying mixed‑state quantum matter and advances the theoretical foundations needed for future quantum technologies.
Yuchen Guo et al 2025 Rep. Prog. Phys. 88 118001
Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)
The post A breakthrough in modelling open quantum matter appeared first on Physics World.
In the macroscopic world, we see irreversible processes everywhere: heat flowing from hot to cold, gases mixing, systems decaying. Yet at the microscopic level, quantum mechanics is perfectly reversible, with its equations running equally well forwards and backwards in time. How, then, does irreversibility emerge from fundamentally reversible dynamics?
A common explanation is coarse-graining, which simplifies a complex system by ignoring microscopic details and focusing only on large-scale behaviour. To make the micro–macro divide precise, however, one must first define what “macroscopic” means. Here it is given a quantitative inferential meaning: a state is macroscopic if it is perfectly inferable from the perspective of a specified measurement and prior. Central to this framework is a coarse-graining map built from the measurement and its optimal Bayesian recovery via the Petz map; macroscopic states are precisely its fixed points, turning macroscopicity into a sharp condition of perfect inferability. This construction is grounded in Bayesian retrodiction, which infers what a system likely was before it was measured, together with an observational deficit that quantifies how much information is lost in forming a macroscopic description.
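For concreteness, the Petz recovery map has a standard closed form. Given a channel and a prior state, the optimal Bayesian recovery referred to above reads

```latex
% Petz recovery map for a channel \mathcal{N} and prior state \sigma:
\[
  \mathcal{P}_{\sigma,\mathcal{N}}(X)
  \;=\;
  \sigma^{1/2}\,
  \mathcal{N}^{\dagger}\!\Bigl(
    \mathcal{N}(\sigma)^{-1/2}\, X \,\mathcal{N}(\sigma)^{-1/2}
  \Bigr)\,
  \sigma^{1/2},
\]
% where \mathcal{N}^{\dagger} is the adjoint of the channel. The paper's
% coarse-graining map composes the measurement with this recovery; its
% fixed points are precisely the macroscopic states.
```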
States that are macroscopically inferable can be characterised in several equivalent ways, all tied to a new measure of disorder called macroscopic entropy, which captures how irreversible, or “uninferable”, a macroscopic process appears from the observer’s perspective. This perspective is formalised through inferential reference frames, built from the combination of a prior and a measurement, which determine what an observer can and cannot recover about the underlying quantum state.
The researchers also develop a resource theory of microscopicity, treating macroscopic states as free and identifying the operations that cannot generate microscopic detail. This unifies and extends existing resource theories of coherence, athermality, and asymmetry. They further introduce observational discord, a new way to understand quantum correlations when observational power is limited, and provide conditions for when this discord vanishes.
Altogether, this work reframes macroscopic irreversibility as an information-theoretic phenomenon, grounded not in a fundamental dynamical asymmetry but in an inferential asymmetry arising from the observer’s limited perspective. It offers a unified way to understand coarse-graining, entropy, and the emergence of classical behaviour from quantum mechanics. It deepens our understanding of time’s direction and has implications for quantum computing, thermodynamics, and the study of quantum correlations in realistic, constrained settings.
Macroscopicity and observational deficit in states, operations, and correlations
Teruaki Nagasawa et al 2025 Rep. Prog. Phys. 88 117601
Focus on Quantum Entanglement: State of the Art and Open Questions guest edited by Anna Sanpera and Carlo Marconi (2025-2026)
The post How reversibility becomes irreversible appeared first on Physics World.

Reconciliation boost lifts 2026 totals, but sustainability questions loom for missile defense and Space Development Agency
The post A banner year for military space funding— with an unclear path beyond appeared first on SpaceNews.
Researchers at Los Alamos National Laboratory in New Mexico, US, have used visible light to both image and manipulate the domains of a chiral antiferromagnet (AFM). By “painting” complex patterns onto samples of cobalt niobium sulfide (Co1/3NbS2), they demonstrated that it is possible to control AFM domain formation and dynamics, boosting prospects for data storage devices based on antiferromagnetic materials rather than the ferromagnetic ones commonly used today.
In antiferromagnetic materials, the spins of neighbouring atoms in the material’s lattice are opposed to each other (they are antiparallel). For this reason, they do not exhibit a net magnetization in the absence of a magnetic field. This characteristic makes them largely immune to disturbances from external magnetic fields, but it also makes them all but invisible to simple electrical and optical probes, and extremely difficult to manipulate.
In the new work, a Los Alamos team led by Scott Crooker focused on Co1/3NbS2 because of its topological nature. In this material, layers of cobalt atoms are positioned, or intercalated, between monolayers of niobium disulfide, creating 2D triangular lattices with ABAB stacking. The spins of these cobalt atoms point either toward or away from the centers of the tetrahedra formed by the atoms. The result is a noncoplanar spin ordering that produces a chiral, or “handed,” spin texture.
This chirality affects the motion of electrons in the material because when an electron passes through a chiral pattern of spins, it picks up a geometrical phase known as a Berry phase. This makes it move as if it were “seeing” a region with a real magnetic field, giving the material a nonzero Hall conductivity which, in turn, affects how it absorbs circularly polarized light.
To characterize this behaviour, the researchers used an optical technique called magnetic circular dichroism (MCD) that measures the difference in absorption between left and right circularly polarized light and depends explicitly on the Hall conductivity.
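In one common convention (a generic definition, not necessarily the normalization used in the paper), the MCD signal is

```latex
% Normalized magnetic circular dichroism:
\[
  \mathrm{MCD}
  \;=\;
  \frac{\alpha_{+} - \alpha_{-}}{\alpha_{+} + \alpha_{-}},
\]
% where \alpha_{+} and \alpha_{-} are the absorption coefficients for the
% two circular polarizations; the sign of the signal tracks the sign of
% the Hall conductivity \sigma_{xy}, and hence the domain chirality.
```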
Similar to the MCD that is measured in well-known ferromagnets such as iron or nickel, the amplitude and sign of the MCD measured in Co1/3NbS2 varied as a function of the wavelength of the light. This dependence occurs because light prompts optical transitions between filled and empty energy bands. “In more complex materials like this, there is a whole spaghetti of bands, and one needs to consider all of them,” Crooker explains. “Precisely which mix of transitions are being excited depends of course on the photon energy, and this mix changes with energy. Sometimes the net response is positive, sometimes negative; it just depends on the details of the band structure.”
To understand the mix of transitions taking place, as well as the topological character of those transitions, scientists use the concept of Berry curvature, which is the momentum-space version of the magnetic field-like effect described earlier. If the accumulated Berry phase is positive (negative), then the electron is moving through a spin texture with right-handed (left-handed) chirality, which is captured by the Berry curvature of the band structure in momentum space.
To image the domains with positive and negative chirality directly, the researchers cooled the sample below its ordering temperature, shined light of a particular wavelength onto it, and measured its MCD using a scanning MCD microscope. The sign of the measured MCD value revealed the chirality of the AFM domains.
To “write” a different chirality into these AFM domains, the researchers again cooled the sample below its ordering temperature, this time in the presence of a small positive magnetic field B, which fixed the sample in a positive chiral AFM state. They then reversed the polarity of B and illuminated a spot of the sample to heat it above the ordering temperature. Once the spot cooled down, the negative-polarity B-field changed the AFM state in the illuminated region into a negative chirality. When the “painting” was finished, the researchers imaged the patterns with the MCD microscope.
In the past, a similar thermo-magnetic scheme gave rise to ferromagnetic-based data storage disks. This work, which is published in Physical Review Letters, marks the first time that light has been used to manipulate AFM chiral domains – a fundamental requirement for developing AFM-based information storage technology and spintronics. In the future, Crooker says the group plans to extend this technique to characterize other complex antiferromagnets with nontrivial magnetic configurations, use light to “write” interesting spatial patterns of chiral domains (patterns of Berry phase), and see how this influences electrical transport.
The post Visible light paints patterns onto chiral antiferromagnets appeared first on Physics World.