New diffractive camera hides images from view

Now you see it: A schematic of the experiment, in which an optical diffractive camera hides information by concealing it within ordinary-looking “dummy” patterns. (Courtesy: A Ozcan)

Information security is an important part of our digital world, and various techniques have been deployed to keep data safe during transmission. But while these traditional methods are efficient, the mere existence of an encrypted message can alert malicious third parties to the presence of information worth stealing. Researchers at the University of California, Los Angeles (UCLA), US, have now developed an alternative based on steganography, which aims to hide information by concealing it within ordinary-looking “dummy” patterns. The new method pairs an all-optical diffractive camera with an electronic decoder network that the intended receiver can use to retrieve the original image.

“Cryptography and steganography have long been used to protect sensitive data, but they have limitations, especially in terms of data embedding capacity and vulnerability to compression and noise,” explains Aydogan Ozcan, a UCLA electrical and computer engineer who led the research. “Our optical encoder-electronic decoder system overcomes these issues, providing a faster, more energy-efficient and scalable solution for information concealment.”

A seemingly mundane and misleading pattern

The image-hiding process starts with a diffractive optical process that takes place in a structure composed of multiple layers of 3D-printed features. Light passing through these layers is manipulated to transform the input image into a seemingly mundane and misleading pattern. “The optical transformation happens passively,” says Ozcan, “leveraging light-matter interactions. This means it requires no additional power once physically fabricated and assembled.”

The result is an encoded image that appears ordinary to human observers, but contains hidden information, he tells Physics World.

The encoded image is then processed by an electronic decoder, which uses a convolutional neural network (CNN) that has been trained to decode the concealed data and reconstruct the original image. This optical-to-digital co-design ensures that only someone with the appropriate digital decoder can retrieve the hidden information, making it a secure and efficient method of protecting visual data.
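
To make the co-design idea concrete, the sketch below shows what a minimal electronic decoder of this general type might look like in PyTorch. It is a hypothetical illustration, not the UCLA group’s network: the layer sizes, the 28 × 28 greyscale image format and the training step are all assumptions.

# Hypothetical sketch of an electronic decoder for a diffractive optical encoder.
# The architecture, image size (28 x 28 greyscale) and training loop are
# illustrative assumptions, not the network described in the paper.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # reconstructed image
        )

    def forward(self, encoded):
        return self.net(encoded)

decoder = Decoder()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

# One training step: 'encoded' stands in for measured camera outputs and
# 'original' for the corresponding hidden input images.
encoded = torch.rand(8, 1, 28, 28)
original = torch.rand(8, 1, 28, 28)
loss = loss_fn(decoder(encoded), original)
loss.backward()
optimizer.step()

In practice the training pairs would be encoded images produced (or simulated) by the diffractive optics together with the corresponding original inputs.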

A secure and efficient method for visual data protection

The researchers tested their technique using arbitrarily chosen handwritten digits as input images. The diffractive processor successfully transformed these into a uniform-looking digit 8. The CNN was then able to reconstruct the original handwritten digits using information “hidden” in the 8.

All was not plain sailing, however, explains Ozcan. For one, the UCLA researchers had to ensure that the digital decoder could accurately reconstruct the original images despite the transformations applied by the diffractive optical processor. They also had to show that the device worked under different lighting conditions.

“Fabricating precise diffractive layers was no easy task either and meant developing the necessary 3D printing techniques to create highly precise structures that can perform the required optical transformations,” Ozcan says.

The technique, which is detailed in Science Advances, could have several applications. Being able to transmit sensitive information securely without drawing attention could be useful for espionage or defence, Ozcan suggests. The security of the technique and its suitability for image transmission might also improve patient privacy by making it easier to safely transmit medical images that only authorized personnel can access. A third application would be to use the technique to improve the robustness and security of data transmitted over optical networks, including free-space optical communications. A final application lies in consumer electronics. “Our device could potentially be integrated into smartphones and cameras to protect users’ visual data from unauthorized access,” Ozcan says.

The researchers demonstrated that their system works for terahertz frequencies of light. They now aim to expand its capabilities so that it can work with different wavelengths of light, including visible and infrared, which would broaden the scope of its applications. “Another area [for improvement] is in miniaturization to further reduce the size of the diffractive optical elements to make the technology more compact and scalable for commercial applications,” Ozcan says.

Sliding ferroelectrics offer fast, fatigue-free switching

Three years ago, researchers from institutions in the US and Israel discovered a new type of ferroelectricity in a material called boron nitride (BN). The team called this new mechanism “slidetronics” because the change in the material’s electrical properties occurs when adjacent atomically-thin layers of the material slide across each other.

Two independent teams have now made further contributions to the slidetronics field. In the first, members of the original US-Israel group fabricated ferroelectric devices from BN that can operate at room temperature and function at gigahertz frequencies. Crucially, they found that the material can endure many “on-off” switching cycles without losing its ferroelectric properties – an important property for a future non-volatile computer memory. Meanwhile, a second team based in China found that a different sliding ferroelectric material, bilayer molybdenum disulphide (MoS2), is also robust against this type of fatigue.

The term “ferroelectricity” refers to a material’s ability to change its electrical properties in response to an applied electric field. It was discovered over 100 years ago in certain naturally-occurring crystals and is now exploited in a range of technologies, including digital information storage, sensing, optoelectronics and neuromorphic computing.

Being able to switch a material’s electrical polarization over small areas, or domains, is a key part of modern computational technologies that store and retrieve large volumes of information. Indeed, the dimensions of individually polarizable domains (that is, regions with a fixed polarization) within the silicon-based devices commonly used for information storage have fallen sharply in recent years, from roughly 100 nm to mere atoms across. The problem is that as the number of polarization switching cycles increases, an effect known as fatigue occurs in these conventional ferroelectric materials. This fatigue degrades the performance of devices and can even cause them to fail, limiting the technology’s applications.

Alternatives to silicon

To overcome this problem, researchers have been studying the possibility of replacing silicon with two-dimensional materials such as hexagonal boron nitride (h-BN) and transition metal dichalcogenides (TMDs). These materials are made up of stacked layers held together by weak van der Waals interactions, and they can be as little as one atom thick, yet they remain crystalline, with a well-defined lattice and symmetry.

In one of the new works, researchers led by Kenji Yasuda of the School of Applied and Engineering Physics at Cornell University made a ferroelectric field-effect transistor (FeFET) based on sliding ferroelectricity in BN. They did this by sandwiching a monolayer of graphene between top and bottom layers of bulk BN, which behaves like a dielectric rather than a ferroelectric. They then inserted a parallel layer of stacked bilayer BN – the sliding ferroelectric – into this structure.

Yasuda and colleagues measured the endurance of ferroelectric switching in their device by repeatedly applying 100-nanosecond-long 3 V pulses for up to 10⁴ switching cycles. They then applied another square-shaped pulse with the same duration and a frequency of up to 10⁷ Hz and measured the graphene’s resistance to show that the device’s ferroelectric performance did not degrade. They found that the devices remained robust after more than 10¹¹ switching cycles.

Immobile charged defects

Meanwhile, a team led by Fucai Liu of the University of Electronic Science and Technology of China, in collaboration with colleagues at the Ningbo Institute of Materials Technology and Engineering (NIMTE) of the Chinese Academy of Sciences, Fudan University and Xi Chang University, demonstrated a second fatigue-free ferroelectric system. Their device was based on sliding ferroelectricity in bilayer 3R-MoS2, grown by chemical vapour transport and sandwiched between two BN layers. When the researchers applied pulsed voltages with durations between 1 ms and 100 ms to the device, they measured a switching speed of 53 ns. They also found that it retains its ferroelectric properties even after 10⁶ switching cycles of different pulse durations.

Based on theoretical calculations, Liu and colleagues showed that the material’s fatigue-free properties stem from immobile charged defects known as sulphur vacancies. In conventional ferroelectrics, these defects can migrate along the direction of the applied electrical field.

Reporting their work in Science, they argue that “it is reasonable to assume that fatigue-free is an intrinsic property of sliding ferroelectricity” and that the effect is an “innovative” solution to the problem of performance degradation in conventional ferroelectrics.

For their part, Yasuda and colleagues, whose work also appears in Science, are now exploring ways of synthesizing their material on a larger, wafer scale for practical applications. “Although we have shown that our device is promising for applications, we have only demonstrated the performance of a single device until now,” Yasuda tells Physics World. “In our current method, it takes many days of work to make just a single device. It is thus of critical importance to develop a scalable synthesis method.”

Classical models of gravitational field show flaws close to the Earth

If the Earth were a perfect sphere or ellipsoid, modelling its gravitational field would be easy. But it isn’t, so geoscientists instead use an approximate model based on a so-called Brillouin sphere. This is the smallest geocentric sphere that the entire planet fits inside, and it touches the Earth at a single point: the summit of Mount Chimborazo in Ecuador, near the crest of the planet’s equatorial bulge.

For points outside this Brillouin sphere, traditional methods based on spherical harmonic (SH) expansions produce a good approximation of the real Earth’s gravitational field. But for points inside it – that is, for everywhere on or near the Earth’s surface below the peak of Mount Chimborazo – these same SH expansions generate erroneous predictions.
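
For reference, the external gravitational potential in such models is written as a truncated SH series of the standard textbook form (this is general background, not a formula quoted from the paper):

V(r,\theta,\lambda) = \frac{GM}{r}\left[1 + \sum_{n=2}^{N}\sum_{m=0}^{n}\left(\frac{R}{r}\right)^{n}\bar{P}_{nm}(\cos\theta)\left(\bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda\right)\right]

where R is the reference radius of the expansion, θ and λ are colatitude and longitude, the P̄_nm are normalized associated Legendre functions and the C̄_nm, S̄_nm are the model coefficients. The series is only guaranteed to converge for radii r larger than the Brillouin-sphere radius; the question addressed here is what happens when it is nonetheless evaluated at points below that radius.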

A team of physicists and mathematicians from Ohio State University and the University of Connecticut in the US has now probed the difference between the model’s predictions and the actual field. Led by Ohio State geophysicist Michael Bevis, the team showed that the SH expansion equations diverge below the Brillouin sphere, leading to errors. They also quantified the scale of these errors.

Divergence is a genuine problem

Bevis explains that the initial motivation for the study was to demonstrate through explicit examples that a mathematical theory proposed by Ohio State’s Ovidiu Costin and colleagues in 2022 was correct. This landmark paper was the first to show that SH expansions of the gravitational potential always diverge below the Brillouin sphere, but “at the time, many geodesists and geophysicists found the paper somewhat abstract”, Bevis observes. “We wanted to convince the physics community that divergence is a genuine problem, not just a formal mathematical result. We also wanted to show how this intrinsic divergence produces model prediction errors.”

In the new study, the researchers demonstrated that divergence-driven prediction error increases exponentially with depth beneath the Brillouin sphere. “Furthermore, at a given point in free space beneath the sphere, we found that prediction error decreases as truncation degree N increases towards its optimal value, N_opt,” explains Bevis. Beyond this point, however, “further increasing N will cause the predictions of the model to degrade [and] when N ≫ N_opt, prediction error will grow exponentially with increasing N.”

The most important practical consequence of the analysis, he tells Physics World, was that it meant they could quantify the effect of this mathematical result on the prediction accuracy of any gravitational model formed from a so-called truncated SH expansion – or SH polynomial – anywhere on or near the surface of the Earth.

Synthetic planetary models

The researchers obtained this result by taking a classic theory developed by Robert Werner of the University of Texas at Austin in 1994 and using it to write code that simulates the gravitational field created by a polyhedron of constant density. “This code uses arbitrary precision arithmetic,” explains Bevis, “so it can compute the gravitational potential and gravitational acceleration g anywhere exterior to a synthetic planet composed of hundreds or thousands of faces with a triangular shape.

“The analysis is precise to many hundreds of significant digits, both above and below the Brillouin sphere, which allowed us to test and validate the asymptotic expression derived by Costin et al. for the upper limit on SH model prediction error beneath the Brillouin sphere.”

The new work, which is described in Reports on Progress in Physics, shows that traditional SH models of the gravitational field are fundamentally flawed when they are applied anywhere near the surface of the planet. This is because they are attempting to represent a definite physical quantity with a series that is actually locally diverging. “Our calculations emphasize the importance of finding a new approach to representing the external gravitational field beneath the Brillouin sphere,” says Bevis. “Such an approach will have to avoid directly evaluating SH polynomials.”

Ultimately, generalizations of the new g simulator will help researchers formulate and validate the next generation of global gravity models, he adds. This has important implications for inertial navigation and perhaps even the astrophysics of exoplanets.

The team is now working to improve the accuracy of its gravity simulator so that it can better model planets with variable internal density and more complex topography. They are also examining analytical alternatives to using SH polynomials to model the gravitational field beneath the Brillouin sphere.

Hawaiian volcano erupted ‘like a stomp rocket’

A series of eruptions at the Hawaiian volcano Kilauea in 2018 may have been driven by a hitherto undescribed mechanism that resembles the “stomp-rocket” toys popular in science demonstrations. While these eruptions are the first in which scientists have identified such a mechanism, researchers at the University of Oregon, US, and the US Geological Survey say it may also occur in other so-called caldera collapse eruptions.

Volcanic eruptions usually fall into one of two main categories. The first is magmatic eruptions, which (as their name implies) are driven by rising magma. The second is phreatic eruptions, which are prompted by ground water flash-evaporating into steam. But the sequence of 12 closely-timed Kilauea eruptions didn’t match either of these categories. According to geophysicist Joshua Crozier, who led a recent study of the eruptions, these eruptions instead appear to have been triggered by a collapse of Kilauea’s subsurface magma reservoir, which contained a pocket of gas and debris as well as molten rock.

When this kilometre-thick chunk of rock dropped, Crozier explains, the pressure of the gas in the pocket suddenly increased. And just as stamping on the gas-filled cavity of a stomp rocket sends a little plastic rocket shooting upwards, the increase in gas pressure within Kilauea blasted plumes of rock fragments and hot gas eight kilometres into the air, leaving behind a collapsed region of crustal rock known as a caldera.
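
A rough feel for the mechanism comes from treating the collapsing rock column as a piston compressing the gas pocket. The numbers and the assumption of adiabatic ideal-gas behaviour in the sketch below are purely illustrative and are not taken from the study:

# Back-of-the-envelope stomp-rocket estimate: adiabatic compression of a trapped
# gas pocket by a dropping rock "piston". All values are made up for illustration.
gamma = 1.3       # assumed heat-capacity ratio of hot magmatic gas
h0 = 100.0        # assumed initial pocket thickness (m)
drop = 10.0       # assumed distance the overlying rock column drops (m)
p0 = 1.0e7        # assumed initial gas pressure (Pa)

# For a pocket of fixed cross-section, p * V**gamma is constant, so:
p1 = p0 * (h0 / (h0 - drop)) ** gamma
print(f"pressure rises from {p0:.1e} Pa to {p1:.1e} Pa")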

A common occurrence?

Caldera collapses are fairly common, with multiple occurrences around the world in the past few decades, Crozier says. This means the stomp-rocket mechanism might be behind other volcanic eruptions, too. Indeed, previous studies had hinted at this possibility. “Several key factors led us to speculate along the line of the stomp-rocket, one being that the material erupted during the Kilauea events was largely lithic clasts [broken bits of crustal rock or cooled lava] rather than ‘fresh’ molten magma as occurs in typical magmatic eruptions,” Crozier tells Physics World.

This lack of fresh magma might imply phreatic activity, as was invoked for previous explosive eruptions at Kilauea in 1924. However, in 2018, USGS scientists Paul Hsieh and Steve Ingebritsen used groundwater simulations to show that the rocks around Kilauea’s summit vent should have been too hot for liquid groundwater to flow in at the time the explosions occurred. Seismic, geodetic, and infrasound data also all suggested that the summit region was experiencing early stages of caldera collapse during this time.

First test of the stomp-rocket idea

The new work is based on three-dimensional simulations of how plumes containing different types of matter rise through a conduit and enter the atmosphere. Crozier and colleagues compared these simulations with seismic and infrasound data from previously-published papers, and with plume heights measured by radar. They then connected the plume simulations with seismic inversions they conducted themselves.

The resulting model shows Kilauea’s magma reservoir overlain by a pocket of accumulated high-temperature magmatic gas and lithic clasts. When the reservoir collapsed, the gas and the lithic clasts were driven up through a conduit around 600 m long to erupt particles at a rate of roughly 3000 m³/s.

As well as outlining a new mechanism that could contribute to hazards during caldera collapse eruptions, Crozier and colleagues used subsurface and atmospheric data to constrain Kilauea’s eruption mechanics in more detail than is typically possible. They were able to do this, Crozier says, because Kilauea is unusually well-monitored, being covered with instruments such as ground sensors to detect seismic activity and spectrometers to analyze the gases released.

“Our work provides a valuable opportunity to validate next-generation transient eruptive plume simulations, which could ultimately help improve both ash hazard forecasts and interpretations of the existing geologic eruption record,” says Crozier, who is now a postdoctoral researcher at Stanford University in the US. “For example, I am currently looking into the fault mechanics involved in the sequence of caldera collapse earthquakes that produced these explosions. In most tectonic settings we haven’t been able to observe complete earthquake cycles since they occur over long timescales, so caldera collapses provide valuable opportunities to understand fault mechanics.”

The study is detailed in Nature Geoscience.

Extreme impacts make metals stronger when heated

Heating metals usually makes them softer, but new micro-ballistic impact testing experiments show that if they are deformed extremely quickly during heating, they actually become harder. This unexpected discovery, made by researchers in the department of materials science and engineering at the Massachusetts Institute of Technology (MIT) in the US, could be important for developing materials for use in extreme environments.

In the new work, Christopher Schuh and his PhD student Ian Dowding used laser beams to propel tiny particles of sapphire (an extremely hard material) towards flat sheets of pure copper, titanium and gold at velocities as high as a few hundred metres per second. Using high-speed cameras to observe the particles as they hit and bounce off the sheets, they calculated the difference between the particles’ incoming and outgoing velocities. From this, they determined the amount of energy deposited into the target sample, which in turn indicates its surface strength, or hardness.
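
The energy bookkeeping behind that last step is simple kinematics: the energy deposited in the target is the kinetic energy the particle loses on rebound. A minimal sketch follows, in which the particle size and speeds are placeholder values rather than measurements from the MIT experiments:

# Energy deposited by a rebounding microparticle, inferred from its incoming and
# outgoing speeds. All numbers are placeholders, not data from the study.
import math

density_sapphire = 3980.0      # kg/m^3
radius = 5e-6                  # assumed particle radius (m)
mass = density_sapphire * (4.0 / 3.0) * math.pi * radius**3

v_in = 600.0                   # assumed incoming speed (m/s)
v_out = 250.0                  # assumed rebound speed (m/s)

energy_deposited = 0.5 * mass * (v_in**2 - v_out**2)   # energy left in target (J)
restitution = v_out / v_in                             # coefficient of restitution
print(f"deposited energy: {energy_deposited:.2e} J, restitution: {restitution:.2f}")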

Schuh and Dowding found that increasing the temperature by 157 °C boosted the strength of the copper sample by about 30%. At 177 °C, the sample’s hardness increased still further, to more than 300 MPa, making it nearly as hard as steel (304 MPa) at this temperature. This result is counterintuitive since pure copper is a soft metal at low strain rates and would normally be expected to soften further at high temperatures. The pair observed similar effects for their titanium and gold samples.

Drag strengthening

Schuh and Dowding say that the anomalous thermal strengthening effect appears to stem from the way the metals’ ordered crystal lattice deforms when the sapphire microparticles strike it. This effect is known as drag strengthening, and it occurs because higher temperatures increase the activity of phonons – vibrations of the crystal lattice – within the metal. These phonons impede the motion of defects known as dislocations, preventing them from slipping as they would at lower temperatures.

“The effect increases with increased impact speed and temperature,” Dowding explains, “so that the faster the impact, the less the dislocations are able to respond.”

Unlike in previous high-velocity impact experiments, which used centimetre-scale particles, the small particles in this study do not create significant pressure shock waves when they hit the sample. “In our work, the impacted region is smaller than around 100 micrometres,” Schuh tells Physics World. “This small scale lets us perform high-rate deformations without having a large shock wave or high-pressure conditions to contend with. The result is a clean and quantitative set of data that clearly reveals the counterintuitive ‘hotter-is-stronger’ effect.”

The behaviour the MIT researchers uncovered could come in handy when designing materials for use in extreme conditions. Possible examples include shields that protect spacecraft from fast-moving meteorites and equipment for high-speed machining processes such as sandblasting. Metal additive manufacturing processes such as cold spraying could also benefit.

The researchers, who report their work in Nature, say they would now like to explore the range of conditions and materials in which this “hotter-is-stronger” behaviour occurs.

Sun’s magnetic field may have a surprisingly shallow origin

A new mathematical model indicates that the Sun’s magnetic field originates just 20 000 miles below its surface, contradicting previous theories that point to much deeper origins. The model, developed by researchers at Northwestern University in the US and the University of Edinburgh, UK, could help explain the origins of the magnetic field, and might lead to more accurate forecasts for solar storms, which can damage electronics in space and even on the ground if they are powerful enough.

The physical processes that generate the Sun’s magnetic field – the magnetic dynamo – follow a very specific pattern. Every 11 years, a propagating region of sunspots appears at a solar latitude of around 30°, and vanishes near the equator. Around the same time, longitudinal flows of gas and plasma within the Sun, known as torsional oscillations, closely follow the motion of the sunspots.

These two phenomena might be related – they might, in other words, be different manifestations of the same underlying physical process – but researchers still do not know where they come from. Recent helioseismology measurements point to a relatively shallow origin, limited to the near-surface “shear layer” located in the outer 5-10% of the star. However, that contradicts previous theoretical explanations that rely on effects arising more than 130 000 miles below the Sun’s surface.

Magnetorotational instability at the surface

Researchers led by Geoffrey Vasil at Edinburgh may have found a resolution to this conflict. According to their model, the Sun’s magnetic field does indeed stem from a near-surface effect: a fluid-dynamic instability known as the magnetorotational instability.

This is promising, Vasil notes, because such instabilities also occur in astrophysical systems such as black holes and young planetary systems, and we have understood them in that context since the 1950s thanks to pioneering work by the Nobel Prize-winning physicist Subrahmanyan Chandrasekhar. More exciting still, he tells Physics World, is that the new model better matches observations of the Sun, successfully reproducing physical properties seen in sub-surface torsional oscillations and magnetic field amplitudes. A final advantage is that unlike theories that invoke deeper effects, the new model describes how sunspots follow the Sun’s magnetic activity.

Several difficulties with current theories

Vasil says he first stumbled across the idea that near-surface instability could be responsible while he was a PhD student at the University of Colorado in the US. “I remember the ‘huh, that’s funny’ insight while flipping through an astrophysics textbook,” he recalls. The previous leading hypothesis, he explains, held that the Sun’s magnetic field originated at the bottom of a 130 000-mile-deep “ocean”. Two things happen down in this so-called tachocline region: “The first is that the rolling, overturning turbulence of gas and plasma stops and gives way to a calmer interior,” he says. “There is also a jump in the solar windspeed that can ‘stretch’ magnetic fields.”

While these ideas hold some appeal, they suffer from several difficulties, he adds. For one, even if the magnetic field did originate deep inside the Sun, it would still have to get to the surface. Calculations show that this would not be easy.

“Overall, it makes a lot of sense if things happen near the surface and don’t have to go anywhere,” he says. “While that’s not the only reason supporting our surface-origin hypothesis, it is a big part of it.”

A better understanding of sunspot formation

If magnetic fields do originate right below the surface, they ought to be easy to measure. Such measurements could, in turn, lead to a better understanding of sunspot formation and improved warnings of sunspot eruptions – which would help us protect sensitive electronics.

The researchers need much more data to continue with their investigations. “Our current work mostly concerns the shallow region near the Sun’s equator, but we know for sure that the polar regions are also extremely important, including deeper down from the poles,” says Vasil. “The difference is that we don’t have any specific physical hypotheses of what might be happening in these zones.

“We hope to obtain such data from planned satellite missions (from both NASA and the European Space Agency, ESA) to observe the solar poles in much more detail. Unfortunately, these projects have recently been put on hold, but I hope that our work will encourage others to pursue these again.”

For now, the researchers plan to concentrate on building open-source tools to help analyse the wealth of data they already have. Their present study is detailed in Nature.

Nanostring sensor loses ‘almost no energy’ while vibrating

A new “nanostring” has the highest quality factor ever recorded for a room-temperature mechanical resonator, vibrating for unprecedented periods of time while dissipating hardly any energy. The device, which measures centimetres in length but just nanometres in diameter, could be used to detect ultra-small forces such as gravity.

Nanomechanical resonators are tiny vibrating beams that oscillate at very high resonant frequencies – often in the megahertz or gigahertz range. They are employed in wireless communication for signal processing, and in basic research for detecting and determining the mass of tiny objects such as single DNA molecules or viruses. The latter application works on the principle that whenever a small particle is absorbed onto the beam, the frequency at which the beam vibrates will change in a way that can be monitored and used to calculate the particle’s mass.
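
The underlying relation is the standard one for resonant mass sensing: for a small added mass, Δf/f ≈ −Δm/(2 m_eff), where m_eff is the resonator’s effective mass. The short sketch below inverts this relation; the numbers are placeholders rather than values for any particular device:

# Standard resonant mass-sensing estimate: delta_m ~= -2 * m_eff * delta_f / f0.
# All numbers are placeholders for illustration.
m_eff = 1e-15        # assumed effective resonator mass (kg), i.e. 1 picogram
f0 = 10e6            # assumed bare resonant frequency (Hz)
f_loaded = 9.999e6   # assumed frequency after a particle lands on the beam (Hz)

delta_f = f_loaded - f0
delta_m = -2.0 * m_eff * delta_f / f0
print(f"adsorbed mass ~ {delta_m:.2e} kg")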

Long, thin resonators are more sensitive than resonators with a lower aspect ratio, but they are hard to fabricate. In the latest work, a team led by Richard Norte of TU Delft in the Netherlands, together with Miguel Bessa of Brown University, US, overcame this challenge by using machine learning to design the resonator and advanced nanofabrication processes to make it. The resulting “nanostrings” are three centimetres in length but just 70 nm thick – “equivalent to suspending a freely-standing 1 mm thick guitar string made from a ceramic material over half a kilometre with almost no sag,” Norte says. “Such a structure would be impossible to produce at our everyday macroscales.”

The new vibrating sensor can register some of the smallest forces in science, at levels of sensitivity that have only previously been possible at temperatures near absolute zero, Norte adds. This sensitivity stems from the device’s extremely high quality factor (Qm), which at kilohertz frequencies can be up to 10 billion – meaning that the nanostring can vibrate 100 000 times per second while losing very little energy.

Unprecedented levels of sensitivity

To make the sensor, the researchers chose silicon nitride (Si3N4), a high-stress material commonly used in resonators. An algorithm known as multi-fidelity Bayesian optimization helped them find a good design quickly and efficiently, having first specified that the algorithm should consider devices made from a slab of Si3N4 tens of nanometres thick, freely suspended over a length of several centimetres and placed on a silicon microchip.

The algorithm suggested a resonator with a length of 3 cm and aspect ratios greater than 4.3 × 10⁵. To make a device according to this exacting specification, the researchers deposited the Si3N4 on 2-mm silicon wafers manufactured with low-pressure chemical vapour deposition (LPCVD). They then used electron beam lithography or photolithography to pattern a “scaffolding” layer that they subsequently removed using chemical etching. This last step produces a string that has not been subjected to any additional forces during manufacturing, which could otherwise lead it to collapse or fracture, Norte says.

Record-breaking quality factor

To characterize the device, the team set it vibrating with piezoelectric stages and used an optical interferometer to measure the time it took for the vibrations to stop. These “ringdown” measurements provide information about how fast the resonator’s amplitude decays, and thus the rate at which it dissipates energy – values that are then used to calculate Qm. For a 3-cm-long Si3N4 string, they achieved a Qm exceeding 6.5 × 10⁹ at room temperature, which is the highest value ever recorded for a mechanically clamped resonator of this kind.
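
In practice, a ringdown analysis of this kind amounts to fitting an exponential envelope A(t) = A₀ exp(−t/τ) to the decaying vibration amplitude and converting the 1/e time τ into a quality factor via Q = π f τ. The sketch below does this on synthetic data; it is a generic illustration, not the Delft group’s analysis code:

# Generic ringdown analysis: fit the amplitude envelope A(t) = A0 * exp(-t / tau)
# and convert the decay time to a quality factor, Q = pi * f * tau.
# The trace below is synthetic, not data from the experiment.
import numpy as np
from scipy.optimize import curve_fit

f_res = 100e3                       # assumed resonant frequency (Hz)
q_true = 1e9                        # Q used to generate the fake trace
tau_true = q_true / (np.pi * f_res)

t = np.linspace(0.0, 3.0 * tau_true, 500)
envelope = np.exp(-t / tau_true) * (1.0 + 0.01 * np.random.randn(t.size))

def decay(t, a0, tau):
    return a0 * np.exp(-t / tau)

popt, _ = curve_fit(decay, t, envelope, p0=(1.0, tau_true / 2.0))
q_fit = np.pi * f_res * popt[1]
print(f"fitted quality factor: {q_fit:.2e}")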

Writing in Nature Communications, the researchers report that almost no energy is lost to the exterior of the microchip-based resonator. This is because vibrations get trapped in the middle of the string, they say. “It also means that noise from our hot, everyday environment cannot enter the centre of the string either, so shielding it and allowing it to sense even the smallest forces,” Norte explains. “It is somewhat like a swing that, once pushed, keeps swinging for almost 100 years because it loses almost no energy through the ropes.”

The researchers would now like to make more complex structures such as membranes or drumheads. They are also studying ways of using high-aspect-ratio nanotechnology to make ultra-thin lenses and mirrors. “These have applications in imaging, sensing and even ambitious space missions like Breakthrough Starshot that aim to send reflective sails into interstellar space,” says Norte. “We think this is really just the beginning of a new playground that mixes nanotechnology and machine learning.”

Optical tweezers think big

Trapped: Researchers developed contour-tracking optical tweezers that can trap large and irregularly shaped particles such as the ones pictured. The blue dots show illumination points while the red dots depict the contours extracted by the new method. (Courtesy: Takahashi-Michihata Lab, The University of Tokyo)

Optical tweezers – already a mainstay of biological research for their ability to hold and move nano-sized objects – can now trap larger items such as cell clusters, bacteria, plankton and microplastics thanks to a new technique developed at the University of Tokyo in Japan. Known as contour-tracking optical tweezers (CTOTs), the new method produces stable traps for irregularly shaped particles bigger than 0.1 mm – something that was challenging to do using conventional optical tweezers. According to team leader Satoru Takahashi, the technique could expand the technology’s applications to include environmental research as well as biology.

“This new capability enables the observation and analysis of these different types of samples with precise manipulation, contributing to a deeper understanding of their behaviours in various settings, crucial for advances in biology and environmental science,” Takahashi says.

Powerful tools for biological research but limited

Optical tweezers were invented by the American physicist Arthur Ashkin, who received a share of the 2018 Nobel Prize for Physics for his work. These devices use a highly focused laser beam to generate forces that hold and move micron- or nano-sized objects near the beam’s focus, and they have become powerful tools for biological research.

Standard optical tweezers come up short, however, for particles bigger than 10 μm. This is because the optical forces available cannot create a big enough gradient to trap and manipulate such large objects in three dimensions. Another weakness is that tweezers work best for symmetrical shapes like spheres and rods. In this case, the reason is that the forces light exerts on irregularly shaped objects are unbalanced due to complex interactions between the light and the particle, Takahashi explains. This imbalance tends to make the object rotate uncontrollably or move out of the laser focus spot altogether.

Determining the contour of the target particle

In CTOT, the incident light hits the edge of the particle. Even if the particle has, overall, an irregular shape, its shape in the illuminated region can be locally approximated as a curved surface. “Our system determines the contour of the target particle from microscope images and then scans the laser focal point along this contour in real time, balancing the optical forces around its irregular shape,” Takahashi tells Physics World. “It also automatically adjusts the size of the scanning light patterns to fit the target’s size, allowing it to be applied to particles bigger than 0.1 mm.”
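
The contour-extraction step can be prototyped with standard image-processing tools. The sketch below, using OpenCV, is a generic illustration of the idea rather than the Tokyo group’s implementation; the file name and the number of scan points are assumptions:

# Generic illustration of contour extraction for a contour-tracking optical trap.
# Not the authors' code: the input image and scan-point count are assumptions.
import cv2
import numpy as np

frame = cv2.imread("particle_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# findContours returns (contours, hierarchy) in OpenCV 4; [-2] also works on 3.x.
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
particle = max(contours, key=cv2.contourArea)       # largest object in the frame
particle = particle.reshape(-1, 2).astype(float)    # (x, y) points along the edge

n_scan = 64                                          # assumed number of scan points
idx = np.linspace(0, len(particle) - 1, n_scan).astype(int)
scan_points = particle[idx]                          # targets for the laser focus
print(f"scanning {len(scan_points)} points around the particle contour")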

The researchers tested their technique on irregularly shaped polystyrene microparticles, which they collected by polishing a polystyrene spoon with a rasp. CTOTs do not require any prior information about a particle’s shape, nor does the particle need to be illuminated by laser light from two sides, as is the case for conventional methods for larger particles. This makes the technique easier to implement.

The new optical tweezers could be used with living organisms such as plankton and cultured biological cells as well as environmental samples, adds Ryohei Omine, who did the bulk of the work on the study. For example, Omine suggests that analysing the behaviour of microplastics could inform more effective measures to mitigate pollution, thereby improving human health and aiding environmental conservation.

The CTOTs technique is detailed in Optics Letters.

Nuclear physicists tame radius calculation problem

Nucleon numbers: The research team found a new way of calculating the size of atomic nuclei such as the ones represented on this grid. (Courtesy: Serdar Elhatisari)

A new way of calculating the size of atomic nuclei has helped solve a longstanding problem in nuclear theory. Previously, all so-called ab initio approaches to this problem “under-predicted” the sizes of nuclear radii, but the new method produces answers in line with experimental results for the radii of elements with atomic numbers from 2 to 58. Among other possibilities, the improved method should enable astrophysicists to make more precise calculations of how stars convert helium into heavier elements via nuclear fusion, which the researchers who developed the method describe as a “holy grail” of nuclear astrophysics.

To study atomic nuclei, physicists often use ab initio calculations. These calculations begin with the nucleons – neutrons and protons – that make up the nucleus, and incorporate the strong force, which is one of the four known fundamental forces. The strong force is responsible for binding protons and neutrons together, and it is also responsible for “gluing” together the quarks that make up protons, neutrons and other baryons. At very short distances, the strong force is attractive and much stronger than the electromagnetic force, which works to push protons and other like-charged particles apart.

While ab initio calculations are excellent at describing the properties of atomic nuclei and how their structure affects their interactions when the number of nucleons is small, they fail when the number of nucleons gets too high or when the nucleons’ interactions become too complex. In particular, a class of ab initio calculations known as quantum Monte Carlo simulations, which use stochastic (random) processes to calculate desired quantities, suffers from something called the sign problem. This problem appears when positive and negative statistical weights of a certain configuration of components start to cancel each other out. The result is a huge increase in statistical errors that severely limits the size of the systems physicists can study.

Simple method

Researchers at the University of Bonn, Germany, together with an international team of collaborators at other institutions in Germany, the US, Korea, China, France, Georgia and Turkey, have now solved this problem using an approach called wavefunction matching. “The method is simple,” says Ulf-G Meißner, who co-led the team together with Serdar Elhatisari. “Below some radius R, we substitute the wavefunction of a complex interaction [with] one that is simpler (and free from ‘sign oscillations’), assuming that such a simple interaction does exist.”

This transformation is done in a way that preserves all the important properties of the original, more realistic interaction, he adds. Any errors introduced into the new wavefunction can be dealt with using a standard method known as perturbation theory.
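
Schematically – and this is a simplified paraphrase of Meißner’s description rather than the precise construction in the paper – the matched wavefunction can be pictured as

\psi_{\mathrm{matched}}(r) =
\begin{cases}
c\,\psi_{\mathrm{simple}}(r), & r < R,\\
\psi_{\mathrm{realistic}}(r), & r \ge R,
\end{cases}

with the constant c chosen so that the two pieces join smoothly at r = R, and with the small difference between the matched and original interactions handled perturbatively, as described above.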

The researchers applied their new technique to quantum Monte Carlo simulations for light- and medium-mass nuclei, neutron matter and nuclear matter and found that they could predict the nuclear radii of elements with atomic numbers ranging from 2 to 58 (hydrogen, with atomic number 1, is a special case). Their results agree with experimental measurements in the existing literature.

The new approach will allow physicists to make precision calculations in nuclear structure and dynamics, Meißner says. “One much sought after issue is in determining the structure and the precise locations of the neutron and proton drip lines (the so-called edges of stability which describe the maximum number of nucleons an isotope of each element can contain),” he explains. “Another is in reaction theory for the calculation of radiative alpha-capture on ¹²C at the ‘Gamow peak’ (astrophysical energies), which is the ‘holy grail’ of nuclear astrophysics.”

The researchers now plan to test their framework on structure and reaction calculations. “We will eventually refine the values of the three-nucleon forces in our calculations if disagreements appear,” Meißner tells Physics World.

The team reports its work in Nature.

MRI technique detects light-emitting molecules deep inside the brain

A new magnetic resonance imaging (MRI) technique maps the location of cells labelled with light-emitting molecules even when they are located deep within organs and other tissues. The technique, which works by detecting changes in blood vessels triggered by the presence of bioluminescent proteins, overcomes a major limitation of optical imaging. It could find use in biomedical applications such as probing tumour growth, measuring changes in gene expression and studying brain cell function.

Biologists often use light-emitting proteins to label cells, as it enables them to follow processes such as cell signalling, metabolism and many other cellular functions by tracking where these proteins go. However, while these bioluminescent proteins work well as indicators within cells, they are not as good for imaging structures deep in tissues and organs because these objects absorb and scatter visible light too much.

Locating the source of light emission

Biological engineer Alan Jasanoff and colleagues in the Department of Biological Engineering at the Massachusetts Institute of Technology (MIT) in the US have now developed a new way to detect bioluminescence. Their method begins with genetically engineering blood vessels to carry a photosensitive protein – in this case, an enzyme known as Beggiatoa photoactivated adenylate cyclase (bPAC).

When the engineered blood vessels are illuminated with light, the protein within them makes them dilate. This has the knock-on effect of altering the balance of oxygenated and deoxygenated haemoglobin within the vessels. Because these forms of haemoglobin have different magnetic properties, the shift between them can be detected using MRI. This enables the researchers to locate where light emissions are happening with high precision.

Jasanoff and colleagues tested their technique on the blood vessels in rat brains. “Blood vessels form a network in the brain that is extremely dense. Every cell in the brain is within a couple dozen microns of a blood vessel,” Jasanoff explains. “Our technique, which we have dubbed bioluminescence imaging using haemodynamics, or BLUsH, works by essentially turning the vasculature of the brain itself into a three-dimensional camera.”

Blood vessels become light amplifiers

Each blood vessel is like a pixel, he adds, responding to nearby sources of light in the tissue. “Since vascular changes are readily detectable by noninvasive readouts like MRI, BLUsH enables us to perform optical imaging through tissue that wouldn’t normally be easily accessible with optical techniques.”

The most difficult part of getting the technique to work, he tells Physics World, is getting the genetic modification of blood vessels properly targeted. “We are working on simplifying this,” he says.

By making luminescent proteins detectable via MRI or other noninvasive imaging techniques, BLUsH could help scientists study how cellular level processes lead to emergent phenomena such as brain-wide activity dynamics. It could also be useful for discovery-oriented science and clinical research in animal models. For example, the researchers suggest that studies of how gene expression changes during embryonic development and cell differentiation, or when new memories form, might benefit. Luminescent proteins could even help map anatomical connections between cells, revealing how cells communicate with each other.

For its part, the MIT team hopes to use BLUsH to study brain plasticity – the process that underlies learning and memory. “We also want to use this technique to read out measures of neural signalling,” Jasanoff says.

The work is detailed in Nature Biomedical Engineering.

‘Cavendish-like’ experiment could reveal gravity’s quantum nature

Cavendish-like: A schematic diagram of the proposed experiment on gravitational interaction between two torsion balances. Two torsion pendula are placed with their equilibrium orientations (dashed lines) in parallel and allowed to interact through gravity. An electromagnetic shield is placed between the two pendula to suppress electromagnetic interactions. The rotational degrees of freedom of each pendulum are monitored through their coupling to two cavity fields (red lines). (Courtesy: Ludovico Lami, Julen S Pedernales and Martin B Plenio, Phys. Rev. X 14 021022, https://doi.org/10.1103/PhysRevX.14.021022)

Mathematical physicists in the Netherlands and Germany have proposed a new “Cavendish-like” gravitation experiment that could offer an alternative means of determining whether gravity is a classical or quantum phenomenon. If built, the experiment might bring us closer to understanding whether the theory of gravity can be reconciled with quantum-mechanical descriptions of the other fundamental forces – a long sought-after goal in physics.

Gravity is one of the four known fundamental forces in nature. It is different from the others – the electromagnetic force and the weak and strong nuclear forces – because it describes a curvature in space-time rather than interactions between objects. This may be why we still do not understand whether it is classical (as Albert Einstein described it in his general theory of relativity) or governed by the laws of quantum mechanics and therefore unable to be fully described by a local classical field.

Many experiments that aim to resolve this long-standing mystery rely on creating quantum entanglement between two macroscopic objects placed a certain distance from each other. Entanglement is a phenomenon whereby the information contained in an ensemble of particles is encoded in correlations among them, and it is an essential feature of quantum mechanics – one that clearly distinguishes the quantum from the classical world.

The hypothesis, therefore, is that if two massive objects – each prepared in a spatially delocalized quantum state – can become entangled through their gravitational interaction, then gravity must be quantum.

Revealing gravity’s quantum nature without generating entanglement

The problem is that it is extremely difficult to make large objects behave as quantum particles. In fact, the bigger they get, the more likely they are to lose their quantum-ness and resort to behaving like classical objects.

Ludovico Lami of the University of Amsterdam, together with Martin Plenio and Julen Pedernales of the University of Ulm, have now thought up a new experiment that would reveal gravity’s quantum nature without having to generate entanglement. Their proposal – which is so far only a thought experiment – involves studying the correlations between two torsion pendula placed close to each other as they rotate back and forth with respect to each other, acting as massive harmonic oscillators (see figure).

This set-up is very similar to the one that Henry Cavendish employed in 1797 to measure the strength of the gravitational force, but its purpose is different. The idea, the team say, would be to uncover correlations generated by the whole gravity-driven dynamical process and show that they are not reproducible if one assumes the type of dynamics implied by a local, classical version of gravity. “In quantum information, we call this type of dynamics an ‘LOCC’ (from ‘local operations and classical communication’),” Lami says.

In their work, Lami continues, he and his colleagues “design and prove mathematically some ‘LOCC inequalities’ whose violation, if certified by an experiment, can falsify all LOCC models. It turns out that you can use them to rule out LOCC models also in cases where no entanglement is physically generated.”

An alternative pathway

The researchers, who detail their study in Physical Review X, say they decided to look into this problem because traditional experiments have well-known bottlenecks that are difficult to overcome. Most notably, they require the preparation of large delocalized states.

The new experiment, Lami says, is an alternative way of realizing experiments that can definitively indicate whether gravity is ultimately fully classical, as Einstein taught us, or somehow non-classical – and hence most likely quantum. “While we don’t claim that our method is completely and utterly better than the others, it is quite different and, depending on the experimental platform, may prove easier to practically set up,” he tells Physics World.

Lami, Plenio and Pedernales are now working to bring their analyses closer to real-world experiments by taking into account other interactions besides gravity. While doing so will complicate the picture and make their analyses more involved, they recognize that it will eventually be necessary for building a “bulletproof” experiment.

Plenio adds that the approach they are taking could also reveal other finer details about the nature of gravity. “In our work we describe how to decide whether gravity can be mimicked by local operations and classical communications or not,” he says. “There might be other models, however – for example, where gravity follows dynamics that do not obey LOCC, but still do not have to create entanglement either. This type of dynamics is called ‘separability preserving’. In principle we can also solve our equations for these.”

Early Earth’s magnetic field strength was similar to today’s

Ancient organisms preserved in the Earth’s oldest fossils may have experienced a planetary magnetic field similar to the one we observe today. This finding, from a team of researchers at the University of Oxford, UK, and the Massachusetts Institute of Technology in the US, suggests that the planet’s magnetic field was relatively strong 3.7 billion years ago – a fact with important consequences for early microbial Earthlings.

“Our finding is interesting because the Sun was generating a much more intense solar wind in the Earth’s early history,” explains team leader Claire Nichols of Oxford’s Department of Earth Sciences. “This means that the same strength of magnetic field would have provided far less shielding (because the protective ‘bubble’ around Earth provided by the magnetosphere would have been much smaller) for life emerging at that time.”

Without the magnetosphere, which protects us from cosmic radiation as well as the solar wind, many scientists think that life as we know it would not have been possible. Until now, however, researchers weren’t sure when it first appeared or how strong it was.

Unique rock samples

In the new work, Nichols and colleagues analysed rocks from the northern edge of the Isua Supracrustal Belt in southwest Greenland. Billions of years ago, as these rocks were heated, crystals of magnetite formed, and iron oxide particles within them recorded the strength and direction of the ambient magnetic field.

While similar processes happened in many places and at many times during Earth’s history, the rocks in the northernmost part of Isua are extremely unusual. This is because their location atop a thick continental crust prevented their magnetic information from being “erased” by later geological activity.

Indeed, according to the researchers, this band of land experienced only three significant thermal events in its history. The first and hottest occurred 3.7 billion years ago and heated the rocks up to 550 °C. The two subsequent heating events were less intense, and because they did not heat the rocks to more than 400 °C, the 3.7-billion-year-old record of Earth’s magnetic field remains as it was after the first event locked it in.

Recovering a vector of magnetization

Collecting samples from Isua was challenging, Nichols says, because the sample site is so remote it can only be reached by helicopter. Once back in the laboratory, the team demagnetized the samples stepwise, either by gradually heating them or by applying increasingly strong alternating magnetic fields. “These processes allow us to slowly remove the magnetic signal from the samples,” Nichols explains, “and recover a vector of magnetization that tells us about the direction of the ancient magnetic field.”

To determine the strength of the ancient field, the researchers applied a known magnetic field and compared the vector of magnetization acquired in the lab to that recovered in the original demagnetization. They found that rocks dating from 3.7 billion years ago recorded a magnetic field strength of at least 15 microtesla (µT). To compare, Earth’s present-day magnetic field averages around 30 µT. These results constitute the oldest estimate of the Earth’s magnetic field strength ever recovered from bulk rock samples – a method that is more accurate and reliable than previous analyses of individual crystals.
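
The field-strength estimate rests on a simple proportionality: the ancient field is the laboratory field scaled by the ratio of natural remanence lost to laboratory remanence gained. A minimal sketch with placeholder numbers (not the actual Isua measurements):

# Thellier-style palaeointensity estimate: the ancient field scales with the ratio
# of natural remanent magnetization (NRM) lost to remanence gained in a known
# laboratory field. The numbers are placeholders, not the Isua data.
b_lab = 30.0          # applied laboratory field (microtesla)
nrm_lost = 0.8        # NRM removed during demagnetization (arbitrary units)
lab_gained = 1.6      # remanence acquired in the lab field (same units)

b_ancient = b_lab * (nrm_lost / lab_gained)
print(f"estimated ancient field: about {b_ancient:.0f} microtesla")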

Consequences for early life

The fact that the Earth’s magnetic field was already fairly strong 3.7 billion years ago has several implications. One is that, over time, as the solar wind decreased, life on Earth would have become progressively less likely to experience high levels of ionizing radiation. This may have allowed organisms to move onto land and leave the more protective environment of the deep oceans, Nichols says.

The results also suggest that Earth’s early magnetic dynamo was as efficient as the mechanism that generates our planet’s magnetic field today. This finding will help scientists better understand when the Earth’s inner, solid core began to form, potentially shedding light on processes such as mantle convection and plate tectonics.

Are magnetic fields a key criterion for habitability?

Perhaps the most profound implication, though, concerns the possibility of life on other planets. “Understanding the oldest record of Earth’s magnetic field is really important for figuring out whether magnetic fields are a key criteria for habitability,” Nichols tells Physics World. “I’m really interested to know why Earth appears so unique – and whether the magnetic field matters for its uniqueness.”

The technique developed in this study, which is detailed in the Journal of Geophysical Research, could be used to study other very old rocks, such as those found in Australia, Canada and South Africa. Nichols says her next big project will be to carry out similar studies on these rocks.

Stellar magnetic fields may give doomed exoplanets a temporary reprieve

The fate of so-called “hot Jupiter” exoplanets is as fiery as it is inevitable. Over billions of years, these exoplanets – which have about the same mass as the familiar giant planet Jupiter, but much smaller orbits – will gradually spiral in towards their host stars until they collide with them.

For some hot Jupiters, though, this moment of reckoning isn’t happening as quickly as theory predicts – and now astrophysicists think they know why. According to a team led by Craig Duguid of Durham University, UK, the stars’ magnetic fields may be partially dissipating the gravitational tides responsible for the “doom spirals” of orbiting hot Jupiters. This explanation, which the team obtained by analyzing models of stars with convective cores, is consistent with observations of the exoplanet WASP-12b, which is destined to crash into its host star in just a few million years.

Of the more than 5000 exoplanets discovered to date, hot Jupiters are among the most commonly detected types. This is partly due to limitations in observational techniques: because hot Jupiters are so massive, and orbit so close to their host stars, it is relatively easy to spot their effects with powerful telescopes. But their closeness is also their downfall, as it subjects both the hot Jupiters and their hosts to powerful gravitational tides. These tides transfer orbital energy from the exoplanet to the star, causing the planet’s orbit to decay until the planet is “swallowed up” by its host.

A wave-breaking hypothesis

WASP-12b, discovered in 2008 by the SuperWASP planetary transit survey, is a prime example of a doomed hot Jupiter. Thanks to its decaying orbit, it is due to collide with its host star, WASP-12, in a few million years, which is quite soon in astronomical terms.

The problem is that current theories of gravitational tides cannot fully explain WASP-12b’s orbital decay. One hypothesis that might resolve the discrepancy involves internal gravity waves (IGWs) that propagate towards the centre of the host star. Strong gravitational tides are known to excite IGWs, and recent work has shown that if IGWs reach the centre of the star, they can break in the same way as water waves break on a beach. Wave breaking is an extremely efficient source of tidal dissipation, as all the energy in the IGWs is lost to turbulence and heating. Indeed, it could provide enough dissipation to explain WASP-12b’s orbital decay – except for one thing.

“Wave breaking cannot occur if the star has a convective core – and WASP-12 does,” Duguid says, adding that it belongs to the family of F-type stars, which are 1.2 to 1.6 times heavier than our Sun and have convective cores.

This discovery left the tidal and observational community stunned, he says. “No other tidal theory could predict anything close to the amount of dissipation required for the rate of orbital decay observed – for example, my own study would suggest billions of years rather than a few million – but if the tidal theory was right, then the observations were wrong.”

A magnetic explanation

As an alternative, Duguid and colleagues focused on WASP-12’s magnetic field. Previous studies of how IGWs might convert into magnetic waves were carried out in a non-tidal context, but Duguid thinks such a conversion could happen for tidally excited IGWs, too – with important consequences. “Instead of the IGWs needing to reach the centre where they could break, they need only encounter a strong enough magnetic field that will cause them to be converted into outwardly propagating magnetic waves,” he explains. “These are then dissipated into heat.”

The mechanism is as efficient as wave breaking, he adds, since all the IGW energy is again lost to heat and turbulence. And importantly, it should still operate in stars that have convective cores – and probably won’t in stars that don’t. “The reason is that a convective core is likely to have a convectively driven dynamo able to generate a strong magnetic field,” he tells Physics World. “For WASP-12b, this mechanism can exist and hence explain the orbital decay rate while still agreeing with the observations.”

New mechanism for tidal dissipation

The Durham researchers, who detail their work in The Astrophysical Journal Letters, obtained their result with the MESA stellar evolution code, which they used to assess whether IGWs could convert into magnetic waves in other F-type stars, based on estimates of the magnetic fields produced by a convective dynamo in the stellar cores.

“As well as being consistent with the observed inspiral of WASP-12b, we found that this previously unexplored source of efficient tidal dissipation can also operate in such stars over a significant fraction of their lifetimes,” Duguid says.

More generally, it could play an important role in how the orbits of hot Jupiters – and indeed other planets with ultra-short orbital periods – evolve over time, he adds.

“One important implication is that this new mechanism can guide observers to promising targets to detect tidally-driven orbital decay. Currently, only the orbit of WASP-12b has been confirmed to be decaying, but we expect many other planets orbiting F-type stars should be, too.”

Given the novelty of this mechanism for tidal dissipation, Duguid says there is much work to be done to understand the details of the fluid dynamics behind it and find out where the dissipated tidal energy goes within the star’s interior. “It is also quite exciting that the mechanism might be observationally tested within our lifetime,” he adds.

The post Stellar magnetic fields may give doomed exoplanets a temporary reprieve appeared first on Physics World.

Micro-tornadoes help transport nutrients within egg cells

Scientists in the US have simulated the transport of nutrients through maturing egg cells such as those found in newly formed embryos. Using a simple system comprising microtubules, motors and a fluid, they showed that tornado-like vortex flows allow critical components needed for egg cell development to mix and be transported rapidly around the cell. The work advances our understanding of how egg cells nourish themselves – a key process in organism growth and development.

Maturing egg cells, or oocytes, can be much bigger than other types of cells. This is because they must contain everything required to grow into entire organisms. Their large size has a major downside, though: it makes it harder for important nutrients to reach all parts of the cell. For example, while it takes just 10 to 15 seconds for protein molecules to travel from one side of a typical human cell to the other via diffusion, in oocytes, the same process would take an entire day – far too long for the cell to be able to function properly.
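
The day-long figure follows from the quadratic scaling of diffusion: the time to cross a distance L grows roughly as L²/(2D), where D is the diffusion coefficient. The sketch below uses an assumed, representative value of D for a protein in cytoplasm (not a number from the study), so the absolute times are order-of-magnitude only.

# Rough diffusion timescales, t ~ L^2 / (2*D)
D = 5.0  # assumed protein diffusion coefficient in cytoplasm, um^2/s (illustrative)

def diffusion_time_s(distance_um, diff_coeff=D):
    """Characteristic time in seconds to diffuse a given distance."""
    return distance_um**2 / (2 * diff_coeff)

print(f"typical cell (~10 um):  ~{diffusion_time_s(10):.0f} s")
print(f"large oocyte (~500 um): ~{diffusion_time_s(500) / 3600:.0f} h")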

To compensate for their large size, oocytes have evolved special mechanisms to generate so-called “cytoplasmic streaming flows” that help nutrients and other molecules circulate more rapidly. However, the origin of these flows was not well understood.

A self-organizing system that creates global flows

In the new work, researchers led by computational biologist Michael Shelley found that they could create a self-organizing system in which materials flow around the model cell in ways that are strongly reminiscent of those observed in real fruit fly oocytes. “It is known that these flows are necessary for the proper development of the organism,” explains Shelley, who directs the Center for Computational Biology at the Flatiron Institute in New York. “They are also thought to allow for a timely delivery of components that need to be properly placed in the oocyte, and perhaps for mixing of these components.”

The fruit fly is one of modern biology’s model organisms, Shelley adds, so it is very intensely studied. However, flows like this also arise in the oocytes of other organisms, such as mice, that are closer to humans in evolutionary terms. “Our simple (though computationally difficult) model provides an explanation of the variety of flows observed not only in our study but also those in previous experiments,” he says.

Modelling the cellular contents that create the micro-tornadoes

The team, which includes researchers from Princeton University and Northwestern University, used an advanced open-source biophysics software package called SkellySim, which was developed at Flatiron, to create their model. SkellySim models the cellular structures and processes that create the micro-vortices. These include the cell’s microtubules, which are flexible filaments that line the inside of cells; molecular “motors”, which are specialized proteins that carry “payloads” involved in generating the flows; and the cytoplasm that surrounds them.

By simulating the motion of thousands of microtubules as they respond to the forces exerted by the molecular motors, the researchers determined that the flows originate from interactions between the microtubules and the fluid naturally present in the cell. As the microtubules buckle under the forces, they create movement in the fluid surrounding them. This movement can push other microtubules to bend in the same direction. When enough microtubules do this, the fluid starts to propagate in a vortex-like flow across the whole oocyte. This micro-tornado is what allows nutrients and other molecules to cross the cell in just 20 minutes instead of the 20 hours possible with diffusion alone.

Few ingredients required

“Our model shows that the system has an incredible capacity for organizing itself to create this functional flow,” Shelley says. “And you just need a few ingredients – microtubules, the geometry of the cell and molecular motors.”

The results were possible, he adds, thanks to advances in high-performance computing methods on several fronts. These include fluid-structure interactions involving the Stokes equations for fluid motion in complex domains as well as models that capture the hydrodynamics of immersed long, thin polymers (the microtubules) moving through such a fluid.

Shelley says the problem was a “beautiful” one to solve: “It sits at the boundaries of soft and active materials, experimental and developmental biology,” he tells Physics World. “Its understanding brings in tools from mathematical modelling, fluid physics, soft elastic materials and stability analysis.”

The researchers say their work, which is detailed in Nature Physics, advances our understanding of developmentally important flows and other activity-driven self-organization problems in biology. They will soon publish another study on why vortices appear to be globally attracting states and are preparing a new publication on how egg cell geometry causes them to become asymmetric. “Finally, we are looking into other fluid transport problems in developing oocytes involving ‘ring canals’,” Shelley reveals.

The post Micro-tornadoes help transport nutrients within egg cells appeared first on Physics World.

Metasurfaces make a single-shot polarization imaging system

The polarization of light scattered off an object provides a treasure trove of information. Techniques that image this polarization are often overlooked, however, because they are difficult to implement outside a laboratory setting. Researchers at Harvard University, US, have addressed this problem by developing a compact, single-shot polarization imaging system based on a technique called Mueller matrix imaging. The new system replaces many elements in traditional systems with nano-engineered metasurfaces, and its developers say it might find applications in biomedical imaging as well as fundamental research.

The colour of an object – that is, the frequency of light that scatters off it – generally depends on the colour of light incident on it. In the same way, the polarization of the light scattered off an object depends on the polarization of light that illuminates it. Mueller matrix imaging works by controlling the polarization of this incident light, and it is currently the most complete method for imaging polarization, revealing information that would be impossible to obtain using traditional techniques. It is difficult to implement in practice, however, because it requires complex apparatus that uses multiple rotating plates and polarizers to capture a sequence of images that are then combined to produce a matrix.
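
In the Mueller formalism, every pixel of a scene is described by a 4 × 4 matrix that maps the polarization state of the illumination (a Stokes vector) onto the state of the scattered light. The sketch below is a generic illustration of that bookkeeping, not code from the Harvard system: it applies the textbook Mueller matrix of an ideal linear polarizer to unpolarized light.

import numpy as np

def linear_polarizer(theta_rad):
    """Mueller matrix of an ideal linear polarizer with its axis at theta_rad."""
    c, s = np.cos(2 * theta_rad), np.sin(2 * theta_rad)
    return 0.5 * np.array([[1,     c,     s, 0],
                           [c,  c**2, c * s, 0],
                           [s, c * s,  s**2, 0],
                           [0,     0,     0, 0]])

S_in = np.array([1.0, 0.0, 0.0, 0.0])           # Stokes vector of unpolarized light [I, Q, U, V]
S_out = linear_polarizer(np.deg2rad(30)) @ S_in
print(S_out)  # half the intensity, fully polarized along 30 degrees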

A much simpler system

The new system, developed by Federico Capasso and colleagues in the electrical engineering department at Harvard’s John A Paulson School of Engineering and Applied Sciences (SEAS), is much simpler. At its heart are two metasurfaces – artificially engineered ultrathin films made from arrays of tiny dielectric structures. These structures, which behave somewhat like atoms, are termed “meta-atoms” and are separated from each other by distances smaller than the wavelengths of the incoming light.

As light passes through the two metasurfaces, its properties – such as its amplitude, phase and polarization – change. “The first surface generates what’s known as polarized structured light, in which the polarization is designed to vary spatially in a unique pattern,” explains Aun Zaidi, who was involved in the work as a PhD student in Capasso’s laboratory and is now an optical scientist and engineer at Apple. “When this polarized light reflects off or transmits through the object being illuminated, the polarization profile of the beam changes. This change is captured and analyzed by the second metasurface to construct the final image – in a single shot.”

The nanoengineered metasurfaces greatly simplify the system’s design, Zaidi adds. Indeed, the system is free of any moving parts or bulk polarization optics and will allow for applications in real-time medical imaging, materials characterization, machine vision and target detection, to name but a few. “Our new compact system might even unlock the vast potential of polarization imaging for a range of existing and new applications, including augmented and virtual reality systems, for example, for facial recognition in smartphones and eye-tracking,” he says.

Other potential applications

In the near term, the new technique, which the team describe in Nature Photonics, could prove useful in applications that require both compact and single-shot polarization imaging. “In biomedicine, these include imaging live tissue samples in microscopy, polarimetric endoscopic imaging, retinal scanning and non- or minimally-invasive imaging of cancerous tumours,” Zaidi tells Physics World.

Thanks to its superior time resolution and flexibility, the technique might also be employed to generate large datasets for training neural networks in machine learning classification applications, he adds. “Beyond these, it might also be important for fundamental science, such as in the detection of the time-varying birefringence of the vacuum in the presence of intense electric and magnetic fields (as theorized by quantum electrodynamics), for the study of three-dimensional polarization states of light, and in research on both short-wavelength (X-ray) and long-wavelength (terahertz) polarimetry,” Zaidi says. “We plan to expand our work into these exciting new directions.”

The post Metasurfaces make a single-shot polarization imaging system appeared first on Physics World.

Antiviral hydrogel stops SARS-CoV-2 in its tracks

A new hydrogel binds to spike proteins in the SARS-CoV-2 virus like “molecular Velcro”, preventing it from interacting with potential host cells and inhibiting infection. According to its US-based developers, the hydrogel forms a multi-layer “mask” that abates the action of the virus in a non-specific way – meaning that, unlike vaccines, it would not need to be updated regularly as the virus evolves. The team say the technology could be developed into a cost-effective nose spray that would help fight the spread of airborne infections.

The new hydrogel is made from chains of protein-forming amino acids called peptides. In the latest study, a team led by biomedical engineer Vivek Kumar of the New Jersey Institute of Technology (NJIT) engineered these peptides to self-assemble into a functionalized hydrogel containing nanoscale fibres known as fibrils. It is these fibrils that bind to the protein complexes within viruses – in this case, the spike proteins on SARS-CoV-2, the virus that causes COVID-19.

Targeted vs non-specific approach

The researchers began working on this project in 2020, at the start of the COVID-19 pandemic. By 2021 they had engineered the self-assembling peptides to have a “targeting domain” specific to spikes. In some sense, they say, this mechanism was analogous to the way vaccines produce antibodies by targeting specific proteins on the SARS-CoV-2 virus.

Over the next few years, as new variants emerged, the researchers optimized their peptides further. Then, while performing a control experiment in live-virus assays – one that involved the hydrogel’s self-assembling domain on its own, without the targeting domain – they observed that some peptides could interact with the charged viral protein coat or viral proteins in a non-specific way. “Interestingly, we also found a synergy (improved viral inhibition) when testing combinations of the just-self-assembling domain and the self-assembling domain plus targeting domain,” Kumar says.

Further investigation showed that the self-assembled fibrils act like “molecular Velcro”, forming a stable fibrous mesh on the virus that is also highly resilient to viral mutations, Kumar adds.

“The ability to design and optimize these assemblies and target novel receptors is truly exciting,” says team member Petr Kral of the University of Illinois in Chicago (UIC), “along with the ability to tune densities in functionalized peptides to enhance targeting.”

Simulations and safety tests in laboratory animals

The team tested the fibrils on several SARS-CoV-2 variants – first with computer simulations at UIC, and then in the laboratory on mice and rats via injections and nasal sprays. They found that the treatment inhibited the Alpha and Omicron variants of SARS-CoV-2 in vitro, while the animals exhibited no adverse effects.

While the hydrogel has not yet been tested in humans, Kumar says the early results are promising. “We think this platform could be expanded to a number of other disease-causing viruses, and could potentially rapidly, and cost-effectively, address the dearth of specific drugs/devices on the market for emerging epidemics/pandemics,” he says. He adds that the hydrogel could be “useful as a therapeutic in early stages of a disease or as a prophylactic – a gel sprayed into the nose to prevent the virus from infecting the host more seriously”.

The researchers are now seeking to understand how the fibrils interact with the spike proteins on SARS-CoV-2. In particular, they would like to know whether the infection-inhibiting mechanism is biomechanical in nature. The answer could have important implications for the platform’s versatility. “Drug-resistant pathogens mutate around biochemical modulators, but are they less likely to mutate around a mechanical spear?” Kumar asks rhetorically. “By understanding this fundamental interaction, we want to figure out how to use it against different diseases.”

The team, which also includes scientists from Georgia Tech, the Baylor School of Medicine and Rutgers University, hopes to find partners interested in developing the technology further. “We would like to extend it to other viruses, which we have shown is possible in computational simulations,” Kumar tells Physics World. “We are also exploring expanding the platform to a number of hard-to-treat bacterial and fungal pathogens that we have seen excellent efficacy against.”

The study is detailed in Nature Communications.

The post Antiviral hydrogel stops SARS-CoV-2 in its tracks appeared first on Physics World.

Ancient lull in Earth’s magnetic field may have allowed large animals to evolve

The list of conditions required for complex life to emerge on Earth is contentious, poorly understood, and one item longer than it used to be. According to an international team of geoscientists, an unusual lull in the Earth’s magnetic field nearly 600 million years ago may have triggered a rise in the planet’s oxygen levels, thereby allowing large, complex animals to evolve and thrive for the first time. Though evidence for the link is circumstantial, the team says that new measurements of magnetization in 591-million-year-old rocks suggest an important connection between processes occurring deep within the Earth and the appearance of life on its surface.

Scientists believe that the Earth’s magnetic field – including the bubble-like magnetosphere that shields us from the solar wind – stems from a dynamo effect created by molten metal moving inside the planet’s outer core. The strength of this field varies, and between 591 and 565 million years ago, it was almost 30 times weaker than it is today. Indeed, researchers think that in these years, which lie within the geological Ediacaran Period, the field dropped to its lowest-ever point.

Early in the Ediacaran, life on Earth was limited to small, soft-bodied organisms. Between 575 and 565 million years ago, however, fossil records show that lifeforms became significantly larger, more complex and mobile. While previous studies linked this change to an increase in atmospheric and oceanic oxygen levels that occurred around the same time, the cause of this increase was not known.

Weak magnetic field allowed more hydrogen gas to escape

In the latest study, which is published in Communications Earth & Environment, researchers led by geophysicist John Tarduno of the University of Rochester, US, present a hypothesis that links these two phenomena. The weak magnetic field, they argue, could have allowed more hydrogen gas to escape from the atmosphere into space, resulting in a net increase in the percentage of oxygen in the atmosphere and oceans. This, in turn, allowed larger, more oxygen-hungry animals to emerge.

To quantify the drop in Earth’s magnetic field, the team used a technique known as single-crystal paleointensity that Tarduno and colleagues invented 25 years ago and later refined. This technique enabled them to measure the magnetization of feldspar plagioclase crystals from a 591-million-year-old rock formation in Passo da Fabiana Gabbros, Brazil. These crystals are common in the Earth’s crust and contain minute magnetic inclusions that record, with high fidelity, the intensity of the Earth’s magnetic field at the time they crystallized. They also preserve accurate values of magnetic field strength for billions of years.

Photograph of a rounded, flattish fossilized organism with lines around its perimeter embedded in yellowish rock
Ancient organism: Photograph of a cast of Dickinsonia costata, an organism that lived ~560 million years ago, during the Ediacaran Period. The fossil was found at Flinders Ranges in South Australia. (Courtesy: Shuhai Xiao, Virginia Tech)

Tarduno says it was challenging to find rock formations that would have cooled slowly enough to yield a representative average for the magnetic field, and that also contained crystals suitable for paleointensity analyses. “We were able to find these in Brazil, aided by our colleagues at Universidade Federal do Rio Grande do Sul,” he says.

The team mounted the samples in quartz holders (which had negligible magnetic moments) and measured their magnetism using a DC SQUID magnetometer at Rochester. This instrument is composed of high-resolution sensing coils housed in a magnetically shielded room with an ambient field of less than 200 nT.

Geodynamo remained ultra-weak for at least 26 million years

The team compared its results to those from previous studies of 565-million-year-old anorthosite rocks from the Sept Îles Mafic Intrusive Suite in Quebec, Canada, which also contain feldspar plagioclase crystals. Together, these measurements suggest that the Earth’s geodynamo remained ultra-weak, with a time-averaged dipole moment of less than 0.8 × 10²² A m², for at least 26 million years. “This timespan allowed the oxygenation of the atmosphere and oceans to cross a threshold that in turn helped the Ediacaran radiation of animal life,” Tarduno says. “If this is true, it would represent a profound connection between the deep Earth and life. This history could also have implications for the search for life elsewhere.”
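
To put that bound in context, a dipole moment can be converted into an equatorial surface field via B = μ₀m/(4πR³). The sketch below makes that conversion for the paper’s upper bound and, for comparison, for a commonly quoted present-day moment of about 8 × 10²² A m² (the latter figure is not from this article); both conversions assume a pure, Earth-centred dipole.

MU0_OVER_4PI = 1e-7   # T·m/A
R_EARTH = 6.371e6     # Earth radius, m

def equatorial_field_uT(moment_A_m2):
    """Equatorial surface field (microtesla) of a centred dipole of given moment."""
    return MU0_OVER_4PI * moment_A_m2 / R_EARTH**3 * 1e6

print(f"Ediacaran upper bound (0.8e22 A m^2): ~{equatorial_field_uT(0.8e22):.0f} uT")
print(f"Present-day moment    (8e22 A m^2):   ~{equatorial_field_uT(8e22):.0f} uT")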

The researchers say they need additional records of geomagnetic field strength throughout the Ediacaran, plus the Cryogenian (720 to 635 million years ago) and Cambrian (538.8 to 485.4 million years ago) Periods that bookend it, to better constrain the duration of the ultralow magnetic fields. “This is crucial for understanding how much hydrogen could be lost from the atmosphere to space, ultimately resulting in the increased oxygenation,” Tarduno tells Physics World.

The post Ancient lull in Earth’s magnetic field may have allowed large animals to evolve appeared first on Physics World.

Domain walls in twisted graphene make 1D superconductors

Domain walls in graphene form strictly one-dimensional (1D) systems that can become superconducting via the so-called proximity effect. This is the finding of a team led by scientists at the University of Manchester, UK, who uncovered the behaviour by isolating individual domain walls in graphene and studying the transport of electrons within them – something that had never been done before. The discovery has applications in metrology and in some types of quantum bits (qubits), though team member Julien Barrier, who is now a postdoctoral researcher at the Institute of Photonic Sciences (ICFO) in Barcelona, Spain, suggests it might also impact other fields.

“Such strict 1D systems are extremely rare,” Barrier says, “and could serve a number of potential applications.”

The researchers made their 1D system by stacking two layers of graphene (a sheet of carbon just one atom thick) atop each other. When they misalign the layers ever so slightly (less than 0.1°) with respect to each other, the material experiences a strain that makes the atoms in its lattice rearrange themselves into micrometre-scale domains of aligned bilayer graphene.

The narrow regions at the intersection between these domains are known as domain walls, and previous work by members of the same team showed that these walls are very good at conducting electricity. The boundaries between domains were also known (thanks to work by Vladimir Fal’ko and colleagues at Manchester’s National Graphene Institute) to contain special counterpropagating electronic channels that form by hybridizing “edge states” within the conducting domain walls.

These edge states are a consequence of the quantum Hall effect, which occurs when a current passing along the length of a thin conducting sheet gives rise to an extremely precise voltage across opposite edges of the sheet. This voltage only occurs when a strong magnetic field is applied perpendicular to the sheet, and it is quantized – that is, it can only change in discrete steps.
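
Those discrete steps correspond to the Hall resistance locking onto values of h/(νe²), where h is the Planck constant, e is the elementary charge and ν is an integer (or, in the fractional regime, certain fractions). A quick illustration of the quantized values:

H = 6.62607015e-34    # Planck constant, J s
E = 1.602176634e-19   # elementary charge, C

def hall_resistance_ohm(nu):
    """Quantized Hall resistance h / (nu * e^2) in ohms."""
    return H / (nu * E**2)

for nu in (1, 2, 3, 4):
    print(f"nu = {nu}: R_xy = {hall_resistance_ohm(nu):,.1f} ohm")  # 25,812.8 ohm, 12,906.4 ohm, ...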

One important property of edge states is that electrons in them are said to be “topologically protected” because they can only travel in one direction. They also steer around imperfections or defects in the material without backscattering. Since backscattering is the main energy-dissipating process in electronic devices, such protected states could be useful components in next-generation energy-efficient devices.

Achieving superconductivity in the quantum Hall regime

Over the years, much effort has therefore gone into trying to achieve superconductivity in the quantum Hall regime, as this would mean that the transport of Cooper pairs of electrons (that is, those that travel unhindered through a material to form supercurrents) is mediated by these 1D edge states.

To identify the domain walls in their bilayer graphene sample, the Manchester team turned to a technique called near-field photocurrent imaging developed by Krishna Kumar’s group at the ICFO. This technique provided the information the scientists needed to isolate the domain walls and place them between two superconductors.

They found that doing so not only induces robust superconductivity inside the walls, it also allows the walls to carry individual electronic modes. “This is proof of the 1D nature of this system,” Barrier says.

One unexpected finding is that the superconductivity stems from the proximity of edge states in the domain walls and, in particular, from the strictly 1D states existing within the walls. This means it does not arise from quantum Hall edges on each of the domain walls, as the researchers previously thought.

New approach overcomes previous limitations

The researchers led by Barrier, Na Xin and Andre Geim began their study using a conventional approach in which counterpropagating quantum Hall edge states were brought close together, but these experiments did not produce the results they expected. In previous studies, experimental progress in this direction was limited to observing oscillatory behaviour in the normal (non-superconducting) state, or more recently, to very small supercurrents (of less than 1 nA) at ultralow temperatures of less than 10 mK, Barrier explains.

The new approach, which the team describes in Nature, overcomes these limitations. Thanks to support on the theory side from Fal’ko’s group, the researchers realized that the strictly 1D electronic states they observed in the graphene domain walls hybridize much better with superconductivity than quantum Hall edge states. This realization enabled the researchers to measure supercurrents of a few tens of nanoamps at temperatures of around 1 K.

Barrier says the new system could develop along numerous research directions. There is currently an intense interest in quasi-1D (multimode) proximity superconductivity using nanowires, quantum point contacts, quantum dots and other such structures. Indeed, electronic devices containing these structures are already on the market. The new system provides superconductivity via single-mode 1D states and could make such research redundant, he says.

“Our estimation is that electrons propagate in two opposite directions, less than a nanometre apart, without scattering,” he tells Physics World. “Such ballistic 1D systems are exceptionally rare and can be controlled by a gate voltage (like quantum point contacts) and exhibit standing waves (like superconducting quantum dots).”

The post Domain walls in twisted graphene make 1D superconductors appeared first on Physics World.

Synthetic diamonds grow in liquid metal at ambient pressure

The usual way of manufacturing synthetic diamonds involves applying huge pressures to carbon at high temperatures. Now, however, researchers at the Institute for Basic Science (IBS) in Korea have shown that while high temperatures are still a prerequisite, it is possible to make polycrystalline diamond film at standard pressures. The new technique could revolutionize diamond manufacturing, they say.

Natural diamonds form over billions of years in the Earth’s upper mantle at temperatures of between 900 and 1400 °C and pressures of 5–6 gigapascals (GPa). The manufacturing processes used to make most synthetic diamonds mimic these conditions. In the 1950s, for example, scientists at General Electric in the US developed a way to synthesize diamonds in the laboratory using molten iron sulphide at around 7 GPa and 1600 °C. Although other researchers have since refined this technique (and developed an alternative known as chemical vapour deposition for making high-quality diamonds), diamond manufacturing largely still depends on liquid metals at high pressures and temperatures (HPHT).

A team led by Rodney Ruoff has now turned this convention on its head by making a polycrystalline diamond film using liquid metal at just 1 atmosphere of pressure and 1025 °C. When Ruoff and colleagues exposed a liquid alloy of gallium, iron, silicon and nickel to a mix of methane and hydrogen, they observed diamond growing in the subsurface of the liquid metal. The team attribute this effect to the catalytic activation of methane and the diffusion of carbon atoms in the subsurface region.

No seed particles

Unusually, the first diamond crystals in the IBS experiment began to form (or nucleate) without seed particles, which are prerequisites for conventional HPHT and chemical vapour deposition techniques. The individual crystals later merged into a film that is easy to detach and transfer to other substrates.

To investigate the nucleation process further, the team used high-resolution transmission electron microscopy (TEM) to capture cross-sections of the diamond film. These TEM images showed that carbon builds up in the liquid metal subsurface until it reaches supersaturated levels. This, according to the researchers, is likely what leads to the nucleation and growth of the diamonds.

Separately, synchrotron X-ray diffraction measurements revealed that although the diamond formed via this method was highly pure, it contained some silicon atoms situated between two unoccupied sites in the diamond lattice of carbon atoms. The researchers say that these silicon-vacancy colour centres, as they are known, could have applications in magnetic sensing and quantum computing, where similar defects known as nitrogen-vacancy centres are already an active topic of research. The presence of silicon also appears to play an important role in stabilizing the tetravalently-bonded carbon clusters responsible for nucleation, they add.

Scaling up

The researchers, who report their work in Nature, are now trying to pin down when the nucleation of the diamond begins. They also plan “temperature drop” experiments in which they will first supersaturate the liquid metal with carbon and then rapidly lower the temperature in the experimental chamber to trigger diamond nucleation.

Another future research direction might involve studying alternative metal liquid alloy compositions. “Our optimized growth was achieved using the gallium/nickel/iron/silicon liquid alloy,” explains team member Da Luo. “However, we also found that high-quality diamond can be grown by substituting nickel with cobalt or by replacing gallium with a gallium-indium mixture.”

Ruoff adds that the team might also try carbon precursors other than methane, noting that various gases and solid carbons could yield different results. Overall, the discovery of diamond nucleation and growth in this liquid is “fascinating”, he says, and it offers many exciting opportunities for basic science and for scaling up the growth of synthetic diamonds in new ways. “New designs and methods for introducing carbon atoms and/or small carbon clusters into liquid metals for diamond growth will surely be important,” he concludes.

The post Synthetic diamonds grow in liquid metal at ambient pressure appeared first on Physics World.

Ship-based atomic clock passes precision milestone

A new ultra-precise atomic clock outperforms existing microwave clocks in time-keeping and sturdiness under real-world conditions. The clock, made by a team of researchers from the California, US-based engineering firm Vector Atomic, exploits the precise frequencies of atomic transitions in iodine molecules and recently passed a three-week trial aboard a ship sailing around Hawaii.

Atomic clocks are the world’s most precise timekeeping devices, and they are essential to staples of modern life such as global positioning systems, telecommunications and data centres. The most common types of atomic clock used in these real-world applications were developed in the 1960s, and they work by measuring the frequency at which atoms oscillate between two energy states. They are often based on caesium atoms, which absorb and emit radiation at microwave frequencies as they oscillate, and the best of them are precise to within one second in six million years.

Clocks that absorb and emit at higher, visible, frequencies are even more precise, with timing errors of less than 1 second in 30 billion years. These optical atomic clocks are, however, much bulkier than their microwave counterparts, and their sensitivity to disturbances in their surroundings means they only work properly under well-controlled conditions.

Prototypes based on iodine

The Vector Atomic work, which the team describe in Nature, represents a step towards overturning these limitations. Led by Vector Atomic co-founder and study co-author Jamil Abo-Shaeer, the team developed three robust optical clock prototypes based on transitions in iodine molecules (I2). These transitions occur at wavelengths conveniently near those of routinely-employed commercial frequency-doubled lasers, and the iodine itself is confined in a vapour cell, doing away with the need to cool atoms to extremely low temperatures or keep them in an ultrahigh vacuum. With a volume of around 30 litres, the clocks are also compact enough to fit on a tabletop.

While the precision of these prototype optical clocks lags behind that of the best lab-based versions, it is still 1000 times better than clocks of a similar size that ships currently use, says Abo-Shaeer. The prototype clocks are also 100 times more precise than existing microwave clocks of the same size.

Sea trials

The researchers tested their clocks aboard a Royal New Zealand Navy ship, HMNZS Aotearoa, during a three-week voyage around Hawaii. They found that the clocks performed almost as well as in the laboratory, despite the completely different conditions. Indeed, two of the larger devices recorded errors of less than 400 picoseconds (10⁻¹² seconds) over 24 hours.
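
Converting these timekeeping figures into fractional frequency terms makes the comparison easier. The back-of-envelope sketch below uses only the numbers quoted in this article and is order-of-magnitude only.

SECONDS_PER_YEAR = 3.156e7

def fractional_error(time_error_s, elapsed_s):
    """Accumulated timing error divided by elapsed time."""
    return time_error_s / elapsed_s

cases = {
    "caesium microwave clock (1 s in 6 million years)": (1.0, 6e6 * SECONDS_PER_YEAR),
    "best lab optical clock (1 s in 30 billion years)": (1.0, 30e9 * SECONDS_PER_YEAR),
    "shipboard prototype (400 ps over 24 hours)":       (400e-12, 24 * 3600),
}

for label, (err, elapsed) in cases.items():
    print(f"{label}: ~{fractional_error(err, elapsed):.0e}")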

The team describe the prototypes as a “key building block” for upgrading the world’s timekeeping networks from the nanosecond to the picosecond regime. According to team member Jonathan Roslund, the goal is to build the world’s first fully integrated optical atomic clock with the same “form factor” as a microwave clock, and then demonstrate that it outperforms microwave clocks under real-world conditions.

“Iodine optical clocks are certainly not new,” he tells Physics World. “In fact, one of the very first optical clocks utilized iodine, but researchers moved onto more exotic atoms with better timekeeping properties. Iodine does have a number of attractive properties, however, for making a compact and simple portable optical clock.”

The most finicky parts of any atomic-clock system, Roslund explains, are the lasers, but iodine can rely on industrial-grade lasers operating at both 1064 nm and 1550 nm. “The vapour cell architecture we employ also uses no consumables and requires neither laser cooling nor a pre-stabilization cavity,” Roslund adds.

The next generation

After testing their first-generation clocks on HMNZS Aotearoa, the researchers developed a second-generation device that is 2.5 times more precise. With a volume of just 30 litres including the power supply and computer control, the upgraded version is now a commercial product called Evergreen-30. “We are also hard at work on a 5-litre version targeting the same performance, and an ultracompact 1-litre version,” Roslund reveals.

As well as travelling aboard ships, Roslund says these smaller clocks could have applications in airborne and space-based systems. They might also make a scientific impact: “We have just finished an exciting demonstration in collaboration with the University of Arizona, in which our Evergreen-30 clocks served as the timebase for a radio observatory in the Event Horizon Telescope Array, which is imaging distant supermassive black holes.”

The post Ship-based atomic clock passes precision milestone appeared first on Physics World.

Semiconductor substrate behaves ‘like the tail wagging the dog’, say scientists

The substrates on which semiconductor chips are grown usually get ignored, but they may be more important than we think. This is the finding of researchers in the US and Germany, who used high-energy X-rays to study titanium dioxide – a common substrate for semiconductors that switch between insulating and metallic states. The discovery that this material is far more than just a passive platform could help scientists develop next-generation electronics.

Materials that switch from metal-like to insulating very quickly offer a promising route for developing super-fast electronic transistors. To this end, a team led by materials scientist and physicist Venkatraman Gopalan of Pennsylvania State University, US, began studying a leading candidate for such devices, vanadium dioxide (VO2). Vanadium dioxide is unusual in that its electrons are strongly correlated. This means that, unlike in silicon-based electronics, the repulsion between electrons cannot be ignored.

Crucially, though, the researchers did not look at the VO2 layer on its own. They also analysed how it interacts with the titanium dioxide (TiO2) substrate upon which it is grown. To their surprise, they found that the substrate contains an active layer that behaves just like the semiconductor when the VO2 switches between an insulating state and a metallic one.

Timed X-ray pulse

Gopalan and colleagues obtained their results by growing a very thin film of VO2 atop a thick TiO2 single crystal substrate. They then fabricated a device channel on the ensemble across which they could apply the voltage pulses that switch the semiconductor from insulating to conducting. During this switching, they applied high-energy X-ray pulses from the Advanced Photon Source (APS) at Argonne National Laboratory to the channel and observed the lattice planes of the semiconducting film and the substrate.

“The X-ray pulse was timed so that it could arrive before, at and after the electrical pulse so that we see what happens with time,” Gopalan explains. “It was also raster scanned across the channel to map what happens to the entire channel when the material switches from being an insulator to a metal.”
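
In outline, the measurement is a two-dimensional raster scan repeated at a series of pump-probe delays, building up a space-and-time map of the lattice response. The sketch below shows only that loop structure; the function and parameter names are hypothetical placeholders, not the beamline’s actual software.

import itertools

delays_ns = [-50, 0, 50, 100]        # X-ray arrival relative to the voltage pulse (illustrative)
x_positions_um = range(0, 50, 5)     # raster grid across the device channel (illustrative)
y_positions_um = range(0, 20, 5)

def measure_diffraction(x_um, y_um, delay_ns):
    """Hypothetical placeholder for one diffraction measurement at (x, y, delay)."""
    return {"x_um": x_um, "y_um": y_um, "delay_ns": delay_ns, "lattice_shift": None}

scan = [measure_diffraction(x, y, d)
        for d, x, y in itertools.product(delays_ns, x_positions_um, y_positions_um)]
print(f"collected {len(scan)} space-time points")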

This technique, known as spatio-temporal X-ray diffraction microscopy, is good at revealing the behaviour of materials at the atomic level. In this case, it showed the researchers that the VO2 film bulges as it changes to a metal. This was unexpected: according to Gopalan, the material was supposed to shrink. “What is more, the substrate, which is usually thought to be electrically and mechanically passive, also bulges along with the VO2 film,” he says. “It is like the tail wagging the dog, and shows that a mechanism that was missed before is at play.”

Native oxygen vacancies are responsible

According to the researchers’ theoretical calculations and modelling, this mechanism involves atomic sites in the material lattice that are missing oxygen atoms. These native oxygen vacancies, as they are known, are present in both the semiconductor and substrate and they ionize and deionize in concert with the applied electric field.

“Neutral oxygen vacancies hold a charge of two electrons, which they can release when the material switches from an insulator to a metal,” Gopalan explains. “The oxygen vacancy left behind is now charged and swells up, leading to the observed swelling in the device. This can also happen in the substrate.”

The experiment itself was very challenging, Gopalan says. One of the X-ray beamlines at the APS had to be specially rigged and it took the team several years to complete the set-up. Then, he adds, “The results were so intriguing and unexpected that it took us several more years to analyse the data and come up with a theory to understand the results.”

According to Gopalan, there is tremendous interest in next-generation electronics based on correlated electronic materials such as VO2 that exhibit a fast insulator-to-metal transition. “While previous studies have analysed this material using various techniques, including using X-rays, ours is the first to study a functioning device geometry under realistic conditions, while mapping its response in space and time,” he tells Physics World. “This study is unique in that respect, and it paid off in what it revealed.”

The researchers are now trying to understand the mechanisms behind the substrate’s surprising response, and they plan to revisit their experiment to this end. “We are thinking, for example, of intentionally adding ionizing defects that release electrons and trigger a metal-to-insulator transition when a voltage is applied,” Gopalan reveals.

The present study – which also involved collaborators at Cornell University and Georgia Tech in the US, and the Paul Drude Institute in Germany – is detailed in Advanced Materials.

The post Semiconductor substrate behaves ‘like the tail wagging the dog’, say scientists appeared first on Physics World.

Wigner crystal appears in bilayer graphene

Researchers at Princeton University in the US say they have made the first direct observation of a Wigner crystal – a structure consisting solely of electrons arranged in a lattice-like configuration. The finding, made by using scanning tunnelling microscopy to examine a material known as Bernal-stacked graphene, confirms a nearly century-old theory that electrons can assemble into a closely-packed lattice without having to orbit around an atom. The work could help scientists discover other phases of exotic matter in which electrons behave collectively.

Although electrons repel each other, at room temperature their kinetic energy is high enough to overcome this, so they flow together as electric currents. At ultralow temperatures, however, repulsive forces dominate, and electrons spontaneously crystallize into an ordered quantum phase of matter. This, at least, is what the physicist Eugene Wigner predicted 90 years ago would happen. But while scientists have seen evidence of this type of crystalline lattice forming before (for example, in a one-dimensional carbon nanotube and in a quantum wire), it had never been observed directly.
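
The competition Wigner described can be put in rough numbers by comparing the Coulomb repulsion between neighbouring electrons with their thermal energy k_BT. The sketch below does this for an assumed electron density and dielectric environment (illustrative values, and it ignores the zero-point motion and applied magnetic field that matter in the actual experiment); the ratio flips from roughly order one at room temperature to many hundreds at dilution-refrigerator temperatures.

import math

E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
KB = 1.380649e-23            # J/K

n = 1e15        # assumed 2D electron density, electrons per m^2 (1e11 per cm^2)
eps_r = 5.0     # assumed relative permittivity of the surrounding material

spacing = 1 / math.sqrt(math.pi * n)                            # typical electron separation, m
coulomb = E_CHARGE**2 / (4 * math.pi * EPS0 * eps_r * spacing)  # pair repulsion energy, J

for T in (300.0, 0.2):    # room temperature vs dilution-fridge temperature, K
    print(f"T = {T} K: Coulomb / thermal ~ {coulomb / (KB * T):.2g}")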

A pristine sample of graphene

In the new work, which is detailed in Nature, researchers led by Princeton’s Ali Yazdani used a scanning tunnelling microscope (STM) to study electrons in a pristine sample of graphene (a sheet of carbon one atom thick). To keep the material as pure as possible, and so avoid the possibility of electron crystals forming in lattice defects or imperfections, they placed one sheet of graphene atop another in a configuration known as a bilayer Bernal stack.

Next, they cooled the sample down to just above absolute zero, which reduced the kinetic energy of the electrons. They also applied a magnetic field perpendicular to the sample’s layers, which suppresses kinetic energy still further by restricting the electrons’ possible orbits. The result was a two-dimensional gas of electrons located between the graphene layers, with a density the researchers could tune by applying a voltage across the sample.

Scanning tunnelling microscopy involves scanning a sharp metallic tip across a sample. When the tip passes over an electron, the particle tunnels through the gap between the sample surface and the tip, thereby creating an electric current. By measuring this current, researchers can determine the local density of electrons. Yazdani and colleagues found that when they increased this density, they observed a phase transition during which the electrons spontaneously assembled into an ordered triangular lattice structure – just as Wigner predicted.
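
The tip’s extreme sensitivity to what lies directly beneath it comes from the exponential dependence of the tunnelling current on the tip-sample gap, roughly I ∝ exp(−2κd), with κ set by the tunnelling barrier height. The sketch below uses an assumed barrier of a few electronvolts, so the numbers are illustrative only.

import math

HBAR = 1.054571817e-34    # reduced Planck constant, J s
M_E = 9.1093837015e-31    # electron mass, kg
EV = 1.602176634e-19      # J per eV

phi = 4.5 * EV                              # assumed tunnelling barrier height
kappa = math.sqrt(2 * M_E * phi) / HBAR     # decay constant, 1/m

for gap_nm in (0.4, 0.5, 0.6):
    attenuation = math.exp(-2 * kappa * gap_nm * 1e-9)
    print(f"gap = {gap_nm} nm: relative current ~ {attenuation:.1e}")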

Forcing a lattice to form

The team explains that this spontaneous assembly is the natural outcome of a “battle” between the electrons’ increased density (which pushes them closer together) and their mutual repulsion (which pushes them apart). An organized lattice configuration – a Wigner crystal – is, in effect, a compromise that lets electrons maintain a degree of distance from each other even when their density is relatively high. If the density increases still further, this crystalline phase melts, producing a phase known as a fractional quantum Hall electron liquid as well as an anisotropic quantum fluid in which the electrons organize themselves into stripes.

By analysing the size of each electron site in the Wigner crystal, the researchers also found evidence for the crystal’s “zero-point” motion. This motion, which comes about because of the Heisenberg uncertainty principle, occupies a “remarkable” 30% of the lattice constant of a crystal site, Yazdani explains, and highlights the crystal’s quantum nature.

The Princeton team now aims to use this same STM technique to image a Wigner crystal made of “holes”, which are regions of positive charge where electrons are absent. “We also plan to image other types of electron solid phases, so-called skyrme crystals and ‘bubble phases’,” Yazdani says. “In addition to even more exotic phases such as quasiparticle Wigner crystals made of fractional charges, there is also the possibility to study how these quantum crystals would change in the presence of a net electrical current.”

The post Wigner crystal appears in bilayer graphene appeared first on Physics World.
