
Fabrication and device performance of NiO/Ga2O3 heterojunction power rectifiers

28 October 2025 at 18:49


This talk shows how integrating p-type NiO to form NiO/Ga2O3 heterojunction rectifiers overcomes a key barrier for this material – the lack of effective p-type doping in Ga2O3 – enabling record-class breakdown and Ampere-class operation. It will cover device structure/process optimization, thermal stability at high temperatures, and radiation response – with direct ties to today’s priorities: EV fast charging, AI data-center power systems, and aerospace/space-qualified power electronics.

An interactive Q&A session follows the presentation.

 

Jian-Sian Li

Jian-Sian Li received his PhD in chemical engineering from the University of Florida in 2024, where his research focused on NiO/β-Ga2O3 heterojunction power rectifiers, including device design, process optimization, fast switching, high-temperature stability, and radiation tolerance (γ, neutron, proton). His work includes extensive electrical characterization and microscopy/TCAD analysis supporting device physics and reliability in harsh environments. Previously, he completed his BS and MS at National Taiwan University (2015, 2018), with research spanning phoretic/electrokinetic colloids, polymers for OFETs/PSCs, and solid-state polymer electrolytes for Li-ion batteries. He has since transitioned to industry at Micron Technology.

The post Fabrication and device performance of NiO/Ga2O3 heterojunction power rectifiers appeared first on Physics World.

Randomly textured lithium niobate gives snapshot spectrometer a boost

28 October 2025 at 17:00

A new integrated “snapshot spectroscopy” system developed in China can determine the spectral and spatial composition of light from an object with much better precision than other existing systems. The instrument uses randomly textured lithium niobate and its developers have used it for astronomical imaging and materials analysis – and they say that other applications are possible.

Spectroscopy is crucial to analysis of all kinds of objects in science and engineering, from studying the radiation emitted by stars to identifying potential food contaminants. Conventional spectrometers – such as those used on telescopes – rely on diffractive optics to separate incoming light into its constituent wavelengths. This makes them inherently large, expensive and inefficient at rapid image acquisition as the light from each point source has to be spatially separated to resolve the wavelength components.

In recent years researchers have combined computational methods with advanced optical sensors to create computational spectrometers with the potential to rival conventional instruments. One such approach is hyperspectral snapshot imaging, which captures both spectral and spatial information in the same image. There are currently two main snapshot-imaging techniques available. Narrowband-filtered snapshot spectral imagers comprise a mosaic pattern of narrowband filters and acquire an image by taking repeated snapshots at different wavelengths. However, these trade off spectral resolution against spatial resolution, as each extra band requires its own tile within the mosaic. A more complex alternative design – the broadband-modulated snapshot spectral imager – uses a single, broadband detector covered with a spatially varying element such as a metasurface that interacts with the light and imprints spectral encoding information onto each pixel. However, these are complex to manufacture and their spectral resolution is limited to the nanometre scale.

Random thicknesses

In the new work, researchers led by Lu Fang at Tsinghua University in Beijing unveil a spectroscopy technique that utilizes the nonlinear optical properties of lithium niobate to achieve sub-Ångström spectral resolution in a simply fabricated, integrated snapshot detector they call RAFAEL. A lithium niobate layer with random, sub-wavelength thickness variations is surrounded by distributed Bragg reflectors, forming optical cavities. These are integrated into a stack with a set of electrodes. Each cavity corresponds to a single pixel. Incident light enters  from one side of a cavity, interacting with the lithium niobate repeatedly before exiting and being detected. Because lithium niobate is nonlinear, its response varies with the wavelength of the light.

The researchers then applied a bias voltage using the electrodes. The nonlinear optical response of lithium niobate means that this bias alters its response to light differently at different wavelengths. Moreover, the random variation of the lithium niobate’s thickness around the surface means that the wavelength variation is spatially specific.

The researchers then designed a machine learning algorithm and trained it to use the way each pixel’s response varies with the applied bias voltage to reconstruct the wavelengths of the light incident at each point on the detector.

“The randomness is useful for making the equations independent,” explains Fang; “We want to have uncorrelated equations so we can solve them.”
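
To make the idea concrete, here is a minimal numerical sketch of that “uncorrelated equations” picture (not the team’s actual machine-learning reconstruction, and with invented dimensions): if each pixel’s response to every wavelength at every bias setting is known, and those responses are effectively random and uncorrelated, the measurements form a well-conditioned linear system that can be inverted for the spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)

n_wavelengths = 200   # spectral bins to recover (illustrative)
n_biases = 300        # number of bias-voltage settings (illustrative)

# Random, wavelength-dependent response of one pixel at each bias setting.
# In RAFAEL the randomness comes from the sub-wavelength thickness variations
# of the lithium niobate cavities; here it is simply drawn from a random matrix.
response = rng.uniform(0.0, 1.0, size=(n_biases, n_wavelengths))

# A test spectrum: two narrow emission lines on a weak continuum.
true_spectrum = 0.05 * np.ones(n_wavelengths)
true_spectrum[60] += 1.0
true_spectrum[140] += 0.6

# Measurements: one detector reading per bias setting, plus a little noise.
measurements = response @ true_spectrum + 0.01 * rng.standard_normal(n_biases)

# Because the rows of `response` are effectively uncorrelated, the least-squares
# problem is well posed and the spectrum can be recovered.
recovered, *_ = np.linalg.lstsq(response, measurements, rcond=None)
print("recovered line positions:", sorted(np.argsort(recovered)[-2:]))  # -> [60, 140]
```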

Thousands of stars

The researchers showed that they could achieve 88 Hz snapshot spectroscopy on a grid of 2048×2048 pixels with a spectral resolution of 0.5 Å (0.05 nm) between wavelengths of 400–1000 nm. They demonstrated this by capturing the full atomic absorption spectra of up to 5600 stars in a single snapshot. This is a two to four orders of magnitude improvement in observational efficiency over world-class astronomical spectrometers. They also demonstrated other applications, including a materials analysis challenge involving the distinction of a real leaf from a fake one. The two looked identical at optical wavelengths, but, using its broader range of wavelengths, RAFAEL was able to distinguish between the two.
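
As a rough back-of-the-envelope check of those figures (assuming the 0.5 Å resolution applies uniformly across the full 400–1000 nm band), the number of resolvable spectral channels works out to roughly 12,000:

```python
# Illustrative arithmetic only, based on the numbers quoted above.
resolution_nm = 0.05          # 0.5 angstrom
band_nm = 1000 - 400          # operating range, 400-1000 nm
pixels = 2048 * 2048
frame_rate_hz = 88

print("resolvable spectral channels:", band_nm / resolution_nm)               # 12,000
print("pixel spectra acquired per second:", f"{pixels * frame_rate_hz:.2e}")  # ~3.7e8
```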

The researchers are now attempting to improve the device further: “I still think that sub-Ångstrom is not the ending – it’s just the starting point,” says Fang. “We want to push the limit of our resolution to the picometre.” In addition, she says, they are working on further integration of the device – which requires no specialized lithography – for easier use in the field. “We’ve already put this technology on a drone platform,” she reveals. The team is also working with astronomical observatories such as Gran Telescopio Canarias in La Palma, Spain.

The research is described in Nature.

Computational imaging expert David Brady of Duke University in North Carolina is impressed by the instrument. “It’s a compact package with extremely high spectral resolution,” he says; “Typically an optical instrument, like a CMOS sensor that’s used here, is going to have between 10,000 and 100,000 photo-electrons per pixel. That’s way too many photons for getting one measurement…I think you’ll see that with spectral imaging as is done here, but also with temporal imaging. People are saying you don’t need to go at 30 frames per second, you can go at a million frames per second and push closer to the single photon limit, and then that would require you to do computation to figure out what it all means.”

The post Randomly textured lithium niobate gives snapshot spectrometer a boost appeared first on Physics World.

Tumour-specific radiofrequency fields suppress brain cancer growth

28 October 2025 at 14:00

A research team headed up at Wayne State University School of Medicine in the US has developed a novel treatment for glioblastoma, based on exposure to low levels of radiofrequency electromagnetic fields (RF EMF). The researchers demonstrated that the new therapy slows the growth of glioblastoma cells in vitro and, for the first time, showed its feasibility and clinical impact in patients with brain tumours.

The study, led by Hugo Jimenez and reported in Oncotarget, uses a device developed by TheraBionic that delivers amplitude-modulated 27.12 MHz RF EMF throughout the entire body, via a spoon-shaped antenna placed on the tongue. Using tumour-specific modulation frequencies, the device has already received US FDA approval for treating patients with advanced hepatocellular carcinoma (HCC, a liver cancer), while its safety and effectiveness are currently being assessed in clinical trials in patients with pancreatic, colorectal and breast cancer.

In this latest work, the team investigated its use in glioblastoma, an aggressive and difficult-to-treat brain tumour.

To identify the particular frequencies needed to treat glioblastoma, the team used a non-invasive biofeedback method developed previously to study patients with various types of cancer. The process involves measuring variations in skin electrical resistance, pulse amplitude and blood pressure while individuals are exposed to low levels of amplitude-modulated frequencies. The approach can identify the frequencies, usually between 1 Hz and 100 kHz, specific to a single tumour type.

Jimenez and colleagues first examined the impact of glioblastoma-specific amplitude-modulated RF EMF (GBMF) on glioblastoma cells, exposing various cell lines to GBMF for 3 h per day at the exposure level used for patient treatments. After one week, GBMF decreased the proliferation of three glioblastoma cell lines (U251, BTCOE-4765 and BTCOE-4795) by 34.19%, 15.03% and 14.52%, respectively.

The team note that the level of this inhibitive effect (15–34%) is similar to that observed in HCC cell lines (19–47%) and breast cancer cell lines (10–20%) treated with tumour-specific frequencies. A fourth glioblastoma cell line (BTCOE-4536) was not inhibited by GBMF, for reasons currently unknown.

Next, the researchers examined the effect of GBMF on cancer stem cells, which are responsible for treatment resistance and cancer recurrence. The treatment decreased the tumour sphere-forming ability of U251 and BTCOE-4795 cells by 36.16% and 30.16%, respectively – also a comparable range to that seen in HCC and breast cancer cells.

Notably, these effects were only induced by frequencies associated with glioblastoma. Exposing glioblastoma cells to HCC-specific modulation frequencies had no measurable impact and was indistinguishable from sham exposure.

Looking into the underlying treatment mechanisms, the researchers hypothesized that – as seen in breast cancer and HCC – glioblastoma cell proliferation is mediated by T-type voltage-gated calcium channels (VGCC). In the presence of a VGCC blocker, GBMF did not inhibit cell proliferation, confirming that GBMF inhibition of cell proliferation depends on T-type VGCCs, in particular, a calcium channel known as CACNA1H.

The team also found that GBMF blocks the growth of glioblastoma cells by modulating the “Mitotic Roles of Polo-Like Kinase” signalling pathway, leading to disruption of the cells’ mitotic spindles, critical structures in cell replication.

A clinical first

Finally, the researchers used the TheraBionic device to treat two patients: a 38-year-old patient with recurrent glioblastoma and a 47-year-old patient with the rare brain tumour oligodendroglioma. The first patient showed signs of clinical and radiological benefit following treatment; the second exhibited stable disease and tolerated the treatment well.

“This is the first report showing feasibility and clinical activity in patients with brain tumour,” the authors write. “Similarly to what has been observed in patients with breast cancer and hepatocellular carcinoma, this report shows feasibility of this treatment approach in patients with malignant glioma and provides evidence of anticancer activity in one of them.”

The researchers add that a previous dosimetric analysis of this technique measured a whole-body specific absorption rate (SAR, the rate of energy absorbed by the body when exposed to RF EMF) of 1.35 mW/kg and a peak spatial SAR (over 1 g of tissue) of 146–352 mW/kg. These values are well within the safety limits set by the ICNIRP (whole-body SAR of 80 mW/kg; peak spatial SAR of 2000 mW/kg). Organ-specific values for grey matter, white matter and the midbrain also had mean SAR ranges well within the safety limits.
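
Spelled out as simple ratios (not a formal compliance assessment), the quoted exposure figures sit comfortably below the ICNIRP limits mentioned above:

```python
# Headroom between the measured SAR values and the ICNIRP limits quoted in the text.
whole_body_sar = 1.35        # mW/kg, measured whole-body SAR
whole_body_limit = 80.0      # mW/kg, ICNIRP whole-body limit

peak_spatial_sar = 352.0     # mW/kg, upper end of measured peak spatial SAR (1 g of tissue)
peak_spatial_limit = 2000.0  # mW/kg, ICNIRP peak spatial limit

print(f"whole-body SAR is {whole_body_limit / whole_body_sar:.0f}x below the limit")       # ~59x
print(f"peak spatial SAR is {peak_spatial_limit / peak_spatial_sar:.1f}x below the limit")  # ~5.7x
```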

The team concludes that the results justify future preclinical and clinical studies of the TheraBionic device in this patient population. “We are currently in the process of designing clinical studies in patients with brain tumors,” Jimenez tells Physics World.

The post Tumour-specific radiofrequency fields suppress brain cancer growth appeared first on Physics World.

Entangled light leads to quantum advantage

28 October 2025 at 09:00
Quantum manipulation: The squeezer – an optical parametric oscillator (OPO) that uses a nonlinear crystal inside an optical cavity to manipulate the quantum fluctuations of light – is responsible for the entanglement. (Courtesy: Jonas Schou Neergaard-Nielsen)

Physicists at the Technical University of Denmark have demonstrated what they describe as a “strong and unconditional” quantum advantage in a photonic platform for the first time. Using entangled light, they were able to reduce the number of measurements required to characterize their system by a factor of 10¹¹, with a correspondingly huge saving in time.

“We reduced the time it would take from 20 million years with a conventional scheme to 15 minutes using entanglement,” says Romain Brunel, who co-led the research together with colleagues Zheng-Hao Liu and Ulrik Lund Andersen.
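
A quick order-of-magnitude check shows that the quoted times are consistent with a measurement-count reduction of order 10¹¹, assuming the time per measurement stays the same:

```python
# Rough consistency check of "20 million years -> 15 minutes".
minutes_per_year = 365.25 * 24 * 60
conventional_minutes = 20e6 * minutes_per_year
entangled_minutes = 15

print(f"speed-up factor: {conventional_minutes / entangled_minutes:.1e}")  # ~7e11, i.e. of order 1e11
```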

Although the research, which is described in Science, is still at a preliminary stage, Brunel says it shows that major improvements are achievable with current photonic technologies. In his view, this makes it an important step towards practical quantum-based protocols for metrology and machine learning.

From individual to collective measurement

Quantum devices are hard to isolate from their environment and extremely sensitive to external perturbations. That makes it a challenge to learn about their behaviour.

To get around this problem, researchers have tried various “quantum learning” strategies that replace individual measurements with collective, algorithmic ones. These strategies have already been shown to reduce the number of measurements required to characterize certain quantum systems, such as superconducting electronic platforms containing tens of quantum bits (qubits), by as much as a factor of 10⁵.

A photonic platform

In the new study, Brunel, Liu, Andersen and colleagues obtained a quantum advantage in an alternative “continuous-variable” photonic platform. The researchers note that such platforms are far easier to scale up than superconducting qubits, which they say makes them a more natural architecture for quantum information processing. Indeed, photonic platforms have already been crucial to advances in boson sampling, quantum communication, computation and sensing.

The team’s experiment works with conventional, “imperfect” optical components and consists of a channel containing multiple light pulses that share the same pattern, or signature, of noise. The researchers began by performing a procedure known as quantum squeezing on two beams of light in their system. This caused the beams to become entangled – a quantum phenomenon that creates such a strong linkage that measuring the properties of one instantly affects the properties of the other.

The team then measured the properties of one of the beams (the “probe” beam) in an experiment known as a 100-mode bosonic displacement process. According to Brunel, one can imagine this experiment as being like tweaking the properties of 100 independent light modes, which are packets or beams of light. “A ‘bosonic displacement process’ means you slightly shift the amplitude and phase of each mode, like nudging each one’s brightness and timing,” he explains. “So, you then have 100 separate light modes, and each one is shifted in phase space according to a specific rule or pattern.”

By comparing the probe beam to the second (“reference”) beam in a single joint measurement, Brunel explains that he and his colleagues were able to cancel out much of the uncertainty in these measurements. This meant they could extract more information per trial than they could have by characterizing the probe beam alone. This information boost, in turn, allowed them to significantly reduce the number of measurements – in this case, by a factor of 10¹¹.
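
A toy classical analogy (not the team’s continuous-variable quantum experiment, and with arbitrary numbers) illustrates why a joint probe–reference measurement extracts more information per trial: when the two beams share strongly correlated noise, subtracting the reference removes most of the uncertainty on the signal carried by the probe.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000

signal = 0.3                                              # displacement to estimate
common_noise = rng.standard_normal(n_trials)              # fluctuations shared by both beams
local_noise = 0.05 * rng.standard_normal((2, n_trials))   # small independent fluctuations

probe = signal + common_noise + local_noise[0]
reference = common_noise + local_noise[1]

# Uncertainty when the probe is measured alone vs. jointly with the reference.
print("probe-only spread:        ", round(probe.std(), 3))                # ~1.0
print("joint (probe - reference):", round((probe - reference).std(), 3))  # ~0.07
```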

While the DTU researchers acknowledge that they have not yet studied a practical, real-world system, they emphasize that their platform is capable of “doing something that no classical system will ever be able to do”, which is the definition of a quantum advantage. “Our next step will therefore be to study a more practical system in which we can demonstrate a quantum advantage,” Brunel tells Physics World.

The post Entangled light leads to quantum advantage appeared first on Physics World.

Queer Quest: a quantum-inspired journey of self-discovery

27 October 2025 at 17:00

This episode of Physics World Stories features an interview with Jessica Esquivel and Emily Esquivel – the creative duo behind Queer Quest. The event created a shared space for 2SLGBTQIA+ Black and Brown people working in science, technology, engineering, arts and mathematics (STEAM).

Mental health professionals also joined Queer Quest, which was officially recognized by UNESCO as part of the International Year of Quantum Science and Technology (IYQ). Over two days in Chicago this October, the event brought science, identity and wellbeing into powerful conversation.

Jessica Esquivel, a particle physicist and associate scientist at Fermilab, is part of the Muon g-2 experiment, pushing the limits of the Standard Model. Emily Esquivel is a licensed clinical professional counsellor. Together, they run Oyanova, an organization empowering Black and Brown communities through science and wellness.

Quantum metaphors and resilience through connection

Queer Quest advert, showing a woman’s face inside a planet. (Courtesy: Oyanova)

Queer Quest blended keynote talks with collective conversations, alongside meditation and other wellbeing activities. Panellists drew on quantum metaphors – such as entanglement – to explore identity, community and mental health.

In a wide-ranging conversation with podcast host Andrew Glester, Jessica and Emily speak about the inspiration for the event, and the personal challenges they have faced within academia. They speak about the importance of building resilience through community connections, especially given the social tensions in the US right now.

Hear more from Jessica Esquivel in her 2021 Physics World Stories appearance on the latest developments in muon science.

This article forms part of Physics World‘s contribution to the 2025 International Year of Quantum Science and Technology (IYQ), which aims to raise global awareness of quantum physics and its applications.

Stay tuned to Physics World and our international partners throughout the year for more coverage of the IYQ.

Find out more on our quantum channel.

 

The post Queer Quest: a quantum-inspired journey of self-discovery appeared first on Physics World.


Fingerprint method can detect objects hidden in complex scattering media

27 October 2025 at 15:00
Imaging buried objects. Left: artistic impression of metal spheres buried in small glass beads; centre: conventional ultrasound image; right: the new technology can precisely determine the positions of the metal spheres. (Courtesy: TU Wien/Arthur Le Ber)

Physicists have developed a novel imaging technique for detecting and characterizing objects hidden within opaque, highly scattering material. The researchers, from France and Austria, showed that their new mathematical approach, which utilizes the fact that hidden objects generate their own complex scattering pattern, or “fingerprint”, can work on biological tissue.

Viewing the inside of the human body is challenging due to the scattering nature of tissue. With ultrasound, when waves propagate through tissue they are reflected, bounce around and scatter chaotically, creating noise that obscures the signal from the object that the medical practitioner is trying to see. The further you delve into the body the more incoherent the image becomes.

There are techniques for overcoming these issues, but as scattering increases – in more complex media or as you push deeper through tissue – they struggle and unpicking the required signal becomes too complex.

The scientists behind the latest research, from the Institut Langevin in Paris, France and TU Wien in Vienna, Austria, say that rather than compensating for scattering, their technique instead relies on detecting signals from the hidden object in the disorder.

Objects buried in a material create their own complex scattering pattern, and the researchers found that if you know an object’s specific acoustic signal it’s possible to find it in the noise created by the surrounding environment.

“We cannot see the object, but the backscattered ultrasonic wave that hits the microphones of the measuring device still carries information about the fact that it has come into contact with the object we are looking for,” explains Stefan Rotter, a theoretical physicist at TU Wien.

Rotter and his colleagues examined how a series of objects scattered ultrasound waves in an interference-free environment. This created what they refer to as fingerprint matrices: measurements of the specific, characteristic way in which each object scattered the waves.

The team then developed a mathematical method that allowed them to calculate the position of each object when hidden in a scattering medium, based on its fingerprint matrix.

“From the correlations between the measured reflected wave and the unaltered fingerprint matrix, it is possible to deduce where the object is most likely to be located, even if the object is buried,” explains Rotter.
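
The localization step can be illustrated with a deliberately simplified, one-dimensional sketch (invented numbers, not the actual reflection-matrix formalism): correlate the measured signal against the pre-recorded fingerprint of the object at every candidate position and pick the best match.

```python
import numpy as np

rng = np.random.default_rng(2)

n_sensors = 256      # ultrasound transducer elements (illustrative)
n_positions = 50     # candidate positions for the hidden object

# "Fingerprints": the characteristic backscattered signal of the object at each
# candidate position, recorded beforehand in an interference-free reference medium.
fingerprints = rng.standard_normal((n_positions, n_sensors))

# A measurement with the object at position 17, buried in clutter that is
# modelled crudely here as strong additive noise.
true_position = 17
measured = fingerprints[true_position] + 3.0 * rng.standard_normal(n_sensors)

# Correlate the measurement against every fingerprint and take the best match.
scores = fingerprints @ measured / np.linalg.norm(fingerprints, axis=1)
print("estimated position:", int(np.argmax(scores)))  # -> 17
```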

The team tested the technique in three different scenarios. The first experiment trialled the ultrasound imaging of metal spheres in a dense suspension of glass beads in water. Conventional ultrasound failed in this setup and the spheres were completely invisible, but with their novel fingerprint method the researchers were able to accurately detect them.

Next, to examine a medical application for the technique, the researchers embedded lesion markers often used to monitor breast tumours in a foam designed to mimic the ultrasound scattering of soft tissue. These markers can be challenging to detect due to scatterers randomly distributed in human tissue. With the fingerprint matrix, however, the researchers say that the markers were easy to locate.

Finally, the team successfully mapped muscle fibres in a human calf using the technique. They claim this could be useful for diagnosing and monitoring neuromuscular diseases.

According to Rotter and his colleagues, their fingerprint matrix method is a versatile and universal technique that could be applied beyond ultrasound to all fields of wave physics. They highlight radar and sonar as examples of sensing techniques where target identification and detection in noisy environments are long-standing challenges.

“The concept of the fingerprint matrix is very generally applicable – not only for ultrasound, but also for detection with light,” Rotter says. “It opens up important new possibilities in all areas of science where a reflection matrix can be measured.”

The researchers report their findings in Nature Physics.

The post Fingerprint method can detect objects hidden in complex scattering media appeared first on Physics World.

Ask me anything: Kirsty McGhee – ‘Follow what you love: you might end up doing something you never thought was an option’

27 October 2025 at 11:00

What skills do you use every day in your job?

Obviously, I write: I wouldn’t be a very good science writer if I couldn’t. So communication skills are vital. Recently, for example, Qruise launched a new magnetic-resonance product for which I had to write a press release, create a new webpage and do social-media posts. That meant co-ordinating with lots of different people, finding out the key features to advertise, identifying the claims we wanted to make – and checking whether we have the data to back those claims up. I’m not an expert in quantum computing or magnetic-resonance imaging or even marketing, so I have to pick things up fast and then translate technically complex ideas from physics and software into simple messages for a broader audience. Thankfully, my colleagues are always happy to help. Science writing is a difficult task but I think I’m getting better at it.

What do you like best and least about your job?

I love the variety and the fact that I’m doing so many different things all the time. If there’s a day I feel I want something a little bit lighter, I can do some social media or the website, which is more creative. On the other hand, if I feel I could really focus in detail on something, then I can write some documentation that is a little bit more technical. I also love the flexibility of remote working, but I do miss going to the office and socialising with my colleagues on a regular basis. You can’t get to know someone as well online; it’s nicer to have time with them in person.

What do you know today that you wish you knew when you were starting out in your career?

That’s a hard one. It would be easy to say I wish I’d known earlier that I could combine science and writing and make a career out of that. On the other hand, if I’d known that, I might not have done my PhD – and if I’d gone into writing straight after my undergraduate degree, I perhaps wouldn’t be where I am now. My point is, it’s okay not to have a clear plan in life. As children, we’re always asked what we want to be – in my case, my dream from about the age of four was to be a vet. But then I did some work experience in a veterinary practice and I realized I’m really squeamish. It was only when I was 15 or 16 that I discovered I wanted to do physics because I liked it and was good at it. So just follow the things you love. You might end up doing something you never even thought was an option.

The post Ask me anything: Kirsty McGhee – ‘Follow what you love: you might end up doing something you never thought was an option’ appeared first on Physics World.

SCOOBY-DOO x LUSH

27 October 2025 at 09:25

With Halloween approaching, Lush has created a collection inspired by the famous gang of detectives as part of an exclusive limited-edition collaboration: the Scooby-Doo x Lush collection, available since 9 October online and in shops.

Drawing its inspiration from Hanna-Barbera's timeless cartoon, Lush unveils a nostalgia-filled collection that invites everyone to reconnect with the characters who sparked our imagination, while rediscovering the joy of unwinding after a day spent solving mysteries…

There are bath bombs in the shape of Scooby-Doo himself (€8) or of the Mystery Machine (€9), as well as the Zoinks! shower jelly (€15) and the Mystery Inc. shower gel (€12.50 for 120 g) which, with its mysterious purple colour, smells just like Scooby Snacks! Add to that the famous Scooby Snacks body spray (€45…) and you are ready to solve the mysteries of Halloween…

The products can be gathered together in a Mystery Machine-shaped gift box! Very cool!

More information at LUSH

The post SCOOBY-DOO x LUSH first appeared on Insert Coin.

NINJA GAIDEN 4 REVIEW – Time to slice again!

27 October 2025 at 08:31

We had been waiting 13 years for a sequel in the Ninja Gaiden franchise. It is finally here: episode 4 is now available on almost every platform, and we had the chance to test it on Xbox Series X. Ninja Gaiden 4 offers an experience that is as demanding as ever, but it modernizes things with accessibility options available at any time – and that is genuinely great!

A modernization of the genre

Ninja Gaiden is known for being a difficult game because of its hardcore combat gameplay. The series' demands are modernized by offering several difficulty modes right from the start: Hero, Standard or Hard. Personally I opted for Hero mode to enjoy a smoother experience and feel, controller in hand, the full extent of the powers of the hero you play. Note that the combat remains just as frantic, as some bosses are quite tough. However, if you are after a real challenge, Standard or Hard mode will suit you better.

If you are looking for a good beat 'em up without too much difficulty, I recommend Hero mode, which automates certain combat mechanics – for example, the perfect parry when the health gauge drops below 30%, or a simplified combo system. The game is divided into chapters each lasting roughly 30 to 45 minutes. Ninja Gaiden 4 has no manual save, and you will need to be careful because checkpoints can be quite far apart.

Ninja Gaiden 4 invites you to follow the adventures of Yakumo, a young ninja of the Raven Clan whose mission is to kill the priestess capable of resurrecting the Dark Dragon. However, he spares her so that she can bring it back to life and he can eliminate it for good. Yakumo goes against his clan's orders and sets out on his own through a dangerous journey, doing everything he can to save Tokyo from the grip of the demons with the help of his friends. In Ninja Gaiden 4 you receive a score at the end of each chapter that rates your mastery of the game. You will need to complete side missions to obtain important resources, which are used to buy techniques and abilities tied to each of the weapons you acquire. The fights are quite demanding, especially when enemies arrive in very large numbers, and you will have to dodge or counter every attack to try to survive. In combat, your weapons have two modes, one of which requires you to build up a gauge to unleash devastating attacks and break through elite enemies' guards.

Ninja Gaiden 4 encourages you to explore every corner of the map to find healing items or ninja coins to buy new techniques. In this new instalment the traversal phases are quite varied: you can use a grappling hook and raven wings to move through the air, leap from roof to roof like a ninja and even grind along buildings.

Ninja Gaiden 4 is a new instalment that may well divide long-time fans and newcomers. The game is more accessible and lets you enjoy the story and universe of Ninja Gaiden without compromise. Veterans, however, will probably want to pick Hard mode to rediscover the original gameplay. In any case, the fights in these modes are just as impressive thanks to excellent effects and graphics. Personally, my friends and I love this new iteration, and I recommend playing it if you have finished Lost Soul Aside and are looking for an intense experience.

Review by Pierre

The post NINJA GAIDEN 4 REVIEW – Time to slice again! first appeared on Insert Coin.

New adaptive optics technology boosts the power of gravitational wave detectors

27 October 2025 at 09:00

Future versions of the Laser Interferometer Gravitational Wave Observatory (LIGO) will be able to run at much higher laser powers thanks to a sophisticated new system that compensates for temperature changes in optical components. Known as FROSTI (for FROnt Surface Type Irradiator) and developed by physicists at the University of California Riverside, US, the system will enable next-generation machines to detect gravitational waves emitted when the universe was just 0.1% of its current age, before the first stars had even formed.

Gravitational waves are distortions in spacetime that occur when massive astronomical objects accelerate and collide. When these distortions pass through the four-kilometre-long arms of the two LIGO detectors, they create a tiny difference in the (otherwise identical) distance that light travels between the centre of the observatory and the mirrors located at the end of each arm. The problem is that detecting and studying gravitational waves requires these differences in distance to be measured with an accuracy of 10⁻¹⁹ m, which is 1/10 000th the size of a proton.
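
A one-line check of that comparison, taking the proton size as roughly 10⁻¹⁵ m and the arm length as the 4 km quoted above:

```python
measurement_precision_m = 1e-19
proton_size_m = 1e-15       # approximate proton diameter
arm_length_m = 4000.0

print(measurement_precision_m / proton_size_m)   # 1e-4, i.e. 1/10,000 of a proton
print(measurement_precision_m / arm_length_m)    # equivalent strain, ~2.5e-23
```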

Extending the frequency range

LIGO overcame this barrier 10 years ago when it detected the gravitational waves produced when two black holes located roughly 1.3 billion light-years from Earth merged. Since then, it and two smaller facilities, KAGRA and VIRGO, have observed many other gravitational waves at frequencies ranging from 30–2000 Hz.

Observing waves at lower and higher frequencies in the gravitational wave spectrum remains challenging, however. At lower frequencies (around 10–30 Hz), the problem stems from vibrational noise in the mirrors. Although these mirrors are hefty objects – each one measures 34 cm across, is 20 cm thick and has a mass of around 40 kg – the incredible precision required to detect gravitational waves at these frequencies means that even the minute amount of energy they absorb from the laser beam is enough to knock them out of whack.

At higher frequencies (150–2000 Hz), measurements are instead limited by quantum shot noise. This is caused by the random arrival time of photons at LIGO’s output photodetectors and is a fundamental consequence of the fact that the laser field is quantized.

A novel adaptive optics device

Jonathan Richardson, the physicist who led this latest study, explains that FROSTI is designed to reduce quantum shot noise by allowing the mirrors to cope with much higher levels of laser power. At its heart is a novel adaptive optics device that is designed to precisely reshape the surfaces of LIGO’s main mirrors under laser powers exceeding 1 megawatt (MW), which is nearly five times the power used at LIGO today.

Though its name implies cooling, FROSTI actually uses heat to restore the mirror’s surface to its original shape. It does this by projecting infrared radiation onto test masses in the interferometer to create a custom heat pattern that “smooths out” distortions and so allows for fine-tuned, higher-order corrections.

The single most challenging aspect of FROSTI’s design, and one that Richardson says shaped its entire concept, is the requirement that it cannot introduce even more noise into the LIGO interferometer. “To meet this stringent requirement, we had to use the most intensity-stable radiation source available – that is, an internal blackbody emitter with a long thermal time constant,” he tells Physics World. “Our task, from there, was to develop new non-imaging optics capable of reshaping the blackbody thermal radiation into a complex spatial profile, similar to one that could be created with a laser beam.”

Richardson anticipates that FROSTI will be a critical component for future LIGO upgrades – upgrades that will themselves serve as blueprints for even more sensitive next-generation observatories like the proposed Cosmic Explorer in the US and the Einstein Telescope in Europe. “The current prototype has been tested on a 40-kg LIGO mirror, but the technology is scalable and will eventually be adapted to the 440-kg mirrors envisioned for Cosmic Explorer,” he says.

Jan Harms, a physicist at Italy’s Gran Sasso Science Institute who was not involved in this work, describes FROSTI as “an ingenious concept to apply higher-order corrections to the mirror profile.” Though it still needs to pass the final test of being integrated into the actual LIGO detectors, Harms notes that “the results from the prototype are very promising”.

Richardson and colleagues are continuing to develop extensions to their technology, building on the successful demonstration of their first prototype. “In the future, beyond the next upgrade of LIGO (A+), the FROSTI radiation will need to be shaped into an even more complex spatial profile to enable the highest levels of laser power (1.5 MW) ultimately targeted,” explains Richardson. “We believe this can be achieved by nesting two or more FROSTI actuators together in a single composite, with each targeting a different radial zone of the test mass surfaces. This will allow us to generate extremely finely-matched optical wavefront corrections.”

The present study is detailed in Optica.

The post New adaptive optics technology boosts the power of gravitational wave detectors appeared first on Physics World.

Review – Creative Aurvana Ace 3 earbuds

27 October 2025 at 00:51

What are Creative's Aurvana Ace 3 earbuds worth?

Creative releases new models on a very regular basis and this time has kindly sent us its new high-end earbuds, namely the Aurvana Ace 3. Incidentally, we were able to test the V2 right here in March 2024.

Let's see what they are worth and what improvements have been made. You will find them priced at €149.99 directly on the brand's website. On with the review!

 

Unboxing

This time there is none of the orange touch that Creative has accustomed us to over the past few years. On the front we find a picture of the earbuds just emerging from their charging case, with the brand and model name recalled just below. Simple, elegant, effective. On the left there are a few legal notices in several languages, while on the right there is a picture of the earbuds in use and a reminder of the various technologies they contain.

On the top, the contents of the box are illustrated and finally, on the back, we find the main technical specifications and a reminder of the dedicated apps.

Creative Aurvana Ace 3

 

Technical specifications

Main audio technology: xMEMS (dual driver) + 10 mm dynamic transducer
Supported audio codecs: LDAC + aptX Lossless (via Snapdragon Sound)
Connectivity: Bluetooth 5.4
Active noise cancellation (ANC): Adaptive hybrid system
Ambient mode: Yes
Water/sweat resistance: IPX5
Claimed battery life: 7 hours per charge + ~26 hours (or 28 h depending on the source) with the case
Microphones / calls: Six microphones claimed for clear calls
Wear detection (auto play/pause): Yes, smart detection of earbud removal/reinsertion

Features

  • Hybrid dual-transducer system – an xMEMS (solid-state) driver + a 10 mm dynamic driver to combine treble precision with bass power.

  • High-end, lossless audio support – compatibility with Qualcomm Snapdragon Sound, the aptX Lossless codec and LDAC.

  • Bluetooth 5.4 connectivity – support for the LE Audio standard and Auracast technology for audio sharing and broadcasting to several devices.

  • Mimi Hearing Technologies sound personalization (Mimi Sound Personalization) – a hearing test generates a profile and the sound adapts in real time to your ears.

  • “Adaptive hybrid” active noise cancellation (ANC) – the ANC adjusts to the environment, complemented by an “Ambient” mode (outside noise) so you stay aware of what is happening around you.

  • Wear detection (“Wear Detect”) – playback pauses automatically when you remove an earbud and resumes when you put it back in.

  • Touch controls – for play/pause, calls, the voice assistant etc.

  • Mono mode – the option to use a single earbud for more flexible use.

  • Water and sweat resistance – IPX5 certification for the earbuds (for use during sport or in light rain).

  • Claimed battery life – up to 7 hours of listening on a single charge of the earbuds, and up to 26 hours combined with the case. USB-C recharging, with wireless charging also supported.

  • Dedicated app (Creative App) – providing the hearing profile, firmware updates and personalized audio settings.

  • Ear tips in different sizes – (XS, S, M, L, XL) to tailor comfort and isolation.

 

Contents

  • 1 x Creative Aurvana Ace 3
  • 1 x USB-C charging case
  • 1 x USB-C charging cable
  • 1 x Pair of silicone ear tips in (XS), (S), (M), (L) and (XL)
  • 1 x Quick-start guide
  • 1 x Carrying pouch

Creative Aurvana Ace 3

 

Review

Creative returns to the forefront with a new generation of in-ear earbuds: the Aurvana Ace 3. After the nice success of the Aurvana Ace and Ace 2, the Singaporean brand seems keen to assert its audio know-how a little further, somewhere between technical innovation and musical sensibility. Let's start with the design, which here is clean, functional and without extravagance. The Aurvana Ace 3 follow on visually from the previous-generation Ace models, with the same slightly egg-shaped case, a somewhat transparent lid and the logo engraved in relief.

The case opens with well-calibrated resistance – firmer than on previous models, if my memory serves – and the earbuds come out easily, with no fear of dropping them. They sit well in the ear, thanks to a semi-ergonomic shape that naturally hugs the outer ear without creating pressure. They remain comfortable even after several hours of wear, a point on which Creative has clearly made progress.

In terms of build, the assembly is clean, the finish precise and the texture soft, picking up few fingerprints – though still a few. The earbuds are IPX5 certified, so they can withstand sweat or light rain – a real plus for use on the move or during sport.

The big new feature of this generation is the Mimi technology, which personalizes the sound to your hearing. After a short test, the earbuds adapt their output in real time to your sensitivity, subtly adjusting the treble, midrange and bass.

Creative Aurvana Ace 3

The result is impressive: every listen becomes unique, natural and perfectly balanced. Voices gain clarity, instruments breathe better, and you discover just how much sound "made for you" can transform the listening experience.

Active noise cancellation has always been the weak point of the Aurvana models. In this version, Creative introduces an adaptive hybrid ANC, able to adjust its intensity automatically to the environment. In practice this works fairly well for continuous sounds – the rumble of a bus, the hum of air conditioning – but remains limited against sudden noises or voices.

This is not a deal-breaking flaw; the aim is rather to isolate you enough to fully enjoy the music without cutting you off from the world entirely. The "transparency" mode left me more puzzled: it amplifies ambient sounds in a natural way, useful for a walk around town or a bike ride. While running, for example, a car managed to surprise me, going from no engine noise at all to a deafening roar all of a sudden. A brief moment of panic. It is also worth noting that you can switch very quickly from one mode to the other with a simple touch gesture, which makes everyday use smooth and intuitive.

The move to Bluetooth 5.4 can be felt immediately. Pairing is almost instantaneous and stability is flawless, even several metres away from the smartphone. The LDAC and aptX Lossless codecs are of course on board, guaranteeing lossless playback if you have a compatible device.

For calls, the six microphones provide clear pickup and effective processing of surrounding noise. Even outdoors, the voice remains crisp.

The claimed battery life is 7 hours of listening per charge, with around 26 additional hours thanks to the case. In practice, with ANC active and the volume at around 70%, you are looking at more like 4 to 5 hours – which is still at the upper end of the average for this segment. The case charges via USB-C, and wireless charging is also included.

Creative Aurvana Ace 3

Let's talk briefly about the two apps: Creative App and Super X-Fi. The first lets you customize everything: sound, touch controls, ANC, wear detection and updates. It is also the one that integrates the Mimi Sound Personalization technology, capable of adapting the sound to your hearing after a short test. Simple, smooth and effective, the app turns the earbuds into a genuinely made-to-measure product, where every setting adjusts to your preferences and the way you listen.

The Super X-Fi app, for its part, reproduces immersive 3D sound faithful to the spatialization of a home cinema. It lets you calibrate the listening experience to the shape of your head and ears, adjust audio profiles and manage the earbuds' settings.

To finish, let's compare our Aurvana Ace 3 with the model released last year, which will also let us sum up our review. Today's model brings several improvements over the Ace 2. The sound is richer and more detailed thanks to the hybrid dual transducer (xMEMS + 10 mm dynamic). The big new feature is Mimi sound personalization, which adjusts the sound to your hearing and was absent on the Ace 2. The ANC is more precise and adaptive, connectivity moves up to Bluetooth 5.4 with LE Audio and Auracast support, and battery life reaches up to 7 h per charge and 26 h with the case. The Ace 3 also add six microphones and IPX5 resistance, offering a more complete and modern experience for a similar price.

Conclusion 

The Creative Aurvana Ace 3 mark a step forward for the brand. They are not revolutionary earbuds, but they are well-rounded, versatile companions. Creative has found a balance here between innovation and technical execution.

What's more, the sound personalization makes every listen unique. We will, however, point out what we see as their weaknesses: an ANC that could still be improved and battery life a little below expectations, but nothing that really spoils the experience.

Review – Creative Aurvana Ace 3 earbuds is available to read on Vonguru.

Wild boar are moving into town

24 October 2025 at 12:26

On the outskirts of Bordeaux, wild boar are making their way into town, causing concern, damage and political debate. In this report with Le Monde, a CNRS team investigates: which routes do they take? Why are they settling there? And how can humans and wild boar coexist without conflict?

A SMART approach to treating lung cancers in challenging locations

24 October 2025 at 14:00

Radiation treatment for patients with lung cancer represents a balancing act, particularly if malignant lesions are centrally located near to critical structures. The radiation may destroy the tumour, but vital organs may be seriously damaged as well.

The standard treatment for non-small cell lung cancer (NSCLC) is stereotactic ablative body radiotherapy (SABR), which delivers intense radiation doses in just a few treatment sessions and achieves excellent local control. For ultracentral lung lesions, however – defined as having a planning target volume (PTV) that abuts or overlaps the proximal bronchial tree, oesophagus or pulmonary vessels – the high risk of severe radiation toxicity makes SABR highly challenging.

A research team at GenesisCare UK, an independent cancer care provider operating nine treatment centres in the UK, has now demonstrated that stereotactic MR-guided adaptive radiotherapy (SMART)-based SABR may be a safer and more effective option for treating ultracentral metastatic lesions in patients with histologically confirmed NSCLC. They report their findings in Advances in Radiation Oncology.

SMART uses diagnostic-quality MR scans to provide real-time imaging, 3D multiplanar soft-tissue tracking and automated beam control of an advanced linear accelerator. The idea is to use daily online volume adaptation and plan re-optimization to account for any changes in tumour size and position relative to organs-at-risk (OAR). Real-time imaging enables treatment in breath-hold with gated beam delivery (automatically pausing delivery if the target moves outside a defined boundary), eliminating the need for an internal target volume and enabling smaller PTV margins.

The approach offers potential to enhance treatment precision and target coverage while improving sparing of adjacent organs compared with conventional SABR, first author Elena Moreno-Olmedo and colleagues contend.

A safer treatment option

The team conducted a study to assess the incidence of SABR-related toxicities in patients with histologically confirmed NSCLC undergoing SMART-based SABR. The study included 11 patients with 18 ultracentral lesions, the majority of whom had oligometastatic or oligoprogressive disease.

Patients received five to eight treatment fractions, to a median dose of 40 Gy (ranging from 30 to 60 Gy). The researchers generated fixed-field SABR plans with dosimetric aims including a PTV V100% (the volume receiving at least 100% of the prescription dose) of 95% or above, a PTV V95% of 98% or above and a maximum dose of between 110% and 140%. PTV coverage was compromised where necessary to meet OAR constraints, with a minimum PTV V100% of at least 70%.
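
For readers unfamiliar with the dose–volume notation, here is a minimal sketch of how metrics such as PTV V100%, V95% and the maximum dose are typically computed from a planned dose distribution. The voxel doses below are randomly generated and purely illustrative – they are not data from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

prescription_gy = 40.0    # median prescription dose in the study
# Illustrative planned doses (Gy) for the voxels inside the PTV.
ptv_dose = rng.normal(loc=44.0, scale=2.0, size=5000)

def v_percent(dose, threshold_fraction):
    """Percentage of the PTV receiving at least `threshold_fraction` of the prescription."""
    return np.mean(dose >= threshold_fraction * prescription_gy) * 100

print(f"PTV V100%: {v_percent(ptv_dose, 1.00):.1f}%   (aim: 95% or above)")
print(f"PTV V95%:  {v_percent(ptv_dose, 0.95):.1f}%   (aim: 98% or above)")
print(f"Max dose:  {ptv_dose.max() / prescription_gy * 100:.0f}% of prescription (aim: 110-140%)")
```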

SABR was performed using a 6 MV 0.35 T MRIdian linac with gated delivery during repeated breath-holds, under continuous MR guidance. Based on daily MRI scans, online plan adaptation was performed for all of the 78 delivered fractions.

The researchers report that both the PTV volume and PTV overlap with ultracentral OARs were reduced in SMART treatments compared with conventional SABR. The median SMART PTV was 10.1 cc, compared with 30.4 cc for the simulated SABR PTV, while the median PTV overlap with OARs was 0.85 cc for SMART (8.4% of the PTV) and 4.7 cc for conventional SABR.

In terms of treatment-related side effects for SMART, the rates of acute and late grade 1–2 toxicities were 54% and 18%, respectively, with no grade 3–5 toxicities observed. This demonstrates the technique’s increased safety compared with non-adaptive SABR treatments, which have exhibited high rates of severe toxicity, including treatment-related deaths, in ultracentral tumours.

Two-thirds of patients were alive at the median follow-up point of 28 months, and 93% were free from local progression at 12 months. The median progression-free survival was 5.8 months and median overall survival was 20 months.

Acknowledging the short follow-up time frame, the researchers note that additional late toxicities may occur. However, they are hopeful that SMART will be considered as a favourable treatment option for patients with ultracentral NSCLC lesions.

“Our analysis demonstrates that hypofractionated SMART with daily online adaptation for ultracentral NSCLC achieved comparable local control to conventional non-adaptive SABR, with a safer toxicity profile,” they write. “These findings support the consideration of SMART as a safer and effective treatment option for this challenging subgroup of thoracic tumours.”

The SUNSET trial

SMART-based SABR remains an emerging cancer treatment that is not yet available in many cancer treatment centres. Despite the high risk for patients with ultracentral tumours, SABR is the standard treatment for inoperable NSCLC.

The phase 1 clinical trial, Stereotactic radiation therapy for ultracentral NSCLC: a safety and efficacy trial (SUNSET), assessed the use of SBRT for ultracentral tumours in 30 patients with early-stage NSCLC treated at five Canadian cancer centres. In all cases, the PTVs touched or overlapped the proximal bronchial tree, the pulmonary artery, the pulmonary vein or the oesophagus. Led by Meredith Giuliani of the Princess Margaret Cancer Centre, the trial aimed to determine the maximum tolerated radiation dose associated with a less than 30% rate of grade 3–5 toxicity within two years of treatment.

All patients received 60 Gy in eight fractions. Dose was prescribed to deliver a PTV V100% of 95%, a PTV V90% of 99% and a maximum dose of no more than 120% of the prescription dose, with OAR constraints prioritized over PTV coverage. All patients had daily cone-beam CT imaging to verify tumour position before treatment.

At a median follow-up of 37 months, two patients (6.7%) experienced dose-limiting grade 3–5 toxicities – an adverse event rate within the prespecified acceptability criteria. The three-year overall survival was 72.5% and the three-year progression-free survival was 66.1%.

In a subsequent dosimetric analysis, the researchers report that they did not identify any relationship between OAR dose and toxicity, within the dose constraints used in the SUNSET trial. They note that 73% of patients could be treated without compromise of the PTV, and where compromise was needed, the mean PTV D95 (the minimum dose delivered to 95% of the PTV) remained high at 52.3 Gy.

As expected, plans that overlapped with central OARs were associated with worse local control, but PTV undercoverage was not. “[These findings suggest] that the approach of reducing PTV coverage to meet OAR constraints does not appear to compromise local control, and that acceptable toxicity rates are achievable using 60 Gy in eight fractions,” the team writes. “In the future, use of MRI or online adaptive SBRT may allow for safer treatment delivery by limiting dose variation with anatomic changes.”

The post A SMART approach to treating lung cancers in challenging locations appeared first on Physics World.

THE OUTER WORLDS 2 REVIEW – Back in space…

24 October 2025 at 10:56

Six years after the first instalment of The Outer Worlds (a worthy heir to Fallout New Vegas), its sequel arrives on 29 October 2025 on Xbox Series, PS5 and PC. Does this FPS/RPG live up to its ambitions? We tell you everything, spoiler-free!

DAY 1…

It is worth knowing that the game opens straight away with tongue-in-cheek humour and immediately brings a certain freshness, never taking itself too seriously while making some big promises. As in every game of the genre, you go through a fairly complete character-creation stage. Once you have chosen your physique and picked a look with taste (or not), you have to specialize among 12 skills such as persuasion, firearms and so on.

If you know the genre, these are skills you will be used to choosing, but there is no way to reset them, and the choices you make will have repercussions throughout the story. Choose wisely.

And off you go! You are ready to start this new adventure, in which you play an agent of the Directorate – a space cop, basically! You accept your first mission to recover an engine and travel from point A to point B at light speed but, as in any self-respecting game, things do not go as planned – we will let you discover that in due course!

Gameplay and replayability

As mentioned earlier, your choices have repercussions on the story of this sequel, which already allows for plenty of replayability, but depending on your skills you can also approach missions in different ways: negotiate with merchants, kill everything that moves, or stay ultra-stealthy as in Dishonored – it is up to you!

As in any self-respecting RPG, you will meet several characters who help you accomplish your missions – strong personalities, fairly well written but quite classic for the genre (the tank, the funny robot, the quiet one).

What we appreciate about the main and side quests is that they are all fully scripted, so if you want to fully complete the game it will take you an enormous amount of time. And while we often talk about skills and characters' strengths, what if I told you about characters' flaws…

YES! You read that right: your character can have weak points. For example, if you are the stealthy type and choose a skill to move faster while crouched, you will indeed move faster, but you will have "fragile knees" – and when you crouch there will be a "CRACK" that can alert your enemies. There are around twenty of these, and it is up to you whether to accept them along with their trade-off.

As for combat, not much has changed: it is very classic, but no less effective for it, and we will take that. You always fight alongside two companions and, as in a certain Mass Effect back in the day, you can give them orders, but the AI is not exceptional – you will often have to handle the situation on your own, and that is a shame.

AND THE GRAPHICS?

Even without the latest €5,000 PC, on an RTX 3060 SUPER we were able to run the game on high settings, with a few framerate drops in some environments, bearing in mind that the title has not yet received a day-one patch. Apart from that, we had no problems, which is appreciated. What we do regret are the facial expressions, which are not great…

The game is very pretty for its style, with cutscenes punctuating the adventure. Beyond that, the presentation is very by-the-book and very conventional: shot/reverse-shot dialogue scenes and NPCs that stand perfectly still.

PROS

An immersive, funny and fresh universe
Dynamic, engaging combat
Excellent length (50 hours on average)
A true sequel, not a version 1.5
Replayability to discover everything the game has to offer

CONS

No French dub (subtitled original voices only)
Very limited AI
Staging that lags behind the times

The Outer Worlds 2 is a highly anticipated game: the first was a pleasant surprise and its sequel is a genuinely improved version. If you like losing yourself in a universe, want to have a good time and try out several character builds in an atmosphere with real freshness, and if you miss Fallout, do not hesitate for a second; this is the title for you. Without revolutionizing the genre, Obsidian brings welcome new ideas. Our only regrets are the very limited AI and overly static NPCs, and on a personal note a French dub would have been a plus, or at least offering players the choice. All in all, it is one of the must-play games of late 2025!

Review by Aurélien

The article REVIEW: THE OUTER WORLDS 2 – Back in space… appeared first on Insert Coin.

Spiral catheter optimizes drug delivery to the brain

24 octobre 2025 à 10:00

Researchers in the United Arab Emirates have designed a new catheter that can deliver drugs to entire regions of the brain. Developed by Batoul Khlaifat and colleagues at New York University Abu Dhabi, the catheter’s helical structure and multiple outflow ports could make it both safer and more effective for treating a wide range of neurological disorders.

Modern treatments for brain-related conditions including Parkinson’s disease, epilepsy, and tumours often involve implanting microfluidic catheters that deliver controlled doses of drug-infused fluids to highly localized regions of the brain. Today, these implants are made from highly flexible materials that closely mimic the soft tissue of the brain. This makes them far less invasive than previous designs.

However, there is still much room for improvement, as Khlaifat explains. “Catheter design and function have long been limited by the neuroinflammatory response after implantation, as well as the unequal drug distribution across the catheter’s outlets,” she says.

A key challenge with this approach is that each of the brain’s distinct regions has highly irregular shapes, which makes it incredibly difficult to target via single drug doses. Instead, doses must be delivered either through repeated insertions from a single port at the end of a catheter, or through single insertions across multiple co-implanted catheters. Either way, the approach is highly invasive, and runs the risk of further trauma to the brain.

Multiple ports

In their study, Khlaifat’s team explored how many of these problems stem from the design of existing catheters, which tend to be simple tubes with a single input port at one end and a single output port at the other. Using fluid dynamics simulations, they started by investigating how drug outflow would change when multiple output ports are positioned along the length of the catheter.

To ensure this outflow is delivered evenly, they carefully adjusted the diameter of each port to account for the change in fluid pressure along the catheter’s length – so that four evenly spaced ports could each deliver roughly one quarter of the total flow. Building on this innovation, the researchers then explored how the shape of the catheter itself could be adjusted to optimize delivery even further.
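As a rough illustration of the port-sizing step described above – a minimal sketch, not the team’s actual model – the following Python snippet treats the lumen between ports and each port itself as laminar Hagen-Poiseuille hydraulic resistances, then solves the resulting network for the fraction of flow leaving each of four evenly spaced ports. Every number (viscosity, lumen and port dimensions, flow rate) is a hypothetical placeholder rather than a value from the study.

# Flow-balancing sketch (illustrative only; all parameters are assumed values).
import numpy as np

MU = 1.0e-3     # viscosity of a water-like infusate (Pa s), assumed
Q_IN = 1.0e-9   # total infusion rate (m^3/s); the computed fractions do not depend on it

def resistance(length, radius):
    """Hagen-Poiseuille hydraulic resistance of a cylindrical channel."""
    return 8.0 * MU * length / (np.pi * radius**4)

def port_fractions(port_radii, lumen_radius=0.2e-3,
                   segment_length=2.0e-3, wall_thickness=0.1e-3):
    """Fraction of the total flow delivered by each port along the catheter."""
    n = len(port_radii)
    g_seg = 1.0 / resistance(segment_length, lumen_radius)
    g_port = np.array([1.0 / resistance(wall_thickness, r) for r in port_radii])

    # Conductance matrix for a chain of lumen segments with a port at each node;
    # the tissue outside the catheter is taken as the zero-pressure reference.
    A = np.diag(g_port)
    for i in range(n - 1):
        A[i, i] += g_seg
        A[i + 1, i + 1] += g_seg
        A[i, i + 1] -= g_seg
        A[i + 1, i] -= g_seg
    b = np.zeros(n)
    b[0] = Q_IN                        # all flow enters at the proximal end
    pressures = np.linalg.solve(A, b)
    outflow = g_port * pressures
    return outflow / outflow.sum()

# Equal ports: the proximal outlet takes the largest share, because the pressure
# falls along the lumen.
radii = np.full(4, 50e-6)
print("equal ports:", np.round(port_fractions(radii), 3))

# Simple fixed-point tuning: at a given pressure the flow through a port scales
# roughly with radius^4, so nudge each radius towards a 25% share.
for _ in range(20):
    radii = radii * (0.25 / port_fractions(radii)) ** 0.25
print("tuned ports:", np.round(port_fractions(radii), 3))
print("tuned radii:", np.round(radii * 1e6, 1), "um")

In this toy model the port radii grow slightly towards the tip after tuning and the split settles near one quarter each, mirroring the diameter adjustment described above.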

“We varied the catheter design from a straight catheter to a helix of the same small diameter, allowing for a larger area of drug distribution in the target implantation region with minimal invasiveness,” explains team member Khalil Ramadi. “This helical shape also allows us to resist buckling on insertion, which is a major problem for miniaturized straight catheters.”

Helical catheter

Based on their simulations, the team fabricated a helical catheter they call Strategic Precision Infusion for Regional Administration of Liquid, or SPIRAL. In a first set of experiments under controlled lab conditions, they verified their prediction of even outflow rates across the catheter’s outlets.

“Our helical device was also tested in mouse models alongside its straight counterpart to study its neuroinflammatory response,” Khlaifat says. “There were no significant differences between the two designs.”

Having validated the safety of their approach, the researchers are now hopeful that SPIRAL could pave the way for new and improved methods for targeted drug delivery within the brain. With the ability to target entire regions of the brain with smaller, more controlled doses, this future generation of implanted catheters could ultimately prove to be both safer and more effective than existing designs.

“These catheters could be optimized for each patient through our computational framework to ensure only regions that require dosing are exposed to therapy, all through a single insertion point in the skull,” describes team member Mahmoud Elbeh. “This tailored approach could improve therapies for brain disorders such as epilepsy and glioblastomas.”

The research is described in the Journal of Neural Engineering.

The post Spiral catheter optimizes drug delivery to the brain appeared first on Physics World.

Sailing: Violette Dorange takes on the Transat Café L'Or

24 octobre 2025 à 08:41
Sailing: Violette Dorange is taking on the Transat Café L'Or with Samantha Davies. She describes the "bout of the blues" she felt after returning from the Vendée Globe. After three weeks of holiday she went back to training with her energy restored, and after this transatlantic race she plans to enter the Route du Rhum.

Performance metrics and benchmarks point the way to practical quantum advantage

23 octobre 2025 à 17:35
Quantum connections Measurement scientists are seeking to understand and quantify the relative performance of quantum computers from different manufacturers as well as across the myriad platform technologies. (Courtesy: iStock/Bartlomiej Wroblewski)

From quantum utility today to quantum advantage tomorrow: incumbent technology companies – among them Google, Amazon, IBM and Microsoft – and a wave of ambitious start-ups are on a mission to transform quantum computing from applied research endeavour to mainstream commercial opportunity. The end-game: quantum computers that can be deployed at scale to perform computations significantly faster than classical machines while addressing scientific, industrial and commercial problems beyond the reach of today’s high-performance computing systems.

Meanwhile, as technology translation gathers pace across the quantum supply chain, government laboratories and academic scientists must maintain their focus on the “hard yards” of precompetitive research. That means prioritizing foundational quantum hardware and software technologies, underpinned by theoretical understanding, experimental systems, device design and fabrication – and pushing out along all these R&D pathways simultaneously.

Bringing order to disorder

Equally important is the requirement to understand and quantify the relative performance of quantum computers from different manufacturers as well as across the myriad platform technologies – among them superconducting circuits, trapped ions, neutral atoms as well as photonic and semiconductor processors. A case study in this regard is a broad-scope UK research collaboration that, for the past four years, has been reviewing, collecting and organizing a holistic taxonomy of metrics and benchmarks to evaluate the performance of quantum computers against their classical counterparts as well as the relative performance of competing quantum platforms.

Funded by the National Quantum Computing Centre (NQCC), which is part of the UK National Quantum Technologies Programme (NQTP), and led by scientists at the National Physical Laboratory (NPL), the UK’s National Metrology Institute, the cross-disciplinary consortium has taken on an endeavour that is as sprawling as it is complex. The challenge lies in the diversity of quantum hardware platforms in the mix, as well as the emergence of two different approaches to quantum computing: one a gate-based framework for universal quantum computation, the other an analogue approach tailored to outperforming classical computers on specific tasks.

“Given the ambition of this undertaking, we tapped into a deep pool of specialist domain knowledge and expertise provided by university colleagues at Edinburgh, Durham, Warwick and several other centres-of-excellence in quantum,” explains Ivan Rungger, a principal scientist at NPL, professor in computer science at Royal Holloway, University of London, and lead scientist on the quantum benchmarking project. That core group consulted widely within the research community and with quantum technology companies across the nascent supply chain. “The resulting study,” adds Rungger, “positions transparent and objective benchmarking as a critical enabler for trust, comparability and commercial adoption of quantum technologies, aligning closely with NPL’s mission in quantum metrology and standards.”

Not all metrics are equal – or mature

Made to measure NPL’s Institute for Quantum Standards and Technology (above) is the UK’s national metrology institute for quantum science. (Courtesy: NPL)

For context, a number of performance metrics used to benchmark classical computers can also be applied directly to quantum computers, such as the speed of operations and the number of processing units, as well as the probability of errors occurring in the computation. That only goes so far, though, with all manner of dedicated metrics emerging in the past decade to benchmark the performance of quantum computers – ranging from their individual hardware components to entire applications.

Complexity reigns, it seems, and navigating the extensive literature can prove overwhelming, while the levels of maturity of different metrics vary significantly. Objective comparisons aren’t straightforward either – not least because variations of the same metric are commonly deployed, and the data disclosed alongside a reported metric value is often not sufficient to reproduce the results.

“Many of the approaches provide similar overall qualitative performance values,” Rungger notes, “but the divergence in the technical implementation makes quantitative comparisons difficult and, by extension, slows progress of the field towards quantum advantage.”

The task then is to rationalize the metrics used to evaluate the performance of a given quantum hardware platform into a minimal yet representative set agreed across manufacturers, algorithm developers and end-users. These benchmarks also need to follow agreed common approaches so that quantum computers from different equipment vendors can be evaluated fairly and objectively.

With these objectives in mind, Rungger and colleagues conducted a deep-dive review that has yielded a comprehensive collection of metrics and benchmarks to allow holistic comparisons of quantum computers, assessing the quality of hardware components all the way to system-level performance and application-level metrics.

Drill down further and there’s a consistent format for each metric that includes its definition, a description of the methodology, the main assumptions and limitations, and a linked open-source software package implementing the methodology. The software transparently demonstrates the methodology and can also be used in practical, reproducible evaluations of all metrics.
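To give a flavour of what an application-level metric can look like in practice – the snippet below is an illustrative sketch with mock data, not the consortium’s open-source software – here is a short Python routine that computes the Hellinger fidelity between the ideal output distribution of a three-qubit GHZ circuit (ideally half "000" and half "111") and a set of measured bitstring counts, together with a simple bootstrap estimate of the statistical uncertainty.

# Illustrative benchmark metric (mock counts; not the consortium's software).
import math
import random
from collections import Counter

def hellinger_fidelity(ideal, counts):
    """F = (sum_x sqrt(p_ideal(x) * p_measured(x)))^2, between 0 and 1."""
    shots = sum(counts.values())
    overlap = sum(math.sqrt(p * counts.get(x, 0) / shots) for x, p in ideal.items())
    return overlap ** 2

def bootstrap_uncertainty(ideal, counts, resamples=200, seed=1):
    """Standard deviation of the fidelity under resampling of the shots."""
    rng = random.Random(seed)
    outcomes = [x for x, c in counts.items() for _ in range(c)]
    values = []
    for _ in range(resamples):
        sample = Counter(rng.choices(outcomes, k=len(outcomes)))
        values.append(hellinger_fidelity(ideal, sample))
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))

ideal = {"000": 0.5, "111": 0.5}                                         # GHZ target
counts = {"000": 4721, "111": 4856, "001": 133, "110": 162, "010": 128}  # mock data
f = hellinger_fidelity(ideal, counts)
df = bootstrap_uncertainty(ideal, counts)
print(f"Hellinger fidelity: {f:.4f} +/- {df:.4f}")

Reporting the value together with its definition, the raw counts and the uncertainty is exactly the kind of disclosure that makes a metric reproducible rather than merely quotable.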

“As research on metrics and benchmarks progresses, our collection of metrics and the associated software for performance evaluation are expected to evolve,” says Rungger. “Ultimately, the repository we have put together will provide a ‘living’ online resource, updated at regular intervals to account for community-driven developments in the field.”

From benchmarking to standards

Innovation being what it is, those developments are well under way. For starters, the importance of objective and relevant performance benchmarks for quantum computers has led several international standards bodies to initiate work on specific areas that are ready for standardization – work that, in turn, will give manufacturers, end-users and investors an informed evaluation of the performance of a range of quantum computing components, subsystems and full-stack platforms.

What’s evident is that the UK’s voice on metrics and benchmarking is already informing the collective conversation around standards development. “The quantum computing community and international standardization bodies are adopting a number of concepts from our approach to benchmarking standards,” notes Deep Lall, a quantum scientist in Rungger’s team at NPL and lead author of the study. “I was invited to present our work to a number of international standardization meetings and scientific workshops, opening up widespread international engagement with our research and discussions with colleagues across the benchmarking community.”

He continues: “We want the UK effort on benchmarking and metrics to shape the broader international effort. The hope is that the collection of metrics we have pulled together, along with the associated open-source software provided to evaluate them, will guide the development of standardized benchmarks for quantum computers and speed up the progress of the field towards practical quantum advantage.”

That’s a view echoed – and amplified – by Cyrus Larijani, NPL’s head of quantum programme. “As we move into the next phase of NPL’s quantum strategy, the importance of evidence-based decision making becomes ever-more critical,” he concludes. “By grounding our strategic choices in robust measurement science and real-world data, we ensure that our innovations not only push the boundaries of quantum technology but also deliver meaningful impact across industry and society.”

Further reading

Deep Lall et al. 2025 “A review and collection of metrics and benchmarks for quantum computers: definitions, methodologies and software” arXiv:2502.06717 (https://arxiv.org/abs/2502.06717)

The headline take from NQCC

Quantum computing technology has reached the stage where a number of methods for performance characterization are backed by a large body of real-world implementation and use, as well as by theoretical proofs. These mature benchmarking methods will benefit from commonly agreed-upon approaches that are the only way to fairly, unambiguously and objectively benchmark quantum computers from different manufacturers.

“Performance benchmarks are a fundamental enabler of technology innovation in quantum computing,” explains Konstantinos Georgopoulos, who heads up the NQCC’s quantum applications team and is responsible for the centre’s liaison with the NPL benchmarking consortium. “How do we understand performance? How do we compare capabilities? And, of course, what are the metrics that help us to do that? These are the leading questions we addressed through the course of this study.”

If the importance of benchmarking is a given, so too is collaboration and the need to bring research and industry stakeholders together from across the quantum ecosystem. “I think that’s what we achieved here,” says Georgopoulos. “The long list of institutions and experts who contributed their perspectives on quantum computing was crucial to the success of this project. What we’ve ended up with are better metrics, better benchmarks, and a better collective understanding to push forward with technology translation that aligns with end-user requirements across diverse industry settings.”

End note: NPL retains copyright on this article.

The post Performance metrics and benchmarks point the way to practical quantum advantage appeared first on Physics World.

Quantum computing and AI join forces for particle physics

23 octobre 2025 à 15:57

This episode of the Physics World Weekly podcast explores how quantum computing and artificial intelligence can be combined to help physicists search for rare interactions in data from an upgraded Large Hadron Collider.

My guest is Javier Toledo-Marín, and we spoke at the Perimeter Institute in Waterloo, Canada. As well as having an appointment at Perimeter, Toledo-Marín is also associated with the TRIUMF accelerator centre in Vancouver.

Toledo-Marín and colleagues have recently published a paper called “Conditioned quantum-assisted deep generative surrogate for particle–calorimeter interactions”.


This podcast is supported by Delft Circuits.

As gate-based quantum computing continues to scale, Delft Circuits provides the i/o solutions that make it possible.

The post Quantum computing and AI join forces for particle physics appeared first on Physics World.

Editorial | Is the Video Game Press Dead?

By: Sadako
23 octobre 2025 à 14:11

If you grew up in the 80s and 90s, the mere sight of a video game magazine most likely still stirs something in you. Under this pessimistic title, I needed to give you my take on the slow and painful death of the gaming press. In this editorial, I would like to lay out what, in my view, explains the decline of a medium that no longer has much of interest to offer, and why we have ended up here. Happy reading!

The death of the gaming press

Preamble

I created Playerone.tv in December 2009. That feels both distant and recent. On some points it might as well be the last century, so much have things changed. Back then there was a genuine war between the "new video game sites", all trying to emerge from anonymity. It was a rather strange kind of competition, too.

From 2009 to 2015, though, the press was THE central hub for news about video games, consoles and PC. Print magazines were logically disappearing one by one, lacking both interest and speed. Games were released by the dozen from February to April and again from September to November. With only a handful of writers we barely knew where to turn. Publishers went into "seduction" mode as soon as a site's traffic figures looked good enough to promote their projects: press releases galore, invitations to beta sessions, press-only evenings, games received in advance so they could be reviewed in good conditions, and so on.

Are players and readers to blame?

The first reason for the decline lies with players and readers. If sites are closing one after another, it is above all because people no longer click on articles, and hardly search the web any more. Why? In my view there are two major reasons.

Reason No. 1: adverts killed the internet
I am the first to be annoyed by it: it is sometimes impossible to read an article without it being drowned out, often COMPLETELY covered, by adverts. I have been using an ad blocker for years. The more readers installed one, the more the big sites multiplied ever more intrusive formats. The snake bit its own tail until it ate itself.

A visitor with an ad blocker means zero cents going into the site's pocket. In a way it is deserved, since many really did go too far! But when your site only runs a few non-intrusive formats, you still pay for the excesses of others.

Reason No. 2: articles that say nothing, and clickbait
To maximize profits on an article, substance has steadily been stripped away. A catchy headline (sometimes misleading or with no connection to the article's subject), three or four lines of text, and off it goes! You often feel like you have learned nothing.

On this point, publishers and console makers are also at fault, no longer providing enough detail in their announcements to make anything interesting out of them. With most announcements fitting in a tweet, it is hard to extract any substance. Where we used to get full press releases, information today is completely fragmented. It arrives drip by drip: a release date, a teaser for the trailer, an announcement of the trailer's publication date, trailers that say very little, and gameplay that reveals almost nothing. It is ultra-superficial and therefore empty of anything a genuine piece of journalism could work with.

Beyond being a journalist, I am also, and above all, a player. And just like you, I grew tired of visiting video game news sites for those two reasons. I no longer read any in France, and only a few in the USA. With Adblock, of course…

Is the media to blame?

To pay freelancers and/or staff writers, you need money. Where does a site's money come from? Advertising. After the previous paragraph, I imagine you have already understood why the video game press is practically dead. Between ad blockers turning visitors into financial ghosts and players' lack of interest in the press, it is only logical to see so many outlets disappear.

And once players have lost the habit of reading and of researching a game on their own, it seems impossible to me to turn back. Social networks have completely replaced the place where news gets revealed.

In 15 years, Facebook, YouTube, TikTok and Instagram have killed gaming news websites. So have their algorithms. Since content has to be ever more sensational to be read and shared, this has contributed to the decline in media quality. You only have to look at what works best in France to be convinced…

Are console makers and publishers to blame?

Another point that seems important to me: it is impossible to do real journalistic work in the video game sector. It never has been possible, and it almost certainly never will be. The reason? Publishers and console makers want to control their communication from A to Z. Investigation is therefore locked out on every side. If you are lucky enough to access a game before its release, it is the NDA (non-disclosure agreement) that muzzles you.

If you want information on a game in development, the only answer you will get, at best, is: "we will communicate when we are ready". Over my many interviews of the past 16 years, I have literally NEVER come back with a scoop. Even when "grilling" the developers, publishers and manufacturers I met, they give away NOTHING. It is often not for lack of wanting to, but fear of punishment, since they must not say anything other than what is planned.

Since 2021, the news cycle has often been dead. Between news deserts and stretches of meagre releases, it is hard to keep a community on the edge of its seat. It is precisely because of this shortage of raw material that so many rumours and fake news stories pour in from everywhere. You still have to hit your numbers!

With the AAA games industry in crisis, the indie scene could have taken over. But indie games interest almost no one. You cannot run a shop on that. It is sad, but that is how it is.

Are influencers to blame?

What an insult when certain companies contact me and file me straight into the "influencer" box because I make videos on social networks… But let us move on. Today, influencers get all the attention. I am not saying this out of jealousy, but from observation. Their audience is so blinded by various things that it is a godsend for publishers, and for these famous influencers. A whole economy has grown up around them: affiliate links, promo codes, sponsored videos, special operations and so on. It is currently the best-paid way to cover gaming news. When it works.

But by hammering their channels with content several times a day, even if it means saying nothing new across 12 GTA 6 videos, they keep their audience at home. A closed circle of players who no longer look anywhere else. The news falls ready-cooked into the beaks of social media's infinite scrollers. No more need to go looking for yourself!

The logical evolution of an outdated model

Just like reading printed books and magazines, reading articles (I do mean articles, not 12-word "news" items) is dying. Everything happens on social networks today. And a network can go out of fashion in the space of just a few years. That would not be a problem if these platforms were not ruled by devious algorithms that push you towards sensationalism to stand out from the crowd. Having a community of 100,000 people does not guarantee that all of your followers will see your content in their feed.

In fact it is almost never the case. This has also contributed to the collapse in the quality of online content, since only a few rare types of news manage to break through the wall of your core community.

What I have noticed over 16 years, however, is not very glorious: laziness has replaced curiosity. With ultra-powerful tools for accessing information, most players simply wait for the news to fall from the sky. Flooded with content of all kinds across a string of mobile apps, it is now up to the media to go and find readers where they are, and no longer the other way around.

The arrival of AI: the final nail in the coffin!

Another example of the worst that technology can do in terms of laziness? The arrival of AI. If Grok says it is true, then it must be true! I am even seeing more and more content written by artificial intelligence. Will the future of websites rest on 100% automated writing? Quite probably, and for me that is the final nail in the coffin of a press that has already been brain-dead for quite a few years.

I am not trying to play the grumpy old man: AI can have some nice time-saving advantages. But blindly trusting a robot is not the best proof of humanity's evolution, as far as I am concerned…

The consequences of all this

1,200 journalists gone in two years. That is what the founder of Press Engine, a platform I am also registered on, reveals. And according to him, the worst is not yet over. He also attributes this to readers' loss of interest with the rise of live-service games, which do not need the press to survive.

The problem is that no video game needs the press to survive any more. Only a few diehards remain who like to read, dig around and compare opinions from different outlets. The general public, for its part, is content to open its beak and peck at whatever lands under its nose. Once again, this is not a criticism, just the reality of the situation.

So what becomes of Playerone.tv?

For several months now I have no longer been writing news articles on the site. I keep that for Facebook and Instagram, which reaches far more people. I reserve Playerone.tv for written game reviews and for expressing myself through editorials like this one. I made that decision last year, after several years of watching the statistics fall.

The dumbing-down of gaming news

Unfortunately, that is all that works any more, at least on a large scale. When you look at the YouTubers getting the most clicks, and the rare websites still functioning, it is often an insult to human intelligence. The tone is infantilizing. The standard of writing is very low. PR talking points are everywhere. Bargain-basement news, in short. I have always refused to bow to that personally. And too often on Facebook and YouTube I see people so used to this mediocrity that they no longer understand anything at all.

You would really have to be blind not to see that everything has changed very quickly since 2020. Between disrespect, hateful behaviour and the other joys of social networks, you would have to be mad to keep loving this job of gaming journalist.

Well, you can call me mad! Despite all these transformations, I still love covering video game news. And above all, talking with all my followers, whether on Facebook or YouTube, Playerone.tv's two biggest digital successes!

What is left to save of the video game press?

In my humble opinion: not much. Perhaps nothing at all. The damage is done. Many believed that charging readers for subscriptions would keep them in the fold while maintaining a decent financial footing, but it did not work. At least not for long. And put yourself in a player's shoes: who would want to pay for five to ten subscriptions a month simply to read good articles? All work deserves pay, certainly, but there are limits to empathy.

In short, and in my eyes, the video game press is living out its final years. Is that a good thing? Is it a bad thing? I do not know, but it is, in any case, the reality.

The article Editorial | Is the Video Game Press Dead? appeared first on PLAYERONE.TV.

Master’s programme takes microelectronics in new directions

23 octobre 2025 à 10:28
Professor Zhao Jiong, who leads a Master’s programme in microelectronics technology and materials, has been recognized for his pioneering research in 2D ferroelectrics (Courtesy: PolyU)

The microelectronics sector is known for its relentless drive for innovation, continually delivering performance and efficiency gains within ever more compact form factors. Anyone aspiring to build a career in this fast-moving field needs not just a thorough grounding in current tools and techniques, but also an understanding of the next-generation materials and structures that will propel future progress.

That’s the premise behind a Master’s programme in microelectronics technology and materials at the Hong Kong Polytechnic University (PolyU). Delivered by the Department for Applied Physics, globally recognized for its pioneering research in technologies such as two-dimensional materials, nanoelectronics and artificial intelligence, the aim is to provide students with both the fundamental knowledge and practical skills they need to kickstart their professional future – whether they choose to pursue further research or to find a job in industry.

“The programme provides students with all the key skills they need to work in microelectronics, such as circuit design, materials processing and failure analysis,” says programme leader Professor Zhao Jiong, whose research focuses on 2D ferroelectrics. “But they also have direct access to more than 20 faculty members who are actively investigating novel materials and structures that go beyond silicon-based technologies.”

The course is also unusual in combining electronics engineering with materials science, giving students a thorough understanding of the underlying semiconductors and device structures as well as their use in mass-produced integrated circuits. That fundamental knowledge is reinforced through regular experimental work, providing the students with hands-on experience of fabricating and testing electronic devices. “Our cleanroom laboratory is equipped with many different instruments for microfabrication, including thin-film deposition, etching and photolithography, as well as advanced characterization tools for understanding their operating mechanisms and evaluating their performance,” adds Zhao.

In a module focusing on thin-film materials, for example, students gain valuable experience from practical sessions that enable them to operate the equipment for different growth techniques, such as sputtering, molecular beam epitaxy, and both physical and chemical vapour deposition. In another module on materials analysis and characterization, the students are tasked with analysing the layered structure of a standard computer chip by making cross-sections that can be studied with a scanning electron microscope.

During the programme students have access to a cleanroom laboratory that gives them hands-on experience of using advanced tools for fabricating and characterizing electronic materials and structures (Courtesy: PolyU)

That practical experience extends to circuit design, with students learning how to use state-of-the-art software tools for configuring, simulating and analysing complex electronic layouts. “Through this experimental work students gain the technical skills they need to design and fabricate integrated circuits, and to optimize their performance and reliability through techniques like failure analysis,” says Professor Dai Jiyan, PolyU Associate Dean of Students, who also teaches the module on thin-film materials. “This hands-on experience helps to prepare them for working in a manufacturing facility or for continuing their studies at the PhD level.”

Also integrated into the teaching programme is the use of artificial intelligence to assist key tasks, such as defect analysis, materials selection and image processing. Indeed, PolyU has established a joint laboratory with Huawei to investigate possible applications of AI tools in electronic design, providing the students with early exposure to emerging computational methods that are likely to shape the future of the microelectronics industry. “One of our key characteristics is that we embed AI into our teaching and laboratory work,” says Dai. “Two of the modules are directly related to AI, while the joint lab with Huawei helps students to experiment with using AI in circuit design.”

Now in its third year, the Master’s programme was designed in collaboration with Hong Kong’s Applied Science and Technology Research Institute (ASTRI), established in 2000 to enhance the competitiveness of the region through the use of advanced technologies. Researchers at PolyU already pursue joint projects with ASTRI in areas like chip design, microfabrication and failure analysis. As part of the programme, these collaborators are often invited to give guest lectures or to guide the laboratory work. “Sometimes they even provide some specialized instruments for the students to use in their experiments,” says Zhao. “We really benefit from this collaboration.”

Once primed with the knowledge and experience from the taught modules, the students have the opportunity to work alongside one of the faculty members on a short research project. They can choose whether to focus on a topic that is relevant to present-day manufacturing, such as materials processing or advanced packaging technologies, or to explore the potential of emerging materials and devices across applications ranging from solar cells and microfluidics to next-generation memories and neuromorphic computing.

“It’s very interesting for the students to get involved in these projects,” says Zhao. “They learn more about the research process, which can make them more confident to take their studies to the next level. All of our faculty members are engaged in important work, and we can guide the students towards a future research field if that’s what they are interested in.”

There are also plenty of progression opportunities for those who are more interested in pursuing a career in industry. As well as providing support and advice through its joint lab in AI, Huawei arranges visits to its manufacturing facilities and offers some internships to interested students. PolyU also organizes visits to Hong Kong’s Science Park, home to multinational companies such as Infineon as well as a large number of start-up companies in the microelectronics sector. Some of these might support a student’s research project, or offer an internship in areas such as circuit design or microfabrication.

The international outlook offered by PolyU has made the Master’s programme particularly appealing to students from mainland China, but Zhao and Dai believe that the forward-looking ethos of the course should make it an appealing option for graduates across Asia and beyond. “Through the programme, the students gain knowledge about all aspects of the microelectronics industry, and how it is likely to evolve in the future,” says Dai. “The knowledge and technical skills gained by the students offer them a competitive edge for building their future career, whether they want to find a job in industry or to continue their research studies.”

The post Master’s programme takes microelectronics in new directions appeared first on Physics World.

Resonant laser ablation selectively destroys pancreatic tumours

23 octobre 2025 à 10:00

Pancreatic ductal adenocarcinoma (PDAC), the most common type of pancreatic cancer, is an aggressive tumour with a poor prognosis. Surgery remains the only potential cure, but is feasible in just 10–15% of cases. A team headed up at Sichuan University in China has now developed a selective laser ablation technique designed to target PDAC while leaving healthy pancreatic tissue intact.

Thermal ablation techniques, such as radiofrequency, microwave or laser ablation, could provide a treatment option for patients with locally advanced PDAC, but existing methods risk damaging surrounding blood vessels and healthy pancreatic tissues. The new approach, described in Optica, uses the molecular fingerprint of pancreatic tumours to enable selective ablation.

The technique exploits the fact that PDAC tissue contains a large amount of collagen compared with healthy pancreatic tissue. Amide-I collagen fibres exhibit a strong absorption peak at 6.1 µm, so the researchers surmised that tuning the treatment laser to this resonant wavelength could enable efficient tumour ablation with minimal collateral thermal damage. They therefore designed a femtosecond pulsed laser that can deliver 6.1 µm pulses with a power of more than 1 W.
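A quick unit conversion shows why that wavelength is a natural choice: 6.1 µm corresponds to a wavenumber of roughly 1640 cm⁻¹, squarely within the amide-I protein absorption band, and to a photon energy of about 0.2 eV. The short sketch below performs the conversion for the resonant wavelength and for the two non-resonant control wavelengths used in the study, using standard physical constants.

# Wavelength -> wavenumber and photon energy for the three wavelengths discussed.
H = 6.62607015e-34    # Planck constant (J s)
C = 2.99792458e8      # speed of light (m/s)
EV = 1.602176634e-19  # joules per electronvolt

for lam_um in (1.0, 3.0, 6.1):
    lam = lam_um * 1e-6                  # wavelength in metres
    wavenumber_cm = 1.0 / (lam * 100.0)  # cm^-1
    energy_ev = H * C / lam / EV         # photon energy in eV
    print(f"{lam_um:4.1f} um -> {wavenumber_cm:7.0f} cm^-1, {energy_ev:.3f} eV")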

Resonant wavelength Fourier-transform infrared spectra of PDAC (blue) and the laser (red). (Courtesy: Houkun Liang, Sichuan University)

“We developed a mid-infrared femtosecond laser system for the selective tissue ablation experiment,” says team leader Houkun Liang. “The system is tunable in the wavelength range of 5 to 11 µm, aligning with various molecular fingerprint absorption peaks such as amide proteins, cholesteryl ester, hydroxyapatite and so on.”

Liang and colleagues first examined the ablation efficiency of three different laser wavelengths on two types of pancreatic cancer cells. Compared with non-resonant wavelengths of 1 and 3 µm, the collagen-resonant 6.1 µm laser was far more effective in killing pancreatic cancer cells, reducing cell viability to ranges of 0.27–0.32 and 0.37–0.38, at 0 and 24 h, respectively.

The team observed similar results in experiments on ectopic PDAC tumours cultured on the backs of mice. Irradiation at 6.1 µm led to five to 10 times deeper tumour ablation than seen for the non-resonant wavelengths (despite using a laser power of 5 W for 1 µm ablation and just 500 mW for 6.1 and 3 µm), indicating that 6.1 µm is the optimal wavelength for PDAC ablation surgery.

To validate the feasibility and safety of 6.1 µm laser irradiation, the team used the technique to treat PDAC tumours on live mice. Nine days after ablation, the tumour growth rate in treated mice was significantly suppressed, with an average tumour volume of 35.3 mm3. In contrast, tumour volume in a control group of untreated mice reached an average of 292.7 mm3, roughly eight times the size of the ablated tumours. No adverse symptoms were observed following the treatment.

Clinical potential

The researchers also used 6.1 µm laser irradiation to ablate pancreatic tissue samples (including normal tissue and PDAC) from 13 patients undergoing surgical resection. They used a laser power of 1 W and four scanning speeds (0.5, 1, 2 and 3 mm/s) with 10 ablation passes, examining 20 to 40 samples for each parameter.

At the slower scanning speeds, excessive energy accumulation resulted in comparable ablation depths. At speeds of 2 or 3 mm/s, however, the average ablation depths in PDAC samples were 2.30 and 2.57 times greater than in normal pancreatic tissue, respectively, demonstrating the sought-after selective ablation. At 3 mm/s, for example, the ablation depth in tumour was 1659.09±405.97 µm, compared with 702.5±298.32 µm in normal pancreas.

The findings show that by carefully controlling the laser power, scanning speed and number of passes, near-complete ablation of PDACs can be achieved, with minimal damage to surrounding healthy tissues.

To further investigate the clinical potential of this technique, the researchers developed an anti-resonant hollow-core fibre (AR-HCF) that can deliver high-power 6.1 µm laser pulses deep inside the human body. The fibre has a core diameter of approximately 113 µm and low bending losses at radii under 10 cm. The researchers used the AR-HCF to perform 6.1 µm laser ablation of PDAC and normal pancreas samples. The ablation depth in PDAC was greater than in normal pancreas, confirming the selective ablation properties.

“We are working together with a company to make a medical-grade fibre system to deliver the mid-infrared femtosecond laser. It consists of AR-HCF to transmit mid-infrared femtosecond pulses, a puncture needle and a fibre lens to focus the light and prevent liquid tissue getting into the fibre,” explains Liang. “We are also making efforts to integrate an imaging unit into the fibre delivery system, which will enable real-time monitoring and precise surgical guidance.”

Next, the researchers aim to further optimize the laser parameters and delivery systems to improve ablation efficiency and stability. They also plan to explore the applicability of selective laser ablation to other tumour types with distinct molecular signatures, and to conduct larger-scale animal studies to verify long-term safety and therapeutic outcomes.

“Before this technology can be used for clinical applications, highly comprehensive biological safety assessments are necessary,” Liang emphasizes. “Designing well-structured clinical trials to assess efficacy and risks, as well as navigating regulatory and ethical approvals, will be critical steps toward translation. There is a long way to go.”

The post Resonant laser ablation selectively destroys pancreatic tumours appeared first on Physics World.

The Tour de France returns to Haute-Savoie

23 octobre 2025 à 08:29
The Tour de France returns to Haute-Savoie, shining a spotlight on the region's roads. Tour de France director Christian Prud'homme is pleased with the planned route. An extremely steep climb is expected after two weeks of racing, which could shake up the general classification.

Doorway states spotted in graphene-based materials

22 octobre 2025 à 15:51

Low-energy electrons escape from some materials via distinct “doorway” states, according to a study done by physicists at Austria’s Vienna Institute of Technology. The team studied graphene-based materials and found that the nature of the doorway states depended on the number of graphene layers in the sample.

Low-energy electron (LEE) emission from solids is used across a range of materials analysis and processing applications including scanning electron microscopy and electron-beam induced deposition. However, the precise physics of the emission process is not well understood.

Electrons are ejected from a material when a beam of electrons is fired at its surface. Some of these incident electrons will impart energy to electrons residing in the material, causing some resident electrons to be emitted from the surface. In the simplest model, the minimum energy needed for this LEE emission is the electron binding energy of the material.

Frog in a box

In this new study, however, researchers have shown that exceeding the binding energy is not enough for LEE emission from graphene-based materials. Not only does the electron need this minimum energy, it must also be in a specific doorway state or it is unlikely to escape. The team compare this phenomenon to the predicament of a frog in a cardboard box with a window: not only must the frog hop high enough, it must also begin its hop from a position that sends it through the window.

For most materials, the energy spectrum of LEE electrons is featureless. However, it was known that graphite’s spectrum has an “X state” at about 3.3 eV, where emission is enhanced. This state could be related to doorway states.

To search for doorway states, the Vienna team studied LEE emission from graphite as well as from single-layer and bi-layer graphene. Graphene is a sheet of carbon just one atom thick. Sheets can stick together via the relatively weak Van der Waals force to create multilayer graphene – and ultimately graphite, which comprises a large number of layers.

Because electrons are mostly confined within the graphene layers, the electronic states of single-layer, bi-layer and multi-layer graphene are broadly similar. As a result, it was expected that these materials would have similar LEE emission spectra. However, the Vienna team found a surprising difference.

Emission and reflection

The team made their discovery by firing a beam of relatively low energy electrons (173 eV) incident at 60° to the surface of single-layer and bi-layer graphene as well as graphite. The scattered electrons are then detected at the same angle of reflection. Meanwhile, a second detector is pointed normal to the surface to capture any emitted electrons. In quantum mechanics electrons are indistinguishable, so the modifiers scattered and emitted are illustrative, rather than precise.

The team looked for coincident signals in both detectors and plotted their results as a function of energy in 2D “heat maps”. These plots revealed that bi-layer graphene and graphite each had doorway states – but at different energies. However, single-layer graphene did not appear to have any doorway states. By combining experiments with calculations, the team showed that doorway states emerge above a certain number of layers, and that graphite’s X state can be attributed in part to a doorway state that appears at about five layers of graphene.
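As an illustration of how such a measurement can be analysed – a schematic sketch with simulated numbers, not the Vienna group’s analysis code – the Python snippet below bins coincidence events, each pairing one scattered-electron energy with one emitted-electron energy, into a 2D histogram. A doorway state shows up as a band of enhanced counts at a fixed emitted-electron energy; here the extra population is placed near 3.3 eV to mimic graphite’s X state.

# Schematic coincidence "heat map" built from simulated events (not real data).
import numpy as np

rng = np.random.default_rng(0)
n_background = 50_000
n_doorway = 5_000

# Featureless background: broad scattered-energy distribution and an emitted-energy
# spectrum falling off roughly exponentially above threshold.
e_scattered = rng.uniform(100.0, 170.0, n_background)
e_emitted = rng.exponential(2.0, n_background)

# Extra "doorway" population concentrated near a fixed emitted energy of 3.3 eV.
e_scattered = np.concatenate([e_scattered, rng.uniform(100.0, 170.0, n_doorway)])
e_emitted = np.concatenate([e_emitted, rng.normal(3.3, 0.2, n_doorway)])

# 2D coincidence histogram: rows = emitted energy, columns = scattered energy.
heat_map, emit_edges, _ = np.histogram2d(
    e_emitted, e_scattered, bins=(40, 35), range=[[0, 8], [100, 170]])

# Collapse onto the emitted-energy axis; skip the first 2 eV, which are dominated
# by the smooth background, then locate the doorway-like peak.
centers = 0.5 * (emit_edges[:-1] + emit_edges[1:])
projection = heat_map.sum(axis=1)
skip = 10
peak = centers[skip + np.argmax(projection[skip:])]
print(f"Enhanced emission near {peak:.1f} eV (doorway-like feature)")

In this toy picture the signature is an excess of coincidences along a horizontal band at fixed emitted-electron energy, which is the kind of feature the heat maps are designed to reveal.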

“For the first time, we’ve shown that the shape of the electron spectrum depends not only on the material itself, but crucially on whether and where such resonant doorway states exist,” explains Anna Niggas at the Vienna Institute of Technology.

As well as providing important insights in how the electronic properties of graphene morph into the properties of graphite, the team says that their research could also shed light on the properties of other layered materials.

The research is described in Physical Review Letters.

The post Doorway states spotted in graphene-based materials appeared first on Physics World.
