From inflationary pressures and shifting interest rates to supply chain challenges and intensifying great-power competition, today’s macroeconomic and geopolitical forces are reshaping the future of space investment. For investors and businesses, navigating this environment means balancing capital market dynamics, technology cycles, and an increasingly complex landscape of export controls, trade restrictions, and government influence.
SAN FRANCISCO – French startup U-Space has raised €24 million ($27.8 million) to expand its role in the small satellite constellation market. With funding from the Series A round, Toulouse-based […]
“Global collaborations for European economic resilience” is the theme of SEMICON Europa 2025. The event comes to Munich, Germany, on 18–21 November and is expected to attract 25,000 semiconductor professionals, who will enjoy presentations from over 200 speakers.
The TechARENA portion of the event will cover a wide range of technology-related issues including new materials, future computing paradigms and the development of hi-tech skills in the European workforce. There will also be an Executive Forum, featuring leaders in industry and government and covering topics including silicon geopolitics and the use of artificial intelligence in semiconductor manufacturing.
SEMICON Europa will be held at the Messe München, where it will feature a huge exhibition with over 500 exhibitors from around the world. The exhibition is spread across three halls, and here are some of the companies and product innovations to look out for on the show floor.
Accelerating the future of electro-photonic integration with SmarAct
As the boundaries between electronic and photonic technologies continue to blur, the semiconductor industry faces a growing challenge: how to test and align increasingly complex electro-photonic chip architectures efficiently, precisely, and at scale. At SEMICON Europa 2025, SmarAct will address this challenge head-on with its latest innovation – Fast Scan Align. This is a high-speed and high-precision alignment solution that redefines the limits of testing and packaging for integrated photonics.
Fast Scan Align: SmarAct’s high-speed and high-precision alignment solution redefines the limits of testing and packaging for integrated photonics. (Courtesy: SmarAct)
In the emerging era of heterogeneous integration, electronic and photonic components must be aligned and interconnected with sub-micrometre accuracy. Traditional positioning systems often struggle to deliver both speed and precision, especially when dealing with the delicate coupling between optical and electrical domains. SmarAct’s Fast Scan Align solution bridges this gap by combining modular motion platforms, real-time feedback control, and advanced metrology into one integrated system.
At its core, Fast Scan Align leverages SmarAct’s electromagnetic and piezo-driven positioning stages, which are capable of nanometre-resolution motion in multiple degrees of freedom. Fast Scan Align’s modular architecture allows users to configure systems tailored to their application – from wafer-level testing to fibre-to-chip alignment with active optical coupling. Integrated sensors and intelligent algorithms enable scanning and alignment routines that drastically reduce setup time while improving repeatability and process stability.
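To make the idea of an automated scan-and-align routine concrete, here is a minimal, purely illustrative sketch of an active optical-coupling search: a coarse raster scan to find the approximate power peak, followed by a hill climb that refines the position. The `move_to` and `read_power` interfaces and the Gaussian test profile are hypothetical stand-ins, not SmarAct's API or algorithm.

```python
# Illustrative active-alignment sketch (hypothetical interfaces, not SmarAct's API).
import itertools
import math

def coarse_raster(move_to, read_power, half_span=20e-6, step=2e-6):
    """Raster-scan a square region and return the position of peak coupling."""
    best, best_pos = -1.0, (0.0, 0.0)
    n = int(half_span / step)
    for i, j in itertools.product(range(-n, n + 1), repeat=2):
        pos = (i * step, j * step)
        move_to(*pos)
        p = read_power()
        if p > best:
            best, best_pos = p, pos
    return best_pos

def fine_hill_climb(move_to, read_power, start, step=0.2e-6, iters=60):
    """Greedy hill climb on optical power; halve the step when no neighbour improves."""
    x, y = start
    for _ in range(iters):
        candidates = [(x + dx, y + dy) for dx in (-step, 0.0, step)
                                       for dy in (-step, 0.0, step)]
        powers = []
        for cx, cy in candidates:
            move_to(cx, cy)
            powers.append(read_power())
        k = max(range(len(powers)), key=powers.__getitem__)
        if candidates[k] == (x, y):
            step /= 2            # converged at this resolution; refine further
        x, y = candidates[k]
    move_to(x, y)
    return x, y

# Toy "device under test": a Gaussian coupling profile centred off-axis.
_pos = [0.0, 0.0]
def move_to(x, y): _pos[0], _pos[1] = x, y
def read_power():
    return math.exp(-((_pos[0] - 3e-6) ** 2 + (_pos[1] + 1e-6) ** 2) / (2 * (2e-6) ** 2))

peak = coarse_raster(move_to, read_power)
print(fine_hill_climb(move_to, read_power, peak))   # converges near (3e-6, -1e-6)
```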
Fast Scan Align’s compact modules allow various measurement techniques to be integrated, opening up unprecedented possibilities. This flexibility has become decisive as complex electro-photonic chips reach ever-higher levels of integration.
Beyond wafer-level testing and packaging, extremely precise wafer positioning is more crucial than ever for the highly integrated chips of the future. SmarAct’s PICOSCALE interferometer addresses this challenge by delivering picometre-level displacement measurements directly at the point of interest.
When combined with SmarAct’s precision wafer stages, the PICOSCALE interferometer ensures highly accurate motion tracking and closed-loop control during dynamic alignment processes. This synergy between motion and metrology gives users unprecedented insight into the mechanical and optical behaviour of their devices – which is a critical advantage for high-yield testing of photonic and optoelectronic wafers.
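As a rough illustration of what such closed-loop control involves, the sketch below runs a textbook PID loop in which an interferometer reading is fed back into the stage command. The gains, the toy stage model and the interfaces are invented for illustration; this is not the PICOSCALE control scheme.

```python
# Textbook PID position loop (illustrative gains and a toy stage model).
def pid_step(setpoint, reading, state, kp=0.6, ki=0.1, kd=5e-4, dt=1e-3):
    """One PID update; `state` carries (integral, previous_error)."""
    integral, prev_err = state
    err = setpoint - reading
    integral += err * dt
    deriv = (err - prev_err) / dt
    return kp * err + ki * integral + kd * deriv, (integral, err)

# Each control tick: read the interferometer, update the stage command.
state, position = (0.0, 0.0), 0.0         # toy stage: integrates the command
for _ in range(200):
    command, state = pid_step(setpoint=1.0e-6, reading=position, state=state)
    position += command                   # stand-in for the real stage response
print(f"settled at {position:.3e} m")     # converges on the 1 µm setpoint
```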
Visitors to SEMICON Europa will also experience how all of SmarAct’s products – from motion and metrology components to modular systems and up to turn-key solutions – integrate seamlessly, offering intuitive operation, full automation capability, and compatibility with laboratory and production environments alike.
Optimized pressure monitoring: Efficient workflows with Thyracont’s VD800 digital compact vacuum meters
Thyracont Vacuum Instruments will be showcasing its precision vacuum metrology systems in exhibition hall C1. Made in Germany, the company’s broad portfolio combines diverse measurement technologies – including piezo, Pirani, capacitive, cold cathode, and hot cathode – to deliver reliable results across a pressure range from 2000 to 3 × 10⁻¹¹ mbar.
VD800: Thyracont’s series combines high accuracy with a highly intuitive user interface, defining the next generation of compact vacuum meters. (Courtesy: Thyracont)
Front-and-centre at SEMICON Europa will be Thyracont’s new series of VD800 compact vacuum meters. These instruments provide precise, on-site pressure monitoring in industrial and research environments. Featuring a direct pressure display and real-time pressure graphs, the VD800 series is ideal for service and maintenance tasks, laboratory applications, and test setups.
The VD800 series combines high accuracy with a highly intuitive user interface, presenting real-time measurement values, pressure diagrams, and minimum and maximum pressures – all at a glance. The VD800’s 4+1 membrane keypad ensures quick access to all functions, while USB-C and optional Bluetooth LE connectivity deliver seamless data readout and export. The VD800’s large internal data logger can store over 10 million measured values with their RTC data, with each measurement series saved as a separate file.
Data sampling rates can be set from 20 ms to 60 s to achieve dynamic pressure tracking or long-term measurements. Leak rates can be measured directly by monitoring the rise in pressure in the vacuum system. Intelligent energy management gives the meters extended battery life and longer operation times. Battery charging is done conveniently via USB-C.
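The pressure-rise measurement mentioned above follows the standard rate-of-rise formula Q = V·Δp/Δt. A minimal worked example, with illustrative numbers rather than instrument specifications:

```python
# Rate-of-rise leak estimate: Q = V * dp/dt (illustrative numbers only).
def leak_rate_mbar_l_per_s(volume_l, p_start_mbar, p_end_mbar, dt_s):
    """Standard pressure-rise leak-rate formula for an isolated vacuum volume."""
    return volume_l * (p_end_mbar - p_start_mbar) / dt_s

# Example: a 50 l chamber rising from 1.0e-2 to 2.5e-2 mbar over 10 minutes.
q = leak_rate_mbar_l_per_s(50.0, 1.0e-2, 2.5e-2, 600.0)
print(f"leak rate ≈ {q:.2e} mbar·l/s")   # ≈ 1.25e-3 mbar·l/s
```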
The vacuum meters are available in several different sensor configurations, making them adaptable to a wide range of different uses. Model VD810 integrates a piezo ceramic sensor for making gas-type-independent measurements for rough vacuum applications. This sensor is insensitive to contamination, making it suitable for rough industrial environments. The VD810 measures absolute pressure from 2000 to 1 mbar and relative pressure from −1060 to +1200 mbar.
Model VD850 integrates a piezo/Pirani combination sensor, which delivers high resolution and accuracy in the rough and fine vacuum ranges. Optimized temperature compensation ensures stable measurements in the absolute pressure range from 1200 to 5 × 10⁻⁵ mbar and in the relative pressure range from −1060 to +340 mbar.
The model VD800 is a standalone meter designed for use with Thyracont’s USB-C vacuum transducers, which are available in two models. The VSRUSB USB-C transducer is a piezo/Pirani combination sensor that measures absolute pressure in the 2000 to 5 × 10⁻⁵ mbar range. The other is the VSCUSB USB-C transducer, which measures absolute pressures from 2000 down to 1 mbar and has a relative pressure range from −1060 to +1200 mbar. A USB-C cable connects the transducer to the VD800 for quick and easy data retrieval. The USB-C transducers are ideal for hard-to-reach areas of vacuum systems. The transducers can be activated while a process is running, enabling continuous monitoring and improved service diagnostics.
With its blend of precision, flexibility, and ease of use, the Thyracont VD800 series defines the next generation of compact vacuum meters. The devices’ intuitive interface, extensive data capabilities, and modern connectivity make them an indispensable tool for laboratories, service engineers, and industrial operators alike.
To experience the future of vacuum metrology in Munich, visit Thyracont at SEMICON Europa hall C1, booth 752. There you will discover how the VD800 series can optimize your pressure monitoring workflows.
Looking ahead to the future of machine learning: (clockwise from top left) Jay Lee, Jimeng Sun, Pierre Gentine and Kyle Cranmer.
IOP Publishing’s Machine Learning series is the world’s first open-access journal series dedicated to the application and development of machine learning (ML) and artificial intelligence (AI) for the sciences.
The series includes Machine Learning: Science and Technology, launched in 2019, which bridges applications and advances in machine learning across the sciences. Machine Learning: Earth is dedicated to the application of ML and AI across all areas of the Earth, environmental and climate sciences; Machine Learning: Health covers the healthcare, medical, biological, clinical, and health sciences; and Machine Learning: Engineering focuses on applying AI and non-traditional machine learning to the most complex engineering challenges.
Here, the editors-in-chief (EiC) of the four journals discuss the growing importance of machine learning and their plans for the future.
Kyle Cranmer is a particle physicist and data scientist at the University of Wisconsin-Madison and is EiC of Machine Learning: Science and Technology (MLST). Pierre Gentine is a geophysicist at Columbia University and is EiC of Machine Learning: Earth. Jimeng Sun is a biophysicist at the University of Illinois at Urbana-Champaign and is EiC of Machine Learning: Health. Mechanical engineer Jay Lee is from the University of Maryland and is EiC of Machine Learning: Engineering.
To what do you attribute the huge growth over the past decade in research into, and use of, machine learning?
Kyle Cranmer (KC): It is due to a convergence of multiple factors. The initial success of deep learning was driven largely by benchmark datasets, advances in computing with graphics processing units, and some clever algorithmic tricks. Since then, we’ve seen a huge investment in powerful, easy-to-use tools that have dramatically lowered the barrier to entry and driven extraordinary progress.
Pierre Gentine (PG): Machine learning has been transforming many fields of physics: it can accelerate physics simulations, better handle diverse sources of data (multimodality), and help us make better predictions.
Jimeng Sun (JS): Over the past decade, we have seen machine learning models consistently reach — and in some cases surpass — human-level performance on real-world tasks. This is not just in benchmark datasets, but in areas that directly impact operational efficiency and accuracy, such as medical imaging interpretation, clinical documentation, and speech recognition. Once ML proved it could perform reliably at human levels, many domains recognized its potential to transform labour-intensive processes.
Jay Lee (JL): Traditionally, ML growth is based on the development of three elements: algorithms, big data, and computing. The past decade’s growth in ML research is due to the perfect storm of abundant data, powerful computing, open tools, commercial incentives, and groundbreaking discoveries—all occurring in a highly interconnected global ecosystem.
What areas of machine learning excite you the most and why?
KC: The advances in generative AI and self-supervised learning are very exciting. By generative AI, I don’t mean large language models — though those are exciting too — but probabilistic ML models that can be useful in a huge number of scientific applications. The advances in self-supervised learning also allow us to engage our imagination about potential uses of ML beyond well-understood supervised learning tasks.
PG: I am very interested in the use of ML for climate simulations and fluid dynamics simulations.
JS: The emergence of agentic systems in healthcare — AI systems that can reason, plan, and interact with humans to accomplish complex goals. A compelling example is in clinical trial workflow optimization. An agentic AI could help coordinate protocol development, automatically identify eligible patients, monitor recruitment progress, and even suggest adaptive changes to trial design based on interim data. This isn’t about replacing human judgment — it’s about creating intelligent collaborators that amplify expertise, improve efficiency, and ultimately accelerate the path from research to patient benefit.
JL: One exciting area is generative and multimodal ML — integrating text, images, video, and more — which is transforming human–AI interaction, robotics, and autonomous systems. Equally exciting is applying ML to non-traditional domains like semiconductor fabs, smart grids, and electric vehicles, where complex engineering systems demand new kinds of intelligence.
What vision do you have for your journal in the coming years?
KC: The need for a venue to propagate advances in AI/ML in the sciences is clear. The large AI conferences are under stress, and their review system is designed to be a filter, not a mechanism to ensure quality, improve clarity and disseminate progress. The large AI conferences also aren’t very welcoming to user-inspired research, often casting that work as purely applied. Similarly, innovation in AI/ML often takes a back seat in physics journals, which slows the propagation of those ideas to other fields. My vision for MLST is to fill this gap and nurture the community that embraces AI/ML research inspired by the physical sciences.
PG: I hope we can demonstrate that machine learning is more than just a nice tool: it can play a fundamental role in physics and the Earth sciences, especially when it comes to better simulating and understanding the world.
JS: I see Machine Learning: Health becoming the premier venue for rigorous ML–health research — a place where technical novelty and genuine clinical impact go hand in hand. We want to publish work that not only advances algorithms but also demonstrates clear value in improving health outcomes and healthcare delivery. Equally important, we aim to champion open and reproducible science. That means encouraging authors to share code, data, and benchmarks whenever possible, and setting high standards for transparency in methods and reporting. By doing so, we can accelerate the pace of discovery, foster trust in AI systems, and ensure that our field’s breakthroughs are accessible to — and verifiable by — the global community.
JL: Machine Learning: Engineering envisions becoming the global platform where ML meets engineering. By fostering collaboration, ensuring rigor and interpretability, and focusing on real-world impact, we aim to redefine how AI addresses humanity’s most complex engineering challenges.
The aurora borealis is usually seen near the Arctic, but solar winds and magnetic turbulence are sparking some of the best light shows in centuries throughout the US.
The dream of accessible space travel is inching closer to reality. As private companies push the boundaries of space tourism and orbital logistics, the need for strategically located and versatile spaceports is becoming increasingly critical. While established players like Cape Canaveral Space Force Station and Vandenberg Space Force Base hold significant sway, the future of […]
An investigation into the loss of a privately operated methane monitoring satellite could not identify a single root cause for the spacecraft’s failure earlier this year.
Games played under the laws of quantum mechanics dissipate less energy than their classical equivalents. This is the finding of researchers at Singapore’s Nanyang Technological University (NTU), who worked with colleagues in the UK, Austria and the US to apply the mathematics of game theory to quantum information. The researchers also found that for more complex game strategies, the quantum-classical energy difference can increase without bound, raising the possibility of a “quantum advantage” in energy dissipation.
Game theory is the field of mathematics that aims to formally understand the payoff or gains that a person or other entity (usually called an agent) will get from following a certain strategy. Concepts from game theory are often applied to studies of quantum information, especially when trying to understand whether agents who can use the laws of quantum physics can achieve a better payoff in the game.
In the latest work, which is published in Physical Review Letters, Jayne Thompson, Mile Gu and colleagues approached the problem from a different direction. Rather than focusing on differences in payoffs, they asked how much energy must be dissipated to achieve identical payoffs for games played under the laws of classical versus quantum physics. In doing so, they were guided by Landauer’s principle, an important concept in thermodynamics and information theory that states that there is a minimum energy cost to erasing a piece of information.
This Landauer minimum is known to hold for both classical and quantum systems. However, in practice systems will spend more than the minimum energy erasing memory to make space for new information, and this excess energy will be dissipated as heat. What the NTU team showed is that this extra heat dissipation can be smaller in a quantum system than in its classical counterpart.
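For reference, the bound in question, the Landauer limit on the energy cost of erasing a single bit at temperature T, is:

```latex
% Landauer limit for erasing one bit at temperature T
E_{\mathrm{erase}} \ge k_{\mathrm{B}} T \ln 2
  \approx (1.38 \times 10^{-23}\,\mathrm{J\,K^{-1}})(300\,\mathrm{K})(0.693)
  \approx 2.9 \times 10^{-21}\,\mathrm{J} \quad \text{at room temperature.}
```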
Planning for future contingencies
To understand why, consider that when a classical agent creates a strategy, it must plan for all possible future contingencies. This means it stores possibilities that never occur, wasting resources. Thompson explains this with a simple analogy. Suppose you are packing to go on a day out. Because you are not sure what the weather is going to be, you must pack items to cover all possible weather outcomes. If it’s sunny, you’d like sunglasses. If it rains, you’ll need your umbrella. But if you only end up using one of these items, you’ll have wasted space in your bag.
“It turns out that the same principle applies to information,” explains Thompson. “Depending on future outcomes, some stored information may turn out to be unnecessary – yet an agent must still maintain it to stay ready for any contingency.”
For a classical system, this can be a very wasteful process. Quantum systems, however, can use superposition to store past information more efficiently. When systems in a quantum superposition are measured, they probabilistically reveal an outcome associated with only one of the states in the superposition. Hence, while superposition can be used to store both possible pasts, upon measurement all excess information is automatically erased, “almost as if they had never stored this information at all,” Thompson explains.
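Schematically (our notation, not the paper’s), a two-contingency memory can be held in a single qubit:

```latex
% A single qubit holding both weather contingencies in superposition
|\psi\rangle = \alpha\,|\mathrm{sunny}\rangle + \beta\,|\mathrm{rain}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1.
% Measurement yields one outcome and leaves the qubit in that state alone,
% so the unused contingency never has to be erased separately.
```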
The upshot is that because information erasure has close ties to energy dissipation, this gives quantum systems an energetic advantage. “This is a fantastic result focusing on the physical aspect that many other approaches neglect,” says Vlatko Vedral, a physicist at the University of Oxford, UK who was not involved in the research.
Implications of the research
Gu and Thompson say their result could have implications for the large language models (LLMs) behind popular AI tools such as ChatGPT, as it suggests there might be theoretical advantages, from an energy consumption point of view, in using quantum computers to run them.
Another, more foundational question they hope to understand regarding LLMs is the inherent asymmetry in their behaviour. “It is likely a lot more difficult for an LLM to write a book from back cover to front cover, as opposed to in the more conventional temporal order,” Thompson notes. When considered from an information-theoretic point of view, the two tasks are equivalent, making this asymmetry somewhat surprising.
In Thompson and Gu’s view, taking waste into consideration could shed light on this asymmetry. “It is likely we have to waste more information to go in one direction over the other,” Thompson says, “and we have some tools here which could be used to analyse this”.
For Vedral, the result also has philosophical implications. If quantum agents are more optimal, he says, it “surely is telling us that the most coherent picture of the universe is the one where the agents are also quantum and not just the underlying processes that they observe”.
Complex systems model real-world behaviour that is dynamic and often unpredictable. They are challenging to simulate because of nonlinearity (small changes in conditions can lead to disproportionately large effects), many interacting variables (which make computational modelling cumbersome), and randomness (outcomes are probabilistic). Machine learning is a powerful tool for understanding complex systems: it can be used to find hidden relationships in high-dimensional data and to predict the future state of a system based on previous data.
This research develops a novel machine learning approach for complex systems that allows the user to extract important information about collective variables in the system, referred to as inherent structural variables. The researchers used a type of machine learning tool called an autoencoder to examine snapshots of how atoms are arranged in a system at any moment (called instantaneous atomic configurations). Then, they matched each snapshot to a more stable version of that structure (an inherent structure), which represents the system’s underlying shape or pattern. The inherent structural variables enable the analysis of structural transitions and the computation of high-resolution free-energy landscapes. These are detailed maps that show how a system’s energy changes as its structure or configuration changes, helping researchers understand stability, transitions, and dynamics in complex systems.
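A minimal sketch of the general workflow, compressing configurations with an autoencoder and converting a histogram of the latent variables into a free-energy estimate F = −kBT ln P, might look like the following. The architecture, sizes and training details are illustrative assumptions, not the authors’ actual model.

```python
# Minimal sketch: an autoencoder compresses atomic configurations into a few
# latent collective variables, then a histogram of the latent values gives a
# free-energy estimate F = -kB*T*ln P. All details here are illustrative.
import torch
import torch.nn as nn

class ConfigAutoencoder(nn.Module):
    def __init__(self, n_coords, n_latent=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_coords, 128), nn.ReLU(),
            nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(),
            nn.Linear(128, n_coords))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Toy training loop on random stand-in data (real input: flattened, aligned
# atomic coordinates; for Au147 that would be 147 * 3 = 441 values per frame).
model = ConfigAutoencoder(n_coords=441)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(1024, 441)            # stand-in for MD snapshots
for _ in range(100):
    recon, _ = model(frames)
    loss = nn.functional.mse_loss(recon, frames)
    opt.zero_grad(); loss.backward(); opt.step()

# Free-energy profile along one latent variable: F(z) = -kB*T*ln P(z).
with torch.no_grad():
    _, z = model(frames)
hist = torch.histc(z[:, 0], bins=50)
p = hist / hist.sum()
kBT = 1.0                                   # energies in units of kB*T
F = -kBT * torch.log(p.clamp_min(1e-12))    # valleys = stable structures
```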
The model is versatile, and the authors demonstrate how it can be applied to metal nanoclusters and protein structures. In the case of Au147 nanoclusters (well-organised structures made up of 147 gold atoms), the inherent structural variables reveal three main types of stable structures that the gold nanocluster can adopt: fcc (face-centred cubic), Dh (decahedral), and Ih (icosahedral). These structures represent different stable states that a nanocluster can switch between, and on the high-resolution free-energy landscape they appear as valleys. Moving from one valley to another isn’t easy: there are narrow paths or barriers between them, known as kinetic bottlenecks.
The researchers validated their machine learning model using Markov state models, which are mathematical tools that help analyse how a system moves between different states over time, and electron microscopy, which images atomic structures and can confirm that the predicted structures exist in the gold nanoclusters. The approach also captures non-equilibrium melting and freezing processes, offering insights into polymorph selection and metastable states. Scalability is demonstrated up to Au309 clusters.
The generality of the method is further demonstrated by applying it to the bradykinin peptide, a completely different type of system, identifying distinct structural motifs and transitions. Applying the method to a biological molecule provides further evidence that the machine learning approach is a flexible, powerful technique for studying many kinds of complex systems. This work contributes to machine learning strategies, as well as experimental and theoretical studies of complex systems, with potential applications across liquids, glasses, colloids, and biomolecules.
The Standard Model of particle physics is a very well-tested theory that describes the fundamental particles and their interactions. However, it does have several key limitations. For example, it doesn’t account for dark matter or why neutrinos have masses.
One of the main aims of experimental particle physics at the moment is therefore to search for signs of new physical phenomena beyond the Standard Model.
Finding something new like this would point us towards a better theoretical model of particle physics: one that can explain things that the Standard Model isn’t able to.
These searches often involve looking for rare or unexpected signals in high-energy particle collisions such as those at CERN’s Large Hadron Collider (LHC).
In a new paper published by the CMS collaboration, a new analysis method was used to search for new particles produced in proton–proton collisions at the LHC.
These particles would decay into two jets, but with unusual internal structure not typical of known particles like quarks or gluons.
The researchers used advanced machine learning techniques to identify jets with different substructures, applying various anomaly detection methods to maximise sensitivity to unknown signals.
Unlike traditional strategies, anomaly detection methods allow the AI models to identify anomalous patterns in the data without being provided specific simulated examples, giving them increased sensitivity to a wider range of potential new particles.
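As a concrete illustration of one such unsupervised tactic (the analysis combined several), an autoencoder can be trained on background-dominated jets so that a large reconstruction error flags an anomalous jet. The feature set, architecture and 99th-percentile working point below are illustrative assumptions, not the CMS configuration.

```python
# Reconstruction-error anomaly detection on jet substructure features.
# Everything here (features, sizes, working point) is illustrative only.
import torch
import torch.nn as nn

features = torch.randn(10000, 8)    # stand-in for jet substructure variables
ae = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3),   # bottleneck
    nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 8))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                # learn to reconstruct "ordinary" jets
    loss = nn.functional.mse_loss(ae(features), features)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    score = ((ae(features) - features) ** 2).mean(dim=1)  # anomaly score
cut = torch.quantile(score, 0.99)   # keep the 1% least background-like jets
anomalous = features[score > cut]
```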
The team didn’t find any significant deviations from the expected background. Although no new particles were found, the results enabled the team to put several new theoretical models to the test for the first time. They were also able to set upper bounds on the production rates of several hypothetical particles.
Most importantly, the study demonstrates that machine learning can significantly enhance the sensitivity of searches for new physics, offering a powerful tool for future discoveries at the LHC.
The top Democrat on the House Science Committee is demanding that NASA halt efforts to close facilities at the Goddard Space Flight Center, arguing it jeopardizes key agency missions.
Learn more about the surprisingly well-preserved fossilized dinosaur egg uncovered in South America and what it could tell us about dinosaur nesting and bird evolution.
A new shock-and-awe movie might sharpen public concern about American vulnerability to nuclear attack and defense against it. The fictional “House of Dynamite” shows United States leaders trying to respond to a single bolt-from-the-blue strike. Missile defense fails. The movie could spur real-world interest in better protection. In “House of Dynamite”, an intercontinental ballistic missile launched from the […]
One of the largest antennas in NASA’s Deep Space Network was damaged in September and may be out of service for an extended period, further straining the system.
New Mexico-based mPower Technology has started automated, high-volume production of its space-grade solar modules in New York, the company announced Nov. 11 as it ramps up to meet demand from satellite constellations.