Published: 10.12.2024

The Fusion of Expert Knowledge and Machine Learning: AI in Action at Proxima

The most important insight in machine learning (ML) over the past decade has been the extraordinary value of data and scalable computation. As Richard Sutton writes in "The Bitter Lesson", making clever use of computational resources tends to outperform expert knowledge when solving learning problems. In science and engineering, however, expert knowledge plays a much bigger role, because problems generally can't be fully formalized mathematically. Experience and intuition become more critical the closer one gets to real-world applications.

At Proxima, we want to bring these two worlds together by applying the insights of computer science to stellarator physics and engineering. We want to empower domain experts to spend their time exploring new ideas and solving problems creatively, while scalable computing works out the small details.

Here are some of the foundational ideas behind Proxima’s approach to AI for fusion.

Data enables inter-disciplinary communication

Innovation requires collaboration. Bringing some of the best physicists, engineers, and software experts on the planet onto a single team isn’t enough: we need to enable them to exchange information and extract insights from their workflows as efficiently as possible. This demands the establishment of a data-driven culture and the development of appropriate tools. These are some key elements:

  • Pareto fronts instead of single points: Fusion science and engineering are both about trade-offs. We need to find a stellarator design that achieves the level of fusion performance we need, but we also need to be able to manufacture it. For the machine to be successful, it needs to operate economically, which means it needs to be as cheap as possible. We make these compromises visible through data by optimizing for sets of comparably promising solutions, so-called Pareto fronts, allowing inter-disciplinary experts to explore the implications of each trade-off and make meaningful decisions together (a minimal sketch of Pareto-front extraction follows this list).
Figure: Scalable simulation environments allow us to explore a large number of possible stellarator designs. In this simple illustration, we extract Pareto fronts (red dots, left) to showcase the trade-off between "energy confinement" and "coil simplicity", and use interactive 3D visualization tooling to communicate results.
  • Interactive 3D visualizations: Many problems in stellarator design are inherently geometrical: the shape of the plasma boundary determines the fusion performance, and the geometry of the coils is one of the core deciding factors for manufacturability. Pareto fronts and trade-offs are best explored in spaces that are intuitive for humans, which is why we build 3D visual tooling. Data-driven methods allow us to make these visualizations interactive, by predicting the performance of previously unseen geometries in real time.
  • Extracting knowledge from engineers: Much of the knowledge in the engineering space is represented in simulations, and Proxima's push toward simulation-driven engineering makes it accessible to AI methods. However, engineers with many years of experience in a given domain also possess a lot of institutional and implicit knowledge that can't easily be written down in mathematical form. We can showcase designs and ideas to our engineers and ask for their assessment. Over time, we can build a dataset to predict how an engineer will react to new designs, empowering us to make that reaction part of our optimization process (sketched after this list).
Figure: There are many coil sets that generate a specific plasma shape. Engineers can explore the implications of changing parameters like the coil-to-plasma distance and receive immediate feedback.
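
To make the Pareto-front idea concrete, here is a minimal sketch of how a two-dimensional front can be extracted from a batch of simulated designs. The metric names mirror the illustration above; the data and the function are illustrative stand-ins, not part of Proxima's StarFinder framework.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows, assuming both
    metrics (columns) are to be maximized."""
    n = points.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # A design is dominated if some other design is at least
        # as good in every metric and strictly better in one.
        dominated = (np.all(points >= points[i], axis=1)
                     & np.any(points > points[i], axis=1))
        mask[i] = not dominated.any()
    return mask

# Hypothetical batch of designs: the two columns stand in for
# "energy confinement" and "coil simplicity".
rng = np.random.default_rng(0)
designs = rng.uniform(size=(1000, 2))
front = designs[pareto_front(designs)]
print(f"{len(front)} of {len(designs)} designs lie on the Pareto front")
```

An interactive tool can then let experts click through the points on this front and inspect the corresponding geometries.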
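
Predicting engineer reactions can likewise be prototyped with off-the-shelf tools. The sketch below assumes a hypothetical dataset of recorded assessments over simple geometric features; neither the features nor the labels reflect Proxima's actual data model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training data: each row describes a candidate coil set
# by simple geometric features (e.g. coil-to-plasma distance, maximum
# curvature, inter-coil spacing); the label is a recorded engineer
# verdict (1 = "looks buildable", 0 = "rejected").
rng = np.random.default_rng(1)
features = rng.uniform(size=(200, 3))
labels = (features[:, 0] > 0.3).astype(int)  # stand-in for real assessments

model = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:",
      cross_val_score(model, features, labels, cv=5).mean())

# Trained on enough assessments, the predicted probability of
# acceptance becomes one more objective in the optimization loop.
model.fit(features, labels)
acceptance = model.predict_proba(features[:5])[:, 1]
```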

Scalability unlocks new qualities in the search for an optimized stellarator design

In applying AI to fusion, we're not looking for a 10% improvement in performance; our objective is to fundamentally transform the way we work and reach entirely new levels. To do that, we need to think at scale. But first, we need solid foundations. Generally, one needs to go through a sequence of developments:

Prototyping → Automation → Data Collection → AI

By “prototyping,” we mean the development of key understanding, whether through hardware or software. In stellarator design, we’re able to build on decades of public stellarator research across the world, including leading work at our key partner institution, the Max Planck Institute for Plasma Physics. The know-how and simulation tools developed through that research constitute some of our prototyping. At Proxima Fusion, we’ve put a lot of effort into automating these tools with scalable cloud resources, enabling directed data collection within Proxima’s StarFinder design framework and data models. To apply AI, the previous steps must have been successful.

The following goals drive us at all stages of development:

  • Understanding the limitations and implicit knowledge of our simulation tools: Multi-fidelity surrogate models learn the relationship between the costly high-resolution models typically used for validation after optimization is finished and the simplified but much quicker simulations typically used during stellarator design. A good surrogate allows us to pair the output of low-fidelity models with historical data from high-fidelity simulation runs, predicting richer performance metrics from simpler simulations (see the sketch after this list).
  • Robust optimization: Simulation-driven optimization tends to find narrow optima: in our case, stellarator geometries that perform well if we can reproduce them precisely, but whose performance falls off quickly with only slight changes to the geometry. In reality, we need solutions that are robust against such changes, because compromises in engineering and manufacturing tolerances dictate that our final machine will not perfectly match what we optimized. Robust optimization is costly: the optimizer must not only consider a single solution but also model realistic perturbations, requiring many more simulations to be run. Scalable computing and surrogate models make that possible (a toy example follows this list).
  • Geometric ML models can learn to characterize the space of all valid stellarator designs: While searching for the right stellarator design to build, we run a huge number of simulations and therefore generate a big dataset of plausible and implausible stellarators. We make it a point to search for qualitatively different and diverse geometries, which allows us to find designs in places that the fusion community hasn't already thoroughly explored.
Figure: Scalable computational environments allow our AI approaches to explore wider areas of the design space than human-driven approaches could.
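
As a minimal illustration of the multi-fidelity idea from the first point above, the sketch below trains a Gaussian-process surrogate whose inputs are the design parameters together with the cheap low-fidelity result, and whose target is the expensive high-fidelity metric. All data and variable names are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical historical data: design parameters x, the cheap
# low-fidelity metric y_lo, and the expensive high-fidelity metric
# y_hi that was only computed for a few hundred designs.
rng = np.random.default_rng(2)
x = rng.uniform(size=(300, 4))
y_lo = x.sum(axis=1) + 0.1 * rng.normal(size=300)   # quick simulation
y_hi = x.sum(axis=1) + 0.3 * np.sin(3 * x[:, 0])    # validated result

# The surrogate learns the correction from low to high fidelity:
# its inputs are the design parameters plus the low-fidelity output,
# so new designs only ever need the cheap simulation.
inputs = np.column_stack([x, y_lo])
surrogate = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                     normalize_y=True)
surrogate.fit(inputs, y_hi)

x_new = rng.uniform(size=(1, 4))
y_lo_new = x_new.sum(axis=1)                        # cheap run only
y_hi_pred, y_std = surrogate.predict(
    np.column_stack([x_new, y_lo_new]), return_std=True)
print(f"predicted high-fidelity metric: {y_hi_pred[0]:.3f} ± {y_std[0]:.3f}")
```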
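
And to illustrate the robust-optimization point, here is a toy robust objective that scores a design under sampled geometry perturbations. The evaluation function and tolerance are placeholders; in practice, the many extra evaluations would be served by surrogate models on scalable compute.

```python
import numpy as np

def robust_score(design, evaluate, tolerance=0.01, n_samples=64, rng=None):
    """Score a design under random geometry perturbations.

    `evaluate` stands in for a (surrogate-accelerated) performance
    model; `tolerance` models manufacturing deviations. Returning the
    worst sampled value optimizes for the pessimistic case; the mean
    would optimize average performance instead.
    """
    rng = rng or np.random.default_rng()
    perturbed = design + tolerance * rng.normal(size=(n_samples, design.size))
    return min(evaluate(p) for p in perturbed)

# Toy performance model with a narrow peak: the nominal optimum
# scores perfectly, but its robust score collapses once realistic
# perturbations are taken into account.
evaluate = lambda d: float(np.exp(-np.sum((d - 0.5) ** 2) / 1e-4))
nominal = np.full(8, 0.5)
print("nominal:", evaluate(nominal), " robust:", robust_score(nominal, evaluate))
```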

The future of fusion energy isn’t just about harnessing the power of the stars—it’s about harnessing the collective power of human ingenuity, amplified by the data-driven methods of today and tomorrow. At Proxima, that future is already being built.

You can learn more about our new "AI for Fusion Engineering" project, a collaboration with the Technical University of Munich, the University of Bonn, and Forschungszentrum Jülich, in the video below, featuring Proxima AI Lead Dr. Markus Kaiser; Dr. Daniel Cremers, Chair of Computer Vision and Artificial Intelligence at the TUM School of Computation, Information and Technology; and Dr. Zorah Lähner of the University of Bonn and the Lamarr Institute.
