
Digital Echo: A Process-Centric Approach to Industrial Surrogate Modeling

Chris Rackauckas

Industrial surrogate modeling is typically centered on neural network architectures and precision. While important, these alone don’t solve the ‘surrogates problem’: the challenge of integrating surrogate models into industrial processes and requirements infrastructure. For surrogate modeling to truly solve real-world industrial engineering problems, verification and validation benchmarks are equally important. This blog post looks at the three key pillars for creating surrogates that are process-driven, reliable, and deliver predictable prediction times across varied industrial applications.

Industrial Surrogates - More Than Just Machine Learning Architectures

Many groups approach the problem of building surrogates for industrial simulation merely as a problem of machine learning architectures. That is, the surrogate challenge is to generate a machine-learned model that approximates a physical model to a high degree of accuracy but runs orders of magnitude faster than the original. Here, the focus is on constructing new neural network architectures, such as physics-informed neural networks (PINNs), which, given sufficient data from a simulation, can reproduce its behavior. However, this approach does not adequately address the true problem of integrating the next generation of machine learning into industrial modeling and simulation workflows.

The real problem arises when you integrate surrogates into industrial processes and requirements infrastructure. This infrastructure is what ensures the reliability of products like airplanes and drugs: processes that ensure an airplane designed by a controls engineer will stay in the air, that new drugs entering the market are safe, and that new products are built in robust, reproducible ways that account for the realities of supply chains. In the digital age, these processes have evolved to incorporate mathematical and computational advancements across the entire product lifecycle. For the next generation of machine learning to succeed in advanced engineering industries, it needs to understand these processes and develop methodologies that comply with the required infrastructure. At a high level, what requirements do these systems impose?

The Three Pillars of a Process-Centric Approach to Industrial Surrogates

  1. Any Deployable Surrogate System Must Have Predictable Accuracy

Machine learning training relies on optimization over non-convex landscapes, with the expectation of improved predictions as more data is provided. Yet effectiveness is highly dependent on selecting the right hyperparameters, and larger datasets do not reliably yield better results, as seen in the double-descent phenomenon (Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle). This challenges the predictability of neural network improvements.

The conventional approach to surrogates is centered on achieving optimal accuracy with fixed parameters. In industrial applications, however, precision is driven by specific requirements, such as the landing-force accuracy needed in the airplane use case above. Industrial use of machine learning must ensure that projects do not introduce a new vector for seemingly random regressions. For instance, simulations of fluid flow through an airplane engine are checked by engineers against a required prediction precision. If downstream requirements change and the system needs to be more accurate, the engineers know how to reliably increase this precision at the cost of compute power.
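For classical numerical methods, this knob is explicit. As a minimal illustration (the specific equation and tolerances here are ours, chosen for demonstration), tightening a solver's tolerance reliably buys more precision for more compute:

```julia
using OrdinaryDiffEq

# Toy problem with a known analytic solution: u' = 1.01u, u(0) = 1/2.
f(u, p, t) = 1.01u
prob = ODEProblem(f, 0.5, (0.0, 1.0))

# Tightening the tolerance reliably increases precision at the cost of
# more solver steps -- the predictable accuracy knob engineers expect.
for tol in (1e-3, 1e-6, 1e-9)
    sol = solve(prob, Tsit5(); abstol = tol, reltol = tol)
    err = abs(sol(1.0) - 0.5 * exp(1.01))   # compare against the exact answer
    println("tol = $tol: error = $err, steps = $(length(sol.t))")
end
```

Neural network training offers no comparably dependable dial, which is precisely the gap a process-centric approach must close.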

Therefore, the challenge lies in ensuring that the best neural network architecture in test scenarios meets accuracy requirements reliably. Unlike generative AI, where a single model can serve numerous projects, modeling and simulation engineers need adaptive solutions for evolving scenarios. What is needed is a machine learning process that delivers accurate models consistently, without constant retraining and tweaking.

Thus, it is not just the neural architecture but also a reliable process for creating accurate approximations that is paramount.  
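What might such a process look like? Below is a minimal sketch of an accuracy-driven training loop, assuming a 1% requirement; `simulate` and `train_surrogate` are illustrative stand-ins for a real solver and trainer, not JuliaSim's API:

```julia
# Keep adding simulation data until the surrogate meets a fixed relative-error
# tolerance on fresh validation points, refining where the error is worst.
simulate(p) = sin(3p[1]) * exp(-p[2])        # stand-in for an expensive model
train_surrogate(X, y) = let Xc = copy(X), yc = copy(y)
    p -> yc[argmin([sum(abs2, p .- x) for x in Xc])]  # nearest-neighbor stand-in
end

rel_tol  = 0.01                              # e.g. a 1% downstream requirement
sample() = rand(2)                           # draw from the parameter space
X = [sample() for _ in 1:32]                 # initial design points
y = simulate.(X)

for round in 1:10
    surrogate = train_surrogate(X, y)
    test_X = [sample() for _ in 1:256]       # fresh validation points
    errs = [abs(surrogate(p) - simulate(p)) / (abs(simulate(p)) + eps()) for p in test_X]
    @info "round $round" max_rel_err = maximum(errs)
    maximum(errs) <= rel_tol && break        # requirement met: stop training
    worst = partialsortperm(errs, 1:32, rev = true)
    append!(X, test_X[worst])                # add data where the surrogate fails
    append!(y, simulate.(test_X[worst]))
end
```

The point is not this particular loop but the shape of it: a terminating criterion tied to the requirement, and a deterministic recipe for what to do when the requirement is not yet met.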

  2. Verification and Validation Standards

The true accuracy of surrogate models often remains unknown, especially in academic studies. This is because testing at random points does not address critical business questions, such as: 

  • Where is the surrogate model most likely to be inaccurate?
  • For which parameters should we be most worried and perform extra verification on the surrogate results?
  • For critical values, for example, behavior that must be predicted within 1% for safety requirements, how do we ensure that the surrogate hits this level over the whole parameter space used in downstream analyses?
  • How do we report the reliability of the learned surrogate to downstream non-technical stakeholders?
  • How do we document its global accuracy behavior?

Unfortunately, many of these questions are impossible to answer and therefore not the right questions to ask in the context of machine learning. Rather than focusing on neural network architectures, the key is understanding processes to minimize risks. The right questions to ask in this case would be:

  • What kinds of plots and visualizations help identify accurate and inaccurate areas?
  • How can we improve accuracy in specific parameter areas without sacrificing overall accuracy?
  • What feedback guides data addition and parameter adjustments?
  • Which metrics signal potential unreliability in given scenarios?
  • How can the entire process be simplified for non-ML experts?

These are the real questions that need to be addressed in order for an engineering team to safely develop processes and deploy neural surrogates throughout their domain. The sketch below illustrates one simple answer to the first of them: mapping error over the whole parameter space rather than spot-checking random points.
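In this minimal sketch, the model and surrogate are toy stand-ins with known analytic forms; a global error heatmap makes accurate and inaccurate regions visible at a glance and flags where a 1% requirement is violated:

```julia
using Plots

model(a, b)     = sin(3a) * exp(-b)            # "truth": the expensive simulation
surrogate(a, b) = sin(3a) * (1 - b + b^2 / 2)  # cheap approximation with known flaws

as = range(0, 1; length = 100)
bs = range(0, 2; length = 100)

# Relative error over the whole parameter box, not just a few random points.
relerr = [abs(surrogate(a, b) - model(a, b)) / (abs(model(a, b)) + 1e-12)
          for b in bs, a in as]

heatmap(as, bs, relerr;
        xlabel = "parameter a", ylabel = "parameter b",
        title  = "surrogate relative error")

# Flag the regions violating a 1% requirement so an engineer can decide where
# more training data or extra verification is needed.
viol = findall(>(0.01), relerr)
println("fraction of parameter space above 1% error: ", length(viol) / length(relerr))
```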

  3. Reliable Prediction Time

A significant challenge when evaluating the efficacy and utility of surrogates is not merely whether the time saved at prediction time compensates for the time spent training, but whether the surrogate enhances overall computational efficiency. While the investment in training may seem extensive, the critical factor is the surrogate's ability to deliver the required accuracy within specified time frames, particularly in applications like power grid control.

Surrogates often enable analyses that would otherwise be impractical given constraints on computational speed. For instance, in power grid control, the model must operate at a specific frequency, and meeting this computational budget is crucial. The focus shifts from training duration to the surrogate's capacity to meet accuracy requirements promptly.
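As a concrete (and hypothetical) illustration: if a grid controller runs at 50 Hz, every surrogate evaluation must fit inside a 20 ms budget, and that worst-case latency can be checked directly. The stand-in surrogate and control rate below are assumptions for illustration:

```julia
surrogate(x) = sum(sin, x)     # stand-in for the trained surrogate

deadline_s = 1 / 50            # 20 ms control period at 50 Hz
x = rand(128)

surrogate(x)                   # warm up so JIT compilation is excluded
times = [(@elapsed surrogate(x)) for _ in 1:1_000]

# Real-time use cares about the worst case, not the average.
worst = maximum(times)
println("worst-case latency: $(round(worst * 1e6; digits = 1)) μs; ",
        worst < deadline_s ? "meets" : "violates", " the 20 ms deadline")
```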

Additionally, the parallel nature of data generation, enabled by cloud computing, minimizes the impact of data quantity on surrogate development. Thus, the real consideration is the value that the surrogate adds compared to its training cost. In highly technical industries, where skilled engineers drive productivity, the value and impact of surrogates can be game-changing when integrated into workflows. However, the key is reliability: engineers must be able to rely on the surrogate to always work as expected, without the extra work of triple-checking accuracy every time it is used.
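Since each training simulation is independent, data generation parallelizes trivially. A minimal sketch with Julia's Distributed standard library (the worker count, stand-in solver, and sample count are illustrative):

```julia
using Distributed
addprocs(8)                    # or provision far more workers in the cloud

# Stand-in for an expensive simulation; sleep mimics solver cost.
@everywhere expensive_simulation(p) = (sleep(0.1); sin(3p[1]) * exp(-p[2]))

params = [rand(2) for _ in 1:1_000]   # design points covering the parameter space

# Independent runs spread across workers: wall-clock time shrinks roughly with
# the number of workers, so data volume stops being the training bottleneck.
training_data = pmap(expensive_simulation, params)
```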

When implemented effectively, surrogates have the potential to significantly boost productivity, not unlike the transformative power of digitization. Conversely, improper integration may lead to wasted efforts and inefficiencies. Therefore, the emphasis should be on seamlessly incorporating surrogates into processes, ensuring they contribute substantially to the team's effectiveness.

The Digital Echo: A Neural Surrogate Built Around Industrial Processes

Given this thread of thought, we at JuliaHub have taken it upon ourselves to address the surrogate question from the standpoint of processes and requirements. All of the core questions introduced by the above discussion are the ones that drive the development of our JuliaSim surrogate trainer product. Questions like:

  1. What are the neural architectures that will most reliably improve if trained on more data and are not hampered by issues associated with having too many hyperparameters to tune?
  2. How do we develop a visualization suite around the surrogate training processes so that scientists can be in the loop, easily understand the accuracy profile of the surrogate, and understand where or if the training must be improved?
  3. Through what means can we present an architecture that is reliable and automatically masks training costs to maximize productivity via effective use of cloud infrastructure?
  4. How do we take all of this and make it usable and accessible to domain scientists and engineers with no machine learning background?

These questions led us to develop our game-changing new architecture, Digital Echo. Its focus is not on mathematical structures or layer definitions; in fact, we give no definition here of the mathematics inside how it computes NN(x), yet you can already understand how it differs substantially from DeepONets, FNOs, PINNs, CTESNs, and more. That's because it's not about the architecture: Digital Echo is about a process. A process for reliably creating the best surrogate it can, with processes to know when it needs to retrain, when it needs to warn you about accuracy, and how to generate legible reports documenting surrogate accuracy. Digital Echo is a completely different take on surrogates because it's not about winning a contest of whose error curve is lowest; it's about making sure someone can click a button and get a surrogate that meets the industrial requirements set out in their specification sheet. Digital Echo is not about performance but productivity.

If you would like to find out more about Digital Echo and surrogate modeling, download our white paper or request to speak with our solutions experts. You can write to sales@juliahub.com.


