Why Hardware Needs a New Foundation

Modern engineering is dominated by workflows that revolve around physical modeling and designing control systems. Even here, there is a clash of cultures: workflows are often siloed between developers and engineers, between text and the GUI, between software and hardware. Dyad is a modeling platform that breaks down those barriers.
The problem: Legacy engineering tools do not enable modern agile workflows
Current state-of-the-art systems modeling tools have not kept up with the major changes in software development that accelerated through the 2010s and beyond. Agile development methodology, continuous integration and deployment (CI/CD), and the adoption of Git-based version control have greatly accelerated the pace of software development, but none of these are integrated into the core of traditional engineering platforms.
Additionally, major advancements in underlying tooling, including the LLVM compiler for high-performance, just-in-time, multi-platform compilation and the explosion of new tooling around automatic differentiation (AD) and machine learning, require entirely new solvers and compilers in order to achieve full integration. Recent advancements in domains such as physics-informed machine learning and scientific machine learning (SciML) are thus rendered inaccessible to modelers on traditional platforms.
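As a concrete illustration of what this new tooling provides, the hedged sketch below differentiates an ordinary Julia function with ForwardDiff.jl. It is a minimal, self-contained example of automatic differentiation, not Dyad code; the drag function and its coefficients are made up for illustration.

```julia
# Minimal sketch of automatic differentiation in Julia (assumes ForwardDiff.jl is installed).
# The derivative is computed from the program itself, with no hand-derived formulas
# or finite-difference approximations.
using ForwardDiff

# A toy model output: drag force as a function of velocity (coefficients are illustrative).
drag(v) = 0.5 * 1.2 * 0.3 * v^2

dF_dv = ForwardDiff.derivative(drag, 10.0)   # exact sensitivity at v = 10 m/s
println("d(drag)/dv at 10 m/s = ", dF_dv)
```

Solvers and compilers that can propagate this kind of derivative information through an entire model are what make physics-informed and scientific machine learning workflows practical.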
Consider, by analogy, Microsoft Office's challenges, especially when compared to more modern tools like Google Docs: they highlight the difficulty of transforming desktop-based software into truly collaborative, cloud-native platforms where a single, shared source of truth is accessed by multiple users in real time. The next generation of engineering software must be built from the ground up with cloud-native, agile principles at its core.
Figure: The traditional V-diagram of modeling and simulation in product development. Above the dashed red line are the system modeling tools and the role they play in system design and verification. Below the dashed line are the high-fidelity 3D digital twins, such as CFD software. Systems modeling tools must integrate with such tools, but must ultimately simplify them in order to capture the complexity of the entire system; today, much of this integration is manual.
Multi-scale modeling requires modern tooling
Tools developed for fast system-level simulation and control have traditionally been separate from those used for high-fidelity spatial 3D modeling, such as computational fluid dynamics or detailed battery models. Integrating these two kinds of models is technically possible, but very challenging in practice. As our tools improve, the complexity at each scale only increases.
The cost in compute and engineering time taken to isolate specific components, simulate them with high fidelity, and then integrate those physically faithful models into a full system-level model is simply too high. When trying to understand system-level dynamics, engineers resort to system-level tools that try to strike a balance between simplified physics for speed, and sufficiently high fidelity to make accurate and constructive decisions about the system.
For this reason, systems-level modeling today is a very manual process. The engineer must make choices about how to build the model, what compromises to make, and which physics to include and ignore as unimportant, then test it and iterate over and over. Years go by in this inefficient cycle, as changes in each individual layer must be propagated to the rest of the model, while the majority of time is spent trying to understand whether the model in its current state is a sufficient approximation to the real world.
With the rise of digital twins, there is an increasing push to embed higher fidelity within system-level control models. This push is enabled by advancing edge computing capabilities: devices that previously used small microcontrollers might now use the same ARM processor that powers your phone. This means that even in real-time embedded applications, higher-fidelity model-based control, or multi-frequency control in which lower-frequency, higher-fidelity predictions inform a faster control loop, is becoming increasingly common.
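As a rough sketch of what such a multi-frequency scheme looks like, the loop below runs a fast feedback controller at every step while refreshing a slower, higher-fidelity prediction only every hundred steps. The plant, models, and gains are hypothetical placeholders written in plain Julia, not Dyad APIs.

```julia
# Hypothetical multi-frequency control loop: a fast controller runs at every step,
# while a slower, more expensive prediction is refreshed only every `slow_every` steps.
function run_multirate(; steps = 1000, dt = 0.001, slow_every = 100, target = 1.0)
    # Placeholder plant and models (illustrative only).
    fast_plant_step(x, u) = x + dt * (-0.5x + u)       # cheap plant update
    high_fidelity_predict(x) = 0.98x                   # stand-in for an expensive model
    controller(x, ff) = 2.0 * (target - x) + ff        # feedback plus feedforward term

    x, feedforward = 0.0, 0.0
    for k in 1:steps
        if k % slow_every == 1
            # Expensive, higher-fidelity prediction used as a feedforward correction.
            feedforward = 0.1 * (target - high_fidelity_predict(x))
        end
        u = controller(x, feedforward)                 # fast loop runs every step
        x = fast_plant_step(x, u)
    end
    return x
end

println("final state ≈ ", round(run_multirate(), digits = 3))
```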
Figure: Schema of the modeling landscape with respect to software-defined machines. In the top left are the high-fidelity modeling systems for single assets, such as computational fluid dynamics, which models every detail of airflow over an airplane wing, and EDA tools, which provide a full specification of chips. In the bottom right are tools that model the entire system but at low fidelity: for example, SysML uses natural-language requirements specifications of hardware systems, and model-based design (causal modeling tools for embedded controls) adds mathematical descriptions of controls. In the middle are acausal modeling tools, which blend some of the accuracy of the component modeling tools with a higher-level system description, but require trade-offs on both fronts. This highlights the advantage of the Dyad digital twin approach, which uses SciML to elevate the realism of system-level models to nearly that of the individual component modeling tools while still representing the entire system and the artifacts for its software-defined embedded controls.
However, the compiler infrastructure of systems-level modeling tools has not kept pace with these requirements. Existing tools are well known for limitations in both memory and compute as systems grow, and even tools known for their efficiency can run into scalability issues when dealing with thousands of states.
Connections with these higher-fidelity models are therefore black-boxed: when higher-fidelity sources of truth, such as computational fluid dynamics (CFD) tooling and other domain-specific digital twins, are integrated, they are wrapped via protocols such as the Functional Mock-up Interface (FMI), which embed the complete simulation code into a single block of the modeling system. These embedded models use their own solvers, time steppers, and so on, disconnected from the rest of the model, and are advanced in a lock-step pattern known as co-simulation, which leads to many artifacts in numerical stability, performance, and accuracy. Because the modeling compiler cannot fully optimize the simulators across these boundaries, such combined models do not scale.
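To make the co-simulation pattern concrete, here is a minimal, hypothetical sketch of a lock-step co-simulation master in plain Julia. The two black-box components below are stand-ins with their own internal steppers and step sizes; this is not the FMI API or Dyad code, only an illustration of why the coupling exchanges values at coarse communication points and where the artifacts come from.

```julia
# Hypothetical lock-step co-simulation: two black-box components, each with its own
# internal solver and step size, exchange values only at coarse communication points.
mutable struct BlackBox
    state::Float64
    dt::Float64            # the block's internal step size, hidden from the master
    f::Function            # internal dynamics: dx/dt = f(x, input)
end

# Advance a block by one communication interval using its own fixed-step solver.
function step!(b::BlackBox, input, t_comm)
    t = 0.0
    while t < t_comm
        b.state += b.dt * b.f(b.state, input)   # simple explicit Euler inside the box
        t += b.dt
    end
    return b.state
end

# Master loop: inputs are frozen during each interval, which is the source of the
# coupling error that a compiler able to optimize across the boundary could avoid.
function cosimulate(thermal, controller; t_comm = 0.1, n = 100)
    y, u = thermal.state, controller.state
    for _ in 1:n
        y = step!(thermal, u, t_comm)
        u = step!(controller, y, t_comm)
    end
    return y
end

thermal    = BlackBox(20.0, 1e-3, (x, u) -> -0.1 * (x - u))       # "plant" with fine steps
controller = BlackBox(0.0,  1e-2, (x, y) -> 0.5 * (25.0 - y) - x) # slower control block
println("temperature ≈ ", round(cosimulate(thermal, controller), digits = 2))
```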
General AI tooling is not sufficiently trustworthy for safety-critical engineering
While some Silicon Valley AI startups would have you believe that machine learning will replace all other forms of computation in the next five years, the majority of mechanical, aerospace, and automotive engineers are rightfully skeptical that a complete replacement will happen in these domains anytime soon. One major reason is that the process of model design is itself iterative, each attempt building on its predecessor until one achieves a working model. This process is not entirely captured in a computer; rather, it is a continuous cycle of building a model, checking the real system, finding disconnects between the two, deciding which aspects of the model to keep or remove, understanding what new sensors could help refine the model further, and repeating this over and over.
Machine learning solutions are black boxes: hard to understand and modify. While they can be retrained on new data, there is no guarantee that new iterations of a machine learning model have moved closer to the true system, which can be difficult to capture in data and loss functions. Standard machine-learned models have no sense of physical truth; there is no guarantee that their predictions obey physical principles such as conservation of energy and momentum. As a result, predictions can drift away from reality over time, and the boundary within which they are trustworthy is ill-defined.
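To see the conservation issue concretely, the hedged sketch below compares the total energy of a frictionless oscillator under a structure-preserving integrator against a stand-in "learned" one-step predictor. The "learned" model here is just a biased linear map invented for illustration, not a trained network; it only shows how an unconstrained predictor can violate conservation of energy and drift.

```julia
# Hypothetical comparison: energy drift of a physics-respecting integrator vs. a
# stand-in "learned" one-step model on a frictionless harmonic oscillator.
energy(x, v) = 0.5 * v^2 + 0.5 * x^2          # conserved quantity of the true system

# Semi-implicit (symplectic) Euler: respects the system's physical structure,
# so its energy error stays bounded.
function physics_step(x, v, dt)
    v -= dt * x
    x += dt * v
    return x, v
end

# Stand-in "learned" model: a one-step linear map with a tiny bias, mimicking an
# unconstrained data-driven surrogate (illustrative only, not a trained network).
learned_step(x, v, dt) = (x + dt * v + 1e-4, v - dt * x + 1e-4)

# Integrate for `n` steps and report how far the energy has drifted.
function drift(step, n; dt = 0.01)
    x, v = 1.0, 0.0
    e0 = energy(x, v)
    for _ in 1:n
        x, v = step(x, v, dt)
    end
    return abs(energy(x, v) - e0)
end

println("energy drift, physics-based: ", drift(physics_step, 100_000))
println("energy drift, learned stand-in: ", drift(learned_step, 100_000))
```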
All of this is to say that the job of the modeler is very unlikely to be replaced wholesale by a purely machine-learned process, especially in regulated domains such as automotive and aerospace, where model-building is closely checked to ensure the safety of consumer systems. That said, the future can certainly have machine learning in the loop, using tools such as Large Language Models (LLMs) and AI chatbots to accelerate system modeling. Such integration must be done with care, however: these tools are known to produce inaccuracies, known as hallucinations, and a single small error in a model can make its predictions completely incorrect, so unlike an essay, a model has almost zero tolerance for such errors. Any integration therefore needs to be carefully designed to highlight the areas the modeler should second-guess during the inevitable debugging phase. In addition, systems modeling draws on very specific sources that are unlikely to be well represented in the core training corpus of tools such as ChatGPT or Google's Gemini; achieving any level of accuracy in this domain will require dedicated API integrations and domain-specific embeddings so that foundation models can understand system modeling. As such, a successful solution would likely need to integrate agentic AI tooling, with deep integration into the system modeling tool and changes to some standard workflows.
This post is part of our ongoing series exploring insights from the Software Defined Machines white paper. Download the full paper here.
Dyad Studio is source-available, free for personal and educational use, with commercial licenses from JuliaHub. Learn more: juliahub.com/products/dyad and help.juliahub.com/dyad/dev
About the Author

Anshul Singhvi
Anshul Singhvi is a contributor to Julia's plotting (Makie.jl), geospatial (JuliaGeo) and documentation ecosystems, and a developer on the Dyad team.