
Why is Pumas-QSP So Fast? Some Insights Into Differentiable Simulation Performance

Paul Lang

Sensitivity analysis is used to calculate the gradients required for parameter estimation of models. But which methods work best for systems pharmacology models? In a recent publication, our Pumas-QSP team, together with researchers at UCLA, benchmarked sensitivity analysis techniques across a range of systems biology applications to identify trends. The conclusions boil down to this: chunked forward mode is excellent for most use cases, and multithreading the chunks is a good idea. The tested systems biology and pharmacology models were simply not large enough for adjoint methods to be competitive against an optimized, multithreaded, chunked forward mode. There are also cases where complex step methods are useful. Another publication by our Pumas-QSP group showed that models need roughly 100 ODEs or more before adjoint methods become the more efficient choice.
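To make that concrete, here is a minimal sketch of forward-mode AD through an ODE solve, the kind of computation being benchmarked. The Lotka-Volterra model, solver choice, and tolerances below are illustrative assumptions, not the paper's benchmark setup:

```julia
using OrdinaryDiffEq, ForwardDiff

# Illustrative small test model (Lotka-Volterra); the benchmark problems
# mentioned below (SIR, CARGO, ROBER) follow the same pattern.
function lv!(du, u, p, t)
    du[1] =  p[1] * u[1] - p[2] * u[1] * u[2]
    du[2] = -p[3] * u[2] + p[4] * u[1] * u[2]
end

u0    = [1.0, 1.0]
tspan = (0.0, 10.0)
p     = [1.5, 1.0, 3.0, 1.0]

# Solve the ODE and return the final state as a function of the parameters.
# Converting u0 to eltype(p) lets ForwardDiff's dual numbers propagate
# through the solver, i.e. forward-mode AD of the whole simulation.
function final_state(p)
    u0_dual = convert.(eltype(p), u0)
    prob = ODEProblem(lv!, u0_dual, tspan, p)
    sol  = solve(prob, Tsit5(); abstol = 1e-8, reltol = 1e-8)
    return sol.u[end]
end

# Chunked forward-mode AD: the 2x4 Jacobian of the final state w.r.t. p
J = ForwardDiff.jacobian(final_state, p)
```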

Of course, our Pumas-QSP software automatically makes use of all of these details to achieve state-of-the-art performance, but if you want a quick peek under the hood, here is a tl;dr summary:

 

  • Forward mode, adjoint, and complex perturbation sensitivity methods all converge to the same sensitivity values on non-stiff models, so all three offer the same level of accuracy there. On stiff models, forward mode and complex perturbation methods still converge, but adjoint sensitivity struggles and does not achieve convergence at realistic tolerance settings.

  • Chunked forward mode automatic differentiation and forward mode sensitivity analysis tend to be the most computationally efficient on the tested models.

  • Complex perturbation methods are competitive and often outperform the unchunked version of forward mode automatic differentiation, while being less sensitive to stiffness than the adjoint methods.

  • Shared memory multi-threading of the complex perturbation and forward mode automatic differentiation methods provides a performance gain but only in high-dimensional systems.

  • The forward mode automatic differentiation method requires that each step of a calculation be differentiable. This renders it unusable for calculating the derivative of ensemble means of discrete-state models, such as birth-death processes. In those cases, the complex perturbation method outperforms manual differentiation.

  • The complex perturbation method is competitive with automatic differentiation methods in accuracy, is more straightforward to implement, and can be applied to a wider variety of methods (a minimal sketch of the complex-step idea follows this list).
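For the curious, the complex perturbation (complex-step) trick is only a couple of lines. This is a generic illustration with a standard smooth test function, not code from the paper:

```julia
using ForwardDiff

# Complex-step derivative: df/dx ≈ Im(f(x + im*h)) / h.
# There is no subtractive cancellation, so h can be tiny (e.g. 1e-20)
# without the accuracy loss that plagues ordinary finite differences.
complex_step(f, x; h = 1e-20) = imag(f(x + im * h)) / h

g(x) = exp(x) / sqrt(sin(x)^3 + cos(x)^3)   # a standard smooth test function

d_cs = complex_step(g, 1.5)                 # complex-step derivative
d_ad = ForwardDiff.derivative(g, 1.5)       # forward-mode AD for comparison
isapprox(d_cs, d_ad; rtol = 1e-12)          # agree to near machine precision
```

The main requirement is that the function can be evaluated with complex-valued inputs using analytic operations, which is what makes the method so broadly applicable.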

 

Of course, the benefit of Pumas-QSP is that you, the modeler, don't have to worry about knowing what any of this means!

Exposing More Parallelism: Polyester-Mode Parallelism in Differentiation

With this manuscript, we released new low-level tools to eke out more parallelism from the forward chunking process. Check out PolyesterForwardDiff.jl if you’re curious. It builds on Polyester.jl’s alternative multithreading model to reach high performance. Extreme caution must be taken when using this multithreading system, since it “takes off the training wheels”, but Pumas-QSP has been developed with its handling in mind! Here are some timings that show just how good multithreaded chunked forward mode ends up being. It’s a thing of beauty.
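Here is a rough sketch of what that looks like from the user's side, assuming the threaded_gradient! entry point shown in the PolyesterForwardDiff.jl README; the loss function and chunk size are made up for illustration, and the exact argument order may vary between package versions:

```julia
using ForwardDiff, PolyesterForwardDiff

# An illustrative scalar loss over many parameters (not from the paper)
loss(p) = sum(abs2, p .- sin.(1:length(p)))

p  = rand(200)
dp = similar(p)

# Plain chunked forward-mode AD: single-threaded baseline
ForwardDiff.gradient!(dp, loss, p)

# Polyester-threaded chunked forward mode: chunks of dual-number
# evaluations are spread across threads (start Julia with, e.g., `julia -t 8`)
PolyesterForwardDiff.threaded_gradient!(loss, dp, p, ForwardDiff.Chunk(8))
```

Polyester's lightweight threads trade composability and some safety for lower overhead, which is the "training wheels off" part referenced above.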

[Table: Computational time (μs) for the SIR ODE model]

[Table: Computational time (μs) for the CARGO ODE model]

[Table: Computational time (μs) for the ROBER ODE model]

So, how do I make use of all of this really cool performance research without having to understand all of the details?

For more on high-performance parameter estimation of biopharma models, check out Pumas-QSP! It uses all of these advanced differentiable simulation techniques under the hood in order to get high-performance estimation of large-scale inverse problems on differential equations.

For more information on how the Julia-based modeling and simulation tools are continuing to improve parameter estimation and model discovery, check out The Continuing Advancements of Scientific Machine Learning (SciML).
