This aligns with what I've observed in computational physics.
Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.
I developed a framework that systematically separates these components: using direct elliptic solvers for constraints and explicit methods for flux evolution. The resulting algorithms are both more stable and more efficient than unified implicit approaches.
The key insight is that many systems (EM, GR, fluids, quantum mechanics) share the same pattern:
- An elliptic constraint equation (solved directly, not timestepped)
- A continuity law for charge/mass/probability flux
- Wave-like degrees of freedom (handled with explicit methods)
With this structure, you can avoid the stiffness issues entirely rather than trying to power through them with implicit methods.
Paper: https://zenodo.org/records/16968225
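To make the pattern concrete, here's a minimal toy sketch (mine, not the paper's code) of the splitting described above on a 1-D grid: a Poisson-type constraint solved directly every step, with a made-up flux closure and a wave field advanced explicitly. The grid size, fields, and closure are all invented for illustration.

```python
import numpy as np

n, L = 200, 1.0
dx = L / n
dt = 0.4 * dx          # explicit, CFL-limited step for the wave part (wave speed 1 assumed)
x = np.arange(n) * dx

# 1-D Dirichlet Laplacian, dense for simplicity; a real solver would factorize a sparse matrix once
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

rho = np.exp(-((x - 0.5) / 0.05) ** 2)    # "charge" density
u = np.exp(-((x - 0.3) / 0.05) ** 2)      # wave-like field
v = np.zeros(n)                           # its time derivative

for step in range(500):
    # 1) elliptic constraint: solve A*phi = -rho directly, never timestepped
    phi = np.linalg.solve(A, -rho)

    # 2) continuity law: explicit update of rho from a toy flux J = -grad(phi)
    J = -np.gradient(phi, dx)
    rho -= dt * np.gradient(J, dx)

    # 3) wave-like degrees of freedom: explicit (symplectic-Euler-style) update
    v += dt * (A @ u)
    u += dt * v
```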
> Trying to handle instantaneous constraints and propagating modes with a single timestepper is often suboptimal.
When I read statements like this, I wonder if this is related to the optimal policy conditions required for infinitely lived Bellman equations to have global and per-period policies in alignment.
That's a fascinating parallel! Both involve separating timeless constraints (value function / elliptic equation) from temporal dynamics (policy / flux evolution).
Trying to timestep through a constraint that should hold instantaneously creates artificial numerical difficulties. The Bellman equation's value iteration is actually another example of this same pattern...
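To make the analogy concrete, here's a toy value iteration on an invented 2-state, 2-action MDP: the Bellman fixed point is iterated to convergence (like solving the elliptic constraint directly), and the per-period greedy policy then falls out of the converged value function. All numbers are made up.

```python
import numpy as np

gamma = 0.95
R = np.array([[1.0, 0.0],        # R[s, a]: reward for action a in state s
              [0.0, 2.0]])
P = np.array([[[0.9, 0.1],       # P[s, a, s']: transition probabilities
               [0.1, 0.9]],
              [[0.8, 0.2],
               [0.2, 0.8]]])

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * np.einsum('sap,p->sa', P, V)   # one Bellman backup
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:          # solve the fixed point to convergence
        break
    V = V_new

policy = Q.argmax(axis=1)   # greedy per-period policy consistent with the converged value function
print(V_new, policy)
```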
The orbital example where BDF loses momentum is really about the difference between a second-order method (BDF2) and a fourth-order method (RK4), rather than explicit vs implicit (but: no linear multistep method of order > 2 can be A-stable, the second Dahlquist barrier; since the whole point of implicit methods is to achieve stability, the higher-order BDF formulas are relatively niche).
There are whole families of _symplectic_ integrators that conserve physical quantities and are much more suitable for this sort of problem than either option discussed. Even a low-order symplectic method will conserve momentum on an example like this.
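As a rough illustration (my own sketch, not from the article): even first-order symplectic (semi-implicit) Euler keeps the angular momentum of a scaled Kepler orbit constant to machine precision, while plain explicit Euler of the same order drifts. Units are scaled so GM = 1; the step size and step count are arbitrary.

```python
import numpy as np

def accel(q):
    return -q / np.linalg.norm(q) ** 3        # central force, GM = 1 in scaled units

def ang_mom(q, p):
    return q[0] * p[1] - q[1] * p[0]

dt, steps = 1e-3, 100_000
q0 = np.array([1.0, 0.0])
p0 = np.array([0.0, 1.0])                     # circular orbit initial data, angular momentum = 1

# plain explicit Euler
q, p = q0.copy(), p0.copy()
for _ in range(steps):
    q, p = q + dt * p, p + dt * accel(q)      # both updates use the old state
L_euler = ang_mom(q, p)

# symplectic (semi-implicit) Euler: update p first, then move q with the new p
q, p = q0.copy(), p0.copy()
for _ in range(steps):
    p = p + dt * accel(q)
    q = q + dt * p
L_sympl = ang_mom(q, p)

print(L_euler, L_sympl)   # Euler drifts away from 1.0; symplectic stays at 1.0 to machine precision
```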
Obviously^1. But it illustrates the broader point of the article, even if for the concrete problem even better choices are available.
1) If you have studied these things in depth, which many/most users of solver packages have not.
The fascinating thing is that discrete symplectic integrators typically can only conserve one of the physical quantities exactly, e.g. angular momentum but not energy in orbital mechanics.
leapfrog!
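For reference, the leapfrog / velocity-Verlet step in kick-drift-kick form (usable with the same `accel` as in the sketch above): it is second order, time-reversible, and for a central force it conserves angular momentum exactly while the energy error stays bounded rather than drifting.

```python
def leapfrog_step(q, p, dt, accel):
    p = p + 0.5 * dt * accel(q)   # half kick
    q = q + dt * p                # full drift
    p = p + 0.5 * dt * accel(q)   # half kick
    return q, p
```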
This is taught in the intro class on basic numerics. Methods are considered more robust and practical if they have larger stability regions. Von Neumann is the OG stability analyst for PDEs.
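The textbook example of that analysis, as a quick numerical check (the scheme and threshold are standard; the specific lambda values are just for illustration): FTCS applied to the heat equation u_t = u_xx has amplification factor G(k) = 1 - 4*lam*sin^2(k*dx/2), which stays within [-1, 1] only when lam = dt/dx^2 <= 1/2.

```python
import numpy as np

dx = 0.01
k = np.linspace(0, np.pi / dx, 1000)             # resolvable wavenumbers on the grid
for lam in (0.4, 0.5, 0.6):
    G = 1 - 4 * lam * np.sin(k * dx / 2) ** 2    # von Neumann amplification factor
    print(lam, np.max(np.abs(G)) <= 1 + 1e-12)   # True, True, False
```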
> Those kinds of models aren’t really realistic? Energy goes to infinity, angular momentum goes to infinity, the chemical concentration goes to infinity: whatever you’re modeling just goes crazy! If you’re in this scenario, then your model is probably wrong. Or if the model isn’t wrong, the numerical methods aren’t very good anyways.
Uh... that's not how it works. The whole point of making a model and using a simulator is you don't already know what's going to happen. If your system is going to blow up, your model needs to tell you that. If it does, that's good, because it caught your mistake. And if it doesn't, it's obviously wrong, possibly in a deadly way.
Having worked on non-linear simulators/ODE solvers off and on for a decade, I agree and disagree with what you're saying.
I agree with you that that is 100% the point: you don't already know what's going to happen and you're doing modelling and simulation because it's cheaper/safer to do simulation than it is to build and test the real physical system. Finding failure modes, unexpected instability and oscillations, code bugs, etc. is an absolutely fantastic output from simulations.
Where I disagree: you don't already know what's going to happen, but you DO know generally what is going to happen. If you don't have, at a minimum, an intuition for what's going to happen, you are going to have a very hard time distinguishing between "numerical instability with the simulation approach taken", "a bug in the simulation engine", "a model that isn't accurately capturing the physical phenomena", and "an unexpected instability in an otherwise reasonably-accurate model".
For the first really challenging simulation engine that I worked on early on in my career I was fortunate: the simulation itself needed to be accurate to 8-9 sig figs with nanosecond resolution, but I also had access to incredibly precise state snapshots from the real system (which was already built and on orbit) every 15 minutes. As I was developing the simulator, I was getting "reasonable" values out, but when I started comparing the results against the ground-truth snapshots I could quickly see "oh, it's out by 10 meters after 30 minutes of timesteps... there's got to be either a problem with the model or a numerical stability problem". Without that ground truth data, even just identifying that there were missing terms in the model would have been exceptionally challenging. In the end, the final term that needed to get added to the model was Solar Radiation Pressure; I wasn't taking into account the momentum transfer from the photons striking the SV and that was causing just enough error in the simulation that the results weren't quite correct.
Other simulations I've worked on were more focused on closed-loop control. There was a dynamics model and a control loop. Those can be deceptive to work on in a different way: the open-loop model can be surprisingly incorrect and a tuned closed-loop control system around the incorrect model can produce seemingly correct results. Those kinds of simulations can be quite difficult to debug as well, but if you have a decent intuition for the kinds of control forces that you ought to expect to come from the controller, you can generally figure out if it's a bad numerical simulation, a bad model, or a good model of a bad system... but without those kinds of gut feelings and maybe some envelope math it's going to be challenging and it's going to be easy to trick yourself into thinking it's a good simulation.
Love these kinds of comments where I get to learn about things that I haven't played with.
> the simulation itself needed to be accurate to 8-9 sig figs with nanosecond resolution
What kind of application would require such demanding tolerances? My first thought was nuclear fission. But then the fact that you had one in orbit sending a data feed every 15 mins imploded my fanciful thinking.
Ahh, I really wish I could go into all of the details because it was a really cool project. The high-level answer (which is also related to the software-defined radio post that's on the front page right now) is that it was simulating the orbit of a satellite such that we could simulate the signals the satellite was transmitting with enough accuracy to implicitly get Doppler shift. That is... we didn't explicitly model Doppler shift in any piece of the system, it just showed up as a result of accurately modelling the orbit of the satellite's position and velocity relative to the receiver on Earth.
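For anyone curious how the shift "falls out": to first order the received frequency is just f_rx ≈ f_tx * (1 - v_radial/c), so any model that gets the relative position and velocity right gets Doppler for free. The carrier frequency and velocity below are invented, purely to show the magnitudes involved.

```python
C = 299_792_458.0   # speed of light, m/s

def doppler(f_tx_hz, v_radial_mps):
    """Received frequency for a transmitter receding at v_radial (non-relativistic)."""
    return f_tx_hz * (1.0 - v_radial_mps / C)

# e.g. a hypothetical 1.5 GHz downlink with 5 km/s of line-of-sight velocity
print(doppler(1.5e9, 5_000.0) - 1.5e9)   # about -25 kHz of shift
```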
Fantastic.
My second guess was that this was a part of a GPS like service.