One of the papers Dennis Bernstein sent us, https://inria.hal.science/hal-02960923/file/Mojallizadeh_etal_022021.pdf (100+ pages), deals specifically with online (one-sided, real-time) differentiation. The emphasis is on a family I'm generally referring to as "sliding-mode", after a few of the methods therein. The authors compare a ton of variants and come up with a few that perform best in the presence of noise, although it seems like their metrics and hyperparameter tuning rely on knowledge of the true derivative. @florisvb, this seems closest to lineardiff to me, but also kind of related to Kalman methods, because there appears to be a stochastic diffeq at the heart. Do you have any idea how similar these methods are? Should we aspire to revamp lineardiff to be more like one of these, since it currently doesn't work super well (such that I excluded it from analysis in the taxonomy paper), or should we add these to the same module or a new one?
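For concreteness, the canonical member of the family they study is Levant's robust exact ("super-twisting") differentiator. The sketch below is just my own toy, explicit-Euler version so we have something to point at; the gains and the Lipschitz bound `L` are generic placeholder values, and this is not the discretization the paper recommends, nor anything that exists in our code today.

```python
import numpy as np

def levant_differentiator(f, dt, lam1=1.5, lam2=1.1, L=1.0):
    """Toy first-order sliding-mode (super-twisting) differentiator.

    Naive explicit-Euler integration of Levant's robust exact differentiator:
        z0' = z1 - lam1 * sqrt(L) * |z0 - f|**0.5 * sign(z0 - f)
        z1' =      - lam2 * L * sign(z0 - f)
    where z1 is the derivative estimate and L bounds |f''|.  The gains are
    common textbook defaults, not tuned values.
    """
    f = np.asarray(f, dtype=float)
    z0, z1 = f[0], 0.0
    dfdt = np.zeros_like(f)
    for k, fk in enumerate(f):
        e = z0 - fk
        v = z1 - lam1 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e)
        z0 += dt * v                          # explicit Euler step on z0
        z1 += dt * (-lam2 * L * np.sign(e))   # explicit Euler step on z1
        dfdt[k] = z1
    return dfdt
```

Usage would be something like `levant_differentiator(x_noisy, dt=0.01, L=1.0)` on a uniformly sampled signal, with `L` an upper bound on the second derivative of the underlying signal.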
Some further thoughts from Chat:
The paper’s claim is:
> Many reported “noise problems” are actually discretization artifacts, not fundamental noise sensitivity.
That’s why they obsess over:
- implicit vs explicit updates,
- solver choice,
- step-size effects.
They’re trying to separate “bad algorithm” from “bad digital implementation.”
The goal is to answer:
- Which algorithms degrade gracefully under noise?
- Which ones fail catastrophically due to discretization?
- Which ones are robust to step-size and quantization?
In other words: Their metrics are *diagnostic*, not operational.
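To make "diagnostic, not operational" concrete: their evaluation style boils down to benchmarking each differentiator against a signal whose derivative is known in closed form, swept over step size and noise level. A minimal, self-contained version of that kind of harness (with a plain backward difference standing in for the differentiator under test; the sine test signal and noise level are arbitrary choices on my part) would be:

```python
import numpy as np

def rmse_vs_truth(differentiate, dt, noise_std=0.01, T=10.0, seed=0):
    """Score a causal differentiator against an analytically known derivative."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, dt)
    x_true, dx_true = np.sin(t), np.cos(t)   # test signal with known derivative
    x_noisy = x_true + noise_std * rng.standard_normal(t.size)
    dx_est = differentiate(x_noisy, dt)
    return np.sqrt(np.mean((dx_est - dx_true) ** 2))

def backward_difference(x, dt):
    """Plain causal (one-sided) finite difference as a baseline."""
    dx = np.empty_like(x)
    dx[1:] = np.diff(x) / dt
    dx[0] = dx[1]
    return dx

for dt in (0.1, 0.01, 0.001):
    print(f"dt={dt}: RMSE={rmse_vs_truth(backward_difference, dt):.3f}")
```

With the noise level held fixed, the RMSE of the backward difference blows up as dt shrinks, which is exactly the kind of step-size pathology their metrics are built to expose.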
Here’s the thesis, stripped of ornamentation:
> Most differentiators fail in practice not because of noise, but because of naive discretization of nonsmooth dynamics.
And the corollary:
> If you discretize sliding-mode differentiators properly (implicitly), they behave far better than their reputation suggests.
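Concretely, for the simplest member of the family, ż = -λ sign(z − f(t)), the two discretizations are each a one-liner, and the difference is dramatic. This is my own minimal sketch of the idea (the λ value is arbitrary, and this is not one of the paper's recommended higher-order schemes): the implicit update can be solved in closed form, which turns sign() into a saturation, removes the ±λ chattering of the explicit version, and inside the sliding band reduces to a clipped finite difference.

```python
import numpy as np

def smd_explicit(f, dt, lam=5.0):
    """Explicit Euler discretization of z' = -lam * sign(z - f(t)).

    z[k+1] = z[k] - dt*lam*sign(z[k] - f[k]); the derivative estimate
    -lam*sign(z[k] - f[k]) chatters between -lam and +lam once z reaches f.
    """
    f = np.asarray(f, dtype=float)
    z, dfdt = f[0], np.zeros_like(f)
    for k, fk in enumerate(f):
        s = np.sign(z - fk)
        dfdt[k] = -lam * s
        z -= dt * lam * s
    return dfdt

def smd_implicit(f, dt, lam=5.0):
    """Implicit Euler discretization of the same dynamics.

    z[k+1] = z[k] - dt*lam*xi with xi in sign(z[k+1] - f[k]); solving the
    inclusion gives xi = clip((z[k] - f[k]) / (dt*lam), -1, 1), so inside
    the sliding band the estimate is a saturated finite difference and
    does not chatter.
    """
    f = np.asarray(f, dtype=float)
    z, dfdt = f[0], np.zeros_like(f)
    for k, fk in enumerate(f):
        xi = np.clip((z - fk) / (dt * lam), -1.0, 1.0)
        dfdt[k] = -lam * xi
        z -= dt * lam * xi
    return dfdt
```

Running both on the same noisy signal makes the point: the explicit estimate chatters between ±lam no matter how clean the data are, while the implicit one behaves like a saturated finite difference and degrades gracefully with step size.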