This method got some attention in our discussion on #48 as possibly redundant, and as a dependency of `slide_function` it makes #173 slightly harder to address. Thinking more about it, the original stated reason for having `jerk_sliding` (namely, that the convex solver might struggle and be slow when there are too many points) isn't actually addressed particularly well by this approach. For TVR, solve time is linear in the data sequence length $N$, so giving it a longer sequence won't really hurt. The problem is still strongly convex, so CVXPY will still use OSQP and converge fast. Rather, breaking the problem up into overlapping sections and then smoothly combining the results takes significantly more computation, because we run the convex solver over any given datapoint several times.
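
As a rough illustration of why the whole-series solve stays cheap, here is a minimal sketch of a TVR-style problem in CVXPY. The formulation and the names `tvr_jerk_sketch` and `gamma` are illustrative assumptions, not the actual `tvrdiff` implementation: fit a smoothed position series, penalize the L1 norm of its third difference, and differentiate the fit. The quadratic data term keeps the problem strongly convex, and CVXPY canonicalizes the L1 penalty into a QP that OSQP can handle.

```python
# Hedged sketch only: a generic TVR-of-jerk problem, not the library's tvrdiff.
import numpy as np
import cvxpy as cp

def tvr_jerk_sketch(x, dt, gamma=1e-2):
    """Estimate dx/dt from noisy samples x by penalizing total variation of the jerk."""
    N = len(x)
    x_hat = cp.Variable(N)                         # smoothed positions
    jerk = cp.diff(x_hat, k=3) / dt**3             # third finite difference
    objective = cp.Minimize(cp.sum_squares(x_hat - x) + gamma * cp.norm1(jerk))
    cp.Problem(objective).solve(solver=cp.OSQP)    # becomes a QP after canonicalization
    return np.gradient(x_hat.value, dt)            # derivative of the smooth fit
```

For a problem of this shape, the number of variables and constraints grows linearly with $N$, which matches the observation above about solve time.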

What will change in using `jerk` vs `jerk_sliding` is that you won't get that blending between solutions. The kernel ramps up over the first 1/5 of the window, is flat for the middle 3/5, and ramps down over the last 1/5, and the stride is 1/5 of the window, so consecutive windows overlap heavily; a sketch of the kernel and the blending is below.
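
Concretely, the blending amounts to a weighted average of per-window solutions under a trapezoidal kernel. This is a hedged sketch under assumed shapes, with a hypothetical `solve_window` callback standing in for the per-window solver; it is not the library's actual code:

```python
# Sketch of trapezoidal-kernel blending of overlapping window solutions.
import numpy as np

def trapezoid_kernel(window):
    ramp = window // 5
    up = np.arange(1, ramp + 1) / ramp             # ramp up over the first 1/5
    flat = np.ones(window - 2 * ramp)              # flat over the middle 3/5
    return np.concatenate([up, flat, up[::-1]])    # ramp down over the last 1/5

def blend_windows(solve_window, x, window):
    stride = window // 5                           # windows advance by 1/5 of the window
    kernel = trapezoid_kernel(window)
    out = np.zeros(len(x))
    weight = np.zeros(len(x))
    for start in range(0, len(x) - window + 1, stride):
        chunk_solution = solve_window(x[start:start + window])  # e.g. TVR on this chunk
        out[start:start + window] += kernel * chunk_solution
        weight[start:start + window] += kernel
    return out / np.maximum(weight, 1e-12)         # weighted average; ignores tail edge cases
```

With a stride of one fifth of the window, each interior point falls inside five windows, which is exactly why every datapoint gets run through the convex solver several times.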
Does combining solutions smoothly like this provide any kind of benefit? I'm not sure; it's possible, but my statistical intuition says "no", because this kind of local ensembling doesn't have access to any more information than the global algorithm. Especially given all the other methods' approximately equal performance, I doubt these kinds of games could make the solution more accurate, but I didn't test it against the other methods in notebook 4, because this method is currently limited solely to `order=3` (jerk), which is often not the optimal choice. The method could be extended to offer different kernel choices and 1st, 2nd, and 3rd order, but I question the value of torturing this thing with that kind of manipulation when the core algorithm, `tvrdiff`, can natively handle the whole series. This is not quite like `polydiff` or `lineardiff`, where by necessity we have to break the problem up.