An interp that supports gradient #194
Conversation
It should give identical results to numpy's version, but this implementation computes the interpolation weight matrix explicitly, and is thus differentiable.
Imported as is done in numpy.numpy_grads.
|
@duvenaud: opinions?
|
This is a reasonable approach, but this PR replaces numpy's interp() with a slightly incomplete implementation. I would suggest keeping your implementation of interp but hiding it, and using its gradient to define the gradient of numpy's interp.
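A rough sketch of what that registration could look like with autograd's current defvjp API (older autograd releases used defgrad instead). The helper below is illustrative, not the PR's code, and it assumes anp.interp is wrapped as an ordinary autograd primitive, as most numpy functions are:

```python
import numpy as np
import autograd.numpy as anp
from autograd.extend import defvjp  # older autograd versions used defgrad

# interp is linear in fp, so its Jacobian w.r.t. fp is a fixed weight matrix W.
# Here W is recovered column by column from numpy's own interp; only the
# default left/right clamping is handled, mirroring the PR's limitation.
def interp_vjp_fp(ans, x, xp, fp):
    W = np.stack([np.interp(x, xp, col) for col in np.eye(len(xp))], axis=1)
    return lambda g: np.dot(W.T, g)

# None registers a zero vjp for x and xp; only the fp gradient is propagated.
defvjp(anp.interp, None, None, interp_vjp_fp)
```

With this pattern the forward pass still calls numpy's own interp; the hidden weight matrix only appears when a gradient is requested.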
|
Yes, I was playing with that idea too. Last time I tried, I could not import vector_product_jacobian from autograd because of a circular-import-like issue.
|
86820fd to 2f6cc22 (force-pushed)
I am not sure if this is the right approach. It is impossible (at least when I fiddled with it) to use numpy.interp directly as the implementation of its own vjp, because the window widths would have to be fixed. Therefore I wrote a reimplementation of interp as a matrix product. I am not using a sparse matrix because I don't know whether it is appropriate to pull in scipy.sparse for a numpy gradient.
The current version only propagates the gradient to the yp argument. It is trivial to add the others -- just use the derivative of W instead of W itself. Adding support for the period != None mode should be possible too.
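For concreteness, here is a minimal, self-contained sketch of the matrix-product formulation described above. The names interp_matrix and my_interp are illustrative, not taken from the PR's diff; numpy's signature calls the yp argument fp:

```python
import numpy as np

def interp_matrix(x, xp):
    # Build the dense weight matrix W such that np.interp(x, xp, fp) == W @ fp
    # for any fp (default left/right clamping only, no period support).
    x, xp = np.asarray(x, dtype=float), np.asarray(xp, dtype=float)
    n, m = len(x), len(xp)
    j = np.clip(np.searchsorted(xp, x), 1, m - 1)  # right edge of each interval
    t = np.clip((x - xp[j - 1]) / (xp[j] - xp[j - 1]), 0.0, 1.0)
    W = np.zeros((n, m))
    W[np.arange(n), j - 1] = 1.0 - t
    W[np.arange(n), j] = t
    return W

def my_interp(x, xp, fp):
    # The whole operation is one matmul, linear in fp,
    # so the vjp w.r.t. fp is simply g -> W.T @ g.
    return interp_matrix(x, xp) @ fp
```

Because everything reduces to a single matmul, the fp-gradient is just W.T applied to the incoming cotangent. W has only two nonzeros per row, which is where scipy.sparse would help if pulling it in were acceptable.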
But we need to convince ourselves that this is the right approach for supporting non-trivial gradients of numpy functions. I think numpy.bincount, at least, will look similar; there may be more on the horizon.
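For comparison, a hedged sketch of how numpy.bincount fits the same pattern (again an illustrative helper, not anything in numpy or this PR):

```python
import numpy as np

def bincount_as_matmul(x, weights, minlength=0):
    # bincount(x, weights) is linear in weights: result = M @ weights, where
    # M[i, j] = 1 iff x[j] == i, so the vjp w.r.t. weights is just g -> g[x].
    n = max(int(x.max()) + 1, minlength)
    M = (np.arange(n)[:, None] == np.asarray(x)[None, :]).astype(float)
    return M @ weights
```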
This code could be expanded to handle other real-space, convolution-like operators with finite support, I think.