This repository was archived by the owner on Oct 15, 2019. It is now read-only.
I do not quite understand the mechanism behind `@customop('numpy')`.
In my case there is an intermediate variable `Q` that is expensive to compute and is needed both for the output and for the gradients.
I also wonder whether I can define gradient functions for multiple parameters (e.g. `w1` and `w2` in the code below).
For example:
```python
@customop('numpy')
def my_operator(X, w1, w2):
    Q = f(X, w1, w2)   # expensive intermediate
    H = g1(Q)
    return H

def my_operator_grad1(ans, X, w1, w2):
    def grad1(g):
        Q = f(X, w1, w2)   # recomputed here
        R = g2(Q)
        return R
    return grad1

def my_operator_grad2(ans, X, w1, w2):
    def grad2(g):
        Q = f(X, w1, w2)   # and recomputed again here
        R = g3(Q)
        return R
    return grad2

my_operator.def_grad(my_operator_grad1, argnum=1)
my_operator.def_grad(my_operator_grad2, argnum=2)
```
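One generic workaround for the repeated `Q = f(X, w1, w2)` (not a documented `customop` feature, just plain Python) is to memoize `f` outside the operator, so the forward pass and both gradient closures share a single computation. The sketch below uses made-up placeholder definitions (`f` as an affine map, `g1 = tanh`) and keys the cache on object identity, which only works while the original input arrays stay alive; the `@customop` decorator is omitted so the snippet runs with plain NumPy.

```python
import numpy as np

_q_cache = {}   # maps input identities -> cached Q
f_calls = 0     # counts real evaluations of f, to show the cache works

def f(X, w1, w2):
    """Stand-in for the expensive intermediate computation."""
    global f_calls
    f_calls += 1
    return X * w1 + w2

def cached_f(X, w1, w2):
    """Compute f once per (X, w1, w2) triple; reuse it afterwards.

    Caution: id()-based keys are only valid while the inputs are
    kept alive by the caller (ids can be reused after GC).
    """
    key = (id(X), id(w1), id(w2))
    if key not in _q_cache:
        _q_cache[key] = f(X, w1, w2)
    return _q_cache[key]

def my_operator(X, w1, w2):
    Q = cached_f(X, w1, w2)        # first (and only) real call to f
    return np.tanh(Q)              # placeholder for g1

def my_operator_grad1(ans, X, w1, w2):
    def grad1(g):
        Q = cached_f(X, w1, w2)    # cache hit: f is not recomputed
        return g * (1.0 - np.tanh(Q) ** 2) * X   # dH/dw1 for these placeholders
    return grad1

def my_operator_grad2(ans, X, w1, w2):
    def grad2(g):
        Q = cached_f(X, w1, w2)    # cache hit again
        return g * (1.0 - np.tanh(Q) ** 2)       # dH/dw2
    return grad2

# Forward pass plus both gradients trigger exactly one evaluation of f.
X = np.array([1.0, 2.0])
w1 = np.array([0.5, 0.5])
w2 = np.array([0.1, 0.1])
H = my_operator(X, w1, w2)
g = np.ones_like(H)
dw1 = my_operator_grad1(H, X, w1, w2)(g)
dw2 = my_operator_grad2(H, X, w1, w2)(g)
```

As for the second question: autograd-style `def_grad` conventionally counts arguments from 0, so `argnum=1` and `argnum=2` in the snippet above do select `w1` and `w2` respectively, and registering one gradient function per `argnum` as shown is the usual pattern.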
Thanks!