Hello TorchSparse devs,

Am I right that `torch.compile` cannot trace TorchSparse because of its custom CUDA kernels? I understand we could wrap the TorchSparse calls in a torch custom op, but then the op would have no `grad` (autograd would not flow through it). Is there a workaround? Thank you.