Conversation
cschenck
left a comment
Overall, it's good. I've made some comments. The vast majority are me being nit-picky, mostly saying you can remove commented-out code if it's not needed. If you wouldn't mind addressing those just so the code can be nice and clean, that would be appreciated.
The only real question I have is that it seems you've changed all the forward/backward methods to static methods, which is totally fine if that's what it takes to work. But I'll admit I haven't worked with PyTorch extensions in a while, so I'm unclear on why this should be done. If you wouldn't mind giving me a little insight into your reasoning there, that would be most helpful.
```python
obj_poses = torch.zeros(1, 1, 7).cuda()
obj_poses[:, :, -1] = 1.0
while True:
    # pdb.set_trace()
```
```python
hashorder = _HashgridOrderFunction(self.radius, self.max_grid_dim, self.cellIDs,
                                   self.cuda_buffer)
idxs = hashorder(locs, lower_bounds, grid_dims)
# hashorder = _HashgridOrderFunction(self.radius, self.max_grid_dim, self.cellIDs,
```
You can probably just remove the commented-out code if it's not needed anymore. I'm assuming this is updated to the new style of PyTorch?
```python
                 self.cellStarts, self.cellEnds, self.include_self)
neighbors = coll(qlocs if qlocs is not None else locs,
                 locs, lower_bounds, grid_dims)
# coll = _ParticleCollisionFunction(self.radius, self.max_collisions, self.cellIDs,
```
```python
self.cuda_buffer = cuda_buffer
```
```python
@staticmethod
def forward(self, locs, lower_bounds, grid_dims):
```
What's the thinking here on making this a static method? I'll admit I haven't looked at this code in a while, but a static method wouldn't have a `self` argument, and I see `self` being used for `save_for_backward`, although it looks like you removed the rest of the `self` uses.
```python
@staticmethod
def backward(self, grad_idxs):
    locs, lower_bounds, grid_dims = self.saved_tensors
```
It looks like the saved tensors from forward are grabbed here for the backward pass. Without `self`, I'm not sure where those would come from. Did PyTorch change how this works?
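For context, the newer `torch.autograd.Function` API does change this: `forward` and `backward` are static methods whose first argument is an explicit context object (conventionally named `ctx`), and `save_for_backward`/`saved_tensors` live on that object rather than on a Function instance. Because the parameter is purely positional, naming it `self` as in this diff still works — it just binds to the context object. A minimal sketch with a toy `Square` function (not the SPNet code):

```python
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # ctx replaces the old per-instance self: tensors needed in
        # backward are stashed on the context object.
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        # the same context object is handed back for the backward pass
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x

x = torch.tensor([3.0], requires_grad=True)
Square.apply(x).backward()
# x.grad is tensor([6.])
```

New-style Functions are invoked through `.apply(...)` rather than by constructing an instance and calling it.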
```python
@staticmethod
def backward(self, grad_output):
    pdb.set_trace()
```
This looks like debugging code?
```python
# Do the compution.
convsp = _ConvSPFunction(self.radius, self.kernel_size, self.dilation,
                         self.dis_norm, self.kernel_fn, self.ncells, self.nshared_device_mem)
# convsp = _ConvSPFunction(self.radius, self.kernel_size, self.dilation, self.dis_norm, self.kernel_fn, self.ncells, self.nshared_device_mem)
```
Same as above: if the commented-out code is not needed, delete it.
```python
# data.shape = BxCxN
data = convsp(qlocs if qlocs is not None else locs,
              locs, data, neighbors, self.weight, self.bias)
# data = convsp(qlocs if qlocs is not None else locs,
```
```python
def forward(self, qlocs, locs, data, neighbors, weight, bias):
    self.save_for_backward(qlocs, locs, data, neighbors, weight, bias)
```
```python
#def __init__(self, radius, kernel_size, dilation, dis_norm, kernel_fn,
```
```cpp
#include <stdio.h>
#include "../external/cub-1.3.2/cub/cub.cuh"
#include "cub/cub.cuh"
```
This is much better than that hacky nonsense I was doing before :) .
Hello Connor, I am a student from UCSD; I believe we contacted you a few weeks ago about using your SPNet work. We updated the code to work with the newest versions of PyTorch and CUDA. We confirmed that your example code (fluid_sim.py) now runs on PyTorch 1.2.0, CUDA 10.1, and Python 3.7.9, and it should work with newer versions as well. Feel free to contact me at s1duan@ucsd.edu if you have any questions. Thanks!