GaussianBlur CV-CUDA Backend #9280
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9280
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 754223f with merge base aa35ca1.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @justincdavis! Thank you for your pull request and welcome to our community.

Action Required
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
AntoineSimoulin left a comment
Thanks for the PR @justincdavis. Left a few comments, looking good otherwise!
test/test_transforms_v2.py (Outdated)
    actual_torch = F.cvcuda_to_tensor(actual)

    if dtype.is_floating_point:
        torch.testing.assert_close(actual_torch, expected, rtol=0, atol=0.3)
Why set atol=0.3 here?
Good question! I added a comment on atol=0.3; it most likely comes from floating point differences between the underlying filter2d in CV-CUDA and torch.conv2d. Let me know if you want more explanation and/or something else here.
Should we set it as in test_functional_image_correctness with torch.testing.assert_close(actual, expected, rtol=0, atol=1) for consistency?
@AntoineSimoulin I ended up rewriting the test setup, and moved all the tests into a single block. Both CV-CUDA and torchvision share the same assert statement now. LMK if you think it looks like a good change.
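For reference, a minimal sketch of what the shared comparison path could look like, assuming the CV-CUDA output is converted back to a torch tensor first (the names follow the snippets in this thread; the final merged test layout is not shown here):

    # Hypothetical sketch of the unified comparison, not the merged code.
    # Assumes `actual` may be a cvcuda.Tensor and `expected` is a torch.Tensor.
    if input_type == "cvcuda.Tensor":
        actual = F.cvcuda_to_tensor(actual)   # convert back to a torch.Tensor
        actual = actual[0].to(device=device)  # drop the batch dim (batch size is 1)
    # One tolerance for both backends, as suggested above.
    torch.testing.assert_close(actual, expected, rtol=0, atol=1)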
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Force-pushed 75e4b20 to 1cb4629
Force-pushed 1cb4629 to 2c6c99f
…import, explicit imports in func
Force-pushed 0019e9f to 3bcc517
zy1git left a comment
I’ve left some comments on this PR.
Please feel free to address them or reach out if you’d like to discuss any points further.
test/test_transforms_v2.py (Outdated)
    if dtype is torch.float16 and device == "cpu":
        pytest.skip("The CPU implementation of float16 on CPU differs from opencv")
    if (dtype != torch.float32 and dtype != torch.uint8) and input_type == "cvcuda.Tensor":
        pytest.skip("CVCUDA does not support non-float32 or uint8 dtypes for gaussian blur")
I feel that this comment is a bit confusing:
- Does it mean "non-(float32 or uint8)" → neither float32 nor uint8?
- Or "(non-float32) or uint8" → something else entirely?
Thus, I recommend using "CVCUDA only supports float32 and uint8 dtypes for gaussian blur".
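A minimal sketch of what the clarified skip could look like if the suggestion above is adopted (an illustration, not the merged code):

    # Hypothetical rewrite of the skip from the diff above, using an
    # unambiguous membership check and the suggested message.
    if input_type == "cvcuda.Tensor" and dtype not in (torch.float32, torch.uint8):
        pytest.skip("CVCUDA only supports float32 and uint8 dtypes for gaussian blur")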
test/test_transforms_v2.py (Outdated)
    if input_type == "cvcuda.Tensor":
        actual = F.cvcuda_to_tensor(actual)
        actual = actual.squeeze(0).to(device=device)
We can also use actual = actual[0].to(device=device) since the batch size is guaranteed to be 1 in this case. Not sure we need to be consistent with the implementation here: https://github.com/pytorch/vision/pull/9277/changes#diff-9c2dde92db86c123fee225e39b7c1ef96e08a3e79a9dcc9a2d68b21ed51a81d0R1315
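For reference, a tiny self-contained sketch showing the two forms are interchangeable when the leading batch dimension is 1 (the shapes here are illustrative):

    import torch

    x = torch.randn(1, 3, 32, 32)           # batch size 1
    # squeeze(0) removes the size-1 batch dim; indexing with [0] selects it.
    assert torch.equal(x.squeeze(0), x[0])   # both are shape (3, 32, 32)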
test/test_transforms_v2.py (Outdated)
    make_image,
    make_video,
    pytest.param(
        make_image_cvcuda, marks=pytest.mark.skipif(not CVCUDA_AVAILABLE, reason="CVCUDA is not available")
See this PR: https://github.com/pytorch/vision/pull/9305/changes
There are other parts with similar issues that also need to be addressed.
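For context, a self-contained sketch of the parametrization pattern from the diff above; the flag and the make_* helpers are stand-ins for the real test utilities:

    import pytest

    CVCUDA_AVAILABLE = False  # stand-in; the real flag detects the cvcuda package

    def make_image():  # stand-in factories for the real test helpers
        ...

    def make_video():
        ...

    def make_image_cvcuda():
        ...

    @pytest.mark.parametrize(
        "make_input",
        [
            make_image,
            make_video,
            pytest.param(
                make_image_cvcuda,
                marks=pytest.mark.skipif(not CVCUDA_AVAILABLE, reason="CVCUDA is not available"),
            ),
        ],
    )
    def test_gaussian_blur_input(make_input):
        ...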
Summary
Implement the CV-CUDA backend kernel for gaussian_blur
How to use
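A minimal usage sketch, assuming the backend is reached through the transforms v2 functional API and that a torch-to-CVCUDA converter mirroring the F.cvcuda_to_tensor helper seen in this thread exists; the converter name is an assumption:

    import torch
    import torchvision.transforms.v2.functional as F

    # Hypothetical usage; the to_cvcuda_tensor name is an assumption.
    img = torch.randint(0, 256, (1, 3, 224, 224), dtype=torch.uint8, device="cuda")
    cv_img = F.to_cvcuda_tensor(img)  # assumed torch -> cvcuda.Tensor converter
    blurred = F.gaussian_blur(cv_img, kernel_size=[5, 5], sigma=[1.5, 1.5])
    out = F.cvcuda_to_tensor(blurred)  # back to a torch.Tensor, as in the tests above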