Fix #8802: read nda_croppeds from input dict when pre-computed #8809
williams145 wants to merge 3 commits into Project-MONAI:dev
Conversation
ImageStats.__call__ crashed with UnboundLocalError when a caller pre-populated "nda_croppeds" in the input dict. The variable was only assigned in the if-branch but referenced unconditionally on both paths. Added the missing else-branch to read the pre-computed value from the dict, and wrapped the method body in try/finally to guarantee grad state is always restored on exit. Fixes Project-MONAI#8802
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration — Configuration used: Path: .coderabbit.yaml; Review profile: CHILL; Plan: Pro
📒 Files selected for processing (2)
✅ Files skipped from review due to trivial changes (1)
📝 Walkthrough
ImageStats.__call__ now wraps channel-wise crop derivation, report construction/validation, and assignment in a try block with a finally that restores the prior PyTorch grad-enabled state. nda_croppeds is reused if present; otherwise it is computed per channel via get_foreground_image. When provided, nda_croppeds must be a list/tuple matching the channel count; otherwise a ValueError is raised. Two unit tests were added: one verifies behavior with precomputed nda_croppeds, the other verifies that the global grad-enabled state is unchanged after calls.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 1 passed, ❌ 2 failed (warnings)
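The save/restore pattern the walkthrough describes can be sketched generically. In this sketch a module-level flag stands in for torch's global grad mode, so it runs without PyTorch installed; all names here are illustrative, not MONAI's or torch's API:

```python
_grad_enabled = True  # stand-in for torch's global grad-enabled flag

def is_grad_enabled():
    return _grad_enabled

def set_grad_enabled(mode):
    global _grad_enabled
    _grad_enabled = bool(mode)

def call_with_grad_disabled(body):
    """Run body with grad disabled; restore the caller's grad state on any exit path."""
    prev = is_grad_enabled()
    set_grad_enabled(False)
    try:
        return body()
    finally:
        # runs whether body returned normally or raised,
        # so the caller's grad mode is never leaked
        set_grad_enabled(prev)
```

Because the restore lives in finally, an exception raised mid-computation still leaves the caller's grad mode untouched, which is exactly the property the added unit test checks.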
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@monai/auto3dseg/analyzer.py`:
- Around lines 260-264: The code uses the caller-supplied d["nda_croppeds"] without validation, which can yield incorrect per-channel stats. In the block handling nda_croppeds (the if/else around "nda_croppeds" and get_foreground_image), verify that d["nda_croppeds"] is a sequence with the same length as ndas and that each element is a valid array-like (e.g., a numpy ndarray) with the expected shape/dtype. If the check fails, fall back to recomputing nda_croppeds = [get_foreground_image(nda) for nda in ndas]. Update the branch that assigns nda_croppeds so it validates d["nda_croppeds"] before using it, and document the fallback behavior.
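The validate-then-fall-back behavior this comment asks for could look like the sketch below. resolve_nda_croppeds is a hypothetical helper for illustration; in MONAI the logic sits inline in ImageStats.__call__ and uses the real get_foreground_image:

```python
def resolve_nda_croppeds(d, ndas, get_foreground_image):
    """Return per-channel foreground crops, reusing d["nda_croppeds"] only when valid."""
    if "nda_croppeds" not in d:
        return [get_foreground_image(nda) for nda in ndas]
    nda_croppeds = d["nda_croppeds"]
    # a pre-computed value must be a sequence with one crop per channel;
    # anything else is silently recomputed rather than trusted
    if not isinstance(nda_croppeds, (list, tuple)) or len(nda_croppeds) != len(ndas):
        return [get_foreground_image(nda) for nda in ndas]
    return list(nda_croppeds)
```

Note that the PR as committed raises a ValueError on an invalid value instead of recomputing; either way, the caller-supplied value is never used unchecked.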
In `@tests/apps/test_auto3dseg.py`:
- Around lines 554-570: The test test_analyzer_grad_state_restored_after_call currently mutates the global torch grad mode without guaranteeing restoration on exceptions. Wrap the two analyzer(data) calls in a try/finally: capture the original state with orig = torch.is_grad_enabled(), set the required state for each subcase with torch.set_grad_enabled(True/False), call analyzer(data), assert the state, and in the finally block restore torch.set_grad_enabled(orig) so the global grad mode is always returned. Remove the trailing torch.set_grad_enabled(True) in favor of the finally restore.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 967bb49c-5419-4036-be0a-b25f2e9d6a47
📒 Files selected for processing (2):
- monai/auto3dseg/analyzer.py
- tests/apps/test_auto3dseg.py
Outdated
def test_analyzer_grad_state_restored_after_call(self):
    # Verify that ImageStats.__call__ always restores the grad-enabled state it found
    # on entry, regardless of which state that was.
    analyzer = ImageStats(image_key="image")
    image = torch.rand(1, 10, 10, 10)
    data = {"image": MetaTensor(image)}

    # grad enabled before call → must still be enabled after
    torch.set_grad_enabled(True)
    analyzer(data)
    assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"

    # grad disabled before call → must still be disabled after
    torch.set_grad_enabled(False)
    analyzer(data)
    assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
    torch.set_grad_enabled(True)  # restore for subsequent tests
Make grad-state cleanup exception-safe in the test.
The test mutates global grad mode; restore the original state in finally to prevent cross-test leakage on failures.
Proposed fix
 def test_analyzer_grad_state_restored_after_call(self):
     # Verify that ImageStats.__call__ always restores the grad-enabled state it found
     # on entry, regardless of which state that was.
     analyzer = ImageStats(image_key="image")
     image = torch.rand(1, 10, 10, 10)
     data = {"image": MetaTensor(image)}
-
-    # grad enabled before call → must still be enabled after
-    torch.set_grad_enabled(True)
-    analyzer(data)
-    assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-
-    # grad disabled before call → must still be disabled after
-    torch.set_grad_enabled(False)
-    analyzer(data)
-    assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
-    torch.set_grad_enabled(True)  # restore for subsequent tests
+
+    original_grad_state = torch.is_grad_enabled()
+    try:
+        # grad enabled before call → must still be enabled after
+        torch.set_grad_enabled(True)
+        analyzer(data)
+        assert torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+
+        # grad disabled before call → must still be disabled after
+        torch.set_grad_enabled(False)
+        analyzer(data)
+        assert not torch.is_grad_enabled(), "grad state was not restored after ImageStats call"
+    finally:
+        torch.set_grad_enabled(original_grad_state)
atharvajoshi01 left a comment
Two fixes in one: reading precomputed nda_croppeds from the dict (the actual bug) and wrapping the whole block in try/finally to restore grad state on exception. Both are correct. The else branch on line ~261 was the missing piece from the original code.
I, UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>, hereby add my Signed-off-by to this commit: 2774923
Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
…and add test docstrings
- Validate pre-computed nda_croppeds is a list with one entry per channel, raising ValueError with a clear message if not
- Convert inline comments to docstrings on test methods to satisfy the docstring coverage CI check
- Wrap the grad-disabled test leg in try/finally so global state is always restored
Signed-off-by: UGBOMEH OGOCHUKWU WILLIAMS <williamsugbomeh@gmail.com>
The quick-py3 (macOS-latest) failure is a pre-existing infrastructure issue: pytype 2024.4.11 fails to install on the macOS runner due to a missing pybind11 dependency, which is unrelated to the changes in this PR. The same failure appears on other open PRs against dev for the same reason.
Note: PR #8803 by @bluehyena covers overlapping ground on issue #8802. The key difference in this PR is explicit validation of the pre-computed nda_croppeds value: a wrong type or mismatched channel count raises a clear ValueError rather than propagating silently to a confusing downstream error.
Problem
ImageStats.__call__ crashes with UnboundLocalError: local variable 'nda_croppeds' referenced before assignment when the caller pre-populates "nda_croppeds" in the input dict.
Root Cause
nda_croppeds was only assigned inside the if "nda_croppeds" not in d: branch, but used unconditionally at lines 267 and 279 regardless of which branch was taken.
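The root cause reduces to this Python scoping pattern (an illustrative reduction, not the MONAI source):

```python
def buggy(d):
    if "nda_croppeds" not in d:
        nda_croppeds = ["computed per channel"]
    # referenced on both paths, but only assigned when the key was absent
    return nda_croppeds

buggy({})  # fine: the if-branch assigned the variable
try:
    buggy({"nda_croppeds": ["precomputed"]})  # key present, so the branch is skipped
except UnboundLocalError:
    print("crashes exactly as reported in #8802")
```

Because Python treats any name assigned anywhere in a function as local, the skipped branch leaves nda_croppeds unbound rather than falling back to some outer value.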
Fix
Added the missing else branch to read the pre-computed value from the dict. Also wrapped the method body in try/finally to guarantee torch.set_grad_enabled is always restored on exit, even if an exception is raised mid-computation.
Testing