fix: skip questions without forecasts in bulk withdraw #4720
Conversation
When a user clicks "Withdraw all" on a question group, the frontend sends every question it considers predicted, including resolved siblings the user never forecast on. The backend previously raised a ValidationError on the first such question, aborting the entire operation. This change makes `withdraw_forecast_bulk` skip questions where the user has no active forecast at the withdrawal time, so bulk withdrawals succeed across mixed groups.

Fixes #4707

Co-authored-by: Cemre Inanc <cemreinanc@users.noreply.github.com>
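The intended behavior can be sketched in plain Python. This is a simplified stand-in, not the project's actual code: `Forecast` here is a dataclass rather than the Django model, and the `withdrawals` shape and field names are assumptions based on the description above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Forecast:
    question_id: int
    author_id: int
    start_time: datetime
    end_time: Optional[datetime] = None  # None = still active

def withdraw_forecast_bulk(forecasts, withdrawals, user_id):
    """Withdraw the user's active forecast on each requested question,
    skipping questions where the user has no active forecast at
    withdraw_at (the old behavior raised and aborted the whole batch)."""
    withdrawn = []
    for w in withdrawals:
        active = [
            f for f in forecasts
            if f.question_id == w["question_id"]
            and f.author_id == user_id
            and f.start_time <= w["withdraw_at"]
            and (f.end_time is None or f.end_time > w["withdraw_at"])
        ]
        if not active:
            continue  # no active forecast here: skip instead of raising
        for f in active:
            f.end_time = w["withdraw_at"]
            withdrawn.append(f)
    return withdrawn
```

With this shape, a batch containing a question the user never forecast on simply contributes nothing to the result instead of failing the whole request.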
📝 Walkthrough

This PR removes the ValidationError that was raised when a bulk forecast withdrawal request encountered a question without an active forecast at the withdrawal time. The function now skips such questions silently, allowing withdraw-all operations to succeed even when users have not forecasted on all questions. A test verifies the no-op behavior for this case.

Changes: withdraw-all no-op handling
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 4 checks passed
🧹 Nitpick comments (2)
tests/unit/test_questions/test_views.py (1)

441-444: ⚡ Quick win: Assert no side effects on other users' forecasts in the no-op test.

Add an assertion that the pre-existing forecast on this question (e.g., author=user1) still exists after user2's withdraw request. That makes the no-op guarantee explicit and protects against cross-user regression.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `tests/unit/test_questions/test_views.py` around lines 441-444, add an assertion to ensure the existing forecast for the original user remains after user2's withdraw no-op: check that `Forecast.objects.filter(question=question_binary_with_forecast_user_1, author=user1).exists()` is True. Add it alongside the existing assertion that user2's forecast does not exist, to explicitly verify no cross-user side effects.

questions/services/forecasts.py (1)
188-207: ⚡ Quick win: Only enqueue post recalculation when a withdrawal actually changed data.

`post` is added to `posts` before the no-op guard, so skipped questions still trigger `run_on_post_forecast`. Move `posts.add(post)` below the `exists()` check to avoid unnecessary async work on no-op withdrawals.

Proposed diff:

```diff
 for withdrawal in withdrawals:
     question = cast(Question, withdrawal["question"])
     post = question.get_post()
-    posts.add(post)
     withdraw_at = withdrawal["withdraw_at"]
@@
     if not user_forecasts.exists():
         continue
+
+    posts.add(post)
```

Also applies to: 226-233
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `questions/services/forecasts.py` around lines 188-207, the code adds `post` to the `posts` set before checking whether the withdrawal actually affects any forecasts, causing no-op withdrawals to still trigger post recalculation; move the `posts.add(post)` call so it only runs after confirming `user_forecasts.exists()` (i.e., place `posts.add(post)` below the `if not user_forecasts.exists(): continue` guard), and apply the same change to the analogous block around lines 226-233, so `run_on_post_forecast` is only enqueued when data was actually changed.
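The ordering fix the reviewer proposes can be isolated in a small stand-alone sketch. The function name and the `has_active_forecast` callback are hypothetical stand-ins for the service loop and the `user_forecasts.exists()` ORM check; the point is only where the `add` happens relative to the guard:

```python
def collect_posts_to_recalculate(withdrawals, has_active_forecast):
    """Return the set of post ids that actually need recalculation.

    has_active_forecast(question_id) stands in for the
    user_forecasts.exists() check in the real service.
    """
    posts = set()
    for w in withdrawals:
        if not has_active_forecast(w["question_id"]):
            continue  # no-op withdrawal: don't enqueue its post
        posts.add(w["post_id"])  # only reached for real withdrawals
    return posts
```

Adding to the set before the `continue` (as the current code does) would enqueue every post in the batch, including those the withdrawal never touched.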
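The reviewer's point — a no-op withdrawal must not disturb other users' data — can be checked even in a plain-Python mock of the forecast store; the real test would assert the same invariant through `Forecast.objects.filter(...)`. Everything below (the set-based store, `withdraw_all`) is an illustrative assumption, not the repository's test code:

```python
def test_withdraw_noop_preserves_other_users():
    # In-memory stand-in for the Forecast table: (question_id, author)
    forecasts = {(1, "user1")}  # user1 has a forecast; user2 does not

    def withdraw_all(store, author):
        # Skip questions where this author has no forecast (the PR's behavior)
        to_remove = {row for row in store if row[1] == author}
        return store - to_remove

    after = withdraw_all(forecasts, "user2")
    assert (1, "user2") not in after  # user2 still has nothing
    assert (1, "user1") in after      # user1's forecast is untouched

test_withdraw_noop_preserves_other_users()
```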
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4ef23375-4900-4aac-a412-f9fd735b9cdd
📒 Files selected for processing (2)
- questions/services/forecasts.py
- tests/unit/test_questions/test_views.py
🚀 Preview Environment: Your preview environment is ready!
Summary
`withdraw_forecast_bulk` now skips questions where the user has no active forecast at `withdraw_at` instead of raising `ValidationError`, so "Withdraw all" works on mixed question groups.

Fixes #4707
Test plan
`pytest tests/unit/test_questions/test_views.py::TestQuestionWithdraw`

Generated with Claude Code