
Fix Report.save_report hang on actively-running projects#181

Open
c-fang wants to merge 4 commits into main from c-fang/save-report-poll-status

Conversation

Contributor

@c-fang commented Apr 30, 2026

Summary

Report.save_report can hang on projects that are still receiving task responses: each polling iteration re-calls request, which re-enters the report-generation flow on the server. Every incoming response invalidates the cached report, so each iteration kicks off a brand-new generation job and the report never reaches READY before the next response invalidates it.

This PR switches the polling to mirror a refresh-style consumer: call request once to start (or reuse) a generation job, capture the returned job_id, and then poll check_status against that specific job. The job will eventually reach COMPLETED (or ERROR) regardless of incoming responses, so the loop terminates.
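The fixed flow can be sketched roughly like this. The endpoint names (request, check_status) and the status strings come from the PR description; the client object, response shapes, and poll_time handling are illustrative stand-ins, not the actual surge code:

```python
import time


def save_report_sketch(client, project_id, report_type,
                       poll_interval=2.0, poll_time=60.0):
    """Job-scoped polling sketch (illustrative, not the surge source)."""
    # Call request exactly once: it starts (or reuses) a generation job.
    response = client.request(project_id, report_type)
    if response["status"] == "READY":
        return response["url"]

    job_id = response["job_id"]  # captured once, reused for every poll
    deadline = time.monotonic() + poll_time
    while time.monotonic() < deadline:
        # Poll the *specific* job, so new task responses on the project
        # can no longer restart generation out from under us.
        status = client.check_status(project_id, job_id)
        if status["status"] == "COMPLETED":
            return status["url"]
        if status["status"] == "ERROR":
            raise RuntimeError("report generation failed")
        time.sleep(poll_interval)
    raise TimeoutError("report not ready within poll_time")
```

The key difference from the old loop is that request appears before the loop, not inside it, so the server-side cache invalidation caused by incoming responses can no longer spawn a new job on every iteration.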

Changes

  • surge/reports.py — save_report now calls request once and polls check_status(project_id, job_id). New poll_interval kwarg (default 2s, matching previous behavior). Cleaned up the download path slightly (removed the inner response shadowing in the urlopen block; preserved the str-vs-IO filepath handling).
  • tests/test_reports.py — adds tests for READY-immediate, CREATING→IN_PROGRESS→COMPLETED, error status, and timeout. Existing empty-project test preserved.
  • setup.py — bump 1.5.21 → 1.5.22.

Behavior preserved

  • Public API surface (positional/kwarg signature, defaults, return type).
  • download_json (which wraps save_report) inherits the fix automatically.
  • HTTP-error handling via SurgeRequestError is unchanged.

Test plan

  • pytest — 40 passed
  • yapf -d -r -e '.cci_pycache/**' . clean

save_report's polling loop re-called request every iteration, hitting
the report-creation endpoint each time. On a project that is still
receiving task responses, every new response invalidates the
server-side cached report, so each iteration of the loop kicks off a
brand-new generation job and the report never reaches READY before
the next response invalidates it. Result: save_report hangs until
poll_time expires.

Switch the polling to mirror how a refresh-style consumer would do
it: call request once to start (or reuse) a generation job, capture
the returned job_id, then poll check_status against that specific
job. The job will eventually reach COMPLETED (or ERROR) regardless
of incoming responses, so the loop terminates.

Also adds a poll_interval kwarg (default 2s, matching previous
behavior) and tests for the READY-immediate, polling, error, and
timeout paths.

Bumps version to 1.5.22.

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 016fcc3f50


Comment thread on surge/reports.py (outdated)
c-fang added 3 commits April 29, 2026 23:15

Per Codex review: check_status's IN_PROGRESS response (HTTP 202) is
documented as `{status: "IN_PROGRESS"}` with no job_id, so reading
response.job_id on every loop iteration would AttributeError on the
second poll for any report that takes more than poll_interval to
generate.

Capture the job_id once from the initial CREATING response and reuse
it across polls. Update only when the server returns RETRYING (which
does carry a new job_id, indicating a fresh underlying job).

The previous test masked the bug by giving the mocked IN_PROGRESS
response a job_id attribute it wouldn't have in reality. Fix the
mock and add a test for the RETRYING job_id switch.
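The job_id handling described above can be sketched as follows. The status strings (IN_PROGRESS, RETRYING, COMPLETED, ERROR) and the fact that only CREATING and RETRYING responses carry a job_id are taken from the commit message; the function shape and response dicts are hypothetical:

```python
def poll_job(check_status, project_id, initial):
    """Sketch of job_id capture per the commit message (names illustrative)."""
    job_id = initial["job_id"]  # captured once from the CREATING response
    while True:
        resp = check_status(project_id, job_id)
        if resp["status"] == "RETRYING":
            # RETRYING does carry a fresh job_id: the server started a new
            # underlying job, so switch to polling that one.
            job_id = resp["job_id"]
            continue
        if resp["status"] in ("COMPLETED", "ERROR"):
            return resp
        # IN_PROGRESS (HTTP 202) carries no job_id field; keep the one we
        # already have rather than reading resp["job_id"] and blowing up.
```

A real implementation would of course also sleep between polls and enforce a deadline; those details are omitted here to isolate the job_id logic.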

Pre-existing leak: gzip.open(tmp_file.name, 'r').read() returned the
buffer but never closed the underlying file. Wrap the gzip.open in a
with block so the handle is closed deterministically rather than
relying on garbage collection.
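The before/after pattern looks like this (a minimal sketch; the temp-file setup is illustrative and, as the next commit notes, the reopen-by-name still assumes a POSIX platform):

```python
import gzip
import tempfile

with tempfile.NamedTemporaryFile(suffix=".gz") as tmp_file:
    # Stand-in for the downloaded gzipped report body.
    with gzip.open(tmp_file.name, "wb") as gz:
        gz.write(b"report data")

    # Before (leaks the handle until garbage collection):
    #   buf = gzip.open(tmp_file.name, "rb").read()
    # After: the with block closes the handle deterministically.
    with gzip.open(tmp_file.name, "rb") as gz:
        buf = gz.read()
```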

NamedTemporaryFile can't be reopened by path on Windows while the
original handle is open. Switch from gzip.open(tmp_file.name) to
gzip.GzipFile(fileobj=tmp_file) so we reuse the existing handle.
Pre-existing issue, but I'm rewriting this block.
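A sketch of the handle-reuse pattern; the write/flush/seek setup stands in for the download path and is not the actual surge code:

```python
import gzip
import tempfile

with tempfile.NamedTemporaryFile() as tmp_file:
    # Stand-in for writing the downloaded gzipped body to the temp file.
    tmp_file.write(gzip.compress(b"report data"))
    tmp_file.flush()
    tmp_file.seek(0)

    # Reuse the already-open handle via fileobj= instead of reopening by
    # tmp_file.name, which fails on Windows while the handle is held open.
    with gzip.GzipFile(fileobj=tmp_file) as gz:
        buf = gz.read()
```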
@c-fang c-fang requested a review from timbauman April 30, 2026 21:23
