Conversation
We're seeing a number of connection issues with the Serper API that resolve themselves when retrying. Instead of requiring the agent to do this and handling it in the Runbook, we should handle it in the action.
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
if not is_connection_error and not isinstance(e, HTTPError):
    # Not a retryable error, raise immediately
    raise ActionError(f"Non-retryable error occurred: {str(e)}")

# ...

if attempt == max_retries:
```
Stop retrying non-connection HTTP errors
HTTP failures from Serper (e.g., 401 for an invalid key or 400 for a malformed query) are now retried up to the max_retries count, because HTTPError does not hit the non-retryable branch at lines 49‑51 and only raises on the final attempt. That adds delay and repeated API calls for deterministic client errors that previously failed fast; per the PR description, only connection drops were meant to be retried.
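One way to restore fail-fast behavior for deterministic HTTP errors is to gate retries on the status code. This is a sketch only: the `is_retryable` helper and the stdlib `HTTPError`/`RemoteDisconnected` types are assumptions, not the PR's actual code.

```python
from http.client import RemoteDisconnected
from urllib.error import HTTPError

# Only transient server-side statuses are worth retrying;
# 4xx client errors (bad key, bad query) should fail fast.
RETRYABLE_STATUS_CODES = {500, 502, 503, 504}

def is_retryable(exc: Exception) -> bool:
    """Decide whether a failed Serper call should be retried."""
    if isinstance(exc, RemoteDisconnected):
        return True  # the dropped-connection case this PR targets
    if isinstance(exc, HTTPError):
        return exc.code in RETRYABLE_STATUS_CODES
    return False
```

With a predicate like this, the non-retryable branch can raise immediately for 400/401 while still retrying dropped connections and 5xx responses.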
Here's the filled-out PR template for the retry logic addition:
Description
This PR adds exponential backoff retry logic to the `search_google` action to handle intermittent `RemoteDisconnected` exceptions that occur when the Serper API connection is unexpectedly closed.

**Problem:** The action was experiencing intermittent `RemoteDisconnected("Remote end closed connection without response")` exceptions that would resolve themselves when retrying, causing unreliable search operations.

**Solution:** Implemented a retry mechanism with exponential backoff (1s, 2s, 4s, 5s max) that automatically retries connection-related failures up to 3 times before failing.
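The backoff schedule described above (1s, 2s, 4s, capped at 5s, up to 3 retries) can be sketched as follows. Function and constant names are illustrative, not the PR's exact code:

```python
import time
from http.client import RemoteDisconnected

MAX_RETRIES = 3   # 1 initial attempt + up to 3 retries
MAX_DELAY = 5.0   # cap the exponential growth at 5 seconds

def backoff_delay(attempt: int) -> float:
    """Delay before retry number `attempt`: 1s, 2s, 4s, then capped at 5s."""
    return min(2.0 ** attempt, MAX_DELAY)

def call_with_retries(fn):
    """Retry `fn` on dropped connections, re-raising on the final attempt."""
    for attempt in range(MAX_RETRIES + 1):
        try:
            return fn()
        except RemoteDisconnected:
            if attempt == MAX_RETRIES:
                raise
            time.sleep(backoff_delay(attempt))
```

Capping the delay keeps the worst-case wait bounded (1 + 2 + 4 = 7s across three retries) while still spacing out attempts enough for transient connection drops to clear.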
**Dependencies:** No new dependencies required; uses the existing `time` module and string pattern matching for error detection.

How can (was) this tested?
The retry logic can be tested by:
Test configuration:
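One way to exercise the retry path in a unit test is to simulate a connection that drops twice and then succeeds. This is a sketch: the `call_with_retries` stand-in below is illustrative and not the action's actual wrapper.

```python
import time
from http.client import RemoteDisconnected
from unittest import mock

# Stand-in for the action's retry wrapper (illustrative, not the PR's code).
def call_with_retries(fn, max_retries=3):
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except RemoteDisconnected:
            if attempt == max_retries:
                raise
            time.sleep(min(2 ** attempt, 5))

def test_retries_then_succeeds():
    # Fail twice with a dropped connection, then return a result.
    fake_call = mock.Mock(
        side_effect=[RemoteDisconnected(), RemoteDisconnected(), {"organic": []}]
    )
    with mock.patch("time.sleep"):  # skip real backoff delays in tests
        result = call_with_retries(fake_call)
    assert result == {"organic": []}
    assert fake_call.call_count == 3
```

Patching `time.sleep` keeps the test fast while still verifying that the call is attempted the expected number of times.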
Screenshots (if needed)
N/A - This is a backend reliability improvement without UI changes.
Checklist:
Note: The retry logic is designed to be transparent to users: no changes to the action's public API or behavior, only improved reliability for connection failures.