x402 Client reuse #162
Changes from all commits
```diff
@@ -122,3 +122,13 @@ def __init__(
+        if twins_api_key is not None:
+            self.twins = Twins(api_key=twins_api_key)
+
+    def close(self) -> None:
+        """Close underlying SDK resources."""
+        self.llm.close()
+
+    def __enter__(self):
+        return self
+
+    def __exit__(self, exc_type, exc, tb):
+        self.close()
```
Comment on lines +126 to +134
```diff
@@ -2,6 +2,8 @@
 import asyncio
 import json
+import threading
+from queue import Queue
```
Comment on lines +5 to +6
```diff
 from typing import AsyncGenerator, Dict, List, Optional, Union

 import httpx

@@ -62,6 +64,58 @@ def __init__(self, wallet_account: LocalAccount, og_llm_server_url: str, og_llm_
         self._og_llm_server_url = og_llm_server_url
         self._og_llm_streaming_server_url = og_llm_streaming_server_url

+        signer = EthAccountSignerv2(self._wallet_account)
+        self._x402_client = x402Clientv2()
+        register_exact_evm_clientv2(self._x402_client, signer, networks=[BASE_TESTNET_NETWORK])
+        register_upto_evm_clientv2(self._x402_client, signer, networks=[BASE_TESTNET_NETWORK])
+
+        self._request_client_ctx = None
+        self._request_client = None
+        self._stream_client_ctx = None
+        self._stream_client = None
+        self._closed = False
+
+        self._loop = asyncio.new_event_loop()
+        self._loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)
```
Suggested change:

```diff
-        self._loop_thread = threading.Thread(target=self._run_event_loop, daemon=True)
+        self._loop_thread = threading.Thread(target=self._run_event_loop)
```
Copilot AI · Feb 18, 2026
If an exception occurs during _run_coroutine(self._initialize_http_clients()) on line 81, the event loop thread will be left running because close() won't be called. Consider wrapping the initialization in a try-except block to ensure cleanup on failure, or implementing a __del__ method as a safety net to stop the event loop thread if the object is garbage collected without being properly closed.
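A minimal sketch of that cleanup-on-failure pattern. The class and attribute names below are illustrative stand-ins, not the SDK's actual API; the `fail` flag stands in for the real initialization coroutine raising.

```python
import asyncio
import threading


class EventLoopOwner:
    """Owns a background event loop thread; tears it down if init fails."""

    def __init__(self, fail: bool = False):
        self._loop = asyncio.new_event_loop()
        self._loop_thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._loop_thread.start()
        try:
            # Stand-in for _run_coroutine(self._initialize_http_clients()).
            if fail:
                raise RuntimeError("initialization failed")
        except Exception:
            # Ensure the loop thread is not leaked when __init__ raises.
            self.close()
            raise

    def close(self) -> None:
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._loop_thread.join(timeout=5)
        self._loop.close()
```

Constructing with `fail=True` propagates the error but leaves no running thread behind.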
Copilot AI · Feb 18, 2026
This is a potentially breaking API change. Previously, users could create an LLM client and let it be garbage collected without explicitly managing resources. Now, with the persistent event loop thread and HTTP clients, users should either use the context manager pattern or explicitly call close(). However, the daemon thread setting (line 79) means the old behavior still works (thread exits when program exits), but it's not clean. Consider updating the class docstring to document the new resource management requirements and provide usage examples with the context manager.
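A sketch of the resource-management contract such documentation could describe, using a minimal stand-in class rather than the real client (the docstring wording is hypothetical):

```python
class LLM:
    """Minimal stand-in illustrating the documented usage contract.

    Either use the client as a context manager:

        with LLM() as llm:
            ...  # make requests

    or close it explicitly when done:

        llm = LLM()
        try:
            ...  # make requests
        finally:
            llm.close()
    """

    def __init__(self):
        self.closed = False

    def close(self) -> None:
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
```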
Copilot AI · Feb 18, 2026
The _run_coroutine method calls future.result() without a timeout on line 91. If the coroutine hangs or takes a very long time, this will block indefinitely. Consider adding a timeout parameter to future.result() to prevent indefinite blocking, especially since network operations are involved in the HTTP client initialization and requests.
Suggested change:

```diff
-    def _run_coroutine(self, coroutine):
-        if self._closed:
-            raise OpenGradientError("LLM client is closed.")
-        future = asyncio.run_coroutine_threadsafe(coroutine, self._loop)
-        return future.result()
+    def _run_coroutine(self, coroutine, timeout: Optional[float] = None):
+        if self._closed:
+            raise OpenGradientError("LLM client is closed.")
+        future = asyncio.run_coroutine_threadsafe(coroutine, self._loop)
+        try:
+            # Default to the configured overall HTTP timeout if none is provided.
+            effective_timeout = TIMEOUT.timeout if timeout is None else timeout
+            return future.result(timeout=effective_timeout)
+        except concurrent.futures.TimeoutError as exc:
+            # future.result() raises the concurrent.futures variant, which is
+            # only aliased to asyncio.TimeoutError from Python 3.11 onward.
+            future.cancel()
+            raise OpenGradientError(
+                f"Timed out after {effective_timeout} seconds while waiting for asynchronous operation to complete."
+            ) from exc
```
Copilot AI · Feb 18, 2026
There's no synchronization between close() and ongoing requests. If close() is called while a request is in progress via _run_coroutine(), the HTTP clients will be closed (line 114) while the request coroutine is still using them. This could lead to errors in the ongoing request. Consider either: (1) using a lock to prevent close() from proceeding while requests are in progress, (2) keeping track of active requests and waiting for them to complete in close(), or (3) documenting that users must ensure no requests are in progress before calling close().
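Option (2) could be sketched roughly as below, with a hypothetical tracker object rather than the PR's actual code: requests register themselves, and close() refuses new work and waits for in-flight requests to drain.

```python
import threading


class RequestTracker:
    """Counts in-flight requests so close() can wait for them to drain."""

    def __init__(self):
        self._cond = threading.Condition()
        self._active = 0
        self._closed = False

    def begin(self) -> None:
        with self._cond:
            if self._closed:
                raise RuntimeError("client is closed")
            self._active += 1

    def end(self) -> None:
        with self._cond:
            self._active -= 1
            self._cond.notify_all()

    def close(self, timeout: float = 5.0) -> None:
        with self._cond:
            self._closed = True  # reject new requests from this point on
            # Wait (bounded) for requests already in flight to finish.
            self._cond.wait_for(lambda: self._active == 0, timeout=timeout)
```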
Copilot AI · Feb 18, 2026
The thread join on line 117 has a 5-second timeout, but there's no handling for the case where the thread doesn't join within that time. If the event loop thread is still running after 5 seconds (e.g., due to a stuck coroutine), the daemon thread will be left running. Consider either removing the timeout to wait indefinitely, or adding logging/error handling for timeout cases. Additionally, consider forcefully canceling all pending tasks in the loop before stopping it to ensure a clean shutdown.
Suggested change:

```diff
-        self._loop_thread.join(timeout=5)
+        self._loop_thread.join()
```
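One possible shape for the stricter shutdown described above (cancel outstanding tasks, stop the loop, then verify the join actually succeeded), shown against a generic loop/thread pair rather than the PR's class:

```python
import asyncio
import threading


def shutdown_loop(loop: asyncio.AbstractEventLoop, thread: threading.Thread) -> None:
    """Cancel pending tasks, stop the loop, join the thread, close the loop."""

    def _cancel_all() -> None:
        # Runs on the loop thread, so it may touch the loop's tasks safely.
        for task in asyncio.all_tasks(loop):
            task.cancel()
        loop.stop()

    loop.call_soon_threadsafe(_cancel_all)
    thread.join(timeout=5)
    if thread.is_alive():
        # Surface the stuck shutdown instead of silently leaking the thread.
        raise RuntimeError("event loop thread did not shut down within 5 seconds")
    loop.close()
```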
Copilot AI · Feb 18, 2026
The _loop variable holds a reference to an event loop that is never explicitly closed. While asyncio.new_event_loop() creates a loop, the loop should be closed using loop.close() after it has been stopped to properly release resources. Consider adding self._loop.close() after self._loop_thread.join(timeout=5) in the close() method to ensure the event loop is properly disposed of.
Suggested change:

```diff
         self._loop_thread.join(timeout=5)
+        if not self._loop.is_running() and not self._loop.is_closed():
+            self._loop.close()
```
Copilot AI · Feb 18, 2026
There's a potential race condition in the close() method. If multiple threads call close() concurrently, both could pass the check on line 112 before either sets _closed = True on line 115. This could lead to attempting to close the HTTP clients multiple times or stopping the event loop multiple times. Consider using a lock to protect the close() method or checking _closed again after closing the HTTP clients but before stopping the loop.
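A sketch of the lock-based fix, reduced to a minimal stand-in class: the lock plus the `_closed` flag make close() idempotent and safe to call from multiple threads.

```python
import threading


class Closable:
    """close() runs its teardown body at most once, even under contention."""

    def __init__(self):
        self._close_lock = threading.Lock()
        self._closed = False
        self.teardown_count = 0  # instrumentation for the example only

    def close(self) -> None:
        with self._close_lock:
            if self._closed:
                return  # another caller already won the race
            self._closed = True
            # Stand-in for closing HTTP clients and stopping the loop.
            self.teardown_count += 1
```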
Copilot AI · Feb 18, 2026
The close() method is a new public API method but lacks documentation. All other public methods in this class have comprehensive docstrings (e.g., ensure_opg_approval, completion, chat). The close() method should include a docstring explaining its purpose, when it should be called, and what resources it releases. This is especially important since this is a breaking change affecting resource management.
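For illustration only, a stand-in showing the kind of docstring the comment asks for (the wording is hypothetical, not the project's):

```python
class LLM:
    """Minimal stand-in for the client class."""

    def __init__(self):
        self._closed = False

    def close(self) -> None:
        """Release the client's long-lived resources.

        Stops the background event loop thread and closes the shared HTTP
        clients. Call this once you are done making requests, or use the
        client as a context manager so it is called automatically. After
        close(), further requests raise an error.
        """
        self._closed = True
</antml_parameter>```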
Copilot AI · Feb 18, 2026
The order of operations in close() could be problematic. The method sets _closed = True (line 115) before stopping the event loop (line 116). If any coroutine running in the event loop tries to spawn a new coroutine via _run_coroutine after _closed is set but before the loop stops, it will raise "LLM client is closed" error. Consider setting _closed = True after successfully stopping the event loop and joining the thread.
Suggested change:

```diff
-        self._closed = True
-        self._loop.call_soon_threadsafe(self._loop.stop)
-        self._loop_thread.join(timeout=5)
+        self._loop.call_soon_threadsafe(self._loop.stop)
+        self._loop_thread.join(timeout=5)
+        self._closed = True
```
Copilot AI · Feb 18, 2026
Using a shared HTTP client across multiple requests could cause issues if a request fails and leaves the client in an inconsistent state. Consider whether connection pooling errors or HTTP connection issues could affect subsequent requests. The httpx library's connection pool should handle this gracefully, but it would be helpful to verify this behavior with the x402HttpxClientv2 wrapper. Additionally, ensure that the client can handle concurrent requests if multiple threads call completion() or chat() simultaneously.
Suggested change:

```diff
-        response = await self._request_client.post(
-            self._og_llm_server_url + "/v1/completions", json=payload, headers=headers, timeout=60
-        )
+        async with x402HttpxClientv2(timeout=TIMEOUT, limits=LIMITS) as client:
+            response = await client.post(
+                self._og_llm_server_url + "/v1/completions",
+                json=payload,
+                headers=headers,
+                timeout=60,
+            )
```
Copilot AI · Feb 18, 2026
The _tee_llm_chat_stream_sync method submits a coroutine to the shared event loop for streaming. If multiple threads call this method concurrently, they'll all be using the same event loop thread. While asyncio event loops are designed to handle multiple coroutines, the queue-based communication pattern could lead to mixed chunks from different streams if not handled carefully. Verify that each streaming call gets its own queue instance (which it does on line 380) and that there's no cross-contamination between concurrent streaming requests.
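The per-call-queue pattern described above can be sketched as a generic async-to-sync bridge; the names here are illustrative, not the SDK's actual `_tee_llm_chat_stream_sync`. Because each call allocates its own `Queue`, concurrent streams sharing one loop thread cannot mix chunks.

```python
import asyncio
import threading
from queue import Queue
from typing import AsyncGenerator, Iterator

_SENTINEL = object()  # marks end-of-stream on the queue


def stream_sync(loop: asyncio.AbstractEventLoop, agen: AsyncGenerator) -> Iterator:
    """Consume an async generator running on `loop` from synchronous code."""
    queue: Queue = Queue()  # fresh queue per call keeps streams isolated

    async def _pump() -> None:
        try:
            async for chunk in agen:
                queue.put(chunk)
        finally:
            queue.put(_SENTINEL)  # always unblock the consumer

    future = asyncio.run_coroutine_threadsafe(_pump(), loop)
    try:
        while True:
            chunk = queue.get()
            if chunk is _SENTINEL:
                break
            yield chunk
    finally:
        # Only cancel if the pump is still running (e.g. consumer bailed early).
        if not future.done():
            future.cancel()
```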
Copilot AI · Feb 18, 2026
In the finally block, the future is cancelled on line 417 even if it has already completed successfully. Cancelling a completed future is harmless but unnecessary. Consider only cancelling if the future is not done, or restructure the error handling to avoid double cancellation (both in the except block on line 413 and in the finally block on line 417).
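The conditional cancellation could look like the following (the helper name is hypothetical):

```python
from concurrent.futures import Future


def cancel_if_pending(future: Future) -> bool:
    """Cancel only when the future hasn't finished; True if it was cancelled."""
    if not future.done():
        return future.cancel()
    return False  # already completed; nothing to cancel
```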
The Client class's context manager implementation (__enter__ and __exit__) is missing documentation. The class docstring should be updated to mention that Client supports the context manager protocol and that users should either use it with a with statement or explicitly call close() when done. The existing usage examples in the docstring don't demonstrate proper resource cleanup.