Updated changes for TEE x402 #165

Open
kylexqian wants to merge 5 commits into main from kyle/tee-changes-2

Conversation

@kylexqian (Contributor)

Currently using the experimental x402 module locally; will remove it and switch to the official PyPI version once validated.

Copilot AI left a comment

Pull request overview

Updates the SDK’s TEE/x402 integration to work with newer model identifiers and an experimental x402 module, while adding client-side normalization for block-structured chat content and changing the default TEE LLM endpoint.

Changes:

  • Update TEE_LLM model enum values (add new models, remove older identifiers).
  • Add ad-hoc TLS certificate fetching/pinning and pass verify= into the x402 httpx client.
  • Change default TEE LLM server URLs to an HTTPS IP endpoint; bump SDK + dependency versions; tweak CLI/Makefile examples.

Reviewed changes

Copilot reviewed 6 out of 7 changed files in this pull request and generated 12 comments.

Summary per file:
src/opengradient/types.py: Updates the list of supported TEE model IDs (but removes existing enum members).
src/opengradient/defaults.py: Points default LLM endpoints to a hard-coded IP address and adds a TODO about registry-based discovery.
src/opengradient/client/llm.py: Adds TLS “pinning” logic, wires it into the HTTP client, and normalizes block-list content responses.
src/opengradient/cli.py: Normalizes list-of-block content when printing chat output.
requirements.txt: Bumps og-test-v2-x402 version.
pyproject.toml: Bumps package version and og-test-v2-x402 dependency.
Makefile: Updates default MODEL and adds tool-calling examples; adjusts .PHONY.


CLAUDE_HAIKU_4_5 = "anthropic/claude-haiku-4-5"
CLAUDE_OPUS_4_5 = "anthropic/claude-opus-4-5"
CLAUDE_OPUS_4_6 = "anthropic/claude-opus-4-6"

Copilot AI Feb 27, 2026

The Anthropic enum names were replaced (e.g., CLAUDE_3_5_HAIKU/CLAUDE_3_7_SONNET removed). This will break existing SDK usage and internal docs/tests that reference the old names; consider keeping backwards-compatible aliases or updating all references together.

Suggested change
# Backwards-compatible aliases for deprecated Anthropic enum names
# These map older CLAUDE_3_* names to the newer 4.x model identifiers.
CLAUDE_3_7_SONNET = CLAUDE_SONNET_4_5
CLAUDE_3_5_HAIKU = CLAUDE_HAIKU_4_5

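One way to keep the old names working, sketched below with a hypothetical two-member subset of the SDK's `TEE_LLM` enum (the real enum has many more members): in a Python `Enum`, reassigning an existing value creates an alias rather than a new member, so code importing the deprecated names resolves to the new ones.

```python
from enum import Enum

class TEE_LLM(str, Enum):
    # Hypothetical subset of the SDK's model identifiers.
    CLAUDE_SONNET_4_5 = "anthropic/claude-sonnet-4-5"
    CLAUDE_HAIKU_4_5 = "anthropic/claude-haiku-4-5"
    # Reusing an existing value makes the name an alias, not a new
    # member, so callers referencing the old names keep working.
    CLAUDE_3_7_SONNET = "anthropic/claude-sonnet-4-5"
    CLAUDE_3_5_HAIKU = "anthropic/claude-haiku-4-5"

print(TEE_LLM.CLAUDE_3_5_HAIKU is TEE_LLM.CLAUDE_HAIKU_4_5)  # True
print(len(list(TEE_LLM)))  # 2: aliases are not counted as members
```

Aliases also round-trip cleanly through value lookup: `TEE_LLM("anthropic/claude-haiku-4-5")` returns the canonical member, never the alias.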

.PHONY: install build publish check docs test utils_test client_test langchain_adapter_test opg_token_test integrationtest examples \
infer completion chat chat-stream chat-tool
.PHONY: install build publish check docs test utils_test client_test integrationtest examples \
Copilot AI Feb 27, 2026

.PHONY no longer lists langchain_adapter_test and opg_token_test, but those targets still exist and are dependencies of test. If a file/dir with the same name exists, make may skip running the recipe; add them back to .PHONY (or remove them from test if intentionally deprecated).

Suggested change
.PHONY: install build publish check docs test utils_test client_test integrationtest examples \
.PHONY: install build publish check docs test utils_test client_test langchain_adapter_test opg_token_test integrationtest examples \

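The failure mode is easy to reproduce. A minimal sketch (assuming GNU make is on PATH; the target name mirrors the Makefile under review, but the demo Makefile itself is hypothetical): when a file shadows a non-.PHONY target, make reports it up to date and skips the recipe.

```python
import pathlib
import subprocess
import tempfile

# A target shadowed by a same-named file is treated as up to date and
# its recipe is skipped, unless the target is declared .PHONY.
demo = pathlib.Path(tempfile.mkdtemp())
(demo / "Makefile").write_text("opg_token_test:\n\t@echo ran\n")
(demo / "opg_token_test").touch()  # file shadows the target

out = subprocess.run(["make", "opg_token_test"], cwd=demo,
                     capture_output=True, text=True).stdout
print("ran" in out)  # False: recipe was skipped

(demo / "Makefile").write_text(
    ".PHONY: opg_token_test\nopg_token_test:\n\t@echo ran\n")
out = subprocess.run(["make", "opg_token_test"], cwd=demo,
                     capture_output=True, text=True).stdout
print("ran" in out)  # True: .PHONY forces the recipe to run
```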
@@ -1,7 +1,7 @@
# OpenGradient SDK Makefile

# Default model for testing (override with: make chat MODEL=openai/gpt-4o)
Copilot AI Feb 27, 2026

The header comment still says the default model override example is openai/gpt-4o, but the default has changed to Gemini 3. Update the comment so the documented override matches the actual defaults.

Suggested change
# Default model for testing (override with: make chat MODEL=openai/gpt-4o)
# Default model for testing (override with: make chat MODEL=google/gemini-3-pro-preview)

@@ -609,7 +609,15 @@ def print_llm_chat_result(model_cid, tx_hash, finish_reason, chat_output, is_van
click.echo()
for key, value in chat_output.items():
if value != None and value != "" and value != "[]" and value != []:
Copilot AI Feb 27, 2026

Use is not None for None checks (and consider simplifying the condition using truthiness) rather than value != None, which is non-idiomatic and can be incorrect for types with custom equality.

Suggested change
if value != None and value != "" and value != "[]" and value != []:
if value is not None and value not in ("", "[]", []):

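Why this matters in practice: `value != None` dispatches to the object's `__eq__`/`__ne__`, so a type with permissive equality can defeat the guard. A minimal sketch with a hypothetical `Chatty` class standing in for any custom-equality value:

```python
class Chatty:
    # Hypothetical type whose equality answers True for everything,
    # e.g. a mock object or a lazily-compared response wrapper.
    def __eq__(self, other):
        return True

v = Chatty()
print(v != None)      # False: custom equality fired, guard wrongly skips v
print(v is not None)  # True: identity check is immune to custom __eq__
```

Note the suggested `value not in ("", "[]", [])` still uses equality for the emptiness checks, which is what you want there; only the None check needs identity.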
Comment on lines +119 to +122
_fetch_tls_cert_as_ssl_context(self._og_llm_server_url) or False
)
self._streaming_tls_verify: Union[ssl.SSLContext, bool] = (
_fetch_tls_cert_as_ssl_context(self._og_llm_streaming_server_url) or False
Copilot AI Feb 27, 2026

_fetch_tls_cert_as_ssl_context(...) or False disables TLS verification whenever the cert fetch fails (unreachable server/parse error/etc). That means a transient failure silently downgrades to insecure HTTPS for all requests; prefer defaulting to True (system CAs) or raising unless the user explicitly opts into insecure mode.

Suggested change
_fetch_tls_cert_as_ssl_context(self._og_llm_server_url) or False
)
self._streaming_tls_verify: Union[ssl.SSLContext, bool] = (
_fetch_tls_cert_as_ssl_context(self._og_llm_streaming_server_url) or False
_fetch_tls_cert_as_ssl_context(self._og_llm_server_url) or True
)
self._streaming_tls_verify: Union[ssl.SSLContext, bool] = (
_fetch_tls_cert_as_ssl_context(self._og_llm_streaming_server_url) or True

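The fail-secure version of this pattern is small. A sketch, with `fetch_cert` as a hypothetical stand-in for `_fetch_tls_cert_as_ssl_context`: return the pinned context when the fetch succeeds, otherwise fall back to `True` (system CA verification) instead of `False`.

```python
import ssl

def resolve_tls_verify(fetch_cert):
    """Return a pinned SSLContext when the cert fetch succeeds;
    on failure, fall back to system CAs (True) rather than
    disabling verification (False)."""
    ctx = fetch_cert()
    return ctx if ctx is not None else True

# A transient fetch failure no longer downgrades to insecure HTTPS:
print(resolve_tls_verify(lambda: None))  # True
```

Since an `ssl.SSLContext` is always truthy, `fetch_cert() or True` behaves the same here, but the explicit `is not None` form makes the fallback intent unmissable.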
Comment on lines +43 to +47
def _fetch_tls_cert_as_ssl_context(server_url: str) -> Optional[ssl.SSLContext]:
"""
Connect to a server, retrieve its TLS certificate (TOFU),
and return an ssl.SSLContext that trusts ONLY that certificate.
Copilot AI Feb 27, 2026

New TLS pinning/verification logic (_fetch_tls_cert_as_ssl_context and passing verify= into the x402 httpx client) isn’t covered by existing unit tests (current tests mostly patch _tee_llm_chat/_tee_llm_completion). Add tests that mock cert fetch success/failure and assert the verify argument passed to the http client, including ensuring failure paths don’t silently downgrade to insecure TLS.

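Such a test can stay free of real sockets by mocking the cert fetch and asserting on the `verify` argument. A sketch with `unittest.mock`; `FakeClient` and `build_client` are hypothetical stand-ins for the x402 httpx client wrapper and the construction code under review:

```python
from unittest import mock

class FakeClient:
    """Hypothetical client wrapper that records its verify argument."""
    def __init__(self, *, verify=True):
        self.verify = verify

def build_client(fetch_cert, url):
    # Same shape as the code under review, but failing closed:
    # pinned context if available, else system CAs, never False.
    ctx = fetch_cert(url)
    return FakeClient(verify=ctx if ctx is not None else True)

def test_fetch_failure_keeps_tls_verification():
    fetch = mock.Mock(return_value=None)  # simulate unreachable server
    client = build_client(fetch, "https://198.51.100.7")
    assert client.verify is True

def test_fetch_success_pins_context():
    sentinel = object()  # stands in for the pinned SSLContext
    client = build_client(mock.Mock(return_value=sentinel), "https://198.51.100.7")
    assert client.verify is sentinel

test_fetch_failure_keeps_tls_verification()
test_fetch_success_pins_context()
```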
_fetch_tls_cert_as_ssl_context(self._og_llm_streaming_server_url) or False
)

print(f"Using Wallet Account Address: {self._wallet_account.key.hex()}")
Copilot AI Feb 27, 2026

The log line prints self._wallet_account.key.hex(), which is the private key, not the wallet address. This leaks credentials to stdout and can compromise funds; remove this print or log only self._wallet_account.address (and preferably behind a debug logger).

Suggested change
print(f"Using Wallet Account Address: {self._wallet_account.key.hex()}")
print(f"Using Wallet Account Address: {self._wallet_account.address}")

Comment on lines +61 to +65
# Connect without verification to retrieve the server's certificate
fetch_ctx = ssl.create_default_context()
fetch_ctx.check_hostname = False
fetch_ctx.verify_mode = ssl.CERT_NONE

Copilot AI Feb 27, 2026

This is implementing TOFU by fetching the peer certificate with CERT_NONE and disabling hostname verification. Without persisting and validating a pinned fingerprint across runs, a MITM on the first connection can become the trust anchor for that process; consider pinning against a known CA or fingerprint shipped/configured by the SDK, or persisting and verifying the first-seen fingerprint.

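The persist-and-verify half of that suggestion reduces to fingerprint comparison. A sketch, assuming the SDK ships or stores the expected SHA-256 fingerprint somewhere (the storage layer is out of scope and hypothetical here):

```python
import hashlib
import hmac

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def verify_pin(cert_der: bytes, pinned: str) -> bool:
    # Compare the live certificate against the shipped/persisted
    # fingerprint; constant-time compare avoids timing side channels.
    return hmac.compare_digest(cert_fingerprint(cert_der), pinned)

# First run records the fingerprint; later runs refuse on mismatch.
pinned = cert_fingerprint(b"example-der-bytes")
print(verify_pin(b"example-der-bytes", pinned))   # True
print(verify_pin(b"tampered-der-bytes", pinned))  # False
```

With this in place, a cert fetched over a CERT_NONE connection is only trusted if it matches the previously recorded fingerprint, so a first-connection MITM no longer becomes the trust anchor on subsequent runs.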
Comment on lines 434 to +439
# OpenAI models via TEE
GPT_4_1_2025_04_14 = "openai/gpt-4.1-2025-04-14"
GPT_4O = "openai/gpt-4o"
O4_MINI = "openai/o4-mini"
GPT_5 = "openai/gpt-5"
GPT_5_MINI = "openai/gpt-5-mini"
GPT_5_2 = "openai/gpt-5.2"
Copilot AI Feb 27, 2026

Removing existing TEE_LLM members (e.g., GPT_4O) is a breaking API change. There are many in-repo references (tests, README/docs, SDK examples) that will now fail at import/runtime; consider keeping old members as aliases/deprecated entries or update all references in the repo in the same PR.

DEFAULT_BLOCKCHAIN_EXPLORER = "https://explorer.opengradient.ai/tx/"
DEFAULT_OPENGRADIENT_LLM_SERVER_URL = "https://llm.opengradient.ai"
DEFAULT_OPENGRADIENT_LLM_STREAMING_SERVER_URL = "https://llm.opengradient.ai"
# TODO (Kyle): Add a process to fetch these IPS from the TEE registry
Copilot AI Feb 27, 2026

Typo in the TODO: "IPS" should be "IPs" (plural of IP).

Suggested change
# TODO (Kyle): Add a process to fetch these IPS from the TEE registry
# TODO (Kyle): Add a process to fetch these IPs from the TEE registry
