A Python SDK for decentralized model management and inference services on the OpenGradient platform. The SDK enables programmatic access to our model repository and decentralized AI infrastructure.
- Model management and versioning
- Decentralized model inference
- Support for LLM inference with various models
- Trusted Execution Environment (TEE) inference with cryptographic attestation
- End-to-end verified AI execution
- Command-line interface (CLI) for direct access
Browse and discover AI models on our Model Hub. The Hub provides:
- Registry of models and LLMs
- Easy model discovery and deployment
- Direct integration with the SDK
```shell
pip install opengradient
```

Note: Windows users should temporarily enable WSL when installing opengradient (fix in progress).
You'll need:
- Private key: An Ethereum-compatible wallet private key for OpenGradient transactions
- Model Hub account (optional): Only needed for uploading models. Create one at Hub Sign Up
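Before creating a client, it can help to sanity-check the private key you plan to supply. A minimal sketch, assuming the key is exposed via the `OG_PRIVATE_KEY` environment variable used in the quick-start example below; the validation logic is generic to Ethereum-style keys (32 bytes of hex), not part of the SDK:

```python
import os
import re

def looks_like_eth_private_key(key: str) -> bool:
    """Check that a string is 32 bytes of hex, with or without a 0x prefix."""
    return bool(re.fullmatch(r"(0x)?[0-9a-fA-F]{64}", key))

# Warn early instead of failing later inside a transaction.
key = os.environ.get("OG_PRIVATE_KEY", "")
if not looks_like_eth_private_key(key):
    print("Set OG_PRIVATE_KEY to a 64-character hex private key before creating a client")
```

Keeping the key in an environment variable (rather than in source) also avoids accidentally committing it.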
The easiest way to set up your configuration is through our wizard:
```shell
opengradient config init
```

You can also create a client directly in Python:

```python
import os

import opengradient as og

og_client = og.new_client(
    email=None,  # Optional: only needed for model uploads
    password=None,
    private_key=os.environ.get("OG_PRIVATE_KEY"),
)
```

Then run an LLM chat completion:

```python
completion = og_client.llm_chat(
    model_cid=og.TEE_LLM.GPT_4O,
    messages=[{"role": "user", "content": "Hello!"}],
    inference_mode=og.LlmInferenceMode.TEE,
)
print(f"Response: {completion.chat_output['content']}")
print(f"Tx hash: {completion.transaction_hash}")
```

Browse models on our Model Hub or upload your own:
```python
result = og_client.infer(
    model_cid="your-model-cid",
    model_input={"input": [1.0, 2.0, 3.0]},
    inference_mode=og.InferenceMode.VANILLA,
)
print(f"Output: {result.model_output}")
```

OpenGradient supports secure, verifiable inference through TEE for leading LLM providers. Access models from OpenAI, Anthropic, Google, and xAI with cryptographic attestation:
```python
# Use TEE mode for verifiable AI execution
completion = og_client.llm_chat(
    model_cid=og.TEE_LLM.CLAUDE_3_7_SONNET,
    messages=[{"role": "user", "content": "Your message here"}],
    inference_mode=og.LlmInferenceMode.TEE,
)
print(f"Response: {completion.chat_output['content']}")
```

Available TEE Models:
The SDK includes models from multiple providers accessible via the og.TEE_LLM enum:
- OpenAI: GPT-4.1, GPT-4o, o4-mini
- Anthropic: Claude 3.7 Sonnet, Claude 3.5 Haiku, Claude 4.0 Sonnet
- Google: Gemini 2.5 Flash, Gemini 2.5 Pro, Gemini 2.0 Flash
- xAI: Grok 3 Beta, Grok 3 Mini Beta, Grok 4.1 Fast
For the complete list, check the og.TEE_LLM enum in your IDE or see the API documentation.
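If you want to pick a model programmatically rather than hard-coding one, enum members can be listed at runtime. A sketch using a stand-in `Enum` (the real members and their values live in `og.TEE_LLM`; the string values below are illustrative only):

```python
from enum import Enum

class TeeLlm(Enum):
    # Stand-in for og.TEE_LLM; real member values may differ.
    GPT_4O = "gpt-4o"
    CLAUDE_3_7_SONNET = "claude-3-7-sonnet"
    GEMINI_2_5_FLASH = "gemini-2.5-flash"

# Enum members are iterable, so you can discover what's available at runtime.
available = [m.name for m in TeeLlm]
print(available)
```

The same `[m.name for m in og.TEE_LLM]` pattern works against the SDK's real enum.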
See code examples in the examples directory.
The SDK includes a command-line interface for quick operations. First, verify your configuration:
```shell
opengradient config show
```

Run a test inference:

```shell
opengradient infer -m QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ \
  --input '{"num_input1":[1.0, 2.0, 3.0], "num_input2":10}'
```
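The `--input` argument is a JSON object. For larger payloads it can be easier to build the input as Python data and serialize it for the shell; a sketch using only the standard library (the command string is illustrative and is printed rather than executed):

```python
import json
import shlex

# Build the model input as plain Python data, then serialize it for the CLI.
payload = {"num_input1": [1.0, 2.0, 3.0], "num_input2": 10}
arg = json.dumps(payload)

# shlex.quote protects the JSON's quotes and spaces from the shell.
command = (
    "opengradient infer -m QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ "
    f"--input {shlex.quote(arg)}"
)
print(command)
```

This avoids hand-escaping quotes inside the JSON string.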
- Off-chain Applications: Use OpenGradient as a decentralized alternative to centralized AI providers like HuggingFace and OpenAI.
- Verifiable AI Execution: Leverage TEE inference for cryptographically attested AI outputs, enabling trustless AI applications.
- Model Development: Manage models on the Model Hub and integrate directly into your development workflow.
For comprehensive documentation, API reference, and examples, visit:
If you use Claude Code, copy docs/CLAUDE_SDK_USERS.md to your project's CLAUDE.md to help Claude assist you with OpenGradient SDK development.
- Run `opengradient --help` for CLI command reference
- Visit our documentation for detailed guides
- Join our community for support