**README.md** -- 12 additions, 0 deletions
Added after the `client.llm.chat(...)` example:

### OPG Token Approval

LLM inference payments use OPG tokens via the [Permit2](https://github.com/Uniswap/permit2) protocol. Before making requests, ensure your wallet has approved sufficient OPG for spending:

```python
# Checks the current Permit2 allowance -- only sends an on-chain transaction
# if the allowance is below the requested amount.
client.llm.ensure_opg_approval(opg_amount=5)
```

This call is idempotent: if your wallet already has an allowance >= the requested amount, no transaction is sent.
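The check-then-approve pattern behind this idempotency can be sketched generically. The helper below is illustrative only -- it is not the SDK's actual implementation, and the names are made up for the sketch:

```python
from typing import Callable

def ensure_allowance(
    current_allowance: int,
    requested: int,
    send_approval: Callable[[int], None],
) -> bool:
    """Idempotent approval: send a transaction only when the existing
    allowance is below the requested amount.

    Returns True if an approval was sent, False if nothing needed doing.
    """
    if current_allowance >= requested:
        return False  # allowance already sufficient -- no transaction sent
    send_approval(requested)
    return True
```

Because the transaction is gated on the allowance check, calling the helper repeatedly with a sufficient allowance sends no further approvals -- the same property that makes repeated `ensure_opg_approval` calls safe.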
## Examples

Additional code examples are available in the [examples](./examples) directory.
**docs/opengradient/client/index.md** -- 30 additions, 7 deletions
OpenGradient Client -- the central entry point to all SDK services.

The [Client](./client) class provides unified access to four service namespaces:

- **[llm](./llm)** -- LLM chat and text completion with TEE-verified execution and x402 payment settlement (Base Sepolia OPG tokens)
- **[model_hub](./model_hub)** -- Model repository management: create, version, and upload ML models
- **[alpha](./alpha)** -- Alpha Testnet features: on-chain ONNX model inference (VANILLA, TEE, ZKML modes), workflow deployment, and scheduled ML model execution (OpenGradient testnet gas tokens)
- **[twins](./twins)** -- Digital twins chat via OpenGradient verifiable inference
## Private Keys

The SDK operates across two chains:

- **`private_key`** -- used for LLM inference (`client.llm`). Pays via x402 on **Base Sepolia** with OPG tokens.
- **`alpha_private_key`** *(optional)* -- used for Alpha Testnet features (`client.alpha`). Pays gas on the **OpenGradient network** with testnet tokens. Falls back to `private_key` when omitted.
## Usage

```python
import opengradient as og

# Single key for both chains (backward compatible)
client = og.init(private_key="0x...")

# Separate keys: Base Sepolia OPG for LLM, OpenGradient testnet gas for Alpha
client = og.init(private_key="0x...", alpha_private_key="0x...")
```
**docs/opengradient/index.md** -- 37 additions, 10 deletions
# Package opengradient

**Version: 0.7.1**

OpenGradient Python SDK for decentralized AI inference with end-to-end verification.
Added after the `client.alpha.infer(...)` example:

## Private Keys

The SDK operates across two chains. You can use a single key for both, or provide separate keys:

- **`private_key`** -- pays for LLM inference via x402 on **Base Sepolia** (requires OPG tokens)
- **`alpha_private_key`** *(optional)* -- pays gas for Alpha Testnet on-chain inference on the **OpenGradient network** (requires testnet gas tokens). Falls back to `private_key` when omitted.
The [Client](./client/index) object exposes four namespaces:

- **[llm](./client/llm)** -- Verifiable LLM chat and completion via TEE-verified execution with x402 payments (Base Sepolia OPG tokens)
- **[alpha](./client/alpha)** -- On-chain ONNX model inference, workflow deployment, and scheduled ML model execution (OpenGradient testnet gas tokens)
- **[model_hub](./client/model_hub)** -- Model repository management
- **[twins](./client/twins)** -- Digital twins chat via OpenGradient verifiable inference (requires twins API key)
The SDK includes adapters for popular AI frameworks -- see the `agents` submodule.