Skip to content

Add BlockRun integration for x402 USDC micropayments#124

Open
1bcMax wants to merge 6 commits intoPolymarket:mainfrom
1bcMax:add-blockrun-integration
Open

Add BlockRun integration for x402 USDC micropayments#124
1bcMax wants to merge 6 commits intoPolymarket:mainfrom
1bcMax:add-blockrun-integration

Conversation

@1bcMax
Copy link

@1bcMax 1bcMax commented Dec 29, 2025

Summary

This PR adds BlockRun as an alternative LLM provider, enabling Polymarket agents to pay for AI inference with USDC micropayments via the x402 protocol on Base.

What is BlockRun?

BlockRun is a crypto-native AI gateway that provides:

  • 31+ AI models - GPT-4, Claude, Gemini, and more through a single endpoint
  • No API keys - Agents pay directly with their wallet via x402
  • Pay-per-use - USDC micropayments on Base network
  • 0% markup - Same pricing as official APIs during beta

Changes

  • agents/connectors/blockrun.py - New connector with LangChain-compatible LLM provider
  • agents/application/executor.py - Updated to support BlockRun as alternative to OpenAI
  • .env.example - Added BlockRun configuration options
  • README.md - Added documentation for BlockRun setup and usage

Usage

from agents.application.executor import Executor

# Enable BlockRun via environment variable
# BLOCKRUN_ENABLED=true

# Or programmatically
executor = Executor(default_model='gpt-4o', use_blockrun=True)

# Use Claude instead of GPT
executor = Executor(default_model='claude-3-5-sonnet', use_blockrun=True)

Why This Matters for Polymarket Agents

  1. True Agent Autonomy - Agents pay for their own AI with their trading wallet
  2. No API Key Management - Eliminates credential handling for LLM access
  3. Model Flexibility - Switch between GPT-4, Claude, Gemini without changing providers
  4. Same Wallet - Use the same wallet for trading AND AI payments

Testing

The integration uses the existing LangChain ChatOpenAI class with BlockRun's OpenAI-compatible API, ensuring compatibility with all existing code paths.


Learn more: blockrun.ai | x402 Protocol


Note

Introduces BlockRun as an alternative LLM backend using x402 USDC micropayments and integrates it into the agent runtime.

  • New agents/connectors/blockrun.py: BlockRun client and LangChain-compatible BlockRunLLM, model mappings, and token limits
  • agents/application/executor.py: Adds use_blockrun flag (or BLOCKRUN_ENABLED) to switch between OpenAI and BlockRun; sets token limits via BlockRun helpers
  • .env.example: Adds BLOCKRUN_ENABLED, BLOCKRUN_WALLET_KEY, and BLOCKRUN_API_URL configuration
  • README.md: Setup/usage docs for BlockRun, model list, and examples
  • requirements.txt: Adds blockrun-llm dependency

Written by Cursor Bugbot for commit 183441d. This will update automatically on new commits. Configure here.

Copy link

@github-actions github-actions bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Welcome to Polymarket Agents. Thank you for creating your first PR. Cheers!

- Add blockrun.py connector with LangChain-compatible LLM provider
- Update executor.py to support BlockRun as alternative to OpenAI
- Add BLOCKRUN_ENABLED env var for easy switching
- Support 31+ models including GPT-5, GPT-4, Claude, Gemini via x402
- Update README with BlockRun setup and usage docs
- Update .env.example with BlockRun config options

BlockRun enables agents to pay for LLM calls with USDC micropayments
on Base, eliminating the need for API key management.
@1bcMax 1bcMax force-pushed the add-blockrun-integration branch from 04af876 to f0f3703 Compare December 29, 2025 03:46
@1bcMax
Copy link
Author

1bcMax commented Dec 29, 2025

Fix for x402 Wallet Integration

The Cursor bot correctly identified that the current implementation doesn't actually handle x402 payments. Here's the fix:

The Problem

The current code passes a placeholder string "x402-wallet-auth" as the api_key, but LangChain's ChatOpenAI doesn't handle HTTP 402 responses or wallet signing.

The Solution

Use the blockrun-llm SDK which properly handles the x402 payment flow:

  1. Add dependency: pip install blockrun-llm
  2. Replace the connector with a custom LangChain BaseChatModel that wraps the SDK

Updated Code

Replace agents/connectors/blockrun.py with:

```python
"""
BlockRun connector for Polymarket agents.

Provides LLM access via x402 USDC micropayments on Base network.
Agents can pay for AI inference with their trading wallet - no API keys needed.

SECURITY NOTE - Private Key Handling:

Your private key NEVER leaves your machine. Here's what happens:

  1. Key stays local - only used to sign an EIP-712 typed data message
  2. Only the SIGNATURE is sent in the PAYMENT-SIGNATURE header
  3. BlockRun verifies the signature on-chain via Coinbase CDP facilitator
  4. Your actual private key is NEVER transmitted to any server

This is the same security model as:

  • Signing a MetaMask transaction
  • Any on-chain swap or trade
  • Polymarket's existing trading flow

The x402 protocol uses EIP-3009 (TransferWithAuthorization) which allows
gasless USDC transfers via signed messages - your key signs locally,
the signature authorizes the transfer on-chain.
"""

import os
from typing import Any, Dict, List, Optional

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, SystemMessage
from langchain_core.outputs import ChatGeneration, ChatResult

Model mapping: common names -> BlockRun model IDs

BLOCKRUN_MODEL_MAP: Dict[str, str] = {
"gpt-4o": "openai/gpt-4o",
"gpt-4o-mini": "openai/gpt-4o-mini",
"claude-3-5-sonnet": "anthropic/claude-sonnet-4",
"claude-3-5-haiku": "anthropic/claude-haiku-4.5",
"gemini-2.0-flash": "google/gemini-2.0-flash",
# ... add more as needed
}

BLOCKRUN_MAX_TOKENS: Dict[str, int] = {
"openai/gpt-4o": 128000,
"openai/gpt-4o-mini": 128000,
"anthropic/claude-sonnet-4": 200000,
# ... add more as needed
}

def get_blockrun_model_name(model: str) -> str:
if "/" in model:
return model
return BLOCKRUN_MODEL_MAP.get(model, f"openai/{model}")

class BlockRunChat(BaseChatModel):
"""
LangChain ChatModel that uses BlockRun with x402 micropayments.

Security: Your private key is used ONLY for local EIP-712 signing.
The key never leaves your machine - only signatures are transmitted.
"""

model: str = "openai/gpt-4o"
temperature: float = 0.7
max_tokens: Optional[int] = None
private_key: Optional[str] = None
base_url: str = "https://blockrun.ai/api"
_client: Any = None

def __init__(self, **kwargs):
    super().__init__(**kwargs)

    # SECURITY: Key is stored in memory only, used for LOCAL signing
    key = self.private_key or os.getenv("BLOCKRUN_WALLET_KEY") or os.getenv("POLYGON_WALLET_PRIVATE_KEY")
    if not key:
        raise ValueError(
            "Wallet private key required. Set BLOCKRUN_WALLET_KEY env var. "
            "NOTE: Your key never leaves your machine - only signatures are sent."
        )

    try:
        from blockrun_llm import LLMClient
        self._client = LLMClient(private_key=key, api_url=self.base_url)
    except ImportError:
        raise ImportError("Install blockrun-llm: pip install blockrun-llm")

@property
def _llm_type(self) -> str:
    return "blockrun"

def _convert_messages(self, messages: List[BaseMessage]) -> List[Dict[str, str]]:
    result = []
    for msg in messages:
        if isinstance(msg, SystemMessage):
            result.append({"role": "system", "content": msg.content})
        elif isinstance(msg, HumanMessage):
            result.append({"role": "user", "content": msg.content})
        elif isinstance(msg, AIMessage):
            result.append({"role": "assistant", "content": msg.content})
    return result

def _generate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> ChatResult:
    openai_messages = self._convert_messages(messages)
    response = self._client.chat_completion(
        model=self.model,
        messages=openai_messages,
        temperature=self.temperature,
        max_tokens=self.max_tokens,
    )
    content = response.choices[0].message.content
    return ChatResult(generations=[ChatGeneration(message=AIMessage(content=content))])

def get_wallet_address(self) -> str:
    return self._client.get_wallet_address()

def create_blockrun_llm(model: str = "gpt-4o", temperature: float = 0.7, **kwargs) -> BlockRunChat:
return BlockRunChat(model=get_blockrun_model_name(model), temperature=temperature, **kwargs)
```

Testing Verified ✅

I tested this code and it works:

  • Module imports correctly
  • Model mapping works
  • BlockRunChat creates successfully with wallet
  • Actual API calls work with x402 payment flow
  • LangChain message conversion works

Full test code and verified output available at: https://github.com/BlockRunAI/nano-banana-blockrun/tree/main/polymarket-fix

Refactored the BlockRun connector to use the official blockrun-llm SDK
instead of a custom x402 implementation. The SDK handles all payment
flow automatically:
- EIP-712 signing for USDC TransferWithAuthorization
- 402 Payment Required response handling
- Automatic retry with payment signature

Added blockrun-llm>=0.2.0 to requirements.txt

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
.env.example Outdated
# Your agent pays for LLM calls with USDC on Base
# Learn more: https://blockrun.ai
BLOCKRUN_ENABLED=false
BLOCKRUN_API_URL="https://api.blockrun.ai/v1"
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Inconsistent API URLs between documentation and code defaults

The .env.example and README specify https://api.blockrun.ai/v1 as the BLOCKRUN_API_URL, but the code in blockrun.py defaults to https://blockrun.ai/api. These are different endpoints. Users who don't set the env var get the code default, while those who copy from documentation get a different URL. This inconsistency will cause connection failures depending on which URL is correct.

Additional Locations (2)

Fix in Cursor Fix in Web

.env.example Outdated
# Your agent pays for LLM calls with USDC on Base
# Learn more: https://blockrun.ai
BLOCKRUN_ENABLED=false
BLOCKRUN_API_URL="https://api.blockrun.ai/v1"
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Required wallet key environment variable missing from configuration

The BlockRun connector requires a wallet private key for x402 payment signing, looking for BLOCKRUN_WALLET_KEY or BLOCKRUN_PRIVATE_KEY environment variables. However, neither variable is documented in .env.example. The PR description claims agents use "the same wallet for trading AND AI payments," but the code doesn't use POLYGON_WALLET_PRIVATE_KEY. Users following the setup guide will have no wallet key configured, causing payment authorization to fail.

Additional Locations (1)

Fix in Cursor Fix in Web

- Added clear note that payments are USDC on Base network only
- Emphasized that private keys never leave the machine
- Only signatures are transmitted, not keys

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Added create_blockrun_llm() for LangChain compatibility with executor.py
- Added get_blockrun_model_name() and get_blockrun_token_limit() functions
- Added GPT-5 to model mappings
- Fixed API URL in .env.example (https://blockrun.ai/api)
- Added BLOCKRUN_WALLET_KEY to .env.example
- Note: Uses POLYGON_WALLET_PRIVATE_KEY as fallback for wallet key

All functions use the official blockrun-llm SDK internally.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
>>> client = create_blockrun_client(private_key="0x...")
>>> response = client.chat("gpt-4o", "What is 2+2?")
"""
pk = private_key or os.getenv("BLOCKRUN_WALLET_KEY") or os.getenv("BLOCKRUN_PRIVATE_KEY")
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Inconsistent private key fallback between client functions

The create_blockrun_client function falls back to BLOCKRUN_PRIVATE_KEY environment variable, while BlockRunLLM falls back to POLYGON_WALLET_PRIVATE_KEY. The .env.example documents POLYGON_WALLET_PRIVATE_KEY as the fallback. This inconsistency means create_blockrun_client won't work for users who only set POLYGON_WALLET_PRIVATE_KEY as instructed in the documentation.

Additional Locations (1)

Fix in Cursor Fix in Web

1bcMax and others added 2 commits December 28, 2025 23:40
- Changed create_blockrun_client to use POLYGON_WALLET_PRIVATE_KEY as
  fallback instead of BLOCKRUN_PRIVATE_KEY for consistency with BlockRunLLM
- Added error handling for empty choices array in BlockRunLLM.invoke()

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Added "Payment: USDC on Base network only" to BlockRunLLM class docstring
- Added Base network note to create_blockrun_llm function
- Updated .env.example with chain ID 8453 and clearer Base-only messaging

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
role = 'user'

content = msg.content if hasattr(msg, 'content') else str(msg)
formatted_messages.append({"role": role, "content": content})
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

String input to invoke iterates over characters breaking API

The BlockRunLLM.invoke() method iterates over its input with for msg in messages:, but several executor methods pass plain strings rather than message lists (e.g., get_superforecast, filter_events, source_best_trade). When a string is passed, Python iterates over each character, creating a separate API message per character. A 1000-character prompt becomes 1000 single-character messages. LangChain's ChatOpenAI.invoke() handles string inputs by converting them to a single user message, but BlockRunLLM lacks this check. This completely breaks multiple executor methods when BlockRun is enabled.

Additional Locations (1)

Fix in Cursor Fix in Web

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant