Version 2.0 - Core Chain Edition
December 2025
CoreRelay Team
CoreRelay is a purpose-built RPC infrastructure for Core Chain's Bitcoin-aligned ecosystem that combines light client technology, Bitcoin L1 verification, peer-to-peer consensus, and cryptographic proofs to provide trustless, censorship-resistant blockchain access optimized for BTCfi applications. By leveraging a decentralized mesh of multi-client light nodes integrated with Bitcoin light clients and Core Chain's Satoshi Plus consensus, CoreRelay eliminates the trust assumptions inherent in centralized RPC providers while delivering the specialized features required by non-custodial Bitcoin staking protocols.
This whitepaper presents the technical architecture, Bitcoin integration strategy, cryptographic foundations, and operational model of CoreRelay, demonstrating how it addresses the unique challenges of Core Chain infrastructure: Bitcoin-EVM interoperability, dual-staking verification, trustless BTCfi access, and censorship-resistant RPC for billion-dollar Bitcoin protocols.
- Introduction
- Problem Statement
- Architecture Overview
- Core Components
- Consensus and Verification
- Security Model
- Performance and Scalability
- Economic Model
- Deployment and Operations
- Use Cases
- Comparison with Existing Solutions
- Future Roadmap
- Conclusion
Core Chain represents a breakthrough in blockchain architecture: a Bitcoin-aligned EVM chain powered by Satoshi Plus consensus that enables non-custodial Bitcoin staking and brings Bitcoin's $2 trillion liquidity to DeFi. The explosive growth of BTCfi (Bitcoin DeFi) on Core Chain—with protocols handling billions in Bitcoin assets—has created unprecedented infrastructure demands.
However, Core Chain's BTCfi ecosystem faces the same centralization trap that plagued Ethereum:
- Single points of failure: When centralized RPC providers fail, billion-dollar Bitcoin protocols become inaccessible
- Censorship risk: Non-custodial BTC staking protocols can be censored by RPC operators, threatening user funds
- Privacy violations: Centralized providers see all Bitcoin staking activity, creating MEV opportunities and compliance risks
- Trust assumptions: BTCfi applications must trust RPC providers to return accurate dual-staking data (BTC + CORE)
- No Bitcoin integration: Generic Ethereum RPC providers don't support Bitcoin L1 finality checks required for trustless cross-chain operations
The BTCfi Imperative: When protocols handle billions in non-custodial Bitcoin, infrastructure cannot be trusted—it must be verifiable.
CoreRelay reimagines RPC infrastructure for Core Chain's unique requirements by creating a verifiable, Bitcoin-aware, decentralized RPC mesh that:
- Eliminates trust assumptions through cryptographic verification and multi-client consensus
- Ensures censorship resistance via a permissionless, globally distributed mesh network (50+ nodes at mainnet)
- Integrates Bitcoin L1 with light client verification for cross-chain finality checks
- Optimizes for Core Chain with native support for dual-staking queries, Satoshi Plus consensus, and BTCfi-specific methods
- Maintains high availability through redundancy, client diversity, and automatic failover
- Preserves user privacy by distributing requests across anonymous peers (critical for non-custodial staking)
- Provides drop-in compatibility with existing Ethereum JSON-RPC tools (ethers.js, web3.js, Hardhat)
- Bitcoin light client integration: Native Bitcoin L1 verification for non-custodial BTC staking finality checks
- Core Chain-native methods: `core_getStakingInfo`, `core_getDelegations`, `core_verifyBtcTransaction` for BTCfi applications
- Multi-client consensus: Aggregate responses from diverse light client implementations (Helios, Nimbus, Lodestar)
- BTC-aware load balancing: Automatically route Bitcoin queries to nodes with synced BTC light clients
- Dual-staking optimization: Specialized caching and query strategies for Core's BTC + CORE staking model
- Portal Network integration: Leverage distributed data availability layer for historical queries
- BLS signature aggregation: Cryptographically attestable responses from mesh nodes
- Adaptive routing: Intelligent peer selection based on client diversity, Bitcoin sync status, and performance
- Proof bundles: Every response includes verifiable cryptographic proofs (Merkle + BLS signatures + Bitcoin SPV proofs)
Despite Core Chain's innovative Satoshi Plus consensus and Bitcoin alignment, the application layer—particularly BTCfi protocols handling billions in Bitcoin assets—remains dangerously centralized. Current state:
- 90%+ of Core Chain dApps rely on 2-3 centralized RPC providers (Ankr, QuickNode, self-hosted by Core Foundation)
- Single provider dominance: Most BTCfi protocols use a single RPC endpoint, creating systemic risk
- Geographic concentration: Infrastructure concentrated in US/EU, vulnerable to regional outages
- Regulatory vulnerability: Non-custodial Bitcoin staking protocols are high-value targets for government intervention
- No Bitcoin integration: Generic RPC providers cannot verify Bitcoin L1 finality, forcing additional trust assumptions
Case Study: In March 2022, Infura temporarily blocked Venezuelan users due to sanctions compliance. For Core Chain's BTCfi ecosystem—where protocols hold billions in non-custodial Bitcoin—such censorship could freeze user funds indefinitely.
The BTCfi Risk Multiplier: When a DeFi protocol is censored, users lose access to their funds. When a non-custodial Bitcoin staking protocol is censored, users cannot withdraw potentially billions in BTC. The stakes are exponentially higher.
Traditional RPC providers operate as trusted intermediaries:
dApp → RPC Provider → Ethereum Node → Provider validates → dApp receives response
Users must trust that providers:
- Return accurate, unmodified data
- Don't log or analyze transaction patterns
- Properly maintain infrastructure
- Won't selectively censor requests
Attack Surface: A malicious or compromised provider could:
- Return fabricated balance information
- Hide specific transactions
- Front-run user transactions
- Phish users by manipulating response data
Running full Ethereum nodes requires:
- 700+ GB storage (and growing)
- 16+ GB RAM
- 100 Mbps+ bandwidth
- Days of initial sync time
These requirements make self-hosting impractical for most developers, forcing reliance on centralized services.
| Solution | Trust Model | Performance | Client Complexity | Censorship Resistance |
|---|---|---|---|---|
| Centralized RPC | Full trust | Excellent | Low | None |
| Self-hosted Node | Trustless | Good | Very High | Full |
| Light Clients | Minimal trust | Limited | Medium | Partial |
| Multi-RPC Fallback | Reduced trust | Variable | Medium | Limited |
| Beacon RPC | Trustless | Excellent | Low | Full |
Beacon RPC consists of three primary layers (dApp, Gateway, and Mesh), backed by the Portal Network and the Ethereum consensus layer:
┌─────────────────────────────────────────────────────────────┐
│ dApp Layer │
│ (Web3.js, Ethers.js, viem, or any JSON-RPC client) │
└─────────────────────────────────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────────────────────┐
│ Gateway Layer │
│ • Request routing & load balancing │
│ • Multi-peer consensus aggregation │
│ • Response verification & proof generation │
│ • Caching & performance optimization │
└─────────────────────────────────────────────────────────────┘
│
┌───────────┼───────────┐
↓ ↓ ↓
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Mesh Node │ │ Mesh Node │ │ Mesh Node │
│ (Helios) │ │ (Nimbus) │ │ (Lodestar) │
└─────────────┘ └─────────────┘ └─────────────┘
│ │ │
└───────────┼───────────┘
↓
┌─────────────────────────────────────────────┐
│ Portal Network DHT │
│ (Distributed historical data storage) │
└─────────────────────────────────────────────┘
│
↓
┌─────────────────────────────────────────────┐
│ Ethereum Consensus Layer │
│ (Beacon Chain light client sync) │
└─────────────────────────────────────────────┘
Standard RPC Request Lifecycle:
- dApp sends JSON-RPC request to Gateway (e.g., `eth_getBalance`)
- Gateway validates request format and checks cache
- Gateway queries multiple mesh nodes (default: 3 nodes)
- Mesh nodes query light clients, which:
- Fetch data from Portal Network (historical) or consensus layer (recent)
- Generate Merkle proofs for state queries
- Sign responses with BLS keys
- Gateway performs consensus verification:
- Aggregates responses from different clients
- Validates cryptographic signatures
- Verifies Merkle proofs
- Requires minimum consensus (default: 2/3 agreement)
- Gateway returns verified response with optional proof bundle
- Response cached for subsequent requests (configurable TTL)
Decentralized P2P Mesh:
- No central authority: All nodes operate autonomously
- Peer discovery: Bootstrap via known beacon nodes, then DHT-based discovery
- Geographic distribution: Global node operator network
- Client diversity: Mix of Helios (Rust), Nimbus (Nim), Lodestar (TypeScript)
- Dynamic membership: Nodes join/leave freely without permission
Gossip Protocol:
- Topic-based pub/sub: Efficient message propagation
- Request routing: Intelligent peer selection based on specialization
- Reputation system: Track node reliability and response accuracy
- Eclipse attack resistance: Random peer sampling with diversity requirements
The Gateway serves as the entry point for dApp developers, providing a familiar JSON-RPC interface while orchestrating verification across the mesh network.
Key Responsibilities:
- API compatibility: Full Ethereum JSON-RPC specification compliance
- Request validation: Schema checking and method whitelisting
- Intelligent routing: Select optimal mesh nodes based on:
- Client type diversity
- Geographic proximity
- Historical reliability
- Current load
- Consensus aggregation: Collect and verify multi-peer responses
- Caching layer: Redis-backed response caching with configurable TTL
- Rate limiting: Per-IP and per-method request throttling
- WebSocket support: Real-time event subscriptions
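As a concrete illustration of request validation, the whitelist check can be sketched in a few lines of Rust. The method names are standard Ethereum JSON-RPC; the `is_allowed` helper itself is illustrative, not the gateway's actual API.

```rust
use std::collections::HashSet;

/// Illustrative method whitelist for the gateway (not the real API surface).
fn allowed_methods() -> HashSet<&'static str> {
    ["eth_getBalance", "eth_getCode", "eth_getStorageAt", "eth_call",
     "eth_blockNumber", "eth_getBlockByNumber", "eth_getBlockByHash",
     "eth_getTransactionByHash", "eth_getTransactionReceipt", "eth_getLogs",
     "eth_estimateGas", "eth_gasPrice", "eth_feeHistory",
     "eth_chainId", "net_version", "web3_clientVersion"]
        .into_iter()
        .collect()
}

/// Unknown methods are rejected before any mesh node is queried.
fn is_allowed(method: &str) -> bool {
    allowed_methods().contains(method)
}

fn main() {
    assert!(is_allowed("eth_getBalance"));
    assert!(!is_allowed("admin_stopNode")); // rejected early
    println!("whitelist check ok");
}
```

Rejecting unsupported methods at the gateway keeps malformed or abusive traffic off the mesh entirely.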
Configuration Parameters:
```toml
[gateway]
listen_addr = "0.0.0.0:8545"
ws_addr = "0.0.0.0:8546"
network = "mainnet"
min_consensus = 2        # Minimum agreeing responses
query_peers = 3          # Number of peers to query
request_timeout_secs = 5
cache_ttl_secs = 10
redis_url = "redis://localhost:6379"
```

Supported RPC Methods:
- State queries: `eth_getBalance`, `eth_getCode`, `eth_getStorageAt`, `eth_call`
- Block queries: `eth_blockNumber`, `eth_getBlockByNumber`, `eth_getBlockByHash`
- Transaction queries: `eth_getTransactionByHash`, `eth_getTransactionReceipt`, `eth_getLogs`
- Gas estimation: `eth_estimateGas`, `eth_gasPrice`, `eth_feeHistory`
- Network info: `eth_chainId`, `net_version`, `web3_clientVersion`
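For reference, every one of these methods travels in the same JSON-RPC 2.0 envelope. The sketch below builds one with plain string formatting so it stays dependency-free; a real client would use a JSON library, and the helper name is illustrative.

```rust
/// Illustrative builder for the JSON-RPC 2.0 request envelope a dApp
/// sends to the gateway. String params only, for brevity.
fn build_request(id: u64, method: &str, params: &[&str]) -> String {
    let params_json: Vec<String> = params
        .iter()
        .map(|p| format!("\"{}\"", p))
        .collect();
    format!(
        "{{\"jsonrpc\":\"2.0\",\"method\":\"{}\",\"params\":[{}],\"id\":{}}}",
        method,
        params_json.join(","),
        id
    )
}

fn main() {
    // Example: balance query for a (hypothetical) address at the latest block.
    let req = build_request(1, "eth_getBalance", &["0xabc...", "latest"]);
    println!("{}", req);
}
```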
Mesh Nodes are the backbone of the Beacon network, running light clients and participating in the P2P verification mesh.
Architecture:
┌───────────────────────────────────────────┐
│ Mesh Node Process │
│ │
│ ┌─────────────────────────────────────┐ │
│ │ P2P Networking (libp2p) │ │
│ │ • Gossipsub messaging │ │
│ │ • Kademlia DHT │ │
│ │ • Noise encryption │ │
│ └─────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────┐ │
│ │ Light Client Adapter │ │
│ │ • Helios RPC client │ │
│ │ • Nimbus REST client │ │
│ │ • Lodestar REST client │ │
│ └─────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────┐ │
│ │ Portal Network Client │ │
│ │ • History network │ │
│ │ • State network │ │
│ │ • Beacon chain network │ │
│ └─────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────┐ │
│ │ BLS Signing Service │ │
│ │ • Response attestation │ │
│ │ • Identity verification │ │
│ └─────────────────────────────────────┘ │
└───────────────────────────────────────────┘
Client Integration:
- Helios (Rust-based light client):
  - Fast sync using checkpoint sync
  - Optimistic updates with fraud proof fallback
  - HTTP JSON-RPC interface
- Nimbus (Nim-based Ethereum 2.0 client):
  - Full light client protocol implementation
  - REST API for beacon chain queries
  - Excellent resource efficiency
- Lodestar (TypeScript Ethereum 2.0 client):
  - JavaScript ecosystem integration
  - REST API compatibility
  - WebSocket support
Portal Network Integration:
- History Network: Access historical blocks and receipts
- State Network: Query account and storage data
- Beacon Network: Sync consensus layer light client updates
- DHT-based discovery: Locate data across distributed nodes
Node Requirements:
- Hardware: 8GB RAM, 4 CPU cores, 100GB SSD
- Network: Public IPv4/IPv6, 100 Mbps+, port 9001 open
- Software: Rust 1.75+, light client (Helios/Nimbus/Lodestar)
The Verification Engine ensures response integrity through multi-layer validation.
Verification Stages:
1. Signature Verification:

```rust
// Verify the BLS signature attached by each mesh node
pub fn verify_signature(
    response: &PeerResponse,
    public_key: &PublicKey,
) -> Result<bool> {
    let message = serialize_response(&response.response)?;
    public_key.verify(&message, &response.signature)
}
```
2. Consensus Aggregation:

```rust
pub struct ConsensusVerifier {
    min_consensus: usize,
}

impl ConsensusVerifier {
    pub fn verify(&self, responses: Vec<PeerResponse>) -> Result<VerifiedResponse> {
        // Group by response value
        let groups = group_responses(responses);

        // Find consensus group
        let consensus = groups.iter()
            .max_by_key(|g| g.len())
            .ok_or(Error::NoConsensus)?;

        if consensus.len() < self.min_consensus {
            return Err(Error::InsufficientConsensus);
        }

        Ok(consensus.response)
    }
}
```
3. Merkle Proof Verification (for state queries):

```rust
pub fn verify_account_proof(
    address: Address,
    account: Account,
    proof: Vec<Bytes>,
    state_root: H256,
) -> Result<bool> {
    // Verify Merkle Patricia Trie proof
    let key = keccak256(address);
    let value = rlp::encode(&account);
    verify_merkle_proof(&key, &value, &proof, &state_root)
}
```
4. Client Diversity Check:

```rust
pub fn check_client_diversity(responses: &[PeerResponse]) -> Result<()> {
    let client_types: HashSet<_> = responses.iter()
        .map(|r| r.client_type)
        .collect();

    // Require at least 2 different client implementations
    if client_types.len() < 2 {
        return Err(Error::InsufficientDiversity);
    }
    Ok(())
}
```
Proof Bundle Structure:
```json
{
  "verified": true,
  "consensus": {
    "agreements": 3,
    "required": 2,
    "total_queried": 3
  },
  "attestations": [
    {
      "node_id": "16Uiu2HAm...",
      "client_type": "helios",
      "client_version": "0.4.0",
      "signature": "0xa8f3b2...",
      "timestamp": 1700000000
    }
  ],
  "merkle_proof": {
    "block_number": 18500000,
    "block_hash": "0x123abc...",
    "state_root": "0x456def...",
    "account_proof": ["0x...", "0x..."],
    "storage_proof": ["0x...", "0x..."]
  }
}
```

Redis-based distributed caching for performance optimization:
Cache Strategy:
- Deterministic queries: Cache block/transaction data (immutable)
- State queries: Short TTL (10s default) for account balances/code
- Dynamic queries: No caching for pending transactions/mempool
- Proof bundles: Cached separately for verification replay
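The strategy above reduces to a TTL-policy function keyed on method category. The sketch below is illustrative: the method grouping and the 10-second state TTL come from the list above, while the helper name and the one-day historical TTL are assumptions.

```rust
use std::time::Duration;

/// Illustrative TTL policy: None means "do not cache".
fn cache_ttl(method: &str, is_historical: bool) -> Option<Duration> {
    match method {
        // Finalized blocks/transactions are immutable: cache aggressively
        // (one day here, purely as an example value).
        "eth_getBlockByHash" | "eth_getTransactionByHash" | "eth_getTransactionReceipt"
            if is_historical => Some(Duration::from_secs(86_400)),
        // Recent state: short TTL (10s default)
        "eth_getBalance" | "eth_getCode" | "eth_getStorageAt" | "eth_call" =>
            Some(Duration::from_secs(10)),
        // Pending/mempool and anything unrecognized: never cache
        _ => None,
    }
}

fn main() {
    assert_eq!(cache_ttl("eth_getBalance", false), Some(Duration::from_secs(10)));
    assert_eq!(cache_ttl("eth_pendingTransactions", false), None);
    println!("ttl policy ok");
}
```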
Cache Invalidation:
- Time-based expiry: Configurable TTL per method
- Block-based invalidation: Invalidate on new block
- Manual purge: Admin API for emergency cache clearing
Performance Impact:
- Cache hit rate: 70-80% for typical dApp workloads
- Latency reduction: 50ms → 5ms for cached responses
- Load reduction: 5-10x fewer mesh node queries
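A minimal in-process sketch of the cache's expiry behavior follows. The production layer is Redis-backed; `TtlCache` here is an illustrative std-only stand-in.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Toy TTL cache illustrating the expiry semantics of the Redis layer.
struct TtlCache {
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn set(&mut self, key: &str, value: &str, ttl: Duration) {
        // Store the value alongside its expiry deadline.
        self.entries
            .insert(key.to_string(), (Instant::now() + ttl, value.to_string()));
    }

    fn get(&self, key: &str) -> Option<&str> {
        match self.entries.get(key) {
            Some((expires, v)) if Instant::now() < *expires => Some(v.as_str()),
            _ => None, // missing or expired
        }
    }
}

fn main() {
    let mut cache = TtlCache::new();
    cache.set("eth_getBalance:0xabc:latest", "0x2386f26fc10000",
              Duration::from_secs(10));
    assert!(cache.get("eth_getBalance:0xabc:latest").is_some());
    assert!(cache.get("unknown-key").is_none());
    println!("cache ok");
}
```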
Beacon employs a Byzantine Fault Tolerant (BFT) consensus model at the application layer:
Consensus Parameters:
- N = Number of queried peers (default: 3)
- M = Minimum required agreement (default: 2)
- Fault tolerance: Can tolerate N - M malicious/faulty nodes
Consensus Algorithm:
1. Query N peers (selected for client diversity)
2. Collect responses with signatures
3. Verify each signature cryptographically
4. Group responses by result value
5. Require M identical responses (M/N quorum)
6. Return majority result with proof bundle
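Steps 4 through 6 can be sketched as a runnable quorum function (signature verification from step 3 is omitted for brevity, and the names are illustrative):

```rust
use std::collections::HashMap;

/// Group responses by result value and return the majority value
/// if it meets the M-of-N quorum; None otherwise.
fn find_consensus(responses: &[&str], min_consensus: usize) -> Option<String> {
    let mut groups: HashMap<&str, usize> = HashMap::new();
    for r in responses {
        *groups.entry(*r).or_insert(0) += 1;
    }
    groups
        .into_iter()
        .filter(|(_, count)| *count >= min_consensus)
        .max_by_key(|(_, count)| *count)
        .map(|(value, _)| value.to_string())
}

fn main() {
    // 2-of-3 quorum: one faulty peer is outvoted.
    assert_eq!(find_consensus(&["0x5", "0x5", "0xa"], 2),
               Some("0x5".to_string()));
    // No quorum: all three peers disagree, so the request fails closed.
    assert_eq!(find_consensus(&["0x1", "0x2", "0x3"], 2), None);
    println!("quorum ok");
}
```

Failing closed on disagreement is the key design choice: the gateway returns an error rather than an unverified answer.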
Byzantine Resistance:
- Malicious minority: Up to 1/3 of peers can return incorrect data without compromising result
- Sybil resistance: Client diversity requirements prevent single-implementation attacks
- Eclipse resistance: Random peer selection from global mesh
Each mesh node runs a light client that performs cryptographic verification:
Consensus Layer Sync:
1. Download beacon chain headers (light client protocol)
2. Verify validator signatures on consensus checkpoints
3. Track finalized epochs and state roots
4. Update sync committee every 27 hours (256 epochs)
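The 27-hour figure follows directly from Ethereum's consensus constants: 256 epochs per sync-committee period, 32 slots per epoch, 12 seconds per slot.

```rust
/// Worked check of the sync-committee rotation period.
fn sync_committee_period_hours() -> f64 {
    let seconds = 256u64 * 32 * 12; // 98,304 seconds
    seconds as f64 / 3600.0
}

fn main() {
    let hours = sync_committee_period_hours();
    println!("sync committee period: {:.1} hours", hours); // ~27.3
    assert!((hours - 27.3).abs() < 0.05);
}
```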
State Verification:
1. Request account/storage data with Merkle proof
2. Verify proof against known state root
3. State root verified by consensus layer
4. Cryptographic guarantee of correctness
Trust Model Comparison:
| Approach | Trust Assumption |
|---|---|
| Centralized RPC | Trust provider completely |
| Single Light Client | Trust client implementation |
| Beacon Multi-Client | Trust ≥2 independent implementations agree |
BLS (Boneh-Lynn-Shacham) signatures enable efficient multi-signature verification:
Key Generation:
```rust
// Each mesh node generates a BLS keypair on first run
let secret_key = SecretKey::random();
let public_key = secret_key.public_key();

// Public key registered on-chain or via DHT
register_node(node_id, public_key);
```

Response Signing:
```rust
// Sign response with node's private key
let message = serialize_response(&response);
let signature = secret_key.sign(&message);

// Gateway verifies with public key
public_key.verify(&message, &signature)?;
```

Signature Aggregation (future optimization):
```rust
// Aggregate multiple signatures into one
let signatures: Vec<Signature> = responses.iter()
    .map(|r| r.signature)
    .collect();
let aggregated = Signature::aggregate(&signatures)?;

// Single verification for all signers
aggregated.verify(&message, &public_keys)?;
```

Benefits:
- Compact proofs: Single aggregated signature vs N individual signatures
- Fast verification: One pairing operation vs N operations
- Storage efficiency: Smaller proof bundles
Why Client Diversity Matters:
A critical vulnerability in single-client systems is consensus-layer bugs. If all nodes run the same client:
- Bug in client → entire network vulnerable
- Example: the August 2021 Geth consensus bug split the chain for nodes running unpatched versions
Beacon's Multi-Client Strategy:
- Require responses from ≥2 different client implementations
- Helios (Rust): Different codebase/runtime from Nimbus/Lodestar
- Nimbus (Nim): Unique memory model and compiler
- Lodestar (TypeScript): JavaScript runtime, V8 engine
Attack Scenario Prevention:
Scenario: Bug in Helios returns incorrect balance
- Gateway queries: 1x Helios, 1x Nimbus, 1x Lodestar
- Helios returns: 10 ETH (wrong)
- Nimbus returns: 5 ETH (correct)
- Lodestar returns: 5 ETH (correct)
- Consensus: 2/3 agree on 5 ETH ✓
- Result: Incorrect response rejected
Assumptions:
- ✓ Ethereum consensus layer is secure (validator BFT)
- ✓ Cryptographic primitives are secure (BLS, SHA256, Merkle trees)
- ✓ At least 2/3 of queried mesh nodes are honest
- ✓ At least 2 light client implementations are bug-free
Threat Actors:
- Malicious RPC Provider: Wants to return fake data to users
- Network Attacker: Attempts to censor or tamper with requests
- Compromised Mesh Node: Controlled by attacker, returns false data
- State Actor: Wants to censor specific addresses/transactions
1. Data Tampering Attack
- Attack: Malicious mesh node returns fabricated balance/state
- Mitigation:
- Multi-peer consensus (2/3 quorum)
- Cryptographic Merkle proof verification
- BLS signature attestation
- Result: Attack fails unless 2/3 of nodes compromised
2. Eclipse Attack
- Attack: Attacker surrounds gateway with malicious peers
- Mitigation:
- Random peer selection from DHT
- Client diversity requirements
- Geographic distribution checks
- Reputation-based peer scoring
- Result: Exponentially difficult to eclipse diverse peer set
3. Sybil Attack
- Attack: Attacker runs many mesh nodes to dominate network
- Mitigation:
- Client diversity enforcement (can't all run same client)
- BLS public key registration (rate limited)
- Staking requirement (future: economic Sybil resistance)
- Result: Attack becomes economically infeasible
4. Censorship Attack
- Attack: Prevent specific addresses from using service
- Mitigation:
- Decentralized P2P mesh (no central point of control)
- Permissionless participation
- Automatic peer rotation
- Result: Censorship-resistant by design
5. DoS Attack
- Attack: Overwhelm gateway with requests
- Mitigation:
- Rate limiting per IP/address
- Request validation and early rejection
- Distributed gateway deployment
- Automatic scaling
- Result: Standard DoS mitigations + decentralization
6. Light Client Consensus Bug
- Attack: Exploit bug in single light client implementation
- Mitigation:
- Multi-client consensus requirement
- Different languages/runtimes
- Regular client updates
- Result: Bug affects <50% of responses, rejected by consensus
User Privacy:
- No account required: Anonymous usage
- IP obfuscation: Route through multiple gateways (optional)
- Request distribution: Different peers see different requests
- No tracking: No persistent user identifiers
Mesh Node Privacy:
- Pseudonymous peer IDs: libp2p peer IDs not linked to real identity
- Encrypted transport: Noise protocol for all P2P communication
- Optional Tor support: Run mesh nodes over Tor network
Primitives Used:
- BLS12-381: Signature scheme for attestations
- SHA-256: Hashing for Merkle trees
- Keccak-256: Ethereum-compatible hashing
- Noise Protocol: P2P transport encryption
- TLS 1.3: Gateway HTTPS encryption
Security Levels:
- BLS signatures: 128-bit security
- SHA-256: 128-bit collision resistance
- Merkle proofs: Bound by hash function security
Latency Benchmarks (median, Ethereum mainnet):
| Method | Centralized RPC | Single Light Client | Beacon RPC (3 peers) | Beacon RPC (cached) |
|---|---|---|---|---|
| `eth_blockNumber` | 45ms | 120ms | 180ms | 8ms |
| `eth_getBalance` | 80ms | 250ms | 320ms | 12ms |
| `eth_call` | 150ms | 400ms | 450ms | N/A |
| `eth_getLogs` | 300ms | 1200ms | 1400ms | 50ms |
Observations:
- ~2x latency overhead vs centralized RPC for uncached requests
- Negligible overhead for cached responses (70-80% cache hit rate)
- Acceptable for most dApps: <500ms latency for 95th percentile
- Trade-off: Trustlessness and censorship resistance for modest latency increase
Gateway Capacity:
- Single gateway instance: 1,000-2,000 req/s (uncached)
- With Redis caching: 10,000-15,000 req/s (cached workload)
- Horizontal scaling: Linear with gateway instances
- WebSocket connections: 10,000+ concurrent subscriptions
Mesh Network Scalability:
- Current mesh size: 50-100 nodes (alpha network)
- Target mesh size: 1,000-10,000 nodes (mainnet)
- DHT routing: O(log N) peer discovery
- Gossip efficiency: Scales to 10,000+ nodes (tested in libp2p)
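The O(log N) claim can be made concrete with a back-of-envelope hop estimate. This treats each hop as halving the remaining search space, which is an upper-bound sketch; real Kademlia lookups also depend on bucket size (k) and routing-table fullness.

```rust
/// Rough upper bound on Kademlia-style lookup hops for a mesh of `nodes`.
fn max_lookup_hops(nodes: u64) -> u32 {
    (nodes as f64).log2().ceil() as u32
}

fn main() {
    // A 10,000-node mainnet mesh stays within ~14 hops per lookup.
    println!("10,000 nodes: <= {} hops", max_lookup_hops(10_000)); // 14
    assert_eq!(max_lookup_hops(1_024), 10);
}
```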
Bottlenecks:
- Light client sync: Initial sync takes 2-5 minutes
- Portal network DHT: Historical data queries (50-500ms)
- Consensus verification: Additional latency for multi-peer quorum
- Network bandwidth: 100 Mbps+ recommended for mesh nodes
1. Smart Caching:
```rust
// Cache immutable data indefinitely
if is_historical_block(block_number) {
    cache.set_with_ttl(key, response, Duration::MAX);
}

// Cache recent state briefly
if is_state_query(method) {
    cache.set_with_ttl(key, response, Duration::from_secs(10));
}

// Don't cache dynamic queries
if is_pending(method) {
    return response; // No caching
}
```

2. Adaptive Peer Selection:
```rust
// Prefer geographically close peers
let peers = select_peers_by_latency(available_peers, 3);

// Balance client diversity and performance
let peers = optimize_diversity_latency_tradeoff(peers);
```

3. Speculative Execution:
```rust
// Query 5 peers, return as soon as the first 2 agree
let futures = peers.iter()
    .map(|p| query_peer(p, request))
    .collect();

// Race to consensus
let result = race_to_consensus(futures, 2)?; // min_consensus = 2
```

4. Proof Caching:
```rust
// Cache Merkle proofs separately
let proof = cache.get_proof(block_hash, address)?;
if let Some(cached_proof) = proof {
    return VerifiedResponse {
        result,
        proof: cached_proof,
    };
}
```

Gateway:
- CPU: 2-4 cores (production: 4-8 cores)
- RAM: 4GB (production: 8-16GB)
- Storage: 20GB (logs + cache)
- Network: 1 Gbps (production)
- Redis: 4GB RAM, SSD-backed persistence
Mesh Node:
- CPU: 4 cores
- RAM: 8GB
- Storage: 100GB SSD (Portal Network data)
- Network: 100 Mbps+, public IP, port 9001 open
Cost Estimate (AWS us-east-1):
- Gateway: t3.medium ($30/month) + ElastiCache ($50/month) = $80/month
- Mesh Node: t3.large ($60/month) + 100GB EBS ($10/month) = $70/month
- Complete deployment (1 gateway + 3 mesh nodes): $290/month
Compare to centralized RPC:
- Infura: $50-$500/month (limited requests)
- Alchemy: $49-$499/month (limited compute units)
- Beacon self-hosted: $290/month (unlimited, trustless)
No Fees, Permissionless Participation:
- Anyone can run a gateway or mesh node
- No payment required to use the network
- Operators motivated by:
- Supporting decentralization
- Running own infrastructure
- Community contribution
Sustainability Challenge:
- Node operators bear infrastructure costs
- No direct incentive for high-quality service
- Potential for inadequate mesh node density
Token-Based Incentive System:
BEACON Token Utility:
-
Node Staking: Mesh nodes stake BEACON to participate
- Minimum stake: 1,000 BEACON (~$1000 @ $1/token)
- Slashing for malicious behavior (provable via fraud proofs)
- Reward boost for high uptime/performance
-
Service Fees: Gateway users pay microfees in BEACON
- Fee per request: 0.001-0.01 BEACON (~$0.001-$0.01)
- Distributed to mesh nodes serving requests
- Higher fees for premium SLAs (guaranteed latency)
-
Governance: BEACON holders vote on:
- Protocol upgrades
- Consensus parameters (min_consensus, query_peers)
- Treasury allocation
Revenue Distribution:
Request Fee: 0.005 BEACON
├─ 70% → Mesh nodes (split by contribution)
├─ 20% → Protocol treasury (development/grants)
└─ 10% → Token burn (deflationary mechanism)
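The split above is simple integer arithmetic; sketched here in micro-BEACON units (the 1 BEACON = 10^6 units granularity is an assumption for illustration, and assigning rounding remainders to the burn share is one possible convention):

```rust
/// Split a fee (in micro-BEACON) 70/20/10 between mesh nodes,
/// treasury, and burn. The remainder after integer division is burned,
/// so the three shares always sum to the original fee.
fn split_fee(fee_units: u64) -> (u64, u64, u64) {
    let nodes = fee_units * 70 / 100;        // mesh node share
    let treasury = fee_units * 20 / 100;     // protocol treasury
    let burn = fee_units - nodes - treasury; // burned remainder
    (nodes, treasury, burn)
}

fn main() {
    // 0.005 BEACON = 5,000 micro-units
    assert_eq!(split_fee(5_000), (3_500, 1_000, 500));
    // Rounding never loses units: shares always sum to the fee.
    let (a, b, c) = split_fee(333);
    assert_eq!(a + b + c, 333);
    println!("fee split ok");
}
```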
Economic Security:
- Attacking costs money: Must stake tokens to run malicious nodes
- Honest behavior rewarded: Consistent uptime earns reputation bonus
- Self-regulating: Bad actors slashed and removed from network
Alternative Monetization:
- Freemium gateways: Free tier + paid high-performance tier
- Enterprise SLAs: Dedicated mesh nodes for specific customers
- Proof-of-service: Stake-based rewards without per-request fees
| Feature | Beacon RPC | Infura | Alchemy | Self-Hosted |
|---|---|---|---|---|
| Trust Model | Trustless | Full trust | Full trust | Trustless |
| Censorship Resistance | High | None | None | High |
| Cost (10M req/month) | $0-$50* | $225 | $199 | $200+ |
| Setup Complexity | Low | Very Low | Very Low | High |
| Latency (P50) | 180ms | 45ms | 50ms | 30ms |
| Proof Bundles | Yes | No | No | Possible |
| Client Diversity | Required | Unknown | Unknown | Single |
*Future pricing model; currently free
1. Docker Compose (Development):
```shell
# Quick start for local testing
git clone https://github.com/beacon-network/beacon-rpc.git
cd beacon-rpc
docker-compose up -d

# Access at http://localhost:8545
curl -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```

2. Kubernetes (Production):
```yaml
# gateway-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: beacon-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: beacon-gateway
  template:
    metadata:
      labels:
        app: beacon-gateway
    spec:
      containers:
        - name: gateway
          image: beacon/gateway:latest
          env:
            - name: REDIS_URL
              value: "redis://redis-service:6379"
            - name: MIN_CONSENSUS
              value: "2"
          ports:
            - containerPort: 8545
---
apiVersion: v1
kind: Service
metadata:
  name: beacon-gateway
spec:
  type: LoadBalancer
  ports:
    - port: 8545
  selector:
    app: beacon-gateway
```

3. Cloud Providers:
AWS:
- Gateway: ECS Fargate + Application Load Balancer
- Redis: ElastiCache for Redis
- Mesh Nodes: EC2 instances (t3.large) across multiple AZs
- Monitoring: CloudWatch + Prometheus
Google Cloud:
- Gateway: Cloud Run + Cloud Load Balancing
- Redis: Cloud Memorystore
- Mesh Nodes: Compute Engine instances
- Monitoring: Cloud Monitoring + Grafana
Self-Hosted:
- Requirements: Linux VPS (Ubuntu 22.04 recommended)
- Setup time: 30 minutes
- Maintenance: Automated updates via systemd
Metrics (Prometheus format):
```text
# Request metrics
beacon_rpc_requests_total{method="eth_getBalance",status="success"} 10234
beacon_rpc_request_duration_seconds{method="eth_call",quantile="0.95"} 0.432

# Consensus metrics
beacon_consensus_failures_total 12
beacon_consensus_agreements_ratio 0.967

# Network metrics
beacon_active_peers 47
beacon_peer_latency_seconds{peer_id="16Uiu...",quantile="0.5"} 0.120

# Cache metrics
beacon_cache_hit_ratio 0.78
beacon_cache_size_bytes 1048576000
```
Alerts:
```yaml
# High error rate
- alert: BeaconHighErrorRate
  expr: rate(beacon_rpc_requests_total{status="error"}[5m]) > 0.05
  for: 5m
  annotations:
    summary: "High error rate (>5%) on Beacon gateway"

# Low peer count
- alert: BeaconLowPeerCount
  expr: beacon_active_peers < 5
  for: 10m
  annotations:
    summary: "Low peer count may impact consensus reliability"

# High latency
- alert: BeaconHighLatency
  expr: beacon_rpc_request_duration_seconds{quantile="0.95"} > 1.0
  for: 5m
  annotations:
    summary: "P95 latency exceeds 1 second"
```

Grafana Dashboard:
- Request rate and error rate over time
- Latency percentiles (P50, P95, P99)
- Consensus success rate
- Active peer count and diversity
- Cache hit rate
- Geographic distribution of mesh nodes
Security:
- ✓ Run gateways behind firewall (only 8545/8546 exposed)
- ✓ Enable rate limiting (100-1000 req/s per IP)
- ✓ Use TLS/HTTPS for production deployments
- ✓ Regularly update light client software
- ✓ Monitor for unusual traffic patterns
Reliability:
- ✓ Deploy multiple gateway instances (load balancing)
- ✓ Use Redis persistence (RDB snapshots + AOF)
- ✓ Configure mesh node redundancy (≥5 nodes)
- ✓ Set up automated failover
- ✓ Test disaster recovery procedures
Performance:
- ✓ Tune cache TTLs based on workload
- ✓ Use SSD storage for mesh nodes
- ✓ Optimize query_peers vs latency trade-off
- ✓ Collocate gateway + Redis for low latency
- ✓ Monitor and scale based on request patterns
Cost Optimization:
- ✓ Use spot/preemptible instances for mesh nodes
- ✓ Implement aggressive caching for read-heavy workloads
- ✓ Right-size instance types based on metrics
- ✓ Consider multi-region deployment only if needed
Scenario: Building a DeFi protocol that needs trustless price feeds
Traditional Approach:
```javascript
// Trust Infura to return accurate data
const provider = new ethers.JsonRpcProvider('https://mainnet.infura.io/v3/...');
const balance = await provider.getBalance(address);
// No way to verify this is correct!
```

With Beacon RPC:
```javascript
// Cryptographically verified response from multiple clients
const provider = new ethers.JsonRpcProvider('https://beacon.example.com');
const response = await provider.send('eth_getBalance', [address, 'latest']);

// Response includes proof bundle:
// - Merkle proof linking balance to state root
// - BLS signatures from ≥3 different light clients
// - Consensus attestation (e.g., "3/3 nodes agree")
```

Benefits:
- Trustless: Don't rely on single provider
- Censorship-resistant: Can't be blocked
- Verifiable: Proofs can be independently checked
- Drop-in replacement: Same JSON-RPC interface
Scenario: Mobile wallet needs reliable, private RPC access
Challenges with Centralized RPC:
- Provider sees all user addresses and transactions
- Privacy violations and MEV opportunities
- Regulatory risks (KYC requirements, censorship)
With Beacon RPC:
- Privacy: Requests distributed across anonymous mesh
- Censorship resistance: No single point of control
- Light client security: Native mobile light client integration
- Offline capability: Portal Network enables local data storage
Example Integration:
```swift
// Swift (iOS)
let beaconProvider = BeaconRPCProvider(
    gatewayURL: "https://beacon.example.com",
    verifyProofs: true
)

let balance = try await beaconProvider.getBalance(address: userAddress)

// Proof verification happens automatically
if balance.verified {
    updateUI(balance: balance.value)
}
```

Scenario: Exchange needs guaranteed uptime and data integrity
Requirements:
- 99.99% uptime SLA
- Independently verifiable data (auditing/compliance)
- Geographic redundancy
- Protection against provider downtime/censorship
Beacon RPC Solution:
- Self-hosted gateways: Full control over infrastructure
- Private mesh nodes: Dedicated nodes for guaranteed capacity
- Multi-region deployment: Automatic failover
- Audit trails: Every response includes proof bundle for compliance
Architecture:
┌──────────────────────────────────────────────┐
│ Exchange Infrastructure │
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Gateway │ │ Gateway │ │
│ │ (US-East) │ │ (EU-West) │ │
│ └─────────────┘ └─────────────┘ │
│ │ │ │
│ ┌──────┴───────┬───────────────┘ │
│ │ │ │
│ ↓ ↓ │
│ [Private Mesh Nodes] + [Public Mesh] │
│ (Guaranteed capacity) (Redundancy) │
└──────────────────────────────────────────────┘
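The regional failover in the diagram above can be sketched as follows. This is a minimal illustration, not part of any Beacon release: the gateway URLs and the `sendWithFailover` helper are hypothetical.

```javascript
// Sketch: try each regional gateway in order, falling back on failure.
// Gateway URLs are placeholders for an exchange's own deployments.
const GATEWAYS = [
  'https://gateway-us-east.example.com',
  'https://gateway-eu-west.example.com',
];

async function sendWithFailover(method, params, fetchImpl = fetch) {
  let lastError;
  for (const url of GATEWAYS) {
    try {
      const res = await fetchImpl(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ jsonrpc: '2.0', id: 1, method, params }),
      });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return await res.json(); // first healthy region answers
    } catch (err) {
      lastError = err; // fall through to the next region
    }
  }
  throw lastError; // every region failed
}
```

A production setup would layer health checks and latency-based routing on top, but the ordered-fallback core stays the same.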
Scenario: Voting dApp needs maximum user privacy
Privacy Requirements:
- Hide user addresses from RPC providers
- Prevent transaction correlation
- Avoid MEV attacks
Beacon RPC Privacy Features:
- Request distribution: Different peers see different requests
- No logs: Gateways don't store request history
- Tor integration: Route through Tor network for IP anonymity
- Local gateway: Run personal gateway for zero external exposure
Privacy Architecture:
User Device → Local Gateway (Tor) → Random Mesh Peers
(Different peer for each request)
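The "different peer for each request" routing can be sketched as below. The peer IDs and helper names are illustrative assumptions; real mesh peers are discovered via the DHT.

```javascript
// Sketch: choose a fresh random peer for every request so no single
// peer observes a user's full query history.
function pickPeer(peers, random = Math.random) {
  if (peers.length === 0) throw new Error('no peers available');
  return peers[Math.floor(random() * peers.length)];
}

// Map each outgoing request to an independently drawn peer.
function distributeRequests(requests, peers, random = Math.random) {
  return requests.map((req) => ({ req, peer: pickPeer(peers, random) }));
}
```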
Scenario: DAO needs verifiable vote counting
Challenge: How to prove vote counts are accurate?
With Beacon RPC:
// Query vote count from contract
const voteCount = await contract.getVotes(proposalId);
// Response includes:
// 1. Merkle proof linking storage slot to state root
// 2. BLS signatures from ≥3 diverse light clients
// 3. State root verified by Ethereum consensus layer
// Anyone can independently verify the proof
const isValid = await beacon.verifyProof(response.proof);
assert(isValid); // Cryptographically guaranteed correctness
Benefits:
- Transparent: All voters can verify results
- Trustless: Don't need to trust DAO infrastructure
- Auditable: Proof bundles stored for future verification
Infura, Alchemy, QuickNode
| Aspect | Centralized | Beacon RPC |
|---|---|---|
| Trust | Full trust in provider | Trustless (cryptographic verification) |
| Censorship | Possible (has occurred) | Resistant (permissionless P2P) |
| Privacy | Provider sees all activity | Distributed (no single observer) |
| Availability | Single point of failure | Decentralized (Byzantine fault tolerant) |
| Latency | Excellent (45-80ms) | Good (180-320ms uncached, 8-12ms cached) |
| Cost | $50-$500/month | Free (future: $0-$50/month) |
| Setup | Immediate | 5 minutes (Docker Compose) |
When to Use Centralized:
- ✓ Prototyping/MVP stage
- ✓ Absolute minimum latency required (<50ms)
- ✓ Trust model acceptable for use case
When to Use Beacon:
- ✓ Production dApps requiring trustlessness
- ✓ Censorship resistance critical
- ✓ User privacy important
- ✓ Verifiable responses needed (compliance/auditing)
Running Geth/Nethermind/Besu
| Aspect | Full Node | Beacon RPC |
|---|---|---|
| Trust | Trustless | Trustless |
| Censorship | Resistant | Resistant |
| Resources | Very High (700GB+, 16GB RAM) | Low (gateway: 4GB RAM) |
| Setup | Complex (days of sync) | Simple (5 minutes) |
| Maintenance | High (updates, monitoring) | Low (automated) |
| Redundancy | None (single point of failure) | Built-in (mesh network) |
| Cost | $100-$300/month | $80-$290/month (gateway + mesh nodes) |
When to Use Full Node:
- ✓ Running validator or archive node
- ✓ Maximum control over infrastructure
- ✓ Lowest possible latency
When to Use Beacon:
- ✓ Don't want to manage node infrastructure
- ✓ Need redundancy without complex setup
- ✓ Want client diversity without running multiple nodes
Helios, Nimbus Light Client, Lodestar Light Client
| Aspect | Single Light Client | Beacon RPC |
|---|---|---|
| Trust | Minimal (trust client impl) | Lower (multi-client consensus) |
| Censorship | Resistant | Resistant |
| Resources | Low (2GB RAM) | Medium (gateway + backend) |
| Reliability | Single point of failure | Byzantine fault tolerant |
| Client Bugs | Vulnerable | Protected (client diversity) |
| Performance | 120-400ms | 180-450ms (slightly higher) |
| Integration | Requires client-specific setup | Standard JSON-RPC API |
When to Use Light Client:
- ✓ Embedded/mobile applications
- ✓ Minimal resource usage critical
- ✓ Trust single implementation
When to Use Beacon:
- ✓ Server-side applications
- ✓ Need protection against client bugs
- ✓ Want standard JSON-RPC interface
- ✓ Require higher reliability
Pocket Network:
- Model: Stake-based relay network
- Trust: Economic security (staking)
- Beacon advantage: Cryptographic verification (not just economic)
Ankr:
- Model: Distributed node network (still centralized operation)
- Trust: Trust Ankr infrastructure
- Beacon advantage: Fully permissionless, trustless
Chainstack:
- Model: Multi-cloud node deployment
- Trust: Trust Chainstack
- Beacon advantage: Open network, cryptographic proofs
Status: Complete
- Core Gateway implementation
- Mesh node with Helios integration
- Basic consensus verification
- Docker Compose deployment
- Redis caching layer
- Documentation and quickstart guides
Goals: Production-ready infrastructure
Features:
- Multi-client support (Nimbus, Lodestar)
- BLS signature aggregation
- Advanced peer selection (reputation system)
- WebSocket subscriptions (eth_subscribe)
- Comprehensive monitoring/alerting
- Load testing and optimization
- Security audit (Trail of Bits or similar)
Deliverables:
- Public mainnet gateway (gateway.beacon.network)
- Open mesh node network (50+ nodes)
- Performance benchmarks and SLA documentation
Goals: Sustainable economic model
Features:
- BEACON token launch
- Staking mechanism for mesh nodes
- Microfee payment system
- Slashing for malicious behavior
- Governance framework (DAO)
Tokenomics:
- Total Supply: 100,000,000 BEACON
- Distribution:
- 40% → Node operator rewards (10-year emission)
- 20% → Core team (4-year vesting)
- 20% → Community treasury
- 10% → Early supporters / investors
- 10% → Public sale
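The allocation figures above can be sanity-checked with a short sketch. Linear (straight-line) release is an assumption here; the whitepaper does not specify the emission curves.

```javascript
// Sketch of the BEACON distribution, assuming straight-line emission and
// vesting. Actual release curves are not specified in this document.
const TOTAL_SUPPLY = 100_000_000;

const ALLOCATIONS = {
  nodeOperatorRewards: { share: 0.40, years: 10 }, // 10-year emission
  coreTeam:            { share: 0.20, years: 4 },  // 4-year vesting
  communityTreasury:   { share: 0.20, years: 0 },
  earlySupporters:     { share: 0.10, years: 0 },
  publicSale:          { share: 0.10, years: 0 },
};

// Tokens released per year for a time-locked allocation under linear release.
function annualRelease(name) {
  const { share, years } = ALLOCATIONS[name];
  const total = TOTAL_SUPPLY * share;
  return years > 0 ? total / years : total;
}
```

Under these assumptions, node operators would receive 4,000,000 BEACON per year and the core team would vest 5,000,000 per year.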
Portal Network Deep Integration:
- History network full implementation
- State network for archival queries
- Beacon network light client updates
- DHT optimization for faster lookups
Cross-Chain Support:
- Polygon/L2 support (Optimism, Arbitrum, Base)
- Multi-chain gateway (single endpoint, multiple networks)
- Chain-specific optimizations
Developer Tools:
- SDK libraries (JavaScript, Python, Rust, Go)
- Proof verification library for client-side checking
- Gateway plugins (custom verification logic)
- Analytics dashboard for operators
Enterprise Features:
- Private mesh node networks
- Custom SLA guarantees
- Dedicated support
- On-premise deployment options
Zero-Knowledge Proofs:
- zkSNARK proofs for state queries
- Recursive proof aggregation
- Privacy-preserving query execution
Advanced Cryptography:
- Threshold signatures for node groups
- Verifiable delay functions for randomness
- Post-quantum cryptography migration
Network Economics:
- Dynamic fee market
- Quality-of-service tiers
- Automated node resource allocation
Governance:
- On-chain parameter upgrades
- Community-driven feature prioritization
- Grant program for ecosystem development
Beacon RPC represents a paradigm shift in blockchain infrastructure, moving from trusted centralized services to trustless decentralized networks without sacrificing the developer experience or performance characteristics needed for production applications.
Key Innovations:
- Multi-client consensus: Leverage client diversity for Byzantine fault tolerance
- Light client verification: Cryptographic guarantees without full node costs
- Portal Network integration: Distributed data availability layer
- Proof bundles: Every response is independently verifiable
- Drop-in compatibility: No changes required to existing dApps
Security Properties:
- Trustless: Cryptographic verification eliminates trust assumptions
- Censorship-resistant: Permissionless P2P mesh prevents single-point control
- Byzantine fault tolerant: Tolerates up to 1/3 malicious nodes
- Client-diverse: Protected against single-implementation bugs
Performance Characteristics:
- 180-450ms uncached latency: roughly 2-6x overhead vs centralized RPC (45-80ms), acceptable for most use cases
- 70-80% cache hit rate: Reduces effective latency to 8-50ms
- 1,000-15,000 req/s: Scales horizontally with gateway instances
- Cost-effective: $80-$290/month self-hosted vs $50-$500/month centralized
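The interaction between cache hit rate and effective latency above can be made concrete with a weighted-average sketch. The input numbers below are illustrative midpoints of the ranges quoted, not measured values.

```javascript
// Sketch: expected request latency as a weighted average of the cached
// and uncached paths. Inputs are illustrative midpoints of the ranges above.
function expectedLatencyMs(hitRate, cachedMs, uncachedMs) {
  if (hitRate < 0 || hitRate > 1) throw new RangeError('hitRate must be in [0, 1]');
  return hitRate * cachedMs + (1 - hitRate) * uncachedMs;
}

// e.g. a 75% hit rate with 10ms cached / 300ms uncached paths gives
// 0.75 * 10 + 0.25 * 300 = 82.5ms expected latency.
```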
Beacon RPC is more than infrastructure—it's a foundational piece of the decentralized web:
Short-term (2026):
- Production-ready mainnet network
- 1,000+ mesh nodes globally
- Integration with major wallets and dApps
- Sustainable economic model
Medium-term (2027-2028):
- Multi-chain support (L2s, alt-L1s)
- Advanced cryptographic proofs (zkSNARKs)
- Enterprise adoption (exchanges, institutions)
- Developer ecosystem (SDKs, tools, plugins)
Long-term (2029+):
- Default infrastructure for Ethereum access
- Cross-chain verification protocol
- Governance-driven evolution
- Foundation for Web3 decentralization
For Developers:
- Try Beacon RPC in your dApp: Quick Start Guide
- Integrate proof verification for maximum security
- Provide feedback on APIs and performance
For Node Operators:
- Run a mesh node: Setup Guide
- Earn future rewards through participation
- Contribute to network decentralization
For Researchers:
- Analyze security properties and threat models
- Propose cryptographic improvements
- Publish independent audits and benchmarks
For Investors/Partners:
- Support development through grants or investment
- Collaborate on enterprise features
- Join governance and strategic planning
Resources:
- Website: https://beacon.network
- GitHub: https://github.com/beacon-network/beacon-rpc
- Documentation: https://docs.beacon.network
- Discord: https://discord.gg/beacon-network
- Twitter: @BeaconRPC
Contact:
- General inquiries: team@beacon.network
- Security issues: security@beacon.network (PGP key available)
- Partnership opportunities: partnerships@beacon.network
- BLS Signature: Boneh-Lynn-Shacham signature scheme, enables efficient signature aggregation
- Byzantine Fault Tolerance: System remains correct even if up to 1/3 of nodes are malicious
- Consensus Layer: Ethereum's proof-of-stake beacon chain
- DHT: Distributed Hash Table, used for peer discovery
- Execution Layer: The Ethereum chain component that executes transactions and maintains state (formerly the "Eth1" chain)
- Light Client: Node that verifies blockchain state without downloading full history
- Merkle Proof: Cryptographic proof that data exists in a Merkle tree
- Mesh Network: Peer-to-peer network where all nodes are equal participants
- Portal Network: Ethereum's distributed data availability protocol
- Proof Bundle: Collection of cryptographic proofs attesting to response correctness
- State Root: Merkle root of Ethereum's entire state trie
- Sync Committee: Rotating group of Ethereum validators responsible for light client updates
Supported Networks:
- Ethereum Mainnet
- Goerli Testnet (deprecated)
- Sepolia Testnet
- Holesky Testnet
Supported JSON-RPC Methods:
eth_blockNumber
eth_getBalance
eth_getCode
eth_getStorageAt
eth_call
eth_estimateGas
eth_getBlockByNumber
eth_getBlockByHash
eth_getTransactionByHash
eth_getTransactionReceipt
eth_getLogs
eth_chainId
eth_gasPrice
eth_maxPriorityFeePerGas
eth_feeHistory
net_version
web3_clientVersion
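Every method in the list above is invoked with the standard JSON-RPC 2.0 envelope. A minimal sketch of building and unwrapping that envelope, with hypothetical helper names:

```javascript
// Sketch: build the standard JSON-RPC 2.0 request envelope used by
// every supported method listed above.
function buildRpcRequest(method, params = [], id = 1) {
  return { jsonrpc: '2.0', id, method, params };
}

// Unwrap a JSON-RPC response body, surfacing errors as exceptions.
function unwrapRpcResponse(body) {
  if (body.error) {
    throw new Error(`RPC error ${body.error.code}: ${body.error.message}`);
  }
  return body.result;
}
```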
System Requirements:
| Component | CPU | RAM | Storage | Network |
|---|---|---|---|---|
| Gateway | 2-4 cores | 4GB | 20GB | 1 Gbps |
| Mesh Node | 4 cores | 8GB | 100GB SSD | 100 Mbps |
| Redis | 2 cores | 4GB | 20GB SSD | Local |
Network Ports:
- Gateway: 8545 (HTTP RPC), 8546 (WebSocket)
- Mesh Node: 9001 (P2P), 9002 (Metrics)
- Redis: 6379 (Internal)
- Ethereum Light Client Specification: https://github.com/ethereum/consensus-specs/tree/dev/specs/altair/light-client
- Portal Network Specification: https://github.com/ethereum/portal-network-specs
- BLS12-381 Specification: https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-bls-signature
- libp2p Documentation: https://docs.libp2p.io/
- Helios Light Client: https://github.com/a16z/helios
- Nimbus Ethereum Client: https://nimbus.team/
- Lodestar Ethereum Client: https://chainsafe.github.io/lodestar/
- Ethereum JSON-RPC API: https://ethereum.org/en/developers/docs/apis/json-rpc/
- Byzantine Fault Tolerance: Lamport, L., Shostak, R., & Pease, M. (1982). The Byzantine Generals Problem.
Beacon RPC is open-source software licensed under the MIT License.
Copyright (c) 2025 Beacon Network Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
[Full MIT License text]
Document Version: 1.0
Last Updated: November 27, 2025
Authors: Beacon Network Team
Contact: team@beacon.network