Core transaction processing engine for the Magicblock validator.

## Overview

This crate is the heart of the validator's execution layer. It provides a high-performance, parallel transaction processing pipeline built around the Solana Virtual Machine (SVM). Its primary responsibility is to take sanitized transactions from the rest of the system, execute or simulate them, commit the resulting state changes, and broadcast the outcomes.

The design centers on a single **Scheduler** that distributes work to a pool of isolated **Executor** workers, enabling concurrent transaction processing.

## Architecture

```
                   ┌─────────────────────────────────────────┐
                   │          TransactionScheduler           │
                   │  ┌───────────────────────────────────┐  │
Transactions ───▶  │  │       ExecutionCoordinator        │  │
                   │  │  ┌─────────┐   ┌───────────────┐  │  │
                   │  │  │  Locks  │   │ Blocked Queues│  │  │
                   │  │  │  (u64/  │   │  (BinaryHeap) │  │  │
                   │  │  │ account)│   │               │  │  │
                   │  │  └─────────┘   └───────────────┘  │  │
                   │  └───────────────────────────────────┘  │
                   └──────────────┬──────────────────────────┘
             ┌────────────────────┼────────────────────┐
             ▼                    ▼                    ▼
      ┌─────────────┐      ┌─────────────┐      ┌─────────────┐
      │  Executor 0 │      │  Executor 1 │      │  Executor N │
      │   (thread)  │      │   (thread)  │      │   (thread)  │
      └─────────────┘      └─────────────┘      └─────────────┘
```

## Core Components

### TransactionScheduler

The central coordinator and single entry point for all transactions. Runs in a dedicated thread with its own Tokio runtime; a dispatch-loop sketch follows the list below.

- Receives transactions from external components via MPSC channel
- Dispatches transactions to available executors
- Handles executor readiness notifications
- Manages slot transitions (sysvar updates, program cache pruning)
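
A minimal sketch of the dispatch loop, using hypothetical stand-in types and channels (the real scheduler also consults the coordinator's account locks and handles slot transitions before handing work out):

```rust
// Minimal sketch of the dispatch loop (hypothetical channel layout).
use tokio::sync::mpsc;

struct ProcessableTransaction; // stand-in for the real transaction wrapper

async fn run_scheduler(
    mut incoming: mpsc::UnboundedReceiver<ProcessableTransaction>,
    executors: Vec<mpsc::Sender<ProcessableTransaction>>,
    mut ready: mpsc::UnboundedReceiver<usize>, // index of an executor reporting "ready"
) {
    let mut idle: Vec<usize> = (0..executors.len()).collect();
    loop {
        tokio::select! {
            // An executor finished its previous transaction and can take more work.
            Some(id) = ready.recv() => idle.push(id),
            // A new transaction arrived; dispatch it only if somebody is idle.
            Some(txn) = incoming.recv(), if !idle.is_empty() => {
                let id = idle.pop().unwrap();
                let _ = executors[id].send(txn).await;
            }
            else => break, // all channels closed, shut down
        }
    }
}
```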

### ExecutionCoordinator

Manages transaction scheduling and account locking (a lock-word sketch follows this list):

- **Locking**: Bitmask-based read/write locks per account (single `u64` per account)
  - Supports up to 63 concurrent executors
  - Multiple readers OR single writer per account
- **Queuing**: Blocked transactions queued behind the blocking executor
  - Min-heap ordering by transaction ID (FIFO)
  - Transactions keep their ID when requeued (older transactions get priority)
- **No fairness blocking**: Transactions only block on actual lock conflicts, not on queued transactions
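
A minimal sketch of such a lock word (the exact bit layout here is an assumption for illustration): one bit marks an exclusive writer and the remaining bits record which executors hold read locks, which is why the scheme tops out at 63 concurrent executors.

```rust
/// Illustrative per-account lock word: bit 63 marks an exclusive writer,
/// bits 0..=62 mark which executor currently holds a read lock.
#[derive(Default)]
struct AccountLock(u64);

const WRITER_BIT: u64 = 1 << 63;

impl AccountLock {
    /// Grant a shared read lock to `executor` unless a writer holds the account.
    fn try_read(&mut self, executor: u8) -> bool {
        debug_assert!(executor < 63);
        if self.0 & WRITER_BIT != 0 {
            return false; // blocked by the current writer
        }
        self.0 |= 1 << executor;
        true
    }

    /// Grant the exclusive write lock only if no readers or writer are present.
    fn try_write(&mut self, executor: u8) -> bool {
        debug_assert!(executor < 63);
        if self.0 != 0 {
            return false; // some executor already reads or writes this account
        }
        self.0 = WRITER_BIT | (1 << executor);
        true
    }

    /// Release whatever lock `executor` holds on this account.
    fn release(&mut self, executor: u8) {
        self.0 &= !(WRITER_BIT | (1 << executor));
    }
}
```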

### TransactionExecutor

The workhorse of the system. Each executor runs in its own dedicated OS thread (a thread-setup sketch follows this list):

- Loads accounts from AccountsDb
- Executes transactions via SVM
- Commits state changes
- Writes to ledger
- Broadcasts status updates
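
A minimal sketch of that thread setup, assuming a plain current-thread Tokio runtime and a placeholder work loop (the real executor drives the SVM, the `AccountsDb`, and the `Ledger`):

```rust
// Sketch of spawning one executor on its own OS thread with a private,
// single-threaded Tokio runtime (placeholder work loop).
use tokio::sync::mpsc;

struct ProcessableTransaction; // stand-in type

fn spawn_executor(
    index: usize,
    mut work: mpsc::UnboundedReceiver<ProcessableTransaction>,
) -> std::thread::JoinHandle<()> {
    std::thread::Builder::new()
        .name(format!("executor-{index}"))
        .spawn(move || {
            // Each executor owns a current-thread runtime, so its async work
            // never migrates to another core or competes with other workers.
            let rt = tokio::runtime::Builder::new_current_thread()
                .enable_all()
                .build()
                .expect("failed to build executor runtime");
            rt.block_on(async move {
                while let Some(_txn) = work.recv().await {
                    // load accounts -> execute via SVM -> commit -> broadcast
                }
            });
        })
        .expect("failed to spawn executor thread")
}
```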

## Scheduling Strategy

The scheduler uses a simple, deadlock-free approach:

1. **Try all locks**: Attempt to acquire all account locks for a transaction
2. **On conflict**: Release any partial locks, queue behind the blocking executor
3. **On executor ready**: Drain its blocked queue, retry transactions (oldest first)

This design (see the sketch after this list):
- **Prevents deadlocks**: No circular wait conditions possible
- **Allows livelocks**: A transaction may be repeatedly requeued (acceptable trade-off)
- **Maintains FIFO ordering**: Within each executor's queue via min-heap
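
A compressed sketch of the strategy under simplifying assumptions (exclusive-only locks and illustrative helper types; the real coordinator also supports shared read locks):

```rust
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

type AccountKey = [u8; 32]; // stand-in for a pubkey
type TxnId = u64;           // monotonically increasing; lower = older

#[derive(Default)]
struct Coordinator {
    /// Executor currently holding each locked account (simplified to one
    /// exclusive holder per account).
    locks: HashMap<AccountKey, usize>,
    /// Transactions parked behind each executor, oldest (smallest id) first.
    blocked: HashMap<usize, BinaryHeap<Reverse<TxnId>>>,
}

impl Coordinator {
    /// Try to lock every account a transaction touches. On the first conflict,
    /// roll back the partial locks and park the transaction behind the holder.
    fn try_schedule(&mut self, txn: TxnId, executor: usize, accounts: &[AccountKey]) -> bool {
        let mut acquired = Vec::new();
        for key in accounts {
            match self.locks.get(key) {
                Some(&holder) => {
                    for k in &acquired {
                        self.locks.remove(k); // release partial locks
                    }
                    self.blocked.entry(holder).or_default().push(Reverse(txn));
                    return false;
                }
                None => {
                    self.locks.insert(*key, executor);
                    acquired.push(*key);
                }
            }
        }
        true // all locks held; the transaction can run on `executor`
    }

    /// When an executor finishes, drain its queue oldest-first for retry.
    fn drain_blocked(&mut self, executor: usize) -> Vec<TxnId> {
        let mut heap = self.blocked.remove(&executor).unwrap_or_default();
        std::iter::from_fn(|| heap.pop().map(|Reverse(id)| id)).collect()
    }
}
```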

## Transaction Workflow

1. External component sends `ProcessableTransaction` to scheduler (see the submission sketch after this list)
2. Scheduler assigns to a ready executor
3. Coordinator attempts to acquire account locks:
   - **Success**: Transaction sent to executor for processing
   - **Conflict**: Transaction queued behind blocking executor, original executor released
4. Executor processes transaction via SVM
5. On completion: commits state, writes to ledger, broadcasts status
6. Executor signals ready, scheduler drains its blocked queue
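
A hypothetical submission-side sketch of steps 1 and 5–6, using stand-in types for the handle, transaction, and status (the crate's real types and method signatures will differ in detail):

```rust
use tokio::sync::{mpsc, oneshot};

type TransactionStatus = Result<(), String>; // simplified outcome

struct ProcessableTransaction {
    simulate: bool,                            // execute vs. dry-run; payload omitted
    reply: oneshot::Sender<TransactionStatus>, // filled in by the executor
}

#[derive(Clone)]
struct SchedulerHandle(mpsc::UnboundedSender<ProcessableTransaction>);

impl SchedulerHandle {
    /// Step 1: enqueue the transaction; steps 5-6 deliver the status back.
    async fn execute(&self) -> TransactionStatus {
        let (reply, status) = oneshot::channel();
        self.0
            .send(ProcessableTransaction { simulate: false, reply })
            .map_err(|_| "scheduler shut down".to_string())?;
        status.await.unwrap_or(Err("executor dropped the reply".to_string()))
    }
}
```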

## Performance Considerations

- **Thread isolation**: Scheduler and each executor run in dedicated OS threads
- **Lock efficiency**: Single `u64` bitmask per account (no heap allocations for locks)
- **Shared program cache**: BPF programs compiled once, shared across all executors (see the cache sketch below)
- **No contention tracking overhead**: Simplified scheduler removes fairness bookkeeping
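
A minimal sketch of the "compile once, share everywhere" idea behind the shared program cache, using illustrative types rather than the SVM's actual cache:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

type ProgramId = [u8; 32];
struct CompiledProgram; // stand-in for a verified/compiled BPF program

#[derive(Default, Clone)]
struct SharedProgramCache(Arc<RwLock<HashMap<ProgramId, Arc<CompiledProgram>>>>);

impl SharedProgramCache {
    /// Return the cached program, compiling it at most once across all executors.
    fn get_or_compile(
        &self,
        id: ProgramId,
        compile: impl FnOnce() -> CompiledProgram,
    ) -> Arc<CompiledProgram> {
        if let Some(hit) = self.0.read().unwrap().get(&id) {
            return hit.clone();
        }
        let mut cache = self.0.write().unwrap();
        // Re-check under the write lock in case another executor won the race.
        cache
            .entry(id)
            .or_insert_with(|| Arc::new(compile()))
            .clone()
    }
}
```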