BlockRaFT is a crash-tolerant and scalable distributed framework designed to improve the availability and performance of blockchain nodes, particularly in permissioned settings. Instead of replicating full nodes per organization, BlockRaFT distributes the internal workload of a single blockchain node across a cluster using a RAFT-based leader–follower architecture. This design preserves fairness in consensus while enabling scalability and resilience to node crashes. Additionally, the framework introduces a concurrent Merkle tree optimization that decouples smart contract execution from state updates, significantly reducing execution overhead. Experimental results demonstrate substantial performance improvements over traditional single-core and multi-core baselines while maintaining strong fault tolerance.
To evaluate BlockRaFT, we compare our distributed framework against two single-node baselines: a serial scheduler and a parallel multi-core scheduler. The goal is to isolate the benefits of distributed workload partitioning and crash tolerance beyond local parallelism.
A C++-based distributed framework built with CMake, supporting REST APIs, clients, DAG-based execution, protobuf-based communication, and coordination via etcd and Redpanda.
The repository is organized as follows:
ICSA_ARTIFACT/
├── BlockRaFT-distributed_node --> BlockRaFT
├── singleNode-parallel --> Parallel execution mode
├── singleNode-serial --> Serial execution mode
└── Experiment_files --> All experiment-related files
The initial setup requires more than 30 minutes, primarily due to the following steps:
- Provisioning and configuring multiple virtual machines (VMs) to emulate a distributed cluster environment.
- Installation and configuration of etcd for distributed coordination and RAFT-based leader election.
- Installation of required dependencies, including:
  - Google Protocol Buffers (protobuf) for serialization
  - Google Test (gtest) for unit testing
The setup time may vary depending on system configuration, network bandwidth, and VM provisioning speed. The framework supports two deployment configurations:
- Single Node Framework
- BlockRaFT Distributed Node Framework
This configuration runs the framework on a single machine using one VM (nodeS).
Ensure the following are installed:
- CMake >= 3.20
- GCC >= 11.4.0
- Google Protocol Buffers
- Boost
- gRPC
- RocksDB
- etcd client libraries (if required)
- Multipass
The VM is created using Multipass, and all dependencies are installed inside the VM. After building via CMake, correctness can be verified using ctest.
Install snapd if it is not already available:
sudo apt update
sudo apt install snapd
sudo snap install multipass
Verify installation:
sudo multipass version
Launch the VM (nodeS):
sudo multipass launch -n nodeS --cpus 8 --mem 10G --disk 20G 24.04
Enter the VM:
sudo multipass shell nodeS
Install dependencies inside the VM:
sudo apt-get update && \
sudo apt-get install -y cmake unzip python3-pip protobuf-compiler libprotobuf-dev \
libboost-all-dev libssl-dev libcpprest-dev libboost-system-dev \
libcurl4-openssl-dev librdkafka-dev libasio-dev libgflags-dev \
nlohmann-json3-dev libgtest-dev librocksdb-dev \
libgrpc-dev libgrpc++-dev protobuf-compiler-grpc etcd-client libtbb-dev
After completing the above steps, your Single Node environment is ready.
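Before transferring the source, you can optionally confirm the core toolchain is visible inside the VM. This check is illustrative and not part of the artifact:

```shell
# Illustrative toolchain check (not part of the artifact).
# Records any missing tool in $missing instead of aborting.
missing=""
for tool in cmake g++ protoc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version | head -n1)"
  else
    echo "$tool: NOT FOUND"
    missing="$missing $tool"
  fi
done
```

If `$missing` is non-empty, re-run the dependency installation above before proceeding.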
Copy the required module folder depending on the execution mode:
- Serial Execution → transfer to nodeS:singleNode-serial
- Parallel Execution → transfer to nodeS:singleNode-parallel
Example:
cd singleNode-serial
sudo multipass transfer -r ./ nodeS:singleNode-serial
OR,
cd singleNode-parallel
sudo multipass transfer -r ./ nodeS:singleNode-parallel
Inside the VM:
For Serial execution mode:
cd singleNode-serial
mkdir -p build
cd build
cmake ..
make
For Parallel execution mode:
cd singleNode-parallel
mkdir -p build
cd build
cmake ..
make
To verify the setup:
cd build
ctest
This configuration runs the framework across multiple VMs (node0, node1, node2) using etcd and Redpanda.
Distributed_Framework/
├── build/
├── blocksDB
├── block_producer
├── DAG-module
├── merkelTree
├── Crow/
├── googletest/
├── protos/
├── client/
├── RestAPI/
├── auto_deploy.sh
└── CMakeLists.txt
- CMake >= 3.20
- GCC >= 11.4.0
- Google Protocol Buffers
- Multipass
sudo snap install multipass
Verify installation:
sudo multipass version
cd BlockRaFT-distributed_node
chmod +x autoDeploy.sh
sudo ./autoDeploy.sh
Transfer the framework to each VM:
sudo multipass transfer -r ./ node0:Distributed_Framework
sudo multipass transfer -r ./ node1:Distributed_Framework
sudo multipass transfer -r ./ node2:Distributed_Framework
Enter each VM in three different terminals:
sudo multipass shell node0
sudo multipass shell node1
sudo multipass shell node2
Install dependencies inside each VM:
sudo apt-get update && \
sudo apt-get install -y cmake unzip python3-pip protobuf-compiler libprotobuf-dev \
libboost-all-dev libssl-dev libcpprest-dev libboost-system-dev \
libcurl4-openssl-dev librdkafka-dev libasio-dev libgflags-dev \
nlohmann-json3-dev libgtest-dev librocksdb-dev \
libgrpc-dev libgrpc++-dev protobuf-compiler-grpc etcd-client libtbb-dev
Install the Redpanda rpk client:
curl -LO https://github.com/redpanda-data/redpanda/releases/latest/download/rpk-linux-amd64.zip && \
mkdir -p ~/.local/bin && \
export PATH="$HOME/.local/bin:$PATH" && \
unzip rpk-linux-amd64.zip -d ~/.local/bin/
Build and install the etcd C++ client (etcd-cpp-apiv3):
git clone --recurse-submodules https://github.com/etcd-cpp-apiv3/etcd-cpp-apiv3.git
cd etcd-cpp-apiv3
mkdir build && cd build
cmake ..
sudo make -j4
sudo make install
Inside each VM:
cd Distributed_Framework
mkdir -p build
cd build
cmake ..
make
To verify the setup:
cd build
ctest
This section explains how to configure and reproduce experiments.
Each execution folder contains:
config.json
Example:
{
"threadCount": 8,
"txnCount": 10000,
"blocks": 2,
"scheduler": "parallel",
"mode": "production"
}
- threadCount → worker threads (parallel mode)
- txnCount → number of transactions
- blocks → number of blocks generated
- scheduler → "serial" or "parallel"
- mode → execution mode
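As a quick sanity check before a run, the config can be validated with Python 3 (installed via the dependency steps above). This snippet is illustrative; the value constraints below are assumptions, not checks the framework itself performs:

```shell
# Illustrative config.json check; the constraints asserted here are assumptions.
cat > config.json <<'EOF'
{
  "threadCount": 8,
  "txnCount": 10000,
  "blocks": 2,
  "scheduler": "parallel",
  "mode": "production"
}
EOF
python3 - <<'EOF'
import json
cfg = json.load(open("config.json"))
assert cfg["scheduler"] in ("serial", "parallel"), "unknown scheduler"
assert cfg["threadCount"] >= 1 and cfg["txnCount"] >= 1
print("config OK:", cfg["scheduler"], "with", cfg["threadCount"], "threads")
EOF
```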
Located in:
ICSA_ARTIFACT
└── Experiment_files/
    ├── Wallet/
    └── Voting/
Workload files are named <TxnCount>-<ConflictPercentage>.txt.
Examples:
1000-0.txt, 5000-50.txt
Choose a workload file from:
Experiment_files/Wallet/ or Experiment_files/Voting/
Copy it into:
singleNode-serial/testFile/testFile.txt or singleNode-parallel/testFile/testFile.txt
inside the selected execution folder.
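The copy step above can be scripted. The commands below are a sketch for serial mode using the 1000-0.txt Wallet workload; the placeholder line exists only so the example runs standalone outside a real checkout:

```shell
# Sketch: install a Wallet workload as the serial mode's test file.
src=Experiment_files/Wallet/1000-0.txt
dst=singleNode-serial/testFile/testFile.txt
mkdir -p "$(dirname "$src")" "$(dirname "$dst")"  # no-ops in a real checkout
[ -f "$src" ] || echo "demo workload" > "$src"    # placeholder so this runs standalone
cp "$src" "$dst" && echo "installed $dst"
```

For parallel mode, substitute singleNode-parallel for singleNode-serial in `dst`.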
Copy into:
BlockRaFT-distributed_node/leader/testFile.txt
Copy:
Experiment_files/Wallet/setup.txt or Experiment_files/Voting/setup.txt
into:
singleNode-serial/testFile/setup.txt or singleNode-parallel/testFile/setup.txt
inside the selected execution folder.
Copy:
Experiment_files/Wallet/setup.txt
into:
BlockRaFT-distributed_node/leader/setupFile.txt
Serial:
cd singleNode-serial/build
./node
Parallel:
cd singleNode-parallel/build
./node
Run the command below in each VM (for a 3-node setup: node0, node1, node2):
cd BlockRaFT-distributed_node/build
./node
To tear down the VMs:
sudo multipass stop node0 node1 node2
sudo multipass delete node0 node1 node2
sudo multipass purge
To reproduce results reported in the paper:
- Use identical config.json parameters
- Use the same workload file
- Use identical VM specs
- Avoid background workloads
- Run multiple iterations and report averages
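A minimal sketch for the iteration step, assuming ./node exits after processing the configured blocks (run from the relevant build/ directory):

```shell
# Sketch: time several runs; '|| true' keeps the loop going if a run fails.
RUNS=3
rm -f timings.txt
for i in $(seq 1 "$RUNS"); do
  start=$(date +%s)
  ./node || true
  end=$(date +%s)
  echo "iteration $i: $((end - start)) s" >> timings.txt
done
cat timings.txt
```

Wall-clock timing with date is coarse; if the framework reports its own throughput numbers, average those instead.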
- All distributed nodes must use identical builds.
- Ensure etcd and Redpanda ports are open.
- Maintain consistent startup order.
- Recommended VM: 8 CPUs, 10GB RAM.
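For the port note above, a hedged reachability check can help: etcd's default client/peer ports are 2379/2380 and Redpanda's Kafka API defaults to 9092. NODE_IP below is a placeholder for a peer VM's address:

```shell
# Hedged port check; NODE_IP is a placeholder. Uses bash's /dev/tcp redirection.
NODE_IP=127.0.0.1
status=""
for port in 2379 2380 9092; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/$NODE_IP/$port" 2>/dev/null; then
    status="$status $port:open"
  else
    status="$status $port:closed"
  fi
done
echo "port status:$status"
```

If your deployment configures non-default ports, adjust the list accordingly.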