A production-ready, distributed key-value store implementation using the Raft consensus algorithm. This project demonstrates comprehensive understanding of distributed systems, consensus mechanisms, and fault tolerance.
- Raft Consensus Algorithm: Complete implementation of leader election, log replication, and commit logic
- Distributed Key-Value Store: Thread-safe KV operations across multiple nodes
- Multi-Node Cluster: Support for 3+ node clusters with automatic failover
- REST API Gateway: HTTP interface for client interactions
- Persistence: Log persistence with snapshot management (planned)
- Fault Tolerance: Continues operation during node failures and network partitions
- Docker Support: Multi-container deployment with Docker Compose
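The thread-safe KV operations mentioned above typically come down to guarding the underlying map with a read/write mutex so that concurrent gateway requests stay safe. Below is a minimal sketch of that idea; the `Store` type and its method names are illustrative assumptions, not necessarily the exact API of `kvstore/store.go`:

```go
package main

import (
	"fmt"
	"sync"
)

// Store is a hypothetical thread-safe in-memory key-value map.
// An RWMutex lets many readers proceed concurrently while writers
// take exclusive access.
type Store struct {
	mu   sync.RWMutex
	data map[string]string
}

func NewStore() *Store {
	return &Store{data: make(map[string]string)}
}

// Put inserts or overwrites a key under the write lock.
func (s *Store) Put(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = value
}

// Get reads a key under the shared read lock.
func (s *Store) Get(key string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

// Delete removes a key under the write lock.
func (s *Store) Delete(key string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.data, key)
}

func main() {
	s := NewStore()
	s.Put("mykey", "myvalue")
	v, ok := s.Get("mykey")
	fmt.Println(v, ok) // myvalue true
}
```

In a Raft-backed store, `Put` and `Delete` would be applied only after the corresponding log entry is committed; the mutex protects the map itself, not replication ordering.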
```
distributed-raft-kv-store/
├── raft/                  # Raft consensus implementation
│   ├── types.go           # Core data types (Node, LogEntry, RPC types)
│   ├── node.go            # Node lifecycle and election logic
│   └── rpc.go             # RPC handlers (RequestVote, AppendEntries)
├── kvstore/               # Key-value store state machine
│   └── store.go           # In-memory KV store with metadata
├── cmd/
│   ├── raft-node/         # Raft node CLI
│   │   └── main.go        # Node entry point
│   └── kv-gateway/        # REST API gateway
│       └── main.go        # Gateway entry point with HTTP handlers
├── internal/              # Shared utilities
├── Makefile               # Build and deployment commands
├── docker-compose.yml     # Multi-node cluster configuration
├── .env.example           # Configuration template
├── go.mod                 # Go module definition
└── README.md              # This file
```
- Go 1.21+
- Docker & Docker Compose
- Make
```sh
# Build binaries
make build

# Run 3-node cluster with Docker Compose
make run

# Test KV operations
curl http://localhost:8080/health
curl -X PUT http://localhost:8080/kv/mykey -H "Content-Type: application/json" -d '{"value": "myvalue"}'
curl http://localhost:8080/kv/mykey

# Stop services
make stop
```

`GET /health`
Response: `{"status": "healthy", "keys": "0"}`
`PUT /kv/{key}`
Content-Type: application/json
Body: `{"value": "your-value"}`

`GET /kv/{key}`
Response: `{"key": "mykey", "value": "myvalue"}`

`DELETE /kv/{key}`
Response: `{"key": "mykey", "status": "deleted"}`
- Follower: Default state, receives RPCs from leader
- Candidate: Requests votes during election timeout
- Leader: Sends heartbeats and replicates log entries
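The three roles above map naturally onto a small state type with explicit transitions. The sketch below is illustrative; `raft/types.go` in this repository may define the states differently:

```go
package main

import "fmt"

// NodeState enumerates the three Raft roles.
type NodeState int

const (
	Follower NodeState = iota
	Candidate
	Leader
)

func (s NodeState) String() string {
	switch s {
	case Follower:
		return "Follower"
	case Candidate:
		return "Candidate"
	case Leader:
		return "Leader"
	}
	return "Unknown"
}

// onElectionTimeout models the transition taken when a node's
// election timer fires: a Follower (or a Candidate whose election
// failed) becomes a Candidate and starts a new election. A Leader
// does not run an election timer.
func onElectionTimeout(s NodeState) NodeState {
	if s == Follower || s == Candidate {
		return Candidate
	}
	return s
}

func main() {
	fmt.Println(onElectionTimeout(Follower)) // Candidate
}
```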
- Leader Election: Random election timeouts (150-300ms) make split votes unlikely
- Log Replication: AppendEntries RPC ensures consistency across nodes
- Safety: Only entries committed to majority are applied to state machine
Copy `.env.example` to `.env` and edit as needed:

```sh
# Node Configuration
NODE_ID=1
NODE_ADDR=localhost:50051
CLUSTER_NODES=localhost:50051,localhost:50052,localhost:50053

# Timing Parameters
ELECTION_TIMEOUT_MIN=150   # milliseconds
ELECTION_TIMEOUT_MAX=300   # milliseconds
HEARTBEAT_INTERVAL=50      # milliseconds

# Gateway
GATEWAY_ADDR=0.0.0.0:8080
```

```sh
# Start the cluster
docker-compose up -d

# Monitor logs
docker-compose logs -f
```

- Raft Node 1: 50051
- Raft Node 2: 50052
- Raft Node 3: 50053
- REST API Gateway: 8080
```sh
make test

# Check health
curl http://localhost:8080/health

# Create keys
for i in {1..10}; do
  curl -X PUT http://localhost:8080/kv/key$i \
    -H "Content-Type: application/json" \
    -d '{"value": "value'$i'"}'
done

# Retrieve keys
curl http://localhost:8080/kv/key1

# Delete keys
curl -X DELETE http://localhost:8080/kv/key1
```

- Horizontal scaling by adding more Raft nodes
- Log compaction through snapshots (planned)
- Async replication for high throughput
- TLS support for inter-node communication (planned)
- Authentication and authorization (planned)
- Rate limiting on API endpoints (planned)
- Snapshot management and log compaction
- TLS encryption for RPC
- gRPC for inter-node communication
- Persistence layer (RocksDB)
- Metrics and monitoring (Prometheus)
- Web UI for cluster visualization
- Benchmarking suite
- Configuration hot-reload
MIT License - See LICENSE file for details
DevPatel-11
Status: MVP Complete - Core Raft consensus and KV store operational
Last Updated: December 2025