
Distributed Raft KV Store

A distributed key-value store built on the Raft consensus algorithm. This project demonstrates core distributed-systems concepts: consensus, leader election, log replication, and fault tolerance across a multi-node cluster.

🎯 Features

  • Raft Consensus Algorithm: Complete implementation of leader election, log replication, and commit logic
  • Distributed Key-Value Store: Thread-safe KV operations across multiple nodes
  • Multi-Node Cluster: Support for 3+ node clusters with automatic failover
  • REST API Gateway: HTTP interface for client interactions
  • Persistence: Log persistence with snapshot management (planned)
  • Fault Tolerance: Continues operation during node failures and network partitions
  • Docker Support: Multi-container deployment with Docker Compose

🏗️ Architecture

Project Structure

```
distributed-raft-kv-store/
├── raft/                    # Raft consensus implementation
│   ├── types.go             # Core data types (Node, LogEntry, RPC types)
│   ├── node.go              # Node lifecycle and election logic
│   └── rpc.go               # RPC handlers (RequestVote, AppendEntries)
├── kvstore/                 # Key-value store state machine
│   └── store.go             # In-memory KV store with metadata
├── cmd/
│   ├── raft-node/           # Raft node CLI
│   │   └── main.go          # Node entry point
│   └── kv-gateway/          # REST API gateway
│       └── main.go          # Gateway entry point with HTTP handlers
├── internal/                # Shared utilities
├── Makefile                 # Build and deployment commands
├── docker-compose.yml       # Multi-node cluster configuration
├── .env.example             # Configuration template
├── go.mod                   # Go module definition
└── README.md                # This file
```

🚀 Quick Start

Prerequisites

  • Go 1.21+
  • Docker & Docker Compose
  • Make

Local Development

```bash
# Build binaries
make build

# Run 3-node cluster with Docker Compose
make run

# Test KV operations
curl http://localhost:8080/health
curl -X PUT http://localhost:8080/kv/mykey -H "Content-Type: application/json" -d '{"value": "myvalue"}'
curl http://localhost:8080/kv/mykey

# Stop services
make stop
```

📡 REST API Endpoints

Health Check

GET /health

Response: {"status": "healthy", "keys": "0"}

Set Key-Value

PUT /kv/{key}
Content-Type: application/json
{"value": "your-value"}

Get Value

GET /kv/{key}

Response: {"key": "mykey", "value": "myvalue"}

Delete Key

DELETE /kv/{key}

Response: {"key": "mykey", "status": "deleted"}

🔄 Raft Consensus Explained

States

  • Follower: Default state, receives RPCs from leader
  • Candidate: Requests votes during election timeout
  • Leader: Sends heartbeats and replicates log entries

Key Mechanisms

  • Leader Election: Random election timeouts (150-300ms) prevent split votes
  • Log Replication: AppendEntries RPC ensures consistency across nodes
  • Safety: Only entries committed to majority are applied to state machine

📚 Configuration

Copy .env.example to .env and adjust the values:

```env
# Node Configuration
NODE_ID=1
NODE_ADDR=localhost:50051
CLUSTER_NODES=localhost:50051,localhost:50052,localhost:50053

# Timing Parameters
ELECTION_TIMEOUT_MIN=150        # milliseconds
ELECTION_TIMEOUT_MAX=300        # milliseconds
HEARTBEAT_INTERVAL=50           # milliseconds

# Gateway
GATEWAY_ADDR=0.0.0.0:8080
```

🐳 Docker Deployment

Start 3-Node Cluster

```bash
docker-compose up -d

# Monitor logs
docker-compose logs -f
```

Ports

  • Raft Node 1: 50051
  • Raft Node 2: 50052
  • Raft Node 3: 50053
  • REST API Gateway: 8080

🧪 Testing

Unit Tests

```bash
make test
```

Manual Testing

```bash
# Check health
curl http://localhost:8080/health

# Create keys
for i in {1..10}; do
  curl -X PUT http://localhost:8080/kv/key$i \
    -H "Content-Type: application/json" \
    -d '{"value": "value'$i'"}'
done

# Retrieve keys
curl http://localhost:8080/kv/key1

# Delete keys
curl -X DELETE http://localhost:8080/kv/key1
```

📈 Scalability

  • Horizontal scaling by adding more Raft nodes
  • Log compaction through snapshots (planned)
  • Async replication for high throughput

🔐 Security Considerations

  • TLS support for inter-node communication (planned)
  • Authentication and authorization (planned)
  • Rate limiting on API endpoints (planned)

🚧 Future Enhancements

  • Snapshot management and log compaction
  • TLS encryption for RPC
  • gRPC for inter-node communication
  • Persistence layer (RocksDB)
  • Metrics and monitoring (Prometheus)
  • Web UI for cluster visualization
  • Benchmarking suite
  • Configuration hot-reload

📖 Learning Resources

  • The Raft paper: "In Search of an Understandable Consensus Algorithm" (Ongaro & Ousterhout)
  • raft.github.io: interactive visualization and a catalog of Raft implementations
  • The Secret Lives of Data: step-by-step animated walkthrough of Raft

📄 License

MIT License - See LICENSE file for details

👤 Author

DevPatel-11


Status: MVP Complete - Core Raft consensus and KV store operational

Last Updated: December 2025
