A thread-safe, network-accessible LRU cache server written in Go.
GoCache is a from-scratch implementation of an in-memory cache system featuring:
- LRU eviction policy - Automatically removes least recently used items when at capacity
- Thread-safe operations - Handles concurrent access from multiple clients
- TCP network protocol - Remote access via simple text commands
- Prometheus metrics - Exposes cache metrics for monitoring
- Grafana dashboards - Real-time visualization of cache performance
- O(1) operations - Constant-time get, set, and delete operations
GoCache combines two data structures for optimal performance:
- Hash map - O(1) key lookups
- Doubly-linked list - O(1) insertion, deletion, and LRU ordering
Client → TCP Server → Cache (Hash Map + Doubly-Linked List)
When the cache reaches capacity, it automatically evicts the least recently used item. Every Get() or Set() operation moves the accessed item to the "most recent" position.
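The map + doubly-linked-list design described above can be sketched as follows. This is a minimal illustration using Go's standard `container/list`; the names (`LRUCache`, `NewLRUCache`) are illustrative, not necessarily the repository's actual API.

```go
// Minimal sketch of the LRU design: a hash map for O(1) lookup,
// a doubly-linked list for O(1) recency ordering and eviction.
package main

import (
	"container/list"
	"fmt"
	"sync"
)

type entry struct {
	key, value string
}

type LRUCache struct {
	mu       sync.Mutex
	capacity int
	items    map[string]*list.Element // key -> list node, O(1) lookup
	order    *list.List               // front = most recently used
}

func NewLRUCache(capacity int) *LRUCache {
	return &LRUCache{
		capacity: capacity,
		items:    make(map[string]*list.Element),
		order:    list.New(),
	}
}

// Get returns the value for key and marks it most recently used.
func (c *LRUCache) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el)
		return el.Value.(*entry).value, true
	}
	return "", false
}

// Set inserts or updates a key, evicting the least recently used
// item (the back of the list) when the cache is at capacity.
func (c *LRUCache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(*entry).key)
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
}

func main() {
	c := NewLRUCache(2)
	c.Set("a", "1")
	c.Set("b", "2")
	c.Get("a")      // "a" becomes most recently used
	c.Set("c", "3") // at capacity: evicts "b", the LRU item
	_, ok := c.Get("b")
	fmt.Println(ok) // false
}
```

The `sync.Mutex` is the simplest way to get the thread safety listed in the features; every operation holds the lock for the duration of the map and list updates.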
Test Environment:
- OS: Linux (amd64)
- CPU: Intel Core Ultra 7 155H
- RAM: 32GB
- Go version: 1.21+
- Cache capacity: 1024 keys
Average Time per Operation:
| Operation | Average Time |
|---|---|
| Get (hit) | ~125 ns/op |
| Get (miss) | ~145 ns/op |
| Set (eviction) | ~400 ns/op |
| Concurrent workload | ~370 ns/op |
Hit Rate:
| Scenario | Keys | Workers | Requests | Target Hit Rate | Actual Hit Rate | Notes |
|---|---|---|---|---|---|---|
| Accuracy | 512 (50% capacity) | 20 | 100,000 | 80% | 79.93% | Baseline accuracy |
| Evictions | 1,100 (107% capacity) | 20 | 100,000 | 80% | 74.31% | Under eviction pressure |
| Concurrency | 750 (73% capacity) | 50 | 100,000 | 80% | 80.16% | High concurrency |
All scenarios ran for 5 minutes with a 15-second Prometheus scrape interval (20 data points).
- Docker and Docker Compose installed on your system
- Alternatively, Go 1.21+ if running locally without Docker
- Clone the repository:

  ```
  git clone https://github.com/BlaiseLM/gocache.git
  cd gocache
  ```

- Build the Docker images and run the services in detached mode:

  ```
  docker compose up --build -d
  ```

- Verify the services:
  - Cache server: `localhost:8080`
  - Prometheus metrics: `localhost:8081/metrics`
  - Prometheus dashboard: `localhost:9090`
  - Grafana dashboard: `localhost:3000`
If you prefer to run the server without Docker:

```
go run server.go
```

The server will listen on localhost:8080 by default. Prometheus metrics will be available at localhost:8081/metrics.
Connect to the server using nc (netcat) or telnet:

```
nc localhost 8080
```

Available Commands:

```
SET key value   # Store a key-value pair
GET key         # Retrieve a value by key
DELETE key      # Remove a key-value pair
FLUSH           # Clear entire cache
END             # Close connection
```
Example Session:

```
$ nc localhost 8080
SET user:1 alice
OK
GET user:1
alice
GET user:2
(nil)
SET user:2 bob
OK
DELETE user:1
OK
GET user:1
(nil)
FLUSH
OK
GET user:2
(nil)
END
Closing connection
```

For important details about the server's protocol and its compatibility with tools like telnet, see the Protocol Documentation.
- Start all services:

  ```
  docker compose up -d
  ```

- Verify services are running:

  ```
  docker ps
  ```

  You should see containers for:
  - Cache server (ports 8080, 8081)
  - Prometheus (port 9090)
  - Grafana (port 3000)

- Access the dashboards:
  - Prometheus dashboard: `localhost:9090`
  - Grafana dashboard: `localhost:3000`
- Navigate to `localhost:9090/query`
- Enter this PromQL query:

  ```
  (total_cache_hits / (total_cache_hits + total_cache_misses)) * 100
  ```

- Click "Execute" to see the current hit rate
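The PromQL query is just the hit-rate ratio `hits / (hits + misses) * 100`. The same number can be computed directly from a raw `/metrics` scrape, as in this sketch; it assumes the two counters appear in the standard Prometheus text format, unlabeled, under the names shown in this README.

```go
// Sketch: compute the cache hit rate from Prometheus text-format
// output, mirroring the PromQL query shown above.
// Assumption: total_cache_hits and total_cache_misses are plain
// (unlabeled) counters in the /metrics output.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// hitRate parses "name value" lines and returns
// hits / (hits + misses) * 100.
func hitRate(metrics string) (float64, error) {
	values := map[string]float64{}
	for _, line := range strings.Split(metrics, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and HELP/TYPE comments
		}
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		if v, err := strconv.ParseFloat(fields[1], 64); err == nil {
			values[fields[0]] = v
		}
	}
	hits, misses := values["total_cache_hits"], values["total_cache_misses"]
	if hits+misses == 0 {
		return 0, fmt.Errorf("no cache traffic recorded")
	}
	return hits / (hits + misses) * 100, nil
}

func main() {
	sample := "total_cache_hits 7993\ntotal_cache_misses 2007\n"
	rate, _ := hitRate(sample)
	fmt.Printf("%.2f%%\n", rate) // 79.93%
}
```

This is handy for quick sanity checks during a load test, without opening the Prometheus UI.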
- Navigate to `localhost:3000`
- Create a new dashboard
- Select Prometheus as the data source
- Select the "Time series" visualization
- Toggle "Code"
- Add the same PromQL query:

  ```
  (total_cache_hits / (total_cache_hits + total_cache_misses)) * 100
  ```

- Click "Run queries"
- Start the load generator:

  ```
  ./load_generator.sh -k 1000 -w 20 -h 0.8 -d 300
  ```

  This runs for 5 minutes (300 seconds) with:
  - 1000 keys
  - 20 concurrent workers
  - 80% target hit ratio

- Verify metrics are updating:
  - Check `localhost:8081/metrics`
  - Look for `total_cache_hits` and `total_cache_misses` incrementing

- Watch the hit rate graph:
  - In Prometheus (localhost:9090) or Grafana (localhost:3000)
  - The graph will update as the script runs
  - With Prometheus scraping every 15 seconds, a 5-minute test provides 20 data points
Note: Running tests for at least 5 minutes is recommended to get sufficient data points (20+) for accurate hit rate calculations.
The benchmark results in this README were generated using:

Baseline accuracy:

```
./load_generator.sh -k 512 -w 20 -h 0.8 -d 300
```

Under eviction pressure:

```
./load_generator.sh -k 1100 -w 20 -h 0.8 -d 300
```

High concurrency:

```
./load_generator.sh -k 750 -w 50 -h 0.8 -d 300
```

Run the test suite:

```
go test -v
```

Run with race detection to verify thread safety:

```
go test -race
```
