A concurrent hashmap in Go that uses only atomic operations. No mutexes, no RWLocks.
Wanted to see if I could beat sync.Map and mutex-based maps under heavy contention. Turns out you can, especially for write-heavy workloads where locks become a bottleneck.
Ran with 1000 readers + 500 writers for 30 seconds:
| Map | Total ops/s | Notes |
|---|---|---|
| This one | 6.16M | Balanced reads (3M) and writes (3.2M) |
| Sharded (256) | 4.57M | Fast reads but writes tank to 94K/s |
| sync.Map | 4.32M | Decent all-around |
| RWMutex | 4.04M | Reads fine, writes drop to 20K/s |
| Simple Mutex | 2.88M | Not great under contention |
The interesting part is latency. Mutex-based maps spike to 100-200ms P99 for writes under load. This stays under 1µs.
Each bucket is a linked list. Insertions use CompareAndSwap to atomically prepend items. Reads just walk the chain with atomic loads.
```
Impl[T]
├── active -> Map with N buckets
│     └── buckets[i] -> sentinel -> item -> item -> ...
├── secondary -> (used during resize)
└── transferProgress -> (resize coordination)
```
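The CAS prepend and lock-free read can be sketched like this. This is a minimal standalone version: `item` and `bucket` are illustrative names, not the repo's actual types, and the sentinel node is omitted.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// item is one node in a bucket's chain.
type item struct {
	key, value string
	next       atomic.Pointer[item]
}

// bucket holds an atomic pointer to the head of its chain.
type bucket struct {
	head atomic.Pointer[item]
}

// put prepends a new item with CompareAndSwap, retrying if another
// writer moved the head first. No locks are taken at any point.
func (b *bucket) put(k, v string) {
	n := &item{key: k, value: v}
	for {
		old := b.head.Load()
		n.next.Store(old)
		if b.head.CompareAndSwap(old, n) {
			return
		}
	}
}

// get walks the chain using only atomic loads; it can run
// concurrently with writers and never blocks.
func (b *bucket) get(k string) (string, bool) {
	for it := b.head.Load(); it != nil; it = it.next.Load() {
		if it.key == k {
			return it.value, true
		}
	}
	return "", false
}

func main() {
	var b bucket
	b.put("foo", "bar")
	v, ok := b.get("foo")
	fmt.Println(v, ok) // bar true
}
```

Because prepends put the newest node first, a reader's first match is always the most recent write for that key.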
When a bucket gets too long (>4 items), the map resizes:
- Create new map with 8x more buckets
- New writes go to both old and new
- Copy everything over
- Atomic swap
Reads keep working the whole time.
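Here is a single-threaded sketch of those four steps. The names (`table`, `hmap`, `secondary`) are assumptions for illustration; a real implementation also has to resolve ordering between the copy and concurrent dual-writes, which is what the `transferProgress` coordination is for.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync/atomic"
)

type node struct {
	key, val string
	next     atomic.Pointer[node]
}

type table struct{ buckets []atomic.Pointer[node] }

func newTable(n int) *table { return &table{buckets: make([]atomic.Pointer[node], n)} }

// slot picks the chain head for a key within a given table.
func slot(t *table, k string) *atomic.Pointer[node] {
	h := fnv.New32a()
	h.Write([]byte(k))
	return &t.buckets[h.Sum32()%uint32(len(t.buckets))]
}

// insert CAS-prepends into one table, retrying on contention.
func insert(t *table, k, v string) {
	b := slot(t, k)
	for {
		old := b.Load()
		n := &node{key: k, val: v}
		n.next.Store(old)
		if b.CompareAndSwap(old, n) {
			return
		}
	}
}

type hmap struct {
	active, secondary atomic.Pointer[table]
}

// put mirrors writes into the secondary table while a resize is in
// flight, so entries written mid-copy are not lost.
func (m *hmap) put(k, v string) {
	insert(m.active.Load(), k, v)
	if sec := m.secondary.Load(); sec != nil {
		insert(sec, k, v)
	}
}

// get only ever reads whatever table is active, so it keeps working
// throughout a resize.
func (m *hmap) get(k string) (string, bool) {
	for n := slot(m.active.Load(), k).Load(); n != nil; n = n.next.Load() {
		if n.key == k {
			return n.val, true
		}
	}
	return "", false
}

// resize follows the four steps: allocate 8x, dual-write, copy, swap.
func (m *hmap) resize() {
	old := m.active.Load()
	next := newTable(len(old.buckets) * 8)
	m.secondary.Store(next) // step 2: new writes go to both tables
	for i := range old.buckets {
		for n := old.buckets[i].Load(); n != nil; n = n.next.Load() {
			insert(next, n.key, n.val) // step 3: copy everything over
		}
	}
	m.active.Store(next) // step 4: atomic swap; readers move over
	m.secondary.Store(nil)
}

func main() {
	m := &hmap{}
	m.active.Store(newTable(8))
	m.put("foo", "bar")
	m.resize()
	v, ok := m.get("foo")
	fmt.Println(v, ok) // bar true
}
```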
```go
const (
	initialBuckets   = 8 // start small
	bucketGrowFactor = 8 // 8 -> 64 -> 512 -> 4K -> 32K -> ...
	maxChainLength   = 4 // resize when chain exceeds this
)
```

Tune these based on your expected load. If you know you'll have 1M keys, maybe start with more buckets.
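One way to size that: walk the same 8x growth series until the expected key count fits under the chain limit. `bucketsFor` is a hypothetical helper for back-of-the-envelope sizing, not part of the map's API.

```go
package main

import "fmt"

// bucketsFor walks the 8x growth series (8, 64, 512, ...) until the
// expected key count fits within maxChainLength (4) items per bucket.
func bucketsFor(keys int) int {
	n := 8 // initialBuckets
	for n*4 < keys { // n * maxChainLength
		n *= 8 // bucketGrowFactor
	}
	return n
}

func main() {
	fmt.Println(bucketsFor(1_000_000)) // 262144
}
```

Starting at that size would skip five resizes on the way to 1M keys.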
```go
import "custommap/map/service"

m := service.NewImpl[string]()
m.Put("foo", "bar")
val, err := m.Get("foo")
if err != nil {
	// not found
}
```

```sh
# correctness test with race detector
go run -race scripts/verify.go

# benchmarks
go run scripts/tester.go
```

- More atomic ops per operation than a simple mutex
- Allocates a new Item struct on every write
- Resize logic is tricky to get right (ask me how I know)
But: no deadlocks, no priority inversion, consistent latency under load.
MIT