Reproducible LLM inference benchmarks (prefill vs decode throughput) to inform requirements for an intermediate memory tier (HBF-class) between HBM and SSD.
Updated Feb 26, 2026 - Python
Interactive 2D/3D visualizer for llama.cpp benchmarks • Find optimal ngl / batch / ubatch parameters with dynamic filtering
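Both entries revolve around the same workflow: measuring prefill (prompt-processing) and decode (token-generation) throughput while sweeping llama-bench's GPU-offload and batching parameters (-ngl, -b, -ub). Below is a minimal sketch of such a sweep, assuming llama-bench is on PATH; the JSON field names used when parsing its output ("n_prompt", "n_gen", "avg_ts") are assumptions and may differ across llama.cpp versions.

```python
# Sketch: sweep llama-bench over ngl / batch / ubatch and record
# prefill vs decode throughput from its JSON output.
import itertools
import json
import subprocess

MODEL = "model.gguf"  # placeholder model path

results = []
for ngl, batch, ubatch in itertools.product([0, 16, 32], [512, 2048], [128, 512]):
    out = subprocess.run(
        ["llama-bench", "-m", MODEL,
         "-ngl", str(ngl), "-b", str(batch), "-ub", str(ubatch),
         "-p", "512", "-n", "128", "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    for row in json.loads(out.stdout):
        # A record with a nonzero prompt length is a prefill test,
        # otherwise it is a decode (text-generation) test.
        phase = "prefill" if row.get("n_prompt", 0) > 0 else "decode"
        results.append({
            "ngl": ngl, "batch": batch, "ubatch": ubatch,
            "phase": phase,
            "tokens_per_s": row.get("avg_ts"),  # assumed field name
        })

# Report the best-throughput configuration for each phase.
for phase in ("prefill", "decode"):
    best = max((r for r in results if r["phase"] == phase),
               key=lambda r: r["tokens_per_s"] or 0)
    print(phase, best)
```

Separating the two phases matters because prefill is typically compute-bound and decode memory-bandwidth-bound, so the parameter combination that maximizes one does not necessarily maximize the other.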