docs: add benchmark max tokens column to inference benchmarks table
Clarify the actual LLM_MAX_TOKENS used during benchmarking for each
provider, especially vLLM (2048 due to shared input+output context)
and Ollama (8192). Add notes on vllm-metal requirement for macOS and
qwen3:4b-instruct tag for non-thinking mode.
README.md (+11 −8)
```diff
@@ -366,18 +366,21 @@ The app defaults to dark mode. Click the theme toggle in the header to switch to
 The table below compares inference performance across different providers, deployment modes, and hardware profiles using a standardized code-translation workload (averaged over 3 runs).
-> - All benchmarks use the same CodeTrans translation prompt. Token counts may vary slightly per run due to non-deterministic model output.
-> - Ollama on Apple Silicon uses Metal (MPS) GPU acceleration — running it inside Docker would fall back to CPU-only inference.
+> - **Benchmark Max Tokens** = `LLM_MAX_TOKENS` setting used during benchmarking (max output tokens per request).
+> - \* vLLM was served with `--max-model-len 4096`, which is shared between input and output. `LLM_MAX_TOKENS` was set to 2,048 to leave room for input tokens within the 4,096 total context.
+> - All benchmarks use the same CodeTrans translation prompt and identical inputs (3 runs: small python→java, medium python→rust, large python→go). Token counts may vary slightly per run due to non-deterministic model output.
+> - Ollama on Apple Silicon uses Metal (MPS) GPU acceleration — running it inside Docker would fall back to CPU-only inference. The `qwen3:4b-instruct` tag must be used (not `qwen3:4b`) to disable the default thinking mode.
+> - vLLM on Apple Silicon uses [vllm-metal](https://github.com/vllm-project/vllm-metal) — the standard `pip install vllm` does not support macOS.
 > - [Intel OPEA Enterprise Inference](https://github.com/opea-project/Enterprise-Inference) runs on Intel Xeon CPUs without GPU acceleration.
```
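The vLLM note above hinges on one detail: `--max-model-len` bounds input *and* output tokens combined, so the safe output cap depends on prompt size. A minimal sketch of that budget arithmetic (the helper name `max_output_tokens` is hypothetical, not part of vLLM's API):

```python
def max_output_tokens(max_model_len: int, input_tokens: int) -> int:
    """Output-token budget left under vLLM's shared context window.

    vLLM's --max-model-len caps input + output combined, so the room
    for generated tokens shrinks as the prompt grows.
    """
    return max(0, max_model_len - input_tokens)


# With --max-model-len 4096, setting LLM_MAX_TOKENS to 2,048 guarantees
# that any prompt of up to 2,048 tokens still fits in the window.
print(max_output_tokens(4096, 1500))  # 2596 tokens remain for output
print(max_output_tokens(4096, 2048))  # 2048 — exactly the benchmark cap
```

This is why the benchmark uses 2,048 rather than the full 4,096: a `max_tokens` request equal to the whole window would be rejected (or truncated) as soon as the prompt consumed any of it.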