
[cuda] initial Q1_0 backend #18

Draft
khosravipasha wants to merge 3 commits into master from q1-cuda

Conversation

@khosravipasha
Collaborator

DRAFT for running CIs etc.; the main PR will go to upstream llama.cpp.

@khosravipasha
Collaborator Author

  Benchmark Results — RTX 5090, Q1_0 CUDA, Flash Attention

  Speed (llama-bench -fa 1)

  ┌─────────────┬──────────┬─────────────┬─────────────┐
  │    Model    │   Size   │ pp512 (t/s) │ tg128 (t/s) │
  ├─────────────┼──────────┼─────────────┼─────────────┤
  │ Bonsai-1.7B │ 231 MiB  │ 29,815      │ 627         │
  ├─────────────┼──────────┼─────────────┼─────────────┤
  │ Bonsai-4B   │ 540 MiB  │ 18,585      │ 486         │
  ├─────────────┼──────────┼─────────────┼─────────────┤
  │ Bonsai-8B   │ 1.07 GiB │ 12,238      │ 373         │
  └─────────────┴──────────┴─────────────┴─────────────┘

  KL Validation (Q1_0 vs dequantized F16, WikiText-2, 20 chunks)

  ┌─────────────┬──────────┬────────────┬────────┬────────┐
  │    Model    │ Mean KLD │ Same Top p │ RMS Δp │ Status │
  ├─────────────┼──────────┼────────────┼────────┼────────┤
  │ Bonsai-1.7B │ 0.000419 │ 98.94%     │ 0.555% │ Pass   │
  ├─────────────┼──────────┼────────────┼────────┼────────┤
  │ Bonsai-4B   │ 0.000429 │ 98.51%     │ 0.593% │ Pass   │
  ├─────────────┼──────────┼────────────┼────────┼────────┤
  │ Bonsai-8B   │ 0.000514 │ 98.71%     │ 0.635% │ Pass   │
  └─────────────┴──────────┴────────────┴────────┴────────┘

Copilot AI left a comment

Pull request overview

Adds initial CUDA backend support for the GGML_TYPE_Q1_0 quantization format, wiring it into CUDA dequantization/conversion paths and the quantized matmul kernels (MMVQ/MMQ).

Changes:

  • Implemented Q1_0×Q8_1 dot-product support and hooked it into mul_mat_vec_q (MMVQ).
  • Added MMQ (mul_mat_q) support for Q1_0, including MMA-tile loading and template instantiation generation.
  • Enabled Q1_0 for CUDA get-rows and type conversion/dequantization helpers.

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.

Summary per file:

  • ggml/src/ggml-cuda/vecdotq.cuh: Adds Q1_0 dot-product helpers (Q1_0 × Q8_1) and VDR constants.
  • ggml/src/ggml-cuda/template-instances/mmq-instance-q1_0.cu: Adds generated MMQ instantiation unit for Q1_0.
  • ggml/src/ggml-cuda/template-instances/generate_cu_files.py: Ensures MMQ instantiation files are generated for Q1_0.
  • ggml/src/ggml-cuda/mmvq.cu: Routes Q1_0 through MMVQ vector-dot and VDR selection, and adds type dispatch.
  • ggml/src/ggml-cuda/mmq.cuh: Adds Q1_0 MMQ traits + tile loader and ds-layout selection.
  • ggml/src/ggml-cuda/mmq.cu: Adds Q1_0 MMQ type dispatch and selection-logic gating.
  • ggml/src/ggml-cuda/ggml-cuda.cu: Advertises Q1_0 support in CUDA backend op capability checks.
  • ggml/src/ggml-cuda/getrows.cu: Enables get_rows for Q1_0 via the dequantize kernel.
  • ggml/src/ggml-cuda/dequantize.cuh: Adds device-side dequantization routine for Q1_0.
  • ggml/src/ggml-cuda/convert.cu: Enables Q1_0 conversions to FP16/FP32 (contiguous and non-contiguous).
  • ggml/src/ggml-cuda/common.cuh: Adds CUDA type traits for Q1_0 (qk/qr/qi).


@khosravipasha
Collaborator Author

khosravipasha commented Apr 8, 2026

  ┌───────┬───────────┬───────────┬───────────┬───────────┐
  │ Model │ Old pp512 │ New pp512 │ Old tg128 │ New tg128 │
  ├───────┼───────────┼───────────┼───────────┼───────────┤
  │ 1.7B  │ 29,815    │ 29,250    │ 627       │ 626       │
  ├───────┼───────────┼───────────┼───────────┼───────────┤
  │ 4B    │ 18,585    │ 18,621    │ 486       │ 485       │
  ├───────┼───────────┼───────────┼───────────┼───────────┤
  │ 8B    │ 12,238    │ 12,287    │ 373       │ 374       │
  └───────┴───────────┴───────────┴───────────┴───────────┘
  
  
  ┌───────┬──────────┬────────────┬────────┐
  │ Model │ Mean KLD │ Same Top p │ RMS Δp │
  ├───────┼──────────┼────────────┼────────┤
  │ 1.7B  │ 0.000419 │ 98.94%     │ 0.555% │
  ├───────┼──────────┼────────────┼────────┤
  │ 4B    │ 0.000429 │ 98.51%     │ 0.593% │
  ├───────┼──────────┼────────────┼────────┤
  │ 8B    │ 0.000514 │ 98.71%     │ 0.635% │
  └───────┴──────────┴────────────┴────────┘
  

khosravipasha pushed a commit that referenced this pull request Apr 10, 2026

* ggml: backend-agnostic tensor parallelism

* support for GPT-OSS, Qwen 3 MoE

* partial Vulkan fix

* add support for 4/8 GPUs

* unconditional peer access

* re-use buffers + ggml contexts

* fix output pattern

* NCCL support

* GGML: HIP: add RCCL support

* Remove shfl and AllReduce from backend interface

* move allocation workaround out of ggml-alloc.c

* 2d tensor set/get support

* Fix the seg fault without NCCL

* Apply suggestion from JohannesGaessler

* support for tensor dims % n_devs != 0

* fix view_offs scaling

* arbitrary num. of GPUs/tensor split

* fix compilation

* better granularity estimate

* Support device-specific host buffer types if all underlying backends expose the same type. This allows using pinned memory instead of pageable memory for CUDA.

Fix compilation errors.

* partial Qwen 3 Next support

* Fix qwen3 30b (#8)

* Fix crash with Qwen-30B-A3B Q4_0

Qwen-30B-A3B Q4_0 has an intermediate dimension of 768. Using a granularity of 256 forces an uneven split between GPUs, which is not supported by the current implementation.

* Decide block size based on tensor quantization type

* Fix crashes due to KV cache serialization (#9)

KV cache serialization requires non-zero offsets on the tensor. Add support in the meta backend to set/get a tensor with a non-zero offset.

* metal : fix build (#7)

* static memory allocations, fix usage count

* fix tensor granularity

* more even memory distribution

* use BF16 for allreduce

* rebase fixup

* better error message for unsupported architectures

* Fix device mismatch during scatter of allReduce. (#11)

There is a mismatch between the dst buffer device and the backend device, causing the use of sync copies

* Enable the previous allreduce implementation. It is better in both perf and stability (#12)

* delay AllReduce for Moe for less I/O

* build : clean-up compile warnings

* backend : move most of the meta backend API to ggml-backend-impl.h

* cont : hide unused public API in the implementation

* llama : use llama_device + remove ggml_backend_dev_is_meta()

* ggml-backend : remove unused alloc include

* minor : remove regex include

* ggml : introduce ggml-ext.h for staging new APIs

* rebase fixup

* fix tests

* llama : more robust logic for determining Meta devices (#16)

* llama : more robust logic for determining Meta devices

* cont : fix devs size check

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* cont : fix log type

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* disable roundtrip for meta backend

* fix arch selection

* Qwen 3.5 support

* fix Gemma 4 MoE

* fix OpenVino, SYCL

* fix test-llama-archs for CPU-only builds

* Fix Qwen 3.5 MoE

* disable meta backend tests for WebGPU

* tests : filter CPU-based devices from the Meta backend tests (#17)

* meta : formatting, naming, indentation (#18)

* formatting : llama-model.cpp

* formatting : ggml-ext.h

* formatting : ggml-backend-meta.cpp

* meta : add TODO

* add documentation

* better error messages

* fix GPT-OSS

---------

Co-authored-by: Carl Philipp Klemm <carl@uvos.xyz>
Co-authored-by: Gaurav Garg <gaugarg@nvidia.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
