x86: implement AVX2 kernel for ggml_vec_dot_q1_0_g128_q8_0#11
SimesD61 wants to merge 1 commit into PrismML-Eng:prism
Conversation
The x86 implementation was a stub that called the scalar generic fallback. The ARM NEON kernel was already fully vectorized. This implements the same algorithm using AVX2 intrinsics.

Key techniques:
- `vpshufb` (`mm_shuffle_epi8`) to broadcast each 4-byte sub-block to 32 lanes
- AND+cmpeq to decode 1-bit weights to sign bytes (+1/-1)
- `maddubs_epi16` + `madd_epi16` for INT8 dot product reduction
- 4 independent FMA accumulators to hide the 5-cycle FMA latency

Performance on Intel i7-8700B (no AVX-512):
- Before: ~0.04 tok/s (scalar fallback, 67x slower than ARM CPU)
- After: ~8 tok/s (AVX2, matches compute-bound ceiling for Q1_0_g128)
- ~200x speedup over the stub

Falls back to the generic implementation on non-AVX2 targets.
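The `vpshufb` bit-expansion step described above can be modeled in scalar code. This is a hedged sketch, not the PR's actual code — `expand_bits_to_signs`, `w`, and `sign` are illustrative names: byte `i/8` of the 32-bit sub-block is broadcast to lane `i` (the shuffle), then bit `i%8` is tested with a per-lane mask and decoded to +1/-1 (the AND+cmpeq+OR sequence).

```c
#include <stdint.h>

// Scalar model of the AVX2 bit expansion: 32 one-bit weights packed in a
// 32-bit sub-block `w` become 32 sign bytes (+1/-1) in `sign`.
static void expand_bits_to_signs(uint32_t w, int8_t sign[32]) {
    for (int i = 0; i < 32; ++i) {
        const uint8_t byte = (uint8_t)(w >> (8 * (i / 8))); // vpshufb broadcast of byte i/8
        const uint8_t mask = (uint8_t)(1u << (i % 8));      // per-lane bit mask
        // AND+cmpeq+OR decode: bit set -> +1, bit clear -> -1
        sign[i] = (byte & mask) ? +1 : -1;
    }
}
```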
Pull request overview
This PR replaces the x86 stub implementation of ggml_vec_dot_q1_0_g128_q8_0 with a real AVX2-optimized kernel, bringing x86 performance in line with the existing ARM NEON vectorized path.
Changes:
- Implement an AVX2 version of `ggml_vec_dot_q1_0_g128_q8_0` using shuffle-based bit expansion and INT8 dot-product primitives.
- Use multiple independent accumulators and reduce them at the end for improved throughput.
- Keep a generic fallback for non-AVX2 builds.
```c
GGML_ASSERT(n % QK1_0_g128 == 0);
GGML_ASSERT(nrc == 1);
```
This function uses GGML_ASSERT for argument checks, but the other vec_dot kernels in this file consistently use assert() (e.g., ggml_vec_dot_q4_0_q8_0 just below). For consistency and to avoid changing behavior in release builds (GGML_ASSERT may not be compiled out like assert), consider switching these to assert() like the rest of the file or documenting why GGML_ASSERT is required here.
```diff
-GGML_ASSERT(n % QK1_0_g128 == 0);
-GGML_ASSERT(nrc == 1);
+assert(n % QK1_0_g128 == 0);
+assert(nrc == 1);
```
```c
#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)
```
The DOT_SUB macro uses _mm256_fmadd_ps but the implementation is only guarded by AVX2. AVX2 does not necessarily imply FMA is enabled at compile time (GGML_FMA is a separate CMake option), so this can fail to compile when building with -mavx2 but without -mfma. Consider either guarding this path with FMA/GGML_FMA or replacing the FMA with separate multiply + add so it builds correctly under AVX2-only.
```diff
-#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)
+#if defined(__FMA__)
+#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)
+#else
+#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_add_ps(_mm256_mul_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d))), (acc)); } while (0)
+#endif
```
```c
#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)

    DOT_SUB(bv0, &y[i*4+0], sumf0);
    DOT_SUB(bv1, &y[i*4+1], sumf1);
    DOT_SUB(bv2, &y[i*4+2], sumf2);
    DOT_SUB(bv3, &y[i*4+3], sumf3);
```
DOT_SUB is defined as a very large single-line macro inside the loop body, which makes the kernel hard to read, debug, and maintain (and increases the risk of subtle macro issues if the arguments ever change). Consider replacing it with a small static inline helper function (or at least a multi-line macro defined outside the loop) to improve maintainability without affecting performance.
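As a hedged sketch of the suggested direction, here is a scalar model of what one `DOT_SUB` invocation computes, written as the kind of static inline helper the review proposes. The names (`dot_sub_scalar`, `bit`, `qs`) are illustrative, not the PR's code, and the real helper would use the AVX2 intrinsics from the macro rather than a scalar loop:

```c
#include <stdint.h>

// Scalar model of one DOT_SUB: decode each per-lane weight bit to +1/-1
// (the cmpeq/OR sequence in the macro), take the int8 dot product with
// the activations, and accumulate the scaled result into *acc.
static inline void dot_sub_scalar(const uint8_t bit[32], const int8_t qs[32],
                                  float scale, float *acc) {
    int32_t isum = 0;
    for (int i = 0; i < 32; ++i) {
        const int sgn = bit[i] ? +1 : -1; // models cmpeq(AND(bv,mask),0) | 0x01
        isum += sgn * qs[i];              // models maddubs/madd reduction
    }
    *acc += scale * (float)isum;          // models the fmadd accumulation
}
```

An inline function like this type-checks its arguments and shows up cleanly in debuggers, which is the maintainability win the comment is after; with a fixed trip count the compiler inlines it just as a macro would.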
Just tested the AVX2 impl on my i5 box.
Good news: our first CPU PR just got merged into the llama.cpp master branch. If you are still working on this, please rebase onto PrismML's master (it just pulled the main llama.cpp).

Changes: the Q1_0_g128 naming is gone now. The original Q1_0 with group size 32 was deleted, and Q1_0_g128 was renamed to Q1_0, which now has group size 128 by default.

https://github.com/PrismML-Eng/llama.cpp/tree/master

That tree only has the generic CPU path (slow) and the ARM NEON path; I'm planning to gather the best x86 kernels from here and send a PR there (and tag all the contributors).
There are a lot of CPU PRs; I'm planning to gather them all into one and then send it to the main llama.cpp.
Problem
The x86 implementation of `ggml_vec_dot_q1_0_g128_q8_0` in `ggml/src/ggml-cpu/arch/x86/quants.c` was a stub that immediately fell through to the scalar generic fallback. The ARM NEON implementation was already fully vectorized. On x86 this meant Bonsai 8B ran at ~0.04 tok/s — 67× slower than the ARM CPU path.
Solution
Full AVX2 implementation using the same algorithm as the NEON kernel:
- `vpshufb` bit expansion: each 32-bit sub-block is broadcast to 32 bytes via `_mm_shuffle_epi8`, then AND+cmpeq decodes 1-bit weights to sign bytes (+1/-1)
- `maddubs_epi16` + `madd_epi16` for efficient 8-bit multiply-accumulate
- (`block_q1_0_g128` layout)

Performance (Intel i7-8700B, AVX2, no AVX-512)
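The `maddubs_epi16` step works because `_mm256_maddubs_epi16` takes an unsigned first operand: the kernel feeds it `|y|` (via `_mm256_abs_epi8`) and transfers y's sign onto the decoded weight (via `_mm256_sign_epi8`), using the identity `s*y == |y| * sign_transfer(s, y)`. A scalar check of that identity — a hedged sketch with illustrative names, not the PR's code:

```c
#include <stdint.h>
#include <stdlib.h>

// Models _mm256_sign_epi8: flip s by y's sign, zero it when y == 0.
static int sign_transfer(int s, int y) {
    return y > 0 ? s : (y < 0 ? -s : 0);
}

// Dot product of +1/-1 weights s with int8 activations y, computed the
// way the AVX2 kernel does: unsigned |y| times sign-adjusted s.
static int dot_via_maddubs_trick(const int8_t *s, const int8_t *y, int n) {
    int sum = 0;
    for (int i = 0; i < n; ++i)
        sum += abs(y[i]) * sign_transfer(s[i], y[i]); // == s[i] * y[i]
    return sum;
}
```

The y == 0 case is safe because `|y| * anything` is 0 there anyway, so the zeroing in `_mm256_sign_epi8` never changes the result.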
The 8 tok/s result is at the compute-bound ceiling for Q1_0_g128 on this CPU — Q1_0_g128 is ~4x more compute-intensive per byte than Q4_0, so further gains would require AVX-512 or a fundamentally different algorithm.