llama.cpp 1-Bit Turbo EAGLE — AMD ROCm Inference with 1-Bit Models, RotorQuant KV, and EAGLE3 Speculative Decoding

License: MIT

A fork of llama.cpp that brings high-performance inference to AMD consumer GPUs (RX 6000/7000 series) with 1-bit GGUF model support, RotorQuant KV cache compression, EAGLE3 speculative decoding, and PHANTOM-X ghost-draft speculation.


Core Features

📦 PrismML 1-Bit GGUF Model Support (Q1_0_G128)

Native Q1_0_G128 ternary quantization (-1, 0, +1) with 128-element groups:

  • HIP/CUDA/Metal GPU dequantization and dot-product kernels
  • CPU fallback dequantization and quantization functions
  • Enables GPU inference of PrismML Bonsai 1-bit GGUF models
  • GGUF type 41 remapped to GGML_TYPE_Q1_0_g128 for PrismML compatibility
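
For intuition, the sketch below shows what dequantizing one 128-element ternary group could look like. It is an illustrative C++ sketch only: the struct layout, 2-bit packing scheme, and per-group scale are assumptions made for exposition, not the actual Q1_0_G128 on-disk format.

// Hypothetical sketch of dequantizing one 128-element ternary group.
// Assumes 2 bits per weight (codes 0,1,2 mapped to -1,0,+1) and one
// float scale per group; the real Q1_0_G128 layout may differ.
#include <cstdint>

struct block_q1_0_g128_sketch {
    float   scale;        // per-group scale (assumed)
    uint8_t packed[32];   // 128 weights * 2 bits = 32 bytes (assumed packing)
};

static void dequantize_group_sketch(const block_q1_0_g128_sketch & b, float * dst) {
    for (int i = 0; i < 128; ++i) {
        const int byte  = i / 4;
        const int shift = (i % 4) * 2;
        const int code  = (b.packed[byte] >> shift) & 0x3;   // 0, 1 or 2
        dst[i] = (float)(code - 1) * b.scale;                 // -> -1, 0, +1 times scale
    }
}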

Benchmarks (AMD RX 6700 XT, 12GB VRAM):

Model          Prompt (t/s)    Generation (t/s)    VRAM
Bonsai-1.7B    2152            209                 ~0.5 GB
Bonsai-4B      867             122                 ~1.1 GB
Bonsai-8B      454             92                  ~2.2 GB

🌀 RotorQuant KV Cache Compression

Geometric-rotation-based KV cache quantization ported from the RotorQuant paper by Scrya (ICLR 2026). Registered as native GGML types with CPU, CUDA/HIP, and Flash Attention support:

GGML Type                    Method         Bits     Rotation
GGML_TYPE_PLANAR3_0 (44)     PlanarQuant    3-bit    2D Givens
GGML_TYPE_PLANAR4_0 (45)     PlanarQuant    4-bit    2D Givens
GGML_TYPE_ISO3_0 (46)        IsoQuant       3-bit    Hadamard/Isometric
GGML_TYPE_ISO4_0 (47)        IsoQuant       4-bit    Hadamard/Isometric
  • CPU quantize/dequantize, CUDA/HIP dispatch, Flash Attention V-dequant path
  • Data-oblivious (no calibration needed)
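
As a rough illustration of the rotate-then-quantize idea, the sketch below applies a 2D Givens rotation to pairs of values and then rounds them to a signed 3-bit grid. It is conceptual only; the actual PLANAR/ISO block layouts, rotation selection, and scale handling in this fork may differ.

// Conceptual sketch: rotate value pairs with a Givens rotation, then
// quantize to a signed 3-bit grid. Not the actual PLANAR3_0 format.
#include <algorithm>
#include <cmath>
#include <cstdint>

static void planar_quant_sketch(const float * v, int8_t * q, int n,
                                float theta, float scale) {
    const float c = std::cos(theta), s = std::sin(theta);
    for (int i = 0; i + 1 < n; i += 2) {
        // 2D Givens rotation of the pair (v[i], v[i+1])
        const float a =  c * v[i] + s * v[i + 1];
        const float b = -s * v[i] + c * v[i + 1];
        // round to a signed 3-bit grid [-4, 3]
        q[i]     = (int8_t)std::clamp((int)std::lround(a / scale), -4, 3);
        q[i + 1] = (int8_t)std::clamp((int)std::lround(b / scale), -4, 3);
    }
}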

🦅 EAGLE3 Speculative Decoding

Full integration of EAGLE3 (Lossless Acceleration of LLM Decoding by Feature Extrapolation) by Yuhui Li et al. / SafeAILab:

  • Hidden state extraction → FC encoder → 1-layer transformer decoder → autoregressive draft loop
  • GGUF converter for EAGLE3 safetensors (convert_hf_to_gguf.py)
  • Automatic weight tying (lm_head → token_embd) for EAGLE3 models
  • spec_harness reusable validation tool for any speculative draft model
  • Works with llama-speculative-simple — just pass -md <eagle3.gguf> alongside your target model
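
The pipeline above can be summarized with the sketch below. All names (token_t, fc_encode, draft_decoder_step) are placeholders for exposition and do not correspond to real symbols in this repository.

// Illustrative outline of an EAGLE3-style draft loop (placeholder names).
#include <cstdint>
#include <vector>

using token_t = int32_t;  // stands in for llama_token

// Placeholder stubs standing in for the real FC encoder / 1-layer decoder.
static std::vector<float> fc_encode(const std::vector<float> & h) { return h; }
static token_t draft_decoder_step(const std::vector<float> &, token_t t, float * p) {
    *p = 1.0f; return t + 1;  // dummy
}

static std::vector<token_t> eagle3_draft_sketch(const std::vector<float> & target_hidden,
                                                token_t last_token,
                                                int draft_max, float p_min) {
    std::vector<token_t> draft;
    const std::vector<float> state = fc_encode(target_hidden);  // FC encoder over target hidden states
    token_t tok = last_token;
    for (int i = 0; i < draft_max; ++i) {
        float p = 0.0f;
        tok = draft_decoder_step(state, tok, &p);                // 1-layer transformer decoder step
        if (p < p_min) break;                                    // stop drafting on low confidence
        draft.push_back(tok);                                    // accept token into the draft
    }
    return draft;                                                // verified later by the target model
}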

AMD ROCm Exclusive Features

👻 PHANTOM-X — Zero-Copy Ghost-Draft N-Gram Speculation

Single-file C++ implementation (common/phantom.h, 525 lines) of our novel speculative decoding algorithm:

  • Bloom filter negative bigram learning with FNV-1a hash and aging
  • N-gram corpus for context-aware draft generation
  • Pinned memory ring buffer for zero-copy CPU→GPU DMA
  • Adaptive ghost-pool scaling (1–8 workers)
  • Integrated into common/speculative.cpp
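
To illustrate the negative-bigram idea, here is a minimal Bloom filter over token bigrams hashed with FNV-1a: rejected pairs are remembered so they are not drafted again. It is illustrative only and does not reproduce the actual layout or aging scheme used in common/phantom.h.

// Minimal Bloom filter over token bigrams, hashed with FNV-1a (sketch).
#include <bitset>
#include <cstdint>

struct NegativeBigramFilter {
    std::bitset<1 << 16> bits;   // 64 Ki slots (size chosen arbitrarily here)

    static uint64_t fnv1a(uint64_t key) {
        uint64_t h = 1469598103934665603ull;          // FNV offset basis
        for (int i = 0; i < 8; ++i) {
            h ^= (key >> (i * 8)) & 0xff;
            h *= 1099511628211ull;                    // FNV prime
        }
        return h;
    }

    void mark_rejected(int32_t prev, int32_t next) {   // learn a bad bigram
        const uint64_t key = ((uint64_t)(uint32_t)prev << 32) | (uint32_t)next;
        const uint64_t h = fnv1a(key);
        bits.set(h & 0xffff);
        bits.set((h >> 16) & 0xffff);                  // two probes per entry
    }

    bool likely_rejected(int32_t prev, int32_t next) const {
        const uint64_t key = ((uint64_t)(uint32_t)prev << 32) | (uint32_t)next;
        const uint64_t h = fnv1a(key);
        return bits.test(h & 0xffff) && bits.test((h >> 16) & 0xffff);
    }
};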

🔧 Wave32 RDNA2 Kernel Optimizations

Custom 128-thread Wave32 paths for RDNA2 architecture:

  • RMSNorm — Wave32-optimized reduction
  • RoPE — Rotary position embedding for RDNA2 wavefront size
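
For reference, the kind of 32-lane butterfly reduction an RMSNorm kernel relies on might look like the HIP-style sketch below. It illustrates the idea only and is not the kernel shipped in this fork.

// HIP-style sketch of a sum reduction across one 32-lane RDNA2 wavefront,
// as used when accumulating sum(x*x) for RMSNorm. Illustrative only.
__device__ float wave32_reduce_sum(float v) {
    // butterfly exchange across lanes 0..31
    for (int offset = 16; offset > 0; offset >>= 1) {
        v += __shfl_xor(v, offset, 32);
    }
    return v;   // every lane now holds the full wavefront sum
}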

🔗 gfx1031 Compatibility

Full compatibility layer for gfx1031 (RX 6700 XT) via gfx1030 normalization:

  • HSA_OVERRIDE_GFX_VERSION=10.3.0 environment variable
  • GPU_TARGETS=gfx1030 build targeting
  • Null context guard for model loading failures (fail-fast on OOM)
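
The fail-fast guard boils down to checking the pointers returned by the loading API, roughly as in the sketch below. It is written against the public llama.cpp C API; the exact call sites in this fork may differ.

// Sketch of a fail-fast guard on model/context creation. If VRAM runs out on
// gfx1031, the loading calls return NULL, so abort instead of dereferencing later.
#include <cstdio>
#include "llama.h"

int load_or_die(const char * path) {
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_model_load_from_file(path, mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model (possibly out of VRAM)\n");
        return 1;
    }

    llama_context_params cparams = llama_context_default_params();
    llama_context * ctx = llama_init_from_model(model, cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to create llama_context\n");
        llama_model_free(model);
        return 1;
    }

    llama_free(ctx);
    llama_model_free(model);
    return 0;
}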

Branches

Branch                  What
master                  Main — Q1_0_G128 + EAGLE3 + RotorQuant KV + PHANTOM-X + Wave32 RDNA2
audio/lfm2.5-bringup    LFM2.5-Audio pipeline (experimental)

Quick Start (ROCm)

Prerequisites

  • AMD GPU with ROCm support (tested on RX 6700 XT / gfx1031)
  • ROCm 6.x
  • CMake 3.14+

Build

git clone https://github.com/carlosfundora/llama.cpp-1-bit-turbo.git
cd llama.cpp-1-bit-turbo

export HSA_OVERRIDE_GFX_VERSION=10.3.0    # Required for gfx1031 → gfx1030
export GPU_TARGETS=gfx1030

cmake -B build -DGGML_HIP=ON -DGPU_TARGETS=gfx1030 -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(nproc)

Serve a 1-Bit Model

./build/bin/llama-server \
  -m /path/to/Bonsai-1.7B-Q1_0.gguf \
  --host 0.0.0.0 --port 8080

EAGLE3 Speculative Decoding

./build/bin/llama-speculative-simple \
  -m /path/to/Bonsai-4B.gguf \
  -md /path/to/Bonsai-4B-EAGLE3-f16.gguf \
  -ngl 99 -ngld 99 \
  -p "The capital of France is" \
  -n 128

Upstream README

Quick start

Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:

Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.

Example command:

# Use a local model file
llama-cli -m my_model.gguf

# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF

Description

The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud.

  • Plain C/C++ implementation without any dependencies
  • Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
  • AVX, AVX2, AVX512 and AMX support for x86 architectures
  • RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
  • 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
  • Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
  • Vulkan and SYCL backend support
  • CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity

The llama.cpp project is the main playground for developing new features for the ggml library.

Models

Typically finetunes of the base models below are supported as well.

Instructions for adding support for new models: HOWTO-add-model.md

Text-only

Multimodal

Bindings
UIs

(to have a project listed here, it should clearly state that it depends on llama.cpp)

Tools
  • akx/ggify – download PyTorch models from Hugging Face Hub and convert them to GGML
  • akx/ollama-dl – download models from the Ollama library to be used directly with llama.cpp
  • crashr/gppm – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
  • gpustack/gguf-parser - review/check the GGUF file and estimate the memory usage
  • Styled Lines (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
  • unslothai/unsloth – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)
Infrastructure
  • Paddler - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
  • GPUStack - Manage GPU clusters for running LLMs
  • llama_cpp_canister - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
  • llama-swap - transparent proxy that adds automatic model switching with llama-server
  • Kalavai - Crowdsource end to end LLM deployment at any scale
  • llmaz - ☸️ Easy, advanced inference platform for large language models on Kubernetes.
  • LLMKube - Kubernetes operator for llama.cpp with multi-GPU and Apple Silicon Metal support
Games
  • Lucy's Labyrinth - A simple maze game where agents controlled by an AI model will try to trick you.

Supported backends

Backend                   Target devices
Metal                     Apple Silicon
BLAS                      All
BLIS                      All
SYCL                      Intel and Nvidia GPU
OpenVINO [In Progress]    Intel CPUs, GPUs, and NPUs
MUSA                      Moore Threads GPU
CUDA                      Nvidia GPU
HIP                       AMD GPU
ZenDNN                    AMD CPU
Vulkan                    GPU
CANN                      Ascend NPU
OpenCL                    Adreno GPU
IBM zDNN                  IBM Z & LinuxONE
WebGPU [In Progress]      All
RPC                       All
Hexagon [In Progress]     Snapdragon
VirtGPU                   VirtGPU APIR

Obtaining and quantizing models

The Hugging Face platform hosts a number of LLMs compatible with llama.cpp:

You can either manually download the GGUF file or directly use any llama.cpp-compatible model from Hugging Face or another model hosting site with the CLI argument -hf <user>/<model>[:quant]. For example:

llama-cli -hf ggml-org/gemma-3-1b-it-GGUF

By default, the CLI downloads from Hugging Face; you can switch to other sources with the MODEL_ENDPOINT environment variable, which must point to a Hugging Face-compatible API endpoint.

After downloading a model, use the CLI tools to run it locally - see below.

llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.

The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp:

To learn more about model quantization, read this documentation

llama-cli

A CLI tool for accessing and experimenting with most of llama.cpp's functionality.

  • Run in conversation mode

    Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding -cnv and specifying a suitable chat template with --chat-template NAME

    llama-cli -m model.gguf
    
    # > hi, who are you?
    # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
    #
    # > what is 1+1?
    # Easy peasy! The answer to 1+1 is... 2!
  • Run in conversation mode with custom chat template
    # use the "chatml" template (use -h to see the list of supported templates)
    llama-cli -m model.gguf -cnv --chat-template chatml
    
    # use a custom template
    llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
  • Constrain the output with a custom grammar
    llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'
    
    # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}

    The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.

    For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/

llama-server

A lightweight, OpenAI API-compatible HTTP server for serving LLMs.

  • Start a local HTTP server with default configuration on port 8080
    llama-server -m model.gguf --port 8080
    
    # Basic web UI can be accessed via browser: http://localhost:8080
    # Chat completion endpoint: http://localhost:8080/v1/chat/completions
  • Support multiple users and parallel decoding
    # up to 4 concurrent requests, each with 4096 max context
    llama-server -m model.gguf -c 16384 -np 4
  • Enable speculative decoding
    # the draft.gguf model should be a small variant of the target model.gguf
    llama-server -m model.gguf -md draft.gguf
  • Serve an embedding model
    # use the /embedding endpoint
    llama-server -m model.gguf --embedding --pooling cls -ub 8192
  • Serve a reranking model
    # use the /reranking endpoint
    llama-server -m model.gguf --reranking
  • Constrain all outputs with a grammar
    # custom grammar
    llama-server -m model.gguf --grammar-file grammar.gbnf
    
    # JSON
    llama-server -m model.gguf --grammar-file grammars/json.gbnf

llama-perplexity

A tool for measuring the perplexity [1] (and other quality metrics) of a model over a given text.

  • Measure the perplexity over a text file
    llama-perplexity -m model.gguf -f file.txt
    
    # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
    # Final estimate: PPL = 5.4007 +/- 0.67339
  • Measure KL divergence
    # TODO

llama-bench

Benchmark the performance of the inference for various parameters.

  • Run default benchmark
    llama-bench -m model.gguf
    
    # Output:
    # | model               |       size |     params | backend    | threads |          test |                  t/s |
    # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         pp512 |      5765.41 ± 20.55 |
    # | qwen2 1.5B Q4_0     | 885.97 MiB |     1.54 B | Metal,BLAS |      16 |         tg128 |        197.71 ± 0.81 |
    #
    # build: 3e0ba0e60 (4229)

llama-simple

A minimal example for implementing apps with llama.cpp. Useful for developers.

  • Basic text completion
    llama-simple -m model.gguf
    
    # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of

Contributing

  • Contributors can open PRs
  • Collaborators will be invited based on contributions
  • Maintainers can push to branches in the llama.cpp repo and merge PRs into the master branch
  • Any help with managing issues, PRs and projects is very appreciated!
  • See good first issues for tasks suitable for first contributions
  • Read the CONTRIBUTING.md for more information
  • Make sure to read this: Inference at the edge
  • A bit of backstory for those who are interested: Changelog podcast

Other documentation

Development documentation

Seminal papers and background on the models

If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:

XCFramework

The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:

// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyLlamaPackage",
    targets: [
        .executableTarget(
            name: "MyLlamaPackage",
            dependencies: [
                "LlamaFramework"
            ]),
        .binaryTarget(
            name: "LlamaFramework",
            url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
            checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
        )
    ]
)

The above example uses an intermediate build (b5046) of the library. It can be modified to use a different version by changing the URL and checksum.

Completions

Command-line completion is available for some environments.

Bash Completion

$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash

Optionally this can be added to your .bashrc or .bash_profile to load it automatically. For example:

$ echo "source ~/.llama-completion.bash" >> ~/.bashrc

Dependencies

  • yhirose/cpp-httplib - Single-header HTTP server, used by llama-server - MIT license
  • stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
  • nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
  • miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain
  • subprocess.h - Single-header process launching solution for C and C++ - Public domain

1-Bit Turbo EAGLE — CLI Reference

EAGLE3 Speculative Decoding

Run EAGLE3 speculative decoding with llama-speculative-simple:

# Basic EAGLE3 speculative decoding
./build/bin/llama-speculative-simple \
  -m /path/to/Bonsai-4B.gguf \
  -md /path/to/Bonsai-4B-EAGLE3-f16.gguf \
  -ngl 99 -ngld 99 \
  -p "The capital of France is" \
  -n 128

# With FlashAttention enabled
./build/bin/llama-speculative-simple \
  -m /path/to/Bonsai-4B.gguf \
  -md /path/to/Bonsai-4B-EAGLE3-f16.gguf \
  -ngl 99 -ngld 99 --flash-attn \
  -p "Write a Python function to compute fibonacci numbers" \
  -n 256
Flag                   Description
-m <path>              Target model (Q1_0_G128 GGUF)
-md <path>             EAGLE3 draft model (F16 or quantized GGUF)
-ngl <n>               GPU layers for target model (99 = all)
-ngld <n>              GPU layers for draft model (99 = all)
--flash-attn / -fa     Enable FlashAttention
-n <n>                 Number of tokens to generate
-p <text>              Prompt text
--draft-max <n>        Max draft tokens per round (default: 5)
--draft-p-min <f>      Min confidence to continue drafting (default: 0.5)

Converting EAGLE3 Models to GGUF

# Convert EAGLE3 safetensors → GGUF (auto-detects EAGLE3 architecture)
python convert_hf_to_gguf.py /path/to/Bonsai-4B-EAGLE3/ --outtype f16

# Weight tying: lm_head is automatically skipped, runtime uses token_embd

Speculative Harness (spec_harness) — Validation Tool

Reusable tool for validating any speculative draft model against its target:

# Build
cmake --build build --target llama-spec-harness

# Capture target hidden states + tokens
./build/bin/llama-spec-harness \
  -m /path/to/Bonsai-4B.gguf -ngl 99 \
  -p "The capital of France is" -n 50 \
  -o /tmp/spec_capture.bin

# Validate with Python reference decoder
python scripts/spec_harness.py validate \
  --capture /tmp/spec_capture.bin \
  --eagle3-model /path/to/Bonsai-4B-EAGLE3/ \
  --report

Q1_0_G128 Benchmarks (AMD RX 6700 XT, gfx1030, 12GB)

Model           pp (t/s)    tg (t/s)    VRAM
Bonsai-1.7B     2152        209         ~0.5 GB
Bonsai-4B       867         122         ~1.1 GB
Bonsai-8B       454         92          ~2.2 GB

Environment

export HSA_OVERRIDE_GFX_VERSION=10.3.0   # Required for gfx1031 → gfx1030
export GPU_TARGETS=gfx1030               # Build target

# Build
cmake -B build -DGGML_HIP=ON -DGPU_TARGETS=gfx1030 -DCMAKE_BUILD_TYPE=Release
cmake --build build -j$(nproc)

Footnotes

  1. https://huggingface.co/docs/transformers/perplexity

About

HIP/ROCm fork optimized for AMD RDNA2 (gfx1030) with PrismML Q1_0_G128 1-bit quant support, RotorQuant, TurboQuant, EAGLE3 and P-EAGLE speculative decoding, and full Wave32 kernel optimizations.
