# low-precision

Here are 12 public repositories matching this topic...

Implemented post-training quantisation (PTQ) on transformer-based reasoning models using 8-bit and 4-bit weight quantisation (INT8, INT4) with frameworks like PyTorch and Hugging Face Transformers. Leveraged libraries such as bitsandbytes to reduce model size and accelerate inference, while evaluating performance degradation on reasoning tasks.

  • Updated Apr 21, 2026
  • Jupyter Notebook
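The core idea behind the 8-bit weight quantisation this repo describes can be sketched in a few lines. This is not the repo's actual code, just an illustrative symmetric absmax INT8 quantize/dequantize round trip in NumPy; the function names are my own:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric absmax quantization: map [-max|w|, max|w|] onto [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from INT8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.75], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error per element is bounded by half a quantization step (scale / 2).
```

Libraries like bitsandbytes build on this basic scheme but apply it per-block or per-channel and keep outlier columns in higher precision, which is what limits the accuracy loss on reasoning tasks.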

QuantLab-8bit is a reproducible benchmark of 8-bit quantization on compact vision backbones. It includes FP32 baselines, PTQ (dynamic and static), QAT, ONNX exports, parity checks, ONNX Runtime (ORT) CPU latency measurements, and visual diagnostics.

  • Updated Sep 25, 2025
  • Python
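Static PTQ, one of the modes this benchmark covers, differs from the symmetric weight scheme above in that it needs a calibration pass over representative data to fix the activation range. A minimal sketch of that calibration step, using asymmetric UINT8 quantization with an invented helper API (not QuantLab's actual code):

```python
import numpy as np

def calibrate(acts):
    """Static PTQ calibration: derive scale and zero-point from the
    observed activation range on a calibration set."""
    lo, hi = float(acts.min()), float(acts.max())
    scale = (hi - lo) / 255.0
    zero_point = int(np.round(-lo / scale))
    return scale, zero_point

def quantize(x, scale, zp):
    # Asymmetric quantization onto [0, 255]
    return np.clip(np.round(x / scale) + zp, 0, 255).astype(np.uint8)

def dequantize(q, scale, zp):
    return (q.astype(np.float32) - zp) * scale

calib = np.linspace(-1.0, 3.0, 101).astype(np.float32)  # stand-in calibration data
scale, zp = calibrate(calib)
recon = dequantize(quantize(calib, scale, zp), scale, zp)
```

Dynamic PTQ skips this step and computes activation ranges per batch at inference time, which is why static PTQ typically wins on CPU latency (the kind of ORT comparison this benchmark reports) at the cost of a calibration requirement.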
