Pull requests: intel/auto-round
quick fix: gptqmodel no longer includes gptqmodel_marlin_kernels
#1671 opened Apr 9, 2026 by xin3he (Contributor)
Fix omni model test CI issue
#1667 opened Apr 7, 2026 by lvliang-intel (Contributor)
Fix Hadamard transform weight dtype: use float32 by default and transform the weight in place
#1665 opened Apr 7, 2026 by lkk12014402 (Contributor)
[step 1] Support variable block input shapes for gemma4
#1656 opened Apr 3, 2026 by wenhuach21 (Contributor)
Enable low_cpu_mem_usage for mxfp/nvfp
#1648 opened Apr 2, 2026 by Kaihui-intel (Contributor)
Support WOQ model input, such as kimi2.5
#1642 opened Mar 31, 2026 by xin3he (Contributor)
Enable NextStepDiffusion and support multi-device tuning for diffusion
#1640 opened Mar 30, 2026 by xin3he (Contributor)
[Draft] Support TurboQuant KV-cache quantization
#1634 opened Mar 27, 2026 by lvliang-intel (Contributor) · Draft
Support ByteDance-Seed/BAGEL-7B-MoT quantization in w4a16 format
#1633 opened Mar 27, 2026 by lvliang-intel (Contributor)
Support diffusion model AIDC-AI/Ovis-Image-7B quantization
#1616 opened Mar 25, 2026 by lvliang-intel (Contributor)
Refactor module access to use PyTorch get_submodule / set_submodule
#1590 opened Mar 20, 2026 by scopophobic (Contributor)
Robust FP8 layer detection for ignore_layers (#1283)
#1289 opened Jan 15, 2026 by scopophobic (Contributor)
Fix ignore_layers not working for FP8 models
#1286 opened Jan 15, 2026 by Copilot (AI)