I've tried to apply SpinQuant to llama.cpp with the following steps; a toy sketch of what I mean by the fusion step is included after the list.
- Fuse the R1 and R2 rotation matrices into the model weights (unquantized)
- Convert the model to a GGUF file (unquantized)
- Quantize to data types supported in llama.cpp, e.g. Q8_0 or Q4_0 (quantized)
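For clarity, here is a minimal PyTorch sketch of the kind of fusion I mean for R1 (my own toy illustration, not code from the SpinQuant or llama.cpp repos; the shapes and names are made up). An orthogonal rotation of the residual stream is folded into the weights on both sides, so the unquantized model is numerically unchanged:

```python
import torch

torch.manual_seed(0)

def random_orthogonal(n: int) -> torch.Tensor:
    # Random orthogonal matrix from the QR decomposition of a Gaussian.
    q, _ = torch.linalg.qr(torch.randn(n, n, dtype=torch.float64))
    return q

hidden = 64                      # toy hidden size
R1 = random_orthogonal(hidden)

# Toy pair of residual-stream linears: W_out writes to the residual
# stream, W_in reads from it (PyTorch [out_features, in_features] layout).
W_out = torch.randn(hidden, hidden, dtype=torch.float64)
W_in = torch.randn(hidden, hidden, dtype=torch.float64)

# Offline fusion: rotate the residual stream by R1. Writers are
# left-multiplied by R1^T, readers are right-multiplied by R1, so the
# composition is mathematically unchanged before quantization.
W_out_fused = R1.T @ W_out
W_in_fused = W_in @ R1

x = torch.randn(4, hidden, dtype=torch.float64)
y_ref = x @ W_out.T @ W_in.T
y_fused = x @ W_out_fused.T @ W_in_fused.T
print(torch.allclose(y_ref, y_fused))  # True up to fp error
```

(In the real model the RMSNorm scales also have to be folded into the adjacent linears first, otherwise the rotation does not commute with the norm; I did that before fusing.)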
However, the models fused with the R1/R2 matrices show worse perplexity than models quantized with the plain round-to-nearest quantization built into llama.cpp.
To make this work without modifying llama.cpp, I skipped the R4 rotation and its inverse, since applying them would require additional implementation inside llama.cpp itself.
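For reference, here is my understanding of the skipped step as another toy sketch (again made-up shapes, not actual SpinQuant code): R4 is a Hadamard rotation folded into the down_proj weight offline, whose counterpart has to be applied to the activation online at inference time, and it is that online transform that llama.cpp currently lacks:

```python
import torch

def hadamard(n: int) -> torch.Tensor:
    # Sylvester construction; n must be a power of two.
    h = torch.ones(1, 1, dtype=torch.float64)
    while h.shape[0] < n:
        h = torch.cat([torch.cat([h, h], dim=1),
                       torch.cat([h, -h], dim=1)], dim=0)
    return h / torch.sqrt(torch.tensor(float(n)))  # orthonormal: h @ h.T == I

inter = 8                                   # toy intermediate size
H = hadamard(inter)

# Toy down_proj weight in [out_features, in_features] layout.
W_down = torch.randn(4, inter, dtype=torch.float64)
W_down_fused = W_down @ H                   # offline: fold H into the weight

x = torch.randn(2, inter, dtype=torch.float64)  # activation before down_proj

y_ref = x @ W_down.T
# The online part llama.cpp would need: rotate the activation at
# inference time; the fused weight then undoes the rotation.
y_fused = (x @ H) @ W_down_fused.T
print(torch.allclose(y_ref, y_fused))       # True up to fp error
```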
Have you run into this issue as well? If so, could you share the reason behind it?