
[Inquiry]: Unable to use PEFT for training #48

@moyitech


Basic Information - Models Used

MiniMax-M2

Description

When I try to load the model in bf16, the printed model still shows the linear layers as FP8, and the weights cannot be dequantized.
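A minimal sketch of the loading path that reproduces this, assuming the standard transformers API and the Hugging Face repo id "MiniMaxAI/MiniMax-M2" (an assumption here); despite requesting bf16, the printed modules still report FP8-quantized linear layers:

```python
import torch
from transformers import AutoModelForCausalLM

# Requested dtype is bfloat16, but the checkpoint is stored as FP8 and the
# quantization wrappers remain on the linear layers when the model is printed.
model = AutoModelForCausalLM.from_pretrained(
    "MiniMaxAI/MiniMax-M2",          # assumed repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
print(model)  # inspect layer types / dtypes to see the FP8 linear layers
```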


If the model is not dequantized, PEFT fails with the error: "The model you are trying to fine-tune is quantized with QuantizationMethod.FP8, but that quantization method does not support training."
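A minimal PEFT/LoRA sketch, continuing from the model loaded above, to show where the check fires; the target module names are assumptions and may not match MiniMax-M2's actual layer names:

```python
from peft import LoraConfig, get_peft_model

# Attaching a LoRA adapter (or launching a Trainer run) on the still-FP8 base
# model is where the quoted "does not support training" error is raised.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```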

