Update mixtral.md #1940
Exllama kernels in GPTQConfig for faster inference and production load.
Added a link to the official exllama GitHub repo.
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
pcuenca left a comment:
Thank you! I have some questions / doubts about how it works, so I'd suggest we try to dispel them for readers too.
```python
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

> If left unset , the "use_exllama" parameter defaults to True , enabling the exllama backend functionality, specifically designed to work with the "bits" value of 4.
Suggested change:
- If left unset , the "use_exllama" parameter defaults to True , enabling the exllama backend functionality, specifically designed to work with the "bits" value of 4.
+ If left unset, `use_exllama` defaults to `True` when kernels are installed.
I don't fully follow, sorry. If the backend is designed for 4-bits and use_exllama is True by default, then it means:
- We can't use any value other than 4 bits in the GPTQConfig.
- Exllama would be enabled anyway if we don't provide the configuration object.
Is that correct? If it is, then I'd simply mention in a paragraph that exllama will be used when installed, and wouldn't provide a code example that might confuse readers.
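For context, here is a minimal sketch of the two paths under discussion, assuming a transformers version where `GPTQConfig` exposes `use_exllama` and the exllama kernels are installed; the checkpoint name is only an illustrative example:

```python
from transformers import AutoModelForCausalLM, GPTQConfig

model_id = "TheBloke/Mixtral-8x7B-v0.1-GPTQ"  # illustrative GPTQ checkpoint

# Path 1: no quantization_config at all -- for a 4-bit GPTQ checkpoint,
# the exllama backend is picked up by default when the kernels are installed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Path 2: pass a GPTQConfig explicitly, e.g. to opt out of the exllama backend.
gptq_config = GPTQConfig(bits=4, use_exllama=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
```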
The exllama kernels are enabled through the `GPTQConfig` object. Simply passing the `GPTQConfig` would do the trick for Llama-based LLMs, but the `GPTQConfig` object does need to be passed.
I created the `GPTQConfig` with other parameters defined:

```python
gptq_config = GPTQConfig(bits=4, use_exllama=True)
```

to help educate readers about some basic parameters of the `GPTQConfig` object when using the exllama kernels.
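Tying that config to the doc snippet quoted above, a hedged end-to-end sketch of how it would be used (the checkpoint name and prompt are placeholders, not from the PR):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "TheBloke/Mixtral-8x7B-v0.1-GPTQ"  # placeholder GPTQ checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Explicit config, mirroring the example under discussion.
gptq_config = GPTQConfig(bits=4, use_exllama=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```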
> Note that for both QLoRA and GPTQ you need at least 30 GB of GPU VRAM to fit the model. You can make it work with 24 GB if you use `device_map="auto"`, like in the example above, so some layers are offloaded to CPU.
Is this also true when exllama is enabled?
Using the exllama kernels would significantly reduce only the inference time of the fitted model, as it uses the 4-bit GPTQ weights for faster computation.
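A quick way to check that point (a sketch; `hf_device_map` and `get_memory_footprint` are standard transformers model attributes when `device_map` is used) is to inspect the loaded model, since toggling exllama swaps kernels rather than changing how the 4-bit weights are stored:

```python
# After loading the model as above:
print(model.hf_device_map)                 # which layers sit on GPU vs. CPU
print(model.get_memory_footprint() / 1e9)  # rough weight footprint in GB
```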
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Exllama kernels using GPTQConfig for faster inference and production load. @davanstrien @younesbelkada @pcuenca