Pass LiteLLM local vs. remote parameters (like num_ctx, keep_alive, ...) from the config YAML file through to the LLM client. Invalid parameters cause errors when launching a completion.
llm_client.py:274

completion_args = {
    "model": self.model,
    "messages": messages,
    "temperature": temp,
    "keep_alive": self.keep_alive,
    "num_ctx": self.num_ctx,
    **kwargs,
}
Should these be treated as generic kwargs here, or parsed based on the model type string (e.g. only forwarded when the model is a local one)?
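One possible shape for the model-type-string approach, as a minimal sketch: filter out local-only options before building completion_args. The parameter names num_ctx and keep_alive are Ollama-specific; the "ollama/" / "ollama_chat/" prefix check is an assumption about how local models are named in the config, and build_completion_args / LOCAL_ONLY_PARAMS are hypothetical names, not part of the existing llm_client.py.

```python
# Sketch: drop local-only (Ollama) options when targeting a remote provider.
# Assumes local models are identified by an "ollama/" or "ollama_chat/" prefix
# in the LiteLLM model string; adjust to however the config names them.

LOCAL_ONLY_PARAMS = {"num_ctx", "keep_alive"}  # Ollama-specific options

def build_completion_args(model: str, messages: list, **options) -> dict:
    """Build kwargs for litellm.completion, stripping local-only options
    when the model string does not point at a local Ollama model."""
    is_local = model.startswith(("ollama/", "ollama_chat/"))
    if not is_local:
        options = {k: v for k, v in options.items()
                   if k not in LOCAL_ONLY_PARAMS}
    return {"model": model, "messages": messages, **options}
```

The alternative (treating everything as generic kwargs) could lean on LiteLLM's own `drop_params=True` setting, though that targets unsupported OpenAI-style params and may not cover provider-extra options like num_ctx.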