Summary
The Brev model-serving guides instruct users to run `pip install litellm`. This installs the LiteLLM SDK but not the proxy dependencies, so running `litellm` as a server then fails with:

```
ImportError: Missing dependency No module named 'backoff'.
Run `pip install 'litellm[proxy]'`
```
This error surfaces after the model has already downloaded (~45 minutes for an 80 GB model), making it feel like a fresh blocker at the worst possible moment.
There is a second issue on Ubuntu (the default Brev instance OS): `pip install` places executables in `~/.local/bin`, which is not on the default `PATH`. Running `litellm` immediately after install therefore produces `command not found`.
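The PATH symptom can be checked and fixed in one step (a minimal sketch; the fallback message is illustrative):

```shell
# Put pip's user-install bin directory on PATH, then confirm the CLI resolves.
export PATH="$HOME/.local/bin:$PATH"
command -v litellm || echo "litellm still not found - check that pip installed into ~/.local/bin"
```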
Steps to reproduce
- Follow any Brev model-serving guide that includes `pip install litellm`
- Wait for the model download to complete
- Run `litellm --model openai/... --api_base http://localhost:8000/v1 --port 4000`
- See `ImportError: No module named 'backoff'`
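One way to tell an SDK-only install from a full proxy install before sitting through the model download is to probe for one of the proxy-only modules (a sketch; `backoff` is the module the traceback names):

```shell
# Prints a hint depending on whether the proxy extras are importable.
if python3 -c "import backoff" 2>/dev/null; then
  echo "proxy extras present"
else
  echo "proxy extras missing - run: pip install 'litellm[proxy]'"
fi
```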
Expected behavior
The documented install command produces a fully functional `litellm` CLI.
Correct install and start sequence
```shell
pip install 'litellm[proxy]'           # NOT plain: pip install litellm
export PATH="$HOME/.local/bin:$PATH"   # ~/.local/bin is not on PATH by default on Ubuntu
export OPENAI_API_KEY=dummy            # litellm requires this even for local endpoints
nohup litellm \
  --model openai/qwen3-coder-next \
  --api_base http://localhost:8000/v1 \
  --drop_params \
  --port 4000 > /tmp/litellm.log 2>&1 &
```

Requested doc changes
- Replace `pip install litellm` with `pip install 'litellm[proxy]'` in all model-serving guides
- Add the PATH export for Ubuntu: `export PATH="$HOME/.local/bin:$PATH"`
- Add a verification step immediately after install: `litellm --version` (should print a version without an ImportError)
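The guides could also add a smoke test after the start sequence, so users learn the proxy failed before pointing clients at it. A sketch, assuming the OpenAI-compatible `/v1/models` route the proxy serves on the configured port:

```shell
# Poll the proxy for up to ~5 s and report whether it answers.
status="down"
for _ in $(seq 1 5); do
  if curl -sf --max-time 2 http://localhost:4000/v1/models >/dev/null 2>&1; then
    status="up"; break
  fi
  sleep 1
done
echo "proxy is $status"   # if it stays down, check /tmp/litellm.log
```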
Impact
Trivial fix (docs-only). High user impact due to timing: the error hits after the longest wait in the entire workflow.