default-values.env
47 lines (39 loc) · 1.11 KB
# Note: You only need to set the variables you normally would with '-e' flags.
# You do not need to set them all if they will go unused.
# Enable Inference Providers
## Set any providers you want enabled to 'true'
## E.g. ENABLE_VLLM=true
## Leave all disabled providers EMPTY
## E.g. ENABLE_OPENAI=
ENABLE_VLLM=
ENABLE_VERTEX_AI=
ENABLE_OPENAI=
ENABLE_OLLAMA=
# vLLM Inference Settings
VLLM_URL=
VLLM_API_KEY=
# vLLM Optional Variables
VLLM_MAX_TOKENS=
VLLM_TLS_VERIFY=
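## Example (hypothetical values, for illustration only): a local vLLM
## server on its default port, serving an OpenAI-compatible API
## E.g. VLLM_URL=http://localhost:8000/v1
## E.g. VLLM_API_KEY=your-vllm-key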
# OpenAI Inference Settings
OPENAI_API_KEY=
# Vertex AI Inference Settings
VERTEX_AI_PROJECT=
VERTEX_AI_LOCATION=
GOOGLE_APPLICATION_CREDENTIALS=
# Ollama Inference Settings
OLLAMA_URL=
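## Example (assumed value, following the containerized-host pattern
## used for SAFETY_URL below)
## E.g. OLLAMA_URL=http://host.docker.internal:11434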
# Question Validation Safety Shield Settings
## Ensure VALIDATION_PROVIDER is one of your enabled Inference Providers
## E.g. VALIDATION_PROVIDER=vllm if ENABLE_VLLM=true
VALIDATION_PROVIDER=
VALIDATION_MODEL_NAME=
# Llama Guard Settings
## Defaults to llama-guard3:8b if not set
SAFETY_MODEL=
## Defaults to http://host.docker.internal:11434/v1 if not set
SAFETY_URL=
## Only required for non-local environments with an API key
SAFETY_API_KEY=
# Other
LLAMA_STACK_LOGGING=
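# Example usage (assuming Docker; Podman's --env-file flag works the same
# way, and <image> stands in for your actual image name):
# docker run --env-file default-values.env <image>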