
Commit a991c75: vLLM update (#302)

* updated vllm setup
* updated vllm

Parent: 7494de9

1 file changed: docs/installation/advanced/vllm.md (36 additions, 32 deletions)
@@ -116,7 +116,7 @@ VLLM_MODEL=''
 
 ### Start vLLM
 
-You can start vLLM by running the following command inside inside `/opt/seatable-compose`:
+You can start vLLM by running the following command inside `/opt/seatable-compose`:
 
 ```bash
 docker compose up -d
@@ -126,24 +126,47 @@ Starting vLLM may take several minutes depending on the model size and computing
 On first startup, vLLM will automatically download the configured model from [HuggingFace](https://huggingface.co).
 Wait for the Docker container to report a healthy status before proceeding.
 
-### Test vLLM from the command line
+Perfect! Your local vLLM deployment is ready to use.
+
+## SeaTable AI Configuration
+
+To use vLLM for AI-based automation inside SeaTable, add the following settings to the `.env` file on the host where SeaTable AI is deployed:
+
+```ini
+SEATABLE_AI_LLM_TYPE='hosted_vllm'
+
+SEATABLE_AI_LLM_URL='https://<YOUR_VLLM_HOSTNAME>/v1'
+
+# API key for requests to vLLM (use the same key as above)
+SEATABLE_AI_LLM_KEY=''
+
+# Model identifier from HuggingFace
+# (e.g. RedHatAI/gemma-3-12b-it-quantized.w4a16)
+SEATABLE_AI_LLM_MODEL=''
+```
+
+Remember to restart SeaTable AI after making any changes.
+
+## Test vLLM
 
-You can send your first request to vLLM from the command line with the following example. This requires `curl` and `jq` to be installed on the server.
-To prevent Caddy from blocking the request, add the server's IP address to the `VLLM_ALLOWED_IPS` environment variable.
+### Send a request
+
+You can send your first request to vLLM with the following example from the command line of the **SeaTable AI container**.
+This verifies that all your environment variables are set correctly and that the IP address of SeaTable AI is on the list of allowed IP addresses configured in `VLLM_ALLOWED_IPS`.
 
 ```bash
-cd /opt/seatable-compose
-source ./.env
-curl -fsSL https://${VLLM_HOSTNAME}/v1/chat/completions \
+docker exec -i seatable-ai bash -s <<EOF
+curl -fsSL \$SEATABLE_AI_LLM_URL/chat/completions \
   -H "Content-Type: application/json" \
-  -H "Authorization: Bearer ${VLLM_API_KEY}" \
+  -H "Authorization: Bearer \$SEATABLE_AI_LLM_KEY" \
   -d '{
-    "model": "RedHatAI/gemma-3-12b-it-quantized.w4a16",
+    "model": "\$SEATABLE_AI_LLM_MODEL",
     "messages": [
-      {"role": "system", "content": "You are a helpful assistant."},
-      {"role": "user", "content": "How many inhabitants does Germany have?"}
+      {"role": "system", "content": "You are a helpful assistant."},
+      {"role": "user", "content": "How many inhabitants does Germany have?"}
     ]
-}' | jq
+}'
+EOF
 ```
 
 ### Example output
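The `docker exec -i … bash -s <<EOF` pattern in the new test command depends on heredoc escaping: variables written as `$VAR` are expanded by the host shell before the script ever reaches the container, while `\$VAR` survives verbatim and is expanded by the inner `bash -s`, where the `SEATABLE_AI_LLM_*` variables actually exist. A minimal local sketch of that behavior (no Docker involved; `OUTER` and `INNER` are illustrative names, not part of the docs):

```shell
# OUTER is set in the calling shell; INNER only exists inside `bash -s`.
OUTER="outer-value"
bash -s <<EOF
INNER="inner-value"
echo "unescaped: $OUTER"   # expanded by the outer shell before bash -s runs
echo "escaped: \$INNER"    # left as \$INNER, expanded inside bash -s
EOF
```

This prints `unescaped: outer-value` and `escaped: inner-value`, which is why the curl example escapes every variable that should be resolved inside the SeaTable AI container.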
@@ -192,23 +215,4 @@ Here is an example response from vLLM:
 }
 ```
 
-Perfekt! Your local vLLM deployment is ready to use.
-
-## SeaTable AI Configuration
-
-To use vLLM for AI-based automation inside SeaTable, add the following settings to the `.env` file on the host where SeaTable AI is deployed:
-
-```ini
-SEATABLE_AI_LLM_TYPE='hosted_vllm'
-
-SEATABLE_AI_LLM_URL='https://<YOUR_VLLM_HOSTNAME>/v1'
-
-# API key for requests to vLLM (use the same key as above)
-SEATABLE_AI_LLM_KEY=''
-
-# Model identifier from HuggingFace
-# (e.g. RedHatAI/gemma-3-12b-it-quantized.w4a16)
-SEATABLE_AI_LLM_MODEL=''
-```
-
-Remember to restart SeaTable AI after making any changes.
+:partying_face: **Congratulations!** Everything is set up and you can start building AI-powered automations in SeaTable with your own self-hosted vLLM.
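Since vLLM serves an OpenAI-compatible API, the same chat-completions request can be issued from any HTTP client, not only curl. A minimal Python sketch that assembles the request from the `SEATABLE_AI_LLM_*` variables documented above (the `build_chat_request` helper and the fallback defaults are illustrative, not part of the docs or of vLLM):

```python
import json
import os

def build_chat_request(base_url: str, api_key: str, model: str, user_msg: str):
    """Build URL, headers, and JSON body for a vLLM chat-completions call."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_msg},
        ],
    })
    return url, headers, body

if __name__ == "__main__":
    # Read the same variables SeaTable AI uses; the defaults keep this runnable
    # without a real deployment (hostname and model are placeholders).
    url, headers, body = build_chat_request(
        os.environ.get("SEATABLE_AI_LLM_URL", "https://vllm.example.com/v1"),
        os.environ.get("SEATABLE_AI_LLM_KEY", ""),
        os.environ.get("SEATABLE_AI_LLM_MODEL", "RedHatAI/gemma-3-12b-it-quantized.w4a16"),
        "How many inhabitants does Germany have?",
    )
    print(url)  # the request could then be sent with urllib.request or requests
```

The resulting URL, headers, and body mirror the curl example one-to-one, so this is an easy way to script the same smoke test from automation code.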
