# API key for requests to vLLM (use the same key as above)
SEATABLE_AI_LLM_KEY=''

# Model identifier from HuggingFace
# (e.g. RedHatAI/gemma-3-12b-it-quantized.w4a16)
SEATABLE_AI_LLM_MODEL=''
```

Remember to restart SeaTable AI after making any changes.
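If SeaTable AI runs as a Docker Compose service, restarting could look like the sketch below. The compose file name `seatable-ai.yml` and the service name `seatable-ai` are assumptions; adjust them to your deployment.

```bash
# Recreate the SeaTable AI container so the changed environment variables take effect.
# The compose file and service names are assumptions, adjust them to your own setup.
docker compose -f seatable-ai.yml up -d --force-recreate seatable-ai
```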
## Test vLLM
### Send a request

You can send your first request to vLLM with the following example from the command line of the **SeaTable AI container**. This confirms that all your environment variables are set correctly and that the IP address of SeaTable AI is included in the list of allowed IP addresses configured with `VLLM_ALLOWED_IPS`.
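A minimal test could look like the following sketch. It assumes that `curl` and `jq` are available in the container, that your vLLM server is reachable at `https://vllm.example.com` (replace this with your own URL), and that it exposes vLLM's OpenAI-compatible API.

```bash
# Minimal test request against vLLM's OpenAI-compatible chat endpoint.
# The URL is a placeholder; the two variables are the ones configured for SeaTable AI above.
curl -s https://vllm.example.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${SEATABLE_AI_LLM_KEY}" \
  -d '{
        "model": "'"${SEATABLE_AI_LLM_MODEL}"'",
        "messages": [{"role": "user", "content": "Reply with a short greeting."}]
      }' | jq
```

If the request succeeds, the model's reply appears in `choices[0].message.content` of the JSON response.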
:partying_face: **Congratulations!** Everything is set up and you can start to build AI-powered automations in SeaTable with your own self-hosted vLLM.