Qualifire OpenAI-Compatible Model for n8n Agents

This node injects the required `X-Qualifire-Api-Key` header and uses the Qualifire OpenAI-compatible base URL, so you can connect it directly to the Agent → Chat Model input.

Base URL used by default: `https://proxy.qualifire.ai/api/providers/openai`
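As a rough sketch of what the node configures under the hood, the snippet below assembles OpenAI-compatible client options pointed at the Qualifire proxy. The interface and function names here are illustrative assumptions, not the node's actual internals:

```typescript
// Illustrative sketch only: QualifireClientOptions and buildQualifireOptions
// are hypothetical names, not the node's real internals.
interface QualifireClientOptions {
  baseURL: string;
  defaultHeaders: Record<string, string>;
}

// Build OpenAI-compatible client options routed through the Qualifire proxy.
function buildQualifireOptions(
  qualifireKey: string,
  openaiKey: string,
): QualifireClientOptions {
  return {
    // Note: no trailing /v1 -- the proxy path already routes to OpenAI.
    baseURL: "https://proxy.qualifire.ai/api/providers/openai",
    defaultHeaders: {
      "X-Qualifire-Api-Key": qualifireKey, // injected by the node
      Authorization: `Bearer ${openaiKey}`, // standard OpenAI auth
    },
  };
}

const opts = buildQualifireOptions("qf_key", "sk_key");
console.log(opts.baseURL);
```

Any OpenAI-compatible client given these options will transparently send its traffic through Qualifire.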
docker-compose.yml:

```yaml
services:
  n8n:
    image: n8nio/n8n:latest
    ports: ["5678:5678"]
    environment:
      - N8N_CUSTOM_EXTENSIONS=/extensions
    volumes:
      - ./extensions:/extensions
```

Place the built package here: `./extensions/n8n-nodes-qualifire-model/`

Then restart the container.
```shell
npm i
npm run build
# copy dist/ into ./extensions/n8n-nodes-qualifire-model/
```

If your n8n version shows peer dependency warnings, install matching versions:

```shell
npm i -D n8n-workflow@<your-n8n-version> n8n-core@<your-n8n-version>
```
- Add the Qualifire Model node.
- Set credentials:
  - OpenAI API (for Qualifire Model)
  - Qualifire API
- (Optional) Toggle Responses API Mode if you prefer `/responses` over `/chat/completions`.
- Connect the node's Model output to Agent → Chat Model.
- Open chat and talk. The Agent will call OpenAI through Qualifire.
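The Responses API Mode toggle only changes which OpenAI-compatible path is called on the proxy. A minimal sketch of that resolution (the helper name is hypothetical):

```typescript
// Hypothetical helper: resolve the endpoint the node would call, depending on
// whether Responses API Mode is enabled.
function qualifireEndpoint(baseURL: string, responsesMode: boolean): string {
  const path = responsesMode ? "/responses" : "/chat/completions";
  // Strip any trailing slash so we never emit a double slash.
  return baseURL.replace(/\/+$/, "") + path;
}

const base = "https://proxy.qualifire.ai/api/providers/openai";
console.log(qualifireEndpoint(base, false)); // .../chat/completions
console.log(qualifireEndpoint(base, true));  // .../responses
```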
Click Execute node on the Qualifire Model node. The first output (main) returns a live call with a test prompt. If it fails, check your keys and ensure the base URL does not include `/v1`.
- If your Agent refuses to connect to the node's Model output, your n8n build may expect a different output key (e.g., `ai_model` instead of `aiModel`). In that case, change the `outputs` array in `QualifireModel.node.ts` to `['main', 'ai_model']`, rebuild, and try again.
- You can also switch the node to read keys from env by enabling Expose Bearer from Env Instead of Credential.
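For the output-key fix above, the edit in `QualifireModel.node.ts` would look roughly like the fragment below; all other node description fields are omitted for brevity:

```typescript
// Sketch of the relevant fragment of the node description in
// QualifireModel.node.ts; every other field is omitted here.
const description = {
  // Original:
  // outputs: ['main', 'aiModel'],
  // Changed so n8n builds that expect the snake_case key can connect:
  outputs: ['main', 'ai_model'],
};

console.log(description.outputs);
```

After changing this, run `npm run build` again and copy the fresh `dist/` into your extensions folder before retrying the connection.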