llm-infrastructure

Here are 32 public repositories matching this topic...

Multi-model AI agent runtime. Define agents in YAML, connect 6 LLM providers, orchestrate with ReAct/Plan&Execute/Fan-Out/Pipeline/Supervisor/Swarm patterns, and deploy as REST/WebSocket API with RAG, memory, MCP tools, guardrails, and OpenTelemetry observability.

  • Updated Apr 7, 2026
  • Python
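Of the orchestration patterns this runtime lists, ReAct is the simplest to illustrate: the model alternates between proposing a tool call and returning a final answer, with each tool result fed back as an observation. The sketch below is a minimal, self-contained version of that loop; the tool registry, the `Action: name(args)` text format, and the stub model are illustrative assumptions, not this repository's actual API.

```python
# Minimal ReAct-style loop. All names here are hypothetical; real agent
# runtimes typically parse structured tool-call messages from the provider.
import re

TOOLS = {"add": lambda a, b: a + b}  # hypothetical tool registry

def react(question, model, max_steps=5):
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = model(transcript)                 # LLM proposes the next step
        match = re.match(r"Action: (\w+)\((.*)\)", reply)
        if not match:                             # no tool call => final answer
            return reply
        name, raw_args = match.groups()
        args = [int(x) for x in raw_args.split(",")]
        observation = TOOLS[name](*args)          # execute the requested tool
        transcript += f"\n{reply}\nObservation: {observation}"
    return "max steps exceeded"

# Stub "model" for demonstration: calls the add tool once, then answers
# with the observation it received.
def stub_model(transcript):
    if "Observation:" in transcript:
        return transcript.rsplit("Observation: ", 1)[1]
    return "Action: add(2, 3)"
```

The other patterns mentioned (Plan&Execute, Fan-Out, Supervisor, Swarm) layer scheduling and delegation on top of this same call-tool-observe core.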

A production-ready, enterprise-grade Agentic RAG ingestion pipeline built with n8n, Supabase (pgvector), and AI embeddings. Implements event-driven orchestration, hybrid RAG for structured and unstructured data, vector similarity search, and multi-tenant architecture to deliver client-isolated, retrieval-ready knowledge bases.

  • Updated Jan 10, 2026
  • PLpgSQL
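The vector similarity search this pipeline builds on pgvector ranks stored embeddings by their distance to a query embedding (pgvector's `<=>` operator computes cosine distance). The Python sketch below reproduces that ranking in memory so the idea is concrete; the tiny two-dimensional "embeddings" and document IDs are made up for illustration, standing in for real model embeddings in a `pgvector` column.

```python
import math

def cosine_distance(a, b):
    # What pgvector's <=> operator computes: 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

# Hypothetical mini knowledge base: (doc_id, embedding) pairs.
DOCS = [("a", [1.0, 0.0]), ("b", [0.6, 0.8]), ("c", [0.0, 1.0])]

def top_k(query, k=2):
    # In-memory equivalent of:
    #   SELECT id FROM docs ORDER BY embedding <=> :query LIMIT :k
    ranked = sorted(DOCS, key=lambda pair: cosine_distance(query, pair[1]))
    return [doc_id for doc_id, _ in ranked[:k]]
```

In the actual pipeline the ranking happens inside Postgres (typically accelerated by an index), and multi-tenancy would add a tenant filter to the `WHERE` clause so each client only searches its own rows.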
