📌 QuanAgent - A customized fork of Dify for quantitative analysis and AI-powered workflows
QuanAgent Repository · Self-hosting Guide · Documentation
QuanAgent is a customized fork of Dify, an open-source platform for developing LLM applications. It retains Dify's full feature set, including agentic AI workflows, RAG pipelines, agent capabilities, model management, and observability, with custom branding and configurations for quantitative analysis use cases.
Before installing QuanAgent, make sure your machine meets the following minimum system requirements:
- CPU >= 2 cores
- RAM >= 4 GiB
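If you're on a Linux host, a quick way to check these numbers is to read `/proc` directly (the paths below are Linux-specific; macOS and Windows hosts need different commands):

```shell
# Check CPU core count and total RAM against QuanAgent's minimums (Linux only).
cores=$(grep -c ^processor /proc/cpuinfo)
mem_kib=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gib=$((mem_kib / 1024 / 1024))
echo "CPU cores: $cores (need >= 2)"
echo "RAM: ${mem_gib} GiB (need >= 4)"
```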
For production deployment with Nginx reverse proxy and SSL support, please refer to our comprehensive guide:
This guide covers:
- Docker + Docker Compose setup
- Nginx reverse proxy configuration
- SSL certificate with Let's Encrypt
- Domain configuration
- Performance optimization
- Common troubleshooting
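As a rough sketch of the reverse-proxy portion of that guide, a host-level Nginx server block might look like the following. This is a hypothetical minimal example, assuming the Compose stack's own nginx is published on port 8080 (e.g. via `EXPOSE_NGINX_PORT=8080`); the domain is a placeholder, and the full guide adds SSL, timeouts, and websocket headers on top:

```nginx
# Minimal host-level reverse proxy sketch for QuanAgent (placeholder values).
server {
    listen 80;
    server_name quanagent.example.com;  # placeholder: your domain

    location / {
        # Forward to the Compose stack's nginx, assumed published on 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```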
The easiest way to start the QuanAgent server locally is through Docker Compose:
```bash
cd QuanAgent/docker
cp .env.example .env
docker compose up -d
```

After running, access the dashboard at http://localhost/install and start the initialization process.
- Production Deployment: See DEPLOYMENT.md
- General Issues: Refer to the Dify FAQ
- Development: Check the Dify deployment guide
1. Workflow: Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.
2. Comprehensive model support: Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found here.
3. Prompt IDE: Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
5. RAG Pipeline: Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-the-box support for text extraction from PDFs, PPTs, and other common document formats.
5. Agent capabilities: You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. The platform provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
6. LLMOps: Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
7. Backend-as-a-Service: All offerings come with corresponding APIs, so you can effortlessly integrate QuanAgent into your own business logic.
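As a sketch of what calling those APIs looks like, the helper below assembles a request for a Dify-style `POST /v1/chat-messages` endpoint authenticated with an app API key. The base URL, key, and query values are illustrative placeholders; the endpoint and field names follow upstream Dify's service API, so verify them against your QuanAgent instance's API docs:

```python
import json

API_BASE = "http://localhost/v1"  # placeholder: adjust to your QuanAgent host

def build_chat_request(api_key: str, query: str, user: str) -> dict:
    """Assemble a Dify-style POST /v1/chat-messages request.

    Only builds the URL, headers, and body; send them with any HTTP client.
    """
    return {
        "url": f"{API_BASE}/chat-messages",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "inputs": {},
            "query": query,
            "response_mode": "blocking",  # or "streaming" for server-sent events
            "user": user,
        }),
    }

# Placeholder key and query, for illustration only
req = build_chat_request("app-xxxx", "Summarize today's portfolio risk", "analyst-1")
print(req["url"])
```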
Self-hosting QuanAgent
Quickly get QuanAgent running in your environment with this starter guide. Use the Dify documentation for further references and more in-depth instructions, as QuanAgent is based on Dify.
About Dify
QuanAgent is a fork of Dify, an excellent open-source LLM application platform. For the original Dify Cloud service and enterprise features, please visit dify.ai.
Star QuanAgent on GitHub and be instantly notified of new releases. Also consider starring the original Dify project to support the upstream development.
If you need to customize the configuration, please refer to the comments in our .env.example file and update the corresponding values in your .env file. Additionally, you might need to adjust the docker-compose.yaml file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run docker compose up -d. You can find the full list of available environment variables here.
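For example, a few common overrides in docker/.env might look like this. The variable names follow upstream Dify's .env.example; confirm them against your own copy before relying on them:

```bash
# Example docker/.env overrides (names as in upstream Dify's .env.example)
EXPOSE_NGINX_PORT=8080          # publish HTTP on 8080 instead of 80
EXPOSE_NGINX_SSL_PORT=8443      # publish HTTPS on 8443 instead of 443
SECRET_KEY=your-generated-secret  # placeholder: generate your own secret
```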
Import the dashboard into Grafana, using QuanAgent's PostgreSQL database as the data source, to monitor metrics at the granularity of apps, tenants, messages, and more.
If you'd like to configure a highly available setup, there are community-contributed Helm Charts and YAML files that allow QuanAgent (based on Dify) to be deployed on Kubernetes.
- Helm Chart by @LeoQuote
- Helm Chart by @BorisPolonsky
- Helm Chart by @magicsong
- YAML file by @Winson-030
- YAML file by @wyy-holding
- 🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym
- Deploy Dify to a cloud platform with a single click using Terraform
- Deploy Dify to AWS with CDK
- Quickly deploy Dify to Alibaba Cloud with Alibaba Cloud Computing Nest
- One-click deploy Dify to Alibaba Cloud with Alibaba Cloud Data Management
- One-click deploy Dify to AKS with an Azure DevOps Pipeline Helm Chart by @LeoZhang
For those who'd like to contribute code, see our Contribution Guide. At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
We are looking for contributors to help translate Dify into languages other than Mandarin or English. If you are interested in helping, please see the i18n README for more information, and leave us a comment in the global-users channel of our Discord Community Server.
- GitHub Discussion. Best for: sharing feedback and asking questions.
- GitHub Issues. Best for: bugs you encounter using Dify.AI, and feature proposals. See our Contribution Guide.
- Discord. Best for: sharing your applications and hanging out with the community.
- X (Twitter). Best for: sharing your applications and hanging out with the community.
Contributors
To protect your privacy, please avoid posting security issues on GitHub. Instead, report them to security@dify.ai, and our team will respond with a detailed answer.
This repository is licensed under the Dify Open Source License, based on Apache 2.0 with additional conditions.


