English | 中文
WHartTest is an AI-driven test automation platform built on Django REST Framework. Its core capability is generating test cases with AI. The platform integrates LangChain, MCP (Model Context Protocol) tool calling, project management, requirement review, test case management, and advanced knowledge-base management with document understanding. By leveraging large language models and multiple embedding providers (OpenAI, Azure OpenAI, Ollama, etc.), it automatically produces high-quality test cases and uses the knowledge base to provide more accurate testing assistance, delivering a complete intelligent testing management solution for QA teams.
Based on large language models (LLMs), it transforms requirements into test cases:
- Multi-source input: supports requirement documents, conversational input, API docs, and more
- Structured output: automatically generates complete cases including steps, preconditions, input data, expected results, and priority (P0-P3)
- Multi-model support: integrates OpenAI, Azure OpenAI, Ollama, Xinference, and other model services
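The structured output described above can be pictured as a plain record. The field names below are illustrative only, not WHartTest's actual schema:

```python
# Illustrative shape of an AI-generated test case (field names are
# hypothetical, not WHartTest's actual schema).
generated_case = {
    "title": "Login with valid credentials",
    "preconditions": ["User account exists", "Login page is reachable"],
    "steps": [
        {"action": "Enter username", "input": "alice", "expected": "Field accepts input"},
        {"action": "Enter password", "input": "secret", "expected": "Input is masked"},
        {"action": "Click Login", "input": None, "expected": "Dashboard is shown"},
    ],
    "priority": "P1",  # P0 (critical) .. P3 (low), as in the feature list
}

assert generated_case["priority"] in {"P0", "P1", "P2", "P3"}
```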
Build project-level knowledge bases for accurate AI context:
- Multi-format support: PDF, Word, Excel, PPT, Markdown, web links, and more
- Intelligent chunking: automatic parsing, chunking, and vector storage
- Semantic retrieval: precise vector-similarity search
- Reranker: configurable reranking models to improve retrieval accuracy
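The chunk-and-retrieve flow can be sketched in a few lines. A real deployment uses embedding models and a vector store; here a bag-of-words cosine similarity stands in for vector search, purely to show the idea:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split a document into fixed-size word chunks (real systems chunk smarter)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 1) -> list[str]:
    """Rank chunks by similarity to the query (stand-in for vector search)."""
    q = Counter(query.lower().split())
    ranked = sorted(chunks, key=lambda c: cosine(q, Counter(c.lower().split())),
                    reverse=True)
    return ranked[:top_k]

doc = "login requires a valid password . checkout supports credit card payment ."
best = retrieve("credit card payment", chunk(doc, size=6))
```

A reranking model, as mentioned above, would re-score the top results with a stronger (slower) model before returning them.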
Connect AI with testing tools via Model Context Protocol:
- Browser automation: integrates Playwright for actions, selectors, screenshots, and recordings
- Multi-protocol support: streamable-http, SSE, and more
- HITL approvals: sensitive actions require human confirmation
- Custom extensions: connect third-party tools and services
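The HITL approval idea reduces to a gate in front of tool dispatch. A minimal sketch, with hypothetical tool names and an approval callback standing in for the real confirmation UI:

```python
# Minimal human-in-the-loop gate: sensitive tool calls wait for approval.
SENSITIVE_TOOLS = {"browser_delete_data", "fs_write"}  # hypothetical tool names

def dispatch(tool: str, args: dict, approve) -> str:
    """Run a tool call, pausing for human confirmation on sensitive ones."""
    if tool in SENSITIVE_TOOLS and not approve(tool, args):
        return "rejected"
    return f"ran {tool}"

# A real UI would prompt a person; here the callback stands in for that.
result = dispatch("fs_write", {"path": "/tmp/x"}, approve=lambda t, a: False)
```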
AI-driven requirement quality evaluation to surface risks early:
- Six-dimension scoring: completeness, clarity, consistency, testability, feasibility, and logic
- Issue tracking: automatically flags ambiguities, conflicts, and omissions
- Improvement suggestions: targeted optimizations and testing strategy advice
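Aggregating the six dimensions might look like the sketch below; the equal weighting and the threshold are assumptions, not WHartTest's actual formula:

```python
# Illustrative aggregation of the six review dimensions (equal weights and
# the threshold are assumptions, not WHartTest's actual formula).
DIMENSIONS = ("completeness", "clarity", "consistency",
              "testability", "feasibility", "logic")

def review_summary(scores: dict[str, float], threshold: float = 6.0):
    """Average the six dimension scores and flag the weak ones."""
    assert set(scores) == set(DIMENSIONS)
    overall = sum(scores.values()) / len(DIMENSIONS)
    risks = sorted(d for d, s in scores.items() if s < threshold)
    return round(overall, 2), risks

overall, risks = review_summary({
    "completeness": 8, "clarity": 5, "consistency": 9,
    "testability": 7, "feasibility": 8, "logic": 9,
})
# clarity (5) falls below the threshold and is flagged for improvement
```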
Full lifecycle management for test cases:
- Multi-level modules: supports 5-level module hierarchies
- Case review: pending, approved, optimized, or invalid status flows
- Test suites: flexible case combinations with parallel execution settings
- Batch operations: bulk create, edit, move, and delete
Low-code UI automation to reduce barriers:
- Multiple locator strategies: CSS, XPath, Text, Role, Label, and more
- Visual orchestration: drag-and-drop steps with conditions, loops, and assertions
- Environment management: multi-browser (Chromium/Firefox/WebKit) and environment switching
- Execution records: screenshots, videos, Trace, and logs
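Mapping a low-code step's locator strategy to a Playwright-style selector string could look like this; `css=`, `xpath=`, and `text=` are real Playwright selector engines, while the step/dispatcher shape itself is hypothetical:

```python
# Sketch: translate a (strategy, value) pair from a low-code step into a
# Playwright-style selector string. The dispatcher itself is hypothetical.
def to_selector(strategy: str, value: str) -> str:
    prefixes = {"css": "css", "xpath": "xpath", "text": "text"}
    if strategy not in prefixes:
        raise ValueError(f"unsupported strategy: {strategy}")
    return f"{prefixes[strategy]}={value}"

sel = to_selector("xpath", "//button[@id='submit']")
```

Role and Label strategies would more naturally map to Playwright's `get_by_role` / `get_by_label` locator calls than to selector strings.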
Integrated mobile-mcp for mobile automation:
- Multi-platform support: Android and iOS
- Device management: connection status checks and device info
- App operations: install, uninstall, launch, and stop
- Element interactions: tap, swipe, input, and gestures
- Screen operations: screenshots, recordings, brightness/volume control
- System operations: notification shade, permissions, and network switching
Extensible agent skill management framework:
- Multiple import sources: ZIP upload and Git repo import
- SKILL.md spec: standardized skill descriptors
- Security isolation: isolated storage to prevent path traversal
- Exclusive resources: join the tech group to access fine-tuned Skills and MCP toolkits
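The path-traversal guard mentioned above typically resolves every skill path and rejects anything escaping the skills root. A minimal sketch (the directory layout is hypothetical, and WHartTest's actual isolation logic may differ):

```python
import os

def safe_skill_path(skills_root: str, relative: str) -> str:
    """Resolve a skill file path, rejecting anything that escapes the root.
    Illustrative only; the platform's actual isolation logic may differ."""
    root = os.path.realpath(skills_root)
    target = os.path.realpath(os.path.join(root, relative))
    if os.path.commonpath([root, target]) != root:
        raise ValueError(f"path traversal blocked: {relative}")
    return target

ok = safe_skill_path("/opt/skills", "my-skill/SKILL.md")
# safe_skill_path("/opt/skills", "../etc/passwd") would raise ValueError
```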
Comprehensive execution analytics:
- Real-time monitoring: live progress and status updates
- Multi-dimensional stats: pass rate, duration, and failure analysis
- Historical comparison: trend analysis across runs
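The core statistics reduce to a fold over per-case results. The record fields below are assumptions, not WHartTest's schema:

```python
# Illustrative run statistics: pass rate, total duration, failure grouping
# (record fields are assumptions, not WHartTest's schema).
def run_stats(results: list[dict]) -> dict:
    total = len(results)
    passed = sum(1 for r in results if r["status"] == "pass")
    failures: dict[str, int] = {}
    for r in results:
        if r["status"] == "fail":
            reason = r.get("reason", "unknown")
            failures[reason] = failures.get(reason, 0) + 1
    return {
        "pass_rate": round(passed / total, 3) if total else 0.0,
        "duration_s": round(sum(r["duration_s"] for r in results), 1),
        "failures": failures,
    }

stats = run_stats([
    {"status": "pass", "duration_s": 1.2},
    {"status": "fail", "duration_s": 0.8, "reason": "timeout"},
    {"status": "pass", "duration_s": 2.0},
    {"status": "fail", "duration_s": 0.5, "reason": "timeout"},
])
```

Trend analysis across runs is then a comparison of these summaries over time.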
For full documentation, visit: https://mgdaaslab.github.io/WHartTest/
```bash
# 1. Clone the repo
git clone https://github.com/MGdaasLab/WHartTest.git
cd WHartTest

# 2. Prepare config (use defaults with auto-generated API Key)
cp .env.example .env

# 3. One-command start (choose one of the two)
# Option A: use the deployment script (recommended, auto-selects registry mirrors)
./run_compose.sh

# Option B: use docker-compose directly
docker-compose up -d

# 4. Open the system
# http://localhost:8913 (admin/admin123456)
```

That's it! The system automatically creates a default API Key and enables MCP services out of the box.
If you use the built-in deployment script, it now asks you to choose between remote image pull and local image build at startup:
```bash
./run_compose.sh
```

The script now:
- Prompts for a deployment mode: `remote` (pull prebuilt images) or `local` (build locally)
- In `remote` mode, auto-tests built-in registries and lets you pick the fastest (usually just choose `1`)
- `remote` chooses by registry type: Docker Hub uses official / docker.1panel.live / docker.1ms.run / docker.xuanyuan.me / docker.m.daocloud.io; GHCR uses official / ghcr.1ms.run / ghcr.nju.edu.cn / ghcr.m.daocloud.io; MCR uses official / mcr.azure.cn / mcr.m.daocloud.io
- `local` mode auto-detects faster mirrors for APT / PyPI / npm / Hugging Face
- Python dependencies now support automatic fallback: the fastest PyPI mirror is preferred, and other candidates are tried if a package download times out
- `local` candidates include official, Tsinghua, USTC, Alibaba Cloud, Tencent Cloud, Huawei Cloud, BFSU, Shanghai Jiao Tong, npmmirror, hf-mirror, and more
- Supports adding your own mirror candidates via environment variables
- Local builds now use Docker cache by default (no longer always `--no-cache`)
Common examples:

```bash
# Interactively choose deployment mode
./run_compose.sh

# Use remote prebuilt images
./run_compose.sh remote

# Build locally and auto-select faster mirrors
./run_compose.sh local

# Force native official sources when building locally
DOCKER_SOURCE_PROFILE=native ./run_compose.sh local

# Force mirror-only selection when building locally
DOCKER_SOURCE_PROFILE=mirror ./run_compose.sh local

# Add custom PyPI candidates (wrap with quotes)
DOCKER_PIP_CANDIDATES_EXTRA="corp|https://pypi.example.com/simple|https://pypi.example.com/simple/pip/" ./run_compose.sh local

# Disable cache only for full local rebuilds
DOCKER_BUILD_NO_CACHE=1 ./run_compose.sh local
```
⚠️ Production note: Log in to the admin panel, delete the default API Key, and create a new secure key. See: Quick Start Guide
Detailed deployment docs:
- Quick Start Guide - recommended for new users
- GitHub Auto-Build Deployment Guide
- Full deployment docs
- Fork the project
- Create a feature branch
- Commit your changes
- Open a Pull Request
For questions or suggestions:
- Open an Issue
- Use the project Discussions
- When adding WeChat, mention `github` so we can add you to the group chat.
- Join the group to get the latest updates and Skills.
QQ groups:
- 8xxxxxxxx0 (full)
- 1017708746
Because the Skills module has high system execution privileges, please take the following security precautions:
- Deployment recommendation: Only deploy in an intranet or trusted private network.
- Access control: Do not expose the service to the public Internet or grant access to unauthenticated or untrusted users.
- Disclaimer: This project (WHartTest) is for learning and research purposes only. Users are responsible for all security risks and consequences caused by unsafe deployment (such as public exposure or missing authentication). The WHartTest team is not liable for any security incidents, including data leaks or server compromise, caused by improper configuration.
WHartTest - AI-powered test case generation that makes testing smarter and development more efficient!
















