A comprehensive Claude Desktop replacement with enhanced functionality, multi-provider LLM support, and advanced MCP server management.
Enterprise-grade MCP client that bypasses Claude Desktop restrictions while providing superior functionality:
- Multi-Provider LLM Support: Gemini, Ollama, OpenAI, and Anthropic with dynamic model selection
- Advanced MCP Server Management: Visual configuration, templates, real-time monitoring
- Claude Desktop-inspired UI: Rich HTML rendering, centralized settings, intuitive navigation
- Real-time Streaming: Live tool execution with progress monitoring and error recovery
- Enterprise Ready: On-premises deployment, robust error handling, production architecture
- Template-based Setup: Quick MCP server configuration with popular service templates
SyncPilot (Custom MCP Client)
```
┌──────────────────────────────────────────────────────────────┐
│                    Frontend (Next.js 14)                     │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐           │
│  │   Chat UI   │  │ Server Mgr  │  │  Settings   │           │
│  │ • HTML      │  │ • Add/Remove│  │ • Providers │           │
│  │ • Markdown  │  │ • Connect   │  │ • API Keys  │           │
│  │ • Tool Viz  │  │ • Monitor   │  │ • Models    │           │
│  └─────────────┘  └─────────────┘  └─────────────┘           │
└───────────────────────────┬──────────────────────────────────┘
                            │ HTTP/SSE
┌───────────────────────────┴──────────────────────────────────┐
│                   Backend (Python FastAPI)                   │
│  ┌────────────────────────────────────────────────────────┐  │
│  │                   LLM Provider Layer                   │  │
│  │  ┌────────┐ ┌────────┐ ┌────────┐ ┌───────────┐        │  │
│  │  │ Gemini │ │ Ollama │ │ OpenAI │ │ Anthropic │        │  │
│  │  └────────┘ └────────┘ └────────┘ └───────────┘        │  │
│  └────────────────────────────────────────────────────────┘  │
│  ┌────────────────────────────────────────────────────────┐  │
│  │                      MCP Manager                       │  │
│  │  • Multi-server connections                            │  │
│  │  • Tool discovery & caching                            │  │
│  │  • Parallel tool execution                             │  │
│  │  • Error handling & recovery                           │  │
│  └────────────────────────────────────────────────────────┘  │
└───────────────────────────┬──────────────────────────────────┘
                            │ MCP Protocol (stdio/HTTP/WS)
┌───────────────────────────┴──────────────────────────────────┐
│                         MCP Servers                          │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐           │
│  │ Filesystem  │  │  Database   │  │   Custom    │           │
│  │   Server    │  │   Server    │  │  Servers    │           │
│  └─────────────┘  └─────────────┘  └─────────────┘           │
└──────────────────────────────────────────────────────────────┘
```
SyncPilot includes unified start scripts that handle everything automatically:
Linux/macOS:
```bash
./start.sh
```
Cross-platform (Python):
```bash
python3 start.py
```
Windows:
```bash
start.bat
```
Using npm:
```bash
npm start
```
Each script will:
- Check dependencies (Python, Node.js, npm)
- Create a Python virtual environment
- Install backend dependencies
- Install frontend dependencies
- Create a .env file from the template
- Start both backend and frontend
- Monitor and restart processes if they crash (sketched below)
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Docs: http://localhost:8000/api/docs
If you prefer manual setup:
Backend:
```bash
cd backend
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env  # Edit with your API keys
uvicorn app.main:app --reload --port 8000
```
Frontend (in a new terminal):
```bash
cd frontend
npm install
npm run dev
```
Configure your preferred AI providers in the Settings > LLM Providers panel:
- Dynamic Model Discovery: Auto-fetch available models from each provider
- Real-time Validation: Test API keys and connections
- Smart Defaults: Fallback models when API calls fail
- Temperature & Token Control: Fine-tune model behavior
Example provider configuration:
```json
{
  "gemini": {
    "enabled": true,
    "api_key": "your-google-api-key",
    "default_model": "gemini-2.5-pro",
    "temperature": 0.7,
    "max_tokens": 4096
  },
  "ollama": {
    "enabled": true,
    "base_url": "http://localhost:11434",
    "default_model": "llama3.1:latest",
    "temperature": 0.7
  }
}
```
Add MCP servers via Settings > MCP Servers with one-click templates:
- File System: Local file access with directory restrictions
- GitHub: Repository integration with personal access tokens
- PostgreSQL: Database connectivity with connection strings
- Custom: Manual configuration for specialized servers
Example server configuration:
```json
{
  "mcpServers": {
    "ptp-operator": {
      "command": "node",
      "args": ["/path/to/your/mcp-server/index.js"],
      "env": {
        "KUBECONFIG": "/home/user/.kube/config",
        "PTP_AGENT_URL": "https://your-ptp-agent.example.com",
        "NODE_TLS_REJECT_UNAUTHORIZED": "0"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
      "env": {},
      "auto_connect": true,
      "timeout": 30
    }
  }
}
```
Supported transports:
- STDIO: Local process communication (most common; see the sketch after this list)
- HTTP SSE: Remote server via Server-Sent Events
- WebSocket: Real-time bidirectional communication
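For illustration, connecting over the stdio transport from Python with the official MCP SDK looks roughly like this (a minimal sketch, assuming the `mcp` package is installed; the filesystem server command mirrors the example config above):

```python
# Minimal sketch: connect to a stdio MCP server and list its tools.
# Assumes the official `mcp` Python SDK (pip install mcp) and Node.js/npx.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Same filesystem server as in the example config above.
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # tool discovery
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```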
- Gemini AI: Latest Google models (gemini-2.5-pro, gemini-1.5-flash) with auto-discovery
- Ollama: Local LLM support for privacy/offline use with real-time model listing
- OpenAI: GPT-4o and other OpenAI models with dynamic model fetching
- Anthropic: Claude 3.5 Sonnet and other Claude models with API validation
- Dynamic Model Discovery: Auto-fetch and update available models for each provider
- Smart Fallbacks: Graceful degradation when APIs are unavailable (sketched below)
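To make the fallback behavior concrete, here is a hypothetical sketch of dynamic discovery with graceful degradation for the Ollama case (illustrative only, not SyncPilot's actual code; Ollama's /api/tags endpoint lists locally installed models):

```python
# Hypothetical provider sketch: dynamic model discovery with a smart fallback.
import httpx

class OllamaProvider:
    FALLBACK_MODELS = ["llama3.1:latest"]  # defaults used when discovery fails

    def __init__(self, base_url: str = "http://localhost:11434"):
        self.base_url = base_url

    async def list_models(self) -> list[str]:
        try:
            async with httpx.AsyncClient(timeout=5.0) as client:
                # Ollama's /api/tags returns the locally installed models.
                resp = await client.get(f"{self.base_url}/api/tags")
                resp.raise_for_status()
                return [m["name"] for m in resp.json()["models"]]
        except (httpx.HTTPError, KeyError):
            # Graceful degradation: API unavailable, fall back to defaults.
            return list(self.FALLBACK_MODELS)
```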
- Template-based Setup: One-click configuration for popular services
- Visual Configuration: Intuitive forms with real-time validation
- Multiple Transport Protocols: stdio, HTTP SSE, WebSocket with auto-detection
- Real-time Monitoring: Connection health, tool discovery, error tracking
- Edit & Update: Modify server configurations without restart
- Auto-discovery: Tools and resources automatically detected and cached
- Parallel Execution: Multiple tool calls with progress monitoring (see the sketch after this list)
- Error Recovery: Automatic reconnection and circuit breaker patterns
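As a rough illustration of the last two items (a sketch only: `session` stands for an initialized `ClientSession` from the MCP Python SDK, and the batching strategy is an assumption, not SyncPilot's exact implementation):

```python
# Sketch: run several MCP tool calls concurrently, capturing per-call errors
# so one failing tool does not abort the whole batch.
import asyncio

async def call_tools_parallel(session, calls: list[tuple[str, dict]]):
    async def one(name: str, args: dict):
        try:
            return name, await session.call_tool(name, args), None
        except Exception as exc:
            # Recovered: the caller decides whether to retry or reconnect.
            return name, None, exc

    return await asyncio.gather(*(one(name, args) for name, args in calls))
```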
- Centralized Settings: All configuration in one intuitive interface
- Rich HTML Rendering: Full Claude Desktop-style message rendering
- Real-time Updates: Live connection status and tool execution progress
- Split-screen Layout: Chat and management side-by-side
- Template Dropdowns: Quick server setup with popular configurations
- Progress Visualization: Tool execution with detailed status updates
- No Claude Desktop dependency: Deploy anywhere
- Corporate compliance: Keep data on-premises with Ollama
- Multi-provider flexibility: Not locked to any single AI provider
- Enhanced monitoring: Full visibility into tool execution
- Source code control: Customize and extend as needed
- Production ready: Async architecture, error handling, type safety
```
syncpilot/
├── backend/                  # Python FastAPI backend
│   ├── app/
│   │   ├── core/             # MCP manager, config
│   │   ├── providers/        # LLM provider implementations
│   │   ├── api/              # REST API endpoints
│   │   └── models/           # Pydantic data models
│   └── requirements.txt
├── frontend/                 # Next.js frontend
│   ├── src/
│   │   ├── app/              # Next.js app router
│   │   ├── components/       # React components
│   │   └── lib/              # Utilities and stores
│   └── package.json
├── README.md
└── IMPLEMENTATION_SUMMARY.md
```
- Setup Guide: See above Quick Start section
- Detailed Implementation: See IMPLEMENTATION_SUMMARY.md
- API Documentation: Available at http://localhost:8000/api/docs when running
SyncPilot is designed for enterprise deployment:
- Docker: Container-ready architecture
- Cloud: Deploy on AWS, GCP, Azure
- On-premises: Full local deployment with Ollama
- Kubernetes: Scalable container orchestration
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
See LICENSE file for details.
SyncPilot - Because your AI workflow shouldn't be limited by corporate restrictions.