SyncPilot - Enterprise MCP Client

A comprehensive Claude Desktop replacement with enhanced functionality, multi-provider LLM support, and advanced MCP server management.

🎯 Why SyncPilot?

Enterprise-grade MCP client that bypasses Claude Desktop restrictions while providing superior functionality:

  • 🔌 Multi-Provider LLM Support: Gemini, Ollama, OpenAI, Anthropic with dynamic model selection
  • 🛠️ Advanced MCP Server Management: Visual configuration, templates, real-time monitoring
  • 🖥️ Claude Desktop-inspired UI: Rich HTML rendering, centralized settings, intuitive navigation
  • ⚡ Real-time Streaming: Live tool execution with progress monitoring and error recovery
  • 🏢 Enterprise Ready: On-premises deployment, robust error handling, production architecture
  • 🎨 Template-based Setup: Quick MCP server configuration with popular service templates

🏗️ Architecture

SyncPilot (Custom MCP Client)
┌──────────────────────────────────────────────────────────────┐
│                    Frontend (Next.js 14)                     │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐             │
│  │ Chat UI     │ │ Server Mgr  │ │ Settings    │             │
│  │ • HTML      │ │ • Add/Remove│ │ • Providers │             │
│  │ • Markdown  │ │ • Connect   │ │ • API Keys  │             │
│  │ • Tool Viz  │ │ • Monitor   │ │ • Models    │             │
│  └─────────────┘ └─────────────┘ └─────────────┘             │
└───────────────────────┬──────────────────────────────────────┘
                        │ HTTP/SSE
┌───────────────────────┴──────────────────────────────────────┐
│                  Backend (Python FastAPI)                    │
│  ┌─────────────────────────────────────────────────────┐     │
│  │              LLM Provider Layer                     │     │
│  │  ┌────────┐ ┌─────────┐ ┌────────┐ ┌─────────┐      │     │
│  │  │ Gemini │ │ Ollama  │ │ OpenAI │ │Anthropic│      │     │
│  │  └────────┘ └─────────┘ └────────┘ └─────────┘      │     │
│  └─────────────────────────────────────────────────────┘     │
│  ┌─────────────────────────────────────────────────────┐     │
│  │                MCP Manager                          │     │
│  │  • Multi-server connections                         │     │
│  │  • Tool discovery & caching                         │     │
│  │  • Parallel tool execution                          │     │
│  │  • Error handling & recovery                        │     │
│  └─────────────────────────────────────────────────────┘     │
└───────────────────────┬──────────────────────────────────────┘
                        │ MCP Protocol (stdio/HTTP/WS)
┌───────────────────────┴──────────────────────────────────────┐
│                    MCP Servers                               │
│  ┌─────────────┐ ┌─────────────┐ ┌─────────────┐             │
│  │ Filesystem  │ │ Database    │ │ Custom      │             │
│  │ Server      │ │ Server      │ │ Servers     │             │
│  └─────────────┘ └─────────────┘ └─────────────┘             │
└──────────────────────────────────────────────────────────────┘

🚀 Quick Start

🎯 One-Command Start (Recommended)

SyncPilot includes unified start scripts that handle everything automatically:

Linux/macOS:

./start.sh

Cross-platform (Python):

python3 start.py

Windows:

start.bat

Using npm:

npm start

📋 What the start scripts do:

  • ✅ Check dependencies (Python, Node.js, npm)
  • ✅ Create Python virtual environment
  • ✅ Install backend dependencies
  • ✅ Install frontend dependencies
  • ✅ Create .env file from template
  • ✅ Start both backend and frontend
  • ✅ Monitor and restart if processes crash
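
As a rough outline, a unified start script does something like the following (a hypothetical sketch of the steps listed above, not the repository's actual start.py):

# start_sketch.py -- hypothetical outline of the steps listed above,
# not the repository's actual start.py
import shutil
import subprocess
import sys


def require(tool: str) -> None:
    """Abort early if a required dependency (python3, node, npm) is missing."""
    if shutil.which(tool) is None:
        sys.exit(f"Missing dependency: {tool}")


def main() -> None:
    for tool in ("python3", "node", "npm"):
        require(tool)

    # Install dependencies (a real script would also create the virtual
    # environment and copy .env.example to .env on first run).
    subprocess.run(["pip", "install", "-r", "requirements.txt"], cwd="backend", check=True)
    subprocess.run(["npm", "install"], cwd="frontend", check=True)

    # Start backend and frontend side by side; a real script would monitor
    # both children and restart them if they crash.
    backend = subprocess.Popen(
        ["uvicorn", "app.main:app", "--reload", "--port", "8000"], cwd="backend"
    )
    frontend = subprocess.Popen(["npm", "run", "dev"], cwd="frontend")
    backend.wait()
    frontend.wait()


if __name__ == "__main__":
    main()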

🌐 Access SyncPilot

Once everything is running, the backend API listens on http://localhost:8000 and the frontend is served by the Next.js dev server (typically http://localhost:3000).

⚙️ Manual Setup (Advanced)

If you prefer manual setup:

Backend:

cd backend
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
cp .env.example .env  # Edit with your API keys
uvicorn app.main:app --reload --port 8000

Frontend (in new terminal):

cd frontend
npm install
npm run dev

🔧 Configuration

LLM Providers with Dynamic Model Selection

Configure your preferred AI providers in the Settings > LLM Providers panel:

  • ✅ Dynamic Model Discovery: Auto-fetch available models from each provider
  • ✅ Real-time Validation: Test API keys and connections
  • ✅ Smart Defaults: Fallback models when API calls fail
  • ✅ Temperature & Token Control: Fine-tune model behavior

{
  "gemini": {
    "enabled": true,
    "api_key": "your-google-api-key",
    "default_model": "gemini-2.5-pro",
    "temperature": 0.7,
    "max_tokens": 4096
  },
  "ollama": {
    "enabled": true,
    "base_url": "http://localhost:11434",
    "default_model": "llama3.1:latest",
    "temperature": 0.7
  }
}
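
For a rough sense of how such a configuration might be validated on the FastAPI side, here is a minimal Pydantic sketch. The field names mirror the JSON above, but the class names and the providers.json filename are hypothetical, not SyncPilot's actual code:

# provider_config_sketch.py -- hypothetical sketch, not SyncPilot's actual models
import json
from typing import Optional

from pydantic import BaseModel, Field


class ProviderSettings(BaseModel):
    """Per-provider settings mirroring the JSON block above."""
    enabled: bool = False
    api_key: Optional[str] = None       # cloud providers (Gemini, OpenAI, Anthropic)
    base_url: Optional[str] = None      # local providers such as Ollama
    default_model: Optional[str] = None
    temperature: float = Field(0.7, ge=0.0, le=2.0)
    max_tokens: int = Field(4096, gt=0)


class ProviderConfig(BaseModel):
    """Top-level mapping of provider name -> settings."""
    gemini: ProviderSettings = ProviderSettings()
    ollama: ProviderSettings = ProviderSettings()
    openai: ProviderSettings = ProviderSettings()
    anthropic: ProviderSettings = ProviderSettings()


if __name__ == "__main__":
    with open("providers.json") as f:
        config = ProviderConfig(**json.load(f))   # validation errors surface here
    print(config.gemini.default_model)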

MCP Servers with Template Support

Add MCP servers via Settings > MCP Servers with one-click templates:

📁 Popular Templates:

  • File System: Local file access with directory restrictions
  • GitHub: Repository integration with personal access tokens
  • PostgreSQL: Database connectivity with connection strings
  • Custom: Manual configuration for specialized servers

Example Configuration:

{
  "mcpServers": {
    "ptp-operator": {
      "command": "node",
      "args": ["/path/to/your/mcp-server/index.js"],
      "env": {
        "KUBECONFIG": "/home/user/.kube/config",
        "PTP_AGENT_URL": "https://your-ptp-agent.example.com",
        "NODE_TLS_REJECT_UNAUTHORIZED": "0"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
      "env": {},
      "auto_connect": true,
      "timeout": 30
    }
  }
}

🔧 Transport Types:

  • STDIO: Local process communication (most common)
  • HTTP SSE: Remote server via Server-Sent Events
  • WebSocket: Real-time bidirectional communication
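
For reference, a stdio connection like the "filesystem" entry above looks roughly like this with the official MCP Python SDK (a minimal sketch assuming the SDK is installed via pip install mcp; this is not SyncPilot's internal MCP Manager):

# stdio_client_sketch.py -- minimal sketch using the official MCP Python SDK
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Matches the "filesystem" server from the example configuration above.
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/allowed/path"],
        env={},
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # tool discovery
            print([tool.name for tool in tools.tools])


if __name__ == "__main__":
    asyncio.run(main())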

✨ Key Features

🔌 Multi-Provider Support with Smart Model Selection

  • Gemini AI: Latest Google models (gemini-2.5-pro, gemini-1.5-flash) with auto-discovery
  • Ollama: Local LLM support for privacy/offline use with real-time model listing
  • OpenAI: GPT-4o and other OpenAI models with dynamic model fetching
  • Anthropic: Claude 3.5 Sonnet and other Claude models with API validation
  • Dynamic Model Discovery: Auto-fetch and update available models for each provider
  • Smart Fallbacks: Graceful degradation when APIs are unavailable
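
The last two bullets describe a discover-then-fall-back pattern. Here is a hypothetical illustration using Ollama's public /api/tags endpoint (the helper name and fallback list are made up; this is not SyncPilot's actual provider layer):

# model_discovery_sketch.py -- hypothetical "discover, then fall back" helper
import httpx

FALLBACK_MODELS = ["llama3.1:latest"]   # smart default if discovery fails


def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Fetch locally available Ollama models; fall back to defaults on any error."""
    try:
        resp = httpx.get(f"{base_url}/api/tags", timeout=5.0)
        resp.raise_for_status()
        return [model["name"] for model in resp.json().get("models", [])]
    except (httpx.HTTPError, KeyError, ValueError):
        return FALLBACK_MODELS


if __name__ == "__main__":
    print(list_ollama_models())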

🛠️ Enterprise MCP Management

  • Template-based Setup: One-click configuration for popular services
  • Visual Configuration: Intuitive forms with real-time validation
  • Multiple Transport Protocols: stdio, HTTP SSE, WebSocket with auto-detection
  • Real-time Monitoring: Connection health, tool discovery, error tracking
  • Edit & Update: Modify server configurations without restart
  • Auto-discovery: Tools and resources automatically detected and cached
  • Parallel Execution: Multiple tool calls with progress monitoring
  • Error Recovery: Automatic reconnection and circuit breaker patterns
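
Parallel execution with error isolation can be sketched in a few lines of asyncio (a hypothetical helper assuming one already-initialized MCP ClientSession; the real MCP Manager described above is more involved):

# parallel_tools_sketch.py -- hypothetical parallel tool execution helper
import asyncio

from mcp import ClientSession


async def call_tools_in_parallel(session: ClientSession, calls: list[tuple[str, dict]]) -> list:
    """Run several tool calls concurrently; exceptions are returned rather than
    raised, so one failing tool does not abort the whole batch."""
    tasks = [session.call_tool(name, arguments=args) for name, args in calls]
    return await asyncio.gather(*tasks, return_exceptions=True)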

🖥️ Superior UI Experience

  • Centralized Settings: All configuration in one intuitive interface
  • Rich HTML Rendering: Full Claude Desktop-style message rendering
  • Real-time Updates: Live connection status and tool execution progress
  • Split-screen Layout: Chat and management side-by-side
  • Template Dropdowns: Quick server setup with popular configurations
  • Progress Visualization: Tool execution with detailed status updates

🎯 Enterprise Benefits

  • ✅ No Claude Desktop dependency: Deploy anywhere
  • ✅ Corporate compliance: Keep data on-premises with Ollama
  • ✅ Multi-provider flexibility: Not locked to any single AI provider
  • ✅ Enhanced monitoring: Full visibility into tool execution
  • ✅ Source code control: Customize and extend as needed
  • ✅ Production ready: Async architecture, error handling, type safety

📁 Project Structure

syncpilot/
├── backend/                # Python FastAPI backend
│   ├── app/
│   │   ├── core/           # MCP manager, config
│   │   ├── providers/      # LLM provider implementations
│   │   ├── api/            # REST API endpoints
│   │   └── models/         # Pydantic data models
│   └── requirements.txt
├── frontend/               # Next.js frontend
│   ├── src/
│   │   ├── app/            # Next.js app router
│   │   ├── components/     # React components
│   │   └── lib/            # Utilities and stores
│   └── package.json
├── README.md
└── IMPLEMENTATION_SUMMARY.md

📚 Documentation

🚀 Deployment

SyncPilot is designed for enterprise deployment:

  • Docker: Container-ready architecture
  • Cloud: Deploy on AWS, GCP, Azure
  • On-premises: Full local deployment with Ollama
  • Kubernetes: Scalable container orchestration

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

See LICENSE file for details.


SyncPilot - Because your AI workflow shouldn't be limited by corporate restrictions. 🚀
