An MCP (Model Context Protocol) server that analyzes git repositories to extract application installation requirements and validates them against OpenShift/Kubernetes clusters.
- Git Repository Support: Fetches README and deployment files from GitHub/GitLab repositories
- YAML Parsing: Extracts resource requirements from Helm charts, Kubernetes manifests, ConfigMaps, and CRDs
- Smart Extraction: Identifies CPU, memory, GPU, storage requirements, and node selectors
- CRD Detection: Extracts Custom Resource Definition requirements from deployment manifests
- Resource Discovery: Scans connected OpenShift/Kubernetes clusters for available resources
- Node Analysis: Detects CPU, memory, GPU capacity and allocatable resources
- Current Usage Tracking: Monitors real-time resource consumption (via metrics-server)
- GPU Model Detection: Identifies specific GPU models (A100, H100, MI250, etc.) from node labels
- GPU Memory Detection: Extracts GPU VRAM from node labels (e.g., 24GB for A10G, 80GB for A100)
- Targeted Scanning: Optimized scanning that only fetches resources needed for validation (30-50% faster)
- Storage Classes: Lists available storage classes and default configurations
- Operator Detection: Scans for installed operators (OpenShift OLM)
- CRD Inventory: Lists all Custom Resource Definitions in the cluster
- Resource Validation: Compares requirements against cluster capacity
- GPU Class Validation: Validates GPU models/classes, not just quantity
- Datacenter-class requirements (A100, H100, H200, MI250, etc.)
- Specific model matching (e.g., "A100/L4", "H100 or newer")
- Rejects consumer GPUs (RTX, GTX, T4) for datacenter requirements
- GPU Memory Validation: Validates GPU VRAM requirements (critical for LLM workloads)
- Compares required vs available GPU memory (e.g., 24Gi vs 80GB A100)
- Clear error messages when GPU memory is insufficient
- CRD Conflict Detection: Checks for CRD name conflicts, API group mismatches, and version compatibility
- Available Resource Calculation: Uses current usage to determine actually available resources
- Confidence Scoring: Provides high/medium/low confidence based on available data
- Claude Code: Works seamlessly with Claude CLI
- Cursor: Integrates as MCP tool in Cursor IDE
- Multi-Platform: Supports both GitHub and GitLab repositories
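The GPU class validation described above can be sketched roughly as follows. This is an illustrative sketch only — the keyword lists and the `is_datacenter_gpu` helper are assumptions, not the project's actual code:

```python
# Sketch: distinguishing datacenter-class from consumer GPUs based on the
# model string found in node labels. Keyword lists are illustrative, not
# the project's real implementation.

DATACENTER_GPUS = ("A100", "H100", "H200", "L4", "L40", "MI250", "MI300")
CONSUMER_GPUS = ("RTX", "GTX", "T4", "QUADRO", "TITAN")

def is_datacenter_gpu(model: str) -> bool:
    """Return True if the node's GPU model looks datacenter-class."""
    name = model.upper()
    # Reject consumer cards first, even if they superficially match
    if any(kw in name for kw in CONSUMER_GPUS):
        return False
    return any(kw in name for kw in DATACENTER_GPUS)

print(is_datacenter_gpu("NVIDIA-A100-SXM4-80GB"))  # True
print(is_datacenter_gpu("Tesla-T4"))               # False
```

Substring matching against the node label's product string keeps the check tolerant of vendor-specific naming such as `NVIDIA-A100-SXM4-40GB`.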
- Clone this repository:

```bash
git clone https://github.com/Hadar301/mcp-openshift-installer-checker.git
cd mcp-openshift-installer-checker
```

- Install dependencies using `uv`:

```bash
uv sync
```

- (Optional) Set up GitHub/GitLab tokens to avoid rate limits:

```bash
cp .env.example .env
# Edit .env and add your tokens
```

- (Optional) Log in to your OpenShift/Kubernetes cluster for scanning features:

```bash
# For OpenShift
oc login <cluster-url>
# For Kubernetes
kubectl config use-context <context-name>
```

Prerequisites:

- Python 3.10+
- `uv` package manager: `pip install uv`
- `oc` (OpenShift CLI) or `kubectl` (Kubernetes CLI)
- metrics-server installed in the cluster (for current usage tracking)
- Active cluster connection (`oc login` or `kubectl config use-context`)
First, clone the repository:

```bash
git clone https://github.com/Hadar301/mcp-openshift-installer-checker.git
cd mcp-openshift-installer-checker
uv sync
```

Then edit `~/.claude.json` and add the MCP server configuration to the project where you want to use it. For example, to configure it for your home directory (`/Users/yourusername`):

```json
{
  "projects": {
    "/Users/yourusername": {
      "mcpServers": {
        "openshift-installer-checker": {
          "type": "stdio",
          "command": "uv",
          "args": [
            "--directory",
            "/path/to/mcp-openshift-installer-checker",
            "run",
            "python",
            "main.py"
          ],
          "env": {
            "GITHUB_TOKEN": "<your-github-token>"
          }
        }
      }
    }
  }
}
```

Note:

- Replace `/Users/yourusername` with your actual home directory path
- Replace `/path/to/mcp-openshift-installer-checker` with the actual path where you cloned the repository
- Replace `<your-github-token>` with your GitHub personal access token

Then use Claude Code:

```bash
claude chat
```

Ask Claude:
- "Can I install https://github.com/nvidia/NeMo-Microservices on my cluster?"
- "Check if https://github.com/kubeflow/kubeflow will fit on my cluster"
First, clone the repository:

```bash
git clone https://github.com/Hadar301/mcp-openshift-installer-checker.git
cd mcp-openshift-installer-checker
uv sync
```

Then edit `~/.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "openshift-installer-checker": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/mcp-openshift-installer-checker",
        "run",
        "python",
        "main.py"
      ],
      "cwd": "/path/to/mcp-openshift-installer-checker",
      "env": {
        "GITHUB_TOKEN": "<your-github-token>"
      }
    }
  }
}
```

Note: Replace `/path/to/mcp-openshift-installer-checker` with the actual path where you cloned the repository, and `<your-github-token>` with your GitHub personal access token.
Then ask in Cursor chat: "Analyze installation requirements for https://github.com/your/repo and check if it can be installed"
- URL Parsing: Extracts platform (GitHub/GitLab), owner, and repository name
- README Fetching: Downloads README.md via GitHub/GitLab API
- Deployment File Discovery: Searches common paths (`helm/`, `deploy/`, `k8s/`, `manifests/`, etc.)
- YAML Parsing: Extracts resource specifications from Kubernetes manifests
- CRD Extraction: Identifies Custom Resource Definitions to be installed
- Requirement Aggregation: Combines requirements from multiple sources
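The YAML parsing step above can be sketched with PyYAML. The real extractor also handles Helm charts, ConfigMaps, and CRDs; this shows only the core idea of pulling `resources.requests` out of a manifest (the manifest below is an illustrative example):

```python
# Sketch: extracting resource requests from a Kubernetes manifest.
import yaml

MANIFEST = """
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
            nvidia.com/gpu: "1"
"""

doc = yaml.safe_load(MANIFEST)
containers = doc["spec"]["template"]["spec"]["containers"]
for c in containers:
    requests = c.get("resources", {}).get("requests", {})
    print(c["name"], requests)
```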
- CLI Detection: Tries `oc` first (OpenShift), falls back to `kubectl`
- Targeted Scanning: Only fetches resources needed based on requirements (performance optimization)
- Node Scanning: Collects capacity, allocatable resources, GPU models, and GPU memory
- Usage Tracking: Fetches current resource consumption (requires metrics-server)
- Storage Classes: Lists available storage provisioners (only if storage required)
- Software Inventory: Scans for installed operators and CRDs (only if needed)
- Available Calculation: Computes free resources (allocatable - used)
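The "allocatable − used" calculation relies on parsing Kubernetes resource quantities. A minimal sketch, covering only the common suffixes (the project's real comparison utilities live in `src/requirements_extractor/utils/resource_comparisons.py`; these helper names are illustrative):

```python
# Sketch: Kubernetes quantity parsing and available-resource calculation.

BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def cpu_to_millicores(q: str) -> int:
    """'500m' -> 500, '2' -> 2000."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def mem_to_bytes(q: str) -> int:
    """'4Gi' -> 4294967296."""
    for suffix, mult in BINARY.items():
        if q.endswith(suffix):
            return int(float(q[: -len(suffix)]) * mult)
    return int(q)  # plain byte count

# available = allocatable - used (used comes from metrics-server)
allocatable_cpu = cpu_to_millicores("16")  # 16000 millicores
used_cpu = cpu_to_millicores("3500m")      # 3500 millicores
print(allocatable_cpu - used_cpu)          # → 12500
```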
- Resource Validation: Compares CPU, memory, GPU against cluster capacity
- GPU Model Validation: Validates GPU class/model requirements
- Datacenter-class: A100, H100, H200, L4, L40, MI250, MI300, etc.
- Consumer GPUs rejected: T4, RTX, GTX, Quadro, Titan
- GPU Memory Validation: Validates GPU VRAM requirements
- Compares required memory (e.g., 24Gi, 80GB) against available GPU memory
- Critical for LLM deployments (Llama-70B needs 80GB, DeepSeek-V3 needs 600GB+)
- Storage Validation: Checks for available storage classes
- CRD Conflict Detection: Identifies potential CRD conflicts
- Confidence Scoring: Assigns confidence level based on available data
Returns structured data for Claude/Cursor to analyze and present to user
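The CRD conflict check described above — same CRD name but mismatched API group or no shared version — could look roughly like this. The data shapes and the `crd_conflicts` function are illustrative assumptions, not the project's actual interface:

```python
# Sketch: flag conflicts between CRDs an app wants to install and CRDs
# already present in the cluster, by name, API group, and served versions.

def crd_conflicts(incoming: dict, existing: dict) -> list[str]:
    issues = []
    for name, crd in incoming.items():
        if name not in existing:
            continue  # brand-new CRD, no conflict
        cur = existing[name]
        if crd["group"] != cur["group"]:
            issues.append(f"{name}: API group mismatch "
                          f"({crd['group']} vs {cur['group']})")
        elif not set(crd["versions"]) & set(cur["versions"]):
            issues.append(f"{name}: no shared served version")
    return issues

existing = {"workflows.argoproj.io": {"group": "argoproj.io", "versions": ["v1alpha1"]}}
incoming = {"workflows.argoproj.io": {"group": "argoproj.io", "versions": ["v1beta1"]}}
print(crd_conflicts(incoming, existing))
# → ['workflows.argoproj.io: no shared served version']
```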
```
mcp-openshift-installer-checker/
├── main.py                 # MCP server entry point
├── .env.example            # Example environment variables
├── pyproject.toml          # Project dependencies (uv)
├── uv.lock                 # Dependency lock file
├── LICENSE                 # MIT license
├── README.md               # This file
├── src/
│   ├── __init__.py
│   ├── cluster_analyzer/
│   │   ├── scanner.py      # Cluster resource scanner (with targeted scanning)
│   │   └── __init__.py
│   ├── cluster_checker/
│   │   ├── feasibility.py  # Requirement validation (with GPU memory checks)
│   │   └── __init__.py
│   └── requirements_extractor/
│       ├── extractor.py    # Main orchestrator
│       ├── git_handler.py  # GitHub/GitLab API client
│       ├── __init__.py
│       ├── parser/
│       │   ├── yaml_parser.py  # YAML resource extraction
│       │   └── __init__.py
│       ├── models/
│       │   ├── requirements.py # Pydantic data models
│       │   └── __init__.py
│       └── utils/
│           ├── resource_comparisons.py # CPU/memory comparison utilities
│           └── __init__.py
└── test/
    ├── test_crd_detection.py        # CRD conflict detection tests
    ├── test_gpu_model_validation.py # GPU model validation tests
    ├── test_command_injection_protection.py # Security tests
    ├── test_extraction.py           # Requirement extraction tests
    ├── test_usage_tracking.py       # Resource usage tracking tests
    ├── failed_attempt.md            # Test documentation
    └── successful_attempt.md        # Test documentation
```
Once configured as an MCP server, you can ask Claude or Cursor:
- Basic Analysis:
  - "What are the requirements for https://github.com/prometheus/prometheus?"
  - "Analyze hardware needs for https://github.com/argoproj/argo-cd"
- Feasibility Checking (requires cluster connection):
  - "Can I install https://github.com/nvidia/NeMo-Microservices on my cluster?"
  - "Will https://github.com/kubeflow/kubeflow fit on my cluster?"
  - "Check if my cluster can handle https://github.com/ray-project/kuberay"
- GPU Validation:
  - "Does my cluster have the right GPUs for https://github.com/vllm-project/vllm?"
  - "Can I run this ML workload with my current GPU setup?"
- CRD Conflict Detection:
  - "Will installing this operator conflict with my existing CRDs?"
  - "What CRDs will be created by this application?"
Claude/Cursor will automatically:
- Call the `analyze_app_requirements` tool
- Scan the connected cluster (if available)
- Validate requirements against cluster capacity
- Check for CRD conflicts
- Present results in a clear, formatted output
- Active connection to an OpenShift/Kubernetes cluster
- `oc` (OpenShift CLI) or `kubectl` installed and in PATH
- User logged in with read permissions
If cluster scanning fails:
- Verify the CLI tool is installed: `oc version` or `kubectl version`
- Check the cluster connection: `oc whoami` or `kubectl cluster-info`
- The tool continues to work for repository analysis even without cluster access
If usage tracking fails:
- Install metrics-server: `kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml`
- Feasibility checks fall back to allocatable resources (still functional)
If GPU models show as empty:
- Check whether nodes have GPU labels: `kubectl get nodes -o json | jq '.items[].metadata.labels'`
- GPU device plugins (NVIDIA, AMD) add these labels automatically
- Manual labeling: `kubectl label node <node-name> nvidia.com/gpu.product=NVIDIA-A100-SXM4-40GB`
- Check the MCP config file path is correct
- Verify `uv` is installed: `uv --version`
- Try running the server manually: `uv run python main.py`
- Use the MCP Inspector to debug: `npx @modelcontextprotocol/inspector uv run python main.py`
Contributions welcome! This project focuses on building practical MCP servers for DevOps automation.
```bash
git clone https://github.com/Hadar301/mcp-openshift-installer-checker.git
cd mcp-openshift-installer-checker
uv sync

# Test CRD detection
PYTHONPATH=. uv run python test/test_crd_detection.py

# Test GPU validation
PYTHONPATH=. uv run python test/test_gpu_model_validation.py
```

Key directories:

- `src/cluster_analyzer/` - Cluster resource scanning with targeted optimization
- `src/cluster_checker/` - Feasibility validation with GPU memory checks
- `src/requirements_extractor/` - Repository analysis and requirement extraction
- `test/` - Test scripts
- `main.py` - MCP server entry point
MIT