Kubernetes-native AI automation platform for intelligent agentic sessions with multi-agent collaboration
Note: This project was formerly known as "vTeam". While the project has been rebranded to Ambient Code Platform, the name "vTeam" still appears in various technical artifacts for backward compatibility (see Legacy vTeam References below).
The Ambient Code Platform is an AI automation platform that combines Claude Code CLI with multi-agent collaboration capabilities. The platform enables teams to create and manage intelligent agentic sessions through a modern web interface.
- Intelligent Agentic Sessions: AI-powered automation for analysis, research, content creation, and development tasks
- Multi-Agent Workflows: Specialized AI agents model realistic software team dynamics
- Git Provider Support: Native integration with GitHub and GitLab (SaaS and self-hosted)
- Kubernetes Native: Built with Custom Resources, Operators, and proper RBAC for enterprise deployment
- Real-time Monitoring: Live status updates and job execution tracking
- 🤖 Amber Background Agent: Automated issue-to-PR workflows via GitHub Actions (quickstart)
Amber is a background agent that handles GitHub issues automatically:
- 🤖 Auto-Fix: Create an issue with the `amber:auto-fix` label → Amber creates a PR with linting/formatting fixes
- 🔧 Refactoring: Label an issue `amber:refactor` → Amber breaks up large files and extracts patterns
- 🧪 Test Coverage: Use `amber:test-coverage` → Amber adds missing tests
Quick Links:
The platform consists of containerized microservices orchestrated via Kubernetes:
| Component | Technology | Description |
|---|---|---|
| Frontend | NextJS + Shadcn | User interface for managing agentic sessions |
| Backend API | Go + Gin | REST API for managing Kubernetes Custom Resources (multi-tenant: projects, sessions, access control) |
| Agentic Operator | Go | Kubernetes operator that watches CRs and creates Jobs |
| Claude Code Runner | Python + Claude Code CLI | Pod that executes AI with multi-agent collaboration capabilities |
- Create Session: User creates agentic session via web UI with task description
- API Processing: Backend creates an `AgenticSession` Custom Resource in Kubernetes (see the sketch after this list)
- Job Scheduling: Operator detects the CR and creates a Kubernetes Job with a runner pod
- AI Execution: Pod runs Claude Code CLI with multi-agent collaboration for intelligent analysis
- Result Storage: Analysis results stored back in Custom Resource status
- UI Updates: Frontend displays real-time progress and completed results
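To make the flow concrete, here is a minimal `AgenticSession` resource of the kind the backend creates. This is a hedged sketch: the API group `vteam.ambient-code` comes from the Legacy vTeam References section below, but the version and field names are illustrative; `components/manifests/crds/agenticsessions-crd.yaml` defines the actual schema.

```yaml
# Illustrative only -- the real schema lives in components/manifests/crds/agenticsessions-crd.yaml
apiVersion: vteam.ambient-code/v1alpha1   # version assumed for illustration
kind: AgenticSession
metadata:
  name: security-review
  namespace: my-project                   # sessions are project-scoped
spec:
  prompt: "Review this codebase for security vulnerabilities and suggest improvements"
  model: claude-sonnet                    # illustrative model identifier
  timeout: 300                            # seconds (documented default)
  interactive: false                      # true enables unlimited chat-based sessions
status:
  phase: Running                          # the operator writes progress and results here
```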
Get started in under 5 minutes!
See QUICK_START.md for the fastest way to run vTeam locally.
```bash
# Install prerequisites (one-time)
brew install minikube kubectl  # macOS
# or follow QUICK_START.md for Linux

# Start
make local-up

# Check status
make local-status
```

That's it! Access the app at `http://$(minikube ip):30030` (get the IP with `make local-url`).
GitHub:
- ✅ GitHub.com (public and private repositories)
- ✅ GitHub Enterprise Server
- ✅ GitHub App authentication
- ✅ Personal Access Token authentication
GitLab (v1.1.0+):
- ✅ GitLab.com (SaaS)
- ✅ Self-hosted GitLab (Community & Enterprise editions)
- ✅ Personal Access Token authentication
- ✅ HTTPS and SSH URL formats
- ✅ Custom domains and ports
- Automatic Provider Detection: Repositories automatically identified as GitHub or GitLab from URL
- Multi-Provider Projects: Use GitHub and GitLab repositories in the same project (see the sketch after this list)
- Secure Token Storage: All credentials encrypted in Kubernetes Secrets
- Provider-Specific Error Handling: Clear, actionable error messages for each platform
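To illustrate, a hypothetical project repository list mixing both providers and URL formats (the actual project settings schema may differ):

```yaml
# Hypothetical settings fragment -- the provider is detected from each URL
repositories:
  - url: https://github.com/acme/service-api                   # GitHub.com
  - url: https://gitlab.com/acme/data-pipeline                 # GitLab SaaS
  - url: https://gitlab.internal.example.com:8443/acme/infra   # self-hosted GitLab, custom domain and port
  - url: git@gitlab.com:acme/tooling.git                       # SSH URL format
```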
- Create Personal Access Token: GitLab PAT Setup Guide
- Connect Account: Settings → Integrations → GitLab
- Configure Repository: Add GitLab repository URL to project settings
- Create Sessions: AgenticSessions work seamlessly with GitLab repos
Documentation:
- GitLab Integration Guide - Complete user guide
- GitLab Token Setup - Step-by-step PAT creation
- Self-Hosted GitLab - Enterprise configuration
- Minikube for local development or OpenShift cluster for production
- kubectl v1.28+ configured to access your cluster
- Podman for building container images (or Docker as alternative)
- Container registry access (Docker Hub, Quay.io, ECR, etc.) for production
- Go 1.24+ for building backend services (if building from source)
- Node.js 20+ and npm for the frontend (if building from source)
- Anthropic API Key - Get from Anthropic Console
- Configure via web UI: Settings → Runner Secrets after deployment
Deploy using the default images from quay.io/ambient_code:
```bash
# From repo root, prepare env for deploy script (required once)
cp components/manifests/env.example components/manifests/.env
# Edit .env and set at least ANTHROPIC_API_KEY

# Deploy to ambient-code namespace (default)
make deploy

# Or deploy to custom namespace
make deploy NAMESPACE=my-namespace
```

Verify the deployment:

```bash
# Check pod status
oc get pods -n ambient-code

# Check services and routes
oc get services,routes -n ambient-code
```

Access the UI:

```bash
# Get the route URL
oc get route frontend-route -n ambient-code

# Or use port forwarding as fallback
kubectl port-forward svc/frontend-service 3000:3000 -n ambient-code
```

Then:

- Access the web interface
- Navigate to Settings → Runner Secrets
- Add your Anthropic API key
- Access Web Interface: Navigate to your deployed route URL
- Create New Session:
- Prompt: Task description (e.g., "Review this codebase for security vulnerabilities and suggest improvements")
- Model: Choose AI model (Claude Sonnet/Haiku)
- Settings: Adjust temperature, token limits, timeout (default: 300s)
- Monitor Progress: View real-time status updates and execution logs
- Review Results: Download analysis results and structured output
- Code Analysis: Security reviews, code quality assessments, architecture analysis
- Technical Documentation: API documentation, user guides, technical specifications
- Project Planning: Feature specifications, implementation plans, task breakdowns
- Research & Analysis: Technology research, competitive analysis, requirement gathering
- Development Workflows: Code reviews, testing strategies, deployment planning
To build and deploy your own container images:
# Set your container registry
export REGISTRY="quay.io/your-username"
# Build all images
make build-all
# Push to registry (requires authentication)
make push-all REGISTRY=$REGISTRY
# Deploy with custom images
cd components/manifests
REGISTRY=$REGISTRY ./deploy.sh# Build with Podman (default)
make build-all
# Use Docker instead of Podman
make build-all CONTAINER_ENGINE=docker
# Build for specific platform
# Default is linux/amd64
make build-all PLATFORM=linux/arm64
# Build with additional flags
make build-all BUILD_FLAGS="--no-cache --pull"For cluster-based authentication and authorization, the deployment script can configure the Route host, create an OAuthClient, and set the frontend secret when provided a .env file. See the guide for details and a manual alternative:
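For orientation, a minimal OAuthClient of the kind the script creates (a sketch, assuming placeholder names — the deploy script and OAuth guide are authoritative):

```yaml
apiVersion: oauth.openshift.io/v1
kind: OAuthClient
metadata:
  name: ambient-code-frontend                      # client name assumed for illustration
secret: "<frontend-oauth-secret>"                  # must match the secret configured on the frontend
redirectURIs:
  - https://<frontend-route-host>/oauth/callback   # callback path assumed; use your Route host
grantMethod: auto
```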
The operator supports two modes for accessing Claude AI:
Use operator-config.yaml or operator-config-crc.yaml for standard deployments:
```bash
# Apply the standard config (Vertex AI disabled)
kubectl apply -f components/manifests/operator-config.yaml -n ambient-code
```

When to use:
- Standard cloud deployments without Google Cloud integration
- Local development with CRC/Minikube
- Any environment using direct Anthropic API access
Configuration: Sets CLAUDE_CODE_USE_VERTEX=0
Use operator-config-openshift.yaml for production OpenShift deployments with Vertex AI:
```bash
# Apply the Vertex AI config
kubectl apply -f components/manifests/operator-config-openshift.yaml -n ambient-code
```

When to use:
- Production deployments on Google Cloud
- Environments requiring Vertex AI integration
- Enterprise deployments with Google Cloud service accounts
Configuration: Sets CLAUDE_CODE_USE_VERTEX=1 and configures:
- `CLOUD_ML_REGION`: Google Cloud region (default: "global")
- `ANTHROPIC_VERTEX_PROJECT_ID`: Your GCP project ID
- `GOOGLE_APPLICATION_CREDENTIALS`: Path to the service account key file
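For orientation, the relevant data as it might appear in the operator ConfigMap (a sketch, assuming the ConfigMap name; `components/manifests/operator-config-openshift.yaml` is authoritative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: operator-config                  # name assumed for illustration
  namespace: ambient-code
data:
  CLAUDE_CODE_USE_VERTEX: "1"
  CLOUD_ML_REGION: "global"                                        # documented default
  ANTHROPIC_VERTEX_PROJECT_ID: "my-gcp-project"                    # must match the service account's project
  GOOGLE_APPLICATION_CREDENTIALS: "/secrets/ambient-code-key.json" # mount path assumed; file name must be ambient-code-key.json
```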
Creating the Vertex AI Secret:
When using Vertex AI, you must create a secret containing your Google Cloud service account key:
```bash
# The key file MUST be named ambient-code-key.json
kubectl create secret generic ambient-vertex \
  --from-file=ambient-code-key.json=ambient-code-key.json \
  -n ambient-code
```

Important Requirements:
- ✅ Secret name must be `ambient-vertex`
- ✅ Key file must be named `ambient-code-key.json`
- ✅ Service account must have Vertex AI API access
- ✅ Project ID in config must match the service account's project
Sessions have a configurable timeout (default: 300 seconds):
- Environment Variable: Set `TIMEOUT=1800` for 30-minute sessions
- CRD Default: Modify `components/manifests/crds/agenticsessions-crd.yaml`
- Interactive Mode: Set `interactive: true` for unlimited chat-based sessions (see the sketch below)
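A minimal spec fragment combining the overrides above (field placement is hypothetical; the CRD defines the real schema):

```yaml
# Hypothetical AgenticSession spec fragment
spec:
  timeout: 1800      # seconds; overrides the 300s default
  interactive: true  # chat-based session with no timeout enforced
```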
Configure AI API keys and integrations via the web interface:
- Settings → Runner Secrets: Add Anthropic API keys
- Project-scoped: Each project namespace has isolated secret management
- Security: All secrets stored as Kubernetes Secrets with proper RBAC (see the sketch below)
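For reference, the kind of Secret this produces behind the scenes (a sketch — the secret and key placement here are placeholders; the web UI manages the real object):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: runner-secrets          # placeholder name
  namespace: my-project         # project-scoped: one Secret per project namespace
type: Opaque
stringData:
  ANTHROPIC_API_KEY: "sk-ant-..."   # value elided
```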
Pods Not Starting:
```bash
oc describe pod <pod-name> -n ambient-code
oc logs <pod-name> -n ambient-code
```

API Connection Issues:

```bash
oc get endpoints -n ambient-code
oc exec -it <pod-name> -- curl http://backend-service:8080/health
```

Job Failures:

```bash
oc get jobs -n ambient-code
oc describe job <job-name> -n ambient-code
oc logs <failed-pod-name> -n ambient-code
```

General checks:

```bash
# Check all resources
oc get all -l app=ambient-code -n ambient-code

# View recent events
oc get events --sort-by='.lastTimestamp' -n ambient-code

# Test frontend access
curl -f http://localhost:3000 || echo "Frontend not accessible"

# Test backend API
kubectl port-forward svc/backend-service 8080:8080 -n ambient-code &
curl http://localhost:8080/health
```

- API Key Management: Store Anthropic API keys securely in Kubernetes secrets
- RBAC: Configure appropriate role-based access controls
- Network Policies: Implement network isolation between components (example after this list)
- Image Scanning: Scan container images for vulnerabilities before deployment
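As one example of component isolation, a NetworkPolicy admitting only frontend traffic to the backend API (a sketch — the pod labels are assumptions, not the platform's actual labels; the port matches the backend health-check examples above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: ambient-code
spec:
  podSelector:
    matchLabels:
      app: backend            # label assumed for illustration
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # label assumed for illustration
      ports:
        - protocol: TCP
          port: 8080          # backend API port, per the examples above
```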
- Prometheus Metrics: Configure metrics collection for all components
- Log Aggregation: Set up centralized logging (ELK, Loki, etc.)
- Alerting: Configure alerts for pod failures, resource exhaustion
- Health Checks: Implement comprehensive health endpoints
- Horizontal Pod Autoscaling: Configure HPA based on CPU/memory usage (see the sketch after this list)
- Resource Limits: Set appropriate resource requests and limits
- Node Affinity: Configure pod placement for optimal resource usage
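A representative HPA for the backend (a sketch — the Deployment name `vteam-backend` comes from the Legacy vTeam References section; replica counts and the 80% target are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: ambient-code
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vteam-backend      # deployment name per Legacy vTeam References
  minReplicas: 2             # illustrative
  maxReplicas: 10            # illustrative
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```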
Single Command Setup:
```bash
# Start complete local development environment
make local-start
```

What this provides:
- ✅ Local Kubernetes cluster with minikube
- ✅ No authentication required - automatic login as "developer"
- ✅ Automatic image builds and deployments
- ✅ Working frontend-backend integration
- ✅ Ingress configuration for easy access
- ✅ Faster startup than OpenShift (2-3 minutes)
Prerequisites:
```bash
# Install minikube and kubectl (macOS)
brew install minikube kubectl

# Then start development
make local-start
```

Local Minikube Access URLs:
Or using NodePort (no /etc/hosts needed):
- Frontend: `http://$(minikube ip):30030`
- Backend: `http://$(minikube ip):30080`
Common Commands:
```bash
make local-start   # Start minikube and deploy
make local-stop    # Stop deployment (keep minikube)
make local-delete  # Delete minikube cluster
make local-status  # Check deployment status
make local-logs    # View backend logs
make dev-test      # Run tests
```

For a detailed local development guide, see:
```bash
# Build all images locally
make build-all

# Build specific components
make build-frontend
make build-backend
make build-operator
make build-runner
```

```
vTeam/
├── components/                  # 🚀 Ambient Code Platform Components
│   ├── frontend/                # NextJS web interface
│   ├── backend/                 # Go API service
│   ├── operator/                # Kubernetes operator
│   ├── runners/                 # AI runner services
│   │   └── claude-code-runner/  # Python Claude Code CLI service
│   └── manifests/               # Kubernetes deployment manifests
├── docs/                        # Documentation
│   ├── OPENSHIFT_DEPLOY.md      # Detailed deployment guide
│   └── OPENSHIFT_OAUTH.md       # OAuth configuration
├── tools/                       # Supporting development tools
│   ├── vteam_shared_configs/    # Team configuration management
│   └── mcp_client_integration/  # MCP client library
└── Makefile                     # Build and deployment automation
```
- RBAC: Comprehensive role-based access controls
- Network Policies: Component isolation and secure communication
- Secret Management: Kubernetes-native secret storage with encryption
- Image Scanning: Vulnerability scanning for all container images
- Health Checks: Comprehensive health endpoints for all services
- Metrics: Prometheus-compatible metrics collection
- Logging: Structured logging with OpenShift logging integration
- Alerting: Integration with OpenShift monitoring and alerting
- Horizontal Pod Autoscaling: Auto-scaling based on CPU/memory metrics
- Resource Management: Proper requests/limits for optimal resource usage
- Job Queuing: Intelligent job scheduling and resource allocation
- Multi-tenancy: Project-based isolation with shared infrastructure
We welcome contributions! Please follow these guidelines to ensure code quality and consistency.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes following the existing patterns
- Run code quality checks (see below)
- Add tests if applicable
- Commit with conventional commit messages
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Before committing Go code, run these checks locally:
```bash
# Backend
cd components/backend
gofmt -l .          # Check formatting
go vet ./...        # Run go vet
golangci-lint run   # Run full linting suite

# Operator
cd components/operator
gofmt -l .          # Check formatting
go vet ./...        # Run go vet
golangci-lint run   # Run full linting suite
```

Install golangci-lint:

```bash
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
```

Auto-format your code:

```bash
# Format all Go files
gofmt -w components/backend components/operator
```

CI/CD: All pull requests automatically run these checks via GitHub Actions. Your PR must pass all linting checks before merging.
```bash
cd components/frontend
npm run lint        # ESLint checks
npm run type-check  # TypeScript checks (if available)
npm run format      # Prettier formatting
```

```bash
# Backend tests
cd components/backend
make test             # Run all tests
make test-unit        # Unit tests only
make test-integration # Integration tests

# Operator tests
cd components/operator
go test ./... -v      # Run all tests

# Frontend tests
cd components/frontend
npm test              # Run test suite
```

Run automated end-to-end tests in a local kind cluster:
```bash
make e2e-test   # Full test suite (setup, deploy, test, cleanup)
```

Or run steps individually:
```bash
cd e2e
./scripts/setup-kind.sh  # Create kind cluster
./scripts/deploy.sh      # Deploy vTeam
./scripts/run-tests.sh   # Run Cypress tests
./scripts/cleanup.sh     # Clean up
```

The e2e tests deploy the complete vTeam stack to a kind (Kubernetes in Docker) cluster and verify core functionality including project creation and UI navigation. Tests run automatically in GitHub Actions on every PR.
See e2e/README.md for detailed documentation, troubleshooting, and development guide.
- To keep the current RFE (Request for Enhancement) pilot focused and efficient, we are temporarily streamlining the active agent pool.
- Active Agents (Focused Scope): The 5 agents required for this RFE workflow are located in the agents folder.
- Agent Bullpen (Holding Pattern): All remaining agent definitions have been relocated to the "agent bullpen" folder. This move does not deprecate any roles.
- Future Planning: Agents in the "agent bullpen" will be reintegrated as we expand to subsequent processes and workflows across the organization.
- Update relevant documentation when changing functionality
- Follow existing documentation style (Markdown)
- Add code comments for complex logic
- Update CLAUDE.md if adding new patterns or standards
- Deployment Guide: docs/OPENSHIFT_DEPLOY.md
- OAuth Setup: docs/OPENSHIFT_OAUTH.md
- Architecture Details: diagrams/
- API Documentation: Available in web interface after deployment
- GitLab Integration Guide: docs/gitlab-integration.md
- GitLab Token Setup: docs/gitlab-token-setup.md
- Self-Hosted GitLab: docs/gitlab-self-hosted.md
- GitLab Testing: docs/gitlab-testing-procedures.md
While the project is now branded as Ambient Code Platform, the name "vTeam" still appears in various technical components for backward compatibility and to avoid breaking changes. You will encounter "vTeam" or "vteam" in:
- GitHub Repository: `github.com/ambient-code/vTeam` (repository name unchanged)
- Container Images: `vteam_frontend`, `vteam_backend`, `vteam_operator`, `vteam_claude_runner`
- Kubernetes API Group: `vteam.ambient-code` (used in Custom Resource Definitions)
- Development Namespace: `vteam-dev` (local development environment)
- Local Development Routes: `https://vteam-frontend-vteam-dev.apps-crc.testing` and `https://vteam-backend-vteam-dev.apps-crc.testing`
- File paths: Repository directory structure (`/path/to/vTeam/...`)
- Go package references: Internal Kubernetes resource types
- RBAC resources: ClusterRole and RoleBinding names
- Makefile targets: Development commands reference the `vteam-dev` namespace
- Kubernetes resources: Deployment names (`vteam-frontend`, `vteam-backend`, `vteam-operator`)
- Environment variables: `VTEAM_VERSION` in frontend deployment
These technical references remain unchanged to maintain compatibility with existing deployments and to avoid requiring migration for current users. Future major versions may fully transition these artifacts to use "Ambient Code Platform" or "ambient-code" naming.
This project is licensed under the MIT License - see the LICENSE file for details.