Ambient Code Platform

Kubernetes-native AI automation platform for intelligent agentic sessions with multi-agent collaboration

Note: This project was formerly known as "vTeam". While the project has been rebranded to Ambient Code Platform, the name "vTeam" still appears in various technical artifacts for backward compatibility (see Legacy vTeam References below).

Overview

The Ambient Code Platform combines the Claude Code CLI with multi-agent collaboration capabilities, enabling teams to create and manage intelligent agentic sessions through a modern web interface.

Key Capabilities

  • Intelligent Agentic Sessions: AI-powered automation for analysis, research, content creation, and development tasks
  • Multi-Agent Workflows: Specialized AI agents model realistic software team dynamics
  • Git Provider Support: Native integration with GitHub and GitLab (SaaS and self-hosted)
  • Kubernetes Native: Built with Custom Resources, Operators, and proper RBAC for enterprise deployment
  • Real-time Monitoring: Live status updates and job execution tracking
  • 🤖 Amber Background Agent: Automated issue-to-PR workflows via GitHub Actions (quickstart)

Amber: Self-Service Automation

Amber is a background agent that handles GitHub issues automatically:

  • 🤖 Auto-Fix: Create issue with amber:auto-fix label → Amber creates PR with linting/formatting fixes
  • 🔧 Refactoring: Label an issue amber:refactor → Amber breaks up large files and extracts shared patterns
  • 🧪 Test Coverage: Use amber:test-coverage → Amber adds missing tests
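
For example, assuming the GitHub CLI is available and Amber's GitHub Action is installed on the target repository, filing an auto-fix request is a single command (the repo path and issue text here are illustrative; the label is what triggers Amber):

gh issue create \
  --repo your-org/your-repo \
  --title "Fix lint and formatting errors" \
  --body "Please clean up linter findings in the backend." \
  --label "amber:auto-fix"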

Quick Links:

Architecture

The platform consists of containerized microservices orchestrated via Kubernetes:

  • Frontend (Next.js + Shadcn): User interface for managing agentic sessions
  • Backend API (Go + Gin): REST API for managing Kubernetes Custom Resources (multi-tenant: projects, sessions, and access control)
  • Agentic Operator (Go): Kubernetes operator that watches CRs and creates Jobs
  • Claude Code Runner (Python + Claude Code CLI): Pod that runs the Claude Code CLI with multi-agent collaboration

Agentic Session Flow

  1. Create Session: User creates agentic session via web UI with task description
  2. API Processing: Backend creates AgenticSession Custom Resource in Kubernetes
  3. Job Scheduling: Operator detects CR and creates Kubernetes Job with runner pod
  4. AI Execution: Pod runs Claude Code CLI with multi-agent collaboration for intelligent analysis
  5. Result Storage: Analysis results stored back in Custom Resource status
  6. UI Updates: Frontend displays real-time progress and completed results
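
For illustration, a minimal AgenticSession resource might look like the sketch below. The spec field names and API version are assumptions; treat components/manifests/crds/agenticsessions-crd.yaml as the authoritative schema.

# Hypothetical sketch; verify fields against the CRD before use
cat <<'EOF' | kubectl apply -n my-project -f -
apiVersion: vteam.ambient-code/v1alpha1
kind: AgenticSession
metadata:
  name: security-review
spec:
  prompt: "Review this codebase for security vulnerabilities and suggest improvements"
  model: claude-sonnet   # assumed identifier; the UI offers Claude Sonnet/Haiku
  timeout: 300           # seconds; documented default
EOF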

🚀 Quick Start

Get started in under 5 minutes!

See QUICK_START.md for the fastest way to run vTeam locally.

# Install prerequisites (one-time)
brew install minikube kubectl  # macOS
# or follow QUICK_START.md for Linux

# Start
make local-up

# Check status
make local-status

That's it! Access the app at http://$(minikube ip):30030 (or print the full URL with make local-url).


Git Provider Support

Supported Providers

GitHub:

  • ✅ GitHub.com (public and private repositories)
  • ✅ GitHub Enterprise Server
  • ✅ GitHub App authentication
  • ✅ Personal Access Token authentication

GitLab (v1.1.0+):

  • ✅ GitLab.com (SaaS)
  • ✅ Self-hosted GitLab (Community & Enterprise editions)
  • ✅ Personal Access Token authentication
  • ✅ HTTPS and SSH URL formats
  • ✅ Custom domains and ports
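
Illustrative URL forms covered by this matrix (substitute your own host, port, group, and project):

https://gitlab.com/group/project.git               # GitLab.com over HTTPS
https://gitlab.example.com:8443/team/project.git   # self-hosted, custom domain and port
git@gitlab.example.com:team/project.git            # SSH format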

Key Features

  • Automatic Provider Detection: Repositories automatically identified as GitHub or GitLab from URL
  • Multi-Provider Projects: Use GitHub and GitLab repositories in the same project
  • Secure Token Storage: All credentials encrypted in Kubernetes Secrets
  • Provider-Specific Error Handling: Clear, actionable error messages for each platform

Getting Started with GitLab

  1. Create Personal Access Token: GitLab PAT Setup Guide
  2. Connect Account: Settings → Integrations → GitLab
  3. Configure Repository: Add GitLab repository URL to project settings
  4. Create Sessions: AgenticSessions work seamlessly with GitLab repos

Documentation:

Prerequisites

Required Tools

  • Minikube for local development or OpenShift cluster for production
  • kubectl v1.28+ configured to access your cluster
  • Podman for building container images (or Docker as alternative)
  • Container registry access (Docker Hub, Quay.io, ECR, etc.) for production
  • Go 1.24+ for building backend services (if building from source)
  • Node.js 20+ and npm for the frontend (if building from source)

Required API Keys

  • Anthropic API Key - Get from Anthropic Console
    • Configure via web UI: Settings → Runner Secrets after deployment

Quick Start

1. Deploy to OpenShift

Deploy using the default images from quay.io/ambient_code:

# From repo root, prepare env for deploy script (required once)
cp components/manifests/env.example components/manifests/.env
# Edit .env and set at least ANTHROPIC_API_KEY

# Deploy to ambient-code namespace (default)
make deploy

# Or deploy to custom namespace
make deploy NAMESPACE=my-namespace

2. Verify Deployment

# Check pod status
oc get pods -n ambient-code

# Check services and routes
oc get services,routes -n ambient-code

3. Access the Web Interface

# Get the route URL
oc get route frontend-route -n ambient-code

# Or use port forwarding as fallback
kubectl port-forward svc/frontend-service 3000:3000 -n ambient-code

4. Configure API Keys

  1. Access the web interface
  2. Navigate to Settings → Runner Secrets
  3. Add your Anthropic API key

Usage

Creating an Agentic Session

  1. Access Web Interface: Navigate to your deployed route URL
  2. Create New Session:
    • Prompt: Task description (e.g., "Review this codebase for security vulnerabilities and suggest improvements")
    • Model: Choose AI model (Claude Sonnet/Haiku)
    • Settings: Adjust temperature, token limits, timeout (default: 300s)
  3. Monitor Progress: View real-time status updates and execution logs
  4. Review Results: Download analysis results and structured output

Example Use Cases

  • Code Analysis: Security reviews, code quality assessments, architecture analysis
  • Technical Documentation: API documentation, user guides, technical specifications
  • Project Planning: Feature specifications, implementation plans, task breakdowns
  • Research & Analysis: Technology research, competitive analysis, requirement gathering
  • Development Workflows: Code reviews, testing strategies, deployment planning

Advanced Configuration

Building Custom Images

To build and deploy your own container images:

# Set your container registry
export REGISTRY="quay.io/your-username"

# Build all images
make build-all

# Push to registry (requires authentication)
make push-all REGISTRY=$REGISTRY

# Deploy with custom images
cd components/manifests
REGISTRY=$REGISTRY ./deploy.sh

Container Engine Options

# Build with Podman (default)
make build-all

# Use Docker instead of Podman
make build-all CONTAINER_ENGINE=docker

# Build for specific platform
# Default is linux/amd64
make build-all PLATFORM=linux/arm64

# Build with additional flags
make build-all BUILD_FLAGS="--no-cache --pull"

OpenShift OAuth Integration

For cluster-based authentication and authorization, the deployment script can configure the Route host, create an OAuthClient, and set the frontend secret when provided a .env file. See docs/OPENSHIFT_OAUTH.md for details and a manual alternative.
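
As a minimal sketch, the .env consumed by the deploy script might contain the following; only ANTHROPIC_API_KEY is documented above as required, and env.example lists the full set of supported values:

# components/manifests/.env (illustrative)
ANTHROPIC_API_KEY=sk-ant-...   # required
# OAuth-related values, if any, follow env.example and the OAuth guide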

Configuration & Secrets

Operator Configuration (Vertex AI vs Direct API)

The operator supports two modes for accessing Claude AI:

Direct Anthropic API (Default)

Use operator-config.yaml or operator-config-crc.yaml for standard deployments:

# Apply the standard config (Vertex AI disabled)
kubectl apply -f components/manifests/operator-config.yaml -n ambient-code

When to use:

  • Standard cloud deployments without Google Cloud integration
  • Local development with CRC/Minikube
  • Any environment using direct Anthropic API access

Configuration: Sets CLAUDE_CODE_USE_VERTEX=0
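
To double-check which mode the operator picked up, you can grep the applied config. The ConfigMap name below is an assumption; confirm it in components/manifests/operator-config.yaml.

# ConfigMap name assumed; verify against the manifest
kubectl get configmap operator-config -n ambient-code -o yaml | grep CLAUDE_CODE_USE_VERTEX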

Google Cloud Vertex AI

Use operator-config-openshift.yaml for production OpenShift deployments with Vertex AI:

# Apply the Vertex AI config
kubectl apply -f components/manifests/operator-config-openshift.yaml -n ambient-code

When to use:

  • Production deployments on Google Cloud
  • Environments requiring Vertex AI integration
  • Enterprise deployments with Google Cloud service accounts

Configuration: Sets CLAUDE_CODE_USE_VERTEX=1 and configures:

  • CLOUD_ML_REGION: Google Cloud region (default: "global")
  • ANTHROPIC_VERTEX_PROJECT_ID: Your GCP project ID
  • GOOGLE_APPLICATION_CREDENTIALS: Path to service account key file
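
As a sketch, a config along these lines would carry those values; the ConfigMap name and credentials path are assumptions, so prefer applying the shipped operator-config-openshift.yaml:

# Illustrative only; the shipped manifest is the source of truth
cat <<'EOF' | kubectl apply -n ambient-code -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: operator-config   # assumed name
data:
  CLAUDE_CODE_USE_VERTEX: "1"
  CLOUD_ML_REGION: "global"
  ANTHROPIC_VERTEX_PROJECT_ID: "my-gcp-project"
  GOOGLE_APPLICATION_CREDENTIALS: "/secrets/ambient-code-key.json"   # path is an assumption
EOF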

Creating the Vertex AI Secret:

When using Vertex AI, you must create a secret containing your Google Cloud service account key:

# The key file MUST be named ambient-code-key.json
kubectl create secret generic ambient-vertex \
  --from-file=ambient-code-key.json=ambient-code-key.json \
  -n ambient-code

Important Requirements:

  • ✅ Secret name must be ambient-vertex
  • ✅ Key file must be named ambient-code-key.json
  • ✅ Service account must have Vertex AI API access
  • ✅ Project ID in config must match the service account's project

Session Timeout Configuration

Sessions have a configurable timeout (default: 300 seconds):

  • Environment Variable: Set TIMEOUT=1800 for 30-minute sessions
  • CRD Default: Modify components/manifests/crds/agenticsessions-crd.yaml
  • Interactive Mode: Set interactive: true for unlimited chat-based sessions
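
For instance, to lengthen one session's budget after creation, a merge patch along these lines should work; the spec field name is an assumption, so confirm it in the CRD:

# Field name assumed; check agenticsessions-crd.yaml for the exact schema
kubectl patch agenticsession my-session -n my-project \
  --type merge -p '{"spec":{"timeout":1800}}'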

Runner Secrets Management

Configure AI API keys and integrations via the web interface:

  • Settings → Runner Secrets: Add Anthropic API keys
  • Project-scoped: Each project namespace has isolated secret management
  • Security: All secrets stored as Kubernetes Secrets with proper RBAC
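
Because secrets live in each project's namespace, verifying the isolation is a one-liner:

# Each project namespace carries its own runner secrets
oc get secrets -n <project-namespace>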

Troubleshooting

Common Issues

Pods Not Starting:

oc describe pod <pod-name> -n ambient-code
oc logs <pod-name> -n ambient-code

API Connection Issues:

oc get endpoints -n ambient-code
oc exec -it <pod-name> -- curl http://backend-service:8080/health

Job Failures:

oc get jobs -n ambient-code
oc describe job <job-name> -n ambient-code
oc logs <failed-pod-name> -n ambient-code

Verification Commands

# Check all resources
oc get all -l app=ambient-code -n ambient-code

# View recent events
oc get events --sort-by='.lastTimestamp' -n ambient-code

# Test frontend access
curl -f http://localhost:3000 || echo "Frontend not accessible"

# Test backend API
kubectl port-forward svc/backend-service 8080:8080 -n ambient-code &
curl http://localhost:8080/health

Development

Local Development with Minikube

Single Command Setup:

# Start complete local development environment
make local-start

What this provides:

  • ✅ Local Kubernetes cluster with minikube
  • ✅ No authentication required - automatic login as "developer"
  • ✅ Automatic image builds and deployments
  • ✅ Working frontend-backend integration
  • ✅ Ingress configuration for easy access
  • ✅ Faster startup than OpenShift (2-3 minutes)

Prerequisites:

# Install minikube and kubectl (macOS)
brew install minikube kubectl

# Then start development
make local-start

Local Minikube Access URLs:

Using NodePort (no /etc/hosts entry needed):

  • Frontend: http://$(minikube ip):30030
  • Backend: http://$(minikube ip):30080

Common Commands:

make local-start     # Start minikube and deploy
make local-stop      # Stop deployment (keep minikube)
make local-delete    # Delete minikube cluster
make local-status    # Check deployment status
make local-logs      # View backend logs
make dev-test        # Run tests

For a detailed local development guide, see:

Building from Source

# Build all images locally
make build-all

# Build specific components
make build-frontend
make build-backend
make build-operator
make build-runner

File Structure

vTeam/
├── components/                     # 🚀 Ambient Code Platform Components
│   ├── frontend/                   # Next.js web interface
│   ├── backend/                    # Go API service
│   ├── operator/                   # Kubernetes operator
│   ├── runners/                    # AI runner services
│   │   └── claude-code-runner/     # Python Claude Code CLI service
│   └── manifests/                  # Kubernetes deployment manifests
├── docs/                           # Documentation
│   ├── OPENSHIFT_DEPLOY.md        # Detailed deployment guide
│   └── OPENSHIFT_OAUTH.md         # OAuth configuration
├── tools/                          # Supporting development tools
│   ├── vteam_shared_configs/       # Team configuration management
│   └── mcp_client_integration/     # MCP client library
└── Makefile                        # Build and deployment automation

Production Considerations

Security

  • API Key Management: Store Anthropic API keys securely in Kubernetes Secrets
  • RBAC: Comprehensive role-based access controls
  • Network Policies: Component isolation and secure communication
  • Secret Management: Kubernetes-native secret storage with encryption
  • Image Scanning: Vulnerability scanning for all container images before deployment

Monitoring & Observability

  • Health Checks: Comprehensive health endpoints for all services
  • Metrics: Prometheus-compatible metrics collection
  • Logging: Structured, centralized logging (OpenShift logging, ELK, Loki, etc.)
  • Alerting: Integration with OpenShift monitoring; alert on pod failures and resource exhaustion

Scaling & Performance

  • Horizontal Pod Autoscaling: Auto-scaling based on CPU/memory metrics
  • Resource Management: Proper requests/limits for optimal resource usage
  • Node Affinity: Pod placement tuned for available cluster resources
  • Job Queuing: Intelligent job scheduling and resource allocation
  • Multi-tenancy: Project-based isolation with shared infrastructure
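
As one hedged example of HPA setup, targeting the backend deployment name listed under Legacy vTeam References (thresholds are illustrative; tune them to your workload):

kubectl autoscale deployment vteam-backend \
  --cpu-percent=70 --min=2 --max=5 -n ambient-code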

Contributing

We welcome contributions! Please follow these guidelines to ensure code quality and consistency.

Development Workflow

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes following the existing patterns
  4. Run code quality checks (see below)
  5. Add tests if applicable
  6. Commit with conventional commit messages
  7. Push to the branch (git push origin feature/amazing-feature)
  8. Open a Pull Request
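
Conventional commit messages follow a type(scope): summary shape; for example (the message itself is illustrative):

git commit -m "feat(frontend): add session status filter"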

Code Quality Standards

Go Code (Backend & Operator)

Before committing Go code, run these checks locally:

# Backend
cd components/backend
gofmt -l .                    # Check formatting
go vet ./...                  # Run go vet
golangci-lint run            # Run full linting suite

# Operator
cd components/operator
gofmt -l .                    # Check formatting
go vet ./...                  # Run go vet
golangci-lint run            # Run full linting suite

Install golangci-lint:

go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest

Auto-format your code:

# Format all Go files
gofmt -w components/backend components/operator

CI/CD: All pull requests automatically run these checks via GitHub Actions. Your PR must pass all linting checks before merging.

Frontend Code

cd components/frontend
npm run lint                  # ESLint checks
npm run type-check            # TypeScript checks (if available)
npm run format                # Prettier formatting

Testing

# Backend tests
cd components/backend
make test                     # Run all tests
make test-unit                # Unit tests only
make test-integration         # Integration tests

# Operator tests
cd components/operator
go test ./... -v              # Run all tests

# Frontend tests
cd components/frontend
npm test                      # Run test suite

E2E Testing

Run automated end-to-end tests in a local kind cluster:

make e2e-test                # Full test suite (setup, deploy, test, cleanup)

Or run steps individually:

cd e2e
./scripts/setup-kind.sh      # Create kind cluster
./scripts/deploy.sh          # Deploy vTeam
./scripts/run-tests.sh       # Run Cypress tests
./scripts/cleanup.sh         # Clean up

The e2e tests deploy the complete vTeam stack to a kind (Kubernetes in Docker) cluster and verify core functionality including project creation and UI navigation. Tests run automatically in GitHub Actions on every PR.

See e2e/README.md for detailed documentation, troubleshooting, and development guide.

Agent Strategy for Pilot

  • To keep the current RFE (Request for Enhancement) pilot focused and efficient, we are temporarily streamlining the active agent pool.
  • Active Agents (Focused Scope): The 5 agents required for this RFE workflow live in the agents folder.
  • Agent Bullpen (Holding Pattern): All remaining agent definitions have moved to the "agent bullpen" folder; this does not deprecate any roles.
  • Future Planning: Bullpen agents are slated for reintegration as we expand to subsequent processes and workflows across the organization.

Documentation

  • Update relevant documentation when changing functionality
  • Follow existing documentation style (Markdown)
  • Add code comments for complex logic
  • Update CLAUDE.md if adding new patterns or standards

Support & Documentation

Deployment & Configuration

GitLab Integration

Legacy vTeam References

While the project is now branded as Ambient Code Platform, the name "vTeam" still appears in various technical components for backward compatibility and to avoid breaking changes. You will encounter "vTeam" or "vteam" in:

Infrastructure & Deployment

  • GitHub Repository: github.com/ambient-code/vTeam (repository name unchanged)
  • Container Images: vteam_frontend, vteam_backend, vteam_operator, vteam_claude_runner
  • Kubernetes API Group: vteam.ambient-code (used in Custom Resource Definitions)
  • Development Namespace: vteam-dev (local development environment)

URLs & Routes

  • Local Development Routes:
    • https://vteam-frontend-vteam-dev.apps-crc.testing
    • https://vteam-backend-vteam-dev.apps-crc.testing

Code & Configuration

  • File paths: Repository directory structure (/path/to/vTeam/...)
  • Go package references: Internal Kubernetes resource types
  • RBAC resources: ClusterRole and RoleBinding names
  • Makefile targets: Development commands reference vteam-dev namespace
  • Kubernetes resources: Deployment names (vteam-frontend, vteam-backend, vteam-operator)
  • Environment variables: VTEAM_VERSION in frontend deployment

These technical references remain unchanged to maintain compatibility with existing deployments and to avoid requiring migration for current users. Future major versions may fully transition these artifacts to use "Ambient Code Platform" or "ambient-code" naming.

License

This project is licensed under the MIT License - see the LICENSE file for details.