diff --git a/.devcontainer/.gitignore b/.devcontainer/.gitignore new file mode 100755 index 000000000..0247178b6 --- /dev/null +++ b/.devcontainer/.gitignore @@ -0,0 +1 @@ +home \ No newline at end of file diff --git a/.devcontainer/DOCKER_SECURITY.md b/.devcontainer/DOCKER_SECURITY.md new file mode 100644 index 000000000..de044014d --- /dev/null +++ b/.devcontainer/DOCKER_SECURITY.md @@ -0,0 +1,287 @@ +# Docker Socket Proxy Security + +This dev container uses [Tecnativa's docker-socket-proxy](https://github.com/Tecnativa/docker-socket-proxy) to provide secure, filtered access to the Docker daemon. + +## Why Socket Proxy? + +### The Problem +Direct access to `/var/run/docker.sock` gives near-root access to the host system: +- Can mount any host directory +- Can run privileged containers +- Can escape container isolation +- Supply chain attacks can exploit this access + +### The Solution: API-Level Filtering +Instead of mounting the Docker socket directly, we use a proxy container: + +``` +Dev Container → TCP:2375 → docker-socket-proxy → /var/run/docker.sock → Docker Daemon + (filtered) (direct access) +``` + +## Security Features + +### ✅ What's Allowed + +**Read Operations** (completely safe): +- `docker ps`, `docker images`, `docker logs` +- `docker inspect`, `docker version`, `docker info` +- Viewing events and monitoring + +**Build Operations** (needed for development): +- `docker build` - Building images +- `docker commit` - Committing container changes + +**Container Operations** (standard dev work): +- `docker run` - Starting containers +- `docker create` - Creating containers +- `docker exec` - Executing commands in containers +- `docker stop`, `docker start`, `docker restart` + +**Network & Volume Operations** (needed for docker-compose): +- Creating and managing networks +- Creating and managing volumes + +### ❌ What's Blocked + +**Dangerous Operations** (blocked at API level): +- `docker run --privileged` - **BLOCKED** by proxy +- Host namespace modes require privileged - **BLOCKED** +- Device access requires privileged - **BLOCKED** + +**Mount Restrictions** (client-side validation): +- Mounting paths outside `/workspace` - **BLOCKED** by validator +- System directories (`/`, `/etc`, `/home`, etc.) - **BLOCKED** +- Only workspace paths allowed for security + +## Configuration + +### Socket Proxy (API-Level Filtering) + +The proxy is configured via environment variables in `docker-compose.yml`: + +```yaml +environment: + # Read operations - SAFE + EVENTS: 1 + PING: 1 + VERSION: 1 + IMAGES: 1 + INFO: 1 + CONTAINERS: 1 + + # Write operations - NEEDED FOR DEV + POST: 1 # Create containers + BUILD: 1 # Build images + EXEC: 1 # Execute in containers + NETWORKS: 1 # Manage networks + VOLUMES: 1 # Manage volumes + + # Dangerous operations - DISABLED + ALLOW_PRIVILEGED: 0 # Block --privileged +``` + +### Mount Validator (Client-Side Filtering) + +Additional validation is performed client-side before API calls: + +```bash +# Configured via environment variables +DOCKER_MOUNT_VALIDATION=on # Enable validation +WORKSPACE_ROOT=/workspace # Allowed mount prefix + +# Applied via alias +alias docker='/usr/local/share/dev-scripts/docker-mount-validator.sh' +``` + +**Why Both Layers?** +- **Socket Proxy**: Blocks privileged operations (cannot bypass) +- **Mount Validator**: Blocks dangerous mounts (adds convenience layer) +- **Defense in Depth**: Multiple security layers + +## How It Works + +### 1. 
Proxy Container Startup +The proxy starts automatically with docker-compose when you open the dev container. + +### 2. Dev Container Connection +```bash +# Dev container uses TCP to connect to proxy via localhost +export DOCKER_HOST=tcp://127.0.0.1:2375 +docker ps # Proxied through security layer +``` + +### 3. API Filtering +```bash +# Allowed operation +docker run ubuntu echo "hello" +# → API call permitted → executes + +# Blocked operation +docker run --privileged ubuntu +# → API call blocked by proxy → error returned +``` + +## Usage Examples + +### Normal Development (All Work Normally) + +```bash +# Build images +docker build -t myapp . + +# Run containers +docker run -d --name myapp -p 8080:80 myapp + +# Debug containers +docker exec -it myapp bash +docker logs myapp + +# Use docker-compose +docker-compose up -d + +# Manage resources +docker images +docker ps -a +docker volume ls +``` + +### What Gets Blocked + +```bash +# Privileged container - BLOCKED by proxy +docker run --privileged ubuntu +# Error: API call rejected by proxy + +# System directory mounts - BLOCKED by validator +docker run -v /etc:/etc ubuntu +# ❌ MOUNT VALIDATION FAILED: +# Attempting to mount: /etc +# Only /workspace paths are allowed for security. + +# Workspace mounts - ALLOWED +docker run -v /workspace/data:/data ubuntu +# ✅ Works fine +``` + +## Advantages + +### 1. **No Bypass Possible** +- Security enforced at network/API level +- Cannot circumvent by calling different binary +- Cannot disable with environment variable +- True defense in depth + +### 2. **Battle-Tested** +- Used in production by thousands of projects +- Active development and security updates +- Well-documented and community-supported + +### 3. **Granular Control** +- Enable only needed API endpoints +- Adjust permissions per environment +- Clear security policy via configuration + +### 4. **True Privileged Blocking** +Unlike wrappers that just warn, this actually BLOCKS: +```bash +docker run --privileged ubuntu +# Error: Forbidden - privileged mode is not allowed +``` + +## Configuration Options + +### Stricter Security (Limit More) + +Edit `docker-compose.yml`: +```yaml +environment: + CONTAINERS: 1 # Read only + POST: 0 # Block creating containers + BUILD: 1 # Allow builds only + EXEC: 0 # Block exec +``` + +### More Permissive (Enable More) + +```yaml +environment: + # Enable Docker Swarm + SERVICES: 1 + SWARM: 1 + NODES: 1 + + # Enable secrets + SECRETS: 1 +``` + +## Troubleshooting + +### "Cannot connect to Docker daemon" + +Check proxy is running: +```bash +docker ps | grep cloudharness-docker-proxy +``` + +Check environment variable: +```bash +echo $DOCKER_HOST +# Should show: tcp://127.0.0.1:2375 +``` + +### "Forbidden" or "Permission Denied" + +The proxy is blocking the operation (e.g., `--privileged`). This is intentional for security. + +To allow specific operations, edit `docker-compose.yml` and restart the proxy. + +### "MOUNT VALIDATION FAILED" + +The mount validator is blocking paths outside `/workspace`. This is intentional for security. 
+ +To bypass temporarily: +```bash +DOCKER_MOUNT_VALIDATION=off docker run -v /path:/path ubuntu +``` + +To allow specific paths, edit `.devcontainer/dev-scripts/docker-mount-validator.sh`: +```bash +ALLOWED_MOUNT_PATHS=( + "/workspace" + "/your/custom/path" # Add here +) +``` + +## Monitoring + +### View Proxy Logs +```bash +# See what API calls are being made +docker logs -f cloudharness-docker-proxy + +# Look for blocked operations +docker logs cloudharness-docker-proxy | grep -i forbidden +``` + +## Security Checklist + +- [x] Docker socket not mounted in dev container +- [x] Proxy has read-only access to socket +- [x] Privileged mode blocked (`ALLOW_PRIVILEGED=0`) +- [x] Mount validator restricts paths to /workspace only +- [x] Proxy runs on localhost only (`127.0.0.1:2375`) +- [x] Minimal API endpoints enabled +- [x] Proxy logs available for auditing +- [x] Proxy automatically starts with dev container +- [x] Client-side mount validation enabled + +## Additional Resources + +- [Tecnativa docker-socket-proxy](https://github.com/Tecnativa/docker-socket-proxy) +- [Docker Socket Security](https://docs.docker.com/engine/security/) +- [Container Security Best Practices](https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html) + +--- + +**Remember**: This provides defense-in-depth security. The proxy adds a strong security layer, but always follow best practices: use trusted images, scan for vulnerabilities, and monitor container activity. diff --git a/.devcontainer/DOTFILES.md b/.devcontainer/DOTFILES.md new file mode 100755 index 000000000..9a703c0c7 --- /dev/null +++ b/.devcontainer/DOTFILES.md @@ -0,0 +1,236 @@ +# Custom Dotfiles Support + +The dev container supports custom dotfiles for personal configuration preferences. + +## Using Your Own Dotfiles + +### Option 1: Using VS Code's Dotfiles Support + +VS Code can automatically clone and apply your dotfiles repository: + +1. Open VS Code Settings (Cmd/Ctrl + ,) +2. Search for "dotfiles" +3. Set "Dotfiles: Repository" to your dotfiles repo (e.g., `username/dotfiles`) +4. Optionally set "Dotfiles: Install Command" (default: `install.sh`) +5. Optionally set "Dotfiles: Target Path" (default: `~/dotfiles`) + +VS Code will automatically clone and install your dotfiles when creating the dev container. + +### Option 2: Manual Setup + +Place your dotfiles in `.devcontainer/home/`: + +```bash +# Example structure +.devcontainer/home/ + .bashrc # Your custom bash config (will be merged) + .gitconfig # Your git configuration + .tmux.conf # Your tmux configuration + .config/ + nvim/ + init.vim # Your neovim configuration + starship.toml # Your starship prompt config +``` + +These files are mounted into the container at `/root/` and persist across rebuilds. + +## Common Customizations + +### Bash + +The container sources `/usr/local/share/dev-scripts/common-bashrc.sh` for shared aliases. +Add your personal aliases to `~/.bashrc` or `.devcontainer/home/.bashrc`: + +```bash +# Personal aliases +alias myproject='cd /workspace/applications/myapp' +alias restart-server='docker-compose restart backend' +``` + +### Git + +Create `.devcontainer/home/.gitconfig`: + +```ini +[user] + name = Your Name + email = your.email@example.com +[core] + editor = nvim +[alias] + st = status + co = checkout + br = branch +``` + +### Neovim + +The default config is at `/root/.config/nvim/init.vim`. +You can override it by placing your config in `.devcontainer/home/.config/nvim/init.vim`. 
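+
+For example, a minimal override might look like the following. This is only an illustrative sketch: the options shown are plain built-in Neovim settings, not the defaults shipped with the container.
+
+```vim
+" .devcontainer/home/.config/nvim/init.vim
+set number                " show line numbers
+set expandtab             " insert spaces instead of tabs
+set shiftwidth=4          " indentation width
+set tabstop=4             " tab display width
+set ignorecase smartcase  " case-insensitive search unless the pattern has capitals
+colorscheme desert        " built-in color scheme, works without plugins
+```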
+ +For plugin managers, consider: +- [lazy.nvim](https://github.com/folke/lazy.nvim) +- [packer.nvim](https://github.com/wbthomason/packer.nvim) + +### Starship Prompt + +Customize the prompt by editing `.devcontainer/home/.config/starship.toml`: + +```toml +# Example: Change prompt format +format = """ +$username\ +$hostname\ +$directory\ +$git_branch\ +$character""" + +[character] +success_symbol = "[➜](bold green)" +error_symbol = "[✗](bold red)" +``` + +See [Starship Configuration](https://starship.rs/config/) for all options. + +### Tmux + +Create `.devcontainer/home/.tmux.conf` for your tmux preferences: + +```bash +# Example customizations +set -g prefix C-b # Use Ctrl-b instead of Ctrl-a +set -g mouse off # Disable mouse mode +``` + +### asdf Version Manager + +Install language plugins as needed: + +```bash +# Install Node.js plugin +asdf plugin add nodejs + +# Install specific version +asdf install nodejs 20.10.0 +asdf global nodejs 20.10.0 + +# Install Python plugin (if you need different versions) +asdf plugin add python +asdf install python 3.11.6 +``` + +Popular plugins: +- `nodejs` - Node.js versions +- `python` - Python versions +- `golang` - Go versions +- `ruby` - Ruby versions +- `terraform` - Terraform versions + +See [asdf plugins](https://github.com/asdf-vm/asdf-plugins) for full list. + +## Sharing Team Configurations + +To share configurations across the team: + +1. Add them to `/usr/local/share/dev-scripts/common-bashrc.sh` for aliases +2. Create default configs in `.devcontainer/home/.config/` +3. Document them in this file +4. Commit to the repository + +Team members can override these with their personal dotfiles. + +## Tools Already Configured + +The container includes sensible defaults for: + +- **Bash**: Common aliases for git, docker, kubernetes +- **Starship**: Minimal prompt showing context, git status, k8s context +- **Neovim**: Basic IDE-like features without plugins +- **Tmux**: Ergonomic key bindings and status bar +- **Atuin**: Enhanced shell history with sync support + +## Tips + +### Atuin Shell History + +Atuin provides enhanced shell history with: +- Full-text search: `Ctrl+R` +- Statistics: `atuin stats` +- Sync across machines: `atuin sync` + +To enable sync: +```bash +atuin register -u -e +atuin login -u +atuin sync +``` + +### k9s Kubernetes UI + +Launch the interactive Kubernetes dashboard: +```bash +k9s +``` + +Key shortcuts: +- `:pod` - View pods +- `:svc` - View services +- `:deploy` - View deployments +- `?` - Help +- `/` - Filter +- `l` - Logs +- `d` - Describe +- `e` - Edit + +### Neovim LSP (Optional) + +To add LSP support, create a plugin config. Example with lazy.nvim: + +```vim +" In ~/.config/nvim/init.vim, add: +lua << EOF +-- Bootstrap lazy.nvim +local lazypath = vim.fn.stdpath("data") .. "/lazy/lazy.nvim" +if not vim.loop.fs_stat(lazypath) then + vim.fn.system({ + "git", "clone", "--filter=blob:none", + "https://github.com/folke/lazy.nvim.git", + "--branch=stable", lazypath, + }) +end +vim.opt.rtp:prepend(lazypath) + +-- Install plugins +require("lazy").setup({ + "neovim/nvim-lspconfig", + "hrsh7th/nvim-cmp", + "hrsh7th/cmp-nvim-lsp", +}) +EOF +``` + +## Troubleshooting + +### Dotfiles not loading + +- Check file permissions: `ls -la ~/.config/` +- Verify mount in devcontainer.json +- Reload window: Cmd/Ctrl + Shift + P → "Reload Window" + +### Conflicts with default configs + +Files in `.devcontainer/home/` override defaults. 
If you want to extend instead: + +```bash +# In your .bashrc +source /usr/local/share/dev-scripts/common-bashrc.sh +# Your customizations here +``` + +### asdf not working + +Ensure it's sourced in your shell: +```bash +echo '. $HOME/.asdf/asdf.sh' >> ~/.bashrc +source ~/.bashrc +``` diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile new file mode 100755 index 000000000..95438c168 --- /dev/null +++ b/.devcontainer/Dockerfile @@ -0,0 +1,202 @@ +FROM python:3.12 + +# Install Node.js 20, OpenJDK and other system dependencies + +RUN apt-get update && apt-get install -y \ + curl \ + git \ + build-essential \ + nfs-common \ + default-jdk \ + apt-transport-https \ + ca-certificates \ + gnupg \ + lsb-release \ + iputils-ping \ + net-tools \ + wget \ + neovim \ + tmux \ + htop \ + tig \ + fzf \ + && rm -rf /var/lib/apt/lists/* + +# Install Docker CLI +# Note: Docker daemon access is secured via docker-socket-proxy +RUN install -m 0755 -d /etc/apt/keyrings && \ + curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc && \ + chmod a+r /etc/apt/keyrings/docker.asc && \ + echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \ + $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null && \ + apt-get update && \ + apt-get install -y docker-ce-cli docker-compose-plugin && \ + rm -rf /var/lib/apt/lists/* + +# Install kubectl with security verification +# Using stable version with checksum verification to prevent supply chain attacks +RUN KUBECTL_VERSION="v1.34.2" && \ + curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" && \ + curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl.sha256" && \ + echo "$(cat kubectl.sha256) kubectl" | sha256sum --check || (echo "kubectl checksum verification failed" && exit 1) && \ + install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && \ + rm kubectl kubectl.sha256 && \ + kubectl version --client + +# Install Google Cloud SDK and gke-gcloud-auth-plugin for GKE authentication +RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | \ + tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && \ + curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | \ + gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg && \ + apt-get update && \ + apt-get install -y google-cloud-cli google-cloud-sdk-gke-gcloud-auth-plugin && \ + rm -rf /var/lib/apt/lists/* && \ + gcloud version && \ + gke-gcloud-auth-plugin --version + +# Install Helm with security verification +RUN HELM_VERSION="v4.0.0" && \ + HELM_CHECKSUM="c77e9e7c1cc96e066bd240d190d1beed9a6b08060b2043ef0862c4f865eca08f" && \ + curl -LO "https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz" && \ + echo "${HELM_CHECKSUM} helm-${HELM_VERSION}-linux-amd64.tar.gz" | sha256sum --check || (echo "Helm checksum verification failed" && exit 1) && \ + tar -zxvf helm-${HELM_VERSION}-linux-amd64.tar.gz && \ + mv linux-amd64/helm /usr/local/bin/helm && \ + rm -rf linux-amd64 helm-${HELM_VERSION}-linux-amd64.tar.gz && \ + helm version + +# Install Skaffold with security verification +RUN SKAFFOLD_VERSION="v2.14.2" && \ + SKAFFOLD_CHECKSUM="2209463bafd0e021907c1efe72063d6b9ca3244a72b437a51aff061b0b97087a" && \ + curl -LO "https://storage.googleapis.com/skaffold/releases/${SKAFFOLD_VERSION}/skaffold-linux-amd64" && \ + echo 
"${SKAFFOLD_CHECKSUM} skaffold-linux-amd64" | sha256sum --check || (echo "Skaffold checksum verification failed" && exit 1) && \ + install -o root -g root -m 0755 skaffold-linux-amd64 /usr/local/bin/skaffold && \ + rm skaffold-linux-amd64 && \ + skaffold version + +# Install k9s - Kubernetes CLI UI +RUN K9S_VERSION="v0.32.5" && \ + K9S_CHECKSUM="33c31bf5feba292b59b8dabe5547cb52ab565521ee5619b52eb4bd4bf226cea3" && \ + curl -LO "https://github.com/derailed/k9s/releases/download/${K9S_VERSION}/k9s_Linux_amd64.tar.gz" && \ + echo "${K9S_CHECKSUM} k9s_Linux_amd64.tar.gz" | sha256sum --check || (echo "k9s checksum verification failed" && exit 1) && \ + tar -xzf k9s_Linux_amd64.tar.gz k9s && \ + install -o root -g root -m 0755 k9s /usr/local/bin/k9s && \ + rm k9s_Linux_amd64.tar.gz k9s && \ + k9s version + +# Install Starship prompt +RUN STARSHIP_VERSION="v1.20.1" && \ + curl -LO "https://github.com/starship/starship/releases/download/${STARSHIP_VERSION}/starship-x86_64-unknown-linux-gnu.tar.gz" && \ + tar -xzf starship-x86_64-unknown-linux-gnu.tar.gz && \ + install -o root -g root -m 0755 starship /usr/local/bin/starship && \ + rm starship-x86_64-unknown-linux-gnu.tar.gz starship && \ + starship --version + +# Install Atuin - Shell history tool +RUN ATUIN_VERSION="v18.3.0" && \ + curl -LO "https://github.com/atuinsh/atuin/releases/download/${ATUIN_VERSION}/atuin-x86_64-unknown-linux-gnu.tar.gz" && \ + tar -xzf atuin-x86_64-unknown-linux-gnu.tar.gz && \ + install -o root -g root -m 0755 atuin-x86_64-unknown-linux-gnu/atuin /usr/local/bin/atuin && \ + rm -rf atuin-x86_64-unknown-linux-gnu.tar.gz atuin-x86_64-unknown-linux-gnu && \ + atuin --version + +# Install gmap - Git activity visualizer +RUN curl -L "https://github.com/seeyebe/gmap/releases/latest/download/gmap-linux-amd64" -o /usr/local/bin/gmap && \ + chmod +x /usr/local/bin/gmap + +# Install asdf version manager +RUN git clone https://github.com/asdf-vm/asdf.git /root/.asdf --branch v0.14.1 + +# Install bash-preexec (required for atuin history capture) +RUN curl -sL https://raw.githubusercontent.com/rcaloras/bash-preexec/master/bash-preexec.sh -o /root/.bash-preexec.sh + +# Install Node.js 20.x + +RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \ + apt-get install -y nodejs + +# Upgrade system pip (separate for clearer error logs) +RUN python -m pip install --upgrade pip + +# Update npm globally +RUN npm install -g npm@latest + +# Enable corepack if present (Node 20 ships it); ignore if missing +RUN (command -v corepack >/dev/null 2>&1 && corepack enable || echo "corepack not available, continuing") + +# Install yarn (try npm classic install first, fallback to corepack prepare) +RUN (npm install -g yarn@latest || (command -v corepack >/dev/null 2>&1 && corepack prepare yarn@stable --activate) || echo "Yarn installation fallback used") + +# Set working directory +WORKDIR /cloudharness + +# Copy all requirements files first for better Docker layer caching +COPY libraries/models/requirements.txt ./libraries/models/ +COPY libraries/cloudharness-utils/requirements.txt ./libraries/cloudharness-utils/ +COPY libraries/cloudharness-common/requirements.txt ./libraries/cloudharness-common/ +COPY libraries/client/cloudharness_cli/requirements.txt ./libraries/client/cloudharness_cli/ +COPY tools/deployment-cli-tools/requirements.txt ./tools/deployment-cli-tools/ + +# Install all external dependencies with caching +RUN --mount=type=cache,target=/root/.cache \ + pip install -r libraries/models/requirements.txt --prefer-binary && \ + pip 
install -r libraries/cloudharness-utils/requirements.txt --prefer-binary && \ + pip install -r libraries/cloudharness-common/requirements.txt --prefer-binary && \ + pip install -r libraries/client/cloudharness_cli/requirements.txt --prefer-binary && \ + pip install -r tools/deployment-cli-tools/requirements.txt --prefer-binary + +# Copy requirements files for common framework libraries +COPY infrastructure/common-images/cloudharness-flask/requirements.txt ./infrastructure/flask-requirements.txt +COPY infrastructure/common-images/cloudharness-django/libraries/cloudharness-django/requirements.txt ./infrastructure/django-requirements.txt + +# Install additional tools and common framework libraries +RUN --mount=type=cache,target=/root/.cache \ + pip install pytest debugpy --prefer-binary && \ + pip install -r infrastructure/flask-requirements.txt --prefer-binary && \ + pip install -r infrastructure/django-requirements.txt --prefer-binary + +# Copy and install libraries one by one +COPY libraries/models ./libraries/models +RUN pip install -e libraries/models --no-cache-dir + +COPY libraries/cloudharness-utils ./libraries/cloudharness-utils +RUN pip install -e libraries/cloudharness-utils --no-cache-dir + +COPY libraries/cloudharness-common ./libraries/cloudharness-common +RUN pip install -e libraries/cloudharness-common --no-cache-dir + +COPY libraries/client/cloudharness_cli ./libraries/client/cloudharness_cli +RUN pip install -e libraries/client/cloudharness_cli --no-cache-dir + +COPY tools/deployment-cli-tools ./tools/deployment-cli-tools +RUN pip install -e tools/deployment-cli-tools --no-cache-dir + + +# Copy and install cloudharness framework libraries (last to ensure they override any conflicts) +COPY infrastructure/common-images/cloudharness-django/libraries/cloudharness-django infrastructure/cloudharness-django +RUN pip install -e infrastructure/cloudharness-django --no-cache-dir || echo "cloudharness-django not installable" + +# Ensure latest npm & yarn still available after project copy (optional refresh) +RUN npm install -g npm@latest yarn@latest || true + +# Copy dev scripts from .devcontainer for portability +COPY .devcontainer/dev-scripts /usr/local/share/dev-scripts +RUN chmod +x /usr/local/share/dev-scripts/*.sh /usr/local/share/dev-scripts/use-venv && \ + ln -s /usr/local/share/dev-scripts /root/dev-scripts + +# Create directories for neovim +RUN mkdir -p /root/.config/nvim + +# Add the cloudharness CLI tools to PATH +ENV PATH="/cloudharness/tools/deployment-cli-tools:${PATH}" + +# Set the default working directory +WORKDIR /workspace + +# Set bash as the default shell for RUN commands +SHELL ["/bin/bash", "-c"] + +# Set environment to ensure bash is used +ENV SHELL=/bin/bash + +# Default command +CMD ["/bin/bash"] diff --git a/.devcontainer/KUBERNETES_SECURITY.md b/.devcontainer/KUBERNETES_SECURITY.md new file mode 100755 index 000000000..e1aeaf5d5 --- /dev/null +++ b/.devcontainer/KUBERNETES_SECURITY.md @@ -0,0 +1,164 @@ +# Kubernetes Security in Dev Container + +This dev container includes Kubernetes tooling (kubectl, helm, skaffold) with security measures to protect against supply chain attacks and accidental cluster modifications. + +## Security Measures + +### 1. 
Tool Installation Security
+
+All Kubernetes tools are installed with:
+- **Pinned versions**: Specific versions are used (not "latest") to ensure reproducibility
+- **Checksum verification**: SHA256 checksums are verified during installation to detect tampering
+- **Official sources only**: Tools are downloaded directly from official repositories
+
+Current versions (as pinned in the `Dockerfile`):
+- kubectl: v1.34.2
+- Helm: v4.0.0
+- Skaffold: v2.14.2
+
+### 2. Read-Only Kubeconfig Mount
+
+The host's `~/.kube` directory is mounted as **read-only** inside the container:
+- Container cannot modify kubeconfig files
+- Cannot add malicious clusters or contexts
+- Cannot steal or exfiltrate credentials by modifying config
+
+### 3. Production Cluster Filtering
+
+Production clusters are automatically filtered out from the kubeconfig inside the container:
+- **Blocked patterns**: `production`, `prod`, `mnp-cluster-production`
+- Clusters, contexts, and users matching these patterns are removed
+- Container cannot accidentally connect to production environments
+- Filtering happens on container startup via `setup-docker-desktop-kube.sh`
+
+### 4. Kubectl Safety Wrapper
+
+A kubectl wrapper script provides safety checks for destructive operations:
+
+#### Protected Operations
+The following operations require confirmation:
+- `delete`, `apply`, `create`, `replace`, `patch`, `edit`
+- `drain`, `cordon`, `taint`, `label`, `annotate`
+- `scale`, `rollout`, `set`
+
+#### Critical Namespace Protection
+Extra warnings are shown for operations on:
+- `kube-system`, `kube-public`, `kube-node-lease`, `default`
+
+#### Usage
+
+The wrapper is enabled by default. To bypass (use with caution):
+```bash
+# Disable safety wrapper for a single command
+KUBECTL_SAFE_MODE=off kubectl delete pod my-pod
+
+# Use the real kubectl binary directly (not recommended)
+/usr/local/bin/kubectl delete pod my-pod
+```
+
+To enable the wrapper in your shell, add to your `.bashrc`:
+```bash
+alias kubectl='/usr/local/share/dev-scripts/kubectl-wrapper.sh'
+```
+
+## Best Practices
+
+### For Development
+1. **Use separate contexts**: Create dev-specific kubeconfig contexts
+2. **Limit permissions**: Use RBAC to limit what the dev context can do
+3. **Read-only by default**: Use `kubectl get`, `describe`, `logs` for most tasks
+4. **Test in safe namespaces**: Create dedicated development namespaces
+
+### For CI/CD
+1. **Use service accounts**: Don't share personal credentials with containers
+2. **Principle of least privilege**: Grant only necessary permissions
+3. **Audit logging**: Enable and monitor cluster audit logs
+4. **Network policies**: Restrict container network access if possible
+
+### Protecting Against Supply Chain Attacks
+
+1. **Verify checksums**: When updating tool versions in Dockerfile, always:
+   ```bash
+   # Download and calculate checksum
+   curl -LO https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl
+   sha256sum kubectl
+   ```
+
+2. **Review changes**: Always review Dockerfile changes that update tools
+
+3. **Pin dependencies**: Never use `latest` tags or unpinned versions
+
+4. **Scan images**: Run security scans on built images:
+   ```bash
+   docker scan cloudharness-dev:latest
+   ```
+
+## Emergency Response
+
+If you suspect a compromised container:
+
+1. **Stop the container immediately**
+   ```bash
+   docker stop cloudharness-dev
+   ```
+
+2. **Check cluster audit logs** for suspicious activity
+
+3. 
**Rotate credentials**: + ```bash + # Regenerate kubeconfig + kubectl config view --raw > ~/.kube/config.backup + # Request new certificates from cluster admin + ``` + +4. **Review cluster resources** for unauthorized changes: + ```bash + kubectl get all --all-namespaces + kubectl get secrets --all-namespaces + ``` + +## Configuration + +### Environment Variables + +- `KUBECONFIG`: Points to `/root/.kube/config` (read-only mount) +- `KUBECTL_SAFE_MODE`: Set to `off` to disable safety wrapper (not recommended) + +### Disabling Kubernetes Access + +To run the dev container without Kubernetes access: + +1. Comment out the kubeconfig mount in `.devcontainer/devcontainer.json`: + ```jsonc + // "source=${localEnv:HOME}${localEnv:USERPROFILE}/.kube,target=/root/.kube,type=bind,readonly", + ``` + +2. Remove KUBECONFIG environment variable + +3. Rebuild the container + +## Updating Tool Versions + +To update kubectl, helm, or skaffold versions: + +1. Find the latest stable version from official sources +2. Download and calculate SHA256 checksum +3. Update version and checksum in `Dockerfile` +4. Test the build before committing +5. Document the update in your commit message + +Example: +```bash +# For kubectl +KUBECTL_VERSION="v1.32.0" +curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" +sha256sum kubectl + +# Update Dockerfile with new version and checksum +``` + +## Additional Resources + +- [Kubernetes Security Best Practices](https://kubernetes.io/docs/concepts/security/) +- [Supply Chain Security](https://slsa.dev/) +- [Docker Security](https://docs.docker.com/engine/security/) diff --git a/.devcontainer/README.md b/.devcontainer/README.md new file mode 100755 index 000000000..80571391d --- /dev/null +++ b/.devcontainer/README.md @@ -0,0 +1,215 @@ +# CloudHarness Dev Container + +This directory contains the configuration for the CloudHarness development container. 
+ +## Overview + +The dev container provides a complete development environment with: +- Python 3.12 with all CloudHarness libraries +- Node.js 20 and package managers (npm, yarn) +- Docker CLI (connects to host Docker daemon) +- Kubernetes tools (kubectl, helm, skaffold, k9s) with security features +- Modern shell tools (starship prompt, atuin history, tmux, fzf) +- Enhanced editors (neovim with sensible defaults) +- Development utilities (htop, tig, gmap, rhttp) +- Version management (asdf) +- VS Code extensions and configurations + +## Security Features + +### Docker Access +- Docker CLI installed in container +- **Secured via docker-socket-proxy** - API-level filtering +- **Privileged containers BLOCKED** - Cannot bypass +- **Mount restrictions** - Only /workspace paths allowed +- **No direct socket mount** - Connects through TCP proxy +- **Battle-tested solution** - Used in production environments +- See [DOCKER_SECURITY.md](DOCKER_SECURITY.md) for detailed security information + +### Kubernetes Access +- kubectl, helm, and skaffold installed with checksum verification +- Host kubeconfig mounted **read-only** at `~/.kube` +- Safety wrapper on kubectl to prevent accidental destructive operations +- See [KUBERNETES_SECURITY.md](KUBERNETES_SECURITY.md) for detailed security information + +## Files + +- `devcontainer.json` - Main dev container configuration (uses docker-compose) +- `docker-compose.yml` - Docker compose with socket proxy setup +- `Dockerfile` - Container image definition +- `DOCKER_SECURITY.md` - Docker socket proxy security documentation +- `KUBERNETES_SECURITY.md` - Kubernetes security documentation and best practices +- `SECURITY_QUICKREF.md` - Quick reference for security features +- `DOTFILES.md` - Guide for customizing with personal dotfiles +- `dev-scripts/` - Utility scripts (kubectl wrapper, setup, etc.) +- `home/` - Files mounted into container home directory (your dotfiles go here) +- `vscode/` - VS Code workspace settings (launch.json, settings.json, etc.) + +## Usage + +### Opening the Dev Container + +1. Install VS Code and the "Dev Containers" extension +2. Open the CloudHarness repository in VS Code +3. Click "Reopen in Container" when prompted (or use Command Palette → "Dev Containers: Reopen in Container") + +### First Time Setup + +The container will automatically: +1. Build the Docker image (may take several minutes) +2. Install Python packages and dependencies +3. Set up the virtual environment +4. Configure VS Code settings +5. Set up kubectl safety wrapper + +### Working with Docker + +```bash +# All standard operations work normally +docker ps +docker images +docker build -t myimage . 
+docker run -d --name myapp myimage +docker-compose up + +# Privileged operations are blocked by proxy +docker run --privileged ubuntu # ❌ Blocked at API level +# Error: Forbidden - privileged mode not allowed + +# System directory mounts are blocked +docker run -v /etc:/etc ubuntu # ❌ Blocked by validator +# Error: MOUNT VALIDATION FAILED + +# Workspace mounts work fine +docker run -v /workspace/data:/data ubuntu # ✅ Works +``` + +### Working with Kubernetes + +```bash +# Read-only operations work without prompts +kubectl get pods +kubectl describe deployment myapp +kubectl logs pod/my-pod + +# Destructive operations require confirmation +kubectl delete pod my-pod +# → Shows warning and asks for confirmation + +# Bypass safety wrapper (use with caution) +KUBECTL_SAFE_MODE=off kubectl delete pod my-pod +``` + +### Using Helm and Skaffold + +```bash +# Generate and deploy using CloudHarness tools +harness-deployment cloudharness . + +# Use helm directly +helm list +helm upgrade myapp ./helm/myapp + +# Use skaffold for local development +skaffold dev +``` + +## Mounted Directories + +| Host Path | Container Path | Mode | Purpose | +|-----------|----------------|------|---------| +| Repository root | `/workspace` | RW | Source code | +| `~/.docker` | `/root/.docker` | RO | Docker config/credentials | +| `~/.kube` | `/root/.kube` | RO | Kubernetes config/credentials | +| `/var/run/docker.sock` | `/var/run/docker.sock` | RW | Docker daemon socket | +| `.devcontainer/home` | `/root` | RW | Container home directory | +| `.devcontainer/vscode` | `/workspace/.vscode` | RW | VS Code workspace settings | + +## Environment Variables + +- `PYTHONPATH` - Includes all CloudHarness library paths +- `DOCKER_HOST` - Points to docker-socket-proxy (`tcp://127.0.0.1:2375`) +- `DOCKER_SOCKET_PROXY` - Flag indicating proxy is enabled (`1`) +- `KUBECONFIG` - Points to read-only kubeconfig +- `KUBECTL_SAFE_MODE` - Enables kubectl safety wrapper (`on` by default) + +## Customization + +### Adding More Tools + +Edit the `Dockerfile` and add installation commands. For security: +1. Pin specific versions +2. Verify checksums for downloaded binaries +3. Document the changes + +### VS Code Extensions + +Add extensions to `devcontainer.json`: +```jsonc +"extensions": [ + "publisher.extension-name" +] +``` + +### Shell Configuration + +Files in `home/` directory are mounted to `/root`: +- `home/.bashrc` - Shell configuration +- `home/.gitconfig` - Git configuration +- etc. + +## Troubleshooting + +### Container won't start +- Check Docker daemon is running on host +- Ensure you have permission to access Docker socket +- Try rebuilding: Command Palette → "Dev Containers: Rebuild Container" + +### Kubernetes commands fail +- Verify `~/.kube/config` exists on host +- Check cluster connectivity from host first +- Ensure kubeconfig is valid + +### Python imports not working +- Virtual environment should auto-activate +- Check `PYTHONPATH` environment variable +- Try reloading window: Command Palette → "Developer: Reload Window" + +## Rebuilding the Container + +When you change: +- `Dockerfile` +- `devcontainer.json` (build section) +- Dependencies in requirements files + +Rebuild the container: +1. Command Palette → "Dev Containers: Rebuild Container" +2. 
Or: "Dev Containers: Rebuild Without Cache" for clean rebuild + +## Security Considerations + +This dev container has access to: +- ✅ Host file system (workspace directory) +- ✅ Host Docker daemon (via **secure proxy** - API-level filtering) +- ✅ Host Kubernetes clusters (read-only config) + +Security measures in place: +- **Docker socket proxy** filters all Docker API calls +- **Privileged containers BLOCKED** at API level (cannot bypass) +- **No direct socket mount** - only proxy access +- **Battle-tested security** - used in production environments +- **Kubeconfig is read-only** (cannot modify credentials) +- **kubectl wrapper** prevents accidental destructive operations +- **All tools installed with checksum verification** +- **Docker config is read-only** + +**Important**: The container runs as root inside the container namespace. Files created will have host user permissions due to bind mounts. + +See [DOCKER_SECURITY.md](DOCKER_SECURITY.md) and [KUBERNETES_SECURITY.md](KUBERNETES_SECURITY.md) for detailed security information. + +## Additional Resources + +- [VS Code Dev Containers Documentation](https://code.visualstudio.com/docs/devcontainers/containers) +- [CloudHarness Documentation](../docs/README.md) +- [Docker Security](./DOCKER_SECURITY.md) +- [Kubernetes Security](./KUBERNETES_SECURITY.md) diff --git a/.devcontainer/SECURITY_QUICKREF.md b/.devcontainer/SECURITY_QUICKREF.md new file mode 100755 index 000000000..1e9505626 --- /dev/null +++ b/.devcontainer/SECURITY_QUICKREF.md @@ -0,0 +1,213 @@ +# Security Quick Reference + +## 🔒 Security Features Enabled + +| Feature | Status | Purpose | +|---------|--------|---------|| +| Docker Socket Proxy | ✅ Enabled | API-level filtering, blocks privileged operations | +| Privileged Container Block | ✅ Enabled | Cannot run privileged containers | +| No Direct Socket Mount | ✅ Enabled | Dev container connects via TCP proxy only | +| Kubeconfig Read-Only Mount | ✅ Enabled | Prevents credential theft/modification | +| Production Cluster Filtering | ✅ Enabled | Blocks access to production environments | +| Tool Checksum Verification | ✅ Enabled | Prevents supply chain attacks | +| Kubectl Safety Wrapper | ✅ Enabled | Prevents accidental destructive operations | +| Pinned Tool Versions | ✅ Enabled | Prevents automatic updates to compromised versions | + +## 📝 Quick Commands + +### Docker Operations + +#### Standard Operations (Work Normally) +```bash +docker ps +docker images +docker build -t myapp . 
+docker run -d myapp +docker exec -it myapp bash +docker logs myapp +docker-compose up +``` + +#### Blocked Operations (Cannot Bypass) +```bash +docker run --privileged ubuntu # ❌ Blocked by API proxy +# Error: Forbidden - privileged mode not allowed + +docker run --cap-add=ALL ubuntu # ❌ Blocked by API proxy +docker run --pid=host ubuntu # ❌ Blocked by API proxy + +# Mount validation (client-side) +docker run -v /etc:/etc ubuntu # ❌ Blocked by validator +# Error: MOUNT VALIDATION FAILED - Only /workspace paths allowed + +docker run -v /workspace/data:/data ubuntu # ✅ Allowed +``` + +#### Checking Proxy Status +```bash +# Check if proxy is running +docker ps | grep docker-proxy + +# View proxy logs +docker logs cloudharness-docker-proxy + +# Check docker connection +echo $DOCKER_HOST # Should show: tcp://127.0.0.1:2375 +``` + +### Kubernetes Operations + +#### Safe Operations (No Prompt) +```bash +kubectl get pods +kubectl get nodes +kubectl describe service myapp +kubectl logs -f deployment/myapp +kubectl port-forward service/myapp 8080:80 +``` + +### Destructive Operations (Requires Confirmation) +```bash +kubectl delete pod myapp-xyz # ⚠️ Prompts for confirmation +kubectl apply -f deployment.yaml # ⚠️ Prompts for confirmation +kubectl scale deployment myapp --replicas=3 # ⚠️ Prompts for confirmation +``` + +### Bypass Safety (Use Carefully) +```bash +# For a single command +KUBECTL_SAFE_MODE=off kubectl delete pod myapp-xyz + +# For current shell session +export KUBECTL_SAFE_MODE=off +kubectl delete pod myapp-xyz + +# Re-enable +export KUBECTL_SAFE_MODE=on +``` + +## 🚨 Critical Namespaces (Extra Protection) + +These namespaces trigger additional warnings: +- `kube-system` - Core Kubernetes components +- `kube-public` - Public cluster information +- `kube-node-lease` - Node heartbeat data +- `default` - Default namespace + +## ✅ Security Checklist + +### Docker Security +- [ ] Docker socket proxy is running (`docker ps | grep docker-proxy`) +- [ ] DOCKER_HOST points to proxy (`echo $DOCKER_HOST`) +- [ ] Mount validation is enabled (`echo $DOCKER_MOUNT_VALIDATION`) +- [ ] Privileged operations are blocked (try `docker run --privileged ubuntu`) +- [ ] System directory mounts are blocked (try `docker run -v /etc:/etc ubuntu`) +- [ ] Images pulled from trusted registries +- [ ] Review proxy logs for suspicious activity +- [ ] Read DOCKER_SECURITY.md for detailed information + +### Kubernetes Security +- [ ] Kubeconfig is mounted read-only (cannot write to `~/.kube/config`) +- [ ] Production clusters are filtered out (verify with `kubectl config get-clusters`) +- [ ] Safety wrapper is active (test with `kubectl delete` command) +- [ ] Tools are pinned versions (check with `kubectl version --client`) +- [ ] Read KUBERNETES_SECURITY.md for detailed information +- [ ] Use separate kubeconfig contexts for dev work +- [ ] Never disable safety mode in production contexts + +### Verify Production Filtering +```bash +# Should NOT show any production clusters +kubectl config get-clusters | grep -i production +# Exit code 1 means no production clusters found (good!) +``` + +## 🔧 Troubleshooting + +### "Forbidden" error from Docker +✅ This is expected! The socket proxy blocks dangerous operations like --privileged. +This is a security feature and cannot be bypassed from the dev container. + +### "MOUNT VALIDATION FAILED" error +✅ This is expected! Only /workspace paths can be mounted for security. 
+To bypass temporarily: `DOCKER_MOUNT_VALIDATION=off docker run -v /path:/path ubuntu`
+
+### "Cannot connect to Docker daemon"
+Check if socket proxy is running:
+```bash
+docker ps | grep docker-proxy
+echo $DOCKER_HOST  # Should show: tcp://127.0.0.1:2375
+
+# Restart proxy if needed
+docker-compose restart docker-socket-proxy
+```
+
+### "Read-only file system" when accessing kubeconfig
+✅ This is expected! The kubeconfig is intentionally read-only for security.
+
+### Safety wrapper not asking for confirmation
+Check if KUBECTL_SAFE_MODE is enabled:
+```bash
+echo $KUBECTL_SAFE_MODE  # Should output "on"
+alias kubectl  # Should point to wrapper script
+```
+
+### Need to bypass safety for automation
+```bash
+# Docker - use environment variable
+DOCKER_MOUNT_VALIDATION=off docker run -v /path:/path myimage
+
+# Or use real docker
+/usr/bin/docker run -v /path:/path myimage
+
+# Kubernetes - use environment variable
+KUBECTL_SAFE_MODE=off kubectl apply -f manifests/
+
+# Or use real kubectl
+/usr/local/bin/kubectl apply -f manifests/
+```
+
+## 📚 Documentation
+
+- Docker socket proxy guide: `.devcontainer/DOCKER_SECURITY.md`
+- Kubernetes security guide: `.devcontainer/KUBERNETES_SECURITY.md`
+- Dev container guide: `.devcontainer/README.md`
+- Docker-compose configuration: `.devcontainer/docker-compose.yml`
+
+## 🆘 Emergency Actions
+
+### Suspected Compromise
+```bash
+# 1. Stop container immediately
+docker stop cloudharness-dev
+
+# 2. Check cluster for suspicious changes
+kubectl get all --all-namespaces
+kubectl get secrets --all-namespaces
+
+# 3. Rotate credentials
+# Contact cluster admin to regenerate certificates
+```
+
+### Accidental Destructive Operation
+```bash
+# If you have kubectl operations in shell history:
+history | grep kubectl
+
+# Check cluster audit logs (if enabled)
+# Contact cluster admin for assistance
+```
+
+## 🔄 Current Tool Versions
+
+| Tool | Version | Installed |
+|------|---------|-----------|
+| kubectl | v1.34.2 | ✅ |
+| Helm | v4.0.0 | ✅ |
+| Skaffold | v2.14.2 | ✅ |
+
+**Note**: These versions are pinned in the `Dockerfile` and verified with SHA256 checksums during installation.
+
+---
+
+**Remember**: Security is a shared responsibility. Use these tools wisely and follow best practices.
diff --git a/.devcontainer/dev-scripts/README.md b/.devcontainer/dev-scripts/README.md
new file mode 100755
index 000000000..2bb40f4b1
--- /dev/null
+++ b/.devcontainer/dev-scripts/README.md
@@ -0,0 +1,59 @@
+# Development Scripts
+
+This directory contains scripts to help with the CloudHarness development environment.
+
+## Runtime Virtual Environment
+
+The CloudHarness development container comes with all CloudHarness libraries and common dependencies pre-installed globally. However, if you need to install additional packages for development or testing, you should use the runtime virtual environment to avoid conflicts.
+
+## VS Code Integration
+
+When using VS Code with dev containers, the virtual environment is automatically configured:
+
+- The Python interpreter is set to `/root/.local/venv/bin/python`
+- New terminals automatically activate the virtual environment
+- The Python analysis paths include both the virtual environment and CloudHarness libraries
+- All Python extensions work seamlessly with the virtual environment
+
+### Usage
+
+1. **Set up and activate the runtime virtual environment:**
+   ```bash
+   /usr/local/share/dev-scripts/runtime-venv.sh
+   ```
+
+2. **Activate an existing runtime virtual environment:**
+   ```bash
+   source /usr/local/share/dev-scripts/use-venv
+   ```
+
+3. 
**VS Code setup (automatically runs in dev container):** + ```bash + /usr/local/share/dev-scripts/vscode-setup.sh + ``` + +4. **Install additional packages (while venv is active):** + ```bash + pip install + ``` + +5. **Deactivate the virtual environment:** + ```bash + deactivate + ``` + +### How it works + +- The runtime virtual environment is created in `$HOME/.local/venv` inside the container +- This location is persisted if you mount a home directory volume +- Global CloudHarness libraries remain accessible due to the PYTHONPATH configuration +- Additional packages installed in the virtual environment take precedence when there are conflicts +- The virtual environment inherits from the global site-packages, so you still have access to all pre-installed libraries +- VS Code dev container automatically configures the Python interpreter and analysis paths + +### Best Practices + +- Use the runtime virtual environment for experimental packages or project-specific dependencies +- Keep the global environment clean by not installing additional packages directly with pip outside the venv +- Document any additional dependencies in your project's requirements.txt file +- The virtual environment is automatically activated in new VS Code terminals when using dev containers diff --git a/.devcontainer/dev-scripts/common-bashrc.sh b/.devcontainer/dev-scripts/common-bashrc.sh new file mode 100755 index 000000000..9e67240f4 --- /dev/null +++ b/.devcontainer/dev-scripts/common-bashrc.sh @@ -0,0 +1,204 @@ +#!/bin/bash +# Common bash configuration for CloudHarness dev container +# Contributors can add useful aliases and functions here + +# ============================================================================ +# Common Aliases +# ============================================================================ + +# Enhanced ls +alias ll='ls -alFh' +alias la='ls -A' +alias l='ls -CF' + +# Git shortcuts +alias gs='git status' +alias ga='git add' +alias gc='git commit' +alias gp='git push' +alias gl='git log --oneline --graph --decorate' +alias gd='git diff' +alias gco='git checkout' +alias gb='git branch' + +# Docker shortcuts +alias d='docker' +alias dc='docker-compose' +alias dps='docker ps' +alias dpsa='docker ps -a' +alias di='docker images' +alias dex='docker exec -it' +alias dlogs='docker logs -f' + +# Kubernetes shortcuts +alias k='kubectl' +alias kgp='kubectl get pods' +alias kgs='kubectl get services' +alias kgd='kubectl get deployments' +alias kgn='kubectl get nodes' +alias kdp='kubectl describe pod' +alias kds='kubectl describe service' +alias kdd='kubectl describe deployment' +alias kexec='kubectl exec -it' + +# Directory navigation +alias ..='cd ..' +alias ...='cd ../..' +alias ....='cd ../../..' 
+alias ~='cd ~' + +# Safety +alias rm='rm -i' +alias cp='cp -i' +alias mv='mv -i' + +# Editor +alias vim='nvim' +alias vi='nvim' + +# Misc +alias h='history' +alias c='clear' +alias reload='source ~/.bashrc' +alias path='echo -e ${PATH//:/\\n}' +alias ports='netstat -tulanp' + +# ============================================================================ +# Useful Functions +# ============================================================================ + +# Create directory and cd into it +mkcd() { + mkdir -p "$1" && cd "$1" +} + +# Extract various archive formats +extract() { + if [ -f "$1" ]; then + case "$1" in + *.tar.bz2) tar xjf "$1" ;; + *.tar.gz) tar xzf "$1" ;; + *.bz2) bunzip2 "$1" ;; + *.rar) unrar x "$1" ;; + *.gz) gunzip "$1" ;; + *.tar) tar xf "$1" ;; + *.tbz2) tar xjf "$1" ;; + *.tgz) tar xzf "$1" ;; + *.zip) unzip "$1" ;; + *.Z) uncompress "$1" ;; + *.7z) 7z x "$1" ;; + *) echo "'$1' cannot be extracted via extract()" ;; + esac + else + echo "'$1' is not a valid file" + fi +} + +# Quick find +qfind() { + find . -name "*$1*" +} + +# Git clone and cd +gclone() { + git clone "$1" && cd "$(basename "$1" .git)" +} + +# Docker quick cleanup +dclean() { + docker system prune -af --volumes +} + +# Kubernetes namespace quick switch +kns() { + if [ -z "$1" ]; then + kubectl config view --minify --output 'jsonpath={..namespace}' + echo + else + kubectl config set-context --current --namespace="$1" + fi +} + +# Show kubectl current context +kctx() { + if [ -z "$1" ]; then + kubectl config current-context + else + kubectl config use-context "$1" + fi +} + +# Port forward shortcut +kpf() { + local pod=$1 + local port=${2:-8080} + kubectl port-forward "$pod" "$port:$port" +} + +# Logs with namespace +klogs() { + local pod=$1 + shift + kubectl logs -f "$pod" "$@" +} + +# Quick Python virtual environment activation +venv() { + if [ -d "venv" ]; then + source venv/bin/activate + elif [ -d ".venv" ]; then + source .venv/bin/activate + else + echo "No virtual environment found (venv or .venv)" + fi +} + +# CloudHarness specific shortcuts +alias ch-deploy='harness-deployment' +alias ch-build='harness-deployment cloudharness . && skaffold build' +alias ch-dev='skaffold dev' + +# ============================================================================ +# Environment Configuration +# ============================================================================ + +# Better history +export HISTSIZE=10000 +export HISTFILESIZE=20000 +export HISTCONTROL=ignoreboth:erasedups +shopt -s histappend + +# Better autocomplete +bind 'set completion-ignore-case on' +bind 'set show-all-if-ambiguous on' + +# ============================================================================ +# Tool Integrations +# ============================================================================ + +# fzf integration (if installed) +if command -v fzf &> /dev/null; then + # CTRL-T: Paste the selected file path into the command line + # CTRL-R: Search through command history + # ALT-C: cd into the selected directory + [ -f /usr/share/doc/fzf/examples/key-bindings.bash ] && source /usr/share/doc/fzf/examples/key-bindings.bash + [ -f /usr/share/doc/fzf/examples/completion.bash ] && source /usr/share/doc/fzf/examples/completion.bash +fi + +# NOTE: Starship and Atuin are initialized in ~/.bashrc +# in the correct order to ensure compatibility + +# asdf version manager (if installed) +if [ -f "$HOME/.asdf/asdf.sh" ]; then + . "$HOME/.asdf/asdf.sh" + . 
"$HOME/.asdf/completions/asdf.bash" +fi + +# ============================================================================ +# Welcome Message +# ============================================================================ + +echo "CloudHarness Development Container" +echo "-----------------------------------" +echo "Common aliases loaded. Type 'alias' to see all available shortcuts." +echo "" diff --git a/.devcontainer/dev-scripts/docker-mount-validator.sh b/.devcontainer/dev-scripts/docker-mount-validator.sh new file mode 100755 index 000000000..7b7e70f49 --- /dev/null +++ b/.devcontainer/dev-scripts/docker-mount-validator.sh @@ -0,0 +1,130 @@ +#!/bin/bash +# Docker mount validator for socket proxy setup +# This provides client-side validation of mount paths before API calls +# Use: alias docker='/usr/local/share/dev-scripts/docker-mount-validator.sh' + +# Location of real docker binary +REAL_DOCKER="/usr/bin/docker" + +# Define allowed mount paths (workspace only) +ALLOWED_MOUNT_PATHS=( + "/workspace" + "${WORKSPACE_ROOT:-/workspace}" +) + +# Define blocked mount paths (everything else on host) +BLOCKED_MOUNT_PATHS=( + "/" + "/root" + "/home" + "/etc" + "/usr" + "/var" + "/bin" + "/sbin" + "/boot" + "/sys" + "/proc" + "/dev" + "/opt" + "/srv" + "/mnt" + "/media" +) + +# Skip validation if disabled +if [[ "${DOCKER_MOUNT_VALIDATION}" == "off" ]]; then + exec "$REAL_DOCKER" "$@" +fi + +# Only validate 'run' and 'create' commands +OPERATION="${1:-}" +if [[ "$OPERATION" != "run" ]] && [[ "$OPERATION" != "create" ]]; then + exec "$REAL_DOCKER" "$@" +fi + +# Function to check mount path safety +check_mount_path() { + local source_path="$1" + + # Skip if it's a named volume (no leading /) + if [[ "$source_path" != /* ]]; then + return 0 + fi + + # Check if it's in blocked paths + for blocked in "${BLOCKED_MOUNT_PATHS[@]}"; do + if [[ "$source_path" == "$blocked" ]] || [[ "$source_path" == "$blocked"/* ]]; then + # Check if it's within an allowed subpath + local is_allowed=false + for allowed in "${ALLOWED_MOUNT_PATHS[@]}"; do + if [[ "$source_path" == "$allowed"* ]]; then + is_allowed=true + break + fi + done + + if [[ "$is_allowed" == false ]]; then + echo "❌ MOUNT VALIDATION FAILED:" + echo " Attempting to mount: $source_path" + echo " Only /workspace paths are allowed for security." + echo "" + echo "To bypass this check (NOT RECOMMENDED):" + echo " DOCKER_MOUNT_VALIDATION=off docker $*" + return 1 + fi + fi + done + + # Check if it's explicitly allowed + local is_allowed=false + for allowed in "${ALLOWED_MOUNT_PATHS[@]}"; do + if [[ "$source_path" == "$allowed"* ]]; then + is_allowed=true + break + fi + done + + if [[ "$is_allowed" == false ]]; then + echo "❌ MOUNT VALIDATION FAILED:" + echo " Attempting to mount: $source_path" + echo " Only /workspace paths are allowed for security." + echo "" + echo "To bypass this check (NOT RECOMMENDED):" + echo " DOCKER_MOUNT_VALIDATION=off docker $*" + return 1 + fi + + return 0 +} + +# Parse arguments to find mounts +args=("$@") +for i in "${!args[@]}"; do + arg="${args[$i]}" + + # Check -v and --volume flags + if [[ "$arg" == "-v" ]] || [[ "$arg" == "--volume" ]]; then + mount_spec="${args[$((i+1))]}" + source_path="${mount_spec%%:*}" + + if ! check_mount_path "$source_path"; then + exit 1 + fi + fi + + # Check --mount flag + if [[ "$arg" == "--mount" ]]; then + mount_spec="${args[$((i+1))]}" + if [[ "$mount_spec" =~ source=([^,]+) ]]; then + source_path="${BASH_REMATCH[1]}" + + if ! 
check_mount_path "$source_path"; then + exit 1 + fi + fi + fi +done + +# All checks passed, execute docker command +exec "$REAL_DOCKER" "$@" diff --git a/.devcontainer/dev-scripts/kubectl-wrapper.sh b/.devcontainer/dev-scripts/kubectl-wrapper.sh new file mode 100755 index 000000000..830bb27c1 --- /dev/null +++ b/.devcontainer/dev-scripts/kubectl-wrapper.sh @@ -0,0 +1,104 @@ +#!/bin/bash +# Kubectl wrapper with safety features to prevent accidental cluster modifications +# This wrapper adds confirmation prompts for destructive operations + +# Define destructive operations that require confirmation +DESTRUCTIVE_OPERATIONS=( + "delete" + "apply" + "create" + "replace" + "patch" + "edit" + "drain" + "cordon" + "taint" + "label" + "annotate" + "scale" + "rollout" + "set" +) + +# Define critical namespaces that should have extra protection +CRITICAL_NAMESPACES=( + "kube-system" + "kube-public" + "kube-node-lease" + "default" +) + +# Function to check if operation is destructive +is_destructive() { + local operation="$1" + for op in "${DESTRUCTIVE_OPERATIONS[@]}"; do + if [[ "$operation" == "$op" ]]; then + return 0 + fi + done + return 1 +} + +# Function to check if namespace is critical +is_critical_namespace() { + local namespace="$1" + for ns in "${CRITICAL_NAMESPACES[@]}"; do + if [[ "$namespace" == "$ns" ]]; then + return 0 + fi + done + return 1 +} + +# Function to extract namespace from arguments +get_namespace() { + local args=("$@") + for i in "${!args[@]}"; do + if [[ "${args[$i]}" == "-n" ]] || [[ "${args[$i]}" == "--namespace" ]]; then + echo "${args[$((i+1))]}" + return + fi + done + echo "default" +} + +# Skip safety checks if KUBECTL_SAFE_MODE is disabled +if [[ "${KUBECTL_SAFE_MODE}" == "off" ]]; then + exec /usr/local/bin/kubectl "$@" +fi + +# Check if this is a read-only operation +OPERATION="${1:-}" +if [[ -z "$OPERATION" ]] || ! is_destructive "$OPERATION"; then + # Safe operation, execute directly + exec /usr/local/bin/kubectl "$@" +fi + +# Extract namespace +NAMESPACE=$(get_namespace "$@") + +# Provide warning for destructive operations +echo "⚠️ WARNING: You are about to execute a potentially destructive kubectl operation:" +echo " Operation: $OPERATION" +echo " Namespace: $NAMESPACE" +echo " Full command: kubectl $*" + +# Extra warning for critical namespaces +if is_critical_namespace "$NAMESPACE"; then + echo "" + echo "🚨 CRITICAL NAMESPACE ALERT: '$NAMESPACE' is a system-critical namespace!" +fi + +echo "" +echo "The kubeconfig is mounted read-only, but cluster resources can still be modified." +echo "" +read -p "Are you sure you want to proceed? (yes/no): " -r +echo + +if [[ ! $REPLY =~ ^[Yy][Ee][Ss]$ ]]; then + echo "Operation cancelled." + exit 1 +fi + +# Execute the actual kubectl command +exec /usr/local/bin/kubectl "$@" diff --git a/.devcontainer/dev-scripts/post-create.sh b/.devcontainer/dev-scripts/post-create.sh new file mode 100755 index 000000000..d46094ad0 --- /dev/null +++ b/.devcontainer/dev-scripts/post-create.sh @@ -0,0 +1,351 @@ +#!/bin/bash +# Post-create setup script for CloudHarness dev container + +set -e + +echo "Setting up CloudHarness development environment..." + +# Make scripts executable +chmod +x /usr/local/share/dev-scripts/*.sh + +# Run vscode setup +/usr/local/share/dev-scripts/vscode-setup.sh + +# Setup Kubernetes access +/usr/local/share/dev-scripts/setup-docker-desktop-kube.sh + +# Setup Docker config (remove incompatible credential helpers) +echo "Configuring Docker credentials..." 
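+# A host config.json usually looks something like this (hypothetical example):
+#   { "auths": { "https://index.docker.io/v1/": {} }, "credsStore": "desktop" }
+# Credential helpers such as docker-credential-desktop only exist on the host,
+# so below we copy the config and strip the "credsStore" key to keep the rest usable in the container.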
+mkdir -p /root/.docker-container +if [ -f /root/.docker/config.json ]; then + # Copy and sanitize the Docker config using jq to properly handle JSON + if command -v jq &> /dev/null; then + jq 'del(.credsStore)' /root/.docker/config.json > /root/.docker-container/config.json + else + # Fallback: use Python to remove credsStore + python3 -c "import json, sys; config=json.load(open('/root/.docker/config.json')); config.pop('credsStore', None); json.dump(config, sys.stdout, indent=2)" > /root/.docker-container/config.json + fi +else + # Create minimal config if none exists + echo '{}' > /root/.docker-container/config.json +fi + +# Download bash-preexec if it doesn't exist (required for atuin/starship) +if [ ! -f ~/.bash-preexec.sh ]; then + echo "Downloading bash-preexec..." + curl -sL https://raw.githubusercontent.com/rcaloras/bash-preexec/master/bash-preexec.sh -o ~/.bash-preexec.sh +fi + +# Initialize bashrc if needed +if [ ! -f ~/.bashrc ] || [ ! -s ~/.bashrc ] || ! grep -q 'common-bashrc' ~/.bashrc; then + echo "Initializing ~/.bashrc..." + + cat > ~/.bashrc << 'EOF' +# CloudHarness Dev Container Bash Configuration + +# Enable colors for ls and grep +export LS_COLORS='di=1;34:ln=1;36:so=1;35:pi=1;33:ex=1;32:bd=1;33:cd=1;33:su=1;31:sg=1;31:tw=1;34:ow=1;34' +alias ls='ls --color=auto' +alias grep='grep --color=auto' + +# Setup Docker Desktop Kubernetes access (sources KUBECONFIG) +source /usr/local/share/dev-scripts/setup-docker-desktop-kube.sh + +# Kubectl safety wrapper +alias kubectl='/usr/local/share/dev-scripts/kubectl-wrapper.sh' +export KUBECTL_SAFE_MODE=on + +# Docker security is handled by docker-socket-proxy +# Additional client-side mount validation for extra protection +if [ -n "$DOCKER_SOCKET_PROXY" ]; then + echo "Docker access secured via socket proxy at $DOCKER_HOST" + alias docker='/usr/local/share/dev-scripts/docker-mount-validator.sh' + export DOCKER_MOUNT_VALIDATION=on + export WORKSPACE_ROOT=/workspace +fi + +# Source common bash configuration (aliases and functions) +if [ -f /usr/local/share/dev-scripts/common-bashrc.sh ]; then + source /usr/local/share/dev-scripts/common-bashrc.sh +fi + +# Source the virtual environment (must use source, not bash) +if [ -f /usr/local/share/dev-scripts/use-venv ]; then + source /usr/local/share/dev-scripts/use-venv +fi + +# Load bash-preexec (required for atuin and starship to work with VS Code) +if [ -f ~/.bash-preexec.sh ]; then + source ~/.bash-preexec.sh +fi + +# Initialize Atuin BEFORE Starship (order matters!) 
+# Atuin needs bash-preexec and to hook into PROMPT_COMMAND before Starship
+if command -v atuin &> /dev/null; then
+    eval "$(atuin init bash)"
+fi
+
+# Initialize Starship prompt (must be after Atuin)
+# Starship will preserve existing PROMPT_COMMAND entries
+if command -v starship &> /dev/null; then
+    export STARSHIP_CONFIG=~/.config/starship.toml
+    eval "$(starship init bash)"
+fi
+
+# Fix for VS Code shell integration overriding bash-preexec's DEBUG trap
+# VS Code loads its shell integration after bashrc, so we restore the trap on each prompt
+__restore_bash_preexec_trap() {
+    if declare -F __bp_preexec_invoke_exec &>/dev/null; then
+        local current_trap=$(trap -p DEBUG)
+        if [[ "$current_trap" != *"__bp_preexec_invoke_exec"* ]]; then
+            trap '__bp_preexec_invoke_exec "$_"' DEBUG
+        fi
+    fi
+}
+
+# Add the trap restoration to PROMPT_COMMAND
+if [[ "$PROMPT_COMMAND" != *"__restore_bash_preexec_trap"* ]]; then
+    PROMPT_COMMAND="__restore_bash_preexec_trap${PROMPT_COMMAND:+; $PROMPT_COMMAND}"
+fi
+
+# asdf version manager
+if [ -f "$HOME/.asdf/asdf.sh" ]; then
+    . "$HOME/.asdf/asdf.sh"
+    . "$HOME/.asdf/completions/asdf.bash"
+fi
+EOF
+fi
+
+# Initialize Atuin (shell history database)
+if command -v atuin &> /dev/null; then
+    echo "Initializing Atuin shell history..."
+
+    # Create atuin data directory if it doesn't exist
+    mkdir -p ~/.local/share/atuin ~/.config/atuin
+
+    # Generate default config with explicit paths (only if it doesn't exist)
+    if [ ! -f ~/.config/atuin/config.toml ]; then
+        cat > ~/.config/atuin/config.toml << 'ATUINCONF'
+## Atuin configuration for CloudHarness dev container
+
+## Explicitly set paths to avoid VS Code XDG_DATA_HOME issues
+db_path = "~/.local/share/atuin/history.db"
+key_path = "~/.local/share/atuin/key"
+session_path = "~/.local/share/atuin/session"
+
+dialect = "uk"
+auto_sync = false
+update_check = false
+search_mode = "fuzzy"
+filter_mode = "global"
+style = "auto"
+inline_height = 20
+show_preview = true
+exit_mode = "return-original"
+max_preview_height = 4
+show_help = true
+secrets_filter = true
+enter_accept = true
+history_filter = ["^secret", "^password", "AWS_SECRET", "KUBECONFIG"]
+ATUINCONF
+    fi
+
+    # Import existing bash history
+    if [ -f ~/.bash_history ]; then
+        echo "Importing bash history into Atuin..."
+        atuin import auto 2>/dev/null || true
+    fi
+fi
+
+# Create Starship config if it doesn't exist
+if [ ! -f ~/.config/starship.toml ]; then
+    echo "Creating Starship prompt configuration..."
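+    # Written to ~/.config/starship.toml, the path exported as STARSHIP_CONFIG in ~/.bashrc above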
+ mkdir -p ~/.config + cat > ~/.config/starship.toml << 'STARSHIPCONF' +# Starship prompt configuration for CloudHarness dev container +# Documentation: https://starship.rs/config/ + +# Timeout for starship to run (in milliseconds) +command_timeout = 1000 + +# Add a new line before the prompt +add_newline = true + +# Format of the prompt +format = """ +[╭─](bold green)$username\ +$hostname\ +$directory\ +$git_branch\ +$git_status\ +$python\ +$nodejs\ +$docker_context\ +$kubernetes +[╰─](bold green)$character""" + +[character] +success_symbol = "[➜](bold green)" +error_symbol = "[✗](bold red)" + +[username] +style_user = "bold yellow" +style_root = "bold red" +format = "[$user]($style) " +disabled = false +show_always = true + +[hostname] +ssh_only = false +format = "[@$hostname](bold blue) " +disabled = false + +[directory] +truncation_length = 3 +truncate_to_repo = true +format = "[$path]($style)[$read_only]($read_only_style) " +style = "bold cyan" +read_only = " 🔒" + +[git_branch] +symbol = " " +format = "on [$symbol$branch]($style) " +style = "bold purple" + +[git_status] +format = '([\[$all_status$ahead_behind\]]($style) )' +style = "bold red" +conflicted = "🏳" +ahead = "⇡${count}" +behind = "⇣${count}" +diverged = "⇕⇡${ahead_count}⇣${behind_count}" +untracked = "?${count}" +stashed = "$${count}" +modified = "!${count}" +staged = "+${count}" +renamed = "»${count}" +deleted = "✘${count}" + +[python] +symbol = " " +format = 'via [${symbol}${pyenv_prefix}(${version} )(\($virtualenv\) )]($style)' +style = "yellow" +pyenv_version_name = false +detect_extensions = ["py"] +detect_files = [".python-version", "Pipfile", "__pycache__", "pyproject.toml", "requirements.txt", "setup.py", "tox.ini"] +detect_folders = [] + +[nodejs] +symbol = " " +format = "via [$symbol($version )]($style)" +style = "bold green" + +[docker_context] +symbol = " " +format = "via [$symbol$context]($style) " +style = "blue bold" +only_with_files = true +detect_files = ["docker-compose.yml", "docker-compose.yaml", "Dockerfile"] +detect_folders = [] + +[kubernetes] +symbol = "☸ " +format = 'on [$symbol$context( \($namespace\))]($style) ' +style = "cyan bold" +disabled = false +detect_files = ["k8s"] +detect_folders = ["k8s"] + +[kubernetes.context_aliases] +"docker-desktop" = "🐳 desktop" +"kind-.*" = "kind" +"minikube" = "mini" + +[cmd_duration] +min_time = 500 +format = "took [$duration](bold yellow) " + +[time] +disabled = false +format = '🕙[\[ $time \]]($style) ' +time_format = "%T" +style = "bold white" + +[memory_usage] +disabled = true +threshold = -1 +symbol = " " +format = "via $symbol[${ram_pct}]($style) " +style = "bold dimmed white" + +[package] +disabled = true +STARSHIPCONF +fi + +# Create tmux config if it doesn't exist +if [ ! -f ~/.tmux.conf ]; then + cat > ~/.tmux.conf << 'EOF' +# CloudHarness tmux configuration + +# Remap prefix to Ctrl-a +unbind C-b +set-option -g prefix C-a +bind-key C-a send-prefix + +# Split panes using | and - +bind | split-window -h +bind - split-window -v +unbind '"' +unbind % + +# Reload config file +bind r source-file ~/.tmux.conf \; display "Config reloaded!" 
+ +# Switch panes using Alt-arrow without prefix +bind -n M-Left select-pane -L +bind -n M-Right select-pane -R +bind -n M-Up select-pane -U +bind -n M-Down select-pane -D + +# Enable mouse mode +set -g mouse on + +# Don't rename windows automatically +set-option -g allow-rename off + +# Start windows and panes at 1, not 0 +set -g base-index 1 +setw -g pane-base-index 1 + +# Status bar +set -g status-position bottom +set -g status-style 'bg=colour234 fg=colour137' +set -g status-left '' +set -g status-right '#[fg=colour233,bg=colour241] %d/%m #[fg=colour233,bg=colour245] %H:%M:%S ' +set -g status-right-length 50 +set -g status-left-length 20 + +# Active window +setw -g window-status-current-style 'fg=colour81 bg=colour238 bold' +setw -g window-status-current-format ' #I#[fg=colour250]:#[fg=colour255]#W#[fg=colour50]#F ' + +# Inactive windows +setw -g window-status-style 'fg=colour138 bg=colour235' +setw -g window-status-format ' #I#[fg=colour237]:#[fg=colour250]#W#[fg=colour244]#F ' +EOF +fi + +echo "" +echo "✓ CloudHarness development environment ready!" +echo "" +echo "Available tools:" +echo " - kubectl, helm, skaffold, k9s (Kubernetes)" +echo " - nvim (Neovim with sensible defaults)" +echo " - tmux (terminal multiplexer)" +echo " - starship (beautiful prompt)" +echo " - atuin (shell history)" +echo " - htop, tig, gmap, rhttp" +echo "" +echo "Type 'alias' to see common shortcuts or check /usr/local/share/dev-scripts/common-bashrc.sh" +echo "" diff --git a/.devcontainer/dev-scripts/runtime-venv.sh b/.devcontainer/dev-scripts/runtime-venv.sh new file mode 100755 index 000000000..b62011cc0 --- /dev/null +++ b/.devcontainer/dev-scripts/runtime-venv.sh @@ -0,0 +1,23 @@ +#!/bin/bash + +# Script to set up and activate a runtime virtual environment +# This allows users to install additional packages at runtime without affecting the global environment + +VENV_DIR="$HOME/.local/venv" + +# Create virtual environment if it doesn't exist +if [ ! -d "$VENV_DIR" ]; then + echo "Creating runtime virtual environment at $VENV_DIR..." + python -m venv --system-site-packages "$VENV_DIR" +fi + +# Activate the virtual environment +echo "Activating runtime virtual environment..." +source "$VENV_DIR/bin/activate" + +# Ensure pip is up to date in the virtual environment +pip install --upgrade pip + +echo "Runtime virtual environment is now active." +echo "You can install additional packages with 'pip install '" +echo "To deactivate, run 'deactivate'" \ No newline at end of file diff --git a/.devcontainer/dev-scripts/setup-docker-desktop-kube.sh b/.devcontainer/dev-scripts/setup-docker-desktop-kube.sh new file mode 100755 index 000000000..abd8c3d42 --- /dev/null +++ b/.devcontainer/dev-scripts/setup-docker-desktop-kube.sh @@ -0,0 +1,110 @@ +#!/bin/bash +# Setup script to make Docker Desktop Kubernetes accessible from dev container +# with security filtering to block production cluster access + +if [ ! -f ~/.kube/config ]; then + echo "ℹ No kubeconfig found at ~/.kube/config" + return 0 +fi + +echo "Configuring Kubernetes access with security filtering..." 
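+# ~/.kube/config is mounted read-only from the host; this script writes a filtered
+# copy (production/prod clusters, contexts and users removed) to ~/.kube-container,
+# which is the path KUBECONFIG points at inside the container.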
+ +# Create directory for filtered config +mkdir -p ~/.kube-container + +# Filter out production clusters using Python +python3 - <<'PYTHON_EOF' +import yaml +import sys + +# Patterns to block (production environments) +BLOCKED_PATTERNS = [ + 'production', + 'prod', +] + +def should_block(name): + """Check if a cluster/context name should be blocked""" + name_lower = name.lower() + return any(pattern.lower() in name_lower for pattern in BLOCKED_PATTERNS) + +try: + # Read original kubeconfig + with open('/root/.kube/config', 'r') as f: + config = yaml.safe_load(f) + + blocked_items = [] + + # Filter clusters + if 'clusters' in config: + original_count = len(config['clusters']) + config['clusters'] = [c for c in config['clusters'] if not should_block(c.get('name', ''))] + blocked_clusters = original_count - len(config['clusters']) + if blocked_clusters > 0: + blocked_items.append(f"{blocked_clusters} cluster(s)") + + # Filter contexts + if 'contexts' in config: + original_count = len(config['contexts']) + config['contexts'] = [c for c in config['contexts'] + if not should_block(c.get('name', '')) and + not should_block(c.get('context', {}).get('cluster', ''))] + blocked_contexts = original_count - len(config['contexts']) + if blocked_contexts > 0: + blocked_items.append(f"{blocked_contexts} context(s)") + + # Filter users (remove users only used by blocked contexts) + if 'users' in config: + # Get remaining context user names + remaining_users = set() + for ctx in config.get('contexts', []): + user = ctx.get('context', {}).get('user') + if user: + remaining_users.add(user) + + original_count = len(config['users']) + config['users'] = [u for u in config['users'] + if u.get('name') in remaining_users or not should_block(u.get('name', ''))] + blocked_users = original_count - len(config['users']) + if blocked_users > 0: + blocked_items.append(f"{blocked_users} user(s)") + + # Check if current context was blocked + current_context = config.get('current-context', '') + if current_context and should_block(current_context): + # Try to switch to docker-desktop, otherwise first available + if any(c.get('name') == 'docker-desktop' for c in config.get('contexts', [])): + config['current-context'] = 'docker-desktop' + print(" ℹ Switched to docker-desktop context", file=sys.stderr) + elif config.get('contexts'): + config['current-context'] = config['contexts'][0]['name'] + print(f" ℹ Switched to {config['contexts'][0]['name']} context", file=sys.stderr) + else: + config['current-context'] = '' + + # Write filtered config + with open('/root/.kube-container/config', 'w') as f: + yaml.dump(config, f, default_flow_style=False, sort_keys=False) + + # Print blocked items + if blocked_items: + print(f" 🔒 Blocked: {', '.join(blocked_items)}", file=sys.stderr) + else: + print(" ✓ No production clusters found", file=sys.stderr) + +except Exception as e: + print(f" ⚠ Error filtering kubeconfig: {e}", file=sys.stderr) + # Fallback: just copy the config + import shutil + shutil.copy('/root/.kube/config', '/root/.kube-container/config') + sys.exit(0) + +PYTHON_EOF + +# Set KUBECONFIG to use the filtered copy +export KUBECONFIG=~/.kube-container/config + +echo "✓ Kubernetes configured with security filtering" +echo " Production clusters are NOT accessible from dev container" +echo " Filtered config at ~/.kube-container/config" +echo "" diff --git a/.devcontainer/dev-scripts/use-venv b/.devcontainer/dev-scripts/use-venv new file mode 100755 index 000000000..6fcd2563e --- /dev/null +++ b/.devcontainer/dev-scripts/use-venv 
@@ -0,0 +1,24 @@ +#!/bin/bash + +# Wrapper script to source the runtime virtual environment +# Usage: source /usr/local/share/dev-scripts/use-venv + +VENV_DIR="$HOME/.local/venv" + +if [ -d "$VENV_DIR" ]; then + # Check if we're already in the virtual environment + if [[ "$VIRTUAL_ENV" != "$VENV_DIR" ]]; then + # Prevent venv from modifying PS1 + export VIRTUAL_ENV_DISABLE_PROMPT=1 + source "$VENV_DIR/bin/activate" + # Customize prompt with colors: green (venv), cyan user, blue path + # export PS1="\[\033[32m\](venv)\[\033[0m\] \[\033[36m\]\u\[\033[0m\]:\[\033[34m\]\w\[\033[0m\]\$ " + # Only show message in interactive shells on first activation + if [[ $- == *i* ]] && [[ -z "$VENV_ACTIVATED" ]]; then + echo "Runtime virtual environment activated." + export VENV_ACTIVATED=1 + fi + fi +else + echo "No runtime virtual environment found. Run '/usr/local/share/dev-scripts/runtime-venv.sh' first." +fi \ No newline at end of file diff --git a/.devcontainer/dev-scripts/vscode-setup.sh b/.devcontainer/dev-scripts/vscode-setup.sh new file mode 100755 index 000000000..6d0fa27e5 --- /dev/null +++ b/.devcontainer/dev-scripts/vscode-setup.sh @@ -0,0 +1,22 @@ +#!/bin/bash + +# VS Code Python environment setup script +# This script ensures the virtual environment is properly set up for VS Code integration +echo "Running VS Code Python environment setup..." +VENV_DIR="$HOME/.local/venv" + +# Create the virtual environment if it doesn't exist +if [ ! -d "$VENV_DIR" ]; then + echo "Creating Python virtual environment for VS Code..." + python -m venv --system-site-packages "$VENV_DIR" +fi + +# Create a Python interpreter symlink that VS Code can reliably find +mkdir -p "$HOME/.local/bin" +ln -sf "$VENV_DIR/bin/python" "$HOME/.local/bin/python-venv" + +# Ensure the virtual environment has necessary development packages +"$VENV_DIR/bin/pip" install --upgrade pip setuptools wheel + +echo "Python virtual environment ready for VS Code integration." 
+echo "Python interpreter: $VENV_DIR/bin/python" diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json new file mode 100755 index 000000000..d532c2594 --- /dev/null +++ b/.devcontainer/devcontainer.json @@ -0,0 +1,67 @@ +{ + "name": "CloudHarness Development", + + + "dockerComposeFile": "docker-compose.yml", + "service": "app", + "workspaceFolder": "/workspace", + "shutdownAction": "none", + "forwardPorts": [3000, 5000, 8000, 8100, 9000, 9100, 9200], + "postCreateCommand": "bash /usr/local/share/dev-scripts/post-create.sh", + "features": { + "ghcr.io/devcontainers/features/github-cli:1": {} + }, + "customizations": { + "vscode": { + "settings": { + "terminal.integrated.defaultProfile.linux": "bash", + "python.defaultInterpreterPath": "/root/.local/venv/bin/python", + "python.terminal.activateEnvironment": true, + "python.terminal.activateEnvInCurrentTerminal": true, + // Disable VS Code shell integration - conflicts with bash-preexec (required for atuin/starship) + "terminal.integrated.shellIntegration.enabled": false, + "terminal.integrated.env.linux": { + "XDG_DATA_HOME": "${env:HOME}/.local/share" + }, + "extensions.autoUpdate": false, + "extensions.autoCheckUpdates": false, + "python.analysis.extraPaths": [ + "/root/.local/venv/lib/python3.12/site-packages", + "/workspace/libraries/models", + "/workspace/libraries/cloudharness-utils", + "/workspace/libraries/cloudharness-common", + "/workspace/libraries/client/cloudharness_cli", + "/workspace/tools/deployment-cli-tools" + ], + "github.copilot.enable": { + "*": true, + "yaml": true, + "plaintext": true, + "markdown": true, + "python": true, + "javascript": true, + "typescript": true, + "json": true, + "jsonc": true + }, + "remote.extensionKind": { + "GitHub.copilot": ["ui"], + "GitHub.copilot-chat": ["ui"] + } + }, + "extensions": [ + "ms-python.python", + "ms-python.autopep8", + "ms-python.pylint", + "ms-python.python-extension-pack", + "KevinRose.vsc-python-indent", + "dbaeumer.vscode-eslint", + "redhat.vscode-yaml", + "ms-vscode.vscode-json", + "ms-kubernetes-tools.vscode-kubernetes-tools" + ] + } + }, + "remoteUser": "root", + "updateContentCommand": "echo 'Container updated'" +} diff --git a/.devcontainer/docker-compose.yml b/.devcontainer/docker-compose.yml new file mode 100644 index 000000000..fa05ee81a --- /dev/null +++ b/.devcontainer/docker-compose.yml @@ -0,0 +1,109 @@ +version: '3.8' + +services: + # Docker Socket Proxy - provides secure, filtered access to Docker daemon + # See: https://github.com/Tecnativa/docker-socket-proxy + docker-socket-proxy: + image: tecnativa/docker-socket-proxy:latest + container_name: cloudharness-docker-proxy + restart: "no" + privileged: false + environment: + # Disable all by default + ALLOW_RESTARTS: 1 + ALLOW_START: 0 + ALLOW_STOP: 1 + + # Enable read-only operations (safe) + EVENTS: 1 + PING: 1 + VERSION: 1 + + # Enable image operations (needed for building) + IMAGES: 1 + INFO: 1 + + # Enable container operations (most common dev tasks) + CONTAINERS: 1 + POST: 1 # Allows creating containers + + # Enable build operations + BUILD: 1 + COMMIT: 1 + + # Enable network operations (needed for docker-compose) + NETWORKS: 1 + + # Enable volume operations (needed for development) + VOLUMES: 1 + + # Enable exec (for debugging containers) + EXEC: 1 + + # DISABLE dangerous operations (security) + ALLOW_PRIVILEGED: 0 # Block --privileged flag + + # Disable direct host mounts at API level where possible + # Note: This blocks some mount operations but not all + # Docker API doesn't 
provide granular mount path filtering + # Consider using bind-propagation restrictions + + # Optional: Enable these if needed + # SECRETS: 1 + # SERVICES: 1 + # SWARM: 1 + # NODES: 1 + # TASKS: 1 + + # Logging + LOG_LEVEL: info + volumes: + # Mount the real Docker socket (only this container has direct access) + - /var/run/docker.sock:/var/run/docker.sock:ro + networks: + - devcontainer + ports: + # Expose on localhost only for security + # If port is already in use, container will exit silently (restart: no) + - "127.0.0.1:2375:2375" + + # Dev container - connects through the proxy + app: + image: gcr.io/metacellllc/cloud-harness/dev-container:latest + build: + context: .. + dockerfile: .devcontainer/Dockerfile + container_name: cloudharness-dev + user: root + network_mode: host + # Don't fail if proxy isn't running (may already be running from another devcontainer) + depends_on: + docker-socket-proxy: + condition: service_started + required: false + environment: + PYTHONPATH: "/cloudharness:/workspace/libraries/models:/workspace/libraries/cloudharness-utils:/workspace/libraries/cloudharness-common:/workspace/libraries/client/cloudharness_cli:/workspace/tools/deployment-cli-tools" + # Point docker client to the proxy on localhost + DOCKER_HOST: "tcp://127.0.0.1:2375" + KUBECONFIG: "/root/.kube-container/config" + DOCKER_SOCKET_PROXY: "1" # Flag to indicate we're using proxy + # Disable credential store to avoid desktop.exe error + DOCKER_CONFIG: "/root/.docker-container" + volumes: + - ../:/workspace:cached + - ${HOME}/.docker:/root/.docker:ro + - ${HOME}/.kube:/root/.kube:ro + - ./home:/root + - ./vscode:/workspace/.vscode + # Mount VS Code server directory for extension persistence + - vscode-server:/root/.vscode-server + - vscode-server-insiders:/root/.vscode-server-insiders + command: sleep infinity + +networks: + devcontainer: + driver: bridge + +volumes: + vscode-server: + vscode-server-insiders: diff --git a/.devcontainer/vscode-settings.json b/.devcontainer/vscode-settings.json new file mode 100755 index 000000000..51638250e --- /dev/null +++ b/.devcontainer/vscode-settings.json @@ -0,0 +1,15 @@ +{ + "python.defaultInterpreterPath": "/root/.local/venv/bin/python", + "python.terminal.activateEnvironment": true, + "python.terminal.activateEnvInCurrentTerminal": true, + "terminal.integrated.defaultProfile.linux": "bash", + "terminal.integrated.shellIntegration.enabled": true, + "python.analysis.extraPaths": [ + "/root/.local/venv/lib/python3.12/site-packages", + "/workspace/libraries/cloudharness-common", + "/workspace/libraries/models", + "/workspace/libraries/cloudharness-utils", + "/workspace/libraries/client/cloudharness_cli", + "/workspace/tools/deployment-cli-tools" + ] +} diff --git a/.devcontainer/vscode/launch.json b/.devcontainer/vscode/launch.json new file mode 100755 index 000000000..0d0cd6467 --- /dev/null +++ b/.devcontainer/vscode/launch.json @@ -0,0 +1,151 @@ +{ + "configurations": [ + { + "console": "integratedTerminal", + "name": "Python: Current File", + "program": "${file}", + "request": "launch", + "type": "python" + }, + { + "args": [ + ".", + "-i", + "samples", + "-d", + "ch", + "-dtls", + "-e", + "test", + "-l", + "-n", + "ch", + "-t", + "1" + ], + "console": "integratedTerminal", + "name": "Harness deployment", + "program": "tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + ".", + "-i", + "samples", + "-d", + "ch.local", + "-dtls", + "-e", + "test-local", + "-l", + "-n", + "ch", + "--no-cd", + 
"-t", + "latest" + ], + "console": "integratedTerminal", + "name": "Harness deployment local", + "program": "tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "python" + }, + { + "args": [ + ".", + "-a" + ], + "console": "integratedTerminal", + "name": "Harness test", + "program": "${workspaceFolder}/tools/cloudharness-test/harness-test", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + "--host", + "0.0.0.0", + "--port", + "8000", + "main:app" + ], + "console": "integratedTerminal", + "cwd": "${workspaceFolder}/applications/testdjango/backend", + "env": { + "ACCOUNTS_ADMIN_PASSWORD": "metacell", + "ACCOUNTS_ADMIN_USERNAME": "admin", + "CH_CURRENT_APP_NAME": "testdjango", + "CH_VALUES_PATH": "${workspaceFolder}/deployment/helm/values.yaml", + "DJANGO_SETTINGS_MODULE": "django_baseapp.settings", + "KUBERNETES_SERVICE_HOST": "ssdds" + }, + "justMyCode": false, + "module": "uvicorn", + "name": "testdjango backend", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + "--host", + "0.0.0.0", + "--port", + "8000", + "django_baseapp.asgi:application" + ], + "console": "integratedTerminal", + "cwd": "${workspaceFolder}/applications/ninja/backend", + "env": { + "ACCOUNTS_ADMIN_PASSWORD": "metacell", + "ACCOUNTS_ADMIN_USERNAME": "admin", + "CH_CURRENT_APP_NAME": "ninja", + "CH_VALUES_PATH": "${workspaceFolder}/deployment/helm/values.yaml", + "DJANGO_SETTINGS_MODULE": "django_baseapp.settings", + "KUBERNETES_SERVICE_HOST": "ssdds" + }, + "justMyCode": false, + "module": "uvicorn", + "name": "ninja backend", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + "--host", + "0.0.0.0", + "--port", + "8000", + "django_baseapp.asgi:application" + ], + "console": "integratedTerminal", + "cwd": "${workspaceFolder}/applications/ninjatest/backend", + "env": { + "ACCOUNTS_ADMIN_PASSWORD": "metacell", + "ACCOUNTS_ADMIN_USERNAME": "admin", + "CH_CURRENT_APP_NAME": "ninjatest", + "CH_VALUES_PATH": "${workspaceFolder}/deployment/helm/values.yaml", + "DJANGO_SETTINGS_MODULE": "django_baseapp.settings", + "KUBERNETES_SERVICE_HOST": "ssdds" + }, + "justMyCode": false, + "module": "uvicorn", + "name": "ninjatest backend", + "request": "launch", + "type": "debugpy" + }, + { + "cleanUp": false, + "debug": [], + "imageRegistry": "localhost:5000", + "name": "CloudHarness: Run/Debug", + "portForward": true, + "request": "launch", + "skaffoldConfig": "${workspaceFolder}/skaffold.yaml", + "type": "cloudcode.kubernetes", + "watch": true + } + ], + "version": "0.2.0" +} \ No newline at end of file diff --git a/.devcontainer/vscode/mcp.json b/.devcontainer/vscode/mcp.json new file mode 100755 index 000000000..e8f3d7a69 --- /dev/null +++ b/.devcontainer/vscode/mcp.json @@ -0,0 +1,9 @@ +{ + "servers": { + "figma mcp": { + "url": "http://127.0.0.1:3845/mcp", + "type": "http" + } + }, + "inputs": [] +} \ No newline at end of file diff --git a/.devcontainer/vscode/settings.json b/.devcontainer/vscode/settings.json new file mode 100755 index 000000000..db19126b9 --- /dev/null +++ b/.devcontainer/vscode/settings.json @@ -0,0 +1,14 @@ +{ + "editor.defaultFormatter": "dbaeumer.vscode-eslint", + "editor.formatOnSave": true, + "javascript.format.enable": true, + "eslint.format.enable": true, + "eslint.workingDirectories": [{ "mode": "auto" }], + "editor.codeActionsOnSave": { + "source.fixAll.eslint": "explicit" + }, + "[typescriptreact]": { + "editor.defaultFormatter": "dbaeumer.vscode-eslint" + }, + "python.linting.lintOnSave": false, +} \ No newline at end of file 
diff --git a/.dockerignore b/.dockerignore index ae23cfe60..0ee711f10 100644 --- a/.dockerignore +++ b/.dockerignore @@ -2,10 +2,10 @@ .tox docs /applications -/infrastructure +# /infrastructure /blueprint test -/tools/deployment-cli-tools +# /tools/deployment-cli-tools .github .git .vscode diff --git a/.github/workflows/README.md b/.github/workflows/README.md new file mode 100644 index 000000000..1102730ed --- /dev/null +++ b/.github/workflows/README.md @@ -0,0 +1,92 @@ +# GitHub Actions - Docker Build and Push + +This workflow builds the CloudHarness development container and pushes it to Google Cloud Registry. + +## Required Secrets + +You need to configure the following secrets in your GitHub repository settings: + +### 1. `GCP_PROJECT_ID` +- **Description**: Your Google Cloud Project ID +- **Example**: `my-cloudharness-project` +- **How to find**: Go to Google Cloud Console → Project Info → Project ID + +### 2. `GCP_SA_KEY` +- **Description**: Google Cloud Service Account key (JSON format) +- **Format**: Complete JSON key file content +- **Required permissions**: + - `Storage Admin` (for pushing to Container Registry) + - `Container Registry Service Agent` + +## Setting up the Service Account + +1. **Create a Service Account**: + ```bash + gcloud iam service-accounts create github-actions \ + --description="Service account for GitHub Actions" \ + --display-name="GitHub Actions" + ``` + +2. **Grant necessary permissions**: + ```bash + gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \ + --member="serviceAccount:github-actions@YOUR_PROJECT_ID.iam.gserviceaccount.com" \ + --role="roles/storage.admin" + + gcloud projects add-iam-policy-binding YOUR_PROJECT_ID \ + --member="serviceAccount:github-actions@YOUR_PROJECT_ID.iam.gserviceaccount.com" \ + --role="roles/containerregistry.ServiceAgent" + ``` + +3. **Create and download the key**: + ```bash + gcloud iam service-accounts keys create github-actions-key.json \ + --iam-account=github-actions@YOUR_PROJECT_ID.iam.gserviceaccount.com + ``` + +4. 
**Add the key to GitHub Secrets**: + - Copy the entire content of `github-actions-key.json` + - Go to GitHub repository → Settings → Secrets and variables → Actions + - Create new secret named `GCP_SA_KEY` + - Paste the JSON content + +## Workflow Triggers + +The workflow runs on: +- **Push to main/develop**: Builds and pushes with branch name and `latest` tags +- **Pull requests**: Builds and pushes with PR reference tags +- **Manual trigger**: Can be run manually from GitHub Actions tab +- **File changes**: Only triggers when relevant files are modified + +## Image Tags + +The workflow creates multiple tags: +- `latest` (only for main branch) +- `` (for branch pushes) +- `-` (with git commit SHA) +- `pr-` (for pull requests) + +## Multi-platform Support + +The workflow builds for both: +- `linux/amd64` (Intel/AMD processors) +- `linux/arm64` (ARM processors, including Apple Silicon) + +## Registry Location + +Images are pushed to: `gcr.io/YOUR_PROJECT_ID/cloudharness-dev` + +## Usage + +After the workflow runs, you can pull the image: + +```bash +# Pull latest (from main branch) +docker pull gcr.io/YOUR_PROJECT_ID/cloudharness-dev:latest + +# Pull specific branch +docker pull gcr.io/YOUR_PROJECT_ID/cloudharness-dev:develop + +# Pull specific commit +docker pull gcr.io/YOUR_PROJECT_ID/cloudharness-dev:main-abc1234 +``` diff --git a/.github/workflows/build-dev-container.yml b/.github/workflows/build-dev-container.yml new file mode 100644 index 000000000..80f471d10 --- /dev/null +++ b/.github/workflows/build-dev-container.yml @@ -0,0 +1,85 @@ +name: Build and Push CloudHarness Dev Container + +on: + push: + branches: + - main + - develop + paths: + - 'Dockerfile' + - 'docker-compose.yml' + - '.devcontainer/**' + - 'dev-scripts/**' + - 'libraries/**' + - 'tools/**' + - 'infrastructure/common-images/**' + pull_request: + branches: + - main + - develop + paths: + - 'Dockerfile' + - 'docker-compose.yml' + - '.devcontainer/**' + - 'dev-scripts/**' + - 'libraries/**' + - 'tools/**' + - 'infrastructure/common-images/**' + workflow_dispatch: + +env: + REGISTRY: gcr.io + PROJECT_ID: metacellllc + IMAGE_NAME: cloud-harness/dev-container + +jobs: + build-and-push: + runs-on: ubuntu-latest + + permissions: + contents: read + id-token: write + + steps: + - name: Checkout repository + uses: actions/checkout@v4 + + - name: Set up Docker Buildx + uses: docker/setup-buildx-action@v3 + + - name: Authenticate to Google Cloud + uses: google-github-actions/auth@v2 + with: + credentials_json: ${{ secrets.GCP_SA_KEY }} + + - name: Configure Docker to use gcloud as a credential helper + run: | + gcloud auth configure-docker + + - name: Extract metadata + id: meta + uses: docker/metadata-action@v5 + with: + images: ${{ env.REGISTRY }}/${{ env.PROJECT_ID }}/${{ env.IMAGE_NAME }} + tags: | + type=ref,event=branch + type=ref,event=pr + type=sha,prefix={{branch}}- + type=raw,value=latest,enable={{is_default_branch}} + + - name: Build and push Docker image + uses: docker/build-push-action@v5 + with: + context: . 
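+          # Build context is the repository root; the Dockerfile itself lives under .devcontainer (see "file" below)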
+ file: ./.devcontainer/Dockerfile + push: ${{ github.event_name != 'pull_request' }} + tags: ${{ steps.meta.outputs.tags }} + labels: ${{ steps.meta.outputs.labels }} + cache-from: type=gha + cache-to: type=gha,mode=max + platforms: linux/amd64,linux/arm64 + + - name: Output image details + run: | + echo "Image pushed to: ${{ steps.meta.outputs.tags }}" + echo "Image digest: ${{ steps.build.outputs.digest }}" diff --git a/blueprint/.devcontainer/.gitignore b/blueprint/.devcontainer/.gitignore new file mode 100755 index 000000000..0247178b6 --- /dev/null +++ b/blueprint/.devcontainer/.gitignore @@ -0,0 +1 @@ +home \ No newline at end of file diff --git a/blueprint/.devcontainer/Dockerfile b/blueprint/.devcontainer/Dockerfile new file mode 100755 index 000000000..259cd38d1 --- /dev/null +++ b/blueprint/.devcontainer/Dockerfile @@ -0,0 +1 @@ +FROM gcr.io/metacellllc/cloud-harness/dev-container:latest diff --git a/blueprint/.devcontainer/devcontainer.json b/blueprint/.devcontainer/devcontainer.json new file mode 100755 index 000000000..e8cde1359 --- /dev/null +++ b/blueprint/.devcontainer/devcontainer.json @@ -0,0 +1,67 @@ +{ + "name": "My Project Development", + + + "dockerComposeFile": "docker-compose.yml", + "service": "app", + "workspaceFolder": "/workspace", + "shutdownAction": "none", + "forwardPorts": [3000, 5000, 8000, 8100, 9000, 9100, 9200], + "postCreateCommand": "bash /usr/local/share/dev-scripts/post-create.sh", + "features": { + "ghcr.io/devcontainers/features/github-cli:1": {} + }, + "customizations": { + "vscode": { + "settings": { + "terminal.integrated.defaultProfile.linux": "bash", + "python.defaultInterpreterPath": "/root/.local/venv/bin/python", + "python.terminal.activateEnvironment": true, + "python.terminal.activateEnvInCurrentTerminal": true, + // Disable VS Code shell integration - conflicts with bash-preexec (required for atuin/starship) + "terminal.integrated.shellIntegration.enabled": false, + "terminal.integrated.env.linux": { + "XDG_DATA_HOME": "${env:HOME}/.local/share" + }, + "extensions.autoUpdate": false, + "extensions.autoCheckUpdates": false, + "python.analysis.extraPaths": [ + "/root/.local/venv/lib/python3.12/site-packages", + "/workspace/cloud-harness/libraries/models", + "/workspace/cloud-harness/libraries/cloudharness-utils", + "/workspace/cloud-harness/libraries/cloudharness-common", + "/workspace/cloud-harness/libraries/client/cloudharness_cli", + "/workspace/tools/deployment-cli-tools" + ], + "github.copilot.enable": { + "*": true, + "yaml": true, + "plaintext": true, + "markdown": true, + "python": true, + "javascript": true, + "typescript": true, + "json": true, + "jsonc": true + }, + "remote.extensionKind": { + "GitHub.copilot": ["ui"], + "GitHub.copilot-chat": ["ui"] + } + }, + "extensions": [ + "ms-python.python", + "ms-python.autopep8", + "ms-python.pylint", + "ms-python.python-extension-pack", + "KevinRose.vsc-python-indent", + "dbaeumer.vscode-eslint", + "redhat.vscode-yaml", + "ms-vscode.vscode-json", + "ms-kubernetes-tools.vscode-kubernetes-tools" + ] + } + }, + "remoteUser": "root", + "updateContentCommand": "echo 'Container updated'" +} diff --git a/blueprint/.devcontainer/docker-compose.yml b/blueprint/.devcontainer/docker-compose.yml new file mode 100644 index 000000000..c7646181d --- /dev/null +++ b/blueprint/.devcontainer/docker-compose.yml @@ -0,0 +1,109 @@ +version: '3.8' + +services: + # Docker Socket Proxy - provides secure, filtered access to Docker daemon + # See: https://github.com/Tecnativa/docker-socket-proxy + 
docker-socket-proxy: + image: tecnativa/docker-socket-proxy:latest + container_name: blueprint-docker-proxy + restart: "no" + privileged: false + environment: + # Disable all by default + ALLOW_RESTARTS: 1 + ALLOW_START: 0 + ALLOW_STOP: 1 + + # Enable read-only operations (safe) + EVENTS: 1 + PING: 1 + VERSION: 1 + + # Enable image operations (needed for building) + IMAGES: 1 + INFO: 1 + + # Enable container operations (most common dev tasks) + CONTAINERS: 1 + POST: 1 # Allows creating containers + + # Enable build operations + BUILD: 1 + COMMIT: 1 + + # Enable network operations (needed for docker-compose) + NETWORKS: 1 + + # Enable volume operations (needed for development) + VOLUMES: 1 + + # Enable exec (for debugging containers) + EXEC: 1 + + # DISABLE dangerous operations (security) + ALLOW_PRIVILEGED: 0 # Block --privileged flag + + # Disable direct host mounts at API level where possible + # Note: This blocks some mount operations but not all + # Docker API doesn't provide granular mount path filtering + # Consider using bind-propagation restrictions + + # Optional: Enable these if needed + # SECRETS: 1 + # SERVICES: 1 + # SWARM: 1 + # NODES: 1 + # TASKS: 1 + + # Logging + LOG_LEVEL: info + volumes: + # Mount the real Docker socket (only this container has direct access) + - /var/run/docker.sock:/var/run/docker.sock:ro + networks: + - devcontainer + ports: + # Expose on localhost only for security + # If port is already in use, container will exit silently (restart: no) + - "127.0.0.1:2375:2375" + + # Dev container - connects through the proxy + app: + image: gcr.io/metacellllc/cloud-harness/dev-container:latest + build: + context: .. + dockerfile: .devcontainer/Dockerfile + container_name: blueprint-dev + user: root + network_mode: host + # Don't fail if proxy isn't running (may already be running from another devcontainer) + depends_on: + docker-socket-proxy: + condition: service_started + required: false + environment: + PYTHONPATH: "/cloudharness:/workspace/cloud-harness/libraries/models:/workspace/cloud-harness/libraries/cloudharness-utils:/workspace/cloud-harness/libraries/cloudharness-common:/workspace/cloud-harness/libraries/client/cloudharness_cli:/workspace/tools/deployment-cli-tools" + # Point docker client to the proxy on localhost + DOCKER_HOST: "tcp://127.0.0.1:2375" + KUBECONFIG: "/root/.kube-container/config" + DOCKER_SOCKET_PROXY: "1" # Flag to indicate we're using proxy + # Disable credential store to avoid desktop.exe error + DOCKER_CONFIG: "/root/.docker-container" + volumes: + - ../:/workspace:cached + - ${HOME}/.docker:/root/.docker:ro + - ${HOME}/.kube:/root/.kube:ro + - ./home:/root + - ./vscode:/workspace/.vscode + # Mount VS Code server directory for extension persistence + - vscode-server:/root/.vscode-server + - vscode-server-insiders:/root/.vscode-server-insiders + command: sleep infinity + +networks: + devcontainer: + driver: bridge + +volumes: + vscode-server: + vscode-server-insiders: diff --git a/blueprint/.devcontainer/post-create.sh b/blueprint/.devcontainer/post-create.sh new file mode 100644 index 000000000..b4a22e95d --- /dev/null +++ b/blueprint/.devcontainer/post-create.sh @@ -0,0 +1,23 @@ +#!/bin/bash +# Post-create setup script for Blueprint dev container +# This delegates to the main CloudHarness post-create script + +set -e + +echo "Setting up Blueprint development environment..." + +# Run the main CloudHarness post-create script +# This handles all the common setup: bashrc, atuin, starship, tmux, k8s, docker, etc. 
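+# The script ships with the CloudHarness dev-container base image (see .devcontainer/Dockerfile);
+# if it is missing, the container was most likely not built from that image.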
+if [ -f /usr/local/share/dev-scripts/post-create.sh ]; then + /usr/local/share/dev-scripts/post-create.sh +else + echo "Warning: CloudHarness post-create.sh not found" + exit 1 +fi + +# Add any blueprint-specific setup here if needed in the future +# For now, everything is handled by the main post-create script + +echo "" +echo "✓ Blueprint development environment ready!" +echo "" diff --git a/blueprint/.devcontainer/vscode/launch.json b/blueprint/.devcontainer/vscode/launch.json new file mode 100644 index 000000000..41fefb5fa --- /dev/null +++ b/blueprint/.devcontainer/vscode/launch.json @@ -0,0 +1,227 @@ +{ + "configurations": [ + { + "console": "integratedTerminal", + "cwd": "${file}", + "name": "Python Debugger: Current File", + "program": "${file}", + "request": "launch", + "type": "debugpy" + }, + { + "console": "integratedTerminal", + "cwd": "${workspaceFolder}/applications/myapp/backend", + "env": { + "ACCOUNTS_ADMIN_PASSWORD": "metacell", + "ACCOUNTS_ADMIN_USERNAME": "admin", + "CH_CURRENT_APP_NAME": "myapp", + "CH_VALUES_PATH": "${workspaceFolder}/deployment/helm/values.yaml", + "DJANGO_SETTINGS_MODULE": "django_baseapp.settings", + "KUBERNETES_SERVICE_HOST": "." + }, + "justMyCode": true, + "name": "Python: Current File", + "program": "${file}", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + "cloud-harness", + ".", + "-i", + "myapp", + "-d", + "myapp.local", + "-u", + "-dtls", + "-e", + "prod-dev-test-local", + "-l", + "-n", + "myproject" + ], + "console": "integratedTerminal", + "name": "Harness deployment local", + "program": "cloud-harness/tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "python" + }, + { + "args": [ + "cloud-harness", + ".", + "-i", + "jupyterhub", + "-d", + "myapp.local", + "-u", + "-dtls", + "-e", + "dev-local", + "-l", + "-n", + "myproject", + "-t", + "latest" + ], + "console": "integratedTerminal", + "name": "Harness deployment WS", + "program": "cloud-harness/tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "python" + }, + { + "args": [ + "cloud-harness", + ".", + "-i", + "myapp", + "-d", + "myapp.local", + "-dtls", + "-e", + "research-test-dev", + "-l", + "-n", + "myapp" + ], + "console": "integratedTerminal", + "name": "test Harness deployment myapp", + "program": "cloud-harness/tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + "neuroglass_research.tests" + ], + "console": "integratedTerminal", + "cwd": "${workspaceFolder}/applications/myapp/backend", + "env": { + "CH_CURRENT_APP_NAME": "myapp", + "CH_VALUES_PATH": "${workspaceFolder}/deployment/helm/values.yaml" + }, + "justMyCode": false, + "name": "Test", + "program": "runtests.py", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + "createsuperuser" + ], + "console": "integratedTerminal", + "cwd": "${workspaceFolder}/applications/myapp/backend", + "env": { + "ACCOUNTS_ADMIN_PASSWORD": "metacell", + "ACCOUNTS_ADMIN_USERNAME": "admin", + "CH_CURRENT_APP_NAME": "myapp", + "CH_VALUES_PATH": "${workspaceFolder}/deployment/helm/values.yaml", + "DJANGO_SETTINGS_MODULE": "django_baseapp.settings", + "KUBERNETES_SERVICE_HOST": "ssdds" + }, + "justMyCode": false, + "name": "Django superuser", + "program": "manage.py", + "request": "launch", + "type": "debugpy" + }, + { + "args": [ + "cloud-harness", + ".", + "-i", + "myapp", + "-d", + "mnptest.dev.metacell.us", + "-u", + "-dtls", + "-e", + "dev", + "-l", + "-n", + "mnptest", + "-r", + 
"us.gcr.io/metacellllc" + ], + "console": "integratedTerminal", + "name": "Harness deployment dev", + "program": "cloud-harness/tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "python" + }, + { + "args": [ + "cloud-harness", + ".", + "-i", + "myapp", + "-d", + "myapp.metacell.us", + "-u", + "-dtls", + "-e", + "prod", + "-l", + "-n", + "myproject", + "-r", + "us.gcr.io/metacellllc", + "-t", + "3.1.0" + ], + "console": "integratedTerminal", + "name": "Harness deployment prod", + "program": "cloud-harness/tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "python" + }, + { + "args": [ + "cloud-harness", + ".", + "-i", + "myapp", + "-d", + "myapp.stage.metacell.us", + "-u", + "-dtls", + "-e", + "stage", + "-l", + "-n", + "myproject", + "-r", + "us.gcr.io/metacellllc" + ], + "console": "integratedTerminal", + "name": "Harness deployment stage", + "program": "cloud-harness/tools/deployment-cli-tools/harness-deployment", + "request": "launch", + "type": "python" + }, + { + "args": [ + "runserver" + ], + "console": "integratedTerminal", + "cwd": "${workspaceFolder}/applications/myapp/backend", + "env": { + "ACCOUNTS_ADMIN_PASSWORD": "metacell", + "ACCOUNTS_ADMIN_USERNAME": "admin", + "CH_CURRENT_APP_NAME": "myapp", + "CH_VALUES_PATH": "${workspaceFolder}/deployment/helm/values.yaml", + "DJANGO_SETTINGS_MODULE": "django_baseapp.settings", + "KUBERNETES_SERVICE_HOST": "a" + }, + "justMyCode": false, + "name": "myapp django runserver", + "program": "manage.py", + "request": "launch", + "type": "debugpy" + } + ], + "version": "0.2.0" +} \ No newline at end of file diff --git a/blueprint/.devcontainer/vscode/mcp.json b/blueprint/.devcontainer/vscode/mcp.json new file mode 100644 index 000000000..e8f3d7a69 --- /dev/null +++ b/blueprint/.devcontainer/vscode/mcp.json @@ -0,0 +1,9 @@ +{ + "servers": { + "figma mcp": { + "url": "http://127.0.0.1:3845/mcp", + "type": "http" + } + }, + "inputs": [] +} \ No newline at end of file diff --git a/blueprint/.devcontainer/vscode/settings.json b/blueprint/.devcontainer/vscode/settings.json new file mode 100644 index 000000000..db19126b9 --- /dev/null +++ b/blueprint/.devcontainer/vscode/settings.json @@ -0,0 +1,14 @@ +{ + "editor.defaultFormatter": "dbaeumer.vscode-eslint", + "editor.formatOnSave": true, + "javascript.format.enable": true, + "eslint.format.enable": true, + "eslint.workingDirectories": [{ "mode": "auto" }], + "editor.codeActionsOnSave": { + "source.fixAll.eslint": "explicit" + }, + "[typescriptreact]": { + "editor.defaultFormatter": "dbaeumer.vscode-eslint" + }, + "python.linting.lintOnSave": false, +} \ No newline at end of file diff --git a/infrastructure/common-images/cloudharness-django/libraries/cloudharness-django/setup.cfg b/infrastructure/common-images/cloudharness-django/libraries/cloudharness-django/setup.cfg index 5ec4f2abb..fdd614796 100644 --- a/infrastructure/common-images/cloudharness-django/libraries/cloudharness-django/setup.cfg +++ b/infrastructure/common-images/cloudharness-django/libraries/cloudharness-django/setup.cfg @@ -29,9 +29,9 @@ include_package_data = true packages = find: python_requires = >=3.6 install_requires = - Django>=4.0.7 - django-admin-extra-buttons>=1.4.2 - psycopg2-binary>=2.9.3 - Pillow>=9.2.0 + Django + django-admin-extra-buttons + psycopg2-binary + Pillow python-keycloak django-prometheus diff --git a/tools/deployment-cli-tools/requirements.txt b/tools/deployment-cli-tools/requirements.txt index 266ae27f3..5801762a4 100644 --- 
a/tools/deployment-cli-tools/requirements.txt +++ b/tools/deployment-cli-tools/requirements.txt @@ -2,7 +2,5 @@ docker six ruamel.yaml oyaml -cloudharness_model -cloudharness_utils dirhash StrEnum ; python_version < '3.11' \ No newline at end of file
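Note on the `requirements.txt` change above: `cloudharness_model` and `cloudharness_utils` are no longer installed from a package index, and inside the dev container they are presumably expected to resolve from the workspace sources listed on `PYTHONPATH` in `docker-compose.yml`. A quick sanity check, offered as a sketch rather than part of the tooling:

```bash
# Print the workspace library entries the dev container puts on the Python path
python -c "import sys; print([p for p in sys.path if p.startswith('/workspace/libraries')])"
```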