

Rabieh Fashwall edited this page Nov 27, 2025 · 1 revision

Module 7: Simple CI/CD for MLOps

Overview

This module demonstrates a simple CI/CD pipeline for deploying ML services using GitHub Actions and Kubernetes. You'll learn how to automatically build Docker images when code changes and deploy them to your local kind cluster.

What You'll Learn:

  • Building Docker containers in CI/CD pipelines
  • Pushing images to GitHub Container Registry
  • Deploying to Kubernetes
  • Separating CI (build) from CD (deploy)

Prerequisites

Before starting, ensure you have:

  1. GitHub Account - For running GitHub Actions
  2. GitHub CLI (gh)
  3. Docker Desktop
  4. kubectl
  5. kind

Quick Start

Step 1: Run Setup Script

cd modules/module-7
./setup-simple.sh

This script will:

  • ✓ Check all prerequisites
  • ✓ Verify kind cluster is running
  • ✓ Confirm Git repository is initialized
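
The checks above can be sketched as a small shell script. This is an illustrative approximation, not the actual contents of setup-simple.sh:

```shell
#!/bin/sh
# Sketch of the prerequisite checks a script like setup-simple.sh might run;
# tool names come from the prerequisites list above.
check() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1 found"
  else
    echo "missing: $1"
    return 1
  fi
}

MISSING=0
for tool in gh docker kubectl kind; do
  check "$tool" || MISSING=1
done

# Verify a kind cluster exists (only meaningful if kind is installed)
if command -v kind >/dev/null 2>&1; then
  kind get clusters
fi

[ "$MISSING" -eq 0 ] && echo "all prerequisites present"
```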

Step 2: Create GitHub Repository

If you haven't already, create a GitHub repository:

gh repo create ml-con-workshop --public --source=. --remote=origin

Step 3: Push Code to Trigger CI/CD

git add .
git commit -m "Add CI/CD workflow"
git push origin main

Step 4: Watch the Workflow

Monitor the build process:

# Watch in terminal
gh run watch

# Or open in browser
gh run view --web

The workflow will:

  1. Build the ML service Docker image
  2. Build the API gateway Docker image
  3. Push both images to GitHub Container Registry (ghcr.io)
  4. Output deployment instructions

Step 5: Deploy to Kubernetes

After the workflow completes successfully, follow the deployment instructions shown in the GitHub Actions summary. They'll look like this:

# 1. Pull images from GitHub Container Registry
docker pull ghcr.io/<your-username>/ml-con-workshop/sentiment-api:main-abc1234
docker pull ghcr.io/<your-username>/ml-con-workshop/api-gateway:main-abc1234

# 2. Load images into kind cluster
kind load docker-image ghcr.io/<your-username>/ml-con-workshop/sentiment-api:main-abc1234 --name mlops-workshop
kind load docker-image ghcr.io/<your-username>/ml-con-workshop/api-gateway:main-abc1234 --name mlops-workshop

# 3. Update Kubernetes manifests with new image tags
cd modules/module-7/k8s
sed -i.bak "s|image:.*sentiment-api.*|image: ghcr.io/<your-username>/ml-con-workshop/sentiment-api:main-abc1234|" ml-service.yaml
sed -i.bak "s|image:.*api-gateway.*|image: ghcr.io/<your-username>/ml-con-workshop/api-gateway:main-abc1234|" api-gateway.yaml

# 4. Deploy to Kubernetes
kubectl apply -f modules/module-7/k8s/

# 5. Wait for rollout to complete
kubectl rollout status deployment/sentiment-api --timeout=5m
kubectl rollout status deployment/api-gateway --timeout=5m
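
The five steps above can be wrapped in a small helper script. This is a hypothetical sketch: OWNER and TAG are placeholders for your GitHub username and the tag printed by the workflow, and the manifest update from step 3 still has to be done first:

```shell
#!/bin/sh
# Hypothetical wrapper around the deploy steps above; OWNER and TAG are
# placeholders. Runs as a dry run unless DO_DEPLOY=1 is set.
OWNER="your-username"
REPO="ml-con-workshop"
TAG="main-abc1234"
CLUSTER="mlops-workshop"

for svc in sentiment-api api-gateway; do
  IMAGE="ghcr.io/${OWNER}/${REPO}/${svc}:${TAG}"
  echo "image: ${IMAGE}"
  if [ "${DO_DEPLOY:-0}" = "1" ]; then
    docker pull "$IMAGE"
    kind load docker-image "$IMAGE" --name "$CLUSTER"
  fi
done

if [ "${DO_DEPLOY:-0}" = "1" ]; then
  kubectl apply -f modules/module-7/k8s/
  kubectl rollout status deployment/sentiment-api --timeout=5m
  kubectl rollout status deployment/api-gateway --timeout=5m
fi
```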

Step 6: Verify Deployment

# Check pods are running
kubectl get pods

# Test health endpoint
curl http://localhost:30080/health

# Test sentiment analysis
curl -X POST http://localhost:30080/predict \
  -H "Content-Type: application/json" \
  -d '{"request": {"text": "This workshop is amazing!"}}'

Expected response:

{
  "sentiment": "POSITIVE",
  "score": 0.9998,
  "text": "This workshop is amazing!"
}
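
The verification above can be condensed into a small smoke test. This sketch falls back to the sample response when the service is unreachable, so it only proves the check logic, not the deployment:

```shell
#!/bin/sh
# Minimal smoke test against the endpoint shown above; falls back to the
# documented sample response if the service is not reachable.
URL="http://localhost:30080/predict"
SAMPLE='{"sentiment": "POSITIVE", "score": 0.9998, "text": "This workshop is amazing!"}'

RESP=$(curl -s -X POST "$URL" \
         -H "Content-Type: application/json" \
         -d '{"request": {"text": "This workshop is amazing!"}}' 2>/dev/null) || RESP="$SAMPLE"
[ -n "$RESP" ] || RESP="$SAMPLE"

if echo "$RESP" | grep -q '"sentiment": "POSITIVE"'; then
  echo "smoke test passed"
else
  echo "unexpected response: $RESP"
fi
```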

How It Works

CI/CD Workflow Breakdown

The .github/workflows/mlops-simple.yaml file defines three jobs:

Job 1: Build ML Service

  • Checks out code
  • Installs BentoML
  • Builds Bento package
  • Containerizes with Docker
  • Pushes to ghcr.io
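
The job's steps roughly correspond to the following local commands. This is a hedged approximation of what the workflow does, not an excerpt from it; the Bento tag (sentiment_api:latest) and image name are assumptions:

```shell
#!/bin/sh
# Hypothetical local equivalent of Job 1; Bento tag and image name are assumed.
OWNER="your-username"
IMAGE="ghcr.io/${OWNER}/ml-con-workshop/sentiment-api:main-abc1234"

if command -v bentoml >/dev/null 2>&1; then
  bentoml build                                          # package code + model into a Bento
  bentoml containerize sentiment_api:latest -t "$IMAGE"  # build a Docker image from the Bento
  docker push "$IMAGE"                                   # requires: docker login ghcr.io
else
  echo "bentoml not installed locally; CI runs these steps for you"
fi
```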

Job 2: Build API Gateway

  • Checks out code
  • Builds Go Docker image
  • Pushes to ghcr.io

Job 3: Deployment Instructions

  • Outputs commands for manual deployment
  • Includes image tags with commit SHA

Why Manual Deployment?

GitHub Actions runners cannot access your local kind cluster, so the deployment step is manual. In production, you'd use:

  • ArgoCD or Flux for GitOps
  • Cloud-hosted clusters (EKS, GKE, AKS) accessible from GitHub Actions
  • Self-hosted runners with cluster access

Image Tagging Strategy

Images are tagged with the format {branch}-{short-sha}

Example: main-a1b2c3d

This provides:

  • Traceability: Know exactly which commit is deployed
  • Uniqueness: Each build has a unique tag
  • Readability: Short SHA is easier to work with
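
The scheme can be reproduced locally with a small helper function. This is a hypothetical sketch; in CI the branch and SHA come from the workflow context instead:

```shell
# Hypothetical helper reproducing the {branch}-{short-sha} scheme.
image_tag() {
  branch="$1"
  sha="$2"
  printf '%s-%.7s\n' "$branch" "$sha"   # %.7s truncates the SHA to 7 characters
}

# Inside a checkout, both values can come from git:
#   image_tag "$(git rev-parse --abbrev-ref HEAD)" "$(git rev-parse HEAD)"
image_tag main a1b2c3d4e5f6   # -> main-a1b2c3d
```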

Next Steps

After completing this module, you can:

  1. Modify the Services - Make changes to ML service or gateway
  2. Push to GitHub - Watch CI/CD automatically rebuild
  3. Deploy Updates - Follow deployment instructions again
  4. Add Testing - Extend workflow with unit tests
  5. Add Monitoring - Integrate with Module 6 (Prometheus/Grafana)

Production Considerations

This is a simplified demo. For production, add:

  • Automated testing (unit tests, integration tests, linting)
  • Security scanning (Trivy, Snyk for vulnerabilities)
  • Multi-environment deployments (dev, staging, production)
  • Approval gates for production deployments
  • Rollback strategies (automated on failure)
  • Notifications (Slack, email on build/deploy status)
  • GitOps (ArgoCD or Flux for automated deployments)
  • Service mesh (Istio, Linkerd for advanced traffic management)
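
As one concrete example of the security-scanning item, a Trivy scan could run locally or as an extra workflow step. A minimal sketch, assuming trivy is installed and using a placeholder image name:

```shell
#!/bin/sh
# Hypothetical vulnerability scan with Trivy; the image name is a placeholder.
IMAGE="ghcr.io/your-username/ml-con-workshop/sentiment-api:main-abc1234"

if command -v trivy >/dev/null 2>&1; then
  # Fail (exit code 1) if HIGH or CRITICAL vulnerabilities are found
  trivy image --severity HIGH,CRITICAL --exit-code 1 "$IMAGE"
else
  echo "trivy not installed; skipping scan of $IMAGE"
fi
```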

Summary

You've learned:

  • ✅ How to build Docker images in GitHub Actions
  • ✅ How to push images to GitHub Container Registry
  • ✅ How to deploy to Kubernetes
  • ✅ How to separate CI (automated) from CD (manual)
  • ✅ How to tag images for traceability

This foundation prepares you for more advanced CI/CD patterns used in production MLOps systems!


Navigation

| Previous | Home | Completion |
| --- | --- | --- |
| Module 6: Monitoring & Observability | 🏠 Home | 🎉 Workshop Complete! |


MLOps Workshop | GitHub Repository