Kubernetes Penetration Testing & CIS Benchmark Scanning

This demo walks through penetration testing and CIS Benchmark scanning on a Kubernetes cluster, using kube-hunter to assess the cluster and identify vulnerabilities, and kube-bench to assess the cluster's configuration against a set of security best practices.

Requirements

  1. Kubernetes cluster (Minikube)
  2. Kube-hunter (by Aqua Security)
  3. Kubectl
  4. Docker
  5. Python & Pip (optional)
  6. Kube-bench (by Aqua Security)

A. Kubernetes Penetration Testing with kube-hunter

Install Required Dependencies

sudo apt update && sudo apt upgrade -y
sudo apt install -y curl wget apt-transport-https ca-certificates gnupg lsb-release python3-pip git

Install Docker

sudo apt install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker $USER
newgrp docker
docker --version

Install Kubectl

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

Install Minikube

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version

Start Minikube

minikube start --driver=docker --cpus=2 --memory=4096

Verify Cluster is Running

kubectl cluster-info
minikube status

Enable Useful Addons

minikube addons enable dashboard
minikube addons enable metrics-server
minikube addons enable ingress

Get Minikube IP

minikube ip

Install & Run kube-hunter

Option 1: Install & Run kube-hunter with Python-pip

You may need to set up a Python Virtual Env for this.
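
A minimal sketch of creating and activating one (the directory name kube-hunter-venv is only an example):

# Create and activate a virtual environment (optional but recommended)
python3 -m venv kube-hunter-venv
source kube-hunter-venv/bin/activate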

# Method 1: Install via pip
pip3 install kube-hunter

# Method 2: Clone from GitHub (for latest version)
git clone https://github.com/aquasecurity/kube-hunter.git
cd kube-hunter
pip3 install -r requirements.txt

Identify Potential Vulnerabilities

# Get the Minikube IP
MINIKUBE_IP=$(minikube ip)

# Run kube-hunter in passive mode
kube-hunter --remote $MINIKUBE_IP --report plain
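
By default kube-hunter only performs passive discovery. It also supports active hunting, which runs state-changing tests and can potentially affect the cluster, so use it only against clusters you own; a hedged example:

# Run kube-hunter with active hunting enabled (more intrusive)
kube-hunter --remote $MINIKUBE_IP --active --report plain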

Option 2: Install & Run kube-hunter with Docker Container

Log in to Your Docker Hub Account

Note: If you get an error when running the kube-hunter container below, log in to Docker Hub first, using a Personal Access Token (PAT) as the password.

docker login -u iquantc
# Generate output on terminal
docker run -it --network minikube --rm aquasec/kube-hunter --remote 192.168.49.2 --report plain

or

# Send output to a JSON File
docker run -it --network minikube --rm aquasec/kube-hunter --remote 192.168.49.2 --report json > kube-hunter-report.json
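
The address 192.168.49.2 is the typical default for Minikube's Docker driver; if your cluster was assigned a different address, you can substitute the output of minikube ip instead of hardcoding it:

# Use the actual Minikube IP rather than a hardcoded address
docker run -it --network minikube --rm aquasec/kube-hunter --remote $(minikube ip) --report plain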

Option 3: Install & Run kube-hunter in In-Cluster Mode

Deploy a kube-hunter Pod inside the Kubernetes cluster to discover internal cluster vulnerabilities.

Create kube-hunter-job.yaml Manifest

Based on the aquasecurity/kube-hunter GitHub Repository: https://github.com/aquasecurity/kube-hunter/blob/main/job.yaml

# kube-hunter-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
spec:
  template:
    metadata:
      labels:
        app: kube-hunter
    spec:
      containers:
        - name: kube-hunter
          image: aquasec/kube-hunter:0.6.8
          command: ["kube-hunter"]
          args: ["--pod"]
      restartPolicy: Never

Run the kube-hunter Job

kubectl create -f kube-hunter-job.yaml

Find the Pod Name

kubectl describe job kube-hunter

or

kubectl get pods

View the Test Results from the Logs of kube-hunter Pod

kubectl logs <pod name>
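
If you prefer not to look up the Pod name manually, you can also read the logs through the Job itself (assuming the Job is named kube-hunter as in the manifest above):

# Read the logs of the Pod created by the kube-hunter Job
kubectl logs job/kube-hunter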

Simulate Vulnerabilities

Create Namespace

kubectl create namespace vulnerable-test

Deploy Insecure Pod with Configuration below

Save the config below as nginx.yaml

  1. runAsUser: 0 runs the container process as the root user.
  2. privileged: true grants the container root-level access to all devices on the host system.
  3. Hard-coded secrets: store credentials in Kubernetes Secrets (values are base64-encoded, e.g. echo -n "admin123" | base64) or in a secret management store such as Vault, rather than directly in the manifest.
  4. hostPath mount: allows a Pod to mount a file or directory from the node's filesystem directly into the container running within the Pod. Use ConfigMaps or Secrets when appropriate.
  5. Image version: where possible, keep the Docker images of Pods in the Kubernetes cluster updated and pinned to specific, current versions rather than the mutable latest tag.
# nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-pod
  namespace: vulnerable-test
  labels:
    app: vulnerable-app
spec:
  securityContext:
    runAsUser: 0  # Running as root (vulnerability)
  containers:
  - name: vulnerable-container
    image: nginx:latest
    securityContext:
      privileged: true  # Privileged container (vulnerability)
      allowPrivilegeEscalation: true
      readOnlyRootFilesystem: false
      runAsNonRoot: false
    ports:
    - containerPort: 80
    env:
    - name: SECRET_PASSWORD
      value: "admin123"  # Hardcoded secret (vulnerability)
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /  # Host path mount (vulnerability)
---
apiVersion: v1
kind: Service
metadata:
  name: vulnerable-service
  namespace: vulnerable-test
spec:
  selector:
    app: vulnerable-app
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort

Deploy Insecure RBAC with Configuration below

Save the config below as insecure-rbac.yaml

  1. When it comes to security, follow the principle of least privilege: grant only the permissions that are actually needed.

  2. cluster-admin role: provides the capability to perform any action on any resource within the cluster.

  3. runAsUser: 0 runs the container process as the root user.

  4. SYS_ADMIN, NET_ADMIN: grant the container extensive system administration and network management privileges.

# insecure-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: insecure-service-account
  namespace: vulnerable-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: insecure-binding
subjects:
- kind: ServiceAccount
  name: insecure-service-account
  namespace: vulnerable-test
roleRef:
  kind: ClusterRole
  name: cluster-admin  # Excessive permissions (vulnerability)
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: insecure-deployment
  namespace: vulnerable-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: insecure-app
  template:
    metadata:
      labels:
        app: insecure-app
    spec:
      serviceAccountName: insecure-service-account
      containers:
      - name: insecure-container
        image: alpine:latest
        command: ["sleep", "3600"]
        securityContext:
          runAsUser: 0
          capabilities:
            add: ["SYS_ADMIN", "NET_ADMIN"]  # Excessive capabilities

Deploy the Vulnerable Resources

kubectl apply -f nginx.yaml
kubectl apply -f insecure-rbac.yaml

Verify Deployments

kubectl get pods -n vulnerable-test
kubectl get services -n vulnerable-test
kubectl get deployments -n vulnerable-test
kubectl get sa -n vulnerable-test
kubectl get clusterrolebinding insecure-binding

Vulnerability Verifications

Verify Privileged Container Access

kubectl exec -it vulnerable-pod -n vulnerable-test -- ls /host

Access Containers with Elevated Privileges

kubectl exec -it vulnerable-pod -n vulnerable-test -- /bin/bash
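
Because the container is privileged and the node's root filesystem is mounted at /host, a shell inside the container can effectively become a shell on the node. A hedged sketch of that escalation path (the shell available under /host depends on the node image):

# chroot into the node's filesystem from inside the container
kubectl exec -it vulnerable-pod -n vulnerable-test -- chroot /host /bin/sh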

Check Mounted Secrets

kubectl get secrets --all-namespaces
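
You can also confirm that the hardcoded credential from nginx.yaml is exposed in plain text in the Pod's environment:

# Print the hardcoded secret injected via the Pod spec
kubectl exec vulnerable-pod -n vulnerable-test -- printenv SECRET_PASSWORD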

Test Service Account Permissions

kubectl auth can-i --list --as=system:serviceaccount:vulnerable-test:insecure-service-account
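
Beyond listing everything, you can spot-check individual high-impact permissions; with the cluster-admin binding in place these should all return yes:

# Probe a few dangerous permissions granted by the cluster-admin binding
kubectl auth can-i create clusterrolebindings --as=system:serviceaccount:vulnerable-test:insecure-service-account
kubectl auth can-i get secrets -n kube-system --as=system:serviceaccount:vulnerable-test:insecure-service-account
kubectl auth can-i delete nodes --as=system:serviceaccount:vulnerable-test:insecure-service-account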

Remediation Examples

Secure Pod Configurations

  1. runAsUser: 1000 tells Kubernetes to run the container process (PID 1) with user ID (UID) 1000, a non-root user, inside the container.
  2. runAsGroup: 1000 specifies that the container process should run with group ID (GID) 1000.
  3. fsGroup: 1000 ensures that any volumes attached to the Pod have their files and directories owned by group ID 1000, which keeps file permissions consistent across different nodes in the cluster.

Running containers as non-root users and controlling their group and filesystem access aligns with the Pod Security Standards Baseline policy and reduces the attack surface.

# secure-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  namespace: secure-test
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
  - name: secure-container
    image: nginx:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      capabilities:
        drop:
        - ALL
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: tmp-volume
      mountPath: /tmp
    - name: cache-volume
      mountPath: /var/cache/nginx
  volumes:
  - name: tmp-volume
    emptyDir: {}
  - name: cache-volume
    emptyDir: {}
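
To remediate the hardcoded SECRET_PASSWORD from the vulnerable Pod, the credential can be kept in a Kubernetes Secret and referenced from the Pod spec. A minimal sketch (the secret and key names are only examples; note that the secure-test namespace used by these manifests must exist first):

# Create the namespace and a Secret holding the credential
kubectl create namespace secure-test
kubectl create secret generic app-credentials --from-literal=password='admin123' -n secure-test
# Reference it from a container spec with:
#   env:
#   - name: SECRET_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: app-credentials
#         key: password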

Secure Network Policy Configurations

  1. default-deny-all NetworkPolicy: Establishes a baseline where all traffic to and from pods in the secure-test namespace is denied by default
  2. allow-specific-ingress NetworkPolicy: This policy then creates an exception to this default deny policy, permitting specific ingress traffic to pods labeled app: secure-app on port 8080

This approach, known as the "default deny, explicit allow" model, is a security best practice. It ensures that only necessary traffic flows are allowed and helps to prevent unauthorized access and communication, enhancing the overall security of your Kubernetes cluster.

# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: secure-test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-ingress
  namespace: secure-test
spec:
  podSelector:
    matchLabels:
      app: secure-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: allowed-namespace
    ports:
    - protocol: TCP
      port: 8080
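
Note that NetworkPolicies are only enforced if the cluster's CNI plugin supports them; the default Minikube network plugin may not, so you might need to start Minikube with a CNI such as Calico (for example, minikube start --cni=calico). A hedged way to verify the default-deny behavior once the policies are applied:

# Apply the policies, then try to reach a secure-app Pod from the default namespace
kubectl apply -f network-policy.yaml
kubectl run np-test --rm -it --restart=Never --image=busybox -n default -- wget -qO- -T 2 http://<secure-app-pod-ip>:8080
# The request should time out, since the default namespace is not labeled name=allowed-namespace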

B. CIS Benchmark Scanning of Kubernetes Cluster with kube-bench

Running CIS Benchmarks on a Kubernetes cluster means assessing your cluster's configuration against a set of security best practices and recommendations published by the Center for Internet Security (CIS).

Install & Run kube-bench

Option 1: Install & Run kube-bench with a Docker Container (Host-based Scan)

This scans the host and local component configuration files, such as those for the kubelet, etcd, and API server, on the machine running the Minikube cluster.

docker run -it --rm \
  --name kube-bench \
  --pid=host \
  --net=host \
  -v /etc:/etc:ro \
  -v /var:/var:ro \
  -v /usr/bin:/usr/bin:ro \
  -v /lib/systemd:/lib/systemd:ro \
  aquasec/kube-bench:latest

If you want to export kube-bench output to a JSON report

docker run -it --rm \
  --pid=host \
  --net=host \
  -v /etc:/etc:ro \
  -v /var:/var:ro \
  -v /usr/bin:/usr/bin:ro \
  -v /lib/systemd:/lib/systemd:ro \
  aquasec/kube-bench:latest --json > kube-bench-report.json

Option 2: Install & Run kube-bench in In-Cluster Mode (Node-level Scan)

You can also run kube-bench inside the Minikube cluster as a Pod:

# kube-bench.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-bench
spec:
  hostPID: true
  containers:
    - name: kube-bench
      image: aquasec/kube-bench:latest
      args: ["--benchmark", "cis-1.23", "--json"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: var-lib-etcd
          mountPath: /var/lib/etcd
          readOnly: true
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
        - name: etc-systemd
          mountPath: /etc/systemd
          readOnly: true
        - name: usr-bin
          mountPath: /usr/bin
          readOnly: true
        - name: var-lib-kubelet
          mountPath: /var/lib/kubelet
          readOnly: true
  volumes:
    - name: var-lib-etcd
      hostPath:
        path: /var/lib/etcd
    - name: etc-kubernetes
      hostPath:
        path: /etc/kubernetes
    - name: etc-systemd
      hostPath:
        path: /etc/systemd
    - name: usr-bin
      hostPath:
        path: /usr/bin
    - name: var-lib-kubelet
      hostPath:
        path: /var/lib/kubelet
  restartPolicy: Never

Apply the Pod Config Above

kubectl apply -f kube-bench.yaml

View the Report

Filter everything between the opening { and closing } (i.e., the start and end of the JSON) and redirect it cleanly to a JSON file:

kubectl logs kube-bench | sed -n '/^{/,/^}$/p' > kube-bench-results.json

The raw output can be hard to read, so check the JSON file with jq.

  1. jq is a powerful, lightweight, and flexible command-line JSON processor.
  2. It's like sed, awk, and grep but specifically designed for working with JSON data, allowing you to slice, filter, map, and transform structured data.

Install jq

sudo apt install jq
jq . kube-bench-results.json
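
jq can also pull out just the findings you care about. For example, to list only the failed checks (assuming the same Controls/tests/results layout that the Python script below walks through):

# List the test number and description of every failed check
jq -r '.Controls[].tests[].results[] | select(.status == "FAIL") | "\(.test_number)  \(.test_desc)"' kube-bench-results.json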

Write Python Script to Convert kube-bench-results.json to Readable format

  1. The Python script kube_bench_report_gen.py will generate a human-readable report in kube_bench_report.md.
  2. Each test entry includes its number, description, status, and a remediation summary.
# kube_bench_report_gen.py
import json
from pathlib import Path

def load_kube_bench_results(json_path):
    with open(json_path, 'r') as file:
        return json.load(file)

def generate_report(data, output_path="kube_bench_report.md"):
    lines = []
    total_pass, total_fail, total_warn, total_info = 0, 0, 0, 0

    for control in data.get("Controls", []):
        lines.append(f"# Control: {control['text']} ({control['id']})")
        lines.append(f"**Node Type:** {control['node_type']}")
        lines.append("")

        for test in control.get("tests", []):
            section_header = f"## Section {test['section']}: {test['desc']}"
            lines.append(section_header)
            lines.append(f"- **Pass:** {test['pass']}")
            lines.append(f"- **Fail:** {test['fail']}")
            lines.append(f"- **Warn:** {test['warn']}")
            lines.append(f"- **Info:** {test['info']}")
            lines.append("")

            total_pass += test['pass']
            total_fail += test['fail']
            total_warn += test['warn']
            total_info += test['info']

            for result in test.get("results", []):
                lines.append(f"### {result['test_number']} - {result['test_desc']}")
                lines.append(f"- **Status:** {result['status']}")
                if result.get('reason'):
                    lines.append(f"- **Reason:** {result['reason'][:500]}{'...' if len(result['reason']) > 500 else ''}")
                if result.get('remediation'):
                    lines.append(f"- **Remediation:** {result['remediation'].replace(chr(10), ' ')}")
                lines.append("")

        lines.append("\n---\n")

    # Summary
    lines.append("# Summary")
    lines.append(f"- **Total Passed:** {total_pass}")
    lines.append(f"- **Total Failed:** {total_fail}")
    lines.append(f"- **Total Warnings:** {total_warn}")
    lines.append(f"- **Total Info:** {total_info}")
    lines.append("")

    with open(output_path, 'w') as file:
        file.write('\n'.join(lines))

    print(f" Report saved to {output_path}")

if __name__ == "__main__":
    json_input = "kube-bench-results.json"  # Adjust path if needed
    output_file = "kube_bench_report.md"
    data = load_kube_bench_results(json_input)
    generate_report(data, output_file)

Run the Python Script

python3 kube_bench_report_gen.py

Clean Up

# Delete vulnerable resources
kubectl delete namespace vulnerable-test

# Delete kube-bench pod (if you used option 2)
kubectl delete pod kube-bench || true

# Stop Minikube
minikube stop

# Optional: Delete Minikube cluster
minikube delete --all

Subscribe to iQuant on YouTube!!!
