This demo walks through penetration testing and CIS Benchmark scanning on a Kubernetes cluster: kube-hunter identifies vulnerabilities in the cluster, and kube-bench assesses the cluster's configuration against a set of security best practices.
- Kubernetes cluster (Minikube)
- Kube-hunter (by Aqua Security)
- Kubectl
- Docker
- Python & Pip (optional)
- Kube-bench (by Aqua Security)
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl wget apt-transport-https ca-certificates gnupg lsb-release python3-pip git
sudo apt install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
sudo usermod -aG docker $USER
newgrp docker
docker --version
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
minikube start --driver=docker --cpus=2 --memory=4096
kubectl cluster-info
minikube status
minikube addons enable dashboard
minikube addons enable metrics-server
minikube addons enable ingress
minikube ip
You may need to set up a Python virtual environment for this.
# Method 1: Install via pip
pip3 install kube-hunter
# Method 2: Clone from GitHub (for latest version)
git clone https://github.com/aquasecurity/kube-hunter.git
cd kube-hunter
pip3 install -r requirements.txt
# Get the Minikube IP
MINIKUBE_IP=$(minikube ip)
# Run kube-hunter in passive mode
kube-hunter --remote $MINIKUBE_IP --report plain
Note: If you get an error when running the kube-hunter container below, log in to Docker Hub first using a Personal Access Token (PAT).
docker login -u iquantc
# Generate output on terminal
docker run -it --network minikube --rm aquasec/kube-hunter --remote 192.168.49.2 --report plain
or
# Send output to a JSON File
docker run -it --network minikube --rm aquasec/kube-hunter --remote 192.168.49.2 --report json > kube-hunter-report.json
Deploy a kube-hunter pod inside the Kubernetes cluster to discover internal cluster vulnerabilities.
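As a quick sanity check on the JSON report generated above, a short script can tally findings by severity. This is a sketch, not part of kube-hunter itself: it assumes the report carries a top-level `vulnerabilities` list whose entries include a `severity` field, so verify the layout against your own report.

```python
from collections import Counter

def summarize(report: dict) -> Counter:
    """Count kube-hunter findings per severity level."""
    return Counter(v.get("severity", "unknown")
                   for v in report.get("vulnerabilities", []))

# Hypothetical miniature report, mirroring the fields a real scan emits:
sample = {
    "vulnerabilities": [
        {"severity": "high", "vulnerability": "Privileged Container"},
        {"severity": "low", "vulnerability": "K8s Version Disclosure"},
        {"severity": "high", "vulnerability": "Exposed Pods"},
    ]
}
print(summarize(sample))
```

With a real report, replace `sample` with `json.load(open("kube-hunter-report.json"))`.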
Based on the aquasecurity/kube-hunter GitHub Repository: https://github.com/aquasecurity/kube-hunter/blob/main/job.yaml
# kube-hunter-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
spec:
  template:
    metadata:
      labels:
        app: kube-hunter
    spec:
      containers:
        - name: kube-hunter
          image: aquasec/kube-hunter:0.6.8
          command: ["kube-hunter"]
          args: ["--pod"]
      restartPolicy: Never
kubectl create -f kube-hunter-job.yaml
kubectl describe job kube-hunter
or
kubectl get pods
kubectl logs <pod name>
kubectl create namespace vulnerable-test
Save the config below as nginx.yaml
- runAsUser: 0 runs the container process as the root user.
- privileged: true grants the container root capabilities to all devices on the host system.
- Hard-coded secrets: store base64-encoded values in a Kubernetes Secret instead (echo -n "admin123" | base64), or use a secret management store like Vault.
- hostPath mount: allows a Pod to mount a file or directory from the node's filesystem directly into the container running within the Pod. Use ConfigMaps or Secrets when appropriate.
- Image version: where possible, keep the container images of Pods in the cluster updated to current, patched versions rather than relying on the mutable latest tag.
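The base64 step above can also be scripted. A minimal sketch (the `to_secret_value` helper is hypothetical, introduced here only for illustration):

```python
# Sketch: base64-encode credentials the way `echo -n "admin123" | base64` does,
# producing the form Kubernetes expects in a Secret's `data:` field.
import base64

def to_secret_value(plaintext: str) -> str:
    """Return the base64-encoded form of a secret value."""
    return base64.b64encode(plaintext.encode()).decode()

print(to_secret_value("admin123"))  # YWRtaW4xMjM=
```

Remember that base64 is encoding, not encryption; anyone who can read the Secret can decode it, which is why a store like Vault is preferable for sensitive values.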
# nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-pod
  namespace: vulnerable-test
  labels:
    app: vulnerable-app
spec:
  securityContext:
    runAsUser: 0 # Running as root (vulnerability)
  containers:
    - name: vulnerable-container
      image: nginx:latest
      securityContext:
        privileged: true # Privileged container (vulnerability)
        allowPrivilegeEscalation: true
        readOnlyRootFilesystem: false
        runAsNonRoot: false
      ports:
        - containerPort: 80
      env:
        - name: SECRET_PASSWORD
          value: "admin123" # Hardcoded secret (vulnerability)
      volumeMounts:
        - name: host-root
          mountPath: /host
  volumes:
    - name: host-root
      hostPath:
        path: / # Host path mount (vulnerability)
---
apiVersion: v1
kind: Service
metadata:
  name: vulnerable-service
  namespace: vulnerable-test
spec:
  selector:
    app: vulnerable-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  type: NodePort
Save the config below as insecure-rbac.yaml
- When it comes to security, the guiding principle is least privilege: grant only the minimum permissions required.
- cluster-admin role: provides the capability to perform any action on any resource within the cluster.
- runAsUser: 0 runs the container process as the root user.
- SYS_ADMIN, NET_ADMIN: grant the container extensive system administration and network management privileges.
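Excessive RBAC grants like this can be spotted programmatically by scanning bindings for roleRef cluster-admin. A sketch, assuming `bindings` is shaped like the "items" array of `kubectl get clusterrolebindings -o json` (the helper name is mine, not a real tool):

```python
def find_cluster_admin_bindings(bindings: list) -> list:
    """Return (binding name, subject name) pairs bound to cluster-admin."""
    risky = []
    for b in bindings:
        if b.get("roleRef", {}).get("name") == "cluster-admin":
            for s in b.get("subjects", []):
                risky.append((b["metadata"]["name"], s.get("name")))
    return risky

# Miniature parsed binding, mirroring the insecure-rbac.yaml below:
sample = [{
    "metadata": {"name": "insecure-binding"},
    "roleRef": {"kind": "ClusterRole", "name": "cluster-admin"},
    "subjects": [{"kind": "ServiceAccount",
                  "name": "insecure-service-account",
                  "namespace": "vulnerable-test"}],
}]
print(find_cluster_admin_bindings(sample))  # [('insecure-binding', 'insecure-service-account')]
```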
# insecure-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: insecure-service-account
  namespace: vulnerable-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: insecure-binding
subjects:
  - kind: ServiceAccount
    name: insecure-service-account
    namespace: vulnerable-test
roleRef:
  kind: ClusterRole
  name: cluster-admin # Excessive permissions (vulnerability)
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: insecure-deployment
  namespace: vulnerable-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: insecure-app
  template:
    metadata:
      labels:
        app: insecure-app
    spec:
      serviceAccountName: insecure-service-account
      containers:
        - name: insecure-container
          image: alpine:latest
          command: ["sleep", "3600"]
          securityContext:
            runAsUser: 0
            capabilities:
              add: ["SYS_ADMIN", "NET_ADMIN"] # Excessive capabilities
kubectl apply -f nginx.yaml
kubectl apply -f insecure-rbac.yaml
kubectl get pods -n vulnerable-test
kubectl get services -n vulnerable-test
kubectl get deployments -n vulnerable-test
kubectl get sa -n vulnerable-test
kubectl get clusterrole
kubectl exec -it vulnerable-pod -n vulnerable-test -- ls /host
kubectl exec -it vulnerable-pod -n vulnerable-test -- /bin/bash
kubectl get secrets --all-namespaces
kubectl auth can-i --list --as=system:serviceaccount:vulnerable-test:insecure-service-account
- runAsUser: 1000 tells Kubernetes to run the container process (PID 1) with user ID (UID) 1000, a non-root user, inside the container.
- runAsGroup: 1000 specifies that the container process should run with group ID (GID) 1000.
- fsGroup: 1000 ensures that any volumes attached to the Pod have their files and directories owned by GID 1000, keeping file permissions consistent across nodes in the cluster.
Running containers as non-root users and controlling their group and filesystem access aligns with the Pod Security Standards Baseline policy and reduces the attack surface.
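The checks described above can be approximated in a few lines. This sketch applies an illustrative subset of the rules, not the complete Pod Security Standards Baseline, to a Pod spec already parsed into a dict:

```python
def check_pod_security(pod_spec: dict) -> list:
    """Return human-readable findings for risky securityContext settings."""
    findings = []
    ctx = pod_spec.get("securityContext", {})
    # Treat an unset runAsUser as root, since that is the common default.
    if ctx.get("runAsUser", 0) == 0:
        findings.append("pod runs as root (runAsUser: 0 or unset)")
    for c in pod_spec.get("containers", []):
        cctx = c.get("securityContext", {})
        if cctx.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if cctx.get("allowPrivilegeEscalation", True):
            findings.append(f"{c['name']}: privilege escalation allowed")
    return findings

# Mirrors the vulnerable-pod spec above:
insecure = {"securityContext": {"runAsUser": 0},
            "containers": [{"name": "c", "securityContext": {"privileged": True}}]}
print(check_pod_security(insecure))
```

A compliant spec like secure-pod.yaml below should come back with an empty findings list.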
# secure-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  namespace: secure-test
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
    - name: secure-container
      image: nginx:latest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        capabilities:
          drop:
            - ALL
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: tmp-volume
          mountPath: /tmp
        - name: cache-volume
          mountPath: /var/cache/nginx
  volumes:
    - name: tmp-volume
      emptyDir: {}
    - name: cache-volume
      emptyDir: {}
- default-deny-all NetworkPolicy: establishes a baseline where all traffic to and from pods in the secure-test namespace is denied by default.
- allow-specific-ingress NetworkPolicy: This policy then creates an exception to this default deny policy, permitting specific ingress traffic to pods labeled app: secure-app on port 8080
This approach, known as the "default deny, explicit allow" model, is a security best practice. It ensures that only necessary traffic flows are allowed and helps to prevent unauthorized access and communication, enhancing the overall security of your Kubernetes cluster.
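The "default deny, explicit allow" model can be illustrated with a toy evaluator. This deliberately simplifies Kubernetes NetworkPolicy semantics (no CIDR blocks, no pod selectors on the source side); the rule and traffic shapes are invented for the example:

```python
def is_allowed(traffic: dict, allow_rules: list) -> bool:
    """Default deny: ingress passes only if some allow rule matches
    the destination pod label, source namespace, and port."""
    return any(
        rule["pod_label"] == traffic["pod_label"]
        and rule["from_namespace"] == traffic["from_namespace"]
        and rule["port"] == traffic["port"]
        for rule in allow_rules
    )

# Mirror of the allow-specific-ingress policy above:
rules = [{"pod_label": "secure-app", "from_namespace": "allowed-namespace", "port": 8080}]
print(is_allowed({"pod_label": "secure-app", "from_namespace": "allowed-namespace", "port": 8080}, rules))  # True
print(is_allowed({"pod_label": "secure-app", "from_namespace": "other", "port": 8080}, rules))              # False
```

With an empty rule list everything is denied, which is exactly what the default-deny-all policy alone achieves.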
# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: secure-test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-ingress
  namespace: secure-test
spec:
  podSelector:
    matchLabels:
      app: secure-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: allowed-namespace
      ports:
        - protocol: TCP
          port: 8080
Running CIS Benchmarks on a Kubernetes cluster means assessing your cluster's configuration against a set of security best practices and recommendations published by the Center for Internet Security (CIS).
This scans the host and local components like kubelet, etcd, API server config files etc., using the Minikube profile.
docker run -it --rm \
--name kube-bench \
--pid=host \
--net=host \
-v /etc:/etc:ro \
-v /var:/var:ro \
-v /usr/bin:/usr/bin:ro \
-v /lib/systemd:/lib/systemd:ro \
aquasec/kube-bench:latest
If you want to export the kube-bench output to a JSON report:
docker run -it --rm \
--pid=host \
--net=host \
-v /etc:/etc:ro \
-v /var:/var:ro \
-v /usr/bin:/usr/bin:ro \
-v /lib/systemd:/lib/systemd:ro \
aquasec/kube-bench:latest --json > kube-bench-report.json
You can also run kube-bench inside the Minikube cluster as a Pod.
# kube-bench.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-bench
spec:
  hostPID: true
  containers:
    - name: kube-bench
      image: aquasec/kube-bench:latest
      args: ["--benchmark", "cis-1.23", "--json"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: var-lib-etcd
          mountPath: /var/lib/etcd
          readOnly: true
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
        - name: etc-systemd
          mountPath: /etc/systemd
          readOnly: true
        - name: usr-bin
          mountPath: /usr/bin
          readOnly: true
        - name: var-lib-kubelet
          mountPath: /var/lib/kubelet
          readOnly: true
  volumes:
    - name: var-lib-etcd
      hostPath:
        path: /var/lib/etcd
    - name: etc-kubernetes
      hostPath:
        path: /etc/kubernetes
    - name: etc-systemd
      hostPath:
        path: /etc/systemd
    - name: usr-bin
      hostPath:
        path: /usr/bin
    - name: var-lib-kubelet
      hostPath:
        path: /var/lib/kubelet
  restartPolicy: Never
kubectl apply -f kube-bench.yaml
Filter everything between { and }, i.e. the start and end of the JSON, and redirect it cleanly to the JSON file:
kubectl logs kube-bench | sed -n '/^{/,/^}$/p' > kube-bench-results.json
The raw output can be hard to read, so inspect the JSON file with jq.
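If sed is unavailable, the same extraction can be done in Python. A sketch mirroring the sed range above, keeping only the lines from the first line starting with "{" through the line that is exactly "}":

```python
def extract_json_block(log_text: str) -> str:
    """Keep the lines between the opening '{' line and the closing '}' line."""
    keep, out = False, []
    for line in log_text.splitlines():
        if line.startswith("{"):
            keep = True
        if keep:
            out.append(line)
        if line == "}":
            keep = False
    return "\n".join(out)

# Hypothetical mixed log: banner text around the JSON report.
mixed = 'kube-bench banner line\n{\n  "Totals": {}\n}\ntrailing noise'
print(extract_json_block(mixed))
```

Feed it the output of `kubectl logs kube-bench` and write the result to kube-bench-results.json.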
- jq is a powerful, lightweight, and flexible command-line JSON processor.
- It's like sed, awk, and grep but specifically designed for working with JSON data, allowing you to slice, filter, map, and transform structured data.
sudo apt install jq
jq . kube-bench-results.json
- The Python script kube_bench_report_gen.py will generate a human-readable report in kube_bench_report.md.
- Each test entry includes its number, description, status, and a remediation summary
# kube_bench_report_gen.py
import json

def load_kube_bench_results(json_path):
    with open(json_path, 'r') as file:
        return json.load(file)

def generate_report(data, output_path="kube_bench_report.md"):
    lines = []
    total_pass, total_fail, total_warn, total_info = 0, 0, 0, 0
    for control in data.get("Controls", []):
        lines.append(f"# Control: {control['text']} ({control['id']})")
        lines.append(f"**Node Type:** {control['node_type']}")
        lines.append("")
        for test in control.get("tests", []):
            section_header = f"## Section {test['section']}: {test['desc']}"
            lines.append(section_header)
            lines.append(f"- **Pass:** {test['pass']}")
            lines.append(f"- **Fail:** {test['fail']}")
            lines.append(f"- **Warn:** {test['warn']}")
            lines.append(f"- **Info:** {test['info']}")
            lines.append("")
            total_pass += test['pass']
            total_fail += test['fail']
            total_warn += test['warn']
            total_info += test['info']
            for result in test.get("results", []):
                lines.append(f"### {result['test_number']} - {result['test_desc']}")
                lines.append(f"- **Status:** {result['status']}")
                if result.get('reason'):
                    lines.append(f"- **Reason:** {result['reason'][:500]}{'...' if len(result['reason']) > 500 else ''}")
                if result.get('remediation'):
                    lines.append(f"- **Remediation:** {result['remediation'].replace(chr(10), ' ')}")
                lines.append("")
        lines.append("\n---\n")
    # Summary
    lines.append("# Summary")
    lines.append(f"- **Total Passed:** {total_pass}")
    lines.append(f"- **Total Failed:** {total_fail}")
    lines.append(f"- **Total Warnings:** {total_warn}")
    lines.append(f"- **Total Info:** {total_info}")
    lines.append("")
    with open(output_path, 'w') as file:
        file.write('\n'.join(lines))
    print(f"Report saved to {output_path}")

if __name__ == "__main__":
    json_input = "kube-bench-results.json"  # Adjust path if needed
    output_file = "kube_bench_report.md"
    data = load_kube_bench_results(json_input)
    generate_report(data, output_file)
python3 kube_bench_report_gen.py
# Delete vulnerable resources
kubectl delete namespace vulnerable-test
# Delete the kube-bench pod (if you ran kube-bench as a Pod in the cluster)
kubectl delete pod kube-bench || true
# Stop Minikube
minikube stop
# Optional: Delete Minikube cluster
minikube delete --all