56 changes: 56 additions & 0 deletions .k8s/base/api/deployment.yaml
@@ -0,0 +1,56 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    spec:
      initContainers:
      - name: migrations
        image: learninglocker
        args: [migrations]
        envFrom:
        - configMapRef:
            name: learninglocker-config
        - secretRef:
            name: learninglocker-secret
        env:
        - name: MONGO_HOST
          value: $(LEARNINGLOCKER_MONGO_SERVICE)

      - name: seeds
        image: learninglocker
        args: [seeds]
        envFrom:
        - configMapRef:
            name: learninglocker-config
        - secretRef:
            name: learninglocker-secret
        env:
        - name: MONGO_HOST
          value: $(LEARNINGLOCKER_MONGO_SERVICE)

      containers:
      - name: api
        image: learninglocker
        args: [api]
        envFrom:
        - configMapRef:
            name: learninglocker-config
        - secretRef:
            name: learninglocker-secret
        env:
        - name: PATH_PREFIX
          value: "/api"
        - name: REDIS_HOST
          value: $(LEARNINGLOCKER_REDIS_SERVICE)
        - name: MONGO_HOST
          value: $(LEARNINGLOCKER_MONGO_SERVICE)
        ports:
        - containerPort: 8080
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /api/
            port: 8080
Comment on lines +34 to +56
🛠️ Refactor suggestion

Harden security, add resources and readiness probe.

Baseline best practices.

         - name: api
           image: learninglocker
           args: [api]
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            seccompProfile:
+              type: RuntimeDefault
+            capabilities:
+              drop: [ "ALL" ]
+          resources:
+            requests:
+              cpu: "100m"
+              memory: "256Mi"
+            limits:
+              cpu: "500m"
+              memory: "512Mi"
+          readinessProbe:
+            httpGet:
+              path: /api/
+              port: 8080
+            initialDelaySeconds: 5
+            periodSeconds: 10
+            failureThreshold: 3
+          automountServiceAccountToken: false

Committable suggestion skipped: line range outside the PR's diff.
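One way to sanity-check the suggested probe numbers: the worst-case time to mark a never-ready pod is roughly `initialDelaySeconds + periodSeconds × failureThreshold`. An illustrative sketch using the values from the suggestion (the formula is the standard estimate, not an exact kubelet guarantee):

```python
# Rough failure-detection window implied by a Kubernetes probe config.
# Values mirror the readinessProbe suggested above.

def detection_window(initial_delay: int, period: int, failure_threshold: int) -> int:
    """Seconds until a pod that never responds is marked not-ready."""
    return initial_delay + period * failure_threshold

# readinessProbe: initialDelaySeconds: 5, periodSeconds: 10, failureThreshold: 3
print(detection_window(5, 10, 3))  # -> 35
```

Tightening `periodSeconds` shortens this window at the cost of more probe traffic.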

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 1-56: Containers should not run with allowPrivilegeEscalation (CKV_K8S_20)
[LOW] 1-56: CPU limits should be set (CKV_K8S_11)
[LOW] 1-56: CPU requests should be set (CKV_K8S_10)
[LOW] 1-56: Apply security context to your containers (CKV_K8S_30)
[LOW] 1-56: The default namespace should not be used (CKV_K8S_21)
[LOW] 1-56: Minimize the admission of containers with the NET_RAW capability (CKV_K8S_28)
[LOW] 1-56: Image should use digest (CKV_K8S_43)
[LOW] 1-56: Image Tag should be fixed - not latest or blank (CKV_K8S_14)
[LOW] 1-56: Memory limits should be set (CKV_K8S_13)
[LOW] 1-56: Memory requests should be set (CKV_K8S_12)
[LOW] 1-56: Minimize the admission of containers with capabilities assigned (CKV_K8S_37)
[LOW] 1-56: Apply security context to your pods and containers (CKV_K8S_29)
[LOW] 1-56: Use read-only filesystem for containers where possible (CKV_K8S_22)
[LOW] 1-56: Readiness Probe Should be Configured (CKV_K8S_9)
[MEDIUM] 1-56: Minimize the admission of root containers (CKV_K8S_23)
[LOW] 1-56: Containers should run as a high UID to avoid host conflict (CKV_K8S_40)
[LOW] 1-56: Ensure that the seccomp profile is set to docker/default or runtime/default (CKV_K8S_31)
[LOW] 1-56: Prefer using secrets as files over secrets as environment variables (CKV_K8S_35)
[LOW] 1-56: Ensure that Service Account Tokens are only mounted where necessary (CKV_K8S_38)

🪛 YAMLlint (1.37.1)

[error] 35-35: wrong indentation: expected 8 but found 6 (indentation)
[error] 39-39: wrong indentation: expected 10 but found 8 (indentation)
[error] 44-44: wrong indentation: expected 10 but found 8 (indentation)
[error] 51-51: wrong indentation: expected 10 but found 8 (indentation)

12 changes: 12 additions & 0 deletions .k8s/base/api/kustomization.yaml
@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

labels:
- pairs:
    app.kubernetes.io/component: api
  includeSelectors: true
  includeTemplates: true

resources:
- deployment.yaml
- service.yaml
9 changes: 9 additions & 0 deletions .k8s/base/api/service.yaml
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
38 changes: 38 additions & 0 deletions .k8s/base/ingress.yaml
@@ -0,0 +1,38 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: learninglocker
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/redirect-entry-point: https
Comment on lines +5 to +8
@coderabbitai coderabbitai bot Aug 28, 2025

🧹 Nitpick (assertive)

Legacy annotations — confirm Traefik/cert-manager versions

  • kubernetes.io/tls-acme and traefik.ingress.kubernetes.io/redirect-entry-point are legacy. Traefik v2 typically uses router.* annotations; cert-manager relies on the TLS block + issuer annotation only.

If you’re on Traefik v2, I can propose the v2 router.entrypoints/router.tls annotations.

🤖 Prompt for AI Agents
In .k8s/base/ingress.yaml around lines 5-8, the file uses legacy annotations
(kubernetes.io/tls-acme and traefik.ingress.kubernetes.io/redirect-entry-point);
if you’re running Traefik v2, remove kubernetes.io/tls-acme and the
redirect-entry-point annotation and replace them with Traefik v2 router
annotations (e.g. traefik.ingress.kubernetes.io/router.entrypoints set to
websecure and traefik.ingress.kubernetes.io/router.tls enabled) and ensure
cert-manager is configured via a TLS block with a secretName plus the
cert-manager.io/cluster-issuer (or cert-manager.io/issuer) annotation; confirm
Traefik and cert-manager versions and apply those changes accordingly.

Author

@coderabbitai I'm on Traefik v3.3, please provide updated annotation for recent versions of the Traefik Ingress controller.

The cert-manager.io/cluster-issuer annotation remains current and works with Traefik v3.3.

Author

@coderabbitai Please propose the v2 router.entrypoints/router.tls annotations.

@ralf-berger Here are the updated Traefik v3.3 annotations for your ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: learninglocker
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
    traefik.ingress.kubernetes.io/router.tls: "true"

Changes made:

  • Removed kubernetes.io/tls-acme: "true" - cert-manager handles this automatically with the TLS block
  • Removed traefik.ingress.kubernetes.io/redirect-entry-point: https - replaced with modern router configuration
  • Added traefik.ingress.kubernetes.io/router.entrypoints: web,websecure - accepts both HTTP (web) and HTTPS (websecure) traffic
  • Added traefik.ingress.kubernetes.io/router.tls: "true" - enables TLS on the router

The cert-manager.io/cluster-issuer: letsencrypt-prod annotation remains unchanged as it's the current standard for cert-manager integration.

spec:
  rules:
  - host: $(LEARNINGLOCKER_FQDN)
    http:
      paths:
      - pathType: Prefix
        path: /api
        backend:
          service:
            name: api
            port:
              name: http
      - pathType: Prefix
        path: /data
        backend:
          service:
            name: xapi-service
            port:
              name: http
      - pathType: Prefix
        path: /
        backend:
          service:
            name: ui
            port:
              name: http
  tls:
  - hosts:
    - $(LEARNINGLOCKER_FQDN)
    secretName: learninglocker-tls-crt
Comment on lines +9 to +38
🛠️ Refactor suggestion

Fix indentation; add ingressClassName; modernize Traefik/cert-manager usage

Current YAML is mis-indented and uses legacy annotations. Suggested minimal fix (keeping your routes) and explicit class:

 spec:
-  rules:
-  - host: $(LEARNINGLOCKER_FQDN)
-    http:
-      paths:
-      - pathType: Prefix
-        path: /api
-        backend:
-          service:
-            name: api
-            port:
-              name: http
-      - pathType: Prefix
-        path: /data
-        backend:
-          service:
-            name: xapi-service
-            port:
-              name: http
-      - pathType: Prefix
-        path: /
-        backend:
-          service:
-            name: ui
-            port:
-              name: http
-  tls:
-  - hosts:
-    - $(LEARNINGLOCKER_FQDN)
-    secretName: learninglocker-tls-crt
+  ingressClassName: traefik
+  rules:
+    - host: $(LEARNINGLOCKER_FQDN)
+      http:
+        paths:
+          - path: /api
+            pathType: Prefix
+            backend:
+              service:
+                name: api
+                port: { name: http }
+          - path: /data
+            pathType: Prefix
+            backend:
+              service:
+                name: xapi-service
+                port: { name: http }
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: ui
+                port: { name: http }
+  tls:
+    - hosts: [$(LEARNINGLOCKER_FQDN)]
+      secretName: learninglocker-tls-crt
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-38: The default namespace should not be used (CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 11-11: wrong indentation: expected 4 but found 2 (indentation)
[error] 14-14: wrong indentation: expected 8 but found 6 (indentation)
[error] 36-36: wrong indentation: expected 4 but found 2 (indentation)
[error] 37-37: wrong indentation: expected 6 but found 4 (indentation)

🤖 Prompt for AI Agents
.k8s/base/ingress.yaml around lines 9-38: the spec block is mis-indented and is
still relying on legacy annotations; fix the YAML indentation so spec -> rules
-> - host -> http -> paths -> - pathType/path/backend/service/name/port are
properly nested, align the tls block (hosts/secretName) under spec, add an
explicit ingressClassName: "traefik" at the top of the spec, and remove/replace
legacy cert-manager/Traefik annotations in favor of the ingressClassName plus
the tls.secretName (ensure the TLS secret and host entry are properly indented
and retained).
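The three Prefix rules rely on longest-prefix matching, so the `/` catch-all only receives traffic that `/api` and `/data` don't claim. A simplified Python model of that routing decision (service names from the ingress; the matching logic is an illustrative approximation of controller behavior):

```python
# Sketch of backend selection for Ingress Prefix paths: among matching
# prefixes, the longest one wins, so "/" only catches leftovers.

RULES = [("/api", "api"), ("/data", "xapi-service"), ("/", "ui")]

def route(path: str) -> str:
    matches = [(p, svc) for p, svc in RULES
               if path == p or path.startswith(p.rstrip("/") + "/") or p == "/"]
    # longest matching prefix wins
    return max(matches, key=lambda m: len(m[0]))[1]

print(route("/api/v2/statements"))  # -> api
print(route("/data/xAPI"))          # -> xapi-service
print(route("/login"))              # -> ui
```

This is why the order of the `paths` list in the manifest is not what decides precedence.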

68 changes: 68 additions & 0 deletions .k8s/base/kustomization.yaml
@@ -0,0 +1,68 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namePrefix: learninglocker-
labels:
- pairs:
    app.kubernetes.io/name: learninglocker
  includeSelectors: true
  includeTemplates: true

resources:
- api
- mongo
- redis
- ui
- worker
- xapi-service
- ingress.yaml

configMapGenerator:
- name: learninglocker-ingress
  literals:
  - host=example.org

- name: learninglocker-config
  literals:
  - LOG_MIN_LEVEL=warning
  - QUEUE_PROVIDER=REDIS
  - MONGO_DATABASE=learninglocker_v2
  - UI_HOST=ui
  - UI_PORT=3000
  - API_HOST=api
  - API_PORT=8080
  # - SITE_URL=http://example.org
  # - SMTP_HOST=""
  # - SMTP_PORT=""
  # - SMTP_SECURED=""
  # - SMTP_USER=""

secretGenerator:
- name: learninglocker-secret
  literals: []
  # - APP_SECRET="i-am-not-secure-please-change-me"
  # - SMTP_PASS=""
  # - LL_ADMIN_EMAIL=""
  # - LL_ADMIN_ORG=""
  # - LL_ADMIN_PASSWORD=""

Comment on lines +40 to +48
🛠️ Refactor suggestion

Don’t generate empty Secrets in base; move secrets to overlays or external secret management.

Empty secretGenerator will create an unusable Secret and encourages env-var secrets. Prefer ExternalSecrets/SealedSecrets and mount as files.

-secretGenerator:
-  - name: learninglocker-secret
-    literals: []
+## Move secret generation to environment-specific overlays (e.g., .k8s/overlays/prod).
+# secretGenerator:
+#   - name: learninglocker-secret
+#     literals: []

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 43-44: Base64 High Entropy String (CKV_SECRET_6)

🪛 YAMLlint (1.37.1)

[error] 41-41: wrong indentation: expected 2 but found 0 (indentation)

🤖 Prompt for AI Agents
.k8s/base/kustomization.yaml around lines 40 to 48: the base currently contains
a secretGenerator with an empty name and no literals which will produce an
empty/unusable Kubernetes Secret; remove the secretGenerator entry from the base
and instead define secrets in environment-specific overlays or integrate an
external secret manager (ExternalSecrets/SealedSecrets) and reference those
resources from overlays; ensure the base kustomization has no literal secrets
and update overlays to either generate secrets there or mount secrets provided
by the external solution.
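The review prefers secrets mounted as files (see CKV_K8S_35) over env-var secrets. A hypothetical app-side sketch, assuming a Secret volume mounted at `/etc/secrets` — the mount path and helper name are illustrative, not part of this PR:

```python
import os
import tempfile
from pathlib import Path

# Hypothetical helper: prefer a secret mounted as a file
# (e.g. a Secret volume at /etc/secrets/<NAME>), fall back to an
# environment variable of the same name.

def read_secret(name: str, mount_dir: str = "/etc/secrets"):
    p = Path(mount_dir) / name
    if p.is_file():
        return p.read_text().strip()
    return os.environ.get(name)

# A temporary directory stands in for the volume mount:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "APP_SECRET").write_text("s3cret\n")
    print(read_secret("APP_SECRET", d))  # -> s3cret
```

File-mounted secrets can also be rotated in place by the kubelet, which env vars cannot.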

vars:
- name: LEARNINGLOCKER_FQDN
  objref:
    apiVersion: v1
    kind: ConfigMap
    name: learninglocker-ingress
  fieldref:
    fieldpath: data.host

images:
- name: learninglocker
  newName: ghcr.io/ude-soco/learninglocker
  newTag: master
- name: redis
  newTag: '7'
- name: mongo
  newTag: '4.4'
- name: xapi-service
Comment on lines +58 to +66
🛠️ Refactor suggestion

Pin images by digest; avoid floating tags (master, 7, 4.4).

Use digests to ensure reproducible deployments and mitigate supply-chain risk.

-  - name: learninglocker
-    newName: ghcr.io/ude-soco/learninglocker
-    newTag: master
+  - name: learninglocker
+    newName: ghcr.io/ude-soco/learninglocker@sha256:<digest>

-  - name: redis
-    newTag: '7'
+  - name: redis
+    newName: redis@sha256:<digest>

-  - name: mongo
-    newTag: '4.4'
+  - name: mongo
+    newName: mongo@sha256:<digest>

-  - name: xapi-service
-    newName: learninglocker/xapi-service
+  - name: xapi-service
+    newName: learninglocker/xapi-service@sha256:<digest>

Committable suggestion skipped: line range outside the PR's diff.
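Digest pinning works because `name@sha256:…` identifies immutable content, while a tag like `master` or `4.4` can be re-pushed to point at different bytes. An illustrative parser, simplified relative to the full OCI reference grammar:

```python
# Illustrative parser for container image references, showing why
# "mongo:4.4" still floats while "...@sha256:..." is immutable.

def parse_image(ref: str) -> dict:
    name, digest = (ref.split("@", 1) + [None])[:2]
    # a ":" after the last "/" is a tag, not a registry port
    tag = None
    last = name.rsplit("/", 1)[-1]
    if ":" in last:
        name, tag = name.rsplit(":", 1)
    return {"name": name, "tag": tag, "digest": digest,
            "pinned": digest is not None}

print(parse_image("mongo:4.4"))
# {'name': 'mongo', 'tag': '4.4', 'digest': None, 'pinned': False}
print(parse_image("ghcr.io/ude-soco/learninglocker@sha256:abc123")["pinned"])  # -> True
```

Kustomize's `images` transformer accepts a `digest:` field as well, so pinning can live in the overlay rather than the base.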

🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 59-59: wrong indentation: expected 2 but found 0 (indentation)

  newName: learninglocker/xapi-service

36 changes: 36 additions & 0 deletions .k8s/base/mongo/deployment.yaml
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    spec:
      containers:
      - name: mongo
        image: mongo
🧹 Nitpick (assertive)

Pin image version/digest

Avoid floating tags; set explicit mongo version and digest via kustomize.

🤖 Prompt for AI Agents
.k8s/base/mongo/deployment.yaml around line 13: the container image uses a
floating tag ("mongo"), which is unsafe; update it to an explicit version and
digest (e.g., mongo:<version>@sha256:<digest>) or use kustomize image
transformer to set newName/newTag and/or digest so the manifest references a
concrete immutable image; ensure you select a specific supported MongoDB version
and include the sha256 digest in the image reference, then commit the updated
kustomize patch or deployment.yaml.

        args:
        - --bind_ip
        - "0.0.0.0"
        - --wiredTigerCacheSizeGB
        - "0.25"
        - --quiet
        ports:
        - containerPort: 27017
          protocol: TCP
        volumeMounts:
        - mountPath: /data/db
          name: mongo
        livenessProbe:
          exec:
            command:
            - mongo
            - --disableImplicitSessions
            - --eval
            - "db.adminCommand('ping')"
      volumes:
      - name: mongo
        persistentVolumeClaim:
          claimName: mongo
Comment on lines +1 to +36
🧹 Nitpick (assertive)

Use StatefulSet for MongoDB (stable identity/storage) or tighten current Deployment

Mongo benefits from stable network IDs and PV binding. Prefer StatefulSet with a Headless Service.

If you want to keep Deployment for now, at minimum add selector/labels, security, resources, and probe timings as below.

 spec:
-  replicas: 1
-  strategy:
-    type: Recreate
-  template:
-    spec:
-      containers:
-        - name: mongo
-          image: mongo
+  replicas: 1
+  strategy: { type: Recreate }
+  selector:
+    matchLabels: { app: learninglocker, component: mongo }
+  template:
+    metadata:
+      labels: { app: learninglocker, component: mongo }
+    spec:
+      automountServiceAccountToken: false
+      securityContext: { runAsNonRoot: true, fsGroup: 999 }
+      containers:
+        - name: mongo
+          image: mongo
+          imagePullPolicy: IfNotPresent
+          securityContext:
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: false
+            capabilities: { drop: ["ALL"] }
+          resources:
+            requests: { cpu: "100m", memory: "512Mi" }
+            limits:   { cpu: "1",   memory: "2Gi" }
           args:
             - --bind_ip
             - "0.0.0.0"
             - --wiredTigerCacheSizeGB
             - "0.25"
             - --quiet
           ports:
             - containerPort: 27017
               protocol: TCP
           volumeMounts:
             - mountPath: /data/db
               name: mongo
           livenessProbe:
-            exec:
-              command:
-                - mongo
-                - --disableImplicitSessions
-                - --eval
-                - "db.adminCommand('ping')"
+            tcpSocket: { port: 27017 }
+            initialDelaySeconds: 15
+            periodSeconds: 10
+          readinessProbe:
+            exec:
+              command: ["bash","-lc","mongosh --quiet --eval \"db.adminCommand('ping')\" || exit 1"]
+            initialDelaySeconds: 10
+            periodSeconds: 10
       volumes:
-        - name: mongo
-          persistentVolumeClaim:
-            claimName: mongo
+        - name: mongo
+          persistentVolumeClaim: { claimName: mongo }

Please confirm your mongo image tag provides mongosh; if not, we’ll switch the readinessProbe to tcpSocket as well.

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 1-36: Containers should not run with allowPrivilegeEscalation (CKV_K8S_20)
[LOW] 1-36: CPU limits should be set (CKV_K8S_11)
[LOW] 1-36: CPU requests should be set (CKV_K8S_10)
[LOW] 1-36: Apply security context to your containers (CKV_K8S_30)
[LOW] 1-36: The default namespace should not be used (CKV_K8S_21)
[LOW] 1-36: Minimize the admission of containers with the NET_RAW capability (CKV_K8S_28)
[LOW] 1-36: Image should use digest (CKV_K8S_43)
[LOW] 1-36: Image Tag should be fixed - not latest or blank (CKV_K8S_14)
[LOW] 1-36: Memory limits should be set (CKV_K8S_13)
[LOW] 1-36: Memory requests should be set (CKV_K8S_12)
[LOW] 1-36: Minimize the admission of containers with capabilities assigned (CKV_K8S_37)
[LOW] 1-36: Apply security context to your pods and containers (CKV_K8S_29)
[LOW] 1-36: Use read-only filesystem for containers where possible (CKV_K8S_22)
[LOW] 1-36: Readiness Probe Should be Configured (CKV_K8S_9)
[MEDIUM] 1-36: Minimize the admission of root containers (CKV_K8S_23)
[LOW] 1-36: Containers should run as a high UID to avoid host conflict (CKV_K8S_40)
[LOW] 1-36: Ensure that the seccomp profile is set to docker/default or runtime/default (CKV_K8S_31)
[LOW] 1-36: Ensure that Service Account Tokens are only mounted where necessary (CKV_K8S_38)

20 changes: 20 additions & 0 deletions .k8s/base/mongo/kustomization.yaml
@@ -0,0 +1,20 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

labels:
- pairs:
    service: mongo
  includeSelectors: true
  includeTemplates: true
Comment on lines +4 to +8
⚠️ Potential issue

Fix YAML indentation errors.

The labels section has indentation issues that will prevent proper parsing.

Apply this diff to fix the indentation:

 labels:
-- pairs:
-    service: mongo
-  includeSelectors: true
-  includeTemplates: true
+- pairs:
+    service: mongo
+  includeSelectors: true
+  includeTemplates: true

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 5-5: wrong indentation: expected at least 1 (indentation)

🤖 Prompt for AI Agents
.k8s/base/mongo/kustomization.yaml lines 4-8: the labels block is mis-indented;
move pairs, includeSelectors and includeTemplates to be nested under labels with
consistent two-space indentation so it reads: labels: then pairs: (mapping
service: mongo) and includeSelectors: true and includeTemplates: true as keys at
the same indent level under labels.


resources:
- deployment.yaml
- pvc.yaml
- service.yaml
Comment on lines +10 to +13
⚠️ Potential issue

Fix resources indentation.

The resources section has incorrect indentation that will cause YAML parsing errors.

Apply this diff to fix the indentation:

 resources:
-- deployment.yaml
-- pvc.yaml
-- service.yaml
+  - deployment.yaml
+  - pvc.yaml
+  - service.yaml
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 11-11: wrong indentation: expected 2 but found 0 (indentation)

🤖 Prompt for AI Agents
In .k8s/base/mongo/kustomization.yaml around lines 10 to 13, the resources list
items are mis-indented and will break YAML parsing; indent each list item
exactly two spaces (or align them directly under the resources key) so the file
looks like a properly indented YAML sequence under the resources: key (i.e.,
move the "-" items to be children of resources with consistent indentation).


vars:
- name: LEARNINGLOCKER_MONGO_SERVICE
  objref:
    apiVersion: v1
    kind: Service
    name: mongo
Comment on lines +15 to +20
⚠️ Potential issue

Fix vars section indentation.

The vars section has incorrect indentation that will cause YAML parsing errors.

Apply this diff to fix the indentation:

 vars:
-- name: LEARNINGLOCKER_MONGO_SERVICE
-  objref:
-    apiVersion: v1
-    kind: Service
-    name: mongo
+  - name: LEARNINGLOCKER_MONGO_SERVICE
+    objref:
+      apiVersion: v1
+      kind: Service
+      name: mongo
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 16-16: wrong indentation: expected 2 but found 0 (indentation)

🤖 Prompt for AI Agents
In .k8s/base/mongo/kustomization.yaml around lines 15-20 the vars list item is
not properly indented under vars causing YAML parse errors; indent the dash and
its block two spaces under vars (i.e., make the list item a child of vars) and
keep objref and its apiVersion/kind/name fields indented consistently beneath
the list item.
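The `$(LEARNINGLOCKER_MONGO_SERVICE)` references in the deployments work because kustomize's legacy `vars` mechanism (deprecated in newer releases in favor of `replacements`) substitutes `$(NAME)` tokens across rendered manifests. A rough stdlib model — the resolved service name here assumes the `learninglocker-` namePrefix from the base kustomization:

```python
import re

# Rough model of kustomize's legacy `vars` substitution: every $(NAME)
# token in a rendered manifest is replaced by the value resolved from
# the referenced object; unknown tokens are left untouched.

def substitute_vars(manifest: str, values: dict) -> str:
    return re.sub(r"\$\(([A-Z0-9_]+)\)",
                  lambda m: values.get(m.group(1), m.group(0)), manifest)

env = "value: $(LEARNINGLOCKER_MONGO_SERVICE)"
print(substitute_vars(env, {"LEARNINGLOCKER_MONGO_SERVICE": "learninglocker-mongo"}))
# -> value: learninglocker-mongo
```

This is also why the var must point at the Service object: the substituted value tracks the prefixed name kustomize actually emits.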

10 changes: 10 additions & 0 deletions .k8s/base/mongo/pvc.yaml
@@ -0,0 +1,10 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo
spec:
  accessModes:
  - ReadWriteOnce
Comment on lines +6 to +7
⚠️ Potential issue

Fix YAML list indentation under accessModes.

Indent list items to satisfy YAMLlint.

Apply:

 spec:
   accessModes:
-  - ReadWriteOnce
+    - ReadWriteOnce
   resources:
     requests:
       storage: 100Mi
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2 (indentation)

🤖 Prompt for AI Agents
.k8s/base/mongo/pvc.yaml around lines 6 to 7: the YAML list under accessModes is
misindented for YAML linter; adjust indentation so the dash is indented two
spaces under the key (i.e., align list items as children of accessModes) so it
becomes a proper YAML sequence.

  resources:
    requests:
      storage: 100Mi
🧹 Nitpick (assertive)

Validate capacity for MongoDB data.

100Mi is likely too small for Mongo. Consider higher defaults or document overlay overrides.

🤖 Prompt for AI Agents
.k8s/base/mongo/pvc.yaml around line 10: the PVC capacity is set to 100Mi which
is far too small for MongoDB data; update the default storage to a realistic
minimum (e.g., 5Gi or higher depending on expected dataset) and add a clear
comment or values/config overlay so deployments can override it; ensure any
Helm/overlay values reference and validate this field and consider adding a note
in docs about recommended sizing and retention/backups.
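To put the `100Mi` default in perspective against a suggested `5Gi` minimum: Kubernetes binary suffixes multiply by powers of 1024. A small sketch covering only the binary suffixes used for storage here, not the full quantity grammar:

```python
# Convert Kubernetes binary-suffix quantities (Ki/Mi/Gi/Ti) to bytes.

SUFFIX = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def to_bytes(quantity: str) -> int:
    for s, mult in SUFFIX.items():
        if quantity.endswith(s):
            return int(quantity[:-2]) * mult
    return int(quantity)  # bare number = bytes

print(to_bytes("100Mi"))                     # -> 104857600
print(to_bytes("5Gi") // to_bytes("100Mi"))  # -> 51
```

So the suggested 5Gi floor is roughly fifty times the current default.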

9 changes: 9 additions & 0 deletions .k8s/base/mongo/service.yaml
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - name: mongo
    port: 27017
    targetPort: 27017
Comment on lines +1 to +9
🧹 Nitpick (assertive)

Avoid default namespace; set namespace at kustomization level.
Keeps resources scoped and avoids collisions.

Add to .k8s/base/kustomization.yaml (or an overlay):

namespace: learninglocker
🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used (CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2 (indentation)

🤖 Prompt for AI Agents
In .k8s/base/mongo/service.yaml around lines 1 to 9, the Service is left in the
default namespace; instead set the namespace at the kustomization level to scope
resources and avoid collisions. Edit .k8s/base/kustomization.yaml (or the
appropriate overlay) to add a "namespace: learninglocker" entry so all base
resources, including this mongo Service, are placed into the learninglocker
namespace rather than relying on default.

Comment on lines +5 to +9
💡 Verification agent

🧩 Analysis chain

Service missing selector => no endpoints; clients won’t reach Mongo.
Add a selector matching the Mongo Deployment’s Pod labels.

Apply:

 spec:
-  ports:
-  - name: mongo
-    port: 27017
-    targetPort: 27017
+  selector:
+    app.kubernetes.io/name: mongo
+  ports:
+    - name: mongo
+      port: 27017
+      targetPort: 27017

Run to confirm the selector key/value matches your Mongo Deployment Pod template labels:


🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail
echo "Mongo Service selector:"
rg -nC2 'kind:\s*Service|name:\s*mongo|selector:' .k8s -g '**/mongo/**/service.yaml'
echo
echo "Mongo Deployment labels (metadata + pod template):"
rg -nC3 '(kind:\s*Deployment|metadata:\s*$|^ {2}labels:|^ {4}[a-zA-Z0-9\.\-]+:|template:|podTemplate|spec:)' .k8s -g '**/mongo/**/deployment.yaml'

Length of output: 1475


🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail

echo "Checking for any labels in the Deployment manifest:"
rg -n 'labels:' .k8s/base/mongo/deployment.yaml || echo "No 'labels:' lines found in deployment.yaml"

echo
echo "Showing context around the pod template block:"
rg -nC5 'template:' .k8s/base/mongo/deployment.yaml

Length of output: 639


Add missing pod labels and service selector

  • .k8s/base/mongo/deployment.yaml lacks both metadata.labels and spec.template.metadata.labels; add, for example:
    metadata:
      labels:
        app.kubernetes.io/name: mongo
    spec:
      template:
        metadata:
          labels:
            app.kubernetes.io/name: mongo
  • .k8s/base/mongo/service.yaml needs a matching selector under spec:
    spec:
      selector:
        app.kubernetes.io/name: mongo
      ports:
        - name: mongo
          port: 27017
          targetPort: 27017
🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used

(CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
In .k8s/base/mongo/service.yaml around lines 5-9 and in
.k8s/base/mongo/deployment.yaml, the Service lacks a selector and the Deployment
is missing metadata.labels and spec.template.metadata.labels; add a stable label
(e.g. app.kubernetes.io/name: mongo) under deployment.metadata.labels and under
deployment.spec.template.metadata.labels, then add a matching selector under
service.spec.selector with the same label key and value so the Service targets
the Deployment pods.
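
The invariant behind this fix is simple: a Service only gets endpoints when every key/value pair in its selector also appears in the Pod's labels. A minimal sketch of that check, using plain dicts to stand in for the parsed manifests (label values mirror the reviewer's example):

```python
def service_selects_pod(selector: dict, pod_labels: dict) -> bool:
    """True when every key/value pair in the Service selector also
    appears in the Pod's labels (Kubernetes equality-based matching)."""
    return all(pod_labels.get(key) == value for key, value in selector.items())

# Labels proposed for the mongo Deployment's Pod template.
pod_labels = {"app.kubernetes.io/name": "mongo"}

print(service_selects_pod({"app.kubernetes.io/name": "mongo"}, pod_labels))  # True
print(service_selects_pod({"app.kubernetes.io/name": "redis"}, pod_labels))  # False
```

Extra labels on the Pod are fine; only the selector's pairs must match, which is why adding the same key/value in both places is sufficient here.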

Comment on lines +6 to +9

🧹 Nitpick (assertive)

Indentation under ports is off.
Yamllint expects list items indented under ports.

Apply (if not fixed by previous diff):

-  ports:
-  - name: mongo
-    port: 27017
-    targetPort: 27017
+  ports:
+    - name: mongo
+      port: 27017
+      targetPort: 27017
🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used

(CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
.k8s/base/mongo/service.yaml around lines 6 to 9: the list item under "ports" is
not indented to meet yamllint expectations; indent the "-" for the port entry
two spaces (so it's a child of ports) and align the nested keys (name, port,
targetPort) under that indentation to produce a properly nested YAML list.

31 changes: 31 additions & 0 deletions .k8s/base/redis/deployment.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,31 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  template:
Comment on lines +5 to +7

⚠️ Potential issue

Add missing selector for Deployment.

The Deployment is missing the required spec.selector field, so the API server will reject it on apply.

Apply this diff to add the selector:

 spec:
   replicas: 1
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: redis
   template:
+    metadata:
+      labels:
+        app.kubernetes.io/name: redis
     spec:
🤖 Prompt for AI Agents
.k8s/base/redis/deployment.yaml around lines 5 to 7: the Deployment spec is
missing the required selector field; add a spec.selector with matchLabels that
exactly matches the pod template's metadata.labels (e.g., if
template.metadata.labels contains app: redis, add spec.selector.matchLabels:
app: redis) so Kubernetes can validate and associate ReplicaSet pods with this
Deployment.
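
Putting the selector and template labels together, the top of the Deployment could look like this (the `app.kubernetes.io/name: redis` label is the suggested example; any stable key/value works as long as selector and template agree):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: redis   # must be a subset of the template labels below
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis
    spec:
      containers:
      - name: redis
        image: redis
```

Note that spec.selector is immutable once the Deployment exists, so it is worth settling on these labels before the first apply.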

    spec:
      containers:
      - name: redis
        image: redis

🧹 Nitpick (assertive)

Consider pinning Redis image to specific version.

Using redis without a version tag defaults to latest, which can lead to unpredictable deployments and potential compatibility issues.

Apply this diff to pin to a specific Redis version:

-        image: redis
+        image: redis:7.2-alpine
🤖 Prompt for AI Agents
.k8s/base/redis/deployment.yaml around line 11: the deployment uses the unpinned
image "redis" which resolves to latest and can cause unpredictable changes;
update the image field to a specific, tested Redis tag (for example a stable 7.x
or 6.x release used in your environment) by replacing "redis" with
"redis:<chosen-version>" and commit that concrete tag so deployments are
reproducible.

        args:
        - --loglevel
        - warning
        livenessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 2
          periodSeconds: 10
        ports:
        - containerPort: 6379
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: redis
      volumes:
        - name: redis
          persistentVolumeClaim:
            claimName: redis
Comment on lines +9 to +31

⚠️ Potential issue

Fix YAML indentation errors.

yamllint flags the list indentation throughout this file; the compact style is still valid YAML, but nesting list items one level under their parent keys keeps the manifests consistent and lint-clean.

Apply this diff to fix the indentation:

       containers:
-      - name: redis
-        image: redis
-        args:
-        - --loglevel
-        - warning
-        livenessProbe:
-          exec:
-            command:
-            - redis-cli
-            - ping
-          initialDelaySeconds: 2
-          periodSeconds: 10
-        ports:
-        - containerPort: 6379
-          protocol: TCP
-        volumeMounts:
-        - mountPath: /data
-          name: redis
+        - name: redis
+          image: redis
+          args:
+            - --loglevel
+            - warning
+          livenessProbe:
+            exec:
+              command:
+                - redis-cli
+                - ping
+            initialDelaySeconds: 2
+            periodSeconds: 10
+          ports:
+            - containerPort: 6379
+              protocol: TCP
+          volumeMounts:
+            - mountPath: /data
+              name: redis
       volumes:
-        - name: redis
-          persistentVolumeClaim:
-            claimName: redis
+        - name: redis
+          persistentVolumeClaim:
+            claimName: redis
🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 1-31: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[LOW] 1-31: CPU limits should be set

(CKV_K8S_11)


[LOW] 1-31: CPU requests should be set

(CKV_K8S_10)


[LOW] 1-31: Apply security context to your containers

(CKV_K8S_30)


[LOW] 1-31: The default namespace should not be used

(CKV_K8S_21)


[LOW] 1-31: Minimize the admission of containers with the NET_RAW capability

(CKV_K8S_28)


[LOW] 1-31: Image should use digest

(CKV_K8S_43)


[LOW] 1-31: Image Tag should be fixed - not latest or blank

(CKV_K8S_14)


[LOW] 1-31: Memory limits should be set

(CKV_K8S_13)


[LOW] 1-31: Memory requests should be set

(CKV_K8S_12)


[LOW] 1-31: Minimize the admission of containers with capabilities assigned

(CKV_K8S_37)


[LOW] 1-31: Apply security context to your pods and containers

(CKV_K8S_29)


[LOW] 1-31: Use read-only filesystem for containers where possible

(CKV_K8S_22)


[LOW] 1-31: Readiness Probe Should be Configured

(CKV_K8S_9)


[MEDIUM] 1-31: Minimize the admission of root containers

(CKV_K8S_23)


[LOW] 1-31: Containers should run as a high UID to avoid host conflict

(CKV_K8S_40)


[LOW] 1-31: Ensure that the seccomp profile is set to docker/default or runtime/default

(CKV_K8S_31)


[LOW] 1-31: Ensure that Service Account Tokens are only mounted where necessary

(CKV_K8S_38)

🪛 YAMLlint (1.37.1)

[error] 10-10: wrong indentation: expected 8 but found 6

(indentation)


[error] 13-13: wrong indentation: expected 10 but found 8

(indentation)


[error] 18-18: wrong indentation: expected 14 but found 12

(indentation)


[error] 23-23: wrong indentation: expected 10 but found 8

(indentation)


[error] 26-26: wrong indentation: expected 10 but found 8

(indentation)

🤖 Prompt for AI Agents
.k8s/base/redis/deployment.yaml lines 9-31: the YAML is misindented causing
fields to be misplaced; fix by ensuring the containers list and its container
fields (name, image, args, livenessProbe, ports, volumeMounts) are all
consistently indented as children of spec.template.spec, and make volumes a
sibling of containers (also under spec.template.spec); align dash items (args
and command entries) under their parent keys so livenessProbe and volumeMounts
are at the same indent level as name/image/ports for the container.
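
Beyond indentation, several of the Checkov findings above (missing readiness probe, missing resource requests/limits) could be addressed with container-level additions along these lines (the values are illustrative starting points, not tuned recommendations):

```yaml
        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 2
          periodSeconds: 5
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 250m
            memory: 256Mi
```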

20 changes: 20 additions & 0 deletions .k8s/base/redis/kustomization.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,20 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

labels:
- pairs:
    service: redis
  includeSelectors: true
  includeTemplates: true

Comment on lines +4 to +9

⚠️ Potential issue

Fix YAML indentation for labels transformer

yamllint flags the current indentation. Patch:

 labels:
-- pairs:
-    service: redis
-  includeSelectors: true
-  includeTemplates: true
+  - pairs:
+      service: redis
+    includeSelectors: true
+    includeTemplates: true
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 5-5: wrong indentation: expected at least 1

(indentation)

🤖 Prompt for AI Agents
.k8s/base/redis/kustomization.yaml lines 4-9: the labels transformer entry is
under-indented; keep the leading "-" (labels is a list of transformer entries)
but indent the list item one level under "labels:", so that "pairs:" and its
values, "includeSelectors:", and "includeTemplates:" are nested consistently
inside that entry, as shown in the patch.

resources:
- deployment.yaml
- pvc.yaml
- service.yaml

vars:
- name: LEARNINGLOCKER_REDIS_SERVICE
  objref:
    apiVersion: v1
    kind: Service
    name: redis
10 changes: 10 additions & 0 deletions .k8s/base/redis/pvc.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,10 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis
spec:
  accessModes:
  - ReadWriteOnce
Comment on lines +6 to +7

⚠️ Potential issue

Fix YAML list indentation under accessModes.

YAMLlint flags this. Indent the list item by 4 spaces under accessModes:.

Apply:

 spec:
   accessModes:
-  - ReadWriteOnce
+    - ReadWriteOnce
   resources:
     requests:
       storage: 100Mi
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
.k8s/base/redis/pvc.yaml around lines 6 to 7: the YAML list item under
accessModes is mis-indented which linter flags; indent the dash line so the list
item is aligned by two additional spaces (total 4 spaces) under accessModes: so
that accessModes: is followed by a properly indented list entry, preserving YAML
spacing and no other changes.

  resources:
    requests:
      storage: 100Mi
Comment on lines +1 to +10

🧹 Nitpick (assertive)

Confirm namespace/SC handling in overlays.

Base omits metadata.namespace and storageClassName. Ensure overlays set the namespace and, if needed, a cluster-appropriate storageClassName.

🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
.k8s/base/redis/pvc.yaml lines 1-10: the base PVC intentionally omits
metadata.namespace and storageClassName, but overlays must supply them; update
the kustomize overlays (or respective overlay yaml) to add metadata.namespace:
<target-namespace> and, if your cluster requires a specific storage class, add
spec.storageClassName: <cluster-storage-class> (or leave empty for default) and
verify the chosen storageClassName exists in the cluster before applying.
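
One way an overlay could supply both values is a patch, sketched below (the namespace, overlay path, and storage class name are placeholders for whatever the target cluster provides):

```yaml
# .k8s/overlays/production/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: learninglocker

resources:
  - ../../base/redis

patches:
  - target:
      kind: PersistentVolumeClaim
      name: redis
    patch: |-
      - op: add
        path: /spec/storageClassName
        value: standard   # replace with a storage class that exists in the cluster
```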

9 changes: 9 additions & 0 deletions .k8s/base/redis/service.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
Comment on lines +5 to +9

💡 Verification agent

🧩 Analysis chain

Add a selector to bind the Service to the Redis Pods.

Without a selector, this Service won’t target any Pods (unless you manage Endpoints separately). Add a selector matching the Deployment labels.

Apply (adjust label keys/values to your Deployment):

 spec:
   ports:
     - name: redis
       port: 6379
       targetPort: 6379
+  selector:
+    app.kubernetes.io/name: redis

🏁 Script executed:

#!/bin/bash
# Locate the Redis Deployment YAML under .k8s/base/redis
deployment_file=$(rg -l 'kind:\s*Deployment' -g '*.yaml' .k8s/base/redis)
echo "Deployment file: $deployment_file"
# Print the metadata.labels section from the Deployment for selector alignment
rg -n 'labels:' -C5 "$deployment_file"

Length of output: 266


🏁 Script executed:

#!/bin/bash
# Print the first 200 lines of the Redis Deployment to inspect metadata.labels
sed -n '1,200p' .k8s/base/redis/deployment.yaml

Length of output: 665


Bind Service to Redis Pods by adding labels and selector

  • Add metadata.labels to your Redis Deployment’s Pod template (.k8s/base/redis/deployment.yaml):
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis
  • Add a selector to the Service (.k8s/base/redis/service.yaml) to match those labels:
 spec:
   ports:
     - name: redis
       port: 6379
       targetPort: 6379
+  selector:
+    app.kubernetes.io/name: redis
🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used

(CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
.k8s/base/redis/service.yaml around lines 5 to 9: the Service has ports defined
but no selector, so it won't target Redis Pods; update the Service to include a
selector that matches the Pod labels (e.g., app.kubernetes.io/name: redis). Also
ensure .k8s/base/redis/deployment.yaml pod template metadata
(spec.template.metadata.labels) contains the same label key/value so the Service
can bind to the Pods.

Comment on lines +6 to +9

⚠️ Potential issue

Fix YAML list indentation under ports.

Indent the list item under ports: by 4 spaces.

Apply:

 spec:
   ports:
-  - name: redis
-    port: 6379
-    targetPort: 6379
+    - name: redis
+      port: 6379
+      targetPort: 6379
🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used

(CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
.k8s/base/redis/service.yaml around lines 6 to 9: the YAML list item under
"ports:" is mis-indented; indent the "-" list entry (and its nested keys "name",
"port", "targetPort") by 4 spaces so they are children of "ports:" (ensure
consistent 2-space or 4-space indentation across the file), then validate the
YAML to confirm proper structure.

32 changes: 32 additions & 0 deletions .k8s/base/ui/deployment.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,32 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: ui
        image: learninglocker
        args:
        - ui
        envFrom:
        - configMapRef:
            name: learninglocker-config
        - secretRef:
            name: learninglocker-secret
        env:
        - name: REDIS_HOST
          value: $(LEARNINGLOCKER_REDIS_SERVICE)
        - name: MONGO_HOST
          value: $(LEARNINGLOCKER_MONGO_SERVICE)
        ports:
        - containerPort: 3000
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 30
Comment on lines +27 to +32

🛠️ Refactor suggestion

Add readiness probe, security hardening, and resources.

Align with API pod baseline.

         livenessProbe:
           httpGet:
             path: /
             port: 3000
           initialDelaySeconds: 15
           periodSeconds: 30
+        readinessProbe:
+          httpGet:
+            path: /
+            port: 3000
+          initialDelaySeconds: 5
+          periodSeconds: 10
+          failureThreshold: 3
+        securityContext:
+          runAsNonRoot: true
+          allowPrivilegeEscalation: false
+          readOnlyRootFilesystem: true
+          seccompProfile:
+            type: RuntimeDefault
+          capabilities:
+            drop: [ "ALL" ]
+        resources:
+          requests:
+            cpu: "50m"
+            memory: "128Mi"
+          limits:
+            cpu: "250m"
+            memory: "256Mi"
+        automountServiceAccountToken: false

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Checkov (3.2.334)

[MEDIUM] 1-32: Containers should not run with allowPrivilegeEscalation

(CKV_K8S_20)


[LOW] 1-32: CPU limits should be set

(CKV_K8S_11)


[LOW] 1-32: CPU requests should be set

(CKV_K8S_10)


[LOW] 1-32: Apply security context to your containers

(CKV_K8S_30)


[LOW] 1-32: The default namespace should not be used

(CKV_K8S_21)


[LOW] 1-32: Minimize the admission of containers with the NET_RAW capability

(CKV_K8S_28)


[LOW] 1-32: Image should use digest

(CKV_K8S_43)


[LOW] 1-32: Image Tag should be fixed - not latest or blank

(CKV_K8S_14)


[LOW] 1-32: Memory limits should be set

(CKV_K8S_13)


[LOW] 1-32: Memory requests should be set

(CKV_K8S_12)


[LOW] 1-32: Minimize the admission of containers with capabilities assigned

(CKV_K8S_37)


[LOW] 1-32: Apply security context to your pods and containers

(CKV_K8S_29)


[LOW] 1-32: Use read-only filesystem for containers where possible

(CKV_K8S_22)


[LOW] 1-32: Readiness Probe Should be Configured

(CKV_K8S_9)


[MEDIUM] 1-32: Minimize the admission of root containers

(CKV_K8S_23)


[LOW] 1-32: Containers should run as a high UID to avoid host conflict

(CKV_K8S_40)


[LOW] 1-32: Ensure that the seccomp profile is set to docker/default or runtime/default

(CKV_K8S_31)


[LOW] 1-32: Prefer using secrets as files over secrets as environment variables

(CKV_K8S_35)


[LOW] 1-32: Ensure that Service Account Tokens are only mounted where necessary

(CKV_K8S_38)

🪛 YAMLlint (1.37.1)

[error] 32-32: no new line character at the end of file

(new-line-at-end-of-file)

🤖 Prompt for AI Agents
In .k8s/base/ui/deployment.yaml around lines 27 to 32, add a readinessProbe and
pod/container security hardening plus resource requests/limits to align with the
API pod baseline: add a readinessProbe (httpGet path: /, port: 3000,
initialDelaySeconds ~5-10, periodSeconds ~10), and under the container add
securityContext settings (allowPrivilegeEscalation: false,
readOnlyRootFilesystem: true, runAsNonRoot: true, runAsUser: 1000, capabilities:
drop all). Also add a pod-level securityContext if required (runAsNonRoot: true,
fsGroup: 1000). Finally add resources block on the container with sensible
requests/limits (e.g., cpu and memory request and limit values matching API
baseline) so the UI pod has resource guarantees and is hardened.

12 changes: 12 additions & 0 deletions .k8s/base/ui/kustomization.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

labels:
- pairs:
    app.kubernetes.io/component: ui
  includeSelectors: true
  includeTemplates: true
Comment on lines +4 to +8

⚠️ Potential issue

Fix YAML indentation errors.

The labels block's indentation is flagged by yamllint; it still parses as YAML, but it should be nested consistently under labels:.

Apply this diff to fix the indentation:

 labels:
-- pairs:
-    app.kubernetes.io/component: ui
-  includeSelectors: true
-  includeTemplates: true
+  - pairs:
+      app.kubernetes.io/component: ui
+    includeSelectors: true
+    includeTemplates: true

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 5-5: wrong indentation: expected at least 1

(indentation)

🤖 Prompt for AI Agents
In .k8s/base/ui/kustomization.yaml around lines 4 to 8, the labels block is
mis-indented (the "pairs" mapping is not nested correctly under "labels"); fix
by indenting "pairs:" and its child key-values two spaces under "labels:", and
ensure the sibling keys "includeSelectors:" and "includeTemplates:" are aligned
at the same indentation level as "pairs" so the YAML parses properly.


resources:
- deployment.yaml
- service.yaml
Comment on lines +10 to +12

⚠️ Potential issue

Fix resources indentation.

The resources list items sit at the same column as the key; this parses, but yamllint's indentation rule expects them nested one level deeper.

Apply this diff to fix the indentation:

 resources:
-- deployment.yaml
-- service.yaml
+  - deployment.yaml
+  - service.yaml
🧰 Tools
🪛 YAMLlint (1.37.1)

[error] 11-11: wrong indentation: expected 2 but found 0

(indentation)

🤖 Prompt for AI Agents
In .k8s/base/ui/kustomization.yaml around lines 10 to 12, the hyphenated
resource entries are not indented under the resources key; fix by indenting the
list items one level (two spaces) so they are children of resources (i.e.,
resources: followed by the two-space-indented - deployment.yaml and -
service.yaml).

9 changes: 9 additions & 0 deletions .k8s/base/ui/service.yaml
Original file line number Diff line number Diff line change
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Service
metadata:
  name: ui
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
Comment on lines +1 to +9

🧹 Nitpick (assertive)

Avoid default namespace; set via kustomization.
Same recommendation as for mongo Service.

Add in the appropriate kustomization:

namespace: learninglocker
🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used

(CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
In .k8s/base/ui/service.yaml lines 1-9, the Service is currently in the default
namespace; instead of modifying the Service, add the namespace declaration to
the base kustomization. Open .k8s/base/kustomization.yaml and add the line
"namespace: learninglocker" (ensuring proper indentation), so the ui Service
(and other base resources) are placed into the learninglocker namespace via
kustomize.

Comment on lines +5 to +9

💡 Verification agent

🧩 Analysis chain

Service missing selector => no endpoints for UI.
Add label selector to match the UI Deployment’s Pod template labels.

Apply:

 spec:
-  ports:
-  - name: http
-    port: 80
-    targetPort: 3000
+  selector:
+    app.kubernetes.io/name: ui
+  ports:
+    - name: http
+      port: 80
+      targetPort: 3000

Quick check that selectors match the Deployment labels:


🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail
echo "UI Service selector:"
rg -nC2 'kind:\s*Service|name:\s*ui|selector:' .k8s -g '**/ui/**/service.yaml'
echo
echo "UI Deployment labels (metadata + pod template):"
rg -nC3 '(kind:\s*Deployment|metadata:\s*$|^ {2}labels:|^ {4}[a-zA-Z0-9\.\-]+:|template:|spec:)' .k8s -g '**/ui/**/deployment.yaml'

Length of output: 1139


🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail
echo "---- UI Deployment YAML (full) ----"
cat .k8s/base/ui/deployment.yaml

Length of output: 850


Add pod template label and matching Service selector

  • In .k8s/base/ui/deployment.yaml, under spec.template insert:
    metadata:
      labels:
        app.kubernetes.io/name: ui
  • In .k8s/base/ui/service.yaml, under spec insert:
    selector:
      app.kubernetes.io/name: ui

This ensures the Service routes to the UI pods.

🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used

(CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
In .k8s/base/ui/service.yaml around lines 5 to 9 and corresponding
.k8s/base/ui/deployment.yaml spec.template, the Service lacks a selector to
target the UI pods and the Deployment template is missing the label the selector
should match; add under spec.selector in service.yaml a selector mapping
app.kubernetes.io/name: ui, and add under spec.template.metadata.labels in
deployment.yaml the label app.kubernetes.io/name: ui so the Service routes
traffic to the UI pods.

Comment on lines +6 to +9

🧹 Nitpick (assertive)

Indentation under ports is off.
Fix yamllint failure.

Apply:

-  ports:
-  - name: http
-    port: 80
-    targetPort: 3000
+  ports:
+    - name: http
+      port: 80
+      targetPort: 3000
🧰 Tools
🪛 Checkov (3.2.334)

[LOW] 1-9: The default namespace should not be used

(CKV_K8S_21)

🪛 YAMLlint (1.37.1)

[error] 7-7: wrong indentation: expected 4 but found 2

(indentation)

🤖 Prompt for AI Agents
In .k8s/base/ui/service.yaml around lines 6 to 9, the list under the ports key
is mis-indented; move the list items two spaces in so the dash for each port
item is indented one level (two spaces) under the ports key and all nested keys
(name, port, targetPort) are aligned beneath that dash at one additional
indentation level, so the YAML list is properly nested and passes yamllint.
