
Add script to import weekly-managed cluster for team dev env #6186

Open
zlayne wants to merge 2 commits into stolostron:main from zlayne:import-weekly-managed-script

Conversation

@zlayne
Contributor

@zlayne zlayne commented May 11, 2026

📝 Summary

Ticket Summary (Title):
Add a script that imports the weekly-managed cluster into our team's weekly hub cluster.
This script will be utilized by a CronJob in the collective environment - the job will run 1 hour after the weekly cluster standup job.

Ticket Link:
https://issues.redhat.com/browse/ACM-33618

Type of Change:

  • [ ] 🐞 Bug Fix
  • [ ] ✨ Feature
  • [ ] 🔧 Refactor
  • [x] 💸 Tech Debt
  • [ ] 🧪 Test-related
  • [ ] 📄 Docs

✅ Checklist

General

  • PR title follows the convention (e.g. ACM-12340 Fix bug with...)
  • Code builds and runs locally without errors
  • No console logs, commented-out code, or unnecessary files
  • All commits are meaningful and well-labeled
  • All new display strings are externalized for localization (English only)
  • (Nice to have) JSDoc comments added for new functions and interfaces

If Feature

  • UI/UX reviewed (if applicable)
  • All acceptance criteria met
  • Unit test coverage added or updated
  • Relevant documentation or comments included

If Bugfix

  • Root cause and fix summary are documented in the ticket (for future reference / errata)
  • Fix tested thoroughly and resolves the issue
  • Test(s) added to prevent regression

🗒️ Notes for Reviewers

Summary by CodeRabbit

  • New Features
    • Added end-to-end automation to import managed clusters into RHACM hubs, driven by environment configuration. Includes namespace discovery, secret-based kubeconfig handling, optional CRD import, pull-secret and mirror handling, idempotent cleanup of prior agents, status polling with retries, strict error handling, and detailed progress logging to simplify and stabilize cluster onboarding.

Signed-off-by: zlayne <zlayne@redhat.com>
@openshift-ci

openshift-ci Bot commented May 11, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: zlayne

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai

coderabbitai Bot commented May 11, 2026

📝 Walkthrough

Walkthrough

A new end-to-end Bash script, scripts/import-weekly-managed.sh, automates importing a managed cluster into an RHACM hub by discovering cluster namespaces via ClusterClaim, extracting admin kubeconfigs, creating ManagedCluster and KlusterletAddonConfig, applying import manifests/CRDs, and polling until the managed cluster is joined and available.
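The bootstrap pattern the walkthrough describes (strict flags, environment-driven defaults, a temporary workdir with cleanup wiring) looks roughly like the following sketch. The variable names come from the walkthrough; the specific defaults and the `trap` are illustrative assumptions, not the script's exact code:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Environment-driven defaults: callers (e.g. a CronJob) can override via env.
HUB_CLUSTER_NAME="${HUB_CLUSTER_NAME:-weekly}"
MANAGED_CLUSTER_NAME="${MANAGED_CLUSTER_NAME:-weekly-managed}"
WORKDIR="${WORKDIR:-/tmp/rhacm-import}"

# Restrictive permissions for anything written below (kubeconfigs are secrets).
umask 077
mkdir -p "${WORKDIR}"

# Exit-cleanup wiring: remove extracted credentials even on failure.
trap 'rm -rf "${WORKDIR}"' EXIT

echo "hub=${HUB_CLUSTER_NAME} managed=${MANAGED_CLUSTER_NAME} workdir=${WORKDIR}"
```

With `set -euo pipefail`, any failing command, unset variable, or failing pipeline stage aborts the run, which is what makes the later review comments about `grep` exit codes relevant.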

Changes

Managed Cluster Import Automation

All changes are in `scripts/import-weekly-managed.sh`:

| Layer / File(s) | Summary |
| --- | --- |
| Configuration & bootstrap (top of script) | Script header, strict shell flags (`set -euo pipefail`), environment-driven defaults (`WORKDIR`, `HUB_CLUSTER_NAME`, `MANAGED_CLUSTER_NAME`, `CONSOLE_NAMESPACE`, `CLUSTERPOOL_TARGET_NAMESPACE`, etc.), temporary workdir setup, and exit-cleanup wiring. |
| Logging, error handling & utilities (helpers) | Adds `log()`, `fail()`, `cleanup()` and helper functions `get_cluster_namespace()`, `find_admin_kubeconfig_secret()`, `extract_kubeconfig()`, `copy_pull_secret()`, `copy_image_digest_mirror_set()`, and `cleanup_old_klusterlet()` for core script operations. |
| ClusterClaim creation & namespace discovery (ClusterClaim flow) | Ensures a ClusterClaim exists in the configured clusterpool target namespace for the managed cluster, creates it if missing, and polls `clusterclaim.hive.spec.namespace` until the managed cluster namespace is provisioned. |
| Kubeconfig secret lookup & extraction (secret handling) | Discovers `*-admin-kubeconfig` secrets in the hub and managed namespaces, decodes `data.kubeconfig` into files (`HUB_KUBECONFIG`, `MANAGED_KUBECONFIG`), and validates `oc` connectivity to both clusters. |
| Managed-side prep (managed cluster setup) | On the managed cluster: copies the pull-secret, optionally applies an ImageDigestMirrorSet manifest if present, and removes existing klusterlet resources/namespaces to ensure a clean import state. |
| Hub-side resources & import orchestration (hub creation + polling) | On the hub: ensures the managed cluster namespace exists, checks existing ManagedCluster join/availability conditions (short-circuits if already joined), creates ManagedCluster and KlusterletAddonConfig when needed, polls for the `${MANAGED_CLUSTER_NAME}-import` secret, and extracts `import.yaml` and optional `crds.yaml`. |
| Applying import manifests & final verification (apply & wait) | Applies decoded CRDs to the managed cluster (if present), applies `import.yaml` on the managed cluster, and polls the hub until `ManagedClusterJoined=True` and `ManagedClusterConditionAvailable=True`, failing if the conditions are not met within the retry budget. |
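The three polling steps above (ClusterClaim namespace, import secret, join conditions) share the same bounded-retry shape. A self-contained sketch, with a stub function standing in for the real `oc get clusterclaim ... -o jsonpath='{.spec.namespace}'` query and illustrative names throughout:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub for the oc query: returns an empty string twice, then a namespace,
# to exercise the retry path without a live cluster.
COUNT_FILE="$(mktemp)"
echo 0 > "${COUNT_FILE}"
get_claim_namespace() {
  local n
  n="$(cat "${COUNT_FILE}")"
  echo $((n + 1)) > "${COUNT_FILE}"
  if [ "${n}" -ge 2 ]; then
    echo "weekly-managed-ns"   # illustrative namespace value
  fi
}

CLUSTER_NS=""
for attempt in {1..10}; do
  CLUSTER_NS="$(get_claim_namespace)"
  if [ -n "${CLUSTER_NS}" ]; then
    echo "namespace ready after attempt ${attempt}: ${CLUSTER_NS}"
    break
  fi
  sleep 0   # the real script would sleep between attempts
done
[ -n "${CLUSTER_NS}" ] || { echo "timed out waiting for namespace" >&2; exit 1; }
```

Failing explicitly after the loop (rather than falling through) is what lets a CronJob surface a timeout as a non-zero exit instead of a silent no-op.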
```mermaid
sequenceDiagram
    participant User
    participant Script as import-weekly-managed.sh
    participant Hub as Hub Cluster (oc)
    participant Managed as Managed Cluster (oc)
    participant K8sSecrets as Kubernetes Secrets

    User->>Script: run with env vars
    Script->>Hub: ensure ClusterClaim in clusterpool target namespace
    Hub-->>Script: ClusterClaim namespace set (polled)
    Script->>K8sSecrets: find hub admin-kubeconfig secret
    K8sSecrets-->>Script: hub secret -> extract HUB_KUBECONFIG
    Script->>K8sSecrets: find managed admin-kubeconfig secret
    K8sSecrets-->>Script: managed secret -> extract MANAGED_KUBECONFIG
    Script->>Hub: verify hub auth using HUB_KUBECONFIG
    Script->>Managed: verify managed auth using MANAGED_KUBECONFIG
    Script->>Managed: copy pull-secret, apply ImageDigestMirrorSet, cleanup klusterlet
    Script->>Hub: ensure managed namespace, create ManagedCluster & KlusterletAddonConfig
    Script->>Hub: poll for ${MANAGED_CLUSTER_NAME}-import secret
    Hub-->>Script: import secret appears
    Script->>Managed: apply CRDs (if any) and import.yaml
    Script->>Hub: poll ManagedCluster conditions
    Hub-->>Script: joined & available
    Script-->>User: import complete
```
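The kubeconfig-extraction hop in the diagram (secret, decode, file on disk) reduces to a base64 decode plus an emptiness check. A self-contained sketch, where a stub stands in for `oc get secret ... -o jsonpath='{.data.kubeconfig}'` (the stub and its content are assumptions for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail

WORKDIR="$(mktemp -d)"
HUB_KUBECONFIG="${WORKDIR}/hub-kubeconfig"

# Stub: what the oc jsonpath query would print. Kubernetes stores secret
# values base64-encoded, so the caller must decode them.
fake_secret_kubeconfig_b64() {
  printf 'apiVersion: v1\nkind: Config\n' | base64
}

# Decode into a file and fail loudly if the result is empty; an empty
# kubeconfig would otherwise only surface later as a confusing auth error.
fake_secret_kubeconfig_b64 | base64 -d > "${HUB_KUBECONFIG}"
[ -s "${HUB_KUBECONFIG}" ] || { echo "empty kubeconfig" >&2; exit 1; }
echo "wrote $(wc -c < "${HUB_KUBECONFIG}") bytes to ${HUB_KUBECONFIG}"
```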

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 22.22%, which is below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately describes the main change: adding a script to import a weekly-managed cluster for the team dev environment, matching the script file added. |
| Description check | ✅ Passed | The description includes the ticket summary, link (ACM-33618), type selection (Tech Debt), and the script's purpose and CronJob context; however, the checklist items remain unchecked and notes for reviewers are empty. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
scripts/import-weekly-managed.sh (2)

189-195: 💤 Low value

Unused loop variable i; consider logging progress.

The variable i is unused (Shellcheck SC2034). Either replace with _ or use it for progress logging, which would be helpful when debugging CronJob runs.

Option: Add progress logging
```diff
 for i in {1..60}; do
   if oc get secret "${IMPORT_SECRET}" \
     -n "${MANAGED_CLUSTER_NAME}" >/dev/null 2>&1; then
+    log "Import secret found"
     break
   fi
+  log "Waiting for import secret... attempt ${i}/60"
   sleep 10
 done
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/import-weekly-managed.sh` around lines 189 - 195, The for-loop uses
an unused loop variable `i` (for i in {1..60}); to fix, either replace `i` with
`_` to silence Shellcheck (SC2034) or add minimal progress logging inside the
loop (for example using `echo "Waiting for secret ${IMPORT_SECRET} in
${MANAGED_CLUSTER_NAME}: attempt $i/60"` before the sleep) so the variable is
used and CronJob logs show progress; update the loop header (`for i in {1..60}`)
and add or remove the logging accordingly.

227-227: TODO: Cleanup of persisting klusterlet addons.

This TODO suggests cleanup logic may be needed. If the managed cluster was previously imported, stale klusterlet resources could interfere with re-import.

Would you like me to open an issue to track implementing this cleanup logic?

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/import-weekly-managed.sh` at line 227, The TODO notes missing cleanup
for persisting klusterlet addons which can block re-imports; implement a cleanup
step in scripts/import-weekly-managed.sh that detects a previously-imported
managed cluster and deletes stale klusterlet-related resources (e.g.,
Klusterlet, KlusterletAddon, klusterlet namespace and associated
CRs/secrets/configmaps) before re-importing; add idempotent checks and error
handling so deletes are safe to run repeatedly, and log actions so failures are
visible.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@scripts/import-weekly-managed.sh`:
- Around line 1-3: Move the shebang to the very first line of the script so the
kernel recognizes it as a bash executable: in import-weekly-managed.sh place
"#!/usr/bin/env bash" before any comments (remove or relocate the copyright
header below that line), leaving "set -euo pipefail" and the rest of the script
intact; ensure no blank lines appear before the shebang.
- Around line 51-58: The find_admin_kubeconfig_secret function can cause the
whole script to exit when grep finds no match because of set -eo pipefail;
modify the pipeline inside find_admin_kubeconfig_secret (the oc ... | grep
'\-admin-kubeconfig$' | head -n 1 sequence) to append "|| true" after grep (or
at end of the pipeline) so that a non-matching grep returns an empty result
instead of a failing exit code, allowing the caller (which checks for an empty
result) to handle the no-match case gracefully.
- Around line 211-219: The current oc jsonpath check can succeed but return an
empty string, producing an empty "${CRDS_YAML}" that later gets applied; change
the flow to capture the jsonpath output and verify it's non-empty before
writing/applying: run the oc get secret ... -o jsonpath='{.data.crds\.yaml}' and
direct its (base64-decoded) output into a temp file or capture to a variable,
test that the resulting file/variable is non-empty (e.g. test -s "${CRDS_YAML}"
or check -n on the variable) and only move/save it to "${CRDS_YAML}" and proceed
if non-empty; reference the existing symbols IMPORT_SECRET,
MANAGED_CLUSTER_NAME, CRDS_YAML and the oc get secret / base64 -d pipeline when
implementing this guard.

---

Nitpick comments:
In `@scripts/import-weekly-managed.sh`:
- Around line 189-195: The for-loop uses an unused loop variable `i` (for i in
{1..60}); to fix, either replace `i` with `_` to silence Shellcheck (SC2034) or
add minimal progress logging inside the loop (for example using `echo "Waiting
for secret ${IMPORT_SECRET} in ${MANAGED_CLUSTER_NAME}: attempt $i/60"` before
the sleep) so the variable is used and CronJob logs show progress; update the
loop header (`for i in {1..60}`) and add or remove the logging accordingly.
- Line 227: The TODO notes missing cleanup for persisting klusterlet addons
which can block re-imports; implement a cleanup step in
scripts/import-weekly-managed.sh that detects a previously-imported managed
cluster and deletes stale klusterlet-related resources (e.g., Klusterlet,
KlusterletAddon, klusterlet namespace and associated CRs/secrets/configmaps)
before re-importing; add idempotent checks and error handling so deletes are
safe to run repeatedly, and log actions so failures are visible.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 50f5b59c-5b68-42f3-af2b-0925a1d14bf3

📥 Commits

Reviewing files that changed from the base of the PR and between 0bec288 and 2980091.

📒 Files selected for processing (1)
  • scripts/import-weekly-managed.sh

Comment thread scripts/import-weekly-managed.sh Outdated
Comment on lines +51 to +58
```bash
find_admin_kubeconfig_secret() {
  local NAMESPACE="$1"
  oc get secrets -n "${NAMESPACE}" \
    --no-headers \
    -o custom-columns="NAME:.metadata.name" | \
    grep '\-admin-kubeconfig$' | \
    head -n 1
}
```


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

grep returning no match will exit the script due to pipefail.

With set -eo pipefail, if grep finds no matching secret, it returns exit code 1, terminating the script before the caller can check for an empty result (lines 97-101). Append || true to allow empty results.

Proposed fix
```diff
 find_admin_kubeconfig_secret() {
   local NAMESPACE="$1"
   oc get secrets -n "${NAMESPACE}" \
     --no-headers \
     -o custom-columns="NAME:.metadata.name" | \
     grep '\-admin-kubeconfig$' | \
-    head -n 1
+    head -n 1 || true
 }
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```bash
# before
find_admin_kubeconfig_secret() {
  local NAMESPACE="$1"
  oc get secrets -n "${NAMESPACE}" \
    --no-headers \
    -o custom-columns="NAME:.metadata.name" | \
    grep '\-admin-kubeconfig$' | \
    head -n 1
}

# after
find_admin_kubeconfig_secret() {
  local NAMESPACE="$1"
  oc get secrets -n "${NAMESPACE}" \
    --no-headers \
    -o custom-columns="NAME:.metadata.name" | \
    grep '\-admin-kubeconfig$' | \
    head -n 1 || true
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/import-weekly-managed.sh` around lines 51 - 58, The
find_admin_kubeconfig_secret function can cause the whole script to exit when
grep finds no match because of set -eo pipefail; modify the pipeline inside
find_admin_kubeconfig_secret (the oc ... | grep '\-admin-kubeconfig$' | head -n
1 sequence) to append "|| true" after grep (or at end of the pipeline) so that a
non-matching grep returns an empty result instead of a failing exit code,
allowing the caller (which checks for an empty result) to handle the no-match
case gracefully.
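For context, the `pipefail` behavior this comment describes is easy to reproduce in isolation, with no `oc` involved:

```shell
#!/usr/bin/env bash
set -uo pipefail   # note: no -e here, so we can inspect exit statuses

# Under pipefail, a pipeline's status is that of the last failing command.
# grep matches nothing -> exit 1 -> the whole pipeline reports 1.
printf 'a\nb\n' | grep 'z$' | head -n 1
status_without_fallback=$?

# Appending `|| true` absorbs the failure, so callers can treat an empty
# result as "not found" instead of the script dying under `set -e`.
printf 'a\nb\n' | grep 'z$' | head -n 1 || true
status_with_fallback=$?

echo "plain pipeline: ${status_without_fallback}; with || true: ${status_with_fallback}"
```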

Comment thread scripts/import-weekly-managed.sh Outdated
Generated-by: ChatGPT
Signed-off-by: zlayne <zlayne@redhat.com>

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
scripts/import-weekly-managed.sh (1)

74-83: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

grep returning no match exits script due to pipefail.

This was flagged previously but remains unfixed. With set -eo pipefail, if no secret matches the pattern, grep returns exit code 1 and terminates the script before the caller can check for an empty result at lines 224-228.

Proposed fix
```diff
 find_admin_kubeconfig_secret() {
   local NAMESPACE="$1"

   oc get secrets \
     -n "${NAMESPACE}" \
     --no-headers \
     -o custom-columns="NAME:.metadata.name" | \
     grep '\-admin-kubeconfig$' | \
-    head -n 1
+    head -n 1 || true
 }
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/import-weekly-managed.sh` around lines 74 - 83, The function
find_admin_kubeconfig_secret can cause the whole script to exit under set -eo
pipefail because grep returns exit code 1 when there is no match; change the
pipeline so a non-match doesn't produce a failing exit status — e.g. replace the
grep ... | head sequence with a single awk filter that prints the first line
matching the pattern and exits (use the same '\-admin-kubeconfig$' pattern), or
wrap grep in a command group that falls back to true (e.g. { grep '...'; } ||
true | head -n1) so find_admin_kubeconfig_secret returns an empty string instead
of causing the script to terminate.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@scripts/import-weekly-managed.sh`:
- Around line 29-39: The script writes kubeconfigs to /tmp via WORKDIR and sets
HUB_KUBECONFIG and MANAGED_KUBECONFIG which can expose creds; change the
creation of WORKDIR to use a secure directory with restrictive permissions and
ensure cleanup on exit: create WORKDIR with a secure mode (e.g. mkdir + set
umask or chmod 700 on WORKDIR after mkdir), write kubeconfigs only into that
directory (HUB_KUBECONFIG and MANAGED_KUBECONFIG remain), and add a trap/cleanup
handler to securely delete or shred those files on script exit/failure so
credentials are not left on disk.
- Around line 400-404: The oc get secret pipeline that reads
'{.data.import\.yaml}' into IMPORT_YAML may produce an empty value; change the
sequence so you capture the base64-decoded output into a shell variable first
(using HUB_KUBECONFIG, IMPORT_SECRET, MANAGED_CLUSTER_NAME and the jsonpath
used), validate that the variable is non-empty and a plausible YAML before
writing to the file named by IMPORT_YAML, and if empty or invalid emit a clear
error (and exit non-zero) instead of creating an empty file that later oc apply
will attempt to use; add a log message indicating the secret key was missing or
empty so callers can diagnose the failure.

---

Duplicate comments:
In `@scripts/import-weekly-managed.sh`:
- Around line 74-83: The function find_admin_kubeconfig_secret can cause the
whole script to exit under set -eo pipefail because grep returns exit code 1
when there is no match; change the pipeline so a non-match doesn't produce a
failing exit status — e.g. replace the grep ... | head sequence with a single
awk filter that prints the first line matching the pattern and exits (use the
same '\-admin-kubeconfig$' pattern), or wrap grep in a command group that falls
back to true (e.g. { grep '...'; } || true | head -n1) so
find_admin_kubeconfig_secret returns an empty string instead of causing the
script to terminate.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: c60f9300-afe3-4be2-9d6b-166649f66481

📥 Commits

Reviewing files that changed from the base of the PR and between 2980091 and 6a827ca.

📒 Files selected for processing (1)
  • scripts/import-weekly-managed.sh

Comment on lines +29 to +39
```bash
WORKDIR="/tmp/rhacm-import"
HUB_CLUSTER_NAME="${HUB_CLUSTER_NAME:-weekly}"
MANAGED_CLUSTER_NAME="${MANAGED_CLUSTER_NAME:-weekly-managed}"
CONSOLE_NAMESPACE="${CONSOLE_NAMESPACE:-console-squad}"
CLUSTERPOOL_NAME="${CLUSTERPOOL_NAME:-cs-aws-422}"
CLUSTERPOOL_TARGET_NAMESPACE="${CLUSTERPOOL_TARGET_NAMESPACE:-$CONSOLE_NAMESPACE}"
CLUSTERCLAIM_NAME="${CLUSTERCLAIM_NAME:-$MANAGED_CLUSTER_NAME}"
CLUSTERCLAIM_LIFETIME="${CLUSTERCLAIM_LIFETIME:-164h}"
HUB_KUBECONFIG="${WORKDIR}/hub-kubeconfig"
MANAGED_KUBECONFIG="${WORKDIR}/managed-kubeconfig"
mkdir -p "${WORKDIR}"
```


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Kubeconfig files in /tmp may expose credentials.

Admin kubeconfigs written to /tmp/rhacm-import/ could be readable by other users depending on system umask. If the script terminates abnormally before cleanup, credentials persist.

Consider restricting permissions:

Proposed fix
```diff
 WORKDIR="/tmp/rhacm-import"
+umask 077
 HUB_CLUSTER_NAME="${HUB_CLUSTER_NAME:-weekly}"
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```bash
# before
WORKDIR="/tmp/rhacm-import"
HUB_CLUSTER_NAME="${HUB_CLUSTER_NAME:-weekly}"
MANAGED_CLUSTER_NAME="${MANAGED_CLUSTER_NAME:-weekly-managed}"
CONSOLE_NAMESPACE="${CONSOLE_NAMESPACE:-console-squad}"
CLUSTERPOOL_NAME="${CLUSTERPOOL_NAME:-cs-aws-422}"
CLUSTERPOOL_TARGET_NAMESPACE="${CLUSTERPOOL_TARGET_NAMESPACE:-$CONSOLE_NAMESPACE}"
CLUSTERCLAIM_NAME="${CLUSTERCLAIM_NAME:-$MANAGED_CLUSTER_NAME}"
CLUSTERCLAIM_LIFETIME="${CLUSTERCLAIM_LIFETIME:-164h}"
HUB_KUBECONFIG="${WORKDIR}/hub-kubeconfig"
MANAGED_KUBECONFIG="${WORKDIR}/managed-kubeconfig"
mkdir -p "${WORKDIR}"

# after
WORKDIR="/tmp/rhacm-import"
umask 077
HUB_CLUSTER_NAME="${HUB_CLUSTER_NAME:-weekly}"
MANAGED_CLUSTER_NAME="${MANAGED_CLUSTER_NAME:-weekly-managed}"
CONSOLE_NAMESPACE="${CONSOLE_NAMESPACE:-console-squad}"
CLUSTERPOOL_NAME="${CLUSTERPOOL_NAME:-cs-aws-422}"
CLUSTERPOOL_TARGET_NAMESPACE="${CLUSTERPOOL_TARGET_NAMESPACE:-$CONSOLE_NAMESPACE}"
CLUSTERCLAIM_NAME="${CLUSTERCLAIM_NAME:-$MANAGED_CLUSTER_NAME}"
CLUSTERCLAIM_LIFETIME="${CLUSTERCLAIM_LIFETIME:-164h}"
HUB_KUBECONFIG="${WORKDIR}/hub-kubeconfig"
MANAGED_KUBECONFIG="${WORKDIR}/managed-kubeconfig"
mkdir -p "${WORKDIR}"
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/import-weekly-managed.sh` around lines 29 - 39, The script writes
kubeconfigs to /tmp via WORKDIR and sets HUB_KUBECONFIG and MANAGED_KUBECONFIG
which can expose creds; change the creation of WORKDIR to use a secure directory
with restrictive permissions and ensure cleanup on exit: create WORKDIR with a
secure mode (e.g. mkdir + set umask or chmod 700 on WORKDIR after mkdir), write
kubeconfigs only into that directory (HUB_KUBECONFIG and MANAGED_KUBECONFIG
remain), and add a trap/cleanup handler to securely delete or shred those files
on script exit/failure so credentials are not left on disk.
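The effect of the suggested `umask 077` can be verified directly. This standalone check assumes GNU `stat` (with a BSD `stat -f` fallback):

```shell
#!/usr/bin/env bash
set -euo pipefail

WORKDIR="$(mktemp -d)"
umask 077

touch "${WORKDIR}/kubeconfig"
mkdir "${WORKDIR}/sub"

# With umask 077, new files get mode 600 and new directories mode 700,
# so other local users cannot read extracted credentials.
file_mode="$(stat -c '%a' "${WORKDIR}/kubeconfig" 2>/dev/null || stat -f '%Lp' "${WORKDIR}/kubeconfig")"
dir_mode="$(stat -c '%a' "${WORKDIR}/sub" 2>/dev/null || stat -f '%Lp' "${WORKDIR}/sub")"
echo "file=${file_mode} dir=${dir_mode}"
```

Note that `umask` only affects files created after it is set, which is why the suggestion places it before `mkdir -p "${WORKDIR}"` and the kubeconfig writes.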

Comment on lines +400 to +404
```bash
oc --kubeconfig="${HUB_KUBECONFIG}" \
  get secret "${IMPORT_SECRET}" \
  -n "${MANAGED_CLUSTER_NAME}" \
  -o jsonpath='{.data.import\.yaml}' | \
  base64 -d > "${IMPORT_YAML}"
```


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Missing validation for import.yaml content.

If the import.yaml key is unexpectedly missing or empty, this creates an empty file that oc apply will attempt to process at line 432, potentially causing a confusing failure or silent no-op.

Proposed fix
```diff
 oc --kubeconfig="${HUB_KUBECONFIG}" \
   get secret "${IMPORT_SECRET}" \
   -n "${MANAGED_CLUSTER_NAME}" \
   -o jsonpath='{.data.import\.yaml}' | \
   base64 -d > "${IMPORT_YAML}"

+[[ -s "${IMPORT_YAML}" ]] \
+  || fail "Import manifest is empty"
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/import-weekly-managed.sh` around lines 400 - 404, The oc get secret
pipeline that reads '{.data.import\.yaml}' into IMPORT_YAML may produce an empty
value; change the sequence so you capture the base64-decoded output into a shell
variable first (using HUB_KUBECONFIG, IMPORT_SECRET, MANAGED_CLUSTER_NAME and
the jsonpath used), validate that the variable is non-empty and a plausible YAML
before writing to the file named by IMPORT_YAML, and if empty or invalid emit a
clear error (and exit non-zero) instead of creating an empty file that later oc
apply will attempt to use; add a log message indicating the secret key was
missing or empty so callers can diagnose the failure.
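The emptiness guard this comment asks for boils down to `[ -s file ]` (file exists and has a size greater than zero). A standalone sketch, with illustrative file contents standing in for the decoded secret data:

```shell
#!/usr/bin/env bash
set -euo pipefail

IMPORT_YAML="$(mktemp)"

# Simulate a secret whose import.yaml key is missing: the decode pipeline
# writes zero bytes, which the `-s` guard catches before anything is applied.
printf '' > "${IMPORT_YAML}"
if [ -s "${IMPORT_YAML}" ]; then empty_caught=false; else empty_caught=true; fi

# Simulate a populated key: the guard passes and the apply step may proceed.
printf 'kind: ManagedCluster\n' > "${IMPORT_YAML}"
if [ -s "${IMPORT_YAML}" ]; then populated_ok=true; else populated_ok=false; fi

echo "empty_caught=${empty_caught} populated_ok=${populated_ok}"
```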
