start: trust custom CAs before registry probe; retry once on cert errors #21808
base: master
Conversation
- Reuse bootstrapper certs plumbing to copy custom CAs into the guest
- Install symlinks and refresh trust before tryRegistry runs
- Retry once on certificate trust errors (suppresses misleading warning)

Fixes kubernetes#21799

Signed-off-by: Andreas Müller <mulan04.0120@gmail.com>
Welcome @mulan04!
Hi @mulan04. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Can one of the admins verify this patch?
/ok-to-test
medyagh left a comment:

plz add a Before/After this PR output
@medyagh I added the requested 'Before / After this PR output' section to the PR description; I hope it is clear now what this PR accomplishes.

Before / After this PR output

Before this PR

❗ Failing to connect to https://registry.k8s.io/ from inside the minikube VM with a VPN client requiring a custom CA cert

After this PR

(no warning about https://registry.k8s.io/)
medyagh left a comment:

thank you for your patience, this PR seems interesting, plz see comments
```go
// EnsureCACertsEarly collects host-provided custom CA certs, copies them into the guest,
// installs symlinks into the system trust store, and refreshes trust *before* HTTPS probes.
func EnsureCACertsEarly(cr command.Runner) error {
	caCerts, err := collectCACerts()
```
the collectCACerts is also called in func SetupCerts
would that make it duplicate or run two times? I would like to ensure this won't make minikube start slower, can it be done once for every minikube start?
also would u plz add before/after this PR

before this PR
minikube delete --all
time minikube start

After this PR
minikube delete --all
time minikube start
Good catch: collectCACerts is indeed called both from EnsureCACertsEarly and later from SetupCerts, so with custom CAs present we do the following twice per minikube start:

- scan ~/.minikube/certs and ~/.minikube/files/etc/ssl/certs for .crt/.pem certs,
- create FileAssets and copy them into the guest, and
- call installCertSymlinks.

All of that is idempotent and the work is proportional to the number of user-provided CA files (typically small), so the overhead is very small compared to the rest of the start sequence.
To validate this, I measured startup time before/after this PR with a VPN enabled that requires the custom CA in ~/.minikube/certs:
Before this PR (master)
$ /tmp/minikube-master delete --all --purge
🔥 Deleting "minikube" in podman ...
🔥 Removing /home/user/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
💀 Successfully purged minikube directory located at - [/home/user/.minikube]
$ mkdir ~/.minikube/
cp -r ~/certs/ ~/.minikube/
$ time /tmp/minikube-master start \
--driver=podman \
--container-runtime=containerd \
--embed-certs
😄 minikube v1.37.0 on Fedora 42 (kvm/amd64)
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48-1765661130-22141 ...
💾 Downloading Kubernetes v1.34.3 preload ...
> preloaded-images-k8s-v18-v1...: 324.02 MiB / 324.02 MiB 100.00% 2.07 Mi
> gcr.io/k8s-minikube/kicbase...: 498.47 MiB / 498.47 MiB 100.00% 3.05 Mi
E1217 14:18:17.914003 13521 cache.go:238] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
❗ Failing to connect to https://registry.k8s.io/ from inside the minikube container
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦 Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real 4m54.419s
user 0m10.988s
sys 0m23.922s
for i in 1 2 3; do
/tmp/minikube-master delete --all
time /tmp/minikube-master start \
--driver=podman \
--container-runtime=containerd \
--embed-certs
done
🔥 Deleting "minikube" in podman ...
🔥 Removing /home/user/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
😄 minikube v1.37.0 on Fedora 42 (kvm/amd64)
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48-1765661130-22141 ...
E1217 14:40:24.280843 49878 cache.go:238] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
❗ Failing to connect to https://registry.k8s.io/ from inside the minikube container
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦 Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real 0m19.482s
user 0m0.968s
sys 0m0.672s
🔥 Deleting "minikube" in podman ...
🔥 Removing /home/user/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
😄 minikube v1.37.0 on Fedora 42 (kvm/amd64)
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48-1765661130-22141 ...
E1217 14:40:48.966342 53245 cache.go:238] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
❗ Failing to connect to https://registry.k8s.io/ from inside the minikube container
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦 Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real 0m19.431s
user 0m0.779s
sys 0m0.732s
🔥 Deleting "minikube" in podman ...
🔥 Removing /home/user/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
😄 minikube v1.37.0 on Fedora 42 (kvm/amd64)
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48-1765661130-22141 ...
E1217 14:41:15.365287 56645 cache.go:238] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
❗ Failing to connect to https://registry.k8s.io/ from inside the minikube container
💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
📦 Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real 0m19.992s
user 0m0.824s
sys 0m0.724s

After this PR
$ for i in 1 2 3; do
/tmp/minikube-pr delete --all
time /tmp/minikube-pr start \
--driver=podman \
--container-runtime=containerd \
--embed-certs
done
🔥 Deleting "minikube" in podman ...
🔥 Removing /home/user/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
😄 minikube v1.37.0 on Fedora 42 (kvm/amd64)
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48-1765661130-22141 ...
E1217 14:37:54.159666 39146 cache.go:238] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
📦 Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real 0m23.042s
user 0m0.778s
sys 0m0.702s
🔥 Deleting "minikube" in podman ...
🔥 Removing /home/user/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
😄 minikube v1.37.0 on Fedora 42 (kvm/amd64)
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48-1765661130-22141 ...
E1217 14:38:23.629614 42520 cache.go:238] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
📦 Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real 0m18.974s
user 0m0.751s
sys 0m0.732s
🔥 Deleting "minikube" in podman ...
🔥 Removing /home/user/.minikube/machines/minikube ...
💀 Removed all traces of the "minikube" cluster.
🔥 Successfully deleted all profiles
😄 minikube v1.37.0 on Fedora 42 (kvm/amd64)
✨ Using the podman driver based on user configuration
📌 Using Podman driver with root privileges
👍 Starting "minikube" primary control-plane node in "minikube" cluster
🚜 Pulling base image v0.0.48-1765661130-22141 ...
E1217 14:38:49.273019 45866 cache.go:238] Error downloading kic artifacts: not yet implemented, see issue #8426
🔥 Creating podman container (CPUs=2, Memory=3900MB) ...
📦 Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
🔗 Configuring CNI (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
real 0m19.865s
user 0m0.934s
sys 0m0.643s

Repeating each run 3x shows startup times within normal variance (no measurable slowdown).
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mulan04

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Pull request overview
This PR fixes a misleading SSL certificate error warning that appears during minikube start when users provide custom CA certificates (e.g., for corporate proxies or VPNs). The issue occurred because the registry connectivity probe ran before the custom CAs were installed and trusted in the guest VM.
Key Changes:
- Custom CA certificates are now installed and trusted in the guest VM before the HTTPS registry connectivity probe runs
- Added retry logic with certificate error detection to handle edge cases where the trust store refresh hasn't fully propagated
- The warning is suppressed if the retry succeeds, eliminating false positive SSL warnings
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| pkg/minikube/node/start.go | Added early CA installation call in validateNetwork(), implemented isCertError() helper to detect certificate trust errors, and enhanced tryRegistry() with single-retry logic for cert errors |
| pkg/minikube/bootstrapper/certs.go | Introduced EnsureCACertsEarly() function to collect, copy, and install custom CA certificates into the guest, and refresh the system trust store before connectivity probes run |
The implementation is well-designed and follows existing codebase patterns. The changes are minimal, focused, and solve the specific problem without introducing unnecessary complexity. The code properly handles errors (making CA setup failures non-fatal), manages resources correctly (using defer for file cleanup), and includes appropriate retry logic to handle timing edge cases. No issues were found during the review.
Keywords which can automatically close issues and at(@) or hashtag(#) mentions are not allowed in commit messages. The list of commits with invalid commit messages:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
kvm2 driver with docker runtime
Times for minikube start: 39.8s 44.0s 39.6s 42.5s 40.7s
Times for minikube ingress: 16.3s 16.8s 15.8s 15.8s 15.8s

docker driver with docker runtime
Times for minikube start: 21.5s 22.7s 21.1s 21.7s 21.1s
Times for minikube ingress: 10.7s 10.7s 10.7s 10.7s 11.7s

docker driver with containerd runtime
Times for minikube start: 23.0s 19.8s 22.2s 21.4s 20.1s
Times for minikube (PR 21808) ingress: 22.1s 22.2s 23.2s 23.2s 23.1s
Here are the numbers of the top 10 failed tests in each environment with the lowest flake rate.
Besides those, the following environments also have failed tests:
To see the flake rates of all tests by environment, click here.
@mulan04: The following tests failed:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What this PR does / why we need it
When a user provides custom root CAs (for example behind enterprise VPNs or proxies), minikube currently probes https://registry.k8s.io/ before those CAs are copied and trusted inside the guest. This causes a misleading warning like:

❗ Failing to connect to https://registry.k8s.io/ from inside the minikube VM

even though connectivity works moments later once the CAs are installed.
This PR:
- copies host-provided custom CAs into the guest and refreshes the system trust store (update-ca-certificates/update-ca-trust) inside the guest before the registry probe runs,
- retries the probe once on certificate trust errors, suppressing the misleading warning when the retry succeeds.

Before / After this PR output
Before this PR
❗ Failing to connect to https://registry.k8s.io/ from inside the minikube VM
curl: (60) SSL certificate problem: self-signed certificate
After this PR
(no warning about https://registry.k8s.io/)
Which issue(s) this PR fixes
Fixes [#21799](#21799)
(Startup falsely reports registry SSL failure when using custom CA.)
Special notes for your reviewer
- Reuses the existing bootstrapper certs plumbing (collectCACerts, installCertSymlinks) instead of duplicating certificate copy and trust code.
- EnsureCACertsEarly is idempotent and safe to call multiple times; it will only act when host-provided certs exist.

Testing
1. Place a custom CA (e.g. your corporate proxy CA) in ~/.minikube/certs/ or ~/.minikube/files/etc/ssl/certs/.
2. Run minikube start.
3. Observe: no misleading SSL warning about registry.k8s.io.

Also verified:
Release note