[all] update deps to 1.34 #3004
Conversation
cc @stephenfin
/retest
/test openstack-cloud-csi-manila-e2e-test
/lgtm

I can't see anything obviously wrong with this, but the Manila job had been passing pretty consistently recently, so I wonder if we've broken something in Gophercloud v2.8.0. I'm going to propose a bump of just that and see if we can reproduce the issue.
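For context, an isolated bump like that is usually a one-liner; a sketch, assuming Gophercloud's standard v2 module path:

```sh
# Sketch of an isolated dependency bump, run from the repository root.
# The module path assumes Gophercloud's v2 layout.
go get github.com/gophercloud/gophercloud/v2@v2.8.0
go mod tidy
```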
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: stephenfin

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
I have proposed two PRs, #3005 and #3006. Depending on the outcome of the Manila job on those PRs, we should learn whether (a) the job is permafailing, (b) we've broken something with gophercloud v2.8.0, or (c) we've broken the job with another change in this PR.

EDIT: Those other PRs both passed, and it failed again here, which suggests something in this PR is genuinely broken. Do we want to merge those two PRs and then iterate on this one? Would I be correct in saying that a bump in k8s will also result in a bump in the e2e tests? If so, I wonder whether something got stricter.
/test openstack-cloud-csi-manila-e2e-test
Review thread on these lines at the end of the `replace` block in go.mod:

```
k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.33.3
k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.33.3
)
```
Do we need to bump these replaces also?
good catch, yes
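For illustration, a hedged sketch of what the bumped block could look like. Assumption: Kubernetes 1.34 corresponds to v0.34.x tags on the k8s.io staging modules; the exact patch release, and whether every staging module still publishes new tags, needs to be verified against the k8s.io/kubernetes version actually used:

```
// go.mod (sketch, not the actual diff): staging-module replace directives
// bumped in lockstep with the main Kubernetes bump. v0.34.0 is an assumed
// tag; pin the patch release that matches your k8s.io/kubernetes version.
replace (
	k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.34.0
	k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.34.0
)
```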
I'll keep an eye on this, but I suspect there isn't any point in rechecking it. While the first test run (link) failed due to a deployment issue, the next two (link, link) both failed on the same two tests.
@zetaab: The following test failed, say `/retest` to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Yeah, exact same two failures again (link). This is the full failure for both:
Aha, git-blame shows a single change to these tests in recent months: this one 👉 kubernetes/kubernetes@784c589

EDIT: I have checked with @gnufied (the author of the commit) and he indicated that this is not related to the CSI driver but rather a sign that our version of …
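For anyone retracing that archaeology, a sketch of the kind of history query that surfaces such a commit. The path is a hypothetical stand-in; point it at the file(s) the failing tests actually live in:

```sh
# Sketch: list recent commits touching the e2e storage suites in a
# kubernetes/kubernetes checkout. test/e2e/storage/ is an assumed path.
git log --oneline --since="6 months ago" -- test/e2e/storage/
```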
#3008 should fix this. I would also prefer to merge the following before this patch, since they make this PR smaller and allow me to use it as a case study for how to bump the Kubernetes version.
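As a sketch of what that case study might boil down to (assumed versions and module list; a real bump touches many more staging modules):

```sh
# Hypothetical outline of a coordinated Kubernetes bump, assuming staging
# modules are pinned via replace directives in go.mod.
go get k8s.io/kubernetes@v1.34.0
for mod in legacy-cloud-providers sample-apiserver; do
  go mod edit -replace k8s.io/$mod=k8s.io/$mod@v0.34.0
done
go mod tidy
```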
PR needs rebase.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
replaced with #3010 |
What this PR does / why we need it:
Which issue this PR fixes (if applicable):
fixes #
Special notes for reviewers:
Release note: