NO-JIRA: Enable test-e2e-encryption #2087
openshift-merge-bot[bot] merged 1 commit into openshift:master
Conversation
@ardaguclu: This pull request explicitly references no jira issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed 36e17a1 to e976b88.
```diff
  "kubeapiservers.operator.openshift.io=identity,aescbc:7;kubeschedulers.operator.openshift.io=identity,aescbc:7",
- // 7 is migrated, plus one backed key (5). 6 is deleted, and therefore is not preserved (would be if the operand config was not deleted)
+ // 7 is migrated, backed key (5) is immediately included when rotating through identity. 6 is deleted, not preserved (would be if operand config was not deleted)
  "kubeapiservers.operator.openshift.io=identity,aescbc:7,aescbc:5;kubeschedulers.operator.openshift.io=identity,aescbc:7,aescbc:5",
```
Yeah, I think you are right.
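For context, the expected-state strings in this hunk follow a `resource=provider[,provider...]` format with entries joined by `;`. A hypothetical stdlib-only helper (not part of the PR; the test compares the raw strings directly) to parse them for assertions might look like:

```go
package main

import "strings"

// parseExpectedState splits an expected encryption-state string such as
// "kubeapiservers.operator.openshift.io=identity,aescbc:7,aescbc:5;..."
// into a map from resource name to its ordered list of providers/keys.
// Hypothetical helper for illustration only.
func parseExpectedState(s string) map[string][]string {
	out := map[string][]string{}
	for _, entry := range strings.Split(s, ";") {
		resource, providers, ok := strings.Cut(entry, "=")
		if !ok {
			continue // skip malformed entries
		}
		out[resource] = strings.Split(providers, ",")
	}
	return out
}
```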
/cc @p0lyn0mial
```go
// wait for CRD to be ready by polling the resource
t.Logf("Waiting for CRD to be ready")
err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, EncryptionTestTimeout, true, func(ctx context.Context) (bool, error) {
```
Why not use wait.ForeverTestTimeout?
```go
// wait for CRD to be ready by polling the resource
t.Logf("Waiting for CRD to be ready")
err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, EncryptionTestTimeout, true, func(ctx context.Context) (bool, error) {
	_, err := dynamicClient.Resource(operatorGVR).List(ctx, metav1.ListOptions{})
```
How does this ensure the CRD is ready?

Should we get the CRD and check some condition?

Yes, it is better to check the Established condition of the CRD. Updated.
```diff
  dynamicClient, err := dynamic.NewForConfig(kubeConfig)
  require.NoError(t, err)
- err = wait.PollImmediate(time.Second, wait.ForeverTestTimeout, func() (bool, error) {
+ err = wait.PollUntilContextTimeout(ctx, time.Second, wait.ForeverTestTimeout, true, func(ctx context.Context) (bool, error) {
```
Here we use wait.ForeverTestTimeout :)
```go
go controllers.Run(ctx, 1)

// wait for operator client to be able to read the resource
t.Logf("Waiting for operator client to sync")
```
Is this waiting for an informer to sync? If yes, should we wait until the informer is synced?
```go
)

// Encryption controllers run every minute. 2 minutes is sufficient for the required condition to be met.
var EncryptionTestTimeout = 2 * time.Minute
```
Why not use wait.ForeverTestTimeout?

I defined this timeout because the controllers run once per minute, to give sufficient time for the expected conditions to be met. However, I believe that if we wait for the informers' synced signal, we won't need this, so we can use wait.ForeverTestTimeout (30 seconds). I've updated the PR with that.
```go
	WithUnsupportedConfigOverrides(runtime.RawExtension{
		Raw: []byte(fmt.Sprintf(`{"encryption":{"reason":%q}}`, reason)),
	})
err = operatorClient.ApplyOperatorSpec(context.TODO(), "encryption-test", applyConfig)
```
Don't we have a ctx already defined?
Force-pushed b05eded to dffb2b8.
```go
	return false, crdErr
}

for _, condition := range oCRD.Status.Conditions {
```
Could we use apiextensions.IsCRDConditionTrue(oCRD, apiextensions.Established)?

It appears that IsCRDConditionTrue accepts an internal CustomResourceDefinition, but in our case we have an external CustomResourceDefinition.
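A minimal sketch of the equivalent check over the external condition shape, with locally defined stand-in types so it stays self-contained (real code would range over `oCRD.Status.Conditions` using the v1 types from apiextensions-apiserver):

```go
package main

// Stand-in types mirroring the external apiextensions.k8s.io/v1 condition
// shape; real code imports k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1.
type ConditionStatus string

const ConditionTrue ConditionStatus = "True"

type CRDCondition struct {
	Type   string
	Status ConditionStatus
}

// isCRDConditionTrue mimics apiextensions.IsCRDConditionTrue for external
// types: the named condition must be present with Status "True".
func isCRDConditionTrue(conds []CRDCondition, condType string) bool {
	for _, c := range conds {
		if c.Type == condType && c.Status == ConditionTrue {
			return true
		}
	}
	return false
}
```

With this, the poll body reduces to `return isCRDConditionTrue(oCRD.Status.Conditions, "Established"), nil` (adjusted for the real types).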
```go
err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, wait.ForeverTestTimeout, true, func(ctx context.Context) (bool, error) {
	oCRD, crdErr := apiextensionsClient.CustomResourceDefinitions().Get(ctx, operatorCRD.Name, metav1.GetOptions{})
	if crdErr != nil {
		if errors.IsNotFound(crdErr) {
```
A CRD, after creation, must already be persisted in etcd. I think we can fail on any error.
```go
go controllers.Run(ctx, 1)

t.Logf("Waiting for operator informer to sync")
if !cache.WaitForCacheSync(stopCh, operatorClient.Informer().HasSynced) {
```
Should we use infSynced := operatorInformer.WaitForCacheSync(stopCh)?

Updated to use operatorInformer.WaitForCacheSync(stopCh). Please let me know your opinion on this. Should I use the returned value (infSynced) somewhere?
```go
if !cache.WaitForCacheSync(stopCh, operatorClient.Informer().HasSynced) {
	t.Fatalf("failed to sync operator informer")
}
kubeInformers.WaitForCacheSync(stopCh)
```
kubeInformers := operatorInformer.WaitForCacheSync(stopCh)?

Shouldn't we wait for kubeInformers to sync rather than operatorInformer?

I think we should wait for all factories.
```go
operatorInformer.Start(stopCh)
go controllers.Run(ctx, 1)

t.Logf("Waiting for operator informer to sync")
```
Should we also wait for fakeConfigInformer?
```go
}
kubeInformers.WaitForCacheSync(stopCh)
err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, wait.ForeverTestTimeout, true, func(ctx context.Context) (bool, error) {
	_, _, _, err := operatorClient.GetOperatorState()
```
Yeah, I think it becomes redundant. Removed.
Force-pushed dffb2b8 to 2159aa7.
unrelated
```go
go controllers.Run(ctx, 1)

t.Logf("Waiting for informers to sync")
operatorInformer.WaitForCacheSync(stopCh)
```
I think that we should check the response from WaitForCacheSync, otherwise we don't know if an informer synced :)

Good point :). I've updated the PR accordingly.
Force-pushed 2159aa7 to 403a926.
```go
go controllers.Run(ctx, 1)

t.Logf("Waiting for informers to sync")
if !operatorInformer.WaitForCacheSync(stopCh)[operatorGVR] {
```
Could we simply check all returned GVRs? Effectively these are the ones used by the test.
```go
		t.Fatalf("informer for %v not synced", typ)
	}
}
for typ, synced := range kubeInformers.WaitForCacheSync(stopCh)["openshift-config-managed"] {
```
Same here: could we simply check all returned GVRs? Effectively these are the ones used by the test.
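One way to act on that suggestion: collect every unsynced key from the map that WaitForCacheSync returns and fail with the full list. A self-contained sketch with string keys (the real maps are keyed by schema.GroupVersionResource or reflect.Type, depending on the factory):

```go
package main

import "sort"

// unsyncedKeys returns, sorted, every key whose informer did not sync,
// mimicking a check over the map an informer factory's WaitForCacheSync
// returns; an empty result means every informer the test relies on is synced.
func unsyncedKeys(synced map[string]bool) []string {
	var out []string
	for k, ok := range synced {
		if !ok {
			out = append(out, k)
		}
	}
	sort.Strings(out)
	return out
}
```

In the test this would replace the per-GVR index lookups with something like `if u := unsyncedKeys(...); len(u) > 0 { t.Fatalf("informers not synced: %v", u) }`.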
```go
if !operatorInformer.WaitForCacheSync(stopCh)[operatorGVR] {
	t.Fatalf("informer for %v not synced", operatorGVR)
}
for typ, synced := range fakeConfigInformer.WaitForCacheSync(stopCh) {
```
Nit: could we rename typ to gvr?
Force-pushed 403a926 to b402217.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ardaguclu, p0lyn0mial. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
/hold |
|
/hold for |
|
@ardaguclu: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/cherrypick release-4.21
@ardaguclu: new pull request created: #2130

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
It appears that integration tests for encryption controllers are never being executed (example job). This PR updates the Makefile target to enable them.