
NO-JIRA: Enable test-e2e-encryption#2087

Merged
openshift-merge-bot[bot] merged 1 commit into openshift:master from ardaguclu:test-integration
Jan 29, 2026

Conversation

@ardaguclu
Member

@ardaguclu ardaguclu commented Jan 23, 2026

It appears that integration tests for encryption controllers are never being executed (example job). This PR updates the Makefile target to enable them.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Jan 23, 2026
@openshift-ci-robot

@ardaguclu: This pull request explicitly references no jira issue.

Details

In response to this:

SSIA

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from bertinatto and deads2k January 23, 2026 10:50
@ardaguclu ardaguclu force-pushed the test-integration branch 4 times, most recently from 36e17a1 to e976b88 Compare January 26, 2026 09:38
"kubeapiservers.operator.openshift.io=identity,aescbc:7;kubeschedulers.operator.openshift.io=identity,aescbc:7",
// 7 is migrated, plus one backed key (5). 6 is deleted, and therefore is not preserved (would be if the operand config was not deleted)
// 7 is migrated, backed key (5) is immediately included when rotating through identity. 6 is deleted, not preserved (would be if operand config was not deleted)
"kubeapiservers.operator.openshift.io=identity,aescbc:7,aescbc:5;kubeschedulers.operator.openshift.io=identity,aescbc:7,aescbc:5",
Member Author

This one was missed in #615

Contributor

yeah, i think you are right.

@ardaguclu
Member Author

/cc @p0lyn0mial


// wait for CRD to be ready by polling the resource
t.Logf("Waiting for CRD to be ready")
err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, EncryptionTestTimeout, true, func(ctx context.Context) (bool, error) {
Contributor

why not to use wait.ForeverTestTimeout ?

Member Author

Good point. Updated.

// wait for CRD to be ready by polling the resource
t.Logf("Waiting for CRD to be ready")
err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, EncryptionTestTimeout, true, func(ctx context.Context) (bool, error) {
_, err := dynamicClient.Resource(operatorGVR).List(ctx, metav1.ListOptions{})
Contributor

how does it ensure the CRD is ready ?

Contributor

should we get the CRD and check some condition ?

Member Author

Yes, it is better to check the established condition of the CRD. Updated.
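The pattern the thread converges on — poll until the CRD reports the Established condition as True, rather than merely listing the resource — can be sketched with self-contained stand-ins. In this sketch, `crdCondition`, `crd`, `pollUntil`, and `isEstablished` are hypothetical; the real test uses the apiextensions client and types, and `wait.PollUntilContextTimeout` from k8s.io/apimachinery.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// Hypothetical minimal stand-ins for the apiextensions external types.
type crdCondition struct {
	Type   string
	Status string
}

type crd struct {
	Conditions []crdCondition
}

// pollUntil mirrors the shape of wait.PollUntilContextTimeout with
// immediate=true: call cond right away, then every interval, until it
// returns true, returns an error, or the timeout expires.
func pollUntil(ctx context.Context, interval, timeout time.Duration, cond func(context.Context) (bool, error)) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		done, err := cond(ctx)
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-ticker.C:
		}
	}
}

// isEstablished reports whether the Established condition is True,
// analogous to checking the CRD's status conditions by hand.
func isEstablished(c *crd) bool {
	for _, cond := range c.Conditions {
		if cond.Type == "Established" && cond.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	obj := &crd{}
	attempts := 0
	err := pollUntil(context.Background(), 5*time.Millisecond, time.Second, func(ctx context.Context) (bool, error) {
		attempts++
		if attempts == 3 {
			// Simulate the apiserver flipping the condition to True.
			obj.Conditions = append(obj.Conditions, crdCondition{Type: "Established", Status: "True"})
		}
		return isEstablished(obj), nil
	})
	fmt.Println("established:", err == nil, "after", attempts, "attempts")
	// → established: true after 3 attempts
}
```

Checking Established rather than a successful List matters because a List can succeed against a stale discovery cache while the CRD is still being served inconsistently.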

dynamicClient, err := dynamic.NewForConfig(kubeConfig)
require.NoError(t, err)
err = wait.PollImmediate(time.Second, wait.ForeverTestTimeout, func() (bool, error) {
err = wait.PollUntilContextTimeout(ctx, time.Second, wait.ForeverTestTimeout, true, func(ctx context.Context) (bool, error) {
Contributor

here we use wait.ForeverTestTimeout :)

go controllers.Run(ctx, 1)

// wait for operator client to be able to read the resource
t.Logf("Waiting for operator client to sync")
Contributor

is this waiting for an informer to sync ?

if yes, should we wait until the informer is synced ?

Member Author

Updated.

)

// Encryption controllers run every minute. 2 minutes is sufficient for the required condition to be met.
var EncryptionTestTimeout = 2 * time.Minute
Contributor

why not to use wait.ForeverTestTimeout ?

Member Author

I defined this timeout because the controllers run once per minute, to give the expected conditions sufficient time to be met. However, if we wait for the informers' synced signal, we won't need it, and we can use wait.ForeverTestTimeout (30 seconds). I've updated the PR with that.

WithUnsupportedConfigOverrides(runtime.RawExtension{
Raw: []byte(fmt.Sprintf(`{"encryption":{"reason":%q}}`, reason)),
})
err = operatorClient.ApplyOperatorSpec(context.TODO(), "encryption-test", applyConfig)
Contributor

don't we have a ctx already defined ?

Member Author

Updated.


@ardaguclu ardaguclu force-pushed the test-integration branch 5 times, most recently from b05eded to dffb2b8 Compare January 28, 2026 10:17
return false, crdErr
}

for _, condition := range oCRD.Status.Conditions {
Contributor

could we use apiextensions.IsCRDConditionTrue(oCRD, apiextensions.Established) ?

Member Author

It appears that IsCRDConditionTrue accepts the internal CustomResourceDefinition type, but in our case we have the external CustomResourceDefinition.

err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, wait.ForeverTestTimeout, true, func(ctx context.Context) (bool, error) {
oCRD, crdErr := apiextensionsClient.CustomResourceDefinitions().Get(ctx, operatorCRD.Name, metav1.GetOptions{})
if crdErr != nil {
if errors.IsNotFound(crdErr) {
Contributor

A CRD after creation must be persisted in etcd. I think we can fail on any error.

Member Author

Updated.
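The suggested change — abort the poll on any error rather than tolerating NotFound, since the CRD is persisted in etcd once created — can be illustrated with a self-contained sketch. `condTolerant`, `condStrict`, `getFunc`, and `errNotFound` are hypothetical stand-ins for the real poll condition and for `apierrors.IsNotFound` from k8s.io/apimachinery.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a Kubernetes NotFound API error.
var errNotFound = errors.New("not found")

// getFunc is a hypothetical fetch that may fail (e.g. getting the CRD).
type getFunc func() error

// condTolerant is the original shape of the poll condition: it swallows
// NotFound and keeps polling, surfacing only other errors.
func condTolerant(get getFunc) (done bool, err error) {
	if err := get(); err != nil {
		if errors.Is(err, errNotFound) {
			return false, nil // keep polling
		}
		return false, err
	}
	return true, nil
}

// condStrict is the reviewed shape: once the CRD has been created it is
// persisted in etcd, so any error (including NotFound) indicates a real
// problem and aborts the poll immediately.
func condStrict(get getFunc) (done bool, err error) {
	if err := get(); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	notFound := func() error { return errNotFound }
	done, err := condTolerant(notFound)
	fmt.Println(done, err) // → false <nil>  (keeps polling)
	done, err = condStrict(notFound)
	fmt.Println(done, err) // → false not found  (poll aborts)
}
```

Failing fast here also makes test failures cheaper to diagnose: the poll reports the underlying error instead of timing out silently.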

go controllers.Run(ctx, 1)

t.Logf("Waiting for operator informer to sync")
if !cache.WaitForCacheSync(stopCh, operatorClient.Informer().HasSynced) {
Contributor

should we infSynced := operatorInformer.WaitForCacheSync(stopCh) ?

Member Author

Updated by using operatorInformer.WaitForCacheSync(stopCh). Please let me know what you think: should I use the returned value (infSynced) somewhere?

if !cache.WaitForCacheSync(stopCh, operatorClient.Informer().HasSynced) {
t.Fatalf("failed to sync operator informer")
}
kubeInformers.WaitForCacheSync(stopCh)
Contributor

kubeInformers := operatorInformer.WaitForCacheSync(stopCh) ?

Member Author

Shouldn't we wait for kubeInformers to sync rather than operatorInformer?

Contributor

I think we should wait for all factories.
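"Wait for all factories" amounts to checking every entry of every WaitForCacheSync result map, not just one informer. A minimal sketch, with plain strings in place of the real `reflect.Type` / GroupVersionResource keys (`syncResult` and `allSynced` are hypothetical):

```go
package main

import "fmt"

// syncResult mimics the map returned by a factory's WaitForCacheSync,
// keyed here by a plain string instead of reflect.Type or a GVR.
type syncResult map[string]bool

// allSynced inspects every entry of every factory's result and collects
// the informers that failed to sync, so the test can fail on any of them.
func allSynced(results ...syncResult) (failed []string) {
	for _, res := range results {
		for name, ok := range res {
			if !ok {
				failed = append(failed, name)
			}
		}
	}
	return failed
}

func main() {
	operator := syncResult{"operators.openshift.io/v1: kubeapiservers": true}
	kube := syncResult{"v1: secrets": true, "v1: configmaps": false}
	if failed := allSynced(operator, kube); len(failed) > 0 {
		fmt.Println("informers not synced:", failed)
		// → informers not synced: [v1: configmaps]
	}
}
```

Checking the full map matters because WaitForCacheSync returns per-informer results: a single unchecked `false` means a controller starts against an empty cache and the test flakes.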

operatorInformer.Start(stopCh)
go controllers.Run(ctx, 1)

t.Logf("Waiting for operator informer to sync")
Contributor

should we also wait for fakeConfigInformer ?

Member Author

Updated.

}
kubeInformers.WaitForCacheSync(stopCh)
err = wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, wait.ForeverTestTimeout, true, func(ctx context.Context) (bool, error) {
_, _, _, err := operatorClient.GetOperatorState()
Contributor

is this required ?

Member Author

Yeah, I think it becomes redundant. Removed.

@ardaguclu
Member Author

unrelated
/retest

go controllers.Run(ctx, 1)

t.Logf("Waiting for informers to sync")
operatorInformer.WaitForCacheSync(stopCh)
Contributor

i think that we should check the response from WaitForCacheSync otherwise we don't know if an informer synced :)

Member Author

Good point :). I've updated the PR accordingly.

go controllers.Run(ctx, 1)

t.Logf("Waiting for informers to sync")
if !operatorInformer.WaitForCacheSync(stopCh)[operatorGVR] {
Contributor

could we just simply check all returned gvrs ?
effectively these are the ones used by the test.

t.Fatalf("informer for %v not synced", typ)
}
}
for typ, synced := range kubeInformers.WaitForCacheSync(stopCh)["openshift-config-managed"] {
Contributor

could we just simply check all returned gvrs ?
effectively these are the ones used by the test.

if !operatorInformer.WaitForCacheSync(stopCh)[operatorGVR] {
t.Fatalf("informer for %v not synced", operatorGVR)
}
for typ, synced := range fakeConfigInformer.WaitForCacheSync(stopCh) {
Contributor

nit: could we rename typ to grv ?

@p0lyn0mial
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jan 29, 2026
@openshift-ci
Contributor

openshift-ci bot commented Jan 29, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ardaguclu, p0lyn0mial

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 29, 2026
@ardaguclu
Member Author

/hold

@p0lyn0mial
Contributor

/hold

for ci/prow/e2e-aws-encryption

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 29, 2026
@ardaguclu
Member Author

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 29, 2026
@openshift-ci
Contributor

openshift-ci bot commented Jan 29, 2026

@ardaguclu: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit 60005ae into openshift:master Jan 29, 2026
5 checks passed
@ardaguclu ardaguclu deleted the test-integration branch January 29, 2026 12:23
@ardaguclu
Member Author

/cherrypick release-4.21

@openshift-cherrypick-robot

@ardaguclu: new pull request created: #2130

Details

In response to this:

/cherrypick release-4.21

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
