SPLAT-2172: AWS dedicate host support #10079

Open
vr4manta wants to merge 6 commits into openshift:main from vr4manta:SPLAT-2172

Conversation

@vr4manta
Contributor

@vr4manta vr4manta commented Nov 13, 2025

SPLAT-2172

Changes

  • Bumped openshift/api
  • Added logic to allow HostPlacement of dedicated hosts

Dependencies

Notes

MAO and CAO changes are needed for it to fully work. For now, this PR is adding the ability to generate the needed outputs for bootstrapping.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added AWS Dedicated Hosts placement support for compute machine pools; users can configure dedicated host affinity and allocations during cluster installation.
    • Added validation to ensure dedicated host configurations are consistent and exist in the specified region.
  • Improvements

    • Enhanced cluster cleanup to properly release dedicated hosts during cluster destruction.
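Based on the schema fields named later in this review (a hostPlacement block with an affinity enum of DedicatedHost/AnyAvailable and a dedicatedHost list of host IDs matching ^h-[0-9a-f]{17}$), a compute machine pool in install-config.yaml might look roughly like the sketch below. The exact layout is an assumption and should be checked against the merged openshift/api types; the host IDs are illustrative.

```yaml
compute:
- name: worker
  replicas: 2
  platform:
    aws:
      zones:
      - us-east-1a
      - us-east-1b
      hostPlacement:
        affinity: DedicatedHost     # or AnyAvailable (then dedicatedHost must be omitted)
        dedicatedHost:
        - id: h-0123456789abcdef0   # illustrative host ID in us-east-1a
        - id: h-0fedcba9876543210   # illustrative host ID in us-east-1b
```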

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Nov 13, 2025
@openshift-ci-robot
Contributor

openshift-ci-robot commented Nov 13, 2025

@vr4manta: This pull request references SPLAT-2172 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.

Details

In response to this:

SPLAT-2172

Changes

  • Bumped openshift/api
  • Added logic to allow HostPlacement of dedicated hosts

Dependencies

Notes

MAO and CAO changes are needed for it to fully work. For now, this PR is adding the ability to generate the needed outputs for bootstrapping.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@tthvo
Member

tthvo commented Nov 14, 2025

/cc

@openshift-ci openshift-ci bot requested a review from tthvo November 14, 2025 00:35
@vr4manta vr4manta force-pushed the SPLAT-2172 branch 3 times, most recently from 24500b6 to 342ab2d on November 17, 2025 17:44
@tthvo
Member

tthvo commented Nov 18, 2025

/retest-required

Member

@tthvo tthvo left a comment


Cool feature 👍 🚀

I have quite a few comments 😅 Sorry if I'm reviewing a little too soon...

switch *p.HostPlacement.Affinity {
case aws.HostAffinityAnyAvailable:
if p.HostPlacement.DedicatedHost != nil {
allErrs = append(allErrs, field.Required(fldPath.Child("dedicatedHost"), "dedicatedHost is required when 'affinity' is set to DedicatedHost, and forbidden otherwise"))
Member


Did you mean hostPlacement.dedicatedHost is forbidden if affinity is AnyAvailable?

 allErrs = append(allErrs, field.Forbidden(fldPath.Child("dedicatedHost"), "dedicatedHost must not be set when affinity is AnyAvailable"))

Contributor Author


@tthvo So this was an interesting one. In openshift/api and openshift/machine-api-operator, it was suggested to error this way for this scenario. I was doing this to keep it consistent.

Now that said, I am happy to make your suggestion if you prefer installer say it this way. Just let me know.

if err != nil {
allErrs = append(allErrs, field.InternalError(placementPath.Child("dedicatedHost"), err))
} else {
// Check the returned configured hosts to see if the dedicated hosts defined in install-config exists.
Member


💡 ❓ I got 2 questions here:

  1. Do the zones of dedicated hosts need to match machinepool zone field if defined?

    type MachinePool struct {
    // Zones is list of availability zones that can be used.
    //
    // +optional
    Zones []string `json:"zones,omitempty"`

  2. Do the user-input zones for dedicated hosts need to match with the actual zones returned from AWS?

    // If user specified a zone, validate it matches AWS
    if host.Zone != "" && host.Zone != hostDetails.Zone {
      allErrs = append(allErrs, field.Invalid(
          fldPath.Child("hostPlacement", "dedicatedHost").Index(i).Child("zone"),
          host.Zone,
          fmt.Sprintf("specified zone %s does not match actual host zone %s",
              host.Zone, hostDetails.Zone)))
    }

Contributor


Do the user-input zones for dedicated hosts need to match with the actual zones returned from AWS?

On that point, I'm curious if users should need to specify the zone for the dedicated host? It seems like we can just look up the dedicated host by id and then use that to determine the zone (which the pr is already doing IIUC). Seems like we might be able to shed zone from the install config.
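The zone cross-check under discussion reduces to a small pure function. A minimal sketch, assuming the host zones have already been fetched via DescribeHosts; hostZoneMismatches and its map inputs are hypothetical names for illustration, not code from the PR:

```go
package main

import "fmt"

// hostZoneMismatches compares the zone each dedicated host actually lives in
// (actualZones, keyed by host ID, as it would be after a DescribeHosts call)
// against the zones the user wrote in install-config (userZones). An empty
// user-specified zone means "derive the zone from the host", so it never
// counts as a mismatch. Hypothetical helper, for illustration only.
func hostZoneMismatches(actualZones, userZones map[string]string) []string {
	var errs []string
	for id, want := range userZones {
		got, ok := actualZones[id]
		if !ok {
			errs = append(errs, fmt.Sprintf("host %s not found in region", id))
			continue
		}
		if want != "" && want != got {
			errs = append(errs, fmt.Sprintf("host %s: specified zone %s does not match actual zone %s", id, want, got))
		}
	}
	return errs
}

func main() {
	actual := map[string]string{"h-0123456789abcdef0": "us-east-1a"}
	fmt.Println(hostZoneMismatches(actual, map[string]string{"h-0123456789abcdef0": "us-east-1b"}))
	fmt.Println(len(hostZoneMismatches(actual, map[string]string{"h-0123456789abcdef0": ""}))) // prints 0
}
```

Treating an empty user zone as "no opinion" is what would let zone be dropped from the install config entirely, as suggested above.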

@vr4manta
Contributor Author

/hold
Need to rebase and add validation logic to make sure defined dedicated hosts are in zones based on the machinepool config.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 18, 2025
@vr4manta vr4manta force-pushed the SPLAT-2172 branch 5 times, most recently from 82915c2 to 8ef8679 on November 21, 2025 13:54
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 21, 2025
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 21, 2025
@vr4manta
Contributor Author

/payload-job periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-dedicated

@openshift-ci
Contributor

openshift-ci bot commented Nov 21, 2025

@vr4manta: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-nightly-4.21-e2e-aws-ovn-dedicated

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/1c5c6d80-c717-11f0-93e9-d918c346504c-0

@vr4manta
Contributor Author

/unhold
/retest

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 24, 2025
@vr4manta
Contributor Author

/retest

@vr4manta
Contributor Author

vr4manta commented Dec 2, 2025

/hold
Based on discussions with @patrickdillon and @JoelSpeed, I am adding MAPA support and will then enhance this PR to leverage that code path without being dependent on CAPI.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 2, 2025
@vr4manta vr4manta force-pushed the SPLAT-2172 branch 2 times, most recently from c873bb3 to 5a397fb on February 27, 2026 15:50
@vr4manta
Contributor Author

/hold
After making changes based on reviews, I'm getting an error when checking dedicated hosts against the defined zones.

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 27, 2026
// Dedicated hosts cannot be configured in defaultMachinePlatform
if platform.DefaultMachinePlatform.HostPlacement != nil {
defaultPath := fldPath.Child("defaultMachinePlatform").Child("hostPlacement")
errMsg := "dedicated hosts cannot be configured in defaultMachinePlatform, they must be specified per machine pool"
Contributor


nit: it might be more user friendly to specify that they're only supported on compute machine pools, but NBD

Comment on lines +512 to +526
// Dedicated hosts are only supported on worker (compute) pools
if poolName != "" && poolName != types.MachinePoolComputeRoleName {
placementPath := fldPath.Child("hostPlacement")
errMsg := fmt.Sprintf("dedicated hosts are only supported on %s pools, not on %s pools", types.MachinePoolComputeRoleName, poolName)
allErrs = append(allErrs, field.Invalid(placementPath, pool.HostPlacement, errMsg))
return allErrs
}

// Control plane pools cannot use dedicated hosts
if poolName == "" {
placementPath := fldPath.Child("hostPlacement")
errMsg := "dedicated hosts are not supported on control plane pools"
allErrs = append(allErrs, field.Invalid(placementPath, pool.HostPlacement, errMsg))
return allErrs
}
Contributor


nit: I would have expected these validations in pkg/types (because they don't require aws sdk)

Comment on lines +970 to +977
describeOutput, err := client.DescribeHosts(ctx, &ec2v2.DescribeHostsInput{
HostIds: []string{id},
})
if err != nil {
errCode := HandleErrorCode(err)
if errCode == "InvalidHostID.NotFound" {
// Host doesn't exist, nothing to do
return nil
Contributor


This call to DescribeHosts is always made but permissions.go is listing the permission as optional. We need to either make the permission required, or gracefully handle the permission error (i think handling the permission error is the way to go).

@vr4manta vr4manta force-pushed the SPLAT-2172 branch 2 times, most recently from cc8a0b9 to 4b1d68a on March 2, 2026 16:41
Contributor

@mtulio mtulio left a comment


Hey @vr4manta , good progress!

A few questions related to tagging and covered use cases. It would be nice to know what the tagging behavior of a Day-2 instance created with a BYO DH ID would be: do we need to set kubernetes cluster tags (kubernetes/cluster...) on it (mapa/capa component), or would that be optional?

@vr4manta vr4manta force-pushed the SPLAT-2172 branch 2 times, most recently from 780e1ea to 35cdfe9 on March 5, 2026 14:39
@coderabbitai

coderabbitai bot commented Mar 9, 2026

Walkthrough

This change introduces comprehensive AWS Dedicated Hosts support to the OpenShift installer. It adds a new hostPlacement schema field to multiple platform configurations, implements dedicated host retrieval and validation logic, integrates host placement into machine set generation, and includes cleanup operations for provisioned hosts.

Changes

  • Data Schema & CRD Definitions (data/data/install.openshift.io_installconfigs.yaml): Introduces a new hostPlacement field with an affinity enum (DedicatedHost, AnyAvailable) and a dedicatedHost array in multiple platform defaultMachinePlatform sections, including validation rules enforcing host requirements based on affinity.
  • AWS Dedicated Hosts Core Implementation (pkg/asset/installconfig/aws/dedicatedhosts.go, pkg/asset/installconfig/aws/metadata.go, pkg/asset/installconfig/aws/permissions.go): Adds dedicated host retrieval via the EC2 API, caches results in Metadata.Hosts, and defines new permission groups (PermissionDedicatedHosts, PermissionDynamicHostAllocation) for EC2 operations.
  • Install Config Validation (pkg/asset/installconfig/aws/validation.go, pkg/asset/installconfig/aws/validation_test.go): Implements validateHostPlacement to enforce dedicated host rules: prevents configuration at the defaultMachinePlatform level, validates host existence and region/zone alignment, and detects duplicates and multiple hosts per zone. Includes comprehensive test coverage for valid/invalid scenarios.
  • Type Definitions & Constants (pkg/types/aws/machinepool.go, pkg/types/machinepools.go): Adds HostPlacement, HostAffinity, and DedicatedHost types to MachinePool with validation rules and kubebuilder annotations. Introduces the MachinePoolDefaultConfig constant.
  • Validation Layer (pkg/types/aws/validation/machinepool.go, pkg/types/aws/validation/machinepool_test.go, pkg/types/aws/validation/platform.go, pkg/types/validation/machinepools.go): Updates the ValidateMachinePool signature to accept a poolName parameter, adds validateHostPlacement enforcement (compute-only restriction, affinity/host consistency), and threads the pool name through the validation call stack.
  • Feature Gate Detection (pkg/types/aws/validation/featuregates.go, pkg/types/aws/validation/featuregates_test.go): Detects AWSDedicatedHosts feature gate activation by inspecting compute pools for host placement configuration, and adds a gated feature entry with the appropriate field path.
  • Machine Asset Generation (pkg/asset/machines/aws/machines.go, pkg/asset/machines/aws/machinesets.go, pkg/asset/machines/worker.go, pkg/asset/machines/worker_test.go): Integrates dedicated host information into the machine provisioning pipeline: adds host selection logic via the DedicatedHost helper, threads the Hosts map through MachineSetInput, retrieves hosts during worker asset generation, and updates test infrastructure (ptr helper usage).
  • AWS Cluster Tagging & Cleanup (pkg/asset/cluster/aws/aws.go, pkg/destroy/aws/ec2helpers.go): Adds a tagSharedDedicatedHosts function to tag BYO dedicated hosts with cluster-specific tags during PreTerraform, and implements the deleteEC2DedicatedHost cleanup flow with existence and state checks.

Sequence Diagram(s)

sequenceDiagram
    participant Installer as Installer Process
    participant InstallConfig as Install Config<br/>Validation
    participant EC2API as EC2 API
    participant Metadata as AWS Metadata<br/>Cache
    participant MachineAsset as Machine Asset<br/>Generator
    
    Installer->>InstallConfig: Validate install config
    InstallConfig->>InstallConfig: Check hostPlacement config
    InstallConfig->>Metadata: DedicatedHosts(ctx)
    Metadata->>EC2API: DescribeHosts
    EC2API-->>Metadata: Host list
    Metadata->>Metadata: Cache result in Hosts map
    Metadata-->>InstallConfig: map[hostID]Host
    InstallConfig->>InstallConfig: Validate host existence,<br/>region/zone alignment,<br/>no duplicates
    
    Installer->>MachineAsset: Generate machine assets
    MachineAsset->>Metadata: Retrieve cached Hosts
    Metadata-->>MachineAsset: map[hostID]Host
    MachineAsset->>MachineAsset: DedicatedHost(hosts, placement, zone)<br/>select host per zone
    MachineAsset->>MachineAsset: Configure machine with<br/>HostTenancy + DedicatedHost
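The per-zone selection step in the diagram (DedicatedHost(hosts, placement, zone)) can be sketched as a pure function. This is an illustrative stand-in, assuming the real helper in pkg/asset/machines/aws/machines.go maps each availability zone to at most one of the requested hosts; the names here are not the PR's exact signatures.

```go
package main

import "fmt"

// Host is an illustrative stand-in for the EC2 host details the installer
// caches in Metadata.Hosts; only the zone matters for selection here.
type Host struct{ Zone string }

// selectDedicatedHost sketches the per-zone selection step: walk the host IDs
// requested in hostPlacement in order and return the first one that lives in
// the target availability zone, or "" when none matches.
func selectDedicatedHost(hosts map[string]Host, requestedIDs []string, zone string) string {
	for _, id := range requestedIDs {
		if h, ok := hosts[id]; ok && h.Zone == zone {
			return id
		}
	}
	return ""
}

func main() {
	hosts := map[string]Host{
		"h-0123456789abcdef0": {Zone: "us-east-1a"},
		"h-0fedcba9876543210": {Zone: "us-east-1b"},
	}
	ids := []string{"h-0123456789abcdef0", "h-0fedcba9876543210"}
	fmt.Println(selectDedicatedHost(hosts, ids, "us-east-1b"))       // prints h-0fedcba9876543210
	fmt.Println(selectDedicatedHost(hosts, ids, "us-east-1c") == "") // prints true
}
```

Note the empty-string result when no host matches a zone; the review comments below flag that callers must not silently fall back to shared tenancy in that case.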

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 3

❌ Failed checks (2 warnings, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 40.63%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Test Structure And Quality — ⚠️ Warning: Test assertions lack meaningful failure messages in machinepool_test.go, violating quality requirements, and test data includes invalid 8-character host IDs marked as valid, contradicting the schema pattern requirement of 19 characters. Resolution: add context messages to all assertions (e.g., assert.NoError(t, err, "ValidateMachinePool should not error for %s", tc.name)) and correct the test data by removing 8-character host IDs or marking them as invalid test cases per the schema pattern ^h-[0-9a-f]{17}$.
  • Title check — ❓ Inconclusive: The title "AWS dedicate host support" refers to a real and significant part of the changeset (dedicated hosts support), but "dedicate" is likely a typo for "dedicated", and "support" only partially conveys the scope: the changeset includes schema definitions, validation logic, host placement configuration across multiple platform types, IAM permissions, cleanup logic, and feature gates. Resolution: consider a more specific title such as "Add AWS dedicated hosts placement support with validation and cleanup".
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Stable And Deterministic Test Names — ✅ Passed: The pull request does not use the Ginkgo test framework; all test files use standard Go testing with testing.T, making the check not applicable.


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 9, 2026

@vr4manta: This pull request references SPLAT-2172 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/asset/machines/aws/machinesets.go (1)

91-109: ⚠️ Potential issue | 🟠 Major

Don’t silently drop host placement for unmatched zones.

If DedicatedHost() cannot find a host for the current AZ, dedicatedHost becomes empty and this code still emits a machineset. For Affinity=DedicatedHost, that creates a regular machineset for that zone instead of honoring the install-config.

Suggested fix
 		dedicatedHost := DedicatedHost(in.Hosts, mpool.HostPlacement, az)
+		if mpool.HostPlacement != nil &&
+			mpool.HostPlacement.Affinity != nil &&
+			*mpool.HostPlacement.Affinity == aws.HostAffinityDedicatedHost &&
+			dedicatedHost == "" {
+			return nil, errors.Errorf("no dedicated host configured for machine pool %q in zone %s", in.Pool.Name, az)
+		}
 
 		provider, err := provider(&machineProviderInput{
 			clusterID:        in.ClusterID,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/asset/machines/aws/machinesets.go` around lines 91 - 109, The code
currently continues to build and emit a machineset even when
DedicatedHost(in.Hosts, mpool.HostPlacement, az) returns an empty dedicatedHost,
which silently drops host placement and creates a regular machineset; update the
logic around calling provider(&machineProviderInput{...}) to check the pool's
affinity (mpool.HostPlacement or the field that indicates
Affinity=DedicatedHost) and if affinity is DedicatedHost and dedicatedHost == ""
then skip creating/emitting the machineset (return or continue) so we don't
produce a non-dedicated machineset for that AZ; reference DedicatedHost(), the
dedicatedHost variable, machineProviderInput and mpool.HostPlacement when making
the change.
♻️ Duplicate comments (2)
pkg/destroy/aws/ec2helpers.go (1)

973-985: ⚠️ Potential issue | 🟠 Major

Return early on unauthorized DescribeHosts.

The UnauthorizedOperation / AccessDenied branch says “skipping” and then still returns an error, so destroy will fail anyway when ec2:DescribeHosts is missing.

Suggested fix
 	if err != nil {
 		errCode := HandleErrorCode(err)
 		switch errCode {
 		case "InvalidHostID.NotFound":
 			// Host doesn't exist, nothing to do
 			return nil
 		case "UnauthorizedOperation", "AccessDenied":
 			// User doesn't have permission to describe dedicated hosts
 			// This is expected when dedicated host permissions are not configured
 			logger.Warn("User does not have permission to describe dedicated hosts, skipping")
+			return nil
 		}
 		return fmt.Errorf("failed to describe dedicated host %s: %w", id, err)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/destroy/aws/ec2helpers.go` around lines 973 - 985, The DescribeHosts
error handling in the block using HandleErrorCode(err) incorrectly logs a
warning for "UnauthorizedOperation" and "AccessDenied" but still returns an
error, causing destroy to fail when ec2:DescribeHosts is missing; update the
switch in the same function so that for cases "UnauthorizedOperation" and
"AccessDenied" you log the warning via logger.Warn("User does not have
permission to describe dedicated hosts, skipping") and then return nil instead
of falling through to the final fmt.Errorf return; keep the existing behavior
for "InvalidHostID.NotFound" (return nil) and for other errors continue to
return fmt.Errorf("failed to describe dedicated host %s: %w", id, err).
pkg/asset/installconfig/aws/metadata.go (1)

379-391: ⚠️ Potential issue | 🟡 Minor

Cache the empty dedicated-host result too.

len(m.Hosts) == 0 conflates “not loaded yet” with “loaded, but there are no hosts”. In an account with zero dedicated hosts, this will call DescribeHosts on every access and defeat the cache.

Suggested fix
-	if len(m.Hosts) == 0 {
+	if m.Hosts == nil {
 		client, err := m.EC2Client(ctx)
 		if err != nil {
 			return nil, err
 		}
 
-		m.Hosts, err = dedicatedHosts(ctx, client)
+		hosts, err := dedicatedHosts(ctx, client)
 		if err != nil {
 			return nil, fmt.Errorf("error listing dedicated hosts: %w", err)
 		}
+		m.Hosts = hosts
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/asset/installconfig/aws/metadata.go` around lines 379 - 391, The cache
check uses len(m.Hosts) == 0 which treats “not loaded” and “loaded-but-empty”
the same and causes repeated DescribeHosts calls; change the guard to check for
nil (if m.Hosts == nil) so an empty slice returned by dedicatedHosts is cached,
and ensure dedicatedHosts or the assignment leaves m.Hosts as a non-nil empty
slice when no hosts are found (refer to m.Hosts, EC2Client, and dedicatedHosts).
🧹 Nitpick comments (1)
pkg/types/aws/validation/machinepool.go (1)

111-121: Missing validation for duplicate host IDs.

Per past review discussion, duplicate host IDs in the DedicatedHost list should be flagged. Currently, duplicates are silently accepted but would cause logical issues downstream (e.g., the map in DedicatedHost() helper would deduplicate them).

♻️ Proposed fix to add uniqueness validation
 	case aws.HostAffinityDedicatedHost:
 		if len(p.HostPlacement.DedicatedHost) == 0 {
 			allErrs = append(allErrs, field.Required(fldPath.Child("dedicatedHost"), "dedicatedHost is required when 'affinity' is set to DedicatedHost, and forbidden otherwise"))
 		} else {
+			seenIDs := make(map[string]int)
 			for index, host := range p.HostPlacement.DedicatedHost {
 				hostPath := fldPath.Child("dedicatedHost").Index(index)
 				if len(host.ID) == 0 {
 					allErrs = append(allErrs, field.Required(hostPath.Child("id"), "a hostID must be specified when configuring 'dedicatedHost'"))
+				} else if _, seen := seenIDs[host.ID]; seen {
+					allErrs = append(allErrs, field.Duplicate(hostPath.Child("id"), host.ID))
+				} else {
+					seenIDs[host.ID] = index
 				}
 			}
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/types/aws/validation/machinepool.go` around lines 111 - 121, The
DedicatedHost branch in Validate (case aws.HostAffinityDedicatedHost) lacks a
check for duplicate host IDs in p.HostPlacement.DedicatedHost; add uniqueness
validation by iterating the slice (the same loop that checks host.ID) and track
seen IDs (e.g., with a map[string]bool) to append a field.Duplicate or
field.Invalid error to fldPath.Child("dedicatedHost").Index(index).Child("id")
when an ID repeats, ensuring behavior matches the DedicatedHost() helper which
deduplicates IDs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@data/data/install.openshift.io_installconfigs.yaml`:
- Around line 202-251: The schema allows affinity: DedicatedHost with an empty
dedicatedHost array because has(self.dedicatedHost) only checks presence; update
the dedicatedHost array definition (the dedicatedHost property under
hostPlacement with type: array and item schema containing id) to include
minItems: 1 so empty lists are rejected, and ensure the existing
x-kubernetes-validations rule that references self.dedicatedHost remains
unchanged; apply the same minItems: 1 addition to the other duplicated
dedicatedHost array blocks referenced in the comment (the other two
occurrences).

In `@pkg/asset/cluster/aws/aws.go`:
- Around line 253-269: The collector currently only reads compute.Platform.AWS
from installConfig.Config.Compute for the "worker" pool and misses values
inherited from installConfig.Config.AWS.DefaultMachinePlatform; update the logic
that builds dhNames to first start from
installConfig.Config.AWS.DefaultMachinePlatform (create an awstypes.MachinePool
and call mpool.Set on the default), then overlay the worker compute (if present)
by calling mpool.Set(compute.Platform.AWS) as done in
tagSharedIAMRoles/tagSharedIAMProfiles, handle the compute == nil fallback, and
then read mpool.HostPlacement.DedicatedHost to append name.ID into dhNames so
inherited dedicated hosts are included.
- Around line 284-288: The error wrapper for the ec2Client.CreateTags call is
misleading (mentions subnets); update the wrapped error message in the
CreateTags error return to correctly reference dedicated hosts (e.g., change
"could not add tags to subnets" to "could not add tags to dedicated hosts" or
similar) so callers of ec2Client.CreateTags (using dhNames, tagKey, tagValue)
see the correct resource in the error text.

In `@pkg/asset/installconfig/aws/dedicatedhosts.go`:
- Line 36: Fix the typo in the debug log call by updating the log message string
used in the logrus.Debugf invocation (currently logrus.Debugf("Found dedicatd
host: %s", id)) to the correct wording, e.g. "Found dedicated host: %s",
preserving the formatting and the id argument.

In `@pkg/asset/installconfig/aws/permissions.go`:
- Around line 442-445: RequiredPermissionGroups currently includes
PermissionDedicatedHosts but not PermissionDynamicHostAllocation, so delete-time
permission assembly omits the ec2:ReleaseHosts action; update the code that
builds the destroy/delete permission set (the logic that assembles
RequiredPermissionGroups for destroy) to also include
PermissionDynamicHostAllocation alongside PermissionDedicatedHosts so
ec2:ReleaseHosts is present at destroy time; make the same change where the
delete-time groups are assembled later in the file (the second occurrence around
the 581-583 area). Ensure you reference PermissionDynamicHostAllocation,
PermissionDedicatedHosts and RequiredPermissionGroups when making the change.
- Around line 795-804: The includesDedicatdHosts helper currently only checks
pools named exactly types.MachinePoolComputeRoleName and misses other compute
pools with AWS.HostPlacement; change the loop condition to scan all compute
pools except edge pools (i.e., skip only when mpool.Name ==
types.MachinePoolEdgeRoleName) and return true if any mpool.Platform.AWS != nil
&& mpool.Platform.AWS.HostPlacement != nil; update includesDedicatdHosts
accordingly so DescribeHosts/ReleaseHosts run for any non-edge compute pool with
host placement.

In `@pkg/asset/installconfig/aws/validation.go`:
- Around line 556-560: The validation currently allows a dedicated-host pool
when pool.Zones is empty, which makes zone-to-host mapping non-deterministic;
add an explicit check in the same validation block to reject dedicated-host
pools that don't specify Zones: if len(pool.Zones) == 0 (and this is a
dedicated-host pool context), append a field.Invalid error on dhPath (use host
as the bad value) with a message like "machine pool with dedicated hosts must
specify explicit zones" so we fail fast instead of accepting any zone and later
producing MachineSets that lose dedicated-host placement.

In `@pkg/asset/machines/aws/machines.go`:
- Around line 371-382: The DedicatedHost function currently returns an empty
string when a placement is provided but no host matches the zone, which silently
omits Placement.Host in provider(); change DedicatedHost to return (string,
error), return a non-nil error when placement != nil and no host.ID matches the
given zone (include context like requested host IDs and zone), and update the
caller provider() to propagate or handle that error so the invalid host/zone
mapping is surfaced (e.g., fail validation or emit a clear error) rather than
falling back to shared placement.

In `@pkg/asset/machines/worker.go`:
- Around line 538-543: The code currently calls
installConfig.AWS.DedicatedHosts(ctx) whenever mpool.HostPlacement != nil, which
also runs for valid AnyAvailable placements; change the guard to only call
DedicatedHosts when the placement explicitly indicates DedicatedHost affinity
(i.e., check mpool.HostPlacement == DedicatedHost or the equivalent
enum/constant used in your codebase), so only the DedicatedHost case triggers
installConfig.AWS.DedicatedHosts(ctx) and other placements (e.g., AnyAvailable)
skip the host lookup.

In `@pkg/types/aws/validation/machinepool_test.go`:
- Around line 299-321: Test cases assert short 8-character host IDs as valid but
the validator in pkg/types/aws/machinepool.go expects DedicatedHost.ID to match
^h-[0-9a-f]{17}$; update the failing test entries in
pkg/types/aws/validation/machinepool_test.go that construct aws.MachinePool ->
aws.HostPlacement -> DedicatedHost (IDs "h-1a2b3c4d" and "h-9876abcd") to use
valid 17-hex-character IDs (e.g., "h-0123456789abcdef0") or mark them as invalid
so they align with the DedicatedHost.ID validation. Ensure you adjust the test
name/expectation (valid vs invalid) for those cases accordingly.
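The ID pattern in question is easy to check directly; this small sketch reproduces the CRD rule `^h-[0-9a-f]{17}$` and shows why the 8-character test IDs fail it:

```go
package main

import (
	"fmt"
	"regexp"
)

// hostIDRe reproduces the CRD validation rule for dedicated-host IDs:
// "h-" followed by exactly 17 lowercase hex characters.
var hostIDRe = regexp.MustCompile(`^h-[0-9a-f]{17}$`)

func main() {
	fmt.Println(hostIDRe.MatchString("h-1a2b3c4d"))          // short ID from the failing tests
	fmt.Println(hostIDRe.MatchString("h-0123456789abcdef0")) // 17 hex chars
}
```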

In `@pkg/types/aws/validation/machinepool.go`:
- Around line 85-99: The error path is being double-nested because fldPath is
already hostPlacement; remove the extra fldPath.Child("hostPlacement") calls and
use fldPath directly when building the Invalid field errors. Update both checks
that append to allErrs (the block comparing poolName to
types.MachinePoolComputeRoleName and the block for control plane pools) so they
pass fldPath (not fldPath.Child(...)) to field.Invalid along with
p.HostPlacement and the existing errMsg; keep the rest of the logic and messages
unchanged.

---

Outside diff comments:
In `@pkg/asset/machines/aws/machinesets.go`:
- Around line 91-109: The code currently continues to build and emit a
machineset even when DedicatedHost(in.Hosts, mpool.HostPlacement, az) returns an
empty dedicatedHost, which silently drops host placement and creates a regular
machineset; update the logic around calling provider(&machineProviderInput{...})
to check the pool's affinity (mpool.HostPlacement or the field that indicates
Affinity=DedicatedHost) and if affinity is DedicatedHost and dedicatedHost == ""
then skip creating/emitting the machineset (return or continue) so we don't
produce a non-dedicated machineset for that AZ; reference DedicatedHost(), the
dedicatedHost variable, machineProviderInput and mpool.HostPlacement when making
the change.
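The skip condition described above reduces to a small predicate. The enum name below is an assumption based on the review text, not the PR's actual identifier:

```go
package main

import "fmt"

// HostAffinity mirrors the install-config enum; the name is an assumption.
type HostAffinity string

const HostAffinityDedicatedHost HostAffinity = "DedicatedHost"

// shouldEmitMachineSet models the fix: when the pool demands a dedicated
// host but no host matched the AZ, skip the machineset instead of quietly
// emitting one with shared tenancy.
func shouldEmitMachineSet(affinity HostAffinity, dedicatedHost string) bool {
	return affinity != HostAffinityDedicatedHost || dedicatedHost != ""
}

func main() {
	fmt.Println(shouldEmitMachineSet(HostAffinityDedicatedHost, ""))                    // skip
	fmt.Println(shouldEmitMachineSet(HostAffinityDedicatedHost, "h-0123456789abcdef0")) // emit
	fmt.Println(shouldEmitMachineSet("AnyAvailable", ""))                               // emit
}
```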

---

Duplicate comments:
In `@pkg/asset/installconfig/aws/metadata.go`:
- Around line 379-391: The cache check uses len(m.Hosts) == 0 which treats “not
loaded” and “loaded-but-empty” the same and causes repeated DescribeHosts calls;
change the guard to check for nil (if m.Hosts == nil) so an empty slice returned
by dedicatedHosts is cached, and ensure dedicatedHosts or the assignment leaves
m.Hosts as a non-nil empty slice when no hosts are found (refer to m.Hosts,
EC2Client, and dedicatedHosts).
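The nil-versus-empty distinction can be demonstrated with a toy cache; the type and method names here are illustrative, not the installer's:

```go
package main

import "fmt"

// hostCache is a toy model of Metadata.Hosts: a nil slice means "never
// fetched", while a non-nil empty slice means "fetched, none found".
type hostCache struct {
	hosts   []string
	fetches int
}

// describeHosts simulates the EC2 DescribeHosts call returning no hosts.
func (c *hostCache) describeHosts() []string {
	c.fetches++
	return []string{} // non-nil empty slice, so the result is cacheable
}

func (c *hostCache) dedicatedHosts() []string {
	if c.hosts == nil { // nil check, not len()==0, so empty results are cached
		c.hosts = c.describeHosts()
	}
	return c.hosts
}

func main() {
	c := &hostCache{}
	c.dedicatedHosts()
	c.dedicatedHosts()
	fmt.Println(c.fetches) // prints 1: the empty result was cached
}
```

A `len(c.hosts) == 0` guard in the same position would have incremented `fetches` on every call.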

In `@pkg/destroy/aws/ec2helpers.go`:
- Around line 973-985: The DescribeHosts error handling in the block using
HandleErrorCode(err) incorrectly logs a warning for "UnauthorizedOperation" and
"AccessDenied" but still returns an error, causing destroy to fail when
ec2:DescribeHosts is missing; update the switch in the same function so that for
cases "UnauthorizedOperation" and "AccessDenied" you log the warning via
logger.Warn("User does not have permission to describe dedicated hosts,
skipping") and then return nil instead of falling through to the final
fmt.Errorf return; keep the existing behavior for "InvalidHostID.NotFound"
(return nil) and for other errors continue to return fmt.Errorf("failed to
describe dedicated host %s: %w", id, err).
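The suggested switch can be sketched on its own. This is a simplified model: the real code dispatches on `HandleErrorCode(err)` and uses the logger, while this sketch takes the code string directly; the error-code strings match those named in the review.

```go
package main

import "fmt"

// handleDescribeHostsError sketches the suggested behavior: permission
// errors are logged and skipped, a missing host is treated as already gone,
// and any other code still fails the destroy.
func handleDescribeHostsError(code, id string) error {
	switch code {
	case "InvalidHostID.NotFound":
		return nil // host already gone; nothing to release
	case "UnauthorizedOperation", "AccessDenied":
		fmt.Println("User does not have permission to describe dedicated hosts, skipping")
		return nil // warn and skip instead of failing the destroy
	default:
		return fmt.Errorf("failed to describe dedicated host %s: %s", id, code)
	}
}

func main() {
	fmt.Println(handleDescribeHostsError("AccessDenied", "h-0123456789abcdef0"))
	fmt.Println(handleDescribeHostsError("Throttling", "h-0123456789abcdef0"))
}
```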

---

Nitpick comments:
In `@pkg/types/aws/validation/machinepool.go`:
- Around line 111-121: The DedicatedHost branch in Validate (case
aws.HostAffinityDedicatedHost) lacks a check for duplicate host IDs in
p.HostPlacement.DedicatedHost; add uniqueness validation by iterating the slice
(the same loop that checks host.ID) and track seen IDs (e.g., with a
map[string]bool) to append a field.Duplicate or field.Invalid error to
fldPath.Child("dedicatedHost").Index(index).Child("id") when an ID repeats,
ensuring behavior matches the DedicatedHost() helper which deduplicates IDs.
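The seen-map loop described above, reduced to its core; a validator would emit a `field.Duplicate` error at each returned index:

```go
package main

import "fmt"

// duplicateIndexes returns the indexes at which an ID repeats, which is the
// shape a validator needs to report one error per offending list entry.
func duplicateIndexes(ids []string) []int {
	seen := map[string]bool{}
	var dups []int
	for i, id := range ids {
		if seen[id] {
			dups = append(dups, i)
		}
		seen[id] = true
	}
	return dups
}

func main() {
	ids := []string{"h-0123456789abcdef0", "h-0123456789abcdef1", "h-0123456789abcdef0"}
	fmt.Println(duplicateIndexes(ids)) // prints [2]: index 2 repeats index 0
}
```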

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 7ae35e0e-d65b-46e5-883d-66408e6032fb

📥 Commits

Reviewing files that changed from the base of the PR and between 4006847 and d687fa9.

⛔ Files ignored due to path filters (1)
  • pkg/types/aws/zz_generated.deepcopy.go is excluded by !**/zz_generated*
📒 Files selected for processing (20)
  • data/data/install.openshift.io_installconfigs.yaml
  • pkg/asset/cluster/aws/aws.go
  • pkg/asset/installconfig/aws/dedicatedhosts.go
  • pkg/asset/installconfig/aws/metadata.go
  • pkg/asset/installconfig/aws/permissions.go
  • pkg/asset/installconfig/aws/validation.go
  • pkg/asset/installconfig/aws/validation_test.go
  • pkg/asset/machines/aws/machines.go
  • pkg/asset/machines/aws/machinesets.go
  • pkg/asset/machines/worker.go
  • pkg/asset/machines/worker_test.go
  • pkg/destroy/aws/ec2helpers.go
  • pkg/types/aws/machinepool.go
  • pkg/types/aws/validation/featuregates.go
  • pkg/types/aws/validation/featuregates_test.go
  • pkg/types/aws/validation/machinepool.go
  • pkg/types/aws/validation/machinepool_test.go
  • pkg/types/aws/validation/platform.go
  • pkg/types/machinepools.go
  • pkg/types/validation/machinepools.go

Comment on lines +202 to +251
hostPlacement:
  description: |-
    hostPlacement configures placement on AWS Dedicated Hosts. This allows admins to assign instances to specific host
    for a variety of needs including for regulatory compliance, to leverage existing per-socket or per-core software licenses (BYOL),
    and to gain visibility and control over instance placement on a physical server.
    When omitted, the instance is not constrained to a dedicated host.
  properties:
    affinity:
      description: |-
        affinity specifies the affinity setting for the instance.
        Allowed values are AnyAvailable and DedicatedHost.
        When Affinity is set to DedicatedHost, an instance started onto a specific host always restarts on the same host if stopped. In this scenario, the `dedicatedHost` field must be set.
        When Affinity is set to AnyAvailable, and you stop and restart the instance, it can be restarted on any available host.
      enum:
      - DedicatedHost
      - AnyAvailable
      type: string
    dedicatedHost:
      description: |-
        dedicatedHost specifies the exact host that an instance should be restarted on if stopped.
        dedicatedHost is required when 'affinity' is set to DedicatedHost, and forbidden otherwise.
      items:
        description: DedicatedHost represents the configuration for the usage of dedicated host.
        properties:
          id:
            description: |-
              id identifies the AWS Dedicated Host on which the instance must run.
              The value must start with "h-" followed by 17 lowercase hexadecimal characters (0-9 and a-f).
              Must be exactly 19 characters in length.
            maxLength: 19
            minLength: 19
            type: string
            x-kubernetes-validations:
            - message: hostID must start with 'h-' followed by 17 lowercase hexadecimal characters (0-9 and a-f)
              rule: self.matches('^h-[0-9a-f]{17}$')
        required:
        - id
        type: object
      type: array
  required:
  - affinity
  type: object
  x-kubernetes-validations:
  - message: dedicatedHost is required when affinity is DedicatedHost, and forbidden otherwise
    rule: 'has(self.affinity) && self.affinity == ''DedicatedHost'' ? has(self.dedicatedHost) : !has(self.dedicatedHost)'

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Kubernetes CRD x-kubernetes-validations, does has(self.someList) evaluate to true when the list field is present but empty ([]), and is minItems: 1 the recommended schema-level way to require at least one list element?

💡 Result:

  • Yes. In CRD CEL, has(self.someList) is a presence test (for JSON-like objects: “is this field/key defined?”), not a “non-empty” test—so if someList is present in the object as [], has(self.someList) evaluates to true. This aligns with CEL’s intent of unifying presence checks across protobuf and JSON-like types. [1][2]

    • If you need “present and non-empty” in CEL, use something like has(self.someList) && self.someList.size() > 0.
  • Yes. To require at least one list element at the schema level, minItems: 1 is the standard / recommended OpenAPI-schema validation for arrays (preferred over CEL when an OpenAPI validation exists). [3][4]

Sources: Kubernetes CEL docs [2], cel-go README (has macro semantics) [1], Kubernetes CRD validation best practices (prefer OpenAPI validations) [3], controller-tools / OpenAPI marker for MinItems=1 [4].


Enforce non-empty dedicatedHost array when affinity: DedicatedHost.

The has(self.dedicatedHost) check only validates field presence, so a config with affinity: DedicatedHost and dedicatedHost: [] passes validation despite being unusable. Add minItems: 1 to the array definition to reject empty lists at the schema boundary.

Suggested fix
                          dedicatedHost:
+                           minItems: 1
                            description: |-

Apply the same change at lines 1812–1861, 3362–3411, and 5112–5161.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@data/data/install.openshift.io_installconfigs.yaml` around lines 202 - 251,
The schema allows affinity: DedicatedHost with an empty dedicatedHost array
because has(self.dedicatedHost) only checks presence; update the dedicatedHost
array definition (the dedicatedHost property under hostPlacement with type:
array and item schema containing id) to include minItems: 1 so empty lists are
rejected, and ensure the existing x-kubernetes-validations rule that references
self.dedicatedHost remains unchanged; apply the same minItems: 1 addition to the
other duplicated dedicatedHost array blocks referenced in the comment (the other
two occurrences).

Comment on lines +442 to +445
PermissionDynamicHostAllocation: {
// This is only used during cluster destroy if during cluster destroy we detect a dedicated host with appropriate tags on it.
"ec2:ReleaseHosts",
},

⚠️ Potential issue | 🟠 Major

Also append the destroy-time host release permission.

RequiredPermissionGroups opts into PermissionDedicatedHosts, but never PermissionDynamicHostAllocation. Since this function already assembles the delete-time permission set too, creds validation will miss ec2:ReleaseHosts, so dedicated-host clusters can validate successfully and still fail to clean up dynamically allocated hosts on destroy.

Suggested fix
 	if includesDedicatdHosts(ic) {
 		permissionGroups = append(permissionGroups, PermissionDedicatedHosts)
+		if !isSecretRegion {
+			permissionGroups = append(permissionGroups, PermissionDynamicHostAllocation)
+		}
 	}

Also applies to: 581-583

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/asset/installconfig/aws/permissions.go` around lines 442 - 445,
RequiredPermissionGroups currently includes PermissionDedicatedHosts but not
PermissionDynamicHostAllocation, so delete-time permission assembly omits the
ec2:ReleaseHosts action; update the code that builds the destroy/delete
permission set (the logic that assembles RequiredPermissionGroups for destroy)
to also include PermissionDynamicHostAllocation alongside
PermissionDedicatedHosts so ec2:ReleaseHosts is present at destroy time; make
the same change where the delete-time groups are assembled later in the file
(the second occurrence around the 581-583 area). Ensure you reference
PermissionDynamicHostAllocation, PermissionDedicatedHosts and
RequiredPermissionGroups when making the change.

Comment on lines +371 to +382
func DedicatedHost(hosts map[string]aws.Host, placement *awstypes.HostPlacement, zone string) string {
	// If the install-config has HostPlacement configured, check the DedicatedHosts for one that matches our region and zone.
	if placement != nil {
		// We currently support only one host ID per instance; the host must also match the zone the machines will be placed in.
		for _, host := range placement.DedicatedHost {
			hostDetails, found := hosts[host.ID]
			if found && hostDetails.Zone == zone {
				return hostDetails.ID
			}
		}
	}
	return ""
}

⚠️ Potential issue | 🟠 Major

Don’t silently fall back to shared placement when no host matches the zone.

Returning "" here causes provider() on Lines 309-318 to omit Placement.Host completely. For a pool with Affinity == DedicatedHost, that emits a normal instance config instead of surfacing an invalid host/zone mapping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/asset/machines/aws/machines.go` around lines 371 - 382, The DedicatedHost
function currently returns an empty string when a placement is provided but no
host matches the zone, which silently omits Placement.Host in provider(); change
DedicatedHost to return (string, error), return a non-nil error when placement
!= nil and no host.ID matches the given zone (include context like requested
host IDs and zone), and update the caller provider() to propagate or handle that
error so the invalid host/zone mapping is surfaced (e.g., fail validation or
emit a clear error) rather than falling back to shared placement.

@openshift-ci
Contributor

openshift-ci bot commented Mar 10, 2026

@vr4manta: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/e2e-gcp-ovn-xpn 9364945 link false /test e2e-gcp-ovn-xpn
ci/prow/e2e-metal-ovn-two-node-fencing 9364945 link false /test e2e-metal-ovn-two-node-fencing
ci/prow/gcp-private 9364945 link false /test gcp-private
ci/prow/e2e-gcp-secureboot 9364945 link false /test e2e-gcp-secureboot
ci/prow/gcp-custom-endpoints-proxy-wif 9364945 link false /test gcp-custom-endpoints-proxy-wif
ci/prow/e2e-metal-assisted 9364945 link false /test e2e-metal-assisted
ci/prow/e2e-metal-ipi-ovn-virtualmedia 9364945 link false /test e2e-metal-ipi-ovn-virtualmedia
ci/prow/e2e-metal-ipi-ovn 9364945 link false /test e2e-metal-ipi-ovn
ci/prow/e2e-gcp-ovn 9364945 link true /test e2e-gcp-ovn
ci/prow/e2e-metal-single-node-live-iso 9364945 link false /test e2e-metal-single-node-live-iso
ci/prow/e2e-gcp-custom-dns 9364945 link false /test e2e-gcp-custom-dns
ci/prow/e2e-azurestack 9364945 link false /test e2e-azurestack
ci/prow/e2e-gcp-default-config 9364945 link false /test e2e-gcp-default-config
ci/prow/e2e-gcp-xpn-dedicated-dns-project 9364945 link false /test e2e-gcp-xpn-dedicated-dns-project
ci/prow/e2e-openstack-ovn 9364945 link true /test e2e-openstack-ovn
ci/prow/e2e-gcp-custom-endpoints 9364945 link false /test e2e-gcp-custom-endpoints
ci/prow/e2e-aws-ovn-dualstack-ipv6-primary-techpreview 37b76cb link false /test e2e-aws-ovn-dualstack-ipv6-primary-techpreview
ci/prow/e2e-aws-ovn-dualstack-ipv4-primary-techpreview 37b76cb link false /test e2e-aws-ovn-dualstack-ipv4-primary-techpreview
ci/prow/unit d687fa9 link true /test unit
ci/prow/e2e-aws-byo-subnet-role-security-groups d687fa9 link false /test e2e-aws-byo-subnet-role-security-groups
ci/prow/e2e-aws-default-config d687fa9 link false /test e2e-aws-default-config
ci/prow/e2e-aws-ovn-edge-zones d687fa9 link false /test e2e-aws-ovn-edge-zones
ci/prow/e2e-aws-ovn d687fa9 link true /test e2e-aws-ovn
ci/prow/e2e-aws-ovn-heterogeneous d687fa9 link false /test e2e-aws-ovn-heterogeneous
ci/prow/e2e-aws-ovn-single-node d687fa9 link false /test e2e-aws-ovn-single-node
ci/prow/e2e-aws-ovn-shared-vpc-edge-zones d687fa9 link false /test e2e-aws-ovn-shared-vpc-edge-zones

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
