CORS-3900: Add IPv6 frontend IP configurations for dual-stack #10328
rna-afk wants to merge 1 commit into openshift:main from
Conversation
@rna-afk: This pull request references CORS-3900 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.
Details — In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
```go
if virtualNetwork != nil {
	virtualNetworkID = *virtualNetwork.ID
}
lbip, err := getNextAvailableIPForLoadBalancer(ctx, installConfig, lbip)
```
At this point for dual-stack, apiServerLB.FrontendIPs would contain both the IPv4 lbip and the IPv6 address, but the lbip passed in here contains only the IPv4 address. Can we update getNextAvailableIPForLoadBalancer() as part of this PR to handle IPv6 subnets in the field platform.Azure.Subnets?
Wanted to try dynamic allocation so we don't hit the reserved-IP problem. I was hoping this would work so I can switch the IPv4 address to dynamic allocation too. Need to test with John's work to see whether it really works.
Looks like John added it in his PR #10319
```diff
 }

-func createAPILoadBalancer(ctx context.Context, pip *armnetwork.PublicIPAddress, in *lbInput) (*armnetwork.LoadBalancer, error) {
+func createAPILoadBalancer(ctx context.Context, pip, pipv6 *armnetwork.PublicIPAddress, in *lbInput) (*armnetwork.LoadBalancer, error) {
```
pipv6 is not used in this method. Are updates to this method still being worked on?
This is needed in the next backend-pool PR. I was getting errors, so I added it here too.
55bb8c4 → 71f85d1
Adding IPv6 public IP creation and frontend IP configurations on external load balancers and adding IPv6 frontend IP to CAPZ apiServerLB manifest.
4f9fa53 → 7b6dffa
Walkthrough
Adds IPv6/dual-stack networking support to Azure infrastructure provisioning by introducing IPv6 public IP creation, dual-stack load balancer frontend configurations, and IPv6 NAT rule association logic, guarded behind dual-stack enablement checks throughout the provider and network layers.
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant Provider as Provider
    participant InfraReady as InfraReady Flow
    participant PublicIP as Public IP Creation
    participant LBOps as Load Balancer Operations
    participant NAT as NAT Rule Association
    Provider->>InfraReady: Check DualStackEnabled()
    alt Dual-Stack Enabled
        InfraReady->>PublicIP: Create IPv6 Public IP
        PublicIP-->>Provider: Return IPv6 address
        Provider->>LBOps: Pass IPv6 IP reference
        LBOps->>LBOps: Add IPv6 frontend config
        LBOps->>LBOps: Create IPv6 backend pool
        LBOps->>LBOps: Add IPv6 load balancing rules
        LBOps-->>Provider: Updated LB with dual-stack
        Provider->>NAT: Update internal LB post-reconciliation
        NAT->>NAT: addIPv6InternalLBFrontend
        NAT-->>Provider: IPv6 frontend appended
        Provider->>NAT: Associate IPv6 NAT rules
        NAT->>NAT: Detect -v6 in frontendIPConfigID
        NAT->>NAT: Match to IPv6 IP config
        NAT-->>Provider: NAT rules attached
    else IPv4-Only
        InfraReady->>LBOps: Use existing IPv4 logic
        LBOps-->>Provider: IPv4-only LB
    end
```
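The DualStackEnabled() branch shown in the diagram reduces to a simple gating pattern. A minimal sketch (IPFamily and the frontend names here are illustrative stand-ins, not the installer's actual types):

```go
package main

import "fmt"

// IPFamily is an illustrative stand-in for the installer's configured IP family.
type IPFamily string

const (
	IPv4Only  IPFamily = "IPv4"
	DualStack IPFamily = "DualStack"
)

// DualStackEnabled mirrors the guard used throughout the provider layer.
func (f IPFamily) DualStackEnabled() bool { return f == DualStack }

// frontendNames returns the load balancer frontend configs to provision:
// always the IPv4 frontend, plus a "-v6" sibling when dual-stack is on.
func frontendNames(f IPFamily) []string {
	names := []string{"public-lb-ip-v4"}
	if f.DualStackEnabled() {
		names = append(names, "public-lb-ip-v6")
	}
	return names
}

func main() {
	fmt.Println(frontendNames(IPv4Only))  // [public-lb-ip-v4]
	fmt.Println(frontendNames(DualStack)) // [public-lb-ip-v4 public-lb-ip-v6]
}
```

Every IPv6 resource in the PR (public IP, frontend config, NAT rule) sits behind such a check, so IPv4-only clusters are unaffected.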
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
🔧 golangci-lint (2.5.0): Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/infrastructure/azure/azure.go`:
- Around line 622-645: The code creates a public IPv6 and updates the outbound
IPv6 LB but never provisions the IPv6 SSH inbound NAT rule; after
updateOutboundIPv6LoadBalancer(...) use addInboundNatRuleToLoadBalancer(...) to
create the SSH inbound NAT rule on outboundLBName using publicIPv6outbound (pass
the load balancer client from p.NetworkClientFactory.NewLoadBalancersClient(),
the resource group and infraID), then call
associateInboundNatRuleToInterface(...) (using
p.NetworkClientFactory.NewInterfacesClient(), the created NAT rule name,
resource group and the relevant interface(s)/VMSS NICs for bootstrap) so the
IPv6 SSH port is actually forwarded (refer to addInboundNatRuleToLoadBalancer
and associateInboundNatRuleToInterface for exact parameter ordering and naming).
In `@pkg/infrastructure/azure/network.go`:
- Around line 144-179: The dual-stack path adds an IPv6 frontend but never
creates a matching IPv6 load-balancing rule, so the API remains IPv4-only;
update both createAPILoadBalancer and
updateOutboundLoadBalancerToAPILoadBalancer to add a second
armnetwork.LoadBalancingRule (e.g., name "api-v6") mirroring the "api-v4" rule
but referencing the IPv6 frontend (frontendIPv6Name), using the same
ports/probe/backend pool and appropriate IPv6 settings; ensure the new rule is
appended to LoadBalancingRules alongside the existing "api-v4" entry so the IPv6
public IP forwards 6443 traffic too.
- Around line 533-557: The loop that attaches inboundNatRule to
bootstrapInterface.Properties.IPConfigurations can fall through without
attaching (due to the IPv4/IPv6 filter) but still return success; add a boolean
flag (e.g., attached) initialized false before the loop, set it true when you
append inboundNatRule to an IP config, and after the loop check the flag and
return an error if attached is false so the caller knows the NAT rule was not
attached to the bootstrap NIC (use the existing identifiers bootstrapInterface,
inboundNatRule, and in.frontendIPConfigID to locate the logic).
- Around line 373-397: The current logic always appends a new IPv6 frontend
config named frontendIPv6Name causing duplicates; instead, scan
existingFrontEndIPConfig (or intLB.Properties.FrontendIPConfigurations) for an
entry with Name == frontendIPv6Name and if found, update that entry's Properties
(set PrivateIPAllocationMethod, PrivateIPAddressVersion and Subnet.ID if
subnetID != "") to ensure it has the desired IPv6 settings; only if no matching
Name is found, append the new frontendIPv6. Use the existing variables
frontendIPv6Name, frontendIPv6 (or create a properties struct),
existingFrontEndIPConfig and intLB.Properties.FrontendIPConfigurations to locate
and mutate or append the config so reconciliation is idempotent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 6840915f-3259-48b4-b67f-5b83003c702e
📒 Files selected for processing (3)
pkg/asset/manifests/azure/cluster.go
pkg/infrastructure/azure/azure.go
pkg/infrastructure/azure/network.go
```go
// For dual-stack, create IPv6 inbound NAT rule for SSH access to bootstrap.
if in.InstallConfig.Config.Azure.IPFamily.DualStackEnabled() {
	publicIPv6outbound, err := createPublicIP(ctx, &pipInput{
		name:          fmt.Sprintf("%s-pip-v6-outbound-lb", in.InfraID),
		infraID:       in.InfraID,
		region:        in.InstallConfig.Config.Azure.Region,
		resourceGroup: p.ResourceGroupName,
		pipClient:     p.NetworkClientFactory.NewPublicIPAddressesClient(),
		tags:          p.Tags,
		ipversion:     armnetwork.IPVersionIPv6,
	})
	if err != nil {
		return fmt.Errorf("failed to create public ipv6 for outbound ipv6 lb: %w", err)
	}
	logrus.Debugf("created public ipv6 for outbound ipv6 lb: %s", *publicIPv6outbound.ID)

	// Update the outbound node IPv6 load balancer.
	outboundLBName := fmt.Sprintf("%s-ipv6-outbound-node-lb", in.InfraID)
	err = updateOutboundIPv6LoadBalancer(ctx, publicIPv6outbound, p.NetworkClientFactory.NewLoadBalancersClient(), p.ResourceGroupName, outboundLBName, in.InfraID)
	if err != nil {
		return fmt.Errorf("failed to set public ipv6 to outbound ipv6 lb: %w", err)
	}
	logrus.Debugf("updated outbound ipv6 lb %s with public ipv6: %s", outboundLBName, *publicIPv6outbound.ID)
}
```
This block never creates the IPv6 SSH forward it describes.
It allocates a public IPv6 and patches the outbound node LB, but it never calls addInboundNatRuleToLoadBalancer or associateInboundNatRuleToInterface. Dual-stack still only gets the IPv4 *_ssh_in path.
💡 Suggested change
// For dual-stack, create IPv6 inbound NAT rule for SSH access to bootstrap.
if in.InstallConfig.Config.Azure.IPFamily.DualStackEnabled() {
+ frontendIPv6ConfigID := fmt.Sprintf("/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Network/loadBalancers/%s/frontendIPConfigurations/%s",
+ subscriptionID,
+ p.ResourceGroupName,
+ loadBalancerName,
+ "public-lb-ip-v6",
+ )
+ sshRuleNameV6 := fmt.Sprintf("%s_ssh_in_v6", in.InfraID)
+ inboundNatRuleV6, err := addInboundNatRuleToLoadBalancer(ctx, &inboundNatRuleInput{
+ resourceGroupName: p.ResourceGroupName,
+ loadBalancerName: loadBalancerName,
+ frontendIPConfigID: frontendIPv6ConfigID,
+ inboundNatRuleName: sshRuleNameV6,
+ inboundNatRulePort: 22,
+ networkClientFactory: p.NetworkClientFactory,
+ })
+ if err != nil {
+ return fmt.Errorf("failed to create inbound ipv6 nat rule: %w", err)
+ }
+ _, err = associateInboundNatRuleToInterface(ctx, &inboundNatRuleInput{
+ resourceGroupName: p.ResourceGroupName,
+ loadBalancerName: loadBalancerName,
+ bootstrapNicName: fmt.Sprintf("%s-bootstrap-nic", in.InfraID),
+ frontendIPConfigID: frontendIPv6ConfigID,
+ inboundNatRuleID: *inboundNatRuleV6.ID,
+ inboundNatRuleName: sshRuleNameV6,
+ inboundNatRulePort: 22,
+ networkClientFactory: p.NetworkClientFactory,
+ })
+ if err != nil {
+ return fmt.Errorf("failed to associate inbound ipv6 nat rule to interface: %w", err)
+ }
+
publicIPv6outbound, err := createPublicIP(ctx, &pipInput{
name: fmt.Sprintf("%s-pip-v6-outbound-lb", in.InfraID),
		infraID:       in.InfraID,

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/infrastructure/azure/azure.go` around lines 622 - 645, The code creates a
public IPv6 and updates the outbound IPv6 LB but never provisions the IPv6 SSH
inbound NAT rule; after updateOutboundIPv6LoadBalancer(...) use
addInboundNatRuleToLoadBalancer(...) to create the SSH inbound NAT rule on
outboundLBName using publicIPv6outbound (pass the load balancer client from
p.NetworkClientFactory.NewLoadBalancersClient(), the resource group and
infraID), then call associateInboundNatRuleToInterface(...) (using
p.NetworkClientFactory.NewInterfacesClient(), the created NAT rule name,
resource group and the relevant interface(s)/VMSS NICs for bootstrap) so the
IPv6 SSH port is actually forwarded (refer to addInboundNatRuleToLoadBalancer
and associateInboundNatRuleToInterface for exact parameter ordering and naming).
called in a later PR
@rna-afk, understood — thanks for the heads-up! Would you like me to open a follow-up GitHub issue to track the missing IPv6 SSH inbound NAT rule creation and association for the dual-stack bootstrap path, so it isn't lost before that later PR lands?
```go
			LoadBalancingRules: []*armnetwork.LoadBalancingRule{
				{
					Name: to.Ptr("api-v4"),
					Properties: &armnetwork.LoadBalancingRulePropertiesFormat{
						Protocol:             to.Ptr(armnetwork.TransportProtocolTCP),
						FrontendPort:         to.Ptr[int32](6443),
						BackendPort:          to.Ptr[int32](6443),
						IdleTimeoutInMinutes: to.Ptr[int32](30),
						EnableFloatingIP:     to.Ptr(false),
						LoadDistribution:     to.Ptr(armnetwork.LoadDistributionDefault),
						FrontendIPConfiguration: &armnetwork.SubResource{
							ID: to.Ptr(fmt.Sprintf("/%s/%s/frontendIPConfigurations/%s", in.idPrefix, in.loadBalancerName, *frontendIPv4Name)),
						},
						BackendAddressPool: &armnetwork.SubResource{
							ID: to.Ptr(fmt.Sprintf("/%s/%s/backendAddressPools/%s", in.idPrefix, in.loadBalancerName, in.backendAddressPoolName)),
						},
						Probe: &armnetwork.SubResource{
							ID: to.Ptr(fmt.Sprintf("/%s/%s/probes/%s", in.idPrefix, in.loadBalancerName, probeName)),
						},
					},
				},
			},
			Tags: in.tags,
		}, nil)
	},
	Tags: in.tags,
}
if in.isDualstack {
	loadBalancer.Properties.FrontendIPConfigurations = append(loadBalancer.Properties.FrontendIPConfigurations,
		&armnetwork.FrontendIPConfiguration{
			Name: frontendIPv6Name,
			Properties: &armnetwork.FrontendIPConfigurationPropertiesFormat{
				PrivateIPAllocationMethod: to.Ptr(armnetwork.IPAllocationMethodDynamic),
				PrivateIPAddressVersion:   to.Ptr(armnetwork.IPVersionIPv6),
				PublicIPAddress:           pipv6,
			},
		})
}
```
Add an IPv6 API rule for the new -v6 frontend.
Both dual-stack code paths add an IPv6 frontend, but they only create api-v4. The IPv6 public IP will exist without any 6443 forwarding rule, so the API stays IPv4-only.
💡 Suggested change
if in.isDualstack {
loadBalancer.Properties.FrontendIPConfigurations = append(loadBalancer.Properties.FrontendIPConfigurations,
&armnetwork.FrontendIPConfiguration{
Name: frontendIPv6Name,
Properties: &armnetwork.FrontendIPConfigurationPropertiesFormat{
PrivateIPAllocationMethod: to.Ptr(armnetwork.IPAllocationMethodDynamic),
PrivateIPAddressVersion: to.Ptr(armnetwork.IPVersionIPv6),
PublicIPAddress: pipv6,
},
})
+ loadBalancer.Properties.LoadBalancingRules = append(loadBalancer.Properties.LoadBalancingRules,
+ &armnetwork.LoadBalancingRule{
+ Name: to.Ptr("api-v6"),
+ Properties: &armnetwork.LoadBalancingRulePropertiesFormat{
+ Protocol: to.Ptr(armnetwork.TransportProtocolTCP),
+ FrontendPort: to.Ptr[int32](6443),
+ BackendPort: to.Ptr[int32](6443),
+ IdleTimeoutInMinutes: to.Ptr[int32](30),
+ EnableFloatingIP: to.Ptr(false),
+ LoadDistribution: to.Ptr(armnetwork.LoadDistributionDefault),
+ FrontendIPConfiguration: &armnetwork.SubResource{
+ ID: to.Ptr(fmt.Sprintf("/%s/%s/frontendIPConfigurations/%s-v6", in.idPrefix, in.loadBalancerName, in.frontendIPConfigName)),
+ },
+ BackendAddressPool: &armnetwork.SubResource{
+ ID: to.Ptr(fmt.Sprintf("/%s/%s/backendAddressPools/%s", in.idPrefix, in.loadBalancerName, in.backendAddressPoolName)),
+ },
+ Probe: &armnetwork.SubResource{
+ ID: to.Ptr(fmt.Sprintf("/%s/%s/probes/%s", in.idPrefix, in.loadBalancerName, probeName)),
+ },
+ },
+ })
    }

Apply the same pattern in both createAPILoadBalancer and updateOutboundLoadBalancerToAPILoadBalancer.
Also applies to: 233-254
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/infrastructure/azure/network.go` around lines 144 - 179, The dual-stack
path adds an IPv6 frontend but never creates a matching IPv6 load-balancing
rule, so the API remains IPv4-only; update both createAPILoadBalancer and
updateOutboundLoadBalancerToAPILoadBalancer to add a second
armnetwork.LoadBalancingRule (e.g., name "api-v6") mirroring the "api-v4" rule
but referencing the IPv6 frontend (frontendIPv6Name), using the same
ports/probe/backend pool and appropriate IPv6 settings; ensure the new rule is
appended to LoadBalancingRules alongside the existing "api-v4" entry so the IPv6
public IP forwards 6443 traffic too.
The rules are in the backend PR, I think.
@rna-afk, understood — if the IPv6 load-balancing rules (e.g., api-v6 on port 6443) are being added in a separate backend PR, that makes sense as a staged approach. Just make sure that PR is merged before or alongside this one in production, since until those rules land the IPv6 frontend IP will exist but won't forward any API traffic. Would it help to open a follow-up issue to track the dependency?
```go
existingFrontEndIPConfig := intLB.Properties.FrontendIPConfigurations
if len(existingFrontEndIPConfig) == 0 {
	return fmt.Errorf("could not get frontEndIPConfig for internal LB %s", *intLB.Name)
}

var subnetID string
if existingFrontEndIPConfig[0].Properties != nil && existingFrontEndIPConfig[0].Properties.Subnet != nil {
	subnetID = *existingFrontEndIPConfig[0].Properties.Subnet.ID
}

existingFrontEndIPConfigName := *(existingFrontEndIPConfig[0].Name)
frontendIPv6Name := fmt.Sprintf("%s-v6", existingFrontEndIPConfigName)
frontendIPv6 := &armnetwork.FrontendIPConfiguration{
	Name: to.Ptr(frontendIPv6Name),
	Properties: &armnetwork.FrontendIPConfigurationPropertiesFormat{
		PrivateIPAllocationMethod: to.Ptr(armnetwork.IPAllocationMethodDynamic),
		PrivateIPAddressVersion:   to.Ptr(armnetwork.IPVersionIPv6),
	},
}
if subnetID != "" {
	frontendIPv6.Properties.Subnet = &armnetwork.Subnet{
		ID: to.Ptr(subnetID),
	}
}
intLB.Properties.FrontendIPConfigurations = append(intLB.Properties.FrontendIPConfigurations, frontendIPv6)
```
Make the internal -v6 frontend reconciliation idempotent.
This always appends {existing}-v6. pkg/asset/manifests/azure/cluster.go already declares the same frontend name for dual-stack, so once CAPZ reconciles the LB, PostProvision will submit duplicate frontend configs. A retry hits the same failure even without CAPZ.
💡 Suggested change
existingFrontEndIPConfigName := *(existingFrontEndIPConfig[0].Name)
frontendIPv6Name := fmt.Sprintf("%s-v6", existingFrontEndIPConfigName)
+ for _, cfg := range existingFrontEndIPConfig {
+ if cfg.Name != nil && *cfg.Name == frontendIPv6Name {
+ return nil
+ }
+ }
frontendIPv6 := &armnetwork.FrontendIPConfiguration{
	Name: to.Ptr(frontendIPv6Name),

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
existingFrontEndIPConfig := intLB.Properties.FrontendIPConfigurations
if len(existingFrontEndIPConfig) == 0 {
	return fmt.Errorf("could not get frontEndIPConfig for internal LB %s", *intLB.Name)
}
var subnetID string
if existingFrontEndIPConfig[0].Properties != nil && existingFrontEndIPConfig[0].Properties.Subnet != nil {
	subnetID = *existingFrontEndIPConfig[0].Properties.Subnet.ID
}
existingFrontEndIPConfigName := *(existingFrontEndIPConfig[0].Name)
frontendIPv6Name := fmt.Sprintf("%s-v6", existingFrontEndIPConfigName)
for _, cfg := range existingFrontEndIPConfig {
	if cfg.Name != nil && *cfg.Name == frontendIPv6Name {
		return nil
	}
}
frontendIPv6 := &armnetwork.FrontendIPConfiguration{
	Name: to.Ptr(frontendIPv6Name),
	Properties: &armnetwork.FrontendIPConfigurationPropertiesFormat{
		PrivateIPAllocationMethod: to.Ptr(armnetwork.IPAllocationMethodDynamic),
		PrivateIPAddressVersion:   to.Ptr(armnetwork.IPVersionIPv6),
	},
}
if subnetID != "" {
	frontendIPv6.Properties.Subnet = &armnetwork.Subnet{
		ID: to.Ptr(subnetID),
	}
}
intLB.Properties.FrontendIPConfigurations = append(intLB.Properties.FrontendIPConfigurations, frontendIPv6)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/infrastructure/azure/network.go` around lines 373 - 397, The current
logic always appends a new IPv6 frontend config named frontendIPv6Name causing
duplicates; instead, scan existingFrontEndIPConfig (or
intLB.Properties.FrontendIPConfigurations) for an entry with Name ==
frontendIPv6Name and if found, update that entry's Properties (set
PrivateIPAllocationMethod, PrivateIPAddressVersion and Subnet.ID if subnetID !=
"") to ensure it has the desired IPv6 settings; only if no matching Name is
found, append the new frontendIPv6. Use the existing variables frontendIPv6Name,
frontendIPv6 (or create a properties struct), existingFrontEndIPConfig and
intLB.Properties.FrontendIPConfigurations to locate and mutate or append the
config so reconciliation is idempotent.
```go
// Determine if this is an IPv6 NAT rule by checking the frontend IP config name
isIPv6NatRule := strings.Contains(in.frontendIPConfigID, "-v6")

for i, ipConfig := range bootstrapInterface.Properties.IPConfigurations {
	var ipVersion armnetwork.IPVersion
	if ipConfig.Properties.PrivateIPAddressVersion != nil {
		ipVersion = *ipConfig.Properties.PrivateIPAddressVersion
	} else {
		ipVersion = armnetwork.IPVersionIPv4
	}

	// Match NAT rule IP version to IP config version
	isIPv6Config := ipVersion == armnetwork.IPVersionIPv6
	if isIPv6NatRule != isIPv6Config {
		continue
	}

	// Add NAT rule to the first matching IP config and break
	// (A NAT rule can only be attached to one IP config)
	ipConfig.Properties.LoadBalancerInboundNatRules = append(ipConfig.Properties.LoadBalancerInboundNatRules,
		&inboundNatRule,
	)
	bootstrapInterface.Properties.IPConfigurations[i] = ipConfig
	break
}
```
Fail when no bootstrap NIC IP configuration matches the NAT rule family.
After the IPv4/IPv6 filtering was added, this loop can fall through and still return success. That leaves the NAT rule on the load balancer but unattached to the bootstrap NIC.
💡 Suggested change
// Determine if this is an IPv6 NAT rule by checking the frontend IP config name
isIPv6NatRule := strings.Contains(in.frontendIPConfigID, "-v6")
+ matched := false
for i, ipConfig := range bootstrapInterface.Properties.IPConfigurations {
var ipVersion armnetwork.IPVersion
if ipConfig.Properties.PrivateIPAddressVersion != nil {
@@
ipConfig.Properties.LoadBalancerInboundNatRules = append(ipConfig.Properties.LoadBalancerInboundNatRules,
&inboundNatRule,
)
bootstrapInterface.Properties.IPConfigurations[i] = ipConfig
+ matched = true
break
}
+ if !matched {
+ return nil, fmt.Errorf("no bootstrap NIC IP configuration matches frontend %s", in.frontendIPConfigID)
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@pkg/infrastructure/azure/network.go` around lines 533 - 557, The loop that
attaches inboundNatRule to bootstrapInterface.Properties.IPConfigurations can
fall through without attaching (due to the IPv4/IPv6 filter) but still return
success; add a boolean flag (e.g., attached) initialized false before the loop,
set it true when you append inboundNatRule to an IP config, and after the loop
check the flag and return an error if attached is false so the caller knows the
NAT rule was not attached to the bootstrap NIC (use the existing identifiers
bootstrapInterface, inboundNatRule, and in.frontendIPConfigID to locate the
logic).
@rna-afk: The following tests failed, say
Full PR test history. Your PR dashboard.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Adding IPv6 public IP creation and frontend IP configurations on external load balancers and adding IPv6 frontend IP to CAPZ apiServerLB manifest.
Also adding IPv6 NAT rule for bootstrap SSH access and updating NAT rules to the correct IP version on the bootstrap NIC.
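The "correct IP version" matching described above hinges on a naming convention: IPv6 frontend configurations carry a "-v6" suffix in their resource ID. Reduced to a standalone predicate, the idea looks like this (a sketch with illustrative names, not the installer's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// natRuleIsIPv6 mirrors the convention used in the PR: frontend IP
// configurations for IPv6 carry a "-v6" suffix in their resource ID.
func natRuleIsIPv6(frontendIPConfigID string) bool {
	return strings.Contains(frontendIPConfigID, "-v6")
}

// familiesMatch reports whether a NAT rule belongs on an IP configuration
// of the given address version ("IPv4" or "IPv6"): an IPv6 rule must land
// on an IPv6 config, and likewise for IPv4.
func familiesMatch(frontendIPConfigID, ipVersion string) bool {
	return natRuleIsIPv6(frontendIPConfigID) == (ipVersion == "IPv6")
}

func main() {
	fmt.Println(familiesMatch(".../frontendIPConfigurations/public-lb-ip-v6", "IPv6")) // true
	fmt.Println(familiesMatch(".../frontendIPConfigurations/public-lb-ip-v6", "IPv4")) // false
	fmt.Println(familiesMatch(".../frontendIPConfigurations/public-lb-ip", "IPv4"))    // true
}
```

This is the same filter the bootstrap-NIC loop applies per IP configuration; as the review notes, a caller should also fail if no configuration of the matching family exists, rather than silently attaching nothing.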
Summary by CodeRabbit