HCP tenant example #147
Conversation
@rguske, please review.
| API Server | <li>LoadBalancer (recommended; Kubernetes `LoadBalancer` service)</li><li>NodePort* (not for production)</li> | ✅ | ❌ |
| OAuth | <li>Route (default)</li><li>NodePort* (not for production)</li> | ❌ | ✅ |
| Konnectivity | <li>Route (default)</li><li>LoadBalancer (Kubernetes `LoadBalancer` service)</li><li>NodePort* (not for production)</li> | ✅ | ✅ |
| Ignition | <li>Route (default)</li><li>NodePort* (not for production)</li> | ✅ | ❌ |
The ✅ is set for LoadBalancer but must be Route.
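For context, the table maps to `spec.services` in the `HostedCluster` resource. A minimal sketch of that block with Ignition published via Route (this is not the example's actual manifest, only an illustration of the publishing types):

```
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
  - service: Ignition
    servicePublishingStrategy:
      type: Route
```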
Use a dedicated OpenShift Ingress Controller shard on the **hub** so only the hosted-cluster control-plane Routes are served by that shard. Tenant clients resolve OAuth, Konnectivity, and Ignition hostnames to `ingress-shared-lb`, which forwards to the shard’s NodePorts on the management network.
Place an external load balancer in front of that shard (for example F5 BIG-IP or NetScaler) that can reach the hub’s management network and present stable tenant-facing VIPs or addresses.
NIT: (for example F5, BIG-IP or NetScaler)
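To make the resolution step concrete: a single wildcard record for the shard domain pointing at the external load balancer is usually enough, since OAuth, Konnectivity, and Ignition are published as Routes under that domain. A sketch in the zone-file style used later on this page (the address is a placeholder):

```
*.hosted.hcp-2003-57.hcp-cps.coe.muc.redhat.com. IN A 192.168.203.<IP of ingress-shared-lb>
```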
### Router between Mgmt and Tenant-A
[VyOS](https://vyos.io/) acts as router and firewall between management and Tenant-A. Restrict **lateral** traffic between the two segments (no full mesh); allow only what you need (for example DNS to resolvers, default route or NAT for internet egress). Hosted-cluster control-plane traffic from tenant nodes should flow to the **external load balancer VIPs** in the tenant segment (not directly into arbitrary management subnets).
NIT: VyOS acts as router and firewall between the management and the Tenant-A network.
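To illustrate the kind of restriction meant here, a VyOS rule set could look roughly like the following (VyOS 1.3-style syntax; interface name, rule numbers, and the choice of allowed services are placeholders, not taken from the example environment):

```
# Tenant-A -> management: drop by default, allow return traffic and DNS only
set firewall name TENANT-TO-MGMT default-action 'drop'
set firewall name TENANT-TO-MGMT rule 10 action 'accept'
set firewall name TENANT-TO-MGMT rule 10 state established 'enable'
set firewall name TENANT-TO-MGMT rule 10 state related 'enable'
set firewall name TENANT-TO-MGMT rule 20 action 'accept'
set firewall name TENANT-TO-MGMT rule 20 protocol 'tcp_udp'
set firewall name TENANT-TO-MGMT rule 20 destination port '53'
# attach the rule set to the tenant-facing interface
set interfaces ethernet eth1 firewall in name 'TENANT-TO-MGMT'
```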
## Exposing components via a dedicated router shard
Use a dedicated OpenShift Ingress Controller shard on the **hub** so only the hosted-cluster control-plane Routes are served by that shard. Tenant clients resolve OAuth, Konnectivity, and Ignition hostnames to `ingress-shared-lb`, which forwards to the shard’s NodePorts on the management network.
I wonder why NodePort? Why not use MetalLB? The `endpointPublishingStrategy` for the ingress shard could be `type: LoadBalancerService`.
* [2.3.4. Ingress sharding in OpenShift Container Platform](https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/ingress_and_load_balancing/configuring-ingress-cluster-traffic#nw-ingress-sharding-concept_configuring-ingress-cluster-traffic-ingress-controller)
* [3.1.3.8.1. Example load balancer configuration for user-provisioned clusters](https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/installing_on_vmware_vsphere/user-provisioned-infrastructure)
???+ example "Ingress Controller"
As already commented above. Why NodePort instead of LoadBalancer?
```
oc create -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded-hcp-2003-57
  namespace: openshift-ingress-operator
spec:
  routeAdmission:
    wildcardPolicy: WildcardsAllowed
  domain: hosted.hcp-2003-57.hcp-cps.coe.muc.redhat.com
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: clusters-hcp-2003-57
  endpointPublishingStrategy:
    type: LoadBalancerService
EOF
```

```
router-nodeport-tenant-a   NodePort   172.30.141.209   <none>   80:32460/TCP,443:32488/TCP,1936:32095/TCP   106s
```
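For what it's worth: if the shard is created with `type: LoadBalancerService` as suggested in the comment above, the ingress operator creates a `LoadBalancer` service named `router-<ingresscontroller-name>` instead of the NodePort service shown here, provided the hub has a load-balancer implementation such as MetalLB. A quick check (service name derived from the suggested IngressController name) would be:

```
oc -n openshift-ingress get svc router-sharded-hcp-2003-57
```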
The ingress shard load balancer is an RHEL 9 host running HAProxy (external load balancer `ingress-shared-lb`).
I think this will be the answer to my LoadBalancer related questions, correct? I mean, NodePort is generally something I try to avoid.
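A minimal `haproxy.cfg` sketch for `ingress-shared-lb` along these lines would do plain TCP passthrough of port 443 to the shard's HTTPS NodePort on the hub workers (worker names and addresses are placeholders; the NodePort matches the service output shown above):

```
frontend hcp-control-plane-https
    bind *:443
    mode tcp
    default_backend hub-ingress-shard

backend hub-ingress-shard
    mode tcp
    balance source
    # hub worker nodes, HTTPS NodePort of the dedicated shard
    server hub-worker-0 192.168.100.10:32488 check
    server hub-worker-1 192.168.100.11:32488 check
    server hub-worker-2 192.168.100.12:32488 check
```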
| `api-lb` | Tenant-facing VIP for the hosted cluster **API** (`APIServer` publishing) |
| `ingress-lb` | Tenant-facing VIP for **hosted cluster** application Routes (`*.apps…`) |
Suggested order: (1) hub ingress shard + `ingress-shared-lb` + DNS for the three control-plane hostnames, (2) `api-lb` + API DNS, (3) `ingress-lb` + wildcard apps DNS, then (4) apply `HostedCluster` and `NodePool`. Adjust if your automation creates services first and you backfill DNS once NodePorts or service endpoints are known.
Make it a list for better readability. Furthermore, initially I found it hard to understand what is meant by "Suggested order". You mean the configuration/deployment order?
The following two subsections describe (2) and (3); the hub shard and DNS for OAuth, Konnectivity, and Ignition are covered above.
### Deploy External Load Balancer for API (`api-lb`)
Maybe rephrase to "Deploy External Load Balancer for the Hosted-Cluster API (api-lb)"
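For orientation, `api-lb` is typically just another TCP passthrough: the tenant-facing VIP listens on 6443 and forwards to wherever the `APIServer` service is published on the hub. A hedged `haproxy.cfg` excerpt (the backend address is a placeholder for the hub-side LoadBalancer address of the hosted kube-apiserver):

```
frontend tenant-api
    bind *:6443
    mode tcp
    default_backend hosted-cluster-api

backend hosted-cluster-api
    mode tcp
    # placeholder: hub-side address where the APIServer service is published
    server kube-apiserver 192.168.100.50:6443 check
```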
```
api.tenant-a.coe.muc.redhat.com. IN A 192.168.203.<IP of VM>
```
### Deploy External Load Balancer for Ingress (`ingress-lb`) of hosted cluster
Maybe rephrase to "Deploy External Load Balancer for the Hosted-Cluster Ingress (ingress-lb)"
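Analogous to the API record above, the hosted cluster's application Routes need a wildcard record pointing at `ingress-lb`; the hostname pattern below is assumed from the API record, and the address is a placeholder:

```
*.apps.tenant-a.coe.muc.redhat.com. IN A 192.168.203.<IP of VM>
```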
```
persistent:
  size: 32Gi
additionalNetworks:
- name: default/cudn-localnet1-2003 # (1)
```
Maybe include the creation of the CUDN before this manifest as well?
```
oc apply -f - <<EOF
apiVersion: k8s.ovn.org/v1
kind: ClusterUserDefinedNetwork
metadata:
  name: cudn-localnet1-2003
spec:
  namespaceSelector:
    matchExpressions:
    - key: hypershift.openshift.io/hosted-control-plane
      operator: Exists
  network:
    topology: Localnet
    localnet:
      role: Secondary
      physicalNetworkName: localnet1
      ipam:
        mode: Disabled
      vlan:
        mode: Access
        access:
          id: 2003
EOF
```
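One related assumption: `physicalNetworkName: localnet1` only works if the hub nodes have an OVN bridge mapping for `localnet1`. If that mapping does not exist yet, it is typically created with an NMState `NodeNetworkConfigurationPolicy` along these lines (policy name, node selector, and bridge are assumptions for illustration):

```
oc apply -f - <<EOF
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: localnet1-bridge-mapping
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    ovn:
      bridge-mappings:
      - localnet: localnet1
        bridge: br-ex
        state: present
EOF
```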