Any k8s, powered by kro.
Kany8s is a (work-in-progress) Cluster API provider suite — Kany8sCluster (Infrastructure) + Kany8sControlPlane (ControlPlane) — that uses kro (ResourceGraphDefinition / RGD) as a "concretization engine" to create managed Kubernetes control planes (and their prerequisites) on any cloud/provider.
The goal is simple: if you can express it as a kro RGD, Kany8s can drive it via Cluster API.
- Name: Kany8s = "k(ro)" + "any" + "k8s" (and it's pronounceable)
- Repo status: design-first / prototype
Kany8s separates responsibilities clearly:
- Cluster API-facing CRDs
  - `Kany8sCluster`: Infrastructure provider (referenced by `Cluster.spec.infrastructureRef`)
  - `Kany8sControlPlane`: ControlPlane provider (sets endpoint/initialized/conditions per the CAPI contract)
- kro RGD (provider-specific): materializes real resources (EKS/ACK today, AKS/GKE tomorrow)
  - Hides provider-specific status shapes
  - Exposes a small, normalized status contract that Kany8s consumes
This keeps the controller provider-agnostic: no “if EKS then … else if GKE then …” branches.
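As a sketch of that pattern (illustrative only; the schema fields and the resource template are provider-specific and not a published RGD from this repo), a provider RGD might look roughly like:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: eks-control-plane
spec:
  schema:
    apiVersion: v1alpha1
    kind: EKSControlPlane
    spec:
      region: string
    status:
      # the normalized contract Kany8s consumes, identical for every provider
      ready: ${cluster.status.status == "ACTIVE"}
      endpoint: ${cluster.status.endpoint}
  resources:
    - id: cluster
      template:
        apiVersion: eks.services.k8s.aws/v1alpha1 # e.g. an ACK EKS Cluster
        kind: Cluster
        metadata:
          name: ${schema.metadata.name}
        spec:
          name: ${schema.metadata.name}
          # ...provider-specific fields (role ARN, VPC config, ...)
```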
The flow, end to end:

- You create a Cluster API `Cluster` that references `Kany8sCluster` + `Kany8sControlPlane`. `Kany8sControlPlane` references a kro `ResourceGraphDefinition` via `spec.resourceGraphDefinitionRef`.
- Kany8s resolves the RGD's generated GVK and creates exactly one kro instance (1:1).
- Kany8s watches only the kro instance status.
- When the kro instance reports ready + endpoint, Kany8s writes `Kany8sControlPlane.spec.controlPlaneEndpoint` and sets `status.initialization.controlPlaneInitialized` (the Cluster controller then mirrors the endpoint into `Cluster.spec.controlPlaneEndpoint` per the CAPI contract).
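Concretely, once the instance reports ready, the fields Kany8s writes back look like this (values illustrative; the port defaults to 443 when `status.endpoint` omits one):

```yaml
# on the Kany8sControlPlane, written by Kany8s
spec:
  controlPlaneEndpoint:
    host: example.eks.amazonaws.com # parsed from the kro instance's status.endpoint
    port: 443
status:
  initialization:
    controlPlaneInitialized: true
```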
Kany8s keeps the core controllers provider-agnostic and pushes provider-specific behavior to the edges.
- Provider packs (docs + RGDs)
  - Provider-specific realization lives in kro `ResourceGraphDefinition` (RGD) YAMLs.
  - Convention: `docs/<provider>/` contains runnable procedures, sample manifests, and design notes.
- Plugins (optional controllers)
  - Some providers need extra "glue" beyond what fits naturally into the RGD status contract (e.g. short-lived auth, kubeconfig management, probes).
  - For those cases, Kany8s supports opt-in plugins: separate binaries/controllers that run alongside the core controller-manager.
  - Plugins should be:
    - Explicitly enabled (annotation/label-driven), never on by default.
    - Scoped and non-invasive (own only the resources they manage; avoid overwriting user-owned objects).
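For example, enabling a plugin for a single control plane could look like this (the annotation key below is hypothetical, for illustration only; see the plugin docs for the actual switch):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: Kany8sControlPlane
metadata:
  name: demo-cluster
  annotations:
    # hypothetical opt-in key; not a published API
    plugins.kany8s.io/eks-kubeconfig-rotator: "enabled"
```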
Repository conventions (current / intended):
- Docs: `docs/<provider>/plugin/`
- Code: `internal/plugin/<provider>/` and `internal/controller/plugin/<provider>/`
- Manifests: `config/<provider>-plugin/`
- Image/build: `Dockerfile.<provider>-plugin` + `make *-<provider>-plugin` targets
Example: AWS EKS needs a short-lived IAM token in the kubeconfig; see `docs/eks/plugin/README.md` (kubeconfig rotator plugin).
`clusterctl init --infrastructure kany8s --control-plane kany8s` (and cluster-api-operator's `InfrastructureProvider` / `ControlPlaneProvider` resources) install only the provider managers. The EKS-specific plugins live outside the CAPI provider contract and ship as standalone Helm charts:
| Chart | OCI reference | What it does |
|---|---|---|
| `eks-kubeconfig-rotator` | `oci://ghcr.io/appthrust/charts/eks-kubeconfig-rotator` | Rotates short-lived EKS tokens in the CAPI kubeconfig Secret. |
| `eks-karpenter-bootstrapper` | `oci://ghcr.io/appthrust/charts/eks-karpenter-bootstrapper` | Provisions IAM Role / OIDC provider / SecurityGroup / Fargate profile and installs Karpenter via Flux. |
```bash
# replace <tag> with a released version, e.g. v0.1.1
helm install rotator oci://ghcr.io/appthrust/charts/eks-kubeconfig-rotator \
  --version <tag> \
  --namespace kany8s-eks-system --create-namespace \
  --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::123456789012:role/eks-rotator"

helm install bootstrapper oci://ghcr.io/appthrust/charts/eks-karpenter-bootstrapper \
  --version <tag> \
  --namespace kany8s-eks-system \
  --set "serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::123456789012:role/eks-karpenter-bootstrapper"
```

Both charts follow the ACK controller credential convention: set `aws.credentials.secretName=<secret>` to mount a shared-credentials-file Secret (same format ACK consumes), or leave it empty to let the AWS SDK default chain resolve credentials from the ServiceAccount (IRSA / EKS Pod Identity) or EC2 instance metadata. The image reference is a Bitnami-style `image.{registry,repository,tag,digest}` split with a `global.imageRegistry` override for mirrored / air-gapped environments. See the per-chart READMEs at `charts/eks-kubeconfig-rotator/README.md` and `charts/eks-karpenter-bootstrapper/README.md` for the full value reference and override recipes.
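Putting those conventions together, a values file might look like this (a sketch; the `image.repository` value is an assumption, so check the per-chart README for the actual defaults):

```yaml
aws:
  credentials:
    # empty = AWS SDK default chain (IRSA / Pod Identity / IMDS);
    # set to a Secret name to mount an ACK-style shared-credentials file
    secretName: ""
image:
  registry: ghcr.io
  repository: appthrust/eks-kubeconfig-rotator # assumed default; verify in the chart README
  tag: ""    # empty typically falls back to the chart appVersion
  digest: ""
global:
  imageRegistry: "" # set to your mirror for air-gapped installs
```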
The `config/eks-plugin/` and `config/eks-karpenter-bootstrapper/` kustomize overlays remain for local development and ACK co-location; the Helm charts are the recommended install path for new deployments.
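For local development, applying the overlay directly should work along these lines (a sketch; the overlay may assume an ACK co-located setup):

```bash
kubectl apply -k config/eks-plugin/
```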
A Cluster API `Cluster` will look like this:

```yaml
apiVersion: cluster.x-k8s.io/v1beta2
kind: Cluster
metadata:
  name: demo-cluster
spec:
  infrastructureRef:
    apiGroup: infrastructure.cluster.x-k8s.io
    kind: Kany8sCluster
    name: demo-cluster
  controlPlaneRef:
    apiGroup: controlplane.cluster.x-k8s.io
    kind: Kany8sControlPlane
    name: demo-cluster
```

Kany8s is designed to be consumed via Cluster API ClusterTopology (ClusterClass):

- `Kany8sControlPlaneTemplate` selects the provider implementation via `resourceGraphDefinitionRef` and carries default `kroSpec`.
- `Kany8sClusterTemplate` provides the `infrastructureRef` required by Cluster API (minimal first; may later materialize shared prerequisites).
- `Cluster.spec.topology.version` is the single source of truth for `Kany8sControlPlane.spec.version` (and is injected into the kro instance `spec.version`).
A typical topology setup will look like:

```yaml
apiVersion: cluster.x-k8s.io/v1beta2
kind: ClusterClass
metadata:
  name: kany8s-eks
spec:
  infrastructure:
    ref:
      apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
      kind: Kany8sClusterTemplate
      name: kany8s-aws
  controlPlane:
    ref:
      apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
      kind: Kany8sControlPlaneTemplate
      name: kany8s-eks
  # variables + patches map into `.spec.kroSpec` (details TBD)
```

```yaml
apiVersion: cluster.x-k8s.io/v1beta2
kind: Cluster
metadata:
  name: demo-cluster
spec:
  topology:
    class: kany8s-eks
    version: "1.34"
    variables:
      - name: region
        value: ap-northeast-1
      - name: vpc.subnetIDs
        value: ["subnet-xxxx", "subnet-yyyy"]
      - name: vpc.securityGroupIDs
        value: ["sg-zzzz"]
```

Kany8s expects the referenced RGD instance to expose these fields:
- `status.ready: boolean`
  - Meaning: "ControlPlane ready" (at minimum, the API endpoint is known)
- `status.endpoint: string`
  - Format: `https://host[:port]` or `host[:port]`
  - If port is omitted, Kany8s treats it as `443`
- (optional) `status.reason: string`
- (optional) `status.message: string`
Note: kro adds reserved fields like `status.conditions` and `status.state` automatically, so Kany8s uses the dedicated names above (`ready`/`endpoint`/`reason`/`message`).
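For illustration, a ready instance satisfying the contract might report (hostname is a placeholder):

```yaml
status:
  ready: true
  endpoint: https://example.eks.amazonaws.com # no port, so Kany8s assumes 443
  # reason/message are optional and typically set while not yet ready, e.g.:
  # reason: Provisioning
  # message: waiting for the EKS cluster to become ACTIVE
```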
Example `Kany8sControlPlane`:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: Kany8sControlPlane
metadata:
  name: demo-cluster
  namespace: default
spec:
  version: "1.34"
  # `controlPlaneEndpoint` is set by Kany8s (CAPI contract)
  # controlPlaneEndpoint:
  #   host: example.eks.amazonaws.com
  #   port: 443
  resourceGraphDefinitionRef:
    name: eks-control-plane
  kroSpec:
    region: ap-northeast-1
    vpc:
      subnetIDs:
        - subnet-xxxx
        - subnet-yyyy
      securityGroupIDs:
        - sg-zzzz
```

Kany8s creates the matching kro instance (1:1):

```yaml
apiVersion: kro.run/v1alpha1
kind: EKSControlPlane
metadata:
  name: demo-cluster
  namespace: default
spec:
  version: "1.34" # injected/overwritten by Kany8s
  region: ap-northeast-1
  vpc:
    subnetIDs:
      - subnet-xxxx
      - subnet-yyyy
    securityGroupIDs:
      - sg-zzzz
```

In the RGD, the normalized status is mapped from provider-specific status:

```yaml
schema:
  status:
    ready: ${cluster.status.status == "ACTIVE" && cluster.status.endpoint != ""}
    endpoint: ${cluster.status.endpoint}
```

- MVP focuses on ControlPlane provider responsibilities (`Kany8sControlPlane`: endpoint/initialized/conditions)
- Implements `spec.controlPlaneEndpoint` + `status.initialization.controlPlaneInitialized` per the CAPI contract
- `Kany8sCluster` (Infrastructure provider) is planned/TBD
- Keeps provider-specific logic inside RGD(s)
- Does not adopt CAPT's Terraform-style "Template → Apply" pattern as a core concept
- Does not write Terraform-like outputs to Secrets for endpoint/initialized (for now)
- Kubeconfig secret management (`<cluster>-kubeconfig`) is required by the CAPI contract (planned)
This is a local smoke test that exercises the full flow: install (kro + Kany8s) -> apply RGD -> apply `Cluster` / `Kany8sControlPlane`.
Prereqs:

- `kind`, `kubectl`, and `docker`
- `clusterctl` (only if you want to apply the Cluster API `Cluster` object)
- Create a kind management cluster:

  ```bash
  kind create cluster --name kany8s --wait 60s
  ```

- Install kro (v0.7.1 tested):

  ```bash
  kubectl create namespace kro-system
  kubectl apply -f https://github.com/kubernetes-sigs/kro/releases/download/v0.7.1/kro-core-install-manifests.yaml
  kubectl rollout status -n kro-system deploy/kro
  ```

  Note: kro v0.7.1 may require relaxed RBAC for its dynamic controller to watch generated CRDs. See `docs/reference/kro-v0.7.1-kind-notes.md` for details and the exact manifest.

- Install Kany8s CRDs:

  ```bash
  make install
  ```

- Run the controller locally (in another terminal):

  ```bash
  make run
  ```

- Apply the demo RGD (normalized `ready`/`endpoint` status contract):

  ```bash
  kubectl apply -f examples/kro/ready-endpoint/rgd.yaml
  ```

- Apply the sample Cluster + Kany8sControlPlane (requires Cluster API installed):

  ```bash
  kubectl apply -f examples/capi/cluster.yaml
  ```

  If you don't have Cluster API installed yet, apply only the `Kany8sControlPlane` object from that file.

- Observe:

  ```bash
  kubectl get kany8scontrolplanes -n default -o wide
  kubectl get democontrolplanes.kro.run -n default -o wide
  ```
For reproducible end-to-end checks (fresh kind clusters + artifacts), see `test/acceptance_test/README.md`.
- kro demo flow (managed control plane reflection): `make test-acceptance-kro-reflection`
- kro demo flow (managed infra reflection): `make test-acceptance-kro-infra-reflection`
- kro demo flow (managed infra cluster identity): `make test-acceptance-kro-infra-cluster-identity`
- kro demo flow with 2 RGDs (multi-instance-kind): `make test-acceptance-kro-reflection-multi-rgd`
- self-managed (CAPD + kubeadm): `make test-acceptance-capd-kubeadm`

Legacy aliases are still supported:

- `make test-acceptance` -> `make test-acceptance-kro-reflection`
- `make test-acceptance-multi-rgd` -> `make test-acceptance-kro-reflection-multi-rgd`
- `make test-acceptance-self-managed` -> `make test-acceptance-capd-kubeadm`
- Go (toolchain is pinned in `go.mod`)
- `make`
- Optional: `docker` (for `make docker-build`)
- Optional: `kubectl` + access to a Kubernetes cluster (for `make run`)
- Optional: `kind` (for `make test-e2e`)
The Makefile auto-downloads build/test tooling into `./bin/` (kustomize, controller-gen, setup-envtest, golangci-lint).
- `make test`: run unit tests (includes `make generate` + `make manifests`)
- `make lint`: run `golangci-lint`
- `make run`: run the controller locally against your current kubeconfig context

For code generation only:

- `make generate`
- `make manifests`
- `docs/PRD.md`: product requirements (Why/What/How)
- `docs/adr/README.md`: design decisions (ADR)
- `docs/reference/rgd-contract.md`: normalized status contract for RGD instances
- `docs/reference/rgd-guidelines.md`: RGD authoring guidance (kro pitfalls)
- `docs/guides/e2e-and-acceptance-test.md`: test layers and acceptance runners
- `docs/runbooks/`: operational runbooks
- `docs/eks/README.md`: AWS EKS smoke test (ACK + kro) and BYO network flow
- `docs/eks/plugin/README.md`: provider-specific plugin notes (EKS kubeconfig rotator)
- `docs/archive/`: historical notes/drafts
- Implement `Kany8sControlPlane` CRD + controller
- Implement `Kany8sCluster` CRD + controller (optional/minimal first)
- Provide a working AWS/EKS RGD (`eks-control-plane`) as a reference
- Add clusterctl/helm packaging
- Add ClusterTopology/ClusterClass examples (templates + patches)
- Extend RGD catalog for other providers (AKS/GKE/etc.)
TBD