diff --git a/calico-cloud/operations/comms/secure-bgp.mdx b/calico-cloud/networking/configuring/secure-bgp.mdx similarity index 100% rename from calico-cloud/operations/comms/secure-bgp.mdx rename to calico-cloud/networking/configuring/secure-bgp.mdx diff --git a/calico-cloud/operations/comms/apiserver-tls.mdx b/calico-cloud/operations/comms/apiserver-tls.mdx deleted file mode 100644 index 32c4455e7c..0000000000 --- a/calico-cloud/operations/comms/apiserver-tls.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -description: Add TLS certificates to secure access to the Calico Cloud API server. ---- - -# Provide TLS certificates for the API server - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] API server. - -## Value - -Providing TLS certificates for $[prodname] components is recommended as part of a zero trust network model for security. - -## Concepts - -### $[prodname] API server - -The $[prodname] API server handles requests for $[prodname] API resources. The main Kubernetes API server has an aggregation layer and will proxy requests for the $[prodname] API resources to the $[prodname] API server. - -## Before you begin... - -By default, the $[prodname] API server uses self-signed certificates on connections. To provide TLS certificates, -get the certificate and key pair for the $[prodname] API Server using any X.509-compatible tool or from your organization's Certificate Authority. The certificate must have Common Name or a Subject Alternate Name of `calico-api.calico-system.svc`. - -This feature is available for Kubernetes and OpenShift. - -## How to - -### Add TLS certificates - -To provide certificates for use during deployment you must create a secret before applying the 'custom-resource.yaml' or before creating the Installation resource. 
To specify certificates for use by the $[prodname] API server, create a secret using the following command: - -```bash -kubectl create secret generic calico-apiserver-certs -n tigera-operator --from-file=apiserver.crt= --from-file=apiserver.key= -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic calico-apiserver-certs -n tigera-operator --from-file=apiserver.crt= --from-file=apiserver.key= --dry-run -o yaml --save-config | kubectl replace -f - -``` - -:::note - -If the $[prodname] API server is already running, updating the secret restarts the API server. While the server restarts, the $[prodname] API server may be unavailable for a short period of time. - -::: - -{/*TODO-XREFS-CC -## Additional resources - -Additional documentation is available for securing [the $[prodname] web console connections](crypto-auth.mdx). -*/} \ No newline at end of file diff --git a/calico-cloud/operations/comms/certificate-management.mdx b/calico-cloud/operations/comms/certificate-management.mdx deleted file mode 100644 index 60d77048ae..0000000000 --- a/calico-cloud/operations/comms/certificate-management.mdx +++ /dev/null @@ -1,145 +0,0 @@ ---- -description: Control the issuer of certificates used by Calico Cloud. ---- - -# Manage TLS certificates used by Calico Cloud - -## Big picture - -Enable custom workflows for issuing and signing certificates used to secure communication between $[prodname] components. - -## Value - -Some deployments have security requirements that strictly minimize or eliminate access to private keys, or that require -control of the trusted certificates throughout clusters. Using the Kubernetes Certificates API to automate -certificate issuance, $[prodname] provides a simple configuration option that you add to your installation. 
## Before you begin - -**Limitations** - -If your cluster is already running $[prodname] and you would like to enable certificate management, you must -temporarily remove [the LogStorage resource](../../reference/installation/api.mdx#logstorage) -before following the steps to enable certificate management, and then re-apply it afterwards. - -{/*TODO-XREFS-CC -Currently, this feature is not supported in combination with [Multi-cluster management](/multicluster/create-a-management-cluster/). -*/} -**Supported algorithms** - -- Private Key Pair: RSA (size: 2048, 4096, 8192), ECDSA (curve: 256, 384, 521) -- Certificate Signature: RSA (sha: 256, 384, 512), ECDSA (sha: 256, 384, 512) - -## How to - -- [Enable certificate management](#enable-certificate-management) -- [Verify and monitor](#verify-and-monitor) -- [Implement your own signing/approval process](#implement-your-own-signing-and-approval-process) - -### Enable certificate management - -1. Modify [the Installation resource](../../reference/installation/api.mdx#installation) - to add the `certificateManagement` section. Apply the following change to your cluster. - -```yaml -apiVersion: operator.tigera.io/v1 -kind: Installation -metadata: - name: default -spec: - certificateManagement: - caCert: - signerName: / - signatureAlgorithm: SHA512WithRSA - keyAlgorithm: RSAWithSize4096 -``` - -Done! If you have an automatic signer and approver, there is nothing left to do. The next section explains in more detail -how to verify and monitor the status. - -### Verify and monitor - -1. 
Monitor your pods as they come up: - -``` -kubectl get pod -n calico-system -w -NAMESPACE NAME READY STATUS RESTARTS AGE -calico-system calico-node-5ckvq 0/1 Pending 0 0s -calico-system calico-typha-688c9957f5-h9c5w 0/1 Pending 0 0s -calico-system calico-node-5ckvq 0/1 Init:0/3 0 1s -calico-system calico-typha-688c9957f5-h9c5w 0/1 Init:0/1 0 1s -calico-system calico-node-5ckvq 0/1 PodInitializing 0 2s -calico-system calico-typha-688c9957f5-h9c5w 0/1 PodInitializing 0 2s -calico-system calico-node-5ckvq 1/1 Running 0 3s -calico-system calico-typha-688c9957f5-h9c5w 1/1 Running 0 3s -``` - -During the `Init` phase, an init container of the pod creates a certificate signing request (CSR). The pod stays in the -`Init` phase until the CSR has been approved and signed by the certificate authority, after which the pod continues with `PodInitializing` -and eventually `Running`. - -1. Monitor certificate signing requests: - -``` -kubectl get csr -w -NAME AGE REQUESTOR CONDITION -calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Pending -calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Pending,Issued -calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Approved,Issued -calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Pending -calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Pending,Issued -calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Approved,Issued -``` - -A CSR will be `Pending` until it has been `Issued` and `Approved`. The name of a CSR is based on the namespace, the pod -name, and the first 6 characters of the pod's UID. The pod will be `Pending` until the CSR has been `Approved`. - -1. 
Monitor the status of this feature using `TigeraStatus`: - -``` -kubectl get tigerastatus -NAME AVAILABLE PROGRESSING DEGRADED SINCE -calico True False False 2m40s -``` - -### Implement your own signing and approval process - -**Required steps** - -This feature uses API version `certificates.k8s.io/v1` for [certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/). -To automate the signing and approval process, run a server that performs the following actions: - -1. Watch `CertificateSigningRequests` resources with status `Pending` and `spec.signerName=`. - - :::note - - You can skip this step if you are using a Kubernetes version before v1.18, where the `signerName` field was not available. - - ::: - -1. For each `Pending` CSR, perform security checks (see next heading) -1. Issue a certificate and update `.status.certificate` -1. Approve the CSR and update `.status.conditions` - -**Security requirements** - -Depending on your requirements, you may want to implement custom checks to make sure that no certificates are issued to a malicious user. -When a CSR is created, the kube-apiserver adds immutable fields to the spec to help you perform checks: - -- `.spec.username`: username of the requester -- `.spec.groups`: user groups of the requester -- `.spec.request`: certificate request in PEM format - -Verify that the user and/or group match the requested certificate subject (alternate) names. 
**Implement your signer and approver using Go** - -- Use [client-go](https://github.com/kubernetes/client-go) to create a clientset -- To watch CSRs, use `clientset.CertificatesV1().CertificateSigningRequests().Watch(...)` -- To issue the certificate, use `clientset.CertificatesV1().CertificateSigningRequests().UpdateStatus(...)` -- To approve the CSR, use `clientset.CertificatesV1().CertificateSigningRequests().UpdateApproval(...)` - -### Additional resources - -- Read [Kubernetes certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/) for more information on CSRs -- Use [client-go](https://github.com/kubernetes/client-go) to implement a controller to sign and approve a CSR diff --git a/calico-cloud/operations/comms/compliance-tls.mdx b/calico-cloud/operations/comms/compliance-tls.mdx deleted file mode 100644 index 50d096c9b9..0000000000 --- a/calico-cloud/operations/comms/compliance-tls.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -description: Add TLS certificate to secure access to compliance. ---- - -# Provide TLS certificates for compliance - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] compliance components. - -## Value - -Providing TLS certificates for $[prodname] compliance components is recommended as part of a zero trust network model for security. - -## Before you begin... - -By default, $[prodname] uses self-signed certificates for its compliance reporting components. To provide TLS certificates, -get the certificate and key pair for the $[prodname] compliance components using any X.509-compatible tool or from your organization's -Certificate Authority. The certificate must have a Common Name or a Subject Alternate Name of `compliance.tigera-compliance.svc`. 
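The username-to-subject comparison in the checks above can be sketched in a few lines of shell. This is only an illustration of the logic, not product code: the values are hard-coded here, and a real approver would parse the CN out of the PEM in `.spec.request`.

```shell
username="system:serviceaccount:calico-system:calico-node"  # from .spec.username
subject_cn="calico-node"                                    # CN parsed from .spec.request

# Service-account usernames have the form system:serviceaccount:<ns>:<name>,
# so fields 3 and 4 are the namespace and the service-account name.
ns=$(echo "$username" | cut -d: -f3)
sa=$(echo "$username" | cut -d: -f4)

# Approve only when the requested CN matches the requesting service account.
if [ "$sa" = "$subject_cn" ]; then
  decision="approve $ns/$sa"
else
  decision="deny"
fi
echo "$decision"
```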
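For example, a certificate carrying the required name can be generated with `openssl` (a sketch only, assuming OpenSSL 1.1.1+ for `-addext`; this produces a self-signed certificate, whereas in production you should use one issued by your organization's CA):

```shell
# Generate a key and a self-signed certificate whose SAN matches the
# compliance service name expected by the operator.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=compliance.tigera-compliance.svc" \
  -addext "subjectAltName=DNS:compliance.tigera-compliance.svc"

# Confirm the SAN is present before creating the secret.
openssl x509 -in tls.crt -noout -ext subjectAltName
```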
- -## How to - -### Add TLS certificates for compliance - -To provide TLS certificates for use by $[prodname] compliance components during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the Compliance resource. Use the following command to create a secret: - -```bash -kubectl create secret generic tigera-compliance-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic tigera-compliance-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= --dry-run -o yaml --save-config | kubectl replace -f - -``` diff --git a/calico-cloud/operations/comms/crypto-auth.mdx b/calico-cloud/operations/comms/crypto-auth.mdx deleted file mode 100644 index a10420b696..0000000000 --- a/calico-cloud/operations/comms/crypto-auth.mdx +++ /dev/null @@ -1,112 +0,0 @@ ---- -description: Enable TLS authentication and encryption for various Calico Cloud components. ---- - -# Configure encryption and authentication to secure Calico Cloud components - -## Connections from $[prodname] components to kube-apiserver (Kubernetes and OpenShift) - -We recommend enabling TLS on kube-apiserver, as well as the client certificate and JSON web token (JWT) -authentication modules. This ensures that all of its communications with $[prodname] components occur -over TLS. The $[prodname] components present either an X.509 certificate or a JWT to kube-apiserver -so that kube-apiserver can verify their identities. - -## Connections from Node to Typha (Kubernetes) - -Operator based installations automatically configure mutual TLS authentication on connections from -Felix to Typha. You may also configure this TLS by providing your own secrets. - -### Configure Node to Typha TLS based on your deployment - -For clusters installed using operator, see how to [provide TLS certificates for Typha and Node](typha-node-tls.mdx). 
For detailed reference information on TLS configuration parameters, refer to: - -- **Node**: [Node-Typha TLS configuration](../../reference/component-resources/node/felix/configuration.mdx#felix-typha-tls-configuration) - -{/*TODO-XREFS-CC - **Typha**: [Node-Typha TLS configuration](../../reference/component-resources/typha/configuration#felix-typha-tls-configuration)*/} - -## Web console connections - -The $[prodname] web console interface, which runs in your browser, uses HTTPS to securely communicate -with the $[prodname] web console backend, which in turn communicates with the Kubernetes and $[prodname] API -servers also using HTTPS. Through the installation steps, secure communication between -$[prodname] components should already be configured, but secure communication through your web -browser of choice may not. To verify that this is properly configured, the web browser -you are using should display `Secure` in the address bar. - -Before we set up TLS certificates, it is important to understand the traffic -that we are securing. By default, your web browser communicates with -the $[prodname] web console through a -[`NodePort` service](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport) -over port `30003`. The NodePort service passes through packets without modification. -TLS traffic is [terminated](https://en.wikipedia.org/wiki/TLS_termination_proxy) -at the $[prodname] web console. This means that the TLS certificates used to secure traffic -between your web browser and the $[prodname] web console do not need to be shared or related -to any other TLS certificates that may be used elsewhere in your cluster or when -configuring $[prodname]. 
The flow of traffic should look like the following: - -![the $[prodname] web console traffic diagram](/img/calico-enterprise/cnx-tls-mgr-comms.svg) - -:::note - -The `NodePort` service in the above diagram can be replaced with other -[Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types). -Configuration will vary if another service, such as a load balancer, is placed between the web -browser and the $[prodname] web console. - -::: - -To properly configure TLS in the $[prodname] web console, you will need -certificates and keys signed by an appropriate Certificate Authority (CA). -For more high-level information on certificates, keys, and CAs, see -[this blog post](http://www.steves-internet-guide.com/ssl-certificates-explained/). - -:::note - -It is important when generating your certificates to make sure -that the Common Name or Subject Alternative Name specified in your certificates -matches the host name/DNS entry/IP address that is used to access the $[prodname] web console -(i.e. what it says in the browser address bar). - -::: - -## Issues with certificates - -If your web browser still does not display `Secure` in the address bar, the most -common reasons and their fixes are listed below. - -- **Untrusted Certificate Authority**: Your browser may not display `Secure` because - it does not know (and therefore trust) the certificate authority (CA) that issued - the certificates that the $[prodname] web console is using. This is generally caused by using - self-signed certificates (either generated by Kubernetes or manually). If you have - certificates signed by a recognized CA, we recommend that you use them with the $[prodname] - web console, since the browser will automatically recognize them. - - If you opt to use self-signed certificates, you can still configure your browser to - trust the CA on a per-browser basis by importing the CA certificates into the browser. 
In Google Chrome, this can be achieved by selecting Settings, Advanced, Privacy and security, - Manage certificates, Authorities, Import. This is not recommended, since it requires the CA - to be imported into every browser you access the $[prodname] web console from. - -- **Mismatched Common Name or Subject Alternative Name**: If you are still having issues - securely accessing the $[prodname] web console with TLS, you may want to make sure that the Common Name - or Subject Alternative Name specified in your certificates matches the host name/DNS - entry/IP address that is used to access the $[prodname] web console (i.e. what it says in the browser - address bar). In Google Chrome, you can check the $[prodname] web console certificate with Developer Tools - (Ctrl+Shift+I), Security. If you are issued certificates which do not match, - you will need to reissue the certificates with the correct Common Name or - Subject Alternative Name and reconfigure the $[prodname] web console following the steps above. - -## Ingress proxies and load balancers - -You may wish to configure proxy elements, including hardware or software load balancers, Kubernetes Ingress -proxies, etc., between user web browsers and the $[prodname] web console. If you do so, configure your proxy -such that the $[prodname] web console receives an HTTPS (TLS) connection, not unencrypted HTTP. - -If you require TLS termination at any of these proxy elements, you will need to - -- use a proxy that supports transparent HTTP/2 proxying, for example, [Envoy](https://www.envoyproxy.io/) -- re-originate a TLS connection from your proxy to the $[prodname] web console, as it expects TLS - -If you do not require TLS termination, configure your proxy to "pass thru" the TLS to the $[prodname] web console. 
diff --git a/calico-cloud/operations/comms/index.mdx b/calico-cloud/operations/comms/index.mdx deleted file mode 100644 index 24e13343f5..0000000000 --- a/calico-cloud/operations/comms/index.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -description: Secure communications for Calico components. -hide_table_of_contents: true ---- - -# Secure Calico component communications - -import DocCardList from '@theme/DocCardList'; -import { useCurrentSidebarCategory } from '@docusaurus/theme-common'; - - diff --git a/calico-cloud/operations/comms/log-storage-tls.mdx b/calico-cloud/operations/comms/log-storage-tls.mdx deleted file mode 100644 index 906dd93af1..0000000000 --- a/calico-cloud/operations/comms/log-storage-tls.mdx +++ /dev/null @@ -1,45 +0,0 @@ ---- -description: Add TLS certificate to secure access to log storage. ---- - -# Provide TLS certificates for log storage - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] log storage. - -## Value - -Providing TLS certificates for $[prodname] components is recommended as part of a zero trust network model for security. - -## Before you begin... - -By default, the $[prodname] log storage uses self-signed certificates on connections. To provide TLS certificates, -get the certificate and key pair for the $[prodname] log storage using any X.509-compatible tool or from your organization's -Certificate Authority. The certificate must include the following Subject Alternate Names or DNS names: `tigera-secure-es-http.tigera-elasticsearch.svc` and `tigera-secure-es-gateway-http.tigera-elasticsearch.svc`. - -If your cluster has Windows nodes, the certificate must additionally include `tigera-secure-es-http.tigera-elasticsearch.svc.` where `` is the local domain specified for in-cluster DNS. 
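As a sketch (assuming OpenSSL 1.1.1+ for `-addext`), a certificate carrying both required names can be generated like this; the self-signed form is for illustration only, and production clusters should use a certificate issued by your organization's CA:

```shell
# Generate a key and a self-signed certificate with both SANs that the
# log storage components expect.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout es-tls.key -out es-tls.crt \
  -subj "/CN=tigera-secure-es-http.tigera-elasticsearch.svc" \
  -addext "subjectAltName=DNS:tigera-secure-es-http.tigera-elasticsearch.svc,DNS:tigera-secure-es-gateway-http.tigera-elasticsearch.svc"

# Confirm both SANs are present before creating the secret.
openssl x509 -in es-tls.crt -noout -ext subjectAltName
```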
## How to - -### Add TLS certificates for log storage - -To provide TLS certificates for use by $[prodname] components during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the LogStorage resource. Use the following command to create a secret: - -```bash -kubectl create secret generic tigera-secure-elasticsearch-cert -n tigera-operator --from-file=tls.crt= --from-file=tls.key= -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic tigera-secure-elasticsearch-cert -n tigera-operator --from-file=tls.crt= --from-file=tls.key= --dry-run -o yaml --save-config | kubectl replace -f - -``` - -:::note - -If the $[prodname] log storage already exists, you must manually delete the log storage pods one by one -after updating the secret. These pods will be in the `tigera-elasticsearch` namespace with the prefix `tigera-secure-es`. -Other $[prodname] components will not be able to communicate with log storage until the pods are restarted. - -::: diff --git a/calico-cloud/operations/comms/manager-tls.mdx b/calico-cloud/operations/comms/manager-tls.mdx deleted file mode 100644 index fe69f946f6..0000000000 --- a/calico-cloud/operations/comms/manager-tls.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -description: Add TLS certificates to secure access to the web console user interface. ---- - -# Provide TLS certificates for the web console - -## Big picture - -Provide TLS certificates that secure access to the $[prodname] web console user interface. - -## Value - -By default, the $[prodname] web console uses self-signed TLS certificates on connections. This article describes how to provide TLS certificates that users' browsers will trust. - -## Before you begin... - -- **Get the certificate and key pair for the $[prodname] web console** - Generate the certificate using any X.509-compatible tool or from your organization's Certificate Authority. 
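If your organization's CA issues certificates from signing requests, one way to produce the request with `openssl` is sketched below (assuming OpenSSL 1.1.1+; `manager.example.com` is a placeholder and must be replaced with the name users actually type into the browser):

```shell
# Sketch: create a private key and a CSR to submit to your organization's CA.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout manager.key -out manager.csr \
  -subj "/CN=manager.example.com" \
  -addext "subjectAltName=DNS:manager.example.com"

# Sanity-check the request before submitting it.
openssl req -in manager.csr -noout -verify
```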
{/*TODO-XREFS-CC -The certificate must have Common Name or Subject Alternate Names that match the IPs or DNS names that will be used to [access the web console](/operations/cnx/access-the-manager/). -*/} -## How to - -To provide certificates for use during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the Installation resource. To specify certificates for use by the web console, create a secret using the following command: - -```bash -kubectl create secret generic manager-tls -n tigera-operator --from-file=cert= --from-file=key= -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic manager-tls -n tigera-operator --from-file=cert= --from-file=key= --dry-run -o yaml --save-config | kubectl replace -f - -``` - -If the $[prodname] web console is already running, updating the secret should cause it to restart and pick up the new certificate and key. This will result in a short period of unavailability of the $[prodname] web console. - -## Additional resources - -Additional documentation is available for securing [the $[prodname] web console connections](crypto-auth.mdx#calico-enterprise-manager-connections). diff --git a/calico-cloud/operations/comms/packetcapture-tls.mdx b/calico-cloud/operations/comms/packetcapture-tls.mdx deleted file mode 100644 index c75f45f90c..0000000000 --- a/calico-cloud/operations/comms/packetcapture-tls.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -description: Add TLS certificate to secure access to PacketCapture APIs. ---- - -# Provide TLS certificates for PacketCapture APIs - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] PacketCapture components. - -## Value - -Providing TLS certificates for $[prodname] PacketCapture components is recommended as part of a zero trust network model for security. - -## Before you begin... 
By default, $[prodname] uses self-signed certificates for its PacketCapture API components. To provide TLS certificates, -get the certificate and key pair for the $[prodname] PacketCapture components using any X.509-compatible tool or from your organization's -Certificate Authority. The certificate must have a Common Name or a Subject Alternate Name of `tigera-packetcapture.tigera-packetcapture.svc`. - -## How to - -### Add TLS certificates for PacketCapture - -To provide TLS certificates for use by $[prodname] PacketCapture components during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the APIServer resource. Use the following command to create a secret: - -```bash -kubectl create secret generic tigera-packetcapture-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic tigera-packetcapture-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= --dry-run -o yaml --save-config | kubectl replace -f - -``` diff --git a/calico-cloud/operations/comms/secure-metrics.mdx b/calico-cloud/operations/comms/secure-metrics.mdx deleted file mode 100644 index 14b4a1f3c5..0000000000 --- a/calico-cloud/operations/comms/secure-metrics.mdx +++ /dev/null @@ -1,514 +0,0 @@ ---- -description: Limit access to Calico Cloud metric endpoints using network policy. ---- - -# Secure Calico Cloud Prometheus endpoints - -## About securing access to $[prodname]'s metrics endpoints - -When using $[prodname] with Prometheus metrics enabled, we recommend using network policy -to limit access to $[prodname]'s metrics endpoints. - -## Prerequisites - -- $[prodname] is installed with Prometheus metrics reporting enabled. - -{/*TODO-XREFS-CC -- `calicoctl` is [installed in your PATH and configured to access the data store](/operations/clis/calicoctl/install). 
-*/} -## Choosing an approach - -This guide provides two example workflows for creating network policies to limit access -to $[prodname]'s Prometheus metrics. Choosing an approach depends on your requirements. - -- [Using a deny-list approach](#using-a-deny-list-approach) - - This approach allows all traffic to your hosts by default, but lets you limit access to specific ports using - $[prodname] policy. This approach allows you to restrict access to specific ports, while leaving other - host traffic unaffected. - -- [Using an allow-list approach](#using-an-allow-list-approach) - - This approach denies traffic to and from your hosts by default, and requires that all - desired communication be explicitly allowed by a network policy. This approach is more secure because - only explicitly-allowed traffic will get through, but it requires you to know all the ports that should be open on the host. - -## Using a deny-list approach - -### Overview - -The basic process is as follows: - -1. Create a default network policy that allows traffic to and from your hosts. -1. Create host endpoints for each node that you'd like to secure. -1. Create a network policy that denies unwanted traffic to the $[prodname] metrics endpoints. -1. Apply labels to allow access to the Prometheus metrics. - -### Example for $[nodecontainer] - -This example shows how to limit access to the $[nodecontainer] Prometheus metrics endpoints. - -1. Create a default network policy to allow host traffic - - First, create a default-allow policy. Do this first to avoid a drop in connectivity when adding the host endpoints - later, since host endpoints with no policy default to deny. - - To do this, create a file named `default-host-policy.yaml` with the following contents. - - ```yaml - apiVersion: projectcalico.org/v3 - kind: GlobalNetworkPolicy - metadata: - name: default-host - spec: - # Select all $[prodname] nodes. 
- selector: running-calico == "true" - order: 5000 - ingress: - - action: Allow - egress: - - action: Allow - ``` - - Then, use `kubectl` to apply this policy. - - ```bash - kubectl apply -f default-host-policy.yaml - ``` - -1. List the nodes on which $[prodname] is running with the following command. - - ```bash - calicoctl get nodes - ``` - - In this case, we have two nodes in the cluster. - - ``` - NAME - kubeadm-master - kubeadm-node-0 - ``` - -1. Create host endpoints for each $[prodname] node. - - Create a file named `host-endpoints.yaml` containing a host endpoint for each node listed - above. In this example, the contents would look like this. - - ```yaml - apiVersion: projectcalico.org/v3 - kind: HostEndpoint - metadata: - name: kubeadm-master.eth0 - labels: - running-calico: 'true' - spec: - node: kubeadm-master - interfaceName: eth0 - expectedIPs: - - 10.100.0.15 - --- - apiVersion: projectcalico.org/v3 - kind: HostEndpoint - metadata: - name: kubeadm-node-0.eth0 - labels: - running-calico: 'true' - spec: - node: kubeadm-node-0 - interfaceName: eth0 - expectedIPs: - - 10.100.0.16 - ``` - - In this file, replace `eth0` with the desired interface name on each node, and populate the - `expectedIPs` section with the IP addresses on that interface. - - Note the use of a label to indicate that this host endpoint is running $[prodname]. The - label matches the selector of the network policy created in step 1. - - Then, use `kubectl` to apply the host endpoints with the following command. - - ```bash - kubectl apply -f host-endpoints.yaml - ``` - -1. Create a network policy that restricts access to the $[nodecontainer] Prometheus metrics port. - - Now let's create a network policy that limits access to the Prometheus metrics port such that - only endpoints with the label `calico-prometheus-access: true` can access the metrics. - - To do this, create a file named `calico-prometheus-policy.yaml` with the following contents. 
- - ```yaml - # Allow traffic to Prometheus only from sources that are - # labeled as such, but don't impact any other traffic. - apiVersion: projectcalico.org/v3 - kind: GlobalNetworkPolicy - metadata: - name: restrict-calico-node-prometheus - spec: - # Select all $[prodname] nodes. - selector: running-calico == "true" - order: 500 - types: - - Ingress - ingress: - # Deny anything that tries to access the Prometheus port - # but that doesn't match the necessary selector. - - action: Deny - protocol: TCP - source: - notSelector: calico-prometheus-access == "true" - destination: - ports: - - 9091 - ``` - - This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress deny rule. - The ingress rule denies traffic to port 9091 unless the source of traffic has the label `calico-prometheus-access: true`, meaning - all $[prodname] workload endpoints, host endpoints, and global network sets that do not have the label, as well as any - other network endpoints unknown to $[prodname]. - - Then, use `kubectl` to apply this policy. - - ```bash - kubectl apply -f calico-prometheus-policy.yaml - ``` - -1. Apply labels to any endpoints that should have access to the metrics. - - At this point, only endpoints that have the label `calico-prometheus-access: true` can reach - $[prodname]'s Prometheus metrics endpoints on each node. To grant access, simply add this label to the - desired endpoints. - - For example, to allow access to a Kubernetes pod you can run the following command. - - ```bash - kubectl label pod my-prometheus-pod calico-prometheus-access=true - ``` - - If you would like to grant access to a specific IP network, you - can create a [global network set](../../reference/resources/globalnetworkset.mdx) using `kubectl`. - - For example, you might want to grant access to your management subnets. 
- - ```yaml - apiVersion: projectcalico.org/v3 - kind: GlobalNetworkSet - metadata: - name: calico-prometheus-set - labels: - calico-prometheus-access: 'true' - spec: - nets: - - 172.15.0.0/24 - - 172.101.0.0/24 - ``` - -### Additional steps for Typha deployments - -If your $[prodname] installation uses the Kubernetes API datastore and has greater than 50 nodes, it is likely -that you have installed Typha. This section shows how to use an additional network policy to secure the Typha -Prometheus endpoints. - -After following the steps above, create a file named `typha-prometheus-policy.yaml` with the following contents. - -```yaml -# Allow traffic to Prometheus only from sources that are -# labeled as such, but don't impact any other traffic. -apiVersion: projectcalico.org/v3 -kind: GlobalNetworkPolicy -metadata: - name: restrict-calico-node-prometheus -spec: - # Select all $[prodname] nodes. - selector: running-calico == "true" - order: 500 - types: - - Ingress - ingress: - # Deny anything that tries to access the Prometheus port - # but that doesn't match the necessary selector. - - action: Deny - protocol: TCP - source: - notSelector: calico-prometheus-access == "true" - destination: - ports: - - 9093 -``` - -This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress deny rule. -The ingress rule denies traffic to port 9093 unless the source of traffic has the label `calico-prometheus-access: true`, meaning -all $[prodname] workload endpoints, host endpoints, and global network sets that do not have the label, as well as any -other network endpoints unknown to $[prodname]. - -Then, use `kubectl` to apply this policy. - -```bash -kubectl apply -f typha-prometheus-policy.yaml -``` - -### Example for kube-controllers - -If your $[prodname] installation exposes metrics from kube-controllers, you can limit access to those metrics -with the following network policy. 
-
-Create a file named `kube-controllers-prometheus-policy.yaml` with the following contents.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: restrict-kube-controllers-prometheus
-  namespace: calico-system
-spec:
-  # Select kube-controllers.
-  selector: k8s-app == "calico-kube-controllers"
-  order: 500
-  types:
-    - Ingress
-  ingress:
-    # Deny anything that tries to access the Prometheus port
-    # but that doesn't match the necessary selector.
-    - action: Deny
-      protocol: TCP
-      source:
-        notSelector: calico-prometheus-access == "true"
-      destination:
-        ports:
-          - 9094
-```
-
-:::note
-
-The above policy is installed in the calico-system namespace. If your cluster has $[prodname] installed
-in the kube-system namespace, you will need to create the policy in that namespace instead.
-
-:::
-
-Then, use `kubectl` to apply this policy.
-
-```bash
-kubectl apply -f kube-controllers-prometheus-policy.yaml
-```
-
-## Using an allow-list approach
-
-### Overview
-
-The basic process is as follows:
-
-1. Create host endpoints for each node that you'd like to secure.
-1. Create a network policy that allows desired traffic to the $[prodname] metrics endpoints.
-1. Apply labels to allow access to the Prometheus metrics.
-
-### Example for $[nodecontainer]
-
-1. List the nodes on which $[prodname] is running with the following command.
-
-   ```bash
-   calicoctl get nodes
-   ```
-
-   In this case, we have two nodes in the cluster.
-
-   ```
-   NAME
-   kubeadm-master
-   kubeadm-node-0
-   ```
-
-1. Create host endpoints for each $[prodname] node.
-
-   Create a file named `host-endpoints.yaml` containing a host endpoint for each node listed
-   above. In this example, the contents would look like this.
-
-   ```yaml
-   apiVersion: projectcalico.org/v3
-   kind: HostEndpoint
-   metadata:
-     name: kubeadm-master.eth0
-     labels:
-       running-calico: 'true'
-   spec:
-     node: kubeadm-master
-     interfaceName: eth0
-     expectedIPs:
-       - 10.100.0.15
-   ---
-   apiVersion: projectcalico.org/v3
-   kind: HostEndpoint
-   metadata:
-     name: kubeadm-node-0.eth0
-     labels:
-       running-calico: 'true'
-   spec:
-     node: kubeadm-node-0
-     interfaceName: eth0
-     expectedIPs:
-       - 10.100.0.16
-   ```
-
-   In this file, replace `eth0` with the desired interface name on each node, and populate the
-   `expectedIPs` section with the IP addresses on that interface.
-
-   Note the use of a label to indicate that this host endpoint is running $[prodname]. The
-   label matches the selector of the network policy created in the next step.
-
-   Then, use `kubectl` to apply the host endpoints with the following command. This will prevent all
-   traffic to and from the host endpoints.
-
-   ```bash
-   kubectl apply -f host-endpoints.yaml
-   ```
-
-   :::note
-
-   $[prodname] allows some traffic as a failsafe even after applying this policy. This can
-   be adjusted using the `failsafeInboundHostPorts` and `failsafeOutboundHostPorts` options
-   on the [FelixConfiguration resource](../../reference/resources/felixconfig.mdx).
-
-   :::
-
-1. Create a network policy that allows access to the $[nodecontainer] Prometheus metrics port.
-
-   Now let's create a network policy that allows access to the Prometheus metrics port such that
-   only endpoints with the label `calico-prometheus-access: true` can access the metrics.
-
-   To do this, create a file named `calico-prometheus-policy.yaml` with the following contents.
-
-   ```yaml
-   apiVersion: projectcalico.org/v3
-   kind: GlobalNetworkPolicy
-   metadata:
-     name: restrict-calico-node-prometheus
-   spec:
-     # Select all $[prodname] nodes.
-     selector: running-calico == "true"
-     order: 500
-     types:
-       - Ingress
-     ingress:
-       # Allow traffic from selected sources to the Prometheus port.
- - action: Allow - protocol: TCP - source: - selector: calico-prometheus-access == "true" - destination: - ports: - - 9091 - ``` - - This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress allow rule. - The ingress rule allows traffic to port 9091 from any source with the label `calico-prometheus-access: true`, meaning - all $[prodname] workload endpoints, host endpoints, and global network sets that have the label will be allowed access. - - Then, use `kubectl` to apply this policy. - - ```bash - kubectl apply -f calico-prometheus-policy.yaml - ``` - -1. Apply labels to any endpoints that should have access to the metrics. - - At this point, only endpoints that have the label `calico-prometheus-access: true` can reach - $[prodname]'s Prometheus metrics endpoints on each node. To grant access, simply add this label to the - desired endpoints. - - For example, to allow access to a Kubernetes pod you can run the following command. - - ```bash - kubectl label pod my-prometheus-pod calico-prometheus-access=true - ``` - - If you would like to grant access to a specific IP address in your network, you - can create a [global network set](../../reference/resources/globalnetworkset.mdx) using `kubectl`. - - For example, creating the following network set would grant access to a host with IP 172.15.0.101. - - ```yaml - apiVersion: projectcalico.org/v3 - kind: GlobalNetworkSet - metadata: - name: calico-prometheus-set - labels: - calico-prometheus-access: 'true' - spec: - nets: - - 172.15.0.101/32 - ``` - -### Additional steps for Typha deployments - -If your $[prodname] installation uses the Kubernetes API datastore and has greater than 50 nodes, it is likely -that you have installed Typha. This section shows how to use an additional network policy to secure the Typha -Prometheus endpoints. - -After following the steps above, create a file named `typha-prometheus-policy.yaml` with the following contents. 
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: restrict-typha-prometheus
-spec:
-  # Select all $[prodname] nodes.
-  selector: running-calico == "true"
-  order: 500
-  types:
-    - Ingress
-  ingress:
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: calico-prometheus-access == "true"
-      destination:
-        ports:
-          - 9093
-```
-
-This policy selects all endpoints that have the label `running-calico: true` and enforces a single ingress allow rule.
-The ingress rule allows traffic to port 9093 from any source with the label `calico-prometheus-access: true`, meaning
-all $[prodname] workload endpoints, host endpoints, and global network sets that have the label will be allowed access.
-
-Then, use `kubectl` to apply this policy.
-
-```bash
-kubectl apply -f typha-prometheus-policy.yaml
-```
-
-### Example for kube-controllers
-
-If your $[prodname] installation exposes metrics from kube-controllers, you can limit access to those metrics
-with the following network policy.
-
-Create a file named `kube-controllers-prometheus-policy.yaml` with the following contents.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: restrict-kube-controllers-prometheus
-  namespace: calico-system
-spec:
-  selector: k8s-app == "calico-kube-controllers"
-  order: 500
-  types:
-    - Ingress
-  ingress:
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: calico-prometheus-access == "true"
-      destination:
-        ports:
-          - 9094
-```
-
-Then, use `kubectl` to apply this policy.
-
-```bash
-kubectl apply -f kube-controllers-prometheus-policy.yaml
-```
diff --git a/calico-cloud/operations/comms/typha-node-tls.mdx b/calico-cloud/operations/comms/typha-node-tls.mdx
deleted file mode 100644
index e18e854fd9..0000000000
--- a/calico-cloud/operations/comms/typha-node-tls.mdx
+++ /dev/null
@@ -1,85 +0,0 @@
----
-description: Add TLS certificates to secure communications between Typha and Node if you are using Typha to scale your deployment.
---- - -# Provide TLS certificates for Typha and Node - -## Big picture - -Provide TLS certificates that allow mutual TLS authentication between Node and Typha. - -## Value - -By default, $[prodname] Typha and Node components are configured with self-signed Certificate Authority (CA) and certificates for mutual TLS authentication. This article describes how to provide a CA and TLS certificates. - -## Concepts - -**Mutual TLS authentication** means each side of a connection authenticates the other side. As such, the CA and certificates that are used must all be in sync. If one side of the connection is updated with a certificate that is not compatible with the other side, communication stops. So if certificate updates are mismatched on Typha, Node, or CA certificate, new pod networking and policy application will be interrupted until you restore compatibility. To make it easy to keep updates in sync, this article describes how to use one command to apply updates for all resources. - -## Before you begin... - -**Get the Certificate Authority certificate and signed certificate and key pairs for $[prodname] Typha and Node** - -- Generate the certificates using any X.509-compatible tool or from your organization's CA. -- Ensure the generated certificates meet the requirements for [TLS connections between Node and Typha](crypto-auth.mdx#connections-from-node-to-typha-kubernetes). - -## How to - -### Create resource file - -1. Create the CA ConfigMap with the following commands: - - ```bash - kubectl create configmap typha-ca -n tigera-operator --from-file=caBundle= --dry-run -o yaml --save-config > typha-node-tls.yaml - echo '---' >> typha-node-tls.yaml - ``` - - :::tip - - The contents of the caBundle field should contain the CA or the certificates for both Typha and Node. - It is possible to add multiple PEM blocks. - - ::: - -1. 
Create the Typha Secret with the following command: - - ```bash - kubectl create secret generic typha-certs -n tigera-operator \ - --from-file=tls.crt= --from-file=tls.key= \ - --from-literal=common-name= --dry-run -o yaml --save-config >> typha-node-tls.yaml - echo '---' >> typha-node-tls.yaml - ``` - - :::tip - - If using SPIFFE identifiers replace `--from-literal=common-name=` with `--from-literal=uri-san=`. - - ::: - -1. Create the Node Secret with the following command: - - ```bash - kubectl create secret generic node-certs -n tigera-operator \ - --from-file=tls.crt= --from-file=tls.key= \ - --from-literal=common-name= --dry-run -o yaml --save-config >> typha-node-tls.yaml - ``` - - :::tip - - If using SPIFFE identifiers replace `--from-literal=common-name=` with `--from-literal=uri-san=`. - - ::: - -### Apply or update resources - -1. Apply the `typha-node-tls.yaml` file. - - To create these resource for use during deployment, you must apply this file before applying `custom-resource.yaml` or before creating the Installation resource. To apply this file, use the following command: - ```bash - kubectl apply -f typha-node-tls.yaml - ``` - - To update existing resources, use the following command: - ```bash - kubectl replace -f typha-node-tls.yaml - ``` - -If $[prodname] Node and Typha are already running, the update causes a rolling restart of both. If the new CA and certificates are not compatible with the previous set, there may be a period where the Node pods produce errors until the old set CA and certificates are replaced with the new ones. diff --git a/calico-cloud/operations/index.mdx b/calico-cloud/operations/index.mdx index 203cb25a91..d4d2a8622b 100644 --- a/calico-cloud/operations/index.mdx +++ b/calico-cloud/operations/index.mdx @@ -15,13 +15,6 @@ Post-installation tasks for managing Calico Cloud. 
-## Secure component communications - - - - - - ## Monitoring diff --git a/calico-cloud/operations/monitor/metrics/bgp-metrics.mdx b/calico-cloud/operations/monitor/metrics/bgp-metrics.mdx index 43e8b8357e..f0f2212668 100644 --- a/calico-cloud/operations/monitor/metrics/bgp-metrics.mdx +++ b/calico-cloud/operations/monitor/metrics/bgp-metrics.mdx @@ -154,5 +154,4 @@ irate(bgp_route_updates_received{ip_version="IPv4"}[5m]) ## Additional resources -- [Secure $[prodname] Prometheus endpoints](../../comms/secure-metrics.mdx) - [Configuring Prometheus](../prometheus/index.mdx) diff --git a/calico-cloud/reference/resources/bgppeer.mdx b/calico-cloud/reference/resources/bgppeer.mdx index b8cf62f2ae..7dfe8bc029 100644 --- a/calico-cloud/reference/resources/bgppeer.mdx +++ b/calico-cloud/reference/resources/bgppeer.mdx @@ -48,7 +48,7 @@ spec: | localWorkloadSelector | Selector for the local workloads that the node should peer with. When this is set, the `peerSelector` and `peerIP` fields must be empty and the `localWorkloadPeeringIPV4` and/or `localWorkloadPeeringIPV6` fields in the `BGPConfiguration` resource must be configured. It is also important to configure appropriate import/export filters when using this feature. See the [guide](../../networking/configuring/bgp-to-workload.mdx) for details. | | [selector](#selectors) | | | keepOriginalNextHop | Maintain and forward the original next hop BGP route attribute to a specific Peer within a different AS. | | boolean | | extensions | Additional mapping of keys and values. Used for setting values in custom BGP configurations. | valid strings for both keys and values | map | | -| password | [BGP password](../../operations/comms/secure-bgp.mdx) for the peerings generated by this BGPPeer resource. | | [BGPPassword](#bgppassword) | `nil` (no password) | +| password | [BGP password](../../networking/configuring/secure-bgp.mdx) for the peerings generated by this BGPPeer resource. 
| | [BGPPassword](#bgppassword) | `nil` (no password) | | sourceAddress | Specifies whether and how to configure a source address for the peerings generated by this BGPPeer resource. Default value "UseNodeIP" means to configure the node IP as the source address. "None" means not to configure a source address. | "UseNodeIP", "None" | string | "UseNodeIP" | | failureDetectionMode | Specifies whether and how to detect loss of connectivity on the peerings generated by this BGPPeer resource. Default value "None" means nothing beyond BGP's own (slow) hold timer. "BFDIfDirectlyConnected" means to use BFD when the peer is directly connected. | "None", "BFDIfDirectlyConnected" | string | "None" | | restartMode | Specifies restart behaviour to configure on the peerings generated by this BGPPeer resource. Default value "GracefulRestart" means traditional graceful restart. "LongLivedGracefulRestart" means LLGR according to draft-uttaro-idr-bgp-persistence-05. | "GracefulRestart", "LongLivedGracefulRestart" | string | "GracefulRestart" | diff --git a/calico-enterprise/getting-started/bare-metal/about.mdx b/calico-enterprise/getting-started/bare-metal/about.mdx index aa73c76e46..4b7dbc936c 100644 --- a/calico-enterprise/getting-started/bare-metal/about.mdx +++ b/calico-enterprise/getting-started/bare-metal/about.mdx @@ -97,7 +97,7 @@ To learn how to restrict traffic to/from hosts and VMs using Calico network poli | -------------- | -------------------------------------------------------------------------------------------------------- | ---------------------- | | namespace | Optional. The namespace where the service account for non-cluster hosts resides. | calico-system | | serviceaccount | Optional. The service account used by non-cluster hosts to authenticate and securely access the cluster. | tigera-noncluster-host | - | certfile | Optional. Path to the file containing the PEM-encoded authority certificates. 
Use this option if you are providing your own [TLS certificates for Calico Enterprise Manager](../../operations/comms/manager-tls.mdx). If not specified, the Tigera root CA certificate will be used by default. | | + | certfile | Optional. Path to the file containing the PEM-encoded authority certificates. Use this option if you are providing your own [TLS certificates for $[prodname]](../../operations/comms/index.mdx). If not specified, the Tigera root CA certificate will be used by default. | | 1. Create a [`HostEndpoint` resource](../../reference/host-endpoints/overview.mdx) for each non-cluster host or VM. The `node` and `expectedIPs` fields are required to match the hostname and the expected interface IP addresses. diff --git a/calico-enterprise/getting-started/bare-metal/typha-node-tls.mdx b/calico-enterprise/getting-started/bare-metal/typha-node-tls.mdx index a45a8a516a..086be0ce8f 100644 --- a/calico-enterprise/getting-started/bare-metal/typha-node-tls.mdx +++ b/calico-enterprise/getting-started/bare-metal/typha-node-tls.mdx @@ -22,7 +22,7 @@ Get the Certificate Authority certificate and signed certificate and key pairs f 1. Package your CA certificates into a ConfigMap. - Run the following command to create a ConfigMap containing your CA certificates. If you have already created the `typha-ca` ConfigMap following the steps in [Provide TLS certificates for Typha and Node](../../operations/comms/typha-node-tls.mdx), and your BYO certificates are signed by the same CA included in that ConfigMap, you can skip this step. + Run the following command to create a ConfigMap containing your CA certificates. If you have already created the `typha-ca` ConfigMap following the steps in [Provide TLS certificates](../../operations/comms/index.mdx), and your BYO certificates are signed by the same CA included in that ConfigMap, you can skip this step. 
```bash kubectl create configmap typha-ca -n tigera-operator --from-file=caBundle= diff --git a/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm.mdx b/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm.mdx index fce7e189ae..3ac74e30d3 100644 --- a/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm.mdx +++ b/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/helm.mdx @@ -34,7 +34,7 @@ have their reclaim policy set to [retain data](https://kubernetes.io/docs/tasks/ Retaining data is only recommended for users that use a valid Elastic license. Trial licenses can get invalidated during the upgrade. -If your cluster has Windows nodes and uses custom TLS certificates for log storage then, prior to upgrade, prepare and apply new certificates for [log storage](../../../../operations/comms/log-storage-tls.mdx) that include the required service DNS names. +If your cluster has Windows nodes and uses custom TLS certificates for log storage then, prior to upgrade, prepare and apply new certificates for [log storage](../../../../operations/comms/index.mdx) that include the required service DNS names. ### Upgrade OwnerReferences diff --git a/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator.mdx b/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator.mdx index 7baaad959b..d9831d718a 100644 --- a/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator.mdx +++ b/calico-enterprise/getting-started/upgrading/upgrading-enterprise/kubernetes-upgrade-tsee/operator.mdx @@ -63,7 +63,7 @@ $[prodname] creates a default-deny for the calico-system namespace. 
If you deplo ### Windows -If your cluster has Windows nodes and uses custom TLS certificates for log storage, prior to upgrade, prepare and apply new certificates for [log storage](../../../../operations/comms/log-storage-tls.mdx) that include the required service DNS names. +If your cluster has Windows nodes and uses custom TLS certificates for log storage, prior to upgrade, prepare and apply new certificates for [log storage](../../../../operations/comms/index.mdx) that include the required service DNS names. For AKS only, upgrades to a newer version will automatically upgrade $[prodnameWindows]. During the upgrade, Windows nodes will be tainted so new pods will not be scheduled until the upgrade of the node has finished. The $[prodnameWindows] upgrade status can be monitored with: `kubectl get tigerastatus calico -oyaml`. diff --git a/calico-enterprise/getting-started/upgrading/upgrading-enterprise/openshift-upgrade.mdx b/calico-enterprise/getting-started/upgrading/upgrading-enterprise/openshift-upgrade.mdx index ab25afbf4f..1c1e5b3c00 100644 --- a/calico-enterprise/getting-started/upgrading/upgrading-enterprise/openshift-upgrade.mdx +++ b/calico-enterprise/getting-started/upgrading/upgrading-enterprise/openshift-upgrade.mdx @@ -66,7 +66,7 @@ $[prodname] creates a default-deny for the calico-system namespace. If you deplo ### Windows -If your cluster has Windows nodes and uses custom TLS certificates for log storage, prior to upgrade, prepare and apply new certificates for [log storage](../../../operations/comms/log-storage-tls.mdx) that include the required service DNS names. +If your cluster has Windows nodes and uses custom TLS certificates for log storage, prior to upgrade, prepare and apply new certificates for [log storage](../../../operations/comms/index.mdx) that include the required service DNS names. 
### Multi-cluster management diff --git a/calico-enterprise/operations/comms/secure-bgp.mdx b/calico-enterprise/networking/configuring/secure-bgp.mdx similarity index 100% rename from calico-enterprise/operations/comms/secure-bgp.mdx rename to calico-enterprise/networking/configuring/secure-bgp.mdx diff --git a/calico-enterprise/operations/comms/apiserver-tls.mdx b/calico-enterprise/operations/comms/apiserver-tls.mdx deleted file mode 100644 index 6aac5dd588..0000000000 --- a/calico-enterprise/operations/comms/apiserver-tls.mdx +++ /dev/null @@ -1,52 +0,0 @@ ---- -description: Add TLS certificates to secure access to the Calico Enterprise API server. ---- - -# Provide TLS certificates for the API server - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] API server. - -## Value - -Providing TLS certificates for $[prodname] components is recommended as part of a zero trust network model for security. - -## Concepts - -### $[prodname] API server - -The $[prodname] API server handles requests for $[prodname] API resources. The main Kubernetes API server has an aggregation layer and will proxy requests for the $[prodname] API resources to the $[prodname] API server. - -## Before you begin... - -By default, the $[prodname] API server uses self-signed certificates on connections. To provide TLS certificates, -get the certificate and key pair for the $[prodname] API Server using any X.509-compatible tool or from your organization's Certificate Authority. The certificate must have Common Name or a Subject Alternate Name of `calico-api.calico-system.svc`. - -This feature is available for Kubernetes and OpenShift. - -## How to - -### Add TLS certificates - -To provide certificates for use during deployment you must create a secret before applying the 'custom-resource.yaml' or before creating the Installation resource. 
To specify certificates for the $[prodname] API server, create a secret using the following command:
-
-```bash
-kubectl create secret generic calico-apiserver-certs -n tigera-operator --from-file=apiserver.crt= --from-file=apiserver.key=
-```
-
-To update existing certificates, run the following command:
-
-```bash
-kubectl create secret generic calico-apiserver-certs -n tigera-operator --from-file=apiserver.crt= --from-file=apiserver.key= --dry-run -o yaml --save-config | kubectl replace -f -
-```
-
-:::note
-
-If the $[prodname] API server is already running, updating the secret restarts the API server. While the server restarts, the $[prodname] API server may be unavailable for a short period of time.
-
-:::
-
-## Additional resources
-
-Additional documentation is available for securing [the $[prodname] web console connections](crypto-auth.mdx#connections-from-calico-enterprise-components-to-kube-apiserver-kubernetes-and-openshift).
diff --git a/calico-enterprise/operations/comms/certificate-management.mdx b/calico-enterprise/operations/comms/certificate-management.mdx
deleted file mode 100644
index e5d4868ead..0000000000
--- a/calico-enterprise/operations/comms/certificate-management.mdx
+++ /dev/null
@@ -1,146 +0,0 @@
----
-description: Control the issuer of certificates used by Calico Enterprise.
----
-
-# Manage TLS certificates used by Calico Enterprise
-
-## Big picture
-
-Enable custom workflows for issuing and signing certificates used to secure communication between $[prodname] components.
-
-## Value
-
-Some deployments have security requirements that strictly minimize or eliminate access to private keys and/or
-requirements to control the trusted certificates throughout clusters. Using the Kubernetes Certificates API, which automates
-certificate issuance, $[prodname] provides a simple configuration option that you add to your installation.
- -## Before you begin - -**Limitations** - -If your cluster is already running $[prodname] and you would like to enable certificate management, you need to -temporarily remove [the logstorage resource](../../reference/installation/api.mdx#logstorage) -before following the steps to enable certificate management and then re-apply afterwards. For detailed steps on -re-creating logstorage, read more on [how to create a new Elasticsearch cluster](../../observability/elastic/troubleshoot.mdx#how-to-create-a-new-cluster). - -Currently, this feature is not supported in combination with [Multi-cluster management](../../multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx). - -**Supported algorithms** - -- Private Key Pair: RSA (size: 2048, 4096, 8192), ECDSA (curve: 256, 384, 521) -- Certificate Signature: RSA (sha: 256, 384, 512), ECDSA (sha: 256, 384, 512) - -## How to - -- [Enable certificate management](#enable-certificate-management) -- [Verify and monitor](#verify-and-monitor) -- [Implement your own signing/approval process](#implement-your-own-signing-and-approval-process) - -### Enable certificate management - -1. Modify your [the installation resource](../../reference/installation/api.mdx#installation) - resource and add the `certificateManagement` section. Apply the following change to your cluster. - - ```yaml - apiVersion: operator.tigera.io/v1 - kind: Installation - metadata: - name: default - spec: - certificateManagement: - caCert: - signerName: / - signatureAlgorithm: SHA512WithRSA - keyAlgorithm: RSAWithSize4096 - ``` - - Done! If you have an automatic signer and approver, there is nothing left to do. - The next section explains in more detail how to verify and monitor the status. - -### Verify and monitor - -1. 
Monitor your pods as they come up: - - ``` - kubectl get pod -n calico-system -w - NAMESPACE NAME READY STATUS RESTARTS AGE - calico-system calico-node-5ckvq 0/1 Pending 0 0s - calico-system calico-typha-688c9957f5-h9c5w 0/1 Pending 0 0s - calico-system calico-node-5ckvq 0/1 Init:0/3 0 1s - calico-system calico-typha-688c9957f5-h9c5w 0/1 Init:0/1 0 1s - calico-system calico-node-5ckvq 0/1 PodInitializing 0 2s - calico-system calico-typha-688c9957f5-h9c5w 0/1 PodInitializing 0 2s - calico-system calico-node-5ckvq 1/1 Running 0 3s - calico-system calico-typha-688c9957f5-h9c5w 1/1 Running 0 3s - ``` - - During the `Init` phase a certificate signing request (CSR) is created by an init container of the pod. - It will be stuck in the `Init` phase. - Once the CSR has been approved and signed by the certificate authority, the pod continues with `PodInitializing` and eventually `Running`. - -2. Monitor certificate signing requests: - - ``` - kubectl get csr -w - NAME AGE REQUESTOR CONDITION - calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Pending - calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Pending,Issued - calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Approved,Issued - calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Pending - calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Pending,Issued - calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Approved,Issued - ``` - - A CSR will be `Pending` until it has been `Issued` and `Approved`. - The name of a CSR is based on the namespace, the pod name and the first 6 characters of the pod's UID. - The pod will be `Pending` until the CSR has been `Approved`. - -3. 
Monitor the status of this feature using the `TigeraStatus`:

   ```
   kubectl get tigerastatus
   NAME     AVAILABLE   PROGRESSING   DEGRADED   SINCE
   calico   True        False         False      2m40s
   ```
-
-### Implement your own signing and approval process
-
-**Required steps**
-
-This feature uses API version `certificates.k8s.io/v1beta1` for [certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/).
-To automate the signing and approval process, run a server that performs the following actions:
-
-1. Watch `CertificateSigningRequests` resources with status `Pending` and `spec.signerName=`.
-
-   :::note
-
-   You can skip this step if you are using a version before Kubernetes v1.18 (the `signerName` field was not available).
-
-   :::
-
-1. For each `Pending` CSR, perform security checks (see the next heading)
-1. Issue a certificate and update `.spec.status.certificate`
-1. Approve the CSR and update `.spec.status.conditions`
-
-**Security requirements**
-
-Based on your requirements, you may want to implement custom checks to make sure that no certificates are issued for a malicious user.
-When a CSR is created, the kube-apiserver adds immutable fields to the spec to help you perform checks:
-
-- `.spec.username`: username of the requester
-- `.spec.groups`: user groups of the requester
-- `.spec.request`: certificate request in PEM format
-
-Verify that the user and/or group match the requested certificate subject (alt) names.
-
-**Implement your signer and approver using golang**
-
-- Use [client-go](https://github.com/kubernetes/client-go) to create a clientset
-- To watch CSRs, use `clientset.CertificatesV1().CertificateSigningRequests().Watch(..)`
-- To issue the certificate, use `clientset.CertificatesV1().CertificateSigningRequests().UpdateStatus(...)`
-- To approve the CSR, use `clientset.CertificatesV1().CertificateSigningRequests().UpdateApproval(...)`
-
-### Additional resources
-
-- Read [Kubernetes certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/) for more information on CSRs
-- Use [client-go](https://github.com/kubernetes/client-go) to implement a controller to sign and approve a CSR
diff --git a/calico-enterprise/operations/comms/compliance-tls.mdx b/calico-enterprise/operations/comms/compliance-tls.mdx
deleted file mode 100644
index 50d096c9b9..0000000000
--- a/calico-enterprise/operations/comms/compliance-tls.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-description: Add TLS certificates to secure access to compliance.
----
-
-# Provide TLS certificates for compliance
-
-## Big picture
-
-Provide TLS certificates to secure access to the $[prodname] compliance components.
-
-## Value
-
-Providing TLS certificates for $[prodname] compliance components is recommended as part of a zero trust network model for security.
-
-## Before you begin...
-
-By default, $[prodname] uses self-signed certificates for its compliance reporting components. To provide TLS certificates,
-get the certificate and key pair for the $[prodname] compliance components using any X.509-compatible tool or from your
-organization's Certificate Authority. The certificate must have Common Name or a Subject Alternate Name of `compliance.tigera-compliance.svc`.
- -## How to - -### Add TLS certificates for compliance - -To provide TLS certificates for use by $[prodname] compliance components during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the Compliance resource. Use the following command to create a secret: - -```bash -kubectl create secret generic tigera-compliance-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic tigera-compliance-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= --dry-run -o yaml --save-config | kubectl replace -f - -``` diff --git a/calico-enterprise/operations/comms/crypto-auth.mdx b/calico-enterprise/operations/comms/crypto-auth.mdx deleted file mode 100644 index 06cee6aac3..0000000000 --- a/calico-enterprise/operations/comms/crypto-auth.mdx +++ /dev/null @@ -1,112 +0,0 @@ ---- -description: Enable TLS authentication and encryption for various Calico Enterprise components. ---- - -# Configure encryption and authentication to secure Calico Enterprise components - -## Connections from $[prodname] components to kube-apiserver (Kubernetes and OpenShift) - -We recommend enabling TLS on kube-apiserver, as well as the client certificate and JSON web token (JWT) -authentication modules. This ensures that all of its communications with $[prodname] components occur -over TLS. The $[prodname] components present either an X.509 certificate or a JWT to kube-apiserver -so that kube-apiserver can verify their identities. - -## Connections from Node to Typha (Kubernetes) - -Operator based installations automatically configure mutual TLS authentication on connections from -Felix to Typha. You may also configure this TLS by providing your own secrets. 
- -### Configure Node to Typha TLS based on your deployment - -For clusters installed using the operator, see how to [provide TLS certificates for Typha and Node](typha-node-tls.mdx). - -For detailed reference information on TLS configuration parameters, refer to: - -- **Typha**: [Node-Typha TLS configuration](../../reference/component-resources/typha/configuration.mdx#felix-typha-tls-configuration) - -- **Node**: [Node-Typha TLS configuration](../../reference/component-resources/node/felix/configuration.mdx#felix-typha-tls-configuration) - -## Calico Enterprise Manager connections - -The $[prodname] web console's web interface, run from your browser, uses HTTPS to securely communicate -with the $[prodname] web console, which, in turn, communicates with the Kubernetes and $[prodname] API -servers, also using HTTPS. Through the installation steps, secure communication between -$[prodname] components should already be configured, but secure communication through your web -browser of choice may not be. To verify that this is properly configured, the web browser -you are using should display `Secure` in the address bar. - -Before we set up TLS certificates, it is important to understand the traffic -that we are securing. By default, your web browser communicates with -the $[prodname] web console through a -[`NodePort` service](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport) -over port `30003`. The NodePort service passes packets through without modification. -TLS traffic is [terminated](https://en.wikipedia.org/wiki/TLS_termination_proxy) -at the $[prodname] web console. This means that the TLS certificates used to secure traffic -between your web browser and the $[prodname] web console do not need to be shared or related -to any other TLS certificates that may be used elsewhere in your cluster or when -configuring $[prodname].
The flow of traffic should look like the following: - -![the $[prodname] web console traffic diagram](/img/calico-enterprise/cnx-tls-mgr-comms.svg) - -:::note - -The `NodePort` service in the above diagram can be replaced with other -[Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types). -Configuration will vary if another service, such as a load balancer, is placed between the web -browser and the $[prodname] web console. - -::: - -To properly configure TLS in the $[prodname] web console, you will need -certificates and keys signed by an appropriate Certificate Authority (CA). -For more high-level information on certificates, keys, and CAs, see -[this blog post](http://www.steves-internet-guide.com/ssl-certificates-explained/). - -:::note - -It is important when generating your certificates to make sure -that the Common Name or Subject Alternative Name specified in your certificates -matches the host name/DNS entry/IP address that is used to access the $[prodname] web console -(i.e., what it says in the browser address bar). - -::: - -## Issues with certificates - -If your web browser still does not display `Secure` in the address bar, the most -common reasons and their fixes are listed below. - -- **Untrusted Certificate Authority**: Your browser may not display `Secure` because - it does not know (and therefore trust) the certificate authority (CA) that issued - the certificates that the $[prodname] web console is using. This is generally caused by using - self-signed certificates (either generated by Kubernetes or manually). If you have - certificates signed by a recognized CA, we recommend that you use them with the $[prodname] - web console, since the browser will automatically recognize them. - - If you opt to use self-signed certificates, you can still configure your browser to - trust the CA on a per-browser basis by importing the CA certificates into the browser.
- In Google Chrome, this can be achieved by selecting Settings, Advanced, Privacy and security, - Manage certificates, Authorities, Import. This is not recommended, since it requires the CA - to be imported into every browser you access the $[prodname] web console from. - -- **Mismatched Common Name or Subject Alternative Name**: If you are still having issues - securely accessing the $[prodname] web console with TLS, make sure that the Common Name - or Subject Alternative Name specified in your certificates matches the host name/DNS - entry/IP address that is used to access the $[prodname] web console (i.e., what it says in the browser - address bar). In Google Chrome, you can check the $[prodname] web console certificate with Developer Tools - (Ctrl+Shift+I), Security. If you are issued certificates that do not match, - you will need to reissue the certificates with the correct Common Name or - Subject Alternative Name and reconfigure the $[prodname] web console following the steps above. - -## Ingress proxies and load balancers - -You may wish to configure proxy elements, including hardware or software load balancers, Kubernetes Ingress -proxies, and so on, between user web browsers and the $[prodname] web console. If you do so, configure your proxy -such that the $[prodname] web console receives an HTTPS (TLS) connection, not unencrypted HTTP. - -If you require TLS termination at any of these proxy elements, you will need to: - -- use a proxy that supports transparent HTTP/2 proxying, for example, [Envoy](https://www.envoyproxy.io/) -- re-originate a TLS connection from your proxy to the $[prodname] web console, as it expects TLS - -If you do not require TLS termination, configure your proxy to "pass through" the TLS to the $[prodname] web console.
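Both certificate checks above (a trusted issuer, and a CN/SAN matching the browser address) can also be performed from the command line before a certificate is ever installed. The sketch below generates a throwaway certificate and then prints the fields a browser compares; the file names and example host name are hypothetical.

```shell
# Create a throwaway certificate for an example host name
# (illustrative values; inspect your real certificate file the same way).
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout manager.key -out manager.crt \
  -subj "/CN=manager.example.com" \
  -addext "subjectAltName=DNS:manager.example.com"

# Subject and SANs: these must match what appears in the browser address bar.
openssl x509 -in manager.crt -noout -subject -ext subjectAltName

# Issuer: the browser must trust this CA for the page to show as Secure.
openssl x509 -in manager.crt -noout -issuer
```

Running the same `openssl x509` inspection against the certificate you intend to deploy catches both failure modes without a round trip through the browser.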
diff --git a/calico-enterprise/operations/comms/index.mdx b/calico-enterprise/operations/comms/index.mdx index 24e13343f5..aba8ef3829 100644 --- a/calico-enterprise/operations/comms/index.mdx +++ b/calico-enterprise/operations/comms/index.mdx @@ -1,11 +1,157 @@ --- -description: Secure communications for Calico components. -hide_table_of_contents: true +description: Provide custom TLS certificates for Calico Enterprise components. --- -# Secure Calico component communications +# Provide TLS certificates for $[prodname] components -import DocCardList from '@theme/DocCardList'; -import { useCurrentSidebarCategory } from '@docusaurus/theme-common'; +$[prodname] uses TLS certificates for mutual authentication between components. The operator automatically generates and manages these certificates using a self-signed CA (`tigera-operator-signer`). - +To replace any certificate with your own, create a Kubernetes secret with the same name in the `tigera-operator` namespace. The operator will detect it and use your certificate instead of the auto-generated one. + +## Certificate requirements + +- **Extended Key Usages**: include both `TLS Web Server Authentication` and `TLS Web Client Authentication` +- **Common Name (CN)**: must match the first DNS name listed in the table below. Note that `node-certs` and `typha-certs` use `typha-client` and `typha-server` respectively (not the component name). 
+- **Subject Alternative Names (SANs)**: include all DNS names listed for the secret + +## Create or update a secret + +```bash +SIGNER=my-ca-signer +kubectl create secret generic <secret-name> -n tigera-operator \ +  --from-file=tls.crt=</path/to/certificate-file> \ +  --from-file=tls.key=</path/to/key-file> && \ +kubectl label secret <secret-name> -n tigera-operator operator.tigera.io/signer=$SIGNER +``` + +To update an existing secret: + +```bash +SIGNER=my-ca-signer +kubectl create secret generic <secret-name> -n tigera-operator \ +  --from-file=tls.crt=</path/to/certificate-file> \ +  --from-file=tls.key=</path/to/key-file> \ +  --dry-run=client -o yaml | kubectl replace -f - && \ +kubectl label secret <secret-name> -n tigera-operator operator.tigera.io/signer=$SIGNER --overwrite +``` + +:::note + +Updating a certificate causes the affected components to restart. Expect brief unavailability during the rollout. + +::: + +## TLS certificate reference + +The **Deployed to** column shows the namespace where the operator places the secret at runtime. To override a certificate, always create your secret in `tigera-operator`. + +| Secret name | DNS name(s) | Deployed to | Owner resource | +|---|---|---|---| +| `calico-apiserver-certs` | `calico-api` | `calico-system` | APIServer/tigera-secure | +| `calico-kube-controllers-metrics-tls` | `calico-kube-controllers-metrics` | `calico-system` | Installation/default | +| `calico-node-prometheus-client-tls` | `calico-node-prometheus-client-tls` | `tigera-prometheus` | Monitor/tigera-secure | +| `calico-node-prometheus-server-tls` | `calico-node-metrics` | `calico-system` | Installation/default | +| `calico-node-prometheus-tls` | `prometheus-http-api` | `tigera-prometheus` | Monitor/tigera-secure | +| `deep-packet-inspection-tls` | `intrusion-detection-tls` | `tigera-dpi` | IntrusionDetection/tigera-secure | +| `internal-manager-tls` | `calico-manager` | `calico-system` | Manager/tigera-secure | +| `intrusion-detection-tls` | `intrusion-detection-tls` | `tigera-intrusion-detection` | IntrusionDetection/tigera-secure | +| `manager-tls` | `calico-manager` | `calico-system` | 
Manager/tigera-secure | +| `node-certs` | `typha-client` | `calico-system` | Installation/default | +| `node-certs` | `typha-client` | `tigera-dpi` | IntrusionDetection/tigera-secure | +| `policy-recommendation-tls` | `policy-recommendation-tls` | `calico-system` | PolicyRecommendation/tigera-secure | +| `tigera-ee-elasticsearch-metrics-tls` | `tigera-elasticsearch-metrics` | `tigera-elasticsearch` | LogStorage/tigera-secure | +| `tigera-fluentd-prometheus-tls` | `fluentd-http-input` | `tigera-fluentd` | LogCollector/tigera-secure | +| `tigera-operator-tls` | `tigera-operator-metrics` | `tigera-prometheus` | Monitor/tigera-secure | +| `tigera-secure-elasticsearch-cert` | `tigera-secure-es-gateway-http` | `tigera-elasticsearch` | LogStorage/tigera-secure | +| `tigera-secure-internal-elasticsearch-cert` | `tigera-secure-es-http` | `tigera-elasticsearch` | LogStorage/tigera-secure | +| `tigera-secure-kibana-cert` | `tigera-secure-kb-http` | `tigera-kibana` | LogStorage/tigera-secure | +| `tigera-secure-linseed-cert` | `tigera-linseed` | `tigera-elasticsearch` | LogStorage/tigera-secure | +| `typha-certs` | `typha-server` | `calico-system` | Installation/default | +| `typha-certs-noncluster-host` | `typha-server-noncluster-host` | `calico-system` | Installation/default | + +:::tip + +Typha and Node use mutual TLS. If you replace `typha-certs`, `typha-certs-noncluster-host`, or `node-certs`: +- Ensure they are all signed by the same CA. Mismatched certificates will break Node-to-Typha communication. +- These secrets require an additional `common-name` data field containing the CN. 
For example: +  ```bash +  kubectl create secret generic typha-certs -n tigera-operator \ +    --from-file=tls.crt=</path/to/certificate-file> \ +    --from-file=tls.key=</path/to/key-file> \ +    --from-literal=common-name=typha-server +  ``` + +::: + +## Monitor certificates + +The operator labels and annotates every TLS secret it manages: + +- **Label** `certificates.operator.tigera.io/signer` — the signer that issued the certificate +- **Annotation** `certificates.operator.tigera.io/issuer` — the issuer name +- **Annotation** `certificates.operator.tigera.io/expiry` — certificate expiration timestamp + +List all managed certificates with their issuer and expiry: + +```bash +kubectl get secrets -A -l certificates.operator.tigera.io/signer -o json | \ +  jq -r '.items[] | select(.metadata.namespace != "tigera-operator") | +    [.metadata.namespace, .metadata.name, +     .metadata.annotations["certificates.operator.tigera.io/issuer"], +     .metadata.annotations["certificates.operator.tigera.io/expiry"]] | @tsv' | \ +  column -t -N NAMESPACE,NAME,ISSUER,EXPIRY +``` + +``` +NAMESPACE NAME ISSUER EXPIRY +calico-system calico-apiserver-certs tigera-operator-signer 2028-07-12T17:59:28Z +calico-system manager-tls tigera-operator-signer 2028-07-12T18:01:21Z +calico-system node-certs tigera-operator-signer 2028-07-12T17:59:27Z +calico-system typha-certs tigera-operator-signer 2028-07-12T17:59:27Z +tigera-elasticsearch tigera-secure-linseed-cert tigera-operator-signer 2028-07-12T17:59:23Z +... +``` + +The operator also exposes Prometheus metrics for certificate expiration. See [Operator metrics](../monitor/metrics/operator-metrics.mdx) to configure monitoring and alerts. + +## Certificate management with Kubernetes CSR API + +Instead of providing certificates directly, you can configure $[prodname] to use the [Kubernetes Certificates API](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/) for automated certificate issuance. 
+ +### Enable certificate management + +Add the `certificateManagement` section to your Installation resource: + +```yaml +apiVersion: operator.tigera.io/v1 +kind: Installation +metadata: +  name: default +spec: +  certificateManagement: +    caCert: <your CA certificate, PEM format> +    signerName: <domain>/<signer-name> +    signatureAlgorithm: SHA512WithRSA +    keyAlgorithm: RSAWithSize4096 +``` + +**Supported algorithms:** +- Private key: RSA (2048, 4096, 8192), ECDSA (256, 384, 521) +- Signature: RSA-SHA (256, 384, 512), ECDSA-SHA (256, 384, 512) + +### How it works + +Pods use an init container to create a CertificateSigningRequest (CSR). The pod remains in `Init` state until the CSR is approved and signed by your certificate authority. + +Monitor CSRs: + +```bash +kubectl get csr -w +``` + +CSR names follow the pattern `<namespace>:<secret name>:<pod name>`. + +### Limitations + +- If enabling on an existing cluster, you must temporarily remove the [LogStorage resource](../../reference/installation/api.mdx#logstorage) and re-apply it after enabling certificate management. See [how to create a new cluster](../../observability/elastic/troubleshoot.mdx#how-to-create-a-new-cluster). +- Not supported with [multi-cluster management](../../multicluster/set-up-multi-cluster-management/standard-install/create-a-management-cluster.mdx). diff --git a/calico-enterprise/operations/comms/linseed-tls.mdx b/calico-enterprise/operations/comms/linseed-tls.mdx deleted file mode 100644 index 0e2233f8d5..0000000000 --- a/calico-enterprise/operations/comms/linseed-tls.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -description: Add TLS certificate to secure access to Linseed APIs. ---- - -# Provide TLS certificates for Linseed APIs - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] Linseed components. - -## Value - -Providing TLS certificates for $[prodname] Linseed components is recommended as part of a zero trust network model for security. - -## Before you begin... - -By default, $[prodname] uses self-signed certificates for its Linseed components.
To provide TLS certificates, -get the certificate and key pair for $[prodname] Linseed using any X.509-compatible tool or from your organization's -Certificate Authority. The certificate must have a Common Name or a Subject Alternate Name of `tigera-linseed.tigera-elasticsearch.svc`. - -## How to - -### Add TLS certificates for Linseed - -To provide TLS certificates for use by $[prodname] Linseed components during deployment, you must create a secret before applying `custom-resource.yaml` or before creating the LogStorage resource. Use the following command to create a secret: - -```bash -kubectl create secret generic tigera-secure-linseed-cert -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic tigera-secure-linseed-cert -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> --dry-run=client -o yaml --save-config | kubectl replace -f - -``` diff --git a/calico-enterprise/operations/comms/log-storage-tls.mdx b/calico-enterprise/operations/comms/log-storage-tls.mdx deleted file mode 100644 index 906dd93af1..0000000000 --- a/calico-enterprise/operations/comms/log-storage-tls.mdx +++ /dev/null @@ -1,45 +0,0 @@ ---- -description: Add TLS certificate to secure access to log storage. ---- - -# Provide TLS certificates for log storage - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] log storage. - -## Value - -Providing TLS certificates for $[prodname] components is recommended as part of a zero trust network model for security. - -## Before you begin... - -By default, the $[prodname] log storage uses self-signed certificates on connections. To provide TLS certificates, -get the certificate and key pair for the $[prodname] log storage using any X.509-compatible tool or from your organization's -Certificate Authority.
The certificate must include the following Subject Alternate Names or DNS names: `tigera-secure-es-http.tigera-elasticsearch.svc` and `tigera-secure-es-gateway-http.tigera-elasticsearch.svc`. - -If your cluster has Windows nodes, the certificate must additionally include `tigera-secure-es-http.tigera-elasticsearch.svc.<cluster-domain>`, where `<cluster-domain>` is the local domain specified for in-cluster DNS. - -## How to - -### Add TLS certificates for log storage - -To provide TLS certificates for use by $[prodname] components during deployment, you must create a secret before applying `custom-resource.yaml` or before creating the LogStorage resource. Use the following command to create a secret: - -```bash -kubectl create secret generic tigera-secure-elasticsearch-cert -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic tigera-secure-elasticsearch-cert -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> --dry-run=client -o yaml --save-config | kubectl replace -f - -``` - -:::note - -If the $[prodname] log storage already exists, you must manually delete the log storage pods one by one -after updating the secret. These pods are in the `tigera-elasticsearch` namespace and have the prefix `tigera-secure-es`. -Other $[prodname] components will be unable to communicate with log storage until the pods are restarted. - -::: diff --git a/calico-enterprise/operations/comms/manager-tls.mdx b/calico-enterprise/operations/comms/manager-tls.mdx deleted file mode 100644 index 9c92370c4d..0000000000 --- a/calico-enterprise/operations/comms/manager-tls.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -description: Add TLS certificates to secure access to Calico Enterprise Manager user interface. ---- - -# Provide TLS certificates for Calico Enterprise Manager - -## Big picture - -Provide TLS certificates that secure access to the $[prodname] web console user interface.
- -## Value - -By default, the $[prodname] web console uses self-signed TLS certificates on connections. This article describes how to provide TLS certificates that users' browsers will trust. - -## Before you begin... - -- **Get the certificate and key pair for the $[prodname] web console** - Generate the certificate using any X.509-compatible tool or from your organization's Certificate Authority. The certificate must have a Common Name or Subject Alternate Names that match the IPs or DNS names that will be used to [access the web console](../cnx/access-the-manager.mdx). - -## How to - -To provide certificates for use during deployment, you must create a secret before applying `custom-resource.yaml` or before creating the Installation resource. To specify certificates for use in the manager, create a secret using the following command: - -```bash -kubectl create secret generic manager-tls -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic manager-tls -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> --dry-run=client -o yaml --save-config | kubectl replace -f - -``` - -If the $[prodname] web console is already running, updating the secret causes it to restart and pick up the new certificate and key. This results in a short period of unavailability of the $[prodname] web console. - -## Additional resources - -Additional documentation is available for securing [the $[prodname] web console connections](crypto-auth.mdx#calico-enterprise-manager-connections). diff --git a/calico-enterprise/operations/comms/packetcapture-tls.mdx b/calico-enterprise/operations/comms/packetcapture-tls.mdx deleted file mode 100644 index c75f45f90c..0000000000 --- a/calico-enterprise/operations/comms/packetcapture-tls.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -description: Add TLS certificate to secure access to PacketCapture APIs.
---- - -# Provide TLS certificates for PacketCapture APIs - -## Big picture - -Provide TLS certificates to secure access to the $[prodname] PacketCapture components. - -## Value - -Providing TLS certificates for $[prodname] PacketCapture components is recommended as part of a zero trust network model for security. - -## Before you begin... - -By default, $[prodname] uses self-signed certificates for its PacketCapture API components. To provide TLS certificates, -get the certificate and key pair for $[prodname] PacketCapture using any X.509-compatible tool or from your organization's -Certificate Authority. The certificate must have a Common Name or a Subject Alternate Name of `tigera-packetcapture.tigera-packetcapture.svc`. - -## How to - -### Add TLS certificates for PacketCapture - -To provide TLS certificates for use by $[prodname] PacketCapture components during deployment, you must create a secret before applying `custom-resource.yaml` or before creating the APIServer resource. Use the following command to create a secret: - -```bash -kubectl create secret generic tigera-packetcapture-server-tls -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> -``` - -To update existing certificates, run the following command: - -```bash -kubectl create secret generic tigera-packetcapture-server-tls -n tigera-operator --from-file=tls.crt=</path/to/certificate-file> --from-file=tls.key=</path/to/key-file> --dry-run=client -o yaml --save-config | kubectl replace -f - -``` diff --git a/calico-enterprise/operations/comms/secure-metrics.mdx b/calico-enterprise/operations/comms/secure-metrics.mdx deleted file mode 100644 index 3790f95c77..0000000000 --- a/calico-enterprise/operations/comms/secure-metrics.mdx +++ /dev/null @@ -1,512 +0,0 @@ ---- -description: Limit access to Calico Enterprise metric endpoints using network policy.
---- - -# Secure Calico Enterprise Prometheus endpoints - -## About securing access to $[prodname]'s metrics endpoints - -When using $[prodname] with Prometheus metrics enabled, we recommend using network policy -to limit access to $[prodname]'s metrics endpoints. - -## Prerequisites - -- $[prodname] is installed with Prometheus metrics reporting enabled. -- `calicoctl` is [installed in your PATH and configured to access the data store](../clis/calicoctl/install.mdx). - -## Choosing an approach - -This guide provides two example workflows for creating network policies to limit access -to $[prodname]'s Prometheus metrics. Choosing an approach depends on your requirements. - -- [Using a deny-list approach](#using-a-deny-list-approach) - - This approach allows all traffic to your hosts by default, but lets you limit access to specific ports using - $[prodname] policy. This approach allows you to restrict access to specific ports, while leaving other - host traffic unaffected. - -- [Using an allow-list approach](#using-an-allow-list-approach) - - This approach denies traffic to and from your hosts by default, and requires that all - desired communication be explicitly allowed by a network policy. This approach is more secure because - only explicitly-allowed traffic will get through, but it requires you to know all the ports that should be open on the host. - -## Using a deny-list approach - -### Overview - -The basic process is as follows: - -1. Create a default network policy that allows traffic to and from your hosts. -1. Create host endpoints for each node that you'd like to secure. -1. Create a network policy that denies unwanted traffic to the $[prodname] metrics endpoints. -1. Apply labels to allow access to the Prometheus metrics. - -### Example for $[nodecontainer] - -This example shows how to limit access to the $[nodecontainer] Prometheus metrics endpoints. - -1. Create a default network policy to allow host traffic - - First, create a default-allow policy. 
Do this first to avoid a drop in connectivity when adding the host endpoints - later, since host endpoints with no policy default to deny. - - To do this, create a file named `default-host-policy.yaml` with the following contents. - - ```yaml - apiVersion: projectcalico.org/v3 - kind: GlobalNetworkPolicy - metadata: - name: default-host - spec: - # Select all $[prodname] nodes. - selector: running-calico == "true" - order: 5000 - ingress: - - action: Allow - egress: - - action: Allow - ``` - - Then, use `kubectl` to apply this policy. - - ```bash - kubectl apply -f default-host-policy.yaml - ``` - -1. List the nodes on which $[prodname] is running with the following command. - - ```bash - calicoctl get nodes - ``` - - In this case, we have two nodes in the cluster. - - ``` - NAME - kubeadm-master - kubeadm-node-0 - ``` - -1. Create host endpoints for each $[prodname] node. - - Create a file named `host-endpoints.yaml` containing a host endpoint for each node listed - above. In this example, the contents would look like this. - - ```yaml - apiVersion: projectcalico.org/v3 - kind: HostEndpoint - metadata: - name: kubeadm-master.eth0 - labels: - running-calico: 'true' - spec: - node: kubeadm-master - interfaceName: eth0 - expectedIPs: - - 10.100.0.15 - --- - apiVersion: projectcalico.org/v3 - kind: HostEndpoint - metadata: - name: kubeadm-node-0.eth0 - labels: - running-calico: 'true' - spec: - node: kubeadm-node-0 - interfaceName: eth0 - expectedIPs: - - 10.100.0.16 - ``` - - In this file, replace `eth0` with the desired interface name on each node, and populate the - `expectedIPs` section with the IP addresses on that interface. - - Note the use of a label to indicate that this host endpoint is running $[prodname]. The - label matches the selector of the network policy created in step 1. - - Then, use `kubectl` to apply the host endpoints with the following command. - - ```bash - kubectl apply -f host-endpoints.yaml - ``` - -1. 
Create a network policy that restricts access to the $[nodecontainer] Prometheus metrics port. - - Now let's create a network policy that limits access to the Prometheus metrics port such that - only endpoints with the label `calico-prometheus-access: true` can access the metrics. - - To do this, create a file named `calico-prometheus-policy.yaml` with the following contents. - - ```yaml - # Allow traffic to Prometheus only from sources that are - # labeled as such, but don't impact any other traffic. - apiVersion: projectcalico.org/v3 - kind: GlobalNetworkPolicy - metadata: - name: restrict-calico-node-prometheus - spec: - # Select all $[prodname] nodes. - selector: running-calico == "true" - order: 500 - types: - - Ingress - ingress: - # Deny anything that tries to access the Prometheus port - # but that doesn't match the necessary selector. - - action: Deny - protocol: TCP - source: - notSelector: calico-prometheus-access == "true" - destination: - ports: - - 9091 - ``` - - This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress deny rule. - The ingress rule denies traffic to port 9091 unless the source of traffic has the label `calico-prometheus-access: true`, meaning - all $[prodname] workload endpoints, host endpoints, and global network sets that do not have the label, as well as any - other network endpoints unknown to $[prodname]. - - Then, use `kubectl` to apply this policy. - - ```bash - kubectl apply -f calico-prometheus-policy.yaml - ``` - -1. Apply labels to any endpoints that should have access to the metrics. - - At this point, only endpoints that have the label `calico-prometheus-access: true` can reach - $[prodname]'s Prometheus metrics endpoints on each node. To grant access, simply add this label to the - desired endpoints. - - For example, to allow access to a Kubernetes pod you can run the following command. 
- - ```bash - kubectl label pod my-prometheus-pod calico-prometheus-access=true - ``` - - If you would like to grant access to a specific IP network, you - can create a [global network set](../../reference/resources/globalnetworkset.mdx) using `kubectl`. - - For example, you might want to grant access to your management subnets. - - ```yaml - apiVersion: projectcalico.org/v3 - kind: GlobalNetworkSet - metadata: - name: calico-prometheus-set - labels: - calico-prometheus-access: 'true' - spec: - nets: - - 172.15.0.0/24 - - 172.101.0.0/24 - ``` - -### Additional steps for Typha deployments - -If your $[prodname] installation uses the Kubernetes API datastore and has more than 50 nodes, it is likely -that you have installed Typha. This section shows how to use an additional network policy to secure the Typha -Prometheus endpoints. - -After following the steps above, create a file named `typha-prometheus-policy.yaml` with the following contents. - -```yaml -# Allow traffic to Prometheus only from sources that are -# labeled as such, but don't impact any other traffic. -apiVersion: projectcalico.org/v3 -kind: GlobalNetworkPolicy -metadata: - name: restrict-typha-prometheus -spec: - # Select all $[prodname] nodes. - selector: running-calico == "true" - order: 500 - types: - - Ingress - ingress: - # Deny anything that tries to access the Prometheus port - # but that doesn't match the necessary selector. - - action: Deny - protocol: TCP - source: - notSelector: calico-prometheus-access == "true" - destination: - ports: - - 9093 -``` - -This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress deny rule. -The ingress rule denies traffic to port 9093 unless the source of traffic has the label `calico-prometheus-access: true`; traffic is denied from -all $[prodname] workload endpoints, host endpoints, and global network sets that do not have the label, as well as from any -other network endpoints unknown to $[prodname].
- -Then, use `kubectl` to apply this policy. - -```bash -kubectl apply -f typha-prometheus-policy.yaml -``` - -### Example for kube-controllers - -If your $[prodname] installation exposes metrics from kube-controllers, you can limit access to those metrics -with the following network policy. - -Create a file named `kube-controllers-prometheus-policy.yaml` with the following contents. - -```yaml -apiVersion: projectcalico.org/v3 -kind: NetworkPolicy -metadata: - name: restrict-kube-controllers-prometheus - namespace: calico-system -spec: - # Select kube-controllers. - selector: k8s-app == "calico-kube-controllers" - order: 500 - types: - - Ingress - ingress: - # Deny anything that tries to access the Prometheus port - # but that doesn't match the necessary selector. - - action: Deny - protocol: TCP - source: - notSelector: calico-prometheus-access == "true" - destination: - ports: - - 9094 -``` - -:::note - -The above policy is installed in the calico-system namespace. If your cluster has $[prodname] installed -in the kube-system namespace, you will need to create the policy in that namespace instead. - -::: - -Then, use `kubectl` to apply this policy. - -```bash -kubectl apply -f kube-controllers-prometheus-policy.yaml -``` - -## Using an allow-list approach - -### Overview - -The basic process is as follows: - -1. Create host endpoints for each node that you'd like to secure. -1. Create a network policy that allows desired traffic to the $[prodname] metrics endpoints. -1. Apply labels to allow access to the Prometheus metrics. - -### Example for $[nodecontainer] - -1. List the nodes on which $[prodname] is running with the following command. - - ```bash - calicoctl get nodes - ``` - - In this case, we have two nodes in the cluster. - - ``` - NAME - kubeadm-master - kubeadm-node-0 - ``` - -1. Create host endpoints for each $[prodname] node. - - Create a file named `host-endpoints.yaml` containing a host endpoint for each node listed - above.
   In this example, the contents would look like this.

   ```yaml
   apiVersion: projectcalico.org/v3
   kind: HostEndpoint
   metadata:
     name: kubeadm-master.eth0
     labels:
       running-calico: 'true'
   spec:
     node: kubeadm-master
     interfaceName: eth0
     expectedIPs:
       - 10.100.0.15
   ---
   apiVersion: projectcalico.org/v3
   kind: HostEndpoint
   metadata:
     name: kubeadm-node-0.eth0
     labels:
       running-calico: 'true'
   spec:
     node: kubeadm-node-0
     interfaceName: eth0
     expectedIPs:
       - 10.100.0.16
   ```

   In this file, replace `eth0` with the desired interface name on each node, and populate the
   `expectedIPs` section with the IP addresses on that interface.

   Note the use of a label to indicate that this host endpoint is running $[prodname]. The
   label matches the selector of the network policy created in the next step.

   Then, use `kubectl` to apply the host endpoints with the following command. This will prevent all
   traffic to and from the host endpoints.

   ```bash
   kubectl apply -f host-endpoints.yaml
   ```

   :::note

   $[prodname] allows some traffic as a failsafe even after applying this policy. This can
   be adjusted using the `failsafeInboundHostPorts` and `failsafeOutboundHostPorts` options
   on the [FelixConfiguration resource](../../reference/resources/felixconfig.mdx).

   :::

1. Create a network policy that allows access to the $[nodecontainer] Prometheus metrics port.

   Now let's create a network policy that allows access to the Prometheus metrics port such that
   only endpoints with the label `calico-prometheus-access: true` can access the metrics.

   To do this, create a file named `calico-prometheus-policy.yaml` with the following contents.

   ```yaml
   apiVersion: projectcalico.org/v3
   kind: GlobalNetworkPolicy
   metadata:
     name: restrict-calico-node-prometheus
   spec:
     # Select all $[prodname] nodes.
     selector: running-calico == "true"
     order: 500
     types:
       - Ingress
     ingress:
       # Allow traffic from selected sources to the Prometheus port.
       - action: Allow
         protocol: TCP
         source:
           selector: calico-prometheus-access == "true"
         destination:
           ports:
             - 9091
   ```

   This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress allow rule.
   The ingress rule allows traffic to port 9091 from any source with the label `calico-prometheus-access: true`, meaning
   all $[prodname] workload endpoints, host endpoints, and global network sets that have the label will be allowed access.

   Then, use `kubectl` to apply this policy.

   ```bash
   kubectl apply -f calico-prometheus-policy.yaml
   ```

1. Apply labels to any endpoints that should have access to the metrics.

   At this point, only endpoints that have the label `calico-prometheus-access: true` can reach
   $[prodname]'s Prometheus metrics endpoints on each node. To grant access, simply add this label to the
   desired endpoints.

   For example, to allow access to a Kubernetes pod you can run the following command.

   ```bash
   kubectl label pod my-prometheus-pod calico-prometheus-access=true
   ```

   If you would like to grant access to a specific IP address in your network, you
   can create a [global network set](../../reference/resources/globalnetworkset.mdx) using `kubectl`.

   For example, creating the following network set would grant access to a host with IP 172.15.0.101.

   ```yaml
   apiVersion: projectcalico.org/v3
   kind: GlobalNetworkSet
   metadata:
     name: calico-prometheus-set
     labels:
       calico-prometheus-access: 'true'
   spec:
     nets:
       - 172.15.0.101/32
   ```

### Additional steps for Typha deployments

If your $[prodname] installation uses the Kubernetes API datastore and has greater than 50 nodes, it is likely
that you have installed Typha.
This section shows how to use an additional network policy to secure the Typha
Prometheus endpoints.

After following the steps above, create a file named `typha-prometheus-policy.yaml` with the following contents.

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: restrict-typha-prometheus
spec:
  # Select all $[prodname] nodes.
  selector: running-calico == "true"
  order: 500
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: calico-prometheus-access == "true"
      destination:
        ports:
          - 9093
```

This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress allow rule.
The ingress rule allows traffic to port 9093 from any source with the label `calico-prometheus-access: true`, meaning
all $[prodname] workload endpoints, host endpoints, and global network sets that have the label will be allowed access.

Then, use `kubectl` to apply this policy.

```bash
kubectl apply -f typha-prometheus-policy.yaml
```

### Example for kube-controllers

If your $[prodname] installation exposes metrics from kube-controllers, you can limit access to those metrics
with the following network policy.

Create a file named `kube-controllers-prometheus-policy.yaml` with the following contents.

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: restrict-kube-controllers-prometheus
  namespace: calico-system
spec:
  selector: k8s-app == "calico-kube-controllers"
  order: 500
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: calico-prometheus-access == "true"
      destination:
        ports:
          - 9094
```

Then, use `kubectl` to apply this policy.

```bash
kubectl apply -f kube-controllers-prometheus-policy.yaml
```
diff --git a/calico-enterprise/operations/comms/typha-node-tls.mdx b/calico-enterprise/operations/comms/typha-node-tls.mdx
deleted file mode 100644
index e18e854fd9..0000000000
--- a/calico-enterprise/operations/comms/typha-node-tls.mdx
+++ /dev/null
@@ -1,85 +0,0 @@
---
description: Add TLS certificates to secure communications between Node and Typha if you are using Typha to scale your deployment.
---

# Provide TLS certificates for Typha and Node

## Big picture

Provide TLS certificates that allow mutual TLS authentication between Node and Typha.

## Value

By default, $[prodname] Typha and Node components are configured with a self-signed Certificate Authority (CA) and certificates for mutual TLS authentication. This article describes how to provide your own CA and TLS certificates.

## Concepts

**Mutual TLS authentication** means each side of a connection authenticates the other side. As such, the CA and certificates that are used must all be in sync. If one side of the connection is updated with a certificate that is not compatible with the other side, communication stops. So if updates to the Typha, Node, or CA certificates are mismatched, new pod networking and policy application will be interrupted until you restore compatibility. To make it easy to keep updates in sync, this article describes how to use one command to apply updates for all resources.

## Before you begin...

**Get the Certificate Authority certificate and signed certificate and key pairs for $[prodname] Typha and Node**

- Generate the certificates using any X.509-compatible tool or from your organization's CA.
- Ensure the generated certificates meet the requirements for [TLS connections between Node and Typha](crypto-auth.mdx#connections-from-node-to-typha-kubernetes).

## How to

### Create resource file

1. Create the CA ConfigMap with the following commands:

   ```bash
   kubectl create configmap typha-ca -n tigera-operator --from-file=caBundle= --dry-run -o yaml --save-config > typha-node-tls.yaml
   echo '---' >> typha-node-tls.yaml
   ```

   :::tip

   The contents of the caBundle field should contain the CA or the certificates for both Typha and Node.
   It is possible to add multiple PEM blocks.

   :::

1. Create the Typha Secret with the following command:

   ```bash
   kubectl create secret generic typha-certs -n tigera-operator \
     --from-file=tls.crt= --from-file=tls.key= \
     --from-literal=common-name= --dry-run -o yaml --save-config >> typha-node-tls.yaml
   echo '---' >> typha-node-tls.yaml
   ```

   :::tip

   If using SPIFFE identifiers, replace `--from-literal=common-name=` with `--from-literal=uri-san=`.

   :::

1. Create the Node Secret with the following command:

   ```bash
   kubectl create secret generic node-certs -n tigera-operator \
     --from-file=tls.crt= --from-file=tls.key= \
     --from-literal=common-name= --dry-run -o yaml --save-config >> typha-node-tls.yaml
   ```

   :::tip

   If using SPIFFE identifiers, replace `--from-literal=common-name=` with `--from-literal=uri-san=`.

   :::

### Apply or update resources

1. Apply the `typha-node-tls.yaml` file.

   To create these resources for use during deployment, you must apply this file before applying `custom-resource.yaml` or before creating the Installation resource. To apply this file, use the following command:

   ```bash
   kubectl apply -f typha-node-tls.yaml
   ```

   To update existing resources, use the following command:

   ```bash
   kubectl replace -f typha-node-tls.yaml
   ```

If $[prodname] Node and Typha are already running, the update causes a rolling restart of both. If the new CA and certificates are not compatible with the previous set, there may be a period where the Node pods produce errors until the old CA and certificates are replaced with the new ones.
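The steps above assume you already have a CA certificate and signed key pairs for Typha and Node. As a rough sketch only, one way to produce such a set with `openssl` — the file names, subject CNs, and validity periods below are illustrative assumptions, not requirements of $[prodname]:

```shell
# Illustrative only: generate a CA, then Typha and Node certificates signed
# by it. Replace the -subj CN values with the identities your deployment uses.

# 1. Self-signed CA certificate and key.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=typha-node-ca"

# 2. Typha server certificate, signed by the CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout typha.key -out typha.csr -subj "/CN=typha-server"
openssl x509 -req -in typha.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out typha.crt

# 3. Node client certificate, signed by the same CA.
openssl req -newkey rsa:2048 -nodes \
  -keyout node.key -out node.csr -subj "/CN=typha-client"
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out node.crt

# Both leaf certificates must verify against the same CA bundle; if they
# do not, the mutual TLS handshake between Node and Typha will fail.
openssl verify -CAfile ca.crt typha.crt node.crt
```

Whatever CN (or SPIFFE URI SAN) you put in each certificate is the value to pass as `--from-literal=common-name=` (or `--from-literal=uri-san=`) when creating the corresponding Secret above, since each side checks the identity presented by the other.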
diff --git a/calico-enterprise/operations/index.mdx b/calico-enterprise/operations/index.mdx index e5329f9be2..829ac99686 100644 --- a/calico-enterprise/operations/index.mdx +++ b/calico-enterprise/operations/index.mdx @@ -31,17 +31,7 @@ Post-installation tasks for managing Calico Enterprise. ## Securing component communications - - - - - - - - - - - + ## Storage diff --git a/calico-enterprise/operations/monitor/metrics/bgp-metrics.mdx b/calico-enterprise/operations/monitor/metrics/bgp-metrics.mdx index 96c2d216b5..3c6ba32075 100644 --- a/calico-enterprise/operations/monitor/metrics/bgp-metrics.mdx +++ b/calico-enterprise/operations/monitor/metrics/bgp-metrics.mdx @@ -164,5 +164,4 @@ kubectl patch felixConfiguration default --type merge --patch '{"spec":{"windows ## Additional resources -- [Secure $[prodname] Prometheus endpoints](../../comms/secure-metrics.mdx) - [Configuring Prometheus](../prometheus/index.mdx) diff --git a/calico-enterprise/reference/component-resources/typha/configuration.mdx b/calico-enterprise/reference/component-resources/typha/configuration.mdx index 039acdcd07..a13fef97f9 100644 --- a/calico-enterprise/reference/component-resources/typha/configuration.mdx +++ b/calico-enterprise/reference/component-resources/typha/configuration.mdx @@ -100,7 +100,7 @@ that is signed by one of the trusted CAs in the | `ServerKeyFile` | `TYPHA_SERVERKEYFILE` | Path to the file containing the private key matching the Typha server certificate. Example: `/etc/typha/key.pem` (optional) | string | For more information on how to use and set these variables, refer to -[Connections from Node to Typha (Kubernetes)](../../../operations/comms/crypto-auth.mdx#connections-from-node-to-typha-kubernetes). +[Provide TLS certificates](../../../operations/comms/index.mdx). 
diff --git a/calico-enterprise/reference/resources/bgppeer.mdx b/calico-enterprise/reference/resources/bgppeer.mdx index a43812aa38..57dc3916b7 100644 --- a/calico-enterprise/reference/resources/bgppeer.mdx +++ b/calico-enterprise/reference/resources/bgppeer.mdx @@ -49,7 +49,7 @@ spec: | localWorkloadSelector | Selector for the local workloads that the node should peer with. When this is set, the `peerSelector` and `peerIP` fields must be empty and the `localWorkloadPeeringIPV4` and/or `localWorkloadPeeringIPV6` fields in the `BGPConfiguration` resource must be configured. It is also important to configure appropriate import/export filters when using this feature. See the [guide](../../networking/configuring/bgp-to-workload.mdx) for details. | | [selector](#selectors) | | | keepOriginalNextHop | Maintain and forward the original next hop BGP route attribute to a specific Peer within a different AS. | | boolean | | extensions | Additional mapping of keys and values. Used for setting values in custom BGP configurations. | valid strings for both keys and values | map | | -| password | [BGP password](../../operations/comms/secure-bgp.mdx) for the peerings generated by this BGPPeer resource. | | [BGPPassword](#bgppassword) | `nil` (no password) | +| password | [BGP password](../../networking/configuring/secure-bgp.mdx) for the peerings generated by this BGPPeer resource. | | [BGPPassword](#bgppassword) | `nil` (no password) | | sourceAddress | Specifies whether and how to configure a source address for the peerings generated by this BGPPeer resource. Default value "UseNodeIP" means to configure the node IP as the source address. "None" means not to configure a source address. | "UseNodeIP", "None" | string | "UseNodeIP" | | failureDetectionMode | Specifies whether and how to detect loss of connectivity on the peerings generated by this BGPPeer resource. Default value "None" means nothing beyond BGP's own (slow) hold timer. 
"BFDIfDirectlyConnected" means to use BFD when the peer is directly connected. | "None", "BFDIfDirectlyConnected" | string | "None" | | restartMode | Specifies restart behaviour to configure on the peerings generated by this BGPPeer resource. Default value "GracefulRestart" means traditional graceful restart. "LongLivedGracefulRestart" means LLGR according to draft-uttaro-idr-bgp-persistence-05. | "GracefulRestart", "LongLivedGracefulRestart" | string | "GracefulRestart" | diff --git a/sidebars-calico-cloud.js b/sidebars-calico-cloud.js index be688ce5f7..a29bc9d992 100644 --- a/sidebars-calico-cloud.js +++ b/sidebars-calico-cloud.js @@ -359,6 +359,7 @@ module.exports = { 'networking/configuring/advertise-service-ips', 'networking/configuring/mtu', 'networking/configuring/custom-bgp-config', + 'networking/configuring/secure-bgp', 'networking/configuring/workloads-outside-cluster', 'networking/configuring/pod-mac-address', 'networking/configuring/node-local-dns-cache', @@ -418,12 +419,6 @@ module.exports = { 'operations/cluster-management', 'operations/disconnect', 'operations/usage-metrics', - { - type: 'category', - label: 'Secure component communications', - link: { type: 'doc', id: 'operations/comms/index' }, - items: ['operations/comms/secure-metrics', 'operations/comms/secure-bgp'], - }, { type: 'category', label: 'Monitoring', diff --git a/sidebars-calico-enterprise.js b/sidebars-calico-enterprise.js index 3ddc5a675a..22532d58aa 100644 --- a/sidebars-calico-enterprise.js +++ b/sidebars-calico-enterprise.js @@ -179,6 +179,7 @@ module.exports = { 'networking/configuring/advertise-service-ips', 'networking/configuring/mtu', 'networking/configuring/custom-bgp-config', + 'networking/configuring/secure-bgp', 'networking/configuring/workloads-outside-cluster', 'networking/configuring/pod-mac-address', 'networking/configuring/node-local-dns-cache', @@ -499,24 +500,7 @@ module.exports = { 'operations/cnx/roles-and-permissions', ], }, - { - type: 'category', - label: 
'Secure component communications', - link: { type: 'doc', id: 'operations/comms/index' }, - items: [ - 'operations/comms/crypto-auth', - 'operations/comms/secure-metrics', - 'operations/comms/secure-bgp', - 'operations/comms/manager-tls', - 'operations/comms/log-storage-tls', - 'operations/comms/linseed-tls', - 'operations/comms/apiserver-tls', - 'operations/comms/typha-node-tls', - 'operations/comms/compliance-tls', - 'operations/comms/packetcapture-tls', - 'operations/comms/certificate-management', - ], - }, + 'operations/comms/index', { type: 'category', label: 'CLIs',