diff --git a/linkerd.io/content/2-edge/overview/_index.md b/linkerd.io/content/2-edge/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2-edge/overview/_index.md +++ b/linkerd.io/content/2-edge/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2-edge/reference/proxy-configuration.md b/linkerd.io/content/2-edge/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2-edge/reference/proxy-configuration.md +++ b/linkerd.io/content/2-edge/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. 
This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2-edge/tasks/automatic-failover.md b/linkerd.io/content/2-edge/tasks/automatic-failover.md index ed9b8d0cb9..cabed90b38 100644 --- a/linkerd.io/content/2-edge/tasks/automatic-failover.md +++ b/linkerd.io/content/2-edge/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. ```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. 
This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +91,38 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit +resource are created is important. If a ServiceProfile is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.10/reference/proxy-configuration.md b/linkerd.io/content/2.10/reference/proxy-configuration.md index fa9aa8997e..66916221a1 100644 --- a/linkerd.io/content/2.10/reference/proxy-configuration.md +++ b/linkerd.io/content/2.10/reference/proxy-configuration.md @@ -46,7 +46,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by used the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation.
This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.11/reference/proxy-configuration.md b/linkerd.io/content/2.11/reference/proxy-configuration.md index fa9aa8997e..66916221a1 100644 --- a/linkerd.io/content/2.11/reference/proxy-configuration.md +++ b/linkerd.io/content/2.11/reference/proxy-configuration.md @@ -46,7 +46,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by used the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.11/tasks/automatic-failover.md b/linkerd.io/content/2.11/tasks/automatic-failover.md index d2f38ea0ff..cabed90b38 100644 --- a/linkerd.io/content/2.11/tasks/automatic-failover.md +++ b/linkerd.io/content/2.11/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. ```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. 
This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -82,7 +67,7 @@ TrafficSplit resource in the west cluster with the backend is the primary and all other backends will be treated as the fallbacks: ```bash -> cat < linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit +resource are created is important. 
If a ServiceProfile is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.12/overview/_index.md b/linkerd.io/content/2.12/overview/_index.md index 52dc0a1977..d01045c4b6 100644 --- a/linkerd.io/content/2.12/overview/_index.md +++ b/linkerd.io/content/2.12/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.12/reference/proxy-configuration.md b/linkerd.io/content/2.12/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.12/reference/proxy-configuration.md +++ b/linkerd.io/content/2.12/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. 
This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.12/tasks/automatic-failover.md b/linkerd.io/content/2.12/tasks/automatic-failover.md index ed9b8d0cb9..cabed90b38 100644 --- a/linkerd.io/content/2.12/tasks/automatic-failover.md +++ b/linkerd.io/content/2.12/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. ```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. 
This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +91,38 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit +resource are created is important. If a ServiceProfile is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.13/overview/_index.md b/linkerd.io/content/2.13/overview/_index.md index 52dc0a1977..d01045c4b6 100644 --- a/linkerd.io/content/2.13/overview/_index.md +++ b/linkerd.io/content/2.13/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. 
You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.13/reference/proxy-configuration.md b/linkerd.io/content/2.13/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.13/reference/proxy-configuration.md +++ b/linkerd.io/content/2.13/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.13/tasks/automatic-failover.md b/linkerd.io/content/2.13/tasks/automatic-failover.md index ed9b8d0cb9..cabed90b38 100644 --- a/linkerd.io/content/2.13/tasks/automatic-failover.md +++ b/linkerd.io/content/2.13/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. 
```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +91,38 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit +resource are created is important. If a ServiceProfile is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.14/overview/_index.md b/linkerd.io/content/2.14/overview/_index.md index 52dc0a1977..d01045c4b6 100644 --- a/linkerd.io/content/2.14/overview/_index.md +++ b/linkerd.io/content/2.14/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. 
You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.14/reference/proxy-configuration.md b/linkerd.io/content/2.14/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.14/reference/proxy-configuration.md +++ b/linkerd.io/content/2.14/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.14/tasks/automatic-failover.md b/linkerd.io/content/2.14/tasks/automatic-failover.md index ed9b8d0cb9..cabed90b38 100644 --- a/linkerd.io/content/2.14/tasks/automatic-failover.md +++ b/linkerd.io/content/2.14/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. 
```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +91,38 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit +resource are created is important. If a ServiceProfile is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.15/overview/_index.md b/linkerd.io/content/2.15/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2.15/overview/_index.md +++ b/linkerd.io/content/2.15/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. 
You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.15/reference/proxy-configuration.md b/linkerd.io/content/2.15/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.15/reference/proxy-configuration.md +++ b/linkerd.io/content/2.15/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.15/tasks/automatic-failover.md b/linkerd.io/content/2.15/tasks/automatic-failover.md index ed9b8d0cb9..cabed90b38 100644 --- a/linkerd.io/content/2.15/tasks/automatic-failover.md +++ b/linkerd.io/content/2.15/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. 
```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +91,38 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit +resource are created is important. If a ServiceProfile is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.16/overview/_index.md b/linkerd.io/content/2.16/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2.16/overview/_index.md +++ b/linkerd.io/content/2.16/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. 
You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.16/reference/proxy-configuration.md b/linkerd.io/content/2.16/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.16/reference/proxy-configuration.md +++ b/linkerd.io/content/2.16/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.16/tasks/automatic-failover.md b/linkerd.io/content/2.16/tasks/automatic-failover.md index ed9b8d0cb9..cabed90b38 100644 --- a/linkerd.io/content/2.16/tasks/automatic-failover.md +++ b/linkerd.io/content/2.16/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. 
```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +91,38 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the application and the `ServiceProfile` used by the `TrafficSplit` +resource are created is important. If a `ServiceProfile` is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.17/overview/_index.md b/linkerd.io/content/2.17/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2.17/overview/_index.md +++ b/linkerd.io/content/2.17/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd.
You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.17/reference/proxy-configuration.md b/linkerd.io/content/2.17/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.17/reference/proxy-configuration.md +++ b/linkerd.io/content/2.17/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.17/tasks/automatic-failover.md b/linkerd.io/content/2.17/tasks/automatic-failover.md index ed9b8d0cb9..cabed90b38 100644 --- a/linkerd.io/content/2.17/tasks/automatic-failover.md +++ b/linkerd.io/content/2.17/tasks/automatic-failover.md @@ -48,29 +48,14 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. 
```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - -``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +91,38 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< warning >}} +The order in which the application and the `ServiceProfile` used by the `TrafficSplit` +resource are created is important. If a `ServiceProfile` is created after the pod has +already started, the workloads will need to be restarted. For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /warning >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/data/cli/2-10.yaml b/linkerd.io/data/cli/2-10.yaml index 53a1166972..388ca94779 100644 --- a/linkerd.io/data/cli/2-10.yaml +++ b/linkerd.io/data/cli/2-10.yaml @@ -62,7 +62,7 @@ AnnotationsReference: - Description: Used to configure the outbound TCP connection timeout in the proxy Name: config.linkerd.io/proxy-outbound-connect-timeout - Description: The proxy sidecar will stay alive for at least the given period before - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. + receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
If not provided, it will be defaulted to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds CLIReference: diff --git a/linkerd.io/data/cli/2-11.yaml b/linkerd.io/data/cli/2-11.yaml index 8d50ee9a49..247f74064a 100644 --- a/linkerd.io/data/cli/2-11.yaml +++ b/linkerd.io/data/cli/2-11.yaml @@ -62,7 +62,7 @@ AnnotationsReference: - Description: Used to configure the outbound TCP connection timeout in the proxy Name: config.linkerd.io/proxy-outbound-connect-timeout - Description: The proxy sidecar will stay alive for at least the given period before - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. + receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. If not provided, it will be defaulted to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready; diff --git a/linkerd.io/data/cli/2-12.yaml b/linkerd.io/data/cli/2-12.yaml index 5aab7505e8..264efb3060 100644 --- a/linkerd.io/data/cli/2-12.yaml +++ b/linkerd.io/data/cli/2-12.yaml @@ -60,7 +60,7 @@ AnnotationsReference: - Description: Log format (plain or json) for the proxy Name: config.linkerd.io/proxy-log-format - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`, - to output the access log in the Appache Common Log Format, and `json`, to output + to output the access log in the Apache Common Log Format, and `json`, to output the access log in JSON. Name: config.linkerd.io/access-log - Description: Enable service profiles for non-Kubernetes services @@ -76,7 +76,7 @@ AnnotationsReference: - Description: Inbound TCP connection timeout in the proxy Name: config.linkerd.io/proxy-inbound-connect-timeout - Description: The proxy sidecar will stay alive for at least the given period after - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. 
+ receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. Defaults to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready; diff --git a/linkerd.io/data/cli/2-13.yaml b/linkerd.io/data/cli/2-13.yaml index 97de9dbc5d..8ff0a92080 100644 --- a/linkerd.io/data/cli/2-13.yaml +++ b/linkerd.io/data/cli/2-13.yaml @@ -60,7 +60,7 @@ AnnotationsReference: - Description: Log format (plain or json) for the proxy Name: config.linkerd.io/proxy-log-format - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`, - to output the access log in the Appache Common Log Format, and `json`, to output + to output the access log in the Apache Common Log Format, and `json`, to output the access log in JSON. Name: config.linkerd.io/access-log - Description: Enable service profiles for non-Kubernetes services @@ -82,7 +82,7 @@ AnnotationsReference: from the cache. Defaults to `90s` Name: config.linkerd.io/proxy-inbound-discovery-cache-unused-timeout - Description: The proxy sidecar will stay alive for at least the given period after - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. + receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. Defaults to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready; diff --git a/linkerd.io/data/cli/2-14.yaml b/linkerd.io/data/cli/2-14.yaml index d9b53df957..2a26ad37f9 100644 --- a/linkerd.io/data/cli/2-14.yaml +++ b/linkerd.io/data/cli/2-14.yaml @@ -60,7 +60,7 @@ AnnotationsReference: - Description: Log format (plain or json) for the proxy Name: config.linkerd.io/proxy-log-format - Description: Enables HTTP access logging in the proxy. 
Accepted values are `apache`, - to output the access log in the Appache Common Log Format, and `json`, to output + to output the access log in the Apache Common Log Format, and `json`, to output the access log in JSON. Name: config.linkerd.io/access-log - Description: Enable service profiles for non-Kubernetes services @@ -88,7 +88,7 @@ AnnotationsReference: side of the proxy by setting it to a very high value Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout - Description: The proxy sidecar will stay alive for at least the given period after - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. + receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. Defaults to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready; diff --git a/linkerd.io/data/cli/2-15.yaml b/linkerd.io/data/cli/2-15.yaml index 925eae37ff..4973ad45da 100644 --- a/linkerd.io/data/cli/2-15.yaml +++ b/linkerd.io/data/cli/2-15.yaml @@ -60,7 +60,7 @@ AnnotationsReference: - Description: Log format (plain or json) for the proxy Name: config.linkerd.io/proxy-log-format - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`, - to output the access log in the Appache Common Log Format, and `json`, to output + to output the access log in the Apache Common Log Format, and `json`, to output the access log in JSON. Name: config.linkerd.io/access-log - Description: Enable service profiles for non-Kubernetes services @@ -88,7 +88,7 @@ AnnotationsReference: side of the proxy by setting it to a very high value Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout - Description: The proxy sidecar will stay alive for at least the given period after - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. 
+ receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. Defaults to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready; diff --git a/linkerd.io/data/cli/2-16.yaml b/linkerd.io/data/cli/2-16.yaml index 6ccc7ade54..8fa0e5b071 100644 --- a/linkerd.io/data/cli/2-16.yaml +++ b/linkerd.io/data/cli/2-16.yaml @@ -62,7 +62,7 @@ AnnotationsReference: - Description: Log format (plain or json) for the proxy Name: config.linkerd.io/proxy-log-format - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`, - to output the access log in the Appache Common Log Format, and `json`, to output + to output the access log in the Apache Common Log Format, and `json`, to output the access log in JSON. Name: config.linkerd.io/access-log - Description: Enable service profiles for non-Kubernetes services @@ -91,7 +91,7 @@ AnnotationsReference: side of the proxy by setting it to a very high value Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout - Description: The proxy sidecar will stay alive for at least the given period after - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. + receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. Defaults to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready; diff --git a/linkerd.io/data/cli/2-17.yaml b/linkerd.io/data/cli/2-17.yaml index 49a1e8650e..63173ef2de 100644 --- a/linkerd.io/data/cli/2-17.yaml +++ b/linkerd.io/data/cli/2-17.yaml @@ -62,7 +62,7 @@ AnnotationsReference: - Description: Log format (plain or json) for the proxy Name: config.linkerd.io/proxy-log-format - Description: Enables HTTP access logging in the proxy. 
Accepted values are `apache`, - to output the access log in the Appache Common Log Format, and `json`, to output + to output the access log in the Apache Common Log Format, and `json`, to output the access log in JSON. Name: config.linkerd.io/access-log - Description: Enable service profiles for non-Kubernetes services @@ -91,7 +91,7 @@ AnnotationsReference: side of the proxy by setting it to a very high value Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout - Description: The proxy sidecar will stay alive for at least the given period after - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. + receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. Defaults to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready; diff --git a/linkerd.io/data/cli/2-edge.yaml b/linkerd.io/data/cli/2-edge.yaml index 925eae37ff..4973ad45da 100644 --- a/linkerd.io/data/cli/2-edge.yaml +++ b/linkerd.io/data/cli/2-edge.yaml @@ -60,7 +60,7 @@ AnnotationsReference: - Description: Log format (plain or json) for the proxy Name: config.linkerd.io/proxy-log-format - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`, - to output the access log in the Appache Common Log Format, and `json`, to output + to output the access log in the Apache Common Log Format, and `json`, to output the access log in JSON. Name: config.linkerd.io/access-log - Description: Enable service profiles for non-Kubernetes services @@ -88,7 +88,7 @@ AnnotationsReference: side of the proxy by setting it to a very high value Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout - Description: The proxy sidecar will stay alive for at least the given period after - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. 
+ receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. Defaults to `0` Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds - Description: The application container will not start until the proxy is ready;