From a357ea17c7951825f1e3be644d5caa883f829c63 Mon Sep 17 00:00:00 2001 From: Ivan Porta Date: Wed, 29 Jan 2025 14:16:42 +0100 Subject: [PATCH 1/5] fix docs Signed-off-by: Ivan Porta --- .../2-edge/tasks/automatic-failover.md | 53 +++++++++++------- .../content/2.11/tasks/automatic-failover.md | 55 ++++++++++++------- .../content/2.12/tasks/automatic-failover.md | 53 +++++++++++------- .../content/2.13/tasks/automatic-failover.md | 53 +++++++++++------- .../content/2.14/tasks/automatic-failover.md | 53 +++++++++++------- .../content/2.15/tasks/automatic-failover.md | 53 +++++++++++------- .../content/2.16/tasks/automatic-failover.md | 53 +++++++++++------- .../content/2.17/tasks/automatic-failover.md | 53 +++++++++++------- 8 files changed, 265 insertions(+), 161 deletions(-) diff --git a/linkerd.io/content/2-edge/tasks/automatic-failover.md b/linkerd.io/content/2-edge/tasks/automatic-failover.md index ed9b8d0cb9..a7b348cf59 100644 --- a/linkerd.io/content/2-edge/tasks/automatic-failover.md +++ b/linkerd.io/content/2-edge/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +90,35 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.11/tasks/automatic-failover.md b/linkerd.io/content/2.11/tasks/automatic-failover.md index d2f38ea0ff..a7b348cf59 100644 --- a/linkerd.io/content/2.11/tasks/automatic-failover.md +++ b/linkerd.io/content/2.11/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -82,7 +66,7 @@ TrafficSplit resource in the west cluster with the backend is the primary and all other backends will be treated as the fallbacks: ```bash -> cat < linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.12/tasks/automatic-failover.md b/linkerd.io/content/2.12/tasks/automatic-failover.md index ed9b8d0cb9..a7b348cf59 100644 --- a/linkerd.io/content/2.12/tasks/automatic-failover.md +++ b/linkerd.io/content/2.12/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +90,35 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.13/tasks/automatic-failover.md b/linkerd.io/content/2.13/tasks/automatic-failover.md index ed9b8d0cb9..a7b348cf59 100644 --- a/linkerd.io/content/2.13/tasks/automatic-failover.md +++ b/linkerd.io/content/2.13/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +90,35 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.14/tasks/automatic-failover.md b/linkerd.io/content/2.14/tasks/automatic-failover.md index ed9b8d0cb9..a7b348cf59 100644 --- a/linkerd.io/content/2.14/tasks/automatic-failover.md +++ b/linkerd.io/content/2.14/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +90,35 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.15/tasks/automatic-failover.md b/linkerd.io/content/2.15/tasks/automatic-failover.md index ed9b8d0cb9..a7b348cf59 100644 --- a/linkerd.io/content/2.15/tasks/automatic-failover.md +++ b/linkerd.io/content/2.15/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +90,35 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.16/tasks/automatic-failover.md b/linkerd.io/content/2.16/tasks/automatic-failover.md index ed9b8d0cb9..a7b348cf59 100644 --- a/linkerd.io/content/2.16/tasks/automatic-failover.md +++ b/linkerd.io/content/2.16/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +90,35 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic diff --git a/linkerd.io/content/2.17/tasks/automatic-failover.md b/linkerd.io/content/2.17/tasks/automatic-failover.md index ed9b8d0cb9..a7b348cf59 100644 --- a/linkerd.io/content/2.17/tasks/automatic-failover.md +++ b/linkerd.io/content/2.17/tasks/automatic-failover.md @@ -48,29 +48,13 @@ them in that cluster: > helm --kube-context=west install linkerd-failover -n linkerd-failover --create-namespace --devel linkerd-edge/linkerd-failover ``` -## Installing and Exporting Emojivoto +## Create the emojivoto namespace -We'll now install the Emojivoto example application into both clusters: +First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. -```bash -> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - -> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - ``` - -Next we'll "export" the `web-svc` in the east cluster by setting the -`mirror.linkerd.io/exported=true` label. This will instruct the -multicluster extension to create a mirror service called `web-svc-east` in the -west cluster, making the east Emojivoto application available in the west -cluster: - -```bash -> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true -> kubectl --context=west -n emojivoto get svc -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m -voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m -web-svc ClusterIP 10.96.222.169 80/TCP 13m -web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +> kubectl --context=west create ns emojivoto +> kubectl --context=east create ns emojivoto ``` ## Creating the Failover TrafficSplit @@ -106,6 +90,35 @@ This TrafficSplit indicates that the local (west) `web-svc` should be used as the primary, but traffic should be shifted to the remote (east) `web-svc-east` if the primary becomes unavailable. +## Installing and Exporting Emojivoto + +We'll now install the Emojivoto example application into both clusters: + +```bash +> linkerd --context=west inject https://run.linkerd.io/emojivoto.yml | kubectl --context=west apply -f - +> linkerd --context=east inject https://run.linkerd.io/emojivoto.yml | kubectl --context=east apply -f - +``` + +Next we'll "export" the `web-svc` in the east cluster by setting the +`mirror.linkerd.io/exported=true` label. 
This will instruct the +multicluster extension to create a mirror service called `web-svc-east` in the +west cluster, making the east Emojivoto application available in the west +cluster: + +```bash +> kubectl --context=east -n emojivoto label svc/web-svc mirror.linkerd.io/exported=true +> kubectl --context=west -n emojivoto get svc +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +emoji-svc ClusterIP 10.96.41.137 8080/TCP,8801/TCP 13m +voting-svc ClusterIP 10.96.247.68 8080/TCP,8801/TCP 13m +web-svc ClusterIP 10.96.222.169 80/TCP 13m +web-svc-east ClusterIP 10.96.244.245 80/TCP 92s +``` + +{{< alert severity="warning" >}} +The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +{{< /alert >}} + ## Testing the Failover We can use the `linkerd viz stat` command to see that the `vote-bot` traffic From 215f604b3f6bb0dcf17def43099633caa616da2c Mon Sep 17 00:00:00 2001 From: Ivan Porta Date: Mon, 3 Feb 2025 06:44:46 +0100 Subject: [PATCH 2/5] fix style Signed-off-by: Ivan Porta --- linkerd.io/content/2-edge/tasks/automatic-failover.md | 11 ++++++++--- linkerd.io/content/2.11/tasks/automatic-failover.md | 11 ++++++++--- linkerd.io/content/2.12/tasks/automatic-failover.md | 11 ++++++++--- linkerd.io/content/2.13/tasks/automatic-failover.md | 11 ++++++++--- linkerd.io/content/2.14/tasks/automatic-failover.md | 11 ++++++++--- linkerd.io/content/2.15/tasks/automatic-failover.md | 11 ++++++++--- linkerd.io/content/2.16/tasks/automatic-failover.md | 11 ++++++++--- linkerd.io/content/2.17/tasks/automatic-failover.md | 10 +++++++--- 8 files changed, 63 insertions(+), 24 deletions(-) diff --git a/linkerd.io/content/2-edge/tasks/automatic-failover.md b/linkerd.io/content/2-edge/tasks/automatic-failover.md index a7b348cf59..c9cc2dafa9 100644 --- a/linkerd.io/content/2-edge/tasks/automatic-failover.md +++ b/linkerd.io/content/2-edge/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,11 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the +TrafficSplit resource are created is important. If a ServiceProfile is +created after the pod has already started, the workloads will need to be +restarted. For more details on Service Profiles, +check out the [Service Profiles documentation](../features/service-profiles.md). 
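+
+For example, a minimal `ServiceProfile` for `web-svc` might look like the
+following sketch (the single route shown is illustrative, not the complete
+Emojivoto profile):
+
+```yaml
+apiVersion: linkerd.io/v1alpha2
+kind: ServiceProfile
+metadata:
+  name: web-svc.emojivoto.svc.cluster.local
+  namespace: emojivoto
+spec:
+  routes:
+  - name: GET /
+    condition:
+      method: GET
+      pathRegex: /
+```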
{{< /alert >}} ## Testing the Failover diff --git a/linkerd.io/content/2.11/tasks/automatic-failover.md b/linkerd.io/content/2.11/tasks/automatic-failover.md index a7b348cf59..c9cc2dafa9 100644 --- a/linkerd.io/content/2.11/tasks/automatic-failover.md +++ b/linkerd.io/content/2.11/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,11 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the +TrafficSplit resource are created is important. If a ServiceProfile is +created after the pod has already started, the workloads will need to be +restarted. For more details on Service Profiles, +check out the [Service Profiles documentation](../features/service-profiles.md). {{< /alert >}} ## Testing the Failover diff --git a/linkerd.io/content/2.12/tasks/automatic-failover.md b/linkerd.io/content/2.12/tasks/automatic-failover.md index a7b348cf59..c9cc2dafa9 100644 --- a/linkerd.io/content/2.12/tasks/automatic-failover.md +++ b/linkerd.io/content/2.12/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,11 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the +TrafficSplit resource are created is important. If a ServiceProfile is +created after the pod has already started, the workloads will need to be +restarted. For more details on Service Profiles, +check out the [Service Profiles documentation](../features/service-profiles.md). {{< /alert >}} ## Testing the Failover diff --git a/linkerd.io/content/2.13/tasks/automatic-failover.md b/linkerd.io/content/2.13/tasks/automatic-failover.md index a7b348cf59..c9cc2dafa9 100644 --- a/linkerd.io/content/2.13/tasks/automatic-failover.md +++ b/linkerd.io/content/2.13/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. 
+First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,11 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the +TrafficSplit resource are created is important. If a ServiceProfile is +created after the pod has already started, the workloads will need to be +restarted. For more details on Service Profiles, +check out the [Service Profiles documentation](../features/service-profiles.md). {{< /alert >}} ## Testing the Failover diff --git a/linkerd.io/content/2.14/tasks/automatic-failover.md b/linkerd.io/content/2.14/tasks/automatic-failover.md index a7b348cf59..c9cc2dafa9 100644 --- a/linkerd.io/content/2.14/tasks/automatic-failover.md +++ b/linkerd.io/content/2.14/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,11 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the +TrafficSplit resource are created is important. If a ServiceProfile is +created after the pod has already started, the workloads will need to be +restarted. For more details on Service Profiles, +check out the [Service Profiles documentation](../features/service-profiles.md). {{< /alert >}} ## Testing the Failover diff --git a/linkerd.io/content/2.15/tasks/automatic-failover.md b/linkerd.io/content/2.15/tasks/automatic-failover.md index a7b348cf59..c9cc2dafa9 100644 --- a/linkerd.io/content/2.15/tasks/automatic-failover.md +++ b/linkerd.io/content/2.15/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,11 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. 
If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the +TrafficSplit resource are created is important. If a ServiceProfile is +created after the pod has already started, the workloads will need to be +restarted. For more details on Service Profiles, +check out the [Service Profiles documentation](../features/service-profiles.md). {{< /alert >}} ## Testing the Failover diff --git a/linkerd.io/content/2.16/tasks/automatic-failover.md b/linkerd.io/content/2.16/tasks/automatic-failover.md index a7b348cf59..c9cc2dafa9 100644 --- a/linkerd.io/content/2.16/tasks/automatic-failover.md +++ b/linkerd.io/content/2.16/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,11 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the +TrafficSplit resource are created is important. If a ServiceProfile is +created after the pod has already started, the workloads will need to be +restarted. For more details on Service Profiles, +check out the [Service Profiles documentation](../features/service-profiles.md). {{< /alert >}} ## Testing the Failover diff --git a/linkerd.io/content/2.17/tasks/automatic-failover.md b/linkerd.io/content/2.17/tasks/automatic-failover.md index a7b348cf59..750e90128a 100644 --- a/linkerd.io/content/2.17/tasks/automatic-failover.md +++ b/linkerd.io/content/2.17/tasks/automatic-failover.md @@ -50,9 +50,10 @@ them in that cluster: ## Create the emojivoto namespace -First, we need to create the namespace where we will deploy our application and the `TrafficSplit` resource. +First, we need to create the namespace where we will deploy our application +and the `TrafficSplit` resource. -``` +```bash > kubectl --context=west create ns emojivoto > kubectl --context=east create ns emojivoto ``` @@ -116,7 +117,10 @@ web-svc-east ClusterIP 10.96.244.245 80/TCP 92s ``` {{< alert severity="warning" >}} -The order in which the Application and the ServiceProfile used by the TrafficSplit resource are created is important. If a ServiceProfile is created after the pod has already started, the workloads will need to be restarted. For more details on Service Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). +The order in which the Application and the ServiceProfile used by the TrafficSplit +resource are created is important. If a ServiceProfile is created after the pod has +already started, the workloads will need to be restarted. 
For more details on Service +Profiles, check out the [Service Profiles documentation](../features/service-profiles.md). {{< /alert >}} ## Testing the Failover From 2ee0bfcc035d4072dd92ae70bc7e4706af24eb34 Mon Sep 17 00:00:00 2001 From: Flynn Date: Thu, 30 Jan 2025 10:14:31 -0500 Subject: [PATCH 3/5] Fix a typo originally covered in PR#1780 (#1917) Signed-off-by: Flynn Signed-off-by: Ivan Porta --- linkerd.io/content/2-edge/overview/_index.md | 2 +- linkerd.io/content/2.12/overview/_index.md | 2 +- linkerd.io/content/2.13/overview/_index.md | 2 +- linkerd.io/content/2.14/overview/_index.md | 2 +- linkerd.io/content/2.15/overview/_index.md | 2 +- linkerd.io/content/2.16/overview/_index.md | 2 +- linkerd.io/content/2.17/overview/_index.md | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/linkerd.io/content/2-edge/overview/_index.md b/linkerd.io/content/2-edge/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2-edge/overview/_index.md +++ b/linkerd.io/content/2-edge/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.12/overview/_index.md b/linkerd.io/content/2.12/overview/_index.md index 52dc0a1977..d01045c4b6 100644 --- a/linkerd.io/content/2.12/overview/_index.md +++ b/linkerd.io/content/2.12/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.13/overview/_index.md b/linkerd.io/content/2.13/overview/_index.md index 52dc0a1977..d01045c4b6 100644 --- a/linkerd.io/content/2.13/overview/_index.md +++ b/linkerd.io/content/2.13/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.14/overview/_index.md b/linkerd.io/content/2.14/overview/_index.md index 52dc0a1977..d01045c4b6 100644 --- a/linkerd.io/content/2.14/overview/_index.md +++ b/linkerd.io/content/2.14/overview/_index.md @@ -35,7 +35,7 @@ latency. 
In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.15/overview/_index.md b/linkerd.io/content/2.15/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2.15/overview/_index.md +++ b/linkerd.io/content/2.15/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.16/overview/_index.md b/linkerd.io/content/2.16/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2.16/overview/_index.md +++ b/linkerd.io/content/2.16/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. You can learn more about these micro-proxies in our blog post, [Under the hood of Linkerd's state-of-the-art Rust proxy, Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/), (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog diff --git a/linkerd.io/content/2.17/overview/_index.md b/linkerd.io/content/2.17/overview/_index.md index fae411108e..2173e72dab 100644 --- a/linkerd.io/content/2.17/overview/_index.md +++ b/linkerd.io/content/2.17/overview/_index.md @@ -35,7 +35,7 @@ latency. In order to be as small, lightweight, and safe as possible, Linkerd's micro-proxies are written in [Rust](https://www.rust-lang.org/) and specialized -for Linkerd. You can learn more about the these micro-proxies in our blog post, +for Linkerd. 
You can learn more about these micro-proxies in our blog post,
 [Under the hood of Linkerd's state-of-the-art Rust proxy,
 Linkerd2-proxy](/2020/07/23/under-the-hood-of-linkerds-state-of-the-art-rust-proxy-linkerd2-proxy/),
 (If you want to know why Linkerd doesn't use Envoy, you can learn why in our blog

From 18c96ce9bda8ee99f9a27d42012fe6c44670c056 Mon Sep 17 00:00:00 2001
From: joedrf
Date: Mon, 3 Feb 2025 02:42:28 +0000
Subject: [PATCH 4/5] found a couple of small docs updates we could make
 (#1919)

Signed-off-by: Joe Fuller
Signed-off-by: Ivan Porta
---
 linkerd.io/content/2-edge/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.10/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.11/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.12/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.13/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.14/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.15/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.16/reference/proxy-configuration.md | 2 +-
 linkerd.io/content/2.17/reference/proxy-configuration.md | 2 +-
 linkerd.io/data/cli/2-10.yaml | 2 +-
 linkerd.io/data/cli/2-11.yaml | 2 +-
 linkerd.io/data/cli/2-12.yaml | 4 ++--
 linkerd.io/data/cli/2-13.yaml | 4 ++--
 linkerd.io/data/cli/2-14.yaml | 4 ++--
 linkerd.io/data/cli/2-15.yaml | 4 ++--
 linkerd.io/data/cli/2-16.yaml | 4 ++--
 linkerd.io/data/cli/2-17.yaml | 4 ++--
 linkerd.io/data/cli/2-edge.yaml | 4 ++--
 18 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/linkerd.io/content/2-edge/reference/proxy-configuration.md b/linkerd.io/content/2-edge/reference/proxy-configuration.md
index 377acf9999..c570e7faff 100644
--- a/linkerd.io/content/2-edge/reference/proxy-configuration.md
+++ b/linkerd.io/content/2-edge/reference/proxy-configuration.md
@@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the
 endpoint selection of the ingress container and to perform its own endpoint
 selection, enabling features such as per-route metrics and traffic splitting.

-The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject:
+The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject:
 ingress` annotation rather than the default `linkerd.io/inject: enabled`
 annotation. This can also be done with the `--ingress` flag in the `inject` CLI
 command:

diff --git a/linkerd.io/content/2.10/reference/proxy-configuration.md b/linkerd.io/content/2.10/reference/proxy-configuration.md
index fa9aa8997e..66916221a1 100644
--- a/linkerd.io/content/2.10/reference/proxy-configuration.md
+++ b/linkerd.io/content/2.10/reference/proxy-configuration.md
@@ -46,7 +46,7 @@ instead of their original destination. This will inform Linkerd to override the
 endpoint selection of the ingress container and to perform its own endpoint
 selection, enabling features such as per-route metrics and traffic splitting.

-The proxy can be made to run in `ingress` mode by used the `linkerd.io/inject:
+The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject:
 ingress` annotation rather than the default `linkerd.io/inject: enabled`
 annotation.
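+
+For example, ingress mode can be enabled on a workload's pod template like
+this (a minimal sketch; `my-app` is a placeholder and only the relevant
+fields are shown):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: my-app
+spec:
+  template:
+    metadata:
+      annotations:
+        linkerd.io/inject: ingress
+    # ...the rest of the pod template is unchanged...
+```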
 This can also be done with the `--ingress` flag in the `inject` CLI
 command:

diff --git a/linkerd.io/content/2.11/reference/proxy-configuration.md b/linkerd.io/content/2.11/reference/proxy-configuration.md
index fa9aa8997e..66916221a1 100644
--- a/linkerd.io/content/2.11/reference/proxy-configuration.md
+++ b/linkerd.io/content/2.11/reference/proxy-configuration.md
@@ -46,7 +46,7 @@ instead of their original destination. This will inform Linkerd to override the
 endpoint selection of the ingress container and to perform its own endpoint
 selection, enabling features such as per-route metrics and traffic splitting.

-The proxy can be made to run in `ingress` mode by used the `linkerd.io/inject:
+The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject:
 ingress` annotation rather than the default `linkerd.io/inject: enabled`
 annotation. This can also be done with the `--ingress` flag in the `inject` CLI
 command:

diff --git a/linkerd.io/content/2.12/reference/proxy-configuration.md b/linkerd.io/content/2.12/reference/proxy-configuration.md
index 377acf9999..c570e7faff 100644
--- a/linkerd.io/content/2.12/reference/proxy-configuration.md
+++ b/linkerd.io/content/2.12/reference/proxy-configuration.md
@@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the
 endpoint selection of the ingress container and to perform its own endpoint
 selection, enabling features such as per-route metrics and traffic splitting.

-The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject:
+The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject:
 ingress` annotation rather than the default `linkerd.io/inject: enabled`
 annotation. This can also be done with the `--ingress` flag in the `inject` CLI
 command:

diff --git a/linkerd.io/content/2.13/reference/proxy-configuration.md b/linkerd.io/content/2.13/reference/proxy-configuration.md
index 377acf9999..c570e7faff 100644
--- a/linkerd.io/content/2.13/reference/proxy-configuration.md
+++ b/linkerd.io/content/2.13/reference/proxy-configuration.md
@@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the
 endpoint selection of the ingress container and to perform its own endpoint
 selection, enabling features such as per-route metrics and traffic splitting.

-The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject:
+The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject:
 ingress` annotation rather than the default `linkerd.io/inject: enabled`
 annotation. This can also be done with the `--ingress` flag in the `inject` CLI
 command:

diff --git a/linkerd.io/content/2.14/reference/proxy-configuration.md b/linkerd.io/content/2.14/reference/proxy-configuration.md
index 377acf9999..c570e7faff 100644
--- a/linkerd.io/content/2.14/reference/proxy-configuration.md
+++ b/linkerd.io/content/2.14/reference/proxy-configuration.md
@@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the
 endpoint selection of the ingress container and to perform its own endpoint
 selection, enabling features such as per-route metrics and traffic splitting.

-The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject:
+The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject:
 ingress` annotation rather than the default `linkerd.io/inject: enabled`
 annotation.
This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.15/reference/proxy-configuration.md b/linkerd.io/content/2.15/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.15/reference/proxy-configuration.md +++ b/linkerd.io/content/2.15/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.16/reference/proxy-configuration.md b/linkerd.io/content/2.16/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.16/reference/proxy-configuration.md +++ b/linkerd.io/content/2.16/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/content/2.17/reference/proxy-configuration.md b/linkerd.io/content/2.17/reference/proxy-configuration.md index 377acf9999..c570e7faff 100644 --- a/linkerd.io/content/2.17/reference/proxy-configuration.md +++ b/linkerd.io/content/2.17/reference/proxy-configuration.md @@ -52,7 +52,7 @@ instead of their original destination. This will inform Linkerd to override the endpoint selection of the ingress container and to perform its own endpoint selection, enabling features such as per-route metrics and traffic splitting. -The proxy can be made to run in `ingress` mode by using the `linkerd.io/inject: +The proxy can be configured to run in `ingress` mode by using the `linkerd.io/inject: ingress` annotation rather than the default `linkerd.io/inject: enabled` annotation. This can also be done with the `--ingress` flag in the `inject` CLI command: diff --git a/linkerd.io/data/cli/2-10.yaml b/linkerd.io/data/cli/2-10.yaml index 53a1166972..388ca94779 100644 --- a/linkerd.io/data/cli/2-10.yaml +++ b/linkerd.io/data/cli/2-10.yaml @@ -62,7 +62,7 @@ AnnotationsReference: - Description: Used to configure the outbound TCP connection timeout in the proxy Name: config.linkerd.io/proxy-outbound-connect-timeout - Description: The proxy sidecar will stay alive for at least the given period before - receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`. + receiving SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`. 
diff --git a/linkerd.io/data/cli/2-12.yaml b/linkerd.io/data/cli/2-12.yaml
index 5aab7505e8..264efb3060 100644
--- a/linkerd.io/data/cli/2-12.yaml
+++ b/linkerd.io/data/cli/2-12.yaml
@@ -60,7 +60,7 @@ AnnotationsReference:
 - Description: Log format (plain or json) for the proxy
   Name: config.linkerd.io/proxy-log-format
 - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`,
-    to output the access log in the Appache Common Log Format, and `json`, to output
+    to output the access log in the Apache Common Log Format, and `json`, to output
     the access log in JSON.
   Name: config.linkerd.io/access-log
 - Description: Enable service profiles for non-Kubernetes services
@@ -76,7 +76,7 @@ AnnotationsReference:
 - Description: Inbound TCP connection timeout in the proxy
   Name: config.linkerd.io/proxy-inbound-connect-timeout
 - Description: The proxy sidecar will stay alive for at least the given period after
-    receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`.
+    receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
     Defaults to `0`
   Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds
 - Description: The application container will not start until the proxy is ready;
diff --git a/linkerd.io/data/cli/2-13.yaml b/linkerd.io/data/cli/2-13.yaml
index 97de9dbc5d..8ff0a92080 100644
--- a/linkerd.io/data/cli/2-13.yaml
+++ b/linkerd.io/data/cli/2-13.yaml
@@ -60,7 +60,7 @@ AnnotationsReference:
 - Description: Log format (plain or json) for the proxy
   Name: config.linkerd.io/proxy-log-format
 - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`,
-    to output the access log in the Appache Common Log Format, and `json`, to output
+    to output the access log in the Apache Common Log Format, and `json`, to output
     the access log in JSON.
   Name: config.linkerd.io/access-log
 - Description: Enable service profiles for non-Kubernetes services
@@ -82,7 +82,7 @@ AnnotationsReference:
     from the cache. Defaults to `90s`
   Name: config.linkerd.io/proxy-inbound-discovery-cache-unused-timeout
 - Description: The proxy sidecar will stay alive for at least the given period after
-    receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`.
+    receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
     Defaults to `0`
   Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds
 - Description: The application container will not start until the proxy is ready;
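Beyond the `Appache` → `Apache` spelling fix, the access-log annotation itself can be tried on any meshed workload. A minimal sketch; the `web` deployment in `emojivoto` is an assumed target, and patching the pod template triggers a normal rollout:

```bash
# Emit the access log in Apache Common Log Format
kubectl -n emojivoto patch deploy web --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"config.linkerd.io/access-log":"apache"}}}}}'

# The access log lines come from the proxy container, visible via kubectl logs
kubectl -n emojivoto logs deploy/web -c linkerd-proxy
```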
diff --git a/linkerd.io/data/cli/2-14.yaml b/linkerd.io/data/cli/2-14.yaml
index d9b53df957..2a26ad37f9 100644
--- a/linkerd.io/data/cli/2-14.yaml
+++ b/linkerd.io/data/cli/2-14.yaml
@@ -60,7 +60,7 @@ AnnotationsReference:
 - Description: Log format (plain or json) for the proxy
   Name: config.linkerd.io/proxy-log-format
 - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`,
-    to output the access log in the Appache Common Log Format, and `json`, to output
+    to output the access log in the Apache Common Log Format, and `json`, to output
     the access log in JSON.
   Name: config.linkerd.io/access-log
 - Description: Enable service profiles for non-Kubernetes services
@@ -88,7 +88,7 @@ AnnotationsReference:
     side of the proxy by setting it to a very high value
   Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout
 - Description: The proxy sidecar will stay alive for at least the given period after
-    receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`.
+    receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
     Defaults to `0`
   Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds
 - Description: The application container will not start until the proxy is ready;
diff --git a/linkerd.io/data/cli/2-15.yaml b/linkerd.io/data/cli/2-15.yaml
index 925eae37ff..4973ad45da 100644
--- a/linkerd.io/data/cli/2-15.yaml
+++ b/linkerd.io/data/cli/2-15.yaml
@@ -60,7 +60,7 @@ AnnotationsReference:
 - Description: Log format (plain or json) for the proxy
   Name: config.linkerd.io/proxy-log-format
 - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`,
-    to output the access log in the Appache Common Log Format, and `json`, to output
+    to output the access log in the Apache Common Log Format, and `json`, to output
     the access log in JSON.
   Name: config.linkerd.io/access-log
 - Description: Enable service profiles for non-Kubernetes services
@@ -88,7 +88,7 @@ AnnotationsReference:
     side of the proxy by setting it to a very high value
   Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout
 - Description: The proxy sidecar will stay alive for at least the given period after
-    receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`.
+    receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
     Defaults to `0`
   Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds
 - Description: The application container will not start until the proxy is ready;
diff --git a/linkerd.io/data/cli/2-16.yaml b/linkerd.io/data/cli/2-16.yaml
index 6ccc7ade54..8fa0e5b071 100644
--- a/linkerd.io/data/cli/2-16.yaml
+++ b/linkerd.io/data/cli/2-16.yaml
@@ -62,7 +62,7 @@ AnnotationsReference:
 - Description: Log format (plain or json) for the proxy
   Name: config.linkerd.io/proxy-log-format
 - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`,
-    to output the access log in the Appache Common Log Format, and `json`, to output
+    to output the access log in the Apache Common Log Format, and `json`, to output
     the access log in JSON.
   Name: config.linkerd.io/access-log
 - Description: Enable service profiles for non-Kubernetes services
@@ -91,7 +91,7 @@ AnnotationsReference:
     side of the proxy by setting it to a very high value
   Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout
 - Description: The proxy sidecar will stay alive for at least the given period after
-    receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`.
+    receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
     Defaults to `0`
   Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds
 - Description: The application container will not start until the proxy is ready;
diff --git a/linkerd.io/data/cli/2-17.yaml b/linkerd.io/data/cli/2-17.yaml
index 49a1e8650e..63173ef2de 100644
--- a/linkerd.io/data/cli/2-17.yaml
+++ b/linkerd.io/data/cli/2-17.yaml
@@ -62,7 +62,7 @@ AnnotationsReference:
 - Description: Log format (plain or json) for the proxy
   Name: config.linkerd.io/proxy-log-format
 - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`,
-    to output the access log in the Appache Common Log Format, and `json`, to output
+    to output the access log in the Apache Common Log Format, and `json`, to output
     the access log in JSON.
   Name: config.linkerd.io/access-log
 - Description: Enable service profiles for non-Kubernetes services
@@ -91,7 +91,7 @@ AnnotationsReference:
     side of the proxy by setting it to a very high value
   Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout
 - Description: The proxy sidecar will stay alive for at least the given period after
-    receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`.
+    receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
     Defaults to `0`
   Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds
 - Description: The application container will not start until the proxy is ready;
diff --git a/linkerd.io/data/cli/2-edge.yaml b/linkerd.io/data/cli/2-edge.yaml
index 925eae37ff..4973ad45da 100644
--- a/linkerd.io/data/cli/2-edge.yaml
+++ b/linkerd.io/data/cli/2-edge.yaml
@@ -60,7 +60,7 @@ AnnotationsReference:
 - Description: Log format (plain or json) for the proxy
   Name: config.linkerd.io/proxy-log-format
 - Description: Enables HTTP access logging in the proxy. Accepted values are `apache`,
-    to output the access log in the Appache Common Log Format, and `json`, to output
+    to output the access log in the Apache Common Log Format, and `json`, to output
     the access log in JSON.
   Name: config.linkerd.io/access-log
 - Description: Enable service profiles for non-Kubernetes services
@@ -88,7 +88,7 @@ AnnotationsReference:
     side of the proxy by setting it to a very high value
   Name: config.linkerd.io/proxy-disable-inbound-protocol-detect-timeout
 - Description: The proxy sidecar will stay alive for at least the given period after
-    receiving SIGTERM signal from Kubernetes but no longer than pod's `terminationGracePeriodSeconds`.
+    receiving the SIGTERM signal from Kubernetes but no longer than the pod's `terminationGracePeriodSeconds`.
     Defaults to `0`
   Name: config.alpha.linkerd.io/proxy-wait-before-exit-seconds
 - Description: The application container will not start until the proxy is ready;

From 6a63f339943f6f526296a72d15ed7e4c461b73f1 Mon Sep 17 00:00:00 2001
From: Ivan Porta
Date: Mon, 3 Feb 2025 06:57:47 +0100
Subject: [PATCH 5/5] fix style

Signed-off-by: Ivan Porta
---
 .../content/2-edge/tasks/automatic-failover.md      | 13 ++++++-------
 linkerd.io/content/2.11/tasks/automatic-failover.md | 13 ++++++-------
 linkerd.io/content/2.12/tasks/automatic-failover.md | 13 ++++++-------
 linkerd.io/content/2.13/tasks/automatic-failover.md | 13 ++++++-------
 linkerd.io/content/2.14/tasks/automatic-failover.md | 13 ++++++-------
 linkerd.io/content/2.15/tasks/automatic-failover.md | 13 ++++++-------
 linkerd.io/content/2.16/tasks/automatic-failover.md | 13 ++++++-------
 linkerd.io/content/2.17/tasks/automatic-failover.md |  4 ++--
 8 files changed, 44 insertions(+), 51 deletions(-)

diff --git a/linkerd.io/content/2-edge/tasks/automatic-failover.md b/linkerd.io/content/2-edge/tasks/automatic-failover.md
index c9cc2dafa9..cabed90b38 100644
--- a/linkerd.io/content/2-edge/tasks/automatic-failover.md
+++ b/linkerd.io/content/2-edge/tasks/automatic-failover.md
@@ -116,13 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
-The order in which the Application and the ServiceProfile used by the
-TrafficSplit resource are created is important. If a ServiceProfile is
-created after the pod has already started, the workloads will need to be
-restarted. For more details on Service Profiles,
-check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< warning >}}
+The order in which the Application and the ServiceProfile used by the TrafficSplit
+resource are created is important. If a ServiceProfile is created after the pod has
+already started, the workloads will need to be restarted. For more details on Service
+Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
+{{< /warning >}}
 
 ## Testing the Failover
 
diff --git a/linkerd.io/content/2.11/tasks/automatic-failover.md b/linkerd.io/content/2.11/tasks/automatic-failover.md
index c9cc2dafa9..cabed90b38 100644
--- a/linkerd.io/content/2.11/tasks/automatic-failover.md
+++ b/linkerd.io/content/2.11/tasks/automatic-failover.md
@@ -116,13 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
-The order in which the Application and the ServiceProfile used by the
-TrafficSplit resource are created is important. If a ServiceProfile is
-created after the pod has already started, the workloads will need to be
-restarted. For more details on Service Profiles,
-check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< warning >}}
+The order in which the Application and the ServiceProfile used by the TrafficSplit
+resource are created is important. If a ServiceProfile is created after the pod has
+already started, the workloads will need to be restarted. For more details on Service
+Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
+{{< /warning >}}
 
 ## Testing the Failover
 
diff --git a/linkerd.io/content/2.12/tasks/automatic-failover.md b/linkerd.io/content/2.12/tasks/automatic-failover.md
index c9cc2dafa9..cabed90b38 100644
--- a/linkerd.io/content/2.12/tasks/automatic-failover.md
+++ b/linkerd.io/content/2.12/tasks/automatic-failover.md
@@ -116,13 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
-The order in which the Application and the ServiceProfile used by the
-TrafficSplit resource are created is important. If a ServiceProfile is
-created after the pod has already started, the workloads will need to be
-restarted. For more details on Service Profiles,
-check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< warning >}}
+The order in which the Application and the ServiceProfile used by the TrafficSplit
+resource are created is important. If a ServiceProfile is created after the pod has
+already started, the workloads will need to be restarted. For more details on Service
+Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
+{{< /warning >}}
 
 ## Testing the Failover
 
diff --git a/linkerd.io/content/2.13/tasks/automatic-failover.md b/linkerd.io/content/2.13/tasks/automatic-failover.md
index c9cc2dafa9..cabed90b38 100644
--- a/linkerd.io/content/2.13/tasks/automatic-failover.md
+++ b/linkerd.io/content/2.13/tasks/automatic-failover.md
@@ -116,13 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
-The order in which the Application and the ServiceProfile used by the
-TrafficSplit resource are created is important. If a ServiceProfile is
-created after the pod has already started, the workloads will need to be
-restarted. For more details on Service Profiles,
-check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< warning >}}
+The order in which the Application and the ServiceProfile used by the TrafficSplit
+resource are created is important. If a ServiceProfile is created after the pod has
+already started, the workloads will need to be restarted. For more details on Service
+Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
+{{< /warning >}}
 
 ## Testing the Failover
 
diff --git a/linkerd.io/content/2.14/tasks/automatic-failover.md b/linkerd.io/content/2.14/tasks/automatic-failover.md
index c9cc2dafa9..cabed90b38 100644
--- a/linkerd.io/content/2.14/tasks/automatic-failover.md
+++ b/linkerd.io/content/2.14/tasks/automatic-failover.md
@@ -116,13 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
-The order in which the Application and the ServiceProfile used by the
-TrafficSplit resource are created is important. If a ServiceProfile is
-created after the pod has already started, the workloads will need to be
-restarted. For more details on Service Profiles,
-check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< warning >}}
+The order in which the Application and the ServiceProfile used by the TrafficSplit
+resource are created is important. If a ServiceProfile is created after the pod has
+already started, the workloads will need to be restarted. For more details on Service
+Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
+{{< /warning >}}
 
 ## Testing the Failover
 
diff --git a/linkerd.io/content/2.15/tasks/automatic-failover.md b/linkerd.io/content/2.15/tasks/automatic-failover.md
index c9cc2dafa9..cabed90b38 100644
--- a/linkerd.io/content/2.15/tasks/automatic-failover.md
+++ b/linkerd.io/content/2.15/tasks/automatic-failover.md
@@ -116,13 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
-The order in which the Application and the ServiceProfile used by the
-TrafficSplit resource are created is important. If a ServiceProfile is
-created after the pod has already started, the workloads will need to be
-restarted. For more details on Service Profiles,
-check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< warning >}}
+The order in which the Application and the ServiceProfile used by the TrafficSplit
+resource are created is important. If a ServiceProfile is created after the pod has
+already started, the workloads will need to be restarted. For more details on Service
+Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
+{{< /warning >}}
 
 ## Testing the Failover
 
diff --git a/linkerd.io/content/2.16/tasks/automatic-failover.md b/linkerd.io/content/2.16/tasks/automatic-failover.md
index c9cc2dafa9..cabed90b38 100644
--- a/linkerd.io/content/2.16/tasks/automatic-failover.md
+++ b/linkerd.io/content/2.16/tasks/automatic-failover.md
@@ -116,13 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
-The order in which the Application and the ServiceProfile used by the
-TrafficSplit resource are created is important. If a ServiceProfile is
-created after the pod has already started, the workloads will need to be
-restarted. For more details on Service Profiles,
-check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< warning >}}
+The order in which the Application and the ServiceProfile used by the TrafficSplit
+resource are created is important. If a ServiceProfile is created after the pod has
+already started, the workloads will need to be restarted. For more details on Service
+Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
+{{< /warning >}}
 
 ## Testing the Failover
 
diff --git a/linkerd.io/content/2.17/tasks/automatic-failover.md b/linkerd.io/content/2.17/tasks/automatic-failover.md
index 750e90128a..cabed90b38 100644
--- a/linkerd.io/content/2.17/tasks/automatic-failover.md
+++ b/linkerd.io/content/2.17/tasks/automatic-failover.md
@@ -116,12 +116,12 @@ web-svc      ClusterIP   10.96.222.169   <none>   80/TCP   13m
 web-svc-east ClusterIP   10.96.244.245   <none>   80/TCP   92s
 ```
 
-{{< alert severity="warning" >}}
+{{< warning >}}
 The order in which the Application and the ServiceProfile used by the TrafficSplit
 resource are created is important. If a ServiceProfile is created after the pod has
 already started, the workloads will need to be restarted. For more details on Service
 Profiles, check out the [Service Profiles documentation](../features/service-profiles.md).
-{{< /alert >}}
+{{< /warning >}}
 
 ## Testing the Failover
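A practical footnote to the warning this patch restyles: when the ServiceProfile (or the TrafficSplit built on it) is created after the pods are already running, restarting the workloads is what picks the change up. A minimal sketch, assuming the failover tutorial's `west` context and `emojivoto` namespace:

```bash
# Restart every deployment in the namespace so the proxies re-read the
# ServiceProfile/TrafficSplit configuration created after they started
kubectl --context=west -n emojivoto rollout restart deploy
```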