8 changes: 8 additions & 0 deletions .vale/styles/config/vocabularies/vocab/accept.txt
Original file line number Diff line number Diff line change
@@ -42,3 +42,11 @@ backchannel
frontchannel
URL
timeframe
hostnames
keystores
vCPUs
failover
[Ff]ailover
liveness
Hazelcast
hazelcast
13 changes: 13 additions & 0 deletions en/base.yml
Original file line number Diff line number Diff line change
@@ -107,6 +107,7 @@ extra:
generator: false
isolated_templates:
- templates/complete-guide.html
- templates/deployment-guide.html
- templates/sdk.html

expanded_navs:
@@ -161,6 +162,18 @@ extra:
Frontend Security:
link: complete-guides/fesecurity/introduction
level: 2
"Path A: Evaluation (single node)":
link: complete-guides/deploy-eval/introduction
level: 3
"Path B: Production (single region, HA)":
link: complete-guides/deploy-ha/introduction
level: 3
"Path C: Production (multi-region, DR)":
link: complete-guides/deploy-dr/introduction
level: 3
"Path D: Container platforms":
link: complete-guides/deploy-containers/introduction
level: 3

nav_icons:
Overview:
1 change: 1 addition & 0 deletions en/identity-server/is_common.yml
Original file line number Diff line number Diff line change
@@ -39,6 +39,7 @@ extra:
- Home
- Get Started
- Guides
- Deployment guides
- Setup
- Integrations
- APIs
14 changes: 14 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-containers/change-hostname.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 5 mins
---

Set the production hostname that WSO2 Identity Server advertises to clients. In Kubernetes, this is the hostname exposed via your Ingress or Route resource — it must match the hostname in your TLS certificate and DNS records.

!!! note "Before this step"
DNS records for the production hostname point to your cluster's ingress IP address or load balancer. The TLS certificate for this hostname is available.

{% include "../../../../../includes/deploy/change-the-hostname.md" %}

!!! tip "Verify"
Confirm `deployment.toml` inside your ConfigMap or mounted configuration file contains the correct `hostname` value before proceeding.
Comment on lines +11 to +14

⚠️ Potential issue | 🟠 Major

Add container-specific execution steps around the shared hostname include.

The shared include is generic and does not tell users how to apply hostname changes through Kubernetes/OpenShift configuration objects. This leaves the container path incomplete at execution time.

Suggested container-specific addendum
 {% include "../../../../../includes/deploy/change-the-hostname.md" %}
+
+For Kubernetes/OpenShift deployments, apply the hostname through your deployment configuration source:
+
+1. Update the `deployment.toml` content in your ConfigMap or mounted configuration.
+2. Ensure the updated configuration is mounted by all Identity Server pods.
+3. Roll out/restart the workload and confirm all pods use the same `hostname` value.
As per coding guidelines "Task-based documentation must follow a logical, goal-oriented structure including ... sequential steps ... and outcome confirmation."
🧰 Tools
🪛 LanguageTool

[style] ~12-~12: Using many exclamation marks might seem excessive (in this case: 6 exclamation marks for a text that’s 645 characters long)
Context: ...udes/deploy/change-the-hostname.md" %} !!! tip "Verify" Confirm `deployment.to...

(EN_EXCESSIVE_EXCLAMATION)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/change-hostname.md`
around lines 11 - 14, The shared include
("../../../../../includes/deploy/change-the-hostname.md") is generic; add a
container-specific addendum around it that provides concrete, sequential steps
for Kubernetes/OpenShift: (1) edit or patch the ConfigMap containing
deployment.toml (refer to the ConfigMap name used by the app) or update the
mounted file, (2) apply the change (kubectl/oc apply or kubectl patch), (3)
trigger a rolling restart of the Deployment/StatefulSet/Pod (kubectl rollout
restart or oc rollout) so containers pick up the new mounted config, and (4)
verify inside the running pod that deployment.toml contains the updated hostname
and the service responds (kubectl exec + cat/grep and curl). Insert these steps
before or after the existing "Verify" tip and reference the file name
deployment.toml and the ConfigMap/mount so readers can follow exact commands for
containerized deployments.
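
The ConfigMap-based flow described in this comment can be sketched as follows. All names (ConfigMap name, namespace) are hypothetical and not taken from the official Helm chart; only the `[server] hostname` key is the standard `deployment.toml` setting:

```yaml
# Hypothetical ConfigMap carrying deployment.toml for the Identity Server pods.
apiVersion: v1
kind: ConfigMap
metadata:
  name: identity-server-conf   # illustrative name
  namespace: wso2is            # illustrative namespace
data:
  deployment.toml: |
    [server]
    hostname = "is.example.com"
```

After updating the ConfigMap, the workload still needs a rolling restart (for example, `kubectl rollout restart deployment/<name>`) so every pod remounts and loads the new value.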

Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Create separate keystores for token signing, data encryption, and TLS. Mount these into your pods as Kubernetes Secrets — never bake keystore files into container images.

!!! note "Before this step"
TLS is configured (previous step complete). The JDK `keytool` utility is available to generate keystores locally before mounting them as Secrets.

{% include "../../../../../includes/deploy/security/keystores/index.md" %}

!!! tip "Verify"
After deploying with the new keystores, check pod logs for any `KeyStore` or `KeyManager` errors. A clean startup confirms the keystores mounted and loaded correctly.
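
The Secret-based mounting described above can be sketched as follows. The Secret name, keystore file names, and mount path are illustrative; the target directory shown is the conventional `<IS_HOME>/repository/resources/security` location:

```yaml
# Hypothetical Secret packaging locally generated keystores; never bake these
# files into the container image.
apiVersion: v1
kind: Secret
metadata:
  name: identity-server-keystores   # illustrative name
type: Opaque
data:
  internal.jks: <base64-encoded-keystore>   # e.g. output of: base64 -w0 internal.jks
  tls.jks: <base64-encoded-keystore>
# Then mount it read-only in the pod spec:
#   volumes:
#     - name: keystores
#       secret:
#         secretName: identity-server-keystores
#   volumeMounts (per container):
#     - name: keystores
#       mountPath: /home/wso2carbon/wso2is/repository/resources/security
#       readOnly: true
```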
14 changes: 14 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-containers/configure-tls.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Enable HTTPS on the WSO2 Identity Server transport layer. In Kubernetes deployments, TLS is typically terminated at the Ingress controller, but WSO2 Identity Server still requires its own TLS configuration for inter-service communication and the Management Console.

⚠️ Potential issue | 🟠 Major

Kubernetes-specific TLS guidance is missing from the included content.

The introduction mentions Kubernetes-specific considerations: "TLS is typically terminated at the Ingress controller, but WSO2 Identity Server still requires its own TLS configuration for inter-service communication and the management Console." However, the included content (configure-transport-level-security.md) provides only generic application-level TLS configuration (SSL protocols, ciphers, deployment.toml edits) without addressing:

  • Ingress TLS termination vs. passthrough modes
  • Managing certificates as Kubernetes Secrets
  • Mounting certificates into pods
  • ConfigMap-based configuration for containerized deployments

Users following generic file-system instructions may struggle to adapt them to Kubernetes environments.

Would you like me to draft Kubernetes-specific supplemental content for this page, or create an issue to enhance the shared TLS documentation with container-platform guidance?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/configure-tls.md`
at line 6, The page introduces Kubernetes-specific TLS concerns but the included
configure-transport-level-security.md only covers generic filesystem edits; add
a Kubernetes-specific supplement that explains Ingress TLS termination vs
passthrough modes, how to store and manage certificates as Kubernetes Secrets,
patterns for mounting certificates into Identity Server pods (or using projected
volumes), and how to apply deployment.toml or other runtime configuration via
ConfigMaps/Secrets for containerized deployments; update the page to link to the
new supplement and/or create an issue to include this container-platform
guidance in the shared TLS documentation.


!!! note "Before this step"
TLS certificates are available as Kubernetes Secrets or in a secrets manager your deployment can access. The hostname step is complete.

{% include "../../deploy/security/configure-transport-level-security.md" %}

!!! tip "Verify"
After deploying, run `kubectl exec -it <pod-name> -- openssl s_client -connect localhost:9443 -brief 2>/dev/null | head -3`. The certificate CN or SAN should match the configured hostname.
15 changes: 15 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-containers/deploy-on-kubernetes.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,15 @@
---
template: templates/deployment-guide.html
read_time: 30 mins
platform_label: Kubernetes
---

Deploy WSO2 Identity Server to a Kubernetes cluster. This step covers writing the Deployment and Service manifests, configuring the ConfigMap for `deployment.toml`, and exposing WSO2 Identity Server through an Ingress resource.

⚠️ Potential issue | 🟠 Major

Page description misrepresents the included content.

The description states this step covers "writing the Deployment and Service manifests, configuring the ConfigMap for deployment.toml, and exposing WSO2 Identity Server through an Ingress resource." However, the included content (deploy-is-on-kubernetes.md) uses a Helm chart approach, which abstracts away the underlying manifests. The page does not teach manual manifest writing.

✏️ Suggested revision
-Deploy WSO2 Identity Server to a Kubernetes cluster. This step covers writing the Deployment and Service manifests, configuring the ConfigMap for `deployment.toml`, and exposing WSO2 Identity Server through an Ingress resource.
+Deploy WSO2 Identity Server to a Kubernetes cluster using Helm. This step covers installing the WSO2 Identity Server Helm chart, which automates the creation of Deployment, Service, ConfigMap, and Ingress resources.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/deploy-on-kubernetes.md`
at line 6, The page description incorrectly claims the guide covers manual
Deployment/Service manifests and ConfigMap editing, while the actual content
(deploy-is-on-kubernetes.md) uses a Helm chart; update the single-line summary
to state that this guide demonstrates deploying WSO2 Identity Server to
Kubernetes using the provided Helm chart (which abstracts underlying manifests),
remove or rephrase references to "writing the Deployment and Service manifests"
and "configuring the ConfigMap for `deployment.toml`" and instead mention
configuring Helm values and exposing the service via the chart’s Ingress
configuration.


!!! note "Before this step"
Configuration steps (hostname, TLS, keystores) are complete. Your `kubeconfig` is set to the target cluster and you have permission to create Deployments, Services, ConfigMaps, and Ingress resources.

{% include "../../../../../includes/deploy/deploy-is-on-kubernetes.md" %}

!!! tip "Verify"
Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and sign in with admin credentials.

⚠️ Potential issue | 🟡 Minor

Use 'log in' instead of 'sign in' for consistency.

Replace "sign in" with "log in" to align with the established repository terminology.

📝 Proposed fix
-    Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and sign in with admin credentials.
+    Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and log in with admin credentials.

Based on learnings: Enforce the established terminology in the wso2/docs-is repository: use 'log in' as the verb and 'login' as the noun/adjective consistently across all Markdown documentation.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and sign in with admin credentials.
Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and log in with admin credentials.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/deploy-on-kubernetes.md`
at line 14, Replace the phrase "sign in" with "log in" in the sentence that
reads "Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity
Server pods reach `Running` status. Then open `https://<hostname>/console` and
sign in with admin credentials." to match repository terminology (use "log in"
as the verb and "login" as noun/adjective elsewhere); update only the verb to
"log in with admin credentials" so the rest of the sentence and code snippets
remain unchanged.
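
Since the included guide deploys through Helm (as the description comment above notes), page-level customization happens in chart values rather than hand-written manifests. A hypothetical `values.yaml` fragment; every key name below is illustrative and must be checked against the actual chart's `values.yaml`:

```yaml
# Hypothetical Helm values for a WSO2 Identity Server chart; key names are
# assumptions for illustration only.
wso2:
  deployment:
    hostname: is.example.com
    replicas: 2
ingress:
  enabled: true
  tlsSecretName: is-tls-cert
```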

15 changes: 15 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-containers/deploy-on-openshift.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,15 @@
---
template: templates/deployment-guide.html
read_time: 30 mins
platform_label: OpenShift
---

Deploy WSO2 Identity Server to an OpenShift cluster. OpenShift applies stricter security policies than standard Kubernetes, including restricted pod security contexts and the requirement to use Routes instead of Ingress resources.

⚠️ Potential issue | 🟠 Major

Page description misrepresents the deployment approach.

The description emphasizes OpenShift-specific considerations including "restricted pod security contexts and the requirement to use Routes instead of Ingress resources," which suggests manual configuration of these resources. However, the included content (deploy-is-on-openshift.md) uses a Helm chart that abstracts these details. The actual deployment is Helm-based, not raw manifest writing.

✏️ Suggested revision
-Deploy WSO2 Identity Server to an OpenShift cluster. OpenShift applies stricter security policies than standard Kubernetes, including restricted pod security contexts and the requirement to use Routes instead of Ingress resources.
+Deploy WSO2 Identity Server to an OpenShift cluster using Helm. OpenShift applies stricter security policies than standard Kubernetes. The Helm chart addresses these requirements, including support for restricted pod security contexts and OpenShift Routes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/deploy-on-openshift.md`
at line 6, The page description incorrectly implies manual manifest-level
OpenShift configuration (mentioning "restricted pod security contexts" and
"requirement to use Routes instead of Ingress resources"); update the summary to
state that deployment uses a Helm chart (reference deploy-is-on-openshift.md)
which abstracts OpenShift specifics and handles Routes/security contexts for
you, and rephrase the sentence to mention Helm-based deployment and that the
chart adapts to OpenShift rather than instructing users to manually configure
pod security contexts or Routes.


!!! note "Before this step"
Configuration steps (hostname, TLS, keystores) are complete. You have `oc` CLI access to the target project with permission to create DeploymentConfigs or Deployments, Services, ConfigMaps, and Routes.

{% include "../../../../../includes/deploy/deploy-is-on-openshift.md" %}

!!! tip "Verify"
Run `oc get pods -n <project>` and confirm all WSO2 Identity Server pods reach `Running` status. Open `https://<hostname>/console` and sign in with admin credentials.

⚠️ Potential issue | 🟡 Minor

Use 'log in' instead of 'sign in' for consistency.

Replace "sign in" with "log in" to align with the established repository terminology.

📝 Proposed fix
-    Run `oc get pods -n <project>` and confirm all WSO2 Identity Server pods reach `Running` status. Open `https://<hostname>/console` and sign in with admin credentials.
+    Run `oc get pods -n <project>` and confirm all WSO2 Identity Server pods reach `Running` status. Open `https://<hostname>/console` and log in with admin credentials.

Based on learnings: Enforce the established terminology in the wso2/docs-is repository: use 'log in' as the verb and 'login' as the noun/adjective consistently across all Markdown documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/deploy-on-openshift.md`
at line 14, Replace the phrase "sign in" with "log in" in the sentence "Open
`https://<hostname>/console` and sign in with admin credentials" to align with
repository terminology; update the text to "Open `https://<hostname>/console`
and log in with admin credentials" and scan nearby documentation in this file
for any other occurrences of "sign in" to change to "log in" (keeping "login" as
the noun/adjective where applicable).
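
For the Route-based exposure this page describes, a passthrough Route can be sketched as follows (the Route and Service names are illustrative):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: identity-server   # illustrative name
spec:
  host: is.example.com
  to:
    kind: Service
    name: identity-server
  port:
    targetPort: 9443
  tls:
    # passthrough forwards TLS unterminated so WSO2 IS presents its own
    # certificate; use `reencrypt` to terminate at the router and re-encrypt
    # to the pod instead.
    termination: passthrough
```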

Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Introduction
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-containers/introduction.md" %}
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Next steps
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-containers/next-steps.md" %}
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Prerequisites
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-containers/prerequisites.md" %}
14 changes: 14 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-containers/security-hardening.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Apply product, OS, and network-level hardening before directing production traffic to this deployment. On Kubernetes, also review pod security contexts, network policies, and RBAC rules as part of the hardening process.

!!! note "Before this step"
WSO2 Identity Server is deployed and running (previous step complete). Hardening is the last configuration step before final verification.

{% include "../../deploy/security/security-guidelines/index.md" %}
Comment on lines +6 to +11

⚠️ Potential issue | 🟠 Major

Kubernetes hardening scope is stated but not actually provided

The text says readers should review pod security contexts, network policies, and RBAC, but the included page only provides product/OS/network guideline categories. Please either add Kubernetes-specific hardening content (or a dedicated include/link) or remove that claim.

✏️ Proposed doc fix
-Apply product, OS, and network-level hardening before directing production traffic to this deployment. On Kubernetes, also review pod security contexts, network policies, and RBAC rules as part of the hardening process.
+Apply product-, operating system-, and network-level hardening before directing production traffic to this deployment. For Kubernetes-specific controls (pod security contexts, network policies, and RBAC), complete your platform hardening checklist before continuing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/security-hardening.md`
around lines 6 - 11, The paragraph that claims "On Kubernetes, also review pod
security contexts, network policies, and RBAC rules" is misleading because the
included file {% include "../../deploy/security/security-guidelines/index.md" %}
lacks Kubernetes-specific guidance; either add Kubernetes hardening content (a
new include or section covering podSecurityContext examples, NetworkPolicy
patterns, and RBAC role/rolebinding recommendations) and reference it here, or
remove/soften the Kubernetes claim and instead link to a dedicated Kubernetes
hardening include (e.g., create and include
../../deploy/security/security-guidelines/kubernetes.md) so the statement
matches the actual content.


!!! tip "Verify"
Restart all pods (for example, with `kubectl rollout restart deployment/<name>`) and confirm a clean startup before proceeding to the deployment checklist.
14 changes: 14 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-containers/verify-deployment.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Run the deployment checklist to confirm this container deployment is production-ready. Work through each item and resolve any gaps before directing real user traffic here.

!!! note "Before this step"
The deployment is running and security hardening is applied (previous step complete).

{% include "../../deploy/deployment-checklist.md" %}

!!! tip "Verify"
Complete a full end-to-end authentication flow: sign in to `https://<hostname>/console` with admin credentials, then test a user sign-in through a connected application to confirm the deployment is working correctly.

⚠️ Potential issue | 🟡 Minor

Use 'log in' instead of 'sign in' for consistency.

The established terminology in this repository uses 'log in' as the verb and 'login' as the noun/adjective. Replace "sign in" with "log in" in both occurrences.

📝 Proposed fix
-    Complete a full end-to-end authentication flow: sign in to `https://<hostname>/console` with admin credentials, then test a user sign-in through a connected application to confirm the deployment is working correctly.
+    Complete a full end-to-end authentication flow: log in to `https://<hostname>/console` with admin credentials, then test a user login through a connected application to confirm the deployment is working correctly.

Based on learnings: Enforce the established terminology in the wso2/docs-is repository: use 'log in' as the verb and 'login' as the noun/adjective consistently across all Markdown documentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-containers/verify-deployment.md`
at line 14, Replace the two occurrences of "sign in" in the sentence that
references "https://<hostname>/console" with the repository-standard verb "log
in" (keeping "login" as noun/adjective elsewhere); update both phrases so it
reads e.g. "log in to `https://<hostname>/console` with admin credentials, then
test a user log in through a connected application" to ensure consistent
terminology across the document.

14 changes: 14 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-dr/change-hostname.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 5 mins
---

Set the production hostname on every node in every region. Each region typically uses a region-specific hostname (for example, `is-us.example.com`) alongside a global hostname that resolves to the active region.

!!! note "Before this step"
DNS records for your regional and global hostnames are configured. TLS certificates covering those hostnames are ready.

{% include "../../../../../includes/deploy/change-the-hostname.md" %}
Comment on lines +6 to +11

⚠️ Potential issue | 🟠 Major

DR hostname strategy conflicts with the included procedure

This page describes region-specific hostnames per node, but the included hostname guide is written for a single shared hostname configuration. Please align these so the instructions and verify criteria match.

✏️ Suggested alignment options
-Set the production hostname on every node in every region. Each region typically uses a region-specific hostname (for example, `is-us.example.com`) alongside a global hostname that resolves to the active region.
+Set the production hostname on every node in every region. Use a hostname strategy that matches your DR routing design, then apply it consistently to each node.

Or replace the include with a DR-specific include that explicitly documents per-region hostname values.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@en/identity-server/next/docs/complete-guides/deploy-dr/change-hostname.md`
around lines 6 - 11, The page's text describes region-specific per-node
hostnames but still pulls in the generic single-hostname include ("{% include
../../../../../includes/deploy/change-the-hostname.md %}"), causing a mismatch;
update the page so the behavior and verification match by either (a) replacing
that include with a DR-specific include that documents per-region hostname
values and verification steps, or (b) editing the surrounding copy to describe
the single shared-hostname flow and expected verification criteria to match the
included file; search for the include string and the surrounding paragraphs
("Set the production hostname..." and the note block) to locate where to change
the include or the explanatory text and update the verify criteria accordingly.


!!! tip "Verify"
Run `grep 'hostname' <IS_HOME>/repository/conf/deployment.toml` on each node. The value should match the region-specific hostname for that node.
14 changes: 14 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-dr/configure-clustering.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Enable Hazelcast clustering within each region. Nodes in the same region cluster together; cross-region coordination is handled at the database layer in the next step, not through Hazelcast.

🛠️ Refactor suggestion | 🟠 Major

Add explicit guidance for enforcing same-region clustering.

The page states "Nodes in the same region cluster together" and the verification step expects "only that region's IP addresses are listed," but the included content (configure-hazelcast.md) uses a generic member list configuration without explaining how to ensure same-region-only membership. Users may not understand they must:

  • Manually filter and list only same-region node IPs in each region's configuration
  • Avoid cross-region IPs in the Hazelcast well-known member list
  • Replicate this filtering process for each region independently

Without explicit guidance, users could inadvertently create cross-region Hazelcast clusters, violating the DR architecture.

Would you like me to draft additional content explaining how to configure region-specific member lists, or create an issue to enhance the Hazelcast documentation with multi-region deployment patterns?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-dr/configure-clustering.md`
at line 6, The doc lacks explicit instructions to enforce region-only Hazelcast
membership: update the Configure Hazelcast content (referencing the "Enable
Hazelcast clustering" guidance and the configure-hazelcast.md member list) to
state that each region's Hazelcast config must use a curated well-known member
list containing only that region's node IPs, describe the required manual
filtering/automation steps to produce per-region IP lists, and add a
verification step that the region's cluster membership command returns only
those IPs; ensure guidance warns not to include cross-region IPs and suggests
automating list generation per-region (e.g., via tags or metadata) and
replicating the process for every region.


!!! note "Before this step"
All nodes within each region have identical configuration. TCP port 5701 is open between nodes within the same region.

{% include "../../deploy/configure-hazelcast.md" %}

!!! tip "Verify"
Start all nodes in one region. In each node's `wso2carbon.log`, confirm a `Members [N] {` line where N equals the per-region node count and only that region's IP addresses are listed.
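As a quick cross-check, the member list in the log can be validated against the region's subnet with a short script. This is an illustrative sketch: the log line format shown and the subnet are assumptions, so adapt both to your actual logs and addressing plan.

```python
import ipaddress
import re

def members_in_region(log_text, region_cidr):
    """Return True if every member IP in the Hazelcast 'Members' block
    falls inside the given region subnet."""
    net = ipaddress.ip_network(region_cidr)
    # Matches addresses like [10.0.1.11]:5701 inside the Members { ... } block.
    ips = re.findall(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]:5701", log_text)
    return bool(ips) and all(ipaddress.ip_address(ip) in net for ip in ips)

# Example log excerpt (format assumed for illustration):
log = """
Members [2] {
    Member [10.0.1.11]:5701 - a1b2
    Member [10.0.1.12]:5701 - c3d4
}
"""
print(members_in_region(log, "10.0.1.0/24"))  # True: region-local members only
print(members_in_region(log, "10.1.0.0/24"))  # False: wrong region
```

A failing check here usually means a cross-region IP slipped into the well-known member list.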
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Point each regional cluster at an external RDBMS. You will configure cross-region database replication in the next step — this step establishes the connection configuration that replication will build on.

!!! note "Before this step"
An external RDBMS is running in each region. You have database credentials and the JDBC driver JAR ready for each region's database server.

{% include "../../../../../includes/deploy/configure/databases/clustering.md" %}

!!! tip "Verify"
Start one node per region temporarily. Confirm no JDBC connection errors appear in `<IS_HOME>/repository/logs/wso2carbon.log`. Stop the servers before continuing.
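For reference, a datasource sketch in `deployment.toml`, assuming MySQL. The URLs and credentials are placeholders, and each region's nodes should point at that region's own database server:

```toml
# Illustrative fragment -- values are placeholders.
[database.identity_db]
type = "mysql"
url = "jdbc:mysql://db.us-east.example.com:3306/identity_db?useSSL=true"
username = "wso2_is"
password = "change-me"

[database.shared_db]
type = "mysql"
url = "jdbc:mysql://db.us-east.example.com:3306/shared_db?useSSL=true"
username = "wso2_is"
password = "change-me"
```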
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Configure the mechanism that detects a primary region failure and redirects traffic to the standby region. Failover can be automatic (via a global load balancer health check) or manual (via a DNS update) — choose the approach that matches your target recovery time and operational capabilities.

!!! note "Before this step"
Database replication is running and verified (previous step complete). Regional load balancers are in place in all regions.

{% include "../../../../../includes/complete-guides/deploy-dr/configure-failover.md" %}

!!! tip "Verify"
Perform a planned failover test: stop the primary region's nodes and confirm that traffic routes to the standby region within your target time. Verify that authentication flows complete against the standby. Restore the primary and confirm traffic returns to it.
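The automatic option reduces to a health-probe loop: if the primary fails N consecutive checks, switch DNS or the global load balancer to the standby. A simplified, dependency-free sketch, where the probe function and region names are stand-ins for your monitoring and DNS/GLB integration:

```python
def next_active(active, standby, probe, failures, threshold=3):
    """One tick of a failover loop: return (active_region, failure_count)."""
    if probe(active):
        return active, 0              # a healthy probe resets the counter
    failures += 1
    if failures >= threshold:
        return standby, 0             # promote the standby after N misses
    return active, failures

# Simulated health: the primary is down, the standby is up.
health = {"us-east": False, "eu-west": True}
region, misses = "us-east", 0
for _ in range(3):
    region, misses = next_active(region, "eu-west", lambda r: health[r], misses)
print(region)  # eu-west
```

The consecutive-failure threshold is what keeps a single dropped probe from triggering an unnecessary regional failover.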
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Create separate keystores for token signing, data encryption, and TLS on every node. In a multi-region deployment, use the same signing and encryption keystores across all regions so tokens issued in one region are valid in another.
🛠️ Refactor suggestion | 🟠 Major

Add explicit guidance for cross-region keystore distribution.

The page states "use the same signing and encryption keystores across all regions so tokens issued in one region are valid in another," but the included content (keystores/index.md) does not provide specific instructions for implementing cross-region keystore sharing. Users may not understand the operational steps required, such as:

  • Whether to create keystores once and copy them to all regions
  • How to securely distribute keystores across regions
  • How to verify keystores are identical across regions

Would you like me to draft additional content that explains the cross-region keystore distribution workflow, or open an issue to enhance the shared keystores documentation with multi-region guidance?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-dr/configure-keystores.md`
at line 6, The guidance in configure-keystores.md says to "use the same signing
and encryption keystores across all regions" but keystores/index.md lacks
operational steps; update keystores/index.md (and add a brief note in
configure-keystores.md near the "Create separate keystores..." sentence) to: 1)
describe a recommended workflow (create canonical signing and encryption
keystores once, export securely, and distribute copies to each region) 2) list
secure transport options (transfer via S3 with SSE+KMS and restricted IAM,
rsync/scp over bastion with SSH keys, or use a secrets/replication system like
HashiCorp Vault or AWS Secrets Manager cross-region replication) 3) show
verification steps to ensure identical keystores (generate and compare sha256
checksums and keystore fingerprints using keytool/openssl) and 4) note that TLS
keystores remain node-local and include rotation/rotation-record guidance; link
the new instructions from configure-keystores.md to keystores/index.md for users
who need the how-to.


!!! note "Before this step"
TLS is configured (previous step complete). The JDK `keytool` utility is available on each node.

{% include "../../../../../includes/deploy/security/keystores/index.md" %}

!!! tip "Verify"
Start one node per region temporarily. Confirm no `KeyStore` or `KeyManager` errors appear in `wso2carbon.log`. Stop the servers before continuing.
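To confirm that the signing and encryption keystores are byte-identical across regions, compare SHA-256 digests of the per-region copies. A sketch, with temporary files standing in for the real keystore paths:

```python
import hashlib
import tempfile
from pathlib import Path

def digest(path):
    """SHA-256 of a keystore file, used to compare per-region copies."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Demo with temporary stand-ins for the keystore copies in two regions.
with tempfile.TemporaryDirectory() as d:
    a, b = Path(d, "us-east.jks"), Path(d, "eu-west.jks")
    a.write_bytes(b"keystore-bytes")
    b.write_bytes(b"keystore-bytes")
    match = digest(a) == digest(b)
print(match)  # True: identical copies hash the same
```

Comparing `keytool -list -v` certificate fingerprints on each node gives the same assurance without copying files around.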
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Deploy a load balancer in each region to front that region's cluster. You also need a global load balancer or DNS-based routing policy to direct traffic to the active region — this step covers the per-region configuration.

!!! note "Before this step"
All nodes within each region are running and show the correct Hazelcast member count. Port 9443 is reachable on each node from its regional load balancer.

{% include "../../deploy/front-with-the-nginx-load-balancer.md" %}

!!! tip "Verify"
Run `curl -k -o /dev/null -s -w "%{http_code}" https://<regional-hostname>/console` against each regional load balancer. Each should return `200`.
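For example, each regional NGINX configuration would list only that region's nodes as upstreams. Hostnames and IP addresses below are placeholders:

```nginx
# us-east regional load balancer -- same-region backends only.
upstream wso2is_us_east {
    server 10.0.1.11:9443;
    server 10.0.1.12:9443;
}

server {
    listen 443 ssl;
    server_name idp.us-east.example.com;   # region-specific hostname

    location / {
        proxy_pass https://wso2is_us_east;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```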
Comment on lines +11 to +14
⚠️ Potential issue | 🟠 Major

Clarify DR-specific adaptations after the shared NGINX include.

The included guide is single-region oriented. In this DR page, readers still need explicit instructions to (1) repeat configuration per region and (2) use only same-region backend nodes in each regional load balancer. Without that, the step can be misapplied in multi-region DR.

Suggested addition after the include
 {% include "../../deploy/front-with-the-nginx-load-balancer.md" %}
+
+After you complete the shared NGINX setup, apply these DR-specific rules:
+
+1. Repeat the load balancer configuration in **each** region.
+2. In each regional load balancer, configure upstream servers from that same region only.
+3. Use a region-specific public hostname per regional load balancer.
+4. Keep global traffic steering (DNS or global load balancer) separate from regional upstream definitions.
As per coding guidelines "Task-based documentation must follow a logical, goal-oriented structure including ... sequential steps ... and next steps."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@en/identity-server/next/docs/complete-guides/deploy-dr/configure-load-balancer.md`
around lines 11 - 14, The shared NGINX include
(../../deploy/front-with-the-nginx-load-balancer.md) is single-region; update
configure-load-balancer.md to add DR-specific steps immediately after the
include: explicitly instruct operators to repeat the NGINX load‑balancer
configuration for each region, ensure each regional load balancer's
upstream/backend block contains only same‑region backend nodes (no cross‑region
backends), and update the Verify step to run the curl check against each
regional hostname (e.g., curl https://<regional-hostname>/console for every
region). Reference the include and the Verify paragraph so readers perform these
region-specific, per-load-balancer actions in sequence.

Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 20 mins
---

Configure your RDBMS to replicate identity data from the primary region to standby regions. WSO2 Identity Server stores all persistent state in the database, so replication at the database layer is the foundation of any disaster recovery strategy.

!!! note "Before this step"
Database connections are configured and verified on all regional nodes (previous step complete). You have administrative access to the database servers in all regions.

{% include "../../../../../includes/complete-guides/deploy-dr/configure-replication.md" %}

!!! tip "Verify"
Write a test record to the primary database and confirm it appears on the standby within your target replication lag. Most RDBMS platforms provide a replication status view or command — confirm replication lag is within acceptable limits before continuing.
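The heartbeat test can be scripted: write a marker on the primary, poll the standby until it appears, and record the elapsed time as the observed lag. The two callables below are stand-ins for real database writes and reads; the demo simulates replication with an in-process delay:

```python
import time

def measure_lag(write_primary, read_standby, timeout=5.0, poll=0.05):
    """Write a marker via write_primary(), poll read_standby() until the
    marker is visible, and return the observed lag in seconds (None on timeout)."""
    marker = f"hb-{time.monotonic()}"
    write_primary(marker)
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        if read_standby() == marker:
            return time.monotonic() - start
        time.sleep(poll)
    return None

# Simulated replication: the standby sees a write ~0.2 s after the primary.
state = {}
def write_primary(value):
    state.update(value=value, at=time.monotonic())
def read_standby():
    at = state.get("at")
    if at is not None and time.monotonic() - at > 0.2:
        return state["value"]
    return None

lag = measure_lag(write_primary, read_standby)
print(lag is not None and lag < 5)  # True: marker replicated within the window
```

Compare the measured lag against your recovery point objective before signing off on the step.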
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Enable HTTPS on the WSO2 Identity Server transport layer in each region. Apply the same TLS configuration on all nodes across all regions.

!!! note "Before this step"
TLS certificates for regional and global hostnames are available. The hostname configuration step is complete on all nodes.

{% include "../../deploy/security/configure-transport-level-security.md" %}

!!! tip "Verify"
On one node per region, run `openssl s_client -connect <regional-hostname>:9443 -brief 2>/dev/null | head -5`. The certificate CN or SAN should match the regional hostname.
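When scripting that check across regions, a tiny SAN-matching helper can flag hostname mismatches. This is a simplified sketch: real TLS hostname validation (RFC 6125) restricts wildcards to a single DNS label, which `fnmatch` does not enforce.

```python
from fnmatch import fnmatch

def cert_matches(hostname, sans):
    """True if the hostname matches any subjectAltName DNS entry.
    Simplified wildcard matching; real TLS validation is stricter."""
    return any(fnmatch(hostname, pattern) for pattern in sans)

print(cert_matches("idp.us-east.example.com", ["*.us-east.example.com"]))  # True
print(cert_matches("idp.eu-west.example.com", ["*.us-east.example.com"]))  # False
```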
14 changes: 14 additions & 0 deletions en/identity-server/next/docs/complete-guides/deploy-dr/install.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Install WSO2 Identity Server on each node in each region. Every node across all regions must run the same version.

!!! note "Before this step"
Java 11, 17, or 21 is installed and `JAVA_HOME` is set on each node in each region. Run `java -version` to confirm.

{% include "../../deploy/get-started/install.md" %}

!!! tip "Verify"
On each node in each region, run `<IS_HOME>/bin/wso2server.sh --version`. The same version string must display on all nodes across all regions.
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Introduction
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-dr/introduction.md" %}
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Next steps
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-dr/next-steps.md" %}
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Prerequisites
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-dr/prerequisites.md" %}
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Apply product, OS, and network-level hardening on every node in every region before directing production traffic to this deployment.

!!! note "Before this step"
Regional failover is configured and tested. Hardening is the last configuration step before final verification.

{% include "../../deploy/security/security-guidelines/index.md" %}

!!! tip "Verify"
Restart all nodes across all regions and confirm a clean startup in `wso2carbon.log`. Then proceed to the deployment checklist.
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Run the deployment checklist across all regions. Also confirm that database replication is active and that a failover test routes traffic to the secondary region as expected.

!!! note "Before this step"
All previous steps are complete across all regions. All regional clusters are running with the correct Hazelcast member counts.

{% include "../../deploy/deployment-checklist.md" %}

!!! tip "Verify"
Simulate a regional failure by stopping all nodes in the primary region. Confirm DNS or the global load balancer routes traffic to the secondary region and users can authenticate within your target recovery time.
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Install WSO2 Identity Server on your evaluation machine. The embedded H2 database ships pre-configured — no external database setup is required for this path.

!!! note "Before this step"
Confirm Java 11, 17, or 21 is installed and `JAVA_HOME` is set. Run `java -version` to verify.

{% include "../../deploy/get-started/install.md" %}

!!! tip "Verify"
Run `<IS_HOME>/bin/wso2server.sh --version` (Linux/macOS) or `<IS_HOME>\bin\wso2server.bat --version` (Windows). The version string should display without errors.
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Introduction
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-eval/introduction.md" %}
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Next steps
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-eval/next-steps.md" %}