Revamp deployment guides with dedicated path pages #6002
base: master
Changes from all commits
41654f6 · de683ce · 8f45081 · bb28f24 · 128fcce · 91af95a
@@ -39,6 +39,7 @@ extra:
  - Home
  - Get Started
  - Guides
  - Deployment guides
  - Setup
  - Integrations
  - APIs
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 5 mins
---

Set the production hostname that WSO2 Identity Server advertises to clients. In Kubernetes, this is the hostname exposed via your Ingress or Route resource — it must match the hostname in your TLS certificate and DNS records.

!!! note "Before this step"
    DNS records for the production hostname point to your cluster's ingress IP address or load balancer. The TLS certificate for this hostname is available.

{% include "../../../../../includes/deploy/change-the-hostname.md" %}

!!! tip "Verify"
    Confirm `deployment.toml` inside your ConfigMap or mounted configuration file contains the correct `hostname` value before proceeding.
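To make the Verify tip concrete, here is one way the hostname can travel into the pods: a ConfigMap that carries `deployment.toml`. This is an illustrative sketch only; the ConfigMap name, namespace, and hostname below are placeholders, not values from the official WSO2 artifacts.

```yaml
# Hypothetical ConfigMap carrying deployment.toml; the name, namespace,
# and hostname are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2is-config
  namespace: wso2
data:
  deployment.toml: |
    [server]
    hostname = "is.example.com"
```

Mounting this ConfigMap into the pods (or templating it through your chart) keeps the hostname change declarative and easy to audit.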
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Create separate keystores for token signing, data encryption, and TLS. Mount these into your pods as Kubernetes Secrets — never bake keystore files into container images.

!!! note "Before this step"
    TLS is configured (previous step complete). The JDK `keytool` utility is available to generate keystores locally before mounting them as Secrets.

{% include "../../../../../includes/deploy/security/keystores/index.md" %}

!!! tip "Verify"
    After deploying with the new keystores, check pod logs for any `KeyStore` or `KeyManager` errors. A clean startup confirms the keystores mounted and loaded correctly.
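As an illustration of the Secret-based approach described above, the pod spec fragment below mounts keystores from a Secret instead of baking them into the image. The Secret name, keystore file names, and mount path are assumptions; align them with your actual image layout.

```yaml
# Illustrative pod spec fragment. The Secret would typically be created with
# something like:
#   kubectl create secret generic wso2is-keystores \
#     --from-file=internal.jks --from-file=tls.jks
# All names and the mount path below are placeholders.
spec:
  containers:
    - name: wso2is
      volumeMounts:
        - name: keystores
          mountPath: /home/wso2carbon/wso2is/repository/resources/security/keystores
          readOnly: true
  volumes:
    - name: keystores
      secret:
        secretName: wso2is-keystores
```

Mounting read-only also prevents a compromised process from tampering with the key material at rest in the pod.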
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Enable HTTPS on the WSO2 Identity Server transport layer. In Kubernetes deployments, TLS is typically terminated at the Ingress controller, but WSO2 Identity Server still requires its own TLS configuration for inter-service communication and the Management Console.

> **Contributor:** Kubernetes-specific TLS guidance is missing from the included content. The introduction mentions Kubernetes-specific considerations ("TLS is typically terminated at the Ingress controller, but WSO2 Identity Server still requires its own TLS configuration…"), yet the included content provides only generic file-system instructions. Users following them may struggle to adapt to Kubernetes environments. Consider drafting Kubernetes-specific supplemental content for this page, or opening an issue to enhance the shared TLS documentation with container-platform guidance.

!!! note "Before this step"
    TLS certificates are available as Kubernetes Secrets or in a secrets manager your deployment can access. The hostname step is complete.

{% include "../../deploy/security/configure-transport-level-security.md" %}

!!! tip "Verify"
    After deploying, run `kubectl exec -it <pod-name> -- openssl s_client -connect localhost:9443 -brief 2>/dev/null | head -3`. The certificate CN or SAN should match the configured hostname.
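One hedged sketch of the pattern the page introduction describes: terminate TLS at the Ingress and re-encrypt to the pod's own HTTPS port, so traffic inside the cluster stays encrypted. The ingress-nginx annotation is one example controller's mechanism; all names, the hostname, and the TLS Secret are placeholders.

```yaml
# Illustrative Ingress with TLS termination and HTTPS to the backend.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wso2is-ingress
  annotations:
    # ingress-nginx: speak HTTPS to the pod's 9443 port instead of plain HTTP.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - is.example.com
      secretName: wso2is-tls
  rules:
    - host: is.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wso2is-service
                port:
                  number: 9443
```

Other ingress controllers expose equivalent settings under different annotation names; check your controller's documentation.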
@@ -0,0 +1,15 @@
---
template: templates/deployment-guide.html
read_time: 30 mins
platform_label: Kubernetes
---

Deploy WSO2 Identity Server to a Kubernetes cluster. This step covers writing the Deployment and Service manifests, configuring the ConfigMap for `deployment.toml`, and exposing WSO2 Identity Server through an Ingress resource.

> **Contributor:** Page description misrepresents the included content. The description states this step covers writing the Deployment and Service manifests, configuring the ConfigMap for `deployment.toml`, and exposing through an Ingress resource, but the included content deploys via the Helm chart. Suggested revision:
>
> `-` Deploy WSO2 Identity Server to a Kubernetes cluster. This step covers writing the Deployment and Service manifests, configuring the ConfigMap for `deployment.toml`, and exposing WSO2 Identity Server through an Ingress resource.
> `+` Deploy WSO2 Identity Server to a Kubernetes cluster using Helm. This step covers installing the WSO2 Identity Server Helm chart, which automates the creation of Deployment, Service, ConfigMap, and Ingress resources.

!!! note "Before this step"
    Configuration steps (hostname, TLS, keystores) are complete. Your `kubeconfig` is set to the target cluster and you have permission to create Deployments, Services, ConfigMaps, and Ingress resources.

{% include "../../../../../includes/deploy/deploy-is-on-kubernetes.md" %}

!!! tip "Verify"
    Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and sign in with admin credentials.

> **Contributor:** Use "log in" instead of "sign in" to align with the established repository terminology ("log in" as the verb, "login" as the noun/adjective). Proposed fix:
>
> `-` Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and sign in with admin credentials.
> `+` Run `kubectl get pods -n <namespace>` and confirm all WSO2 Identity Server pods reach `Running` status. Then open `https://<hostname>/console` and log in with admin credentials.
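The Verify tip expects pods to reach `Running`; a readiness probe is what makes that signal meaningful for routing traffic. The container fragment below is a sketch under the assumption that the image exposes a health-check API on port 9443; the endpoint path and timings are assumptions to verify against the image you actually deploy.

```yaml
# Illustrative container fragment: gate traffic on readiness, restart on
# liveness failure. Paths, ports, and timings are assumptions.
containers:
  - name: wso2is
    ports:
      - containerPort: 9443
    readinessProbe:
      httpGet:
        path: /api/health-check/v1.0/health
        port: 9443
        scheme: HTTPS
      initialDelaySeconds: 60
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 9443
      initialDelaySeconds: 120
      periodSeconds: 20
```

A generous `initialDelaySeconds` matters here because the server's first startup can take a minute or more; a probe that fires too early causes restart loops.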
@@ -0,0 +1,15 @@
---
template: templates/deployment-guide.html
read_time: 30 mins
platform_label: OpenShift
---

Deploy WSO2 Identity Server to an OpenShift cluster. OpenShift applies stricter security policies than standard Kubernetes, including restricted pod security contexts and the requirement to use Routes instead of Ingress resources.

> **Contributor:** Page description misrepresents the deployment approach. The description emphasizes OpenShift-specific considerations, including "restricted pod security contexts and the requirement to use Routes instead of Ingress resources," which suggests manual configuration of these resources, but the included content deploys via the Helm chart. Suggested revision:
>
> `-` Deploy WSO2 Identity Server to an OpenShift cluster. OpenShift applies stricter security policies than standard Kubernetes, including restricted pod security contexts and the requirement to use Routes instead of Ingress resources.
> `+` Deploy WSO2 Identity Server to an OpenShift cluster using Helm. OpenShift applies stricter security policies than standard Kubernetes. The Helm chart addresses these requirements, including support for restricted pod security contexts and OpenShift Routes.

!!! note "Before this step"
    Configuration steps (hostname, TLS, keystores) are complete. You have `oc` CLI access to the target project with permission to create DeploymentConfigs or Deployments, Services, ConfigMaps, and Routes.

{% include "../../../../../includes/deploy/deploy-is-on-openshift.md" %}

!!! tip "Verify"
    Run `oc get pods -n <project>` and confirm all WSO2 Identity Server pods reach `Running` status. Open `https://<hostname>/console` and sign in with admin credentials.

> **Contributor:** Use "log in" instead of "sign in" to align with the established repository terminology. Proposed fix:
>
> `-` Run `oc get pods -n <project>` and confirm all WSO2 Identity Server pods reach `Running` status. Open `https://<hostname>/console` and sign in with admin credentials.
> `+` Run `oc get pods -n <project>` and confirm all WSO2 Identity Server pods reach `Running` status. Open `https://<hostname>/console` and log in with admin credentials.
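For illustration, a minimal Route with re-encrypt termination might look like the sketch below. All names and the hostname are placeholders, and the exact Route settings produced by the Helm chart may differ.

```yaml
# Illustrative OpenShift Route: TLS is terminated at the router and
# re-encrypted to the pod's own HTTPS port. Names are placeholders.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: wso2is
spec:
  host: is.example.com
  to:
    kind: Service
    name: wso2is-service
  port:
    targetPort: 9443
  tls:
    termination: reencrypt
```

Re-encrypt termination keeps traffic encrypted end to end, which fits the earlier TLS step where the server keeps its own transport-level TLS configuration.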
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Introduction
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-containers/introduction.md" %}
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Next steps
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-containers/next-steps.md" %}
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Prerequisites
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-containers/prerequisites.md" %}
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Apply product, OS, and network-level hardening before directing production traffic to this deployment. On Kubernetes, also review pod security contexts, network policies, and RBAC rules as part of the hardening process.

!!! note "Before this step"
    WSO2 Identity Server is deployed and running (previous step complete). Hardening is the last configuration step before final verification.

{% include "../../deploy/security/security-guidelines/index.md" %}

> **Contributor (on lines +6 to +11):** Kubernetes hardening scope is stated but not actually provided. The text says readers should review pod security contexts, network policies, and RBAC, but the included page only provides product/OS/network guideline categories. Please either add Kubernetes-specific hardening content (or a dedicated include/link) or remove that claim. Proposed doc fix:
>
> `-` Apply product, OS, and network-level hardening before directing production traffic to this deployment. On Kubernetes, also review pod security contexts, network policies, and RBAC rules as part of the hardening process.
> `+` Apply product-, operating system-, and network-level hardening before directing production traffic to this deployment. For Kubernetes-specific controls (pod security contexts, network policies, and RBAC), complete your platform hardening checklist before continuing.

!!! tip "Verify"
    Restart all pods (for example, with `kubectl rollout restart deployment/<name>`) and confirm a clean startup before proceeding to the deployment checklist.
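As one hedged example of the pod-level controls this page mentions, a NetworkPolicy can restrict inbound traffic to the server pods to the ingress controller alone. The pod labels and the controller namespace below are assumptions; adapt them to your cluster.

```yaml
# Illustrative NetworkPolicy: only pods in the ingress-nginx namespace may
# reach the WSO2 IS pods on 9443. Labels and namespace are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wso2is-restrict-ingress
spec:
  podSelector:
    matchLabels:
      app: wso2is
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 9443
```

Note that NetworkPolicy is only enforced when the cluster's CNI plugin supports it; verify enforcement rather than assuming it.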
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Run the deployment checklist to confirm this container deployment is production-ready. Work through each item and resolve any gaps before directing real user traffic here.

!!! note "Before this step"
    The deployment is running and security hardening is applied (previous step complete).

{% include "../../deploy/deployment-checklist.md" %}

!!! tip "Verify"
    Complete a full end-to-end authentication flow: sign in to `https://<hostname>/console` with admin credentials, then test a user sign-in through a connected application to confirm the deployment is working correctly.

> **Contributor:** Use "log in" instead of "sign in" for consistency. The established terminology in this repository uses "log in" as the verb and "login" as the noun/adjective. Replace "sign in" with "log in" in both occurrences. Proposed fix:
>
> `-` Complete a full end-to-end authentication flow: sign in to `https://<hostname>/console` with admin credentials, then test a user sign-in through a connected application to confirm the deployment is working correctly.
> `+` Complete a full end-to-end authentication flow: log in to `https://<hostname>/console` with admin credentials, then test a user login through a connected application to confirm the deployment is working correctly.
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 5 mins
---

Set the production hostname on every node in every region. Each region typically uses a region-specific hostname (for example, `is-us.example.com`) alongside a global hostname that resolves to the active region.

!!! note "Before this step"
    DNS records for your regional and global hostnames are configured. TLS certificates covering those hostnames are ready.

{% include "../../../../../includes/deploy/change-the-hostname.md" %}

> **Contributor (on lines +6 to +11):** DR hostname strategy conflicts with the included procedure. This page describes region-specific hostnames per node, but the included hostname guide is written for a single shared hostname configuration. Please align these so the instructions and verify criteria match. Suggested alignment options:
>
> `-` Set the production hostname on every node in every region. Each region typically uses a region-specific hostname (for example, `is-us.example.com`) alongside a global hostname that resolves to the active region.
> `+` Set the production hostname on every node in every region. Use a hostname strategy that matches your DR routing design, then apply it consistently to each node.
>
> Or replace the include with a DR-specific include that explicitly documents per-region hostname values.

!!! tip "Verify"
    Run `grep 'hostname' <IS_HOME>/repository/conf/deployment.toml` on each node. The value should match the region-specific hostname for that node.
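A minimal sketch of the per-region configuration the Verify tip checks for, reusing the example hostnames from this page. The values are placeholders.

```toml
# Illustrative deployment.toml fragment for a node in the US region.
# A node in the EU region would use "is-eu.example.com" instead.
[server]
hostname = "is-us.example.com"
```

Every node in a region carries the same regional value; the global hostname lives only in DNS or the global load balancer, not in `deployment.toml`.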
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Enable Hazelcast clustering within each region. Nodes in the same region cluster together; cross-region coordination is handled at the database layer in the next step, not through Hazelcast.

> **Contributor (refactor suggestion, major):** Add explicit guidance for enforcing same-region clustering. The page states "Nodes in the same region cluster together" and the verification step expects "only that region's IP addresses are listed," but the included content does not explain how to restrict the Hazelcast member list to a single region. Without explicit guidance, users could inadvertently create cross-region Hazelcast clusters, violating the DR architecture. Consider drafting content that explains how to configure region-specific member lists, or opening an issue to enhance the Hazelcast documentation with multi-region deployment patterns.

!!! note "Before this step"
    All nodes within each region have identical configuration. TCP port 5701 is open between nodes within the same region.

{% include "../../deploy/configure-hazelcast.md" %}

!!! tip "Verify"
    Start all nodes in one region. In each node's `wso2carbon.log`, confirm a `Members [N] {` line where N equals the per-region node count and only that region's IP addresses are listed.
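The per-region membership described above can be sketched with well-known-address (WKA) membership in `deployment.toml`. This is illustrative; the IP addresses are placeholders, and the key point is that only same-region members appear in the list, which is what keeps the cluster from spanning regions.

```toml
# Illustrative clustering fragment for one node in the US region.
# List ONLY same-region members so Hazelcast never forms a
# cross-region cluster; addresses are placeholders.
[clustering]
membership_scheme = "wka"
local_member_host = "10.0.1.10"
local_member_port = "5701"
members = ["10.0.1.10:5701", "10.0.1.11:5701"]
```

Each node sets its own `local_member_host`; the `members` list is identical across all nodes in that region.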
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Point each regional cluster at an external RDBMS. You will configure cross-region database replication in the next step — this step establishes the connection configuration that replication will build on.

!!! note "Before this step"
    An external RDBMS is running in each region. You have database credentials and the JDBC driver JAR ready for each region's database server.

{% include "../../../../../includes/deploy/configure/databases/clustering.md" %}

!!! tip "Verify"
    Start one node per region temporarily. Confirm no JDBC connection errors appear in `<IS_HOME>/repository/logs/wso2carbon.log`. Stop the servers before continuing.
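As a hedged sketch, a region's nodes might point at that region's database server like this in `deployment.toml`. MySQL is assumed here purely for illustration; the hostname, database name, and credentials are placeholders, and the password would normally be secured rather than stored in plain text.

```toml
# Illustrative identity-database fragment for US-region nodes; EU-region
# nodes would point at that region's database server instead.
[database.identity_db]
type = "mysql"
hostname = "db-us.example.com"
name = "WSO2_IDENTITY_DB"
username = "wso2is"
password = "REPLACE_ME"   # placeholder: use a secret store in production
port = "3306"
```

Keeping the datasource region-local means replication (configured in the next step) is the only cross-region database traffic.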
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Configure the mechanism that detects a primary region failure and redirects traffic to the standby region. Failover can be automatic (via a global load balancer health check) or manual (via a DNS update) — choose the approach that matches your target recovery time and operational capabilities.

!!! note "Before this step"
    Database replication is running and verified (previous step complete). Regional load balancers are in place in all regions.

{% include "../../../../../includes/complete-guides/deploy-dr/configure-failover.md" %}

!!! tip "Verify"
    Perform a planned failover test: stop the primary region's nodes and confirm that traffic routes to the standby region within your target time. Verify that authentication flows complete against the standby. Restore the primary and confirm traffic returns to it.
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Create separate keystores for token signing, data encryption, and TLS on every node. In a multi-region deployment, use the same signing and encryption keystores across all regions so tokens issued in one region are valid in another.

> **Contributor (refactor suggestion, major):** Add explicit guidance for cross-region keystore distribution. The page states "use the same signing and encryption keystores across all regions so tokens issued in one region are valid in another," but the included content does not describe how to generate one set of keystores and distribute it securely to every node in every region. Consider drafting content that explains the cross-region keystore distribution workflow, or opening an issue to enhance the shared keystores documentation with multi-region guidance.

!!! note "Before this step"
    TLS is configured (previous step complete). The JDK `keytool` utility is available on each node.

{% include "../../../../../includes/deploy/security/keystores/index.md" %}

!!! tip "Verify"
    Start one node per region temporarily. Confirm no `KeyStore` or `KeyManager` errors appear in `wso2carbon.log`. Stop the servers before continuing.
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Deploy a load balancer in each region to front that region's cluster. You also need a global load balancer or DNS-based routing policy to direct traffic to the active region — this step covers the per-region configuration.

!!! note "Before this step"
    All nodes within each region are running and show the correct Hazelcast member count. Port 9443 is reachable on each node from its regional load balancer.

{% include "../../deploy/front-with-the-nginx-load-balancer.md" %}

!!! tip "Verify"
    Run `curl -k -o /dev/null -s -w "%{http_code}" https://<regional-hostname>/console` against each regional load balancer. Each should return `200`.

> **Contributor (on lines +11 to +14):** Clarify DR-specific adaptations after the shared NGINX include. The included guide is single-region oriented. In this DR page, readers still need explicit instructions to (1) repeat configuration per region and (2) use only same-region backend nodes in each regional load balancer. Without that, the step can be misapplied in multi-region DR. Suggested addition after the include:
>
> After you complete the shared NGINX setup, apply these DR-specific rules:
>
> 1. Repeat the load balancer configuration in **each** region.
> 2. In each regional load balancer, configure upstream servers from that same region only.
> 3. Use a region-specific public hostname per regional load balancer.
> 4. Keep global traffic steering (DNS or global load balancer) separate from regional upstream definitions.
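The per-region configuration this page describes might look like the following NGINX sketch, with only same-region backends in the upstream block. Addresses, hostnames, and certificate paths are placeholders, not values from the shared NGINX guide.

```nginx
# Illustrative per-region NGINX config for the US region. Only US-region
# nodes appear in the upstream, so this load balancer never forwards
# across regions.
upstream wso2is_us {
    ip_hash;                 # session affinity, if sticky sessions are needed
    server 10.0.1.10:9443;
    server 10.0.1.11:9443;
}

server {
    listen 443 ssl;
    server_name is-us.example.com;

    ssl_certificate     /etc/nginx/ssl/is-us.crt;
    ssl_certificate_key /etc/nginx/ssl/is-us.key;

    location / {
        proxy_pass https://wso2is_us;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The EU region gets an equivalent block with its own hostname, certificate, and upstream addresses; global traffic steering stays in DNS or the global load balancer, not here.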
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 20 mins
---

Configure your RDBMS to replicate identity data from the primary region to standby regions. WSO2 Identity Server stores all persistent state in the database, so replication at the database layer is the foundation of any disaster recovery strategy.

!!! note "Before this step"
    Database connections are configured and verified on all regional nodes (previous step complete). You have administrative access to the database servers in all regions.

{% include "../../../../../includes/complete-guides/deploy-dr/configure-replication.md" %}

!!! tip "Verify"
    Write a test record to the primary database and confirm it appears on the standby within your target replication lag. Most RDBMS platforms provide a replication status view or command — confirm replication lag is within acceptable limits before continuing.
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Enable HTTPS on the WSO2 Identity Server transport layer in each region. Apply the same TLS configuration on all nodes across all regions.

!!! note "Before this step"
    TLS certificates for regional and global hostnames are available. The hostname configuration step is complete on all nodes.

{% include "../../deploy/security/configure-transport-level-security.md" %}

!!! tip "Verify"
    On one node per region, run `openssl s_client -connect <regional-hostname>:9443 -brief 2>/dev/null | head -5`. The certificate CN or SAN should match the regional hostname.
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Install WSO2 Identity Server on each node in each region. Every node across all regions must run the same version.

!!! note "Before this step"
    Java 11, 17, or 21 is installed and `JAVA_HOME` is set on each node in each region. Run `java -version` to confirm.

{% include "../../deploy/get-started/install.md" %}

!!! tip "Verify"
    On each node in each region, run `<IS_HOME>/bin/wso2server.sh --version`. The same version string must display on all nodes across all regions.
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Introduction
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-dr/introduction.md" %}
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Next steps
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-dr/next-steps.md" %}
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Prerequisites
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-dr/prerequisites.md" %}
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 15 mins
---

Apply product-, OS-, and network-level hardening on every node in every region before directing production traffic to this deployment.

!!! note "Before this step"
    Regional failover is configured and tested. Hardening is the last configuration step before final verification.

{% include "../../deploy/security/security-guidelines/index.md" %}

!!! tip "Verify"
    Restart all nodes across all regions and confirm a clean startup in `wso2carbon.log`. Then proceed to the deployment checklist.
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Run the deployment checklist across all regions. Also confirm that database replication is active and that a failover test routes traffic to the secondary region as expected.

!!! note "Before this step"
    All previous steps are complete across all regions. All regional clusters are running with the correct Hazelcast member counts.

{% include "../../deploy/deployment-checklist.md" %}

!!! tip "Verify"
    Simulate a regional failure by stopping all nodes in the primary region. Confirm DNS or the global load balancer routes traffic to the secondary region and users can authenticate within your target recovery time.
@@ -0,0 +1,14 @@
---
template: templates/deployment-guide.html
read_time: 10 mins
---

Install WSO2 Identity Server on your evaluation machine. The embedded H2 database ships pre-configured — no external database setup is required for this path.

!!! note "Before this step"
    Confirm Java 11, 17, or 21 is installed and `JAVA_HOME` is set. Run `java -version` to verify.

{% include "../../deploy/get-started/install.md" %}

!!! tip "Verify"
    Run `<IS_HOME>/bin/wso2server.sh --version` (Linux/macOS) or `<IS_HOME>\bin\wso2server.bat --version` (Windows). The version string should display without errors.
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Introduction
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-eval/introduction.md" %}
@@ -0,0 +1,7 @@
---
template: templates/deployment-guide.html
heading: Next steps
read_time: 2 mins
---

{% include "../../../../../includes/complete-guides/deploy-eval/next-steps.md" %}
> **Contributor:** Add container-specific execution steps around the shared hostname include. The shared include is generic and does not tell users how to apply hostname changes through Kubernetes/OpenShift configuration objects, which leaves the container path incomplete at execution time. Suggested container-specific addendum after `{% include "../../../../../includes/deploy/change-the-hostname.md" %}`:
>
> For Kubernetes/OpenShift deployments, apply the hostname through your deployment configuration source:
>
> 1. Update the `deployment.toml` content in your ConfigMap or mounted configuration.
> 2. Ensure the updated configuration is mounted by all Identity Server pods.
> 3. Roll out or restart the workload and confirm all pods use the same `hostname` value.