diff --git a/docs/en/preview/kubeblocks-for-clickhouse/03-architecture.mdx b/docs/en/preview/kubeblocks-for-clickhouse/03-architecture.mdx index 69ecf65..f1ec718 100644 --- a/docs/en/preview/kubeblocks-for-clickhouse/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-clickhouse/03-architecture.mdx @@ -101,6 +101,20 @@ For direct pod addressing (replication traffic, ClickHouse Keeper communication) {pod-name}.{cluster}-{shardComponentName}-headless.{namespace}.svc.cluster.local ``` +## Automatic Failover + +ClickHouse does not use a primary/replica role distinction at the application level — all replicas within a shard are equivalent and can serve queries. Recovery after a pod failure does not involve a role switch: + +1. **A replica pod crashes** — the failed pod stops serving queries for its shard +2. **CH Keeper detects the lost connection** (`topology: cluster` only) — remaining replicas continue serving if at least one replica is healthy; Keeper tracks which data parts each replica holds +3. **KubeBlocks restarts the failed pod** — the InstanceSet controller schedules a pod restart +4. **Recovered replica reconnects to CH Keeper** — the pod re-registers and fetches data parts it missed during downtime from peer replicas automatically +5. **ClusterIP service is unchanged** — all replicas are equivalent; no endpoint update is needed; the recovered pod resumes receiving traffic once it passes its readiness check + +:::note +Steps 2 and 4 require `topology: cluster` (CH Keeper deployed). In `topology: standalone`, no Keeper is present — inter-replica part synchronization is not available unless an external ZooKeeper or Keeper is configured. +::: + ## System Accounts KubeBlocks automatically manages the following ClickHouse system account. Passwords are auto-generated and stored in a Secret named `{cluster}-{component}-account-{name}`. 
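The Secret-name pattern above can be sketched as a quick credential lookup. The cluster name `my-clickhouse`, component name `clickhouse`, namespace `demo`, and account name `admin` below are illustrative assumptions, not values fixed by the addon:

```shell
# Assemble the Secret name from the documented pattern:
#   {cluster}-{component}-account-{name}
# All values below are example assumptions for a hypothetical cluster.
CLUSTER=my-clickhouse
COMPONENT=clickhouse
ACCOUNT=admin
SECRET="${CLUSTER}-${COMPONENT}-account-${ACCOUNT}"
echo "$SECRET"   # my-clickhouse-clickhouse-account-admin

# Against a live cluster, the auto-generated password could then be read with:
#   kubectl get secret "$SECRET" -n demo -o jsonpath='{.data.password}' | base64 -d
```

The `kubectl` invocation is commented out because it requires a running cluster; the string assembly alone shows how the Secret name is derived from the Cluster and component names.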
diff --git a/docs/en/preview/kubeblocks-for-elasticsearch/03-architecture.mdx b/docs/en/preview/kubeblocks-for-elasticsearch/03-architecture.mdx index b7163bd..eebfd74 100644 --- a/docs/en/preview/kubeblocks-for-elasticsearch/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-elasticsearch/03-architecture.mdx @@ -41,18 +41,32 @@ Every Elasticsearch pod runs three main containers (plus three init containers o Each pod mounts its own **PVC** for the Elasticsearch data directory (`/usr/share/elasticsearch/data`), providing independent persistent storage per node. -## Node Roles +## Node roles (Elasticsearch) -Elasticsearch supports multiple node roles, and KubeBlocks maps each role to a dedicated Component: +Elasticsearch nodes are configured with **roles** (master-eligible, data, ingest, coordinating, etc.). The sections below describe what each role means for **capacity and HA**. -| Node Role | Responsibility | +| Node role | Responsibility | |-----------|----------------| | **Master-eligible** | Participates in leader election; manages cluster state, index mappings, and shard allocation | | **Data** | Stores shard data; handles indexing and search requests for its assigned shards | | **Ingest** | Pre-processes documents before indexing via ingest pipelines | | **Coordinating** (optional) | Routes client requests to the appropriate data nodes and aggregates results | -In smaller deployments, a single node type can hold all roles. For production, dedicated master, data, and ingest components improve stability and resource isolation. +In smaller deployments, one process can hold several roles. In production, splitting roles across nodes improves stability. + +## Topologies and component names (ClusterDefinition) + +In the **kubeblocks-addons** Elasticsearch chart, **`spec.topology`** selects a layout. 
KubeBlocks creates **one Component per entry** in that topology; component **names** are short labels (`master`, `dit`, `mdit`, …), while the **Elasticsearch role set** is defined inside the image/config for each layout. + +| Topology (`spec.topology`) | Components created | Notes | +|---------------------------|-------------------|--------| +| `single-node` | `mdit` | Single-node layout | +| `multi-node` (chart default) | `master`, `dit` | Split layout: dedicated `master` Component plus `dit` Component for the remaining node group | +| `m-dit` | `master`, `dit` | Same component names as `multi-node`; chart distinguishes layouts for ordering/defaults | +| `mdit` | `mdit` | Combined multi-role naming under one component | +| `m-d-i-t` | `m`, `d`, `i`, `t` | Dedicated components per role family (master / data / ingest / coordinating) | + +Service names look like `{cluster}-{component}-http` — the **`{component}`** segment is the KubeBlocks Component name above (for example `mdit`, `master`, `dit`), not the long English phrase “master-eligible”. Use your Cluster’s `status` or `kubectl get component -n ` to see the exact names for a running cluster. ## High Availability via Cluster Coordination diff --git a/docs/en/preview/kubeblocks-for-kafka/03-architecture.mdx b/docs/en/preview/kubeblocks-for-kafka/03-architecture.mdx index 20358a2..1649cf9 100644 --- a/docs/en/preview/kubeblocks-for-kafka/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-kafka/03-architecture.mdx @@ -21,6 +21,10 @@ KubeBlocks supports three Kafka deployment topologies: The `*_monitor` variants add a standalone `kafka-exporter` Component that scrapes Kafka-specific metrics (consumer group lag, partition offsets, topic throughput) and exposes them on port 9308 for Prometheus. +:::note +**Configuration templates and `configs`:** the Kafka `ComponentDefinition` treats main config slots (for example **`kafka-configuration-tpl`**) as **externally managed** in current addon charts. 
When you create a `Cluster`, you must **wire those slots** by setting **`configs`** on the matching component (or sharding template) to ConfigMaps whose keys match the template file names — typically the ConfigMaps shipped with the addon in **`kb-system`**, or your own copies in the application namespace. If provisioning fails with a message about missing templates, compare your manifest to the **Kafka examples** in [kubeblocks-addons](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/kafka) for the same chart version. +::: + --- ## Combined Architecture (combined / combined_monitor) diff --git a/docs/en/preview/kubeblocks-for-milvus/03-architecture.mdx b/docs/en/preview/kubeblocks-for-milvus/03-architecture.mdx index e0c9277..75cbca1 100644 --- a/docs/en/preview/kubeblocks-for-milvus/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-milvus/03-architecture.mdx @@ -114,6 +114,18 @@ Only the `proxy` container exposes port 19530 (gRPC) for client traffic. All com | **Segment redundancy** | Sealed segments persist in MinIO; a restarted QueryNode reloads them from object storage without data loss | | **MixCoord recovery** | MixCoord is stateless against etcd — it reloads all coordinator state from etcd on restart | +### MixCoord Recovery + +MixCoord is the only single-replica component in the Distributed topology. When it crashes, KubeBlocks automatically recovers it without data loss: + +1. **MixCoord pod crashes** — coordinator functions (DDL, segment lifecycle, query assignments, index scheduling) are temporarily unavailable +2. **KubeBlocks InstanceSet detects the pod failure** and schedules a restart +3. **New MixCoord pod starts** and reloads all coordinator state from etcd — no data is lost because MixCoord is fully stateless against etcd +4. **Worker nodes reconnect** — QueryNode, DataNode, and IndexNode pods reconnect to the restored MixCoord and resume their assigned work +5. 
**Cluster resumes serving requests** through the Proxy + +Worker nodes (QueryNode, DataNode, IndexNode) run multiple replicas and tolerate individual pod failures without coordinator involvement — KubeBlocks restarts the failed pod, and MixCoord reassigns its work to healthy replicas. + ### Traffic Routing | Service | Type | Port | Selector | @@ -121,3 +133,13 @@ Only the `proxy` container exposes port 19530 (gRPC) for client traffic. All com | `{cluster}-proxy` | ClusterIP | 19530 (gRPC), 9091 (metrics/health) | proxy pods | Client applications (Milvus SDK) connect to the proxy on port 19530 (gRPC). Port 9091 is the metrics/health endpoint — it is not a client-facing REST API. The proxy is the single entry point — it handles authentication, routing, and result aggregation across worker components. + +## System Accounts + +In the Milvus add-on, only the **in-cluster MinIO object-storage** Component (ComponentDefinition `milvus-minio`) declares KubeBlocks **`systemAccounts`**. Other Milvus stack components in this add-on (for example **etcd**, **milvus**, **proxy**, **mixcoord**, DataNode, QueryNode, IndexNode) do **not** define `systemAccounts` in their ComponentDefinitions. If you use an external object store instead of the bundled MinIO, this managed account does not apply to that store. + +For the bundled MinIO component (typically named **`minio`** in `componentSpecs`, for example in standalone topology), KubeBlocks creates one account. Passwords are auto-generated unless overridden at the Cluster level. Credentials are stored in a Secret named **`{cluster}-minio-account-admin`** when the component name is `minio` (substitute your Cluster `metadata.name` and the MinIO component’s `name`). 
+ +| Account | Component (typical name) | Role | Purpose | +|---------|--------------------------|------|---------| +| `admin` | `minio` | Object store admin | MinIO root credentials; injected into MinIO pods as `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` for S3-compatible access to buckets used by Milvus | diff --git a/docs/en/preview/kubeblocks-for-mongodb/03-architecture.mdx b/docs/en/preview/kubeblocks-for-mongodb/03-architecture.mdx index 76509c2..f5be7e7 100644 --- a/docs/en/preview/kubeblocks-for-mongodb/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-mongodb/03-architecture.mdx @@ -62,7 +62,17 @@ MongoDB replica sets use **oplog-based replication** and a **majority-vote (Raft | **Election** | When the primary fails, secondaries vote; the candidate with the most up-to-date oplog and a majority of votes wins | | **Write concern** | `w:majority` ensures a write is durable on a quorum before acknowledging | -A 3-member replica set tolerates **1 failure**. Failover typically completes within **10–30 seconds**. +A 3-member replica set tolerates **1 failure**. + +### Automatic Failover + +1. **Primary pod crashes or becomes unreachable** — secondaries stop receiving heartbeat pings +2. **Election timeout** — after approximately 10 seconds (`electionTimeoutMillis`), one secondary calls for an election +3. **Majority vote** — the candidate with the most up-to-date oplog and a majority of votes wins and becomes the new primary +4. **KubeBlocks roleProbe detects the change** — `syncerctl getrole` returns `primary` for the new pod → `kubeblocks.io/role=primary` label is applied +5. **Service endpoints switch** — the `{cluster}-mongodb-mongodb` ClusterIP service automatically routes writes to the new primary + +Failover typically completes within **10–30 seconds**. 
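The endpoint switch in steps 4–5 can be observed directly from the role labels and the read-write service name. The cluster name `my-mongo` and namespace `demo` below are assumptions for illustration; the label selector is the standard `app.kubernetes.io/instance` convention, which KubeBlocks is expected to apply to managed pods:

```shell
# Read-write service DNS name that follows the new primary after failover
# (service name pattern from this page: {cluster}-mongodb-mongodb).
# Cluster and namespace are example assumptions.
CLUSTER=my-mongo
NAMESPACE=demo
RW_SERVICE="${CLUSTER}-mongodb-mongodb.${NAMESPACE}.svc.cluster.local"
echo "$RW_SERVICE"

# Against a live cluster, watch the kubeblocks.io/role labels flip during a failover:
#   kubectl get pods -n "$NAMESPACE" -l app.kubernetes.io/instance="$CLUSTER" \
#     -L kubeblocks.io/role -w
```

Because the ClusterIP service selects on the role label, clients keep a single stable DNS name across failovers; only the Endpoints behind it change.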
### Traffic Routing diff --git a/docs/en/preview/kubeblocks-for-mysql/03-architecture.mdx b/docs/en/preview/kubeblocks-for-mysql/03-architecture.mdx index b971bfa..46c51d5 100644 --- a/docs/en/preview/kubeblocks-for-mysql/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-mysql/03-architecture.mdx @@ -61,6 +61,16 @@ Each pod also runs multiple init containers on startup: `init-syncer` (copies sy | **Failover trigger** | syncer roleProbe fails repeatedly → KubeBlocks selects replica with most advanced binlog position | | **Promotion** | KubeBlocks calls the switchover API to promote the chosen replica; remaining replicas repoint to new primary | +### Automatic Failover + +1. **Primary pod crashes** — replicas stop receiving binlog events +2. **syncer roleProbe fails** — `syncerctl getrole` returns an error repeatedly; detection takes approximately 30 seconds +3. **KubeBlocks marks the primary unavailable** and selects the replica with the most advanced binlog position as the promotion candidate +4. **Chosen replica is promoted** — KubeBlocks calls the switchover lifecycle action; the replica stops replicating and takes over as primary +5. **Remaining replicas repoint** to the new primary via `CHANGE REPLICATION SOURCE TO` +6. **Pod label updated** — `kubeblocks.io/role=primary` applied to the new primary pod +7. **Service endpoints switch** — the `{cluster}-mysql` ClusterIP service automatically routes writes to the new primary + Failover typically completes within **30–60 seconds**. ### Traffic Routing @@ -114,7 +124,15 @@ The roleProbe runs `/tools/syncerctl getrole` inside the `mysql` container. Each | **syncer role update** | syncer roleProbe detects the new primary role → updates `kubeblocks.io/role` label → ClusterIP service endpoints switch | | **Quorum tolerance** | 3-member group tolerates 1 failure; 5-member tolerates 2 | -Failover typically completes within **5–15 seconds**. 
Group-internal primary election is near-instant after expulsion; the subsequent `kubeblocks.io/role` label update and Service endpoint switch still depend on the syncer roleProbe cycle. +### Automatic Failover + +1. **Primary pod becomes unreachable** — group communication times out for the failed member +2. **GCS expulsion** — the remaining members detect the failure via the Group Communication System (GCS) and expel the unreachable member +3. **Group elects a new PRIMARY** — the remaining certified secondaries autonomously elect a new primary; no external coordinator is needed +4. **syncer roleProbe detects the new PRIMARY** — `syncerctl getrole` returns `primary` for the elected pod → `kubeblocks.io/role=primary` label updated +5. **Service endpoints switch** — the `{cluster}-mysql` ClusterIP service automatically routes writes to the new primary + +Failover typically completes within **5–15 seconds**. Group-internal primary election is near-instant after expulsion; the subsequent label update and service endpoint switch depend on the syncer roleProbe cycle. ### Traffic Routing diff --git a/docs/en/preview/kubeblocks-for-redis/03-architecture.mdx b/docs/en/preview/kubeblocks-for-redis/03-architecture.mdx index eadc0e6..24c6e4f 100644 --- a/docs/en/preview/kubeblocks-for-redis/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-redis/03-architecture.mdx @@ -8,19 +8,41 @@ sidebar_label: Architecture import RedisArchitectureDiagram from '@/components/RedisArchitectureDiagram'; import RedisClusterArchitectureDiagram from '@/components/RedisClusterArchitectureDiagram'; +import RedisStandaloneArchitectureDiagram from '@/components/RedisStandaloneArchitectureDiagram'; # Redis Architecture in KubeBlocks -KubeBlocks supports two distinct Redis architectures that serve different use cases: +The Redis **ClusterDefinition** exposes three patterns. 
They differ by topology name and whether **Redis Sentinel** is deployed: -| Architecture | Topology name | Use Case | -|---|---|---| -| **Sentinel** | `replication` (also: `standalone`, `replication-twemproxy`) | Single-shard HA; simple client compatibility; datasets that fit on one node | -| **Redis Cluster** | `cluster` | Horizontal write/read scaling; datasets too large for a single node; high-throughput workloads | +| Pattern | Topology name | Components | HA / failover | +|---|---|---|---| +| **Standalone** | `standalone` | `redis` only | No Sentinel; **no** automatic primary failover | +| **Sentinel (primary + replicas)** | `replication`, `replication-twemproxy` | `redis` + `redis-sentinel` | Sentinel quorum monitors the primary and drives failover | +| **Redis Cluster (sharded)** | `cluster` | Sharding (`shard`) | Gossip on the cluster bus; **no** Sentinel processes | --- -## Sentinel Architecture +## Standalone architecture + +Use **`topology: standalone`** when you want a **single Redis Component** and do **not** provision Sentinel. + + + +``` +Cluster → Component (redis) → InstanceSet → Pod × N +``` + +- There is **no** `redis-sentinel` Component and **no** Sentinel quorum — the monitoring and failover flow in the [Sentinel architecture](#sentinel-architecture) section does **not** apply. +- Pod layout (Redis server + metrics sidecar, PVC per data pod) matches the **Redis data pods** description under Sentinel below. +- For HA with automatic failover on a single shard, use **`replication`** (or **`replication-twemproxy`** with Twemproxy), not `standalone`. + +--- + +## Sentinel architecture + +:::note +Everything in this section applies to **`replication`** and **`replication-twemproxy`** only. It does **not** apply to **`standalone`** (no Sentinel) or **`cluster`** (Redis Cluster / gossip, no Sentinel). +::: Redis Sentinel uses a dedicated set of Sentinel processes to monitor the Redis primary, detect failures, and coordinate automatic failover. 
All data lives on a single primary; replicas serve as hot standbys and optional read targets. @@ -188,7 +210,7 @@ For external access, per-pod NodePort or LoadBalancer services (`redis-advertise ## System Accounts -KubeBlocks manages the following Redis account for both architectures. The password is auto-generated and stored in a Secret named `{cluster}-{component}-account-default`. +KubeBlocks manages the following Redis account for all topology patterns on this page (`standalone`, Sentinel, and Redis Cluster). The password is auto-generated and stored in a Secret named `{cluster}-{component}-account-default`. | Account | Role | Purpose | |---------|------|---------| diff --git a/docs/en/preview/kubeblocks-for-rocketmq/03-architecture.mdx b/docs/en/preview/kubeblocks-for-rocketmq/03-architecture.mdx index 954f5af..544bb65 100644 --- a/docs/en/preview/kubeblocks-for-rocketmq/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-rocketmq/03-architecture.mdx @@ -133,3 +133,16 @@ When a RocketMQ component fails: 3. **KubeBlocks pod restart** — KubeBlocks detects the failed pod and restarts it; the recovered pod resumes as master (brokerId=0) after the process starts 4. **Master re-registers** — the recovered master re-registers with all NameServer instances; topic routing is refreshed 5. **Slaves reconnect** — slaves re-establish the HA replication connection to the restored master on port 10912 and resync any missed log entries + +## System Accounts + +The RocketMQ add-on declares KubeBlocks **`systemAccounts`** on the **broker** and **Dashboard** ComponentDefinitions. NameServer and other components do not use this mechanism in the same way. Passwords are generated according to each account’s policy unless overridden on the Cluster. 
+ +Secrets follow **`{cluster}-{component}-account-{accountName}`** — where `{component}` is each component’s **`name`** in the Cluster spec (for example **`dashboard`** for the Dashboard, and each **broker shard** component name such as **`broker-0`**, **`broker-1`**, …). + +| Account | Component (typical) | Role | Purpose | +|---------|---------------------|------|---------| +| `rocketmq-admin` | Per broker shard (`broker-*`) | Broker admin / ACL user | Injected into broker pods as `ROCKETMQ_USER` and `ROCKETMQ_PASSWORD` for broker authentication configuration | +| `console-admin` | `dashboard` | Dashboard login | Injected into Dashboard pods as `CONSOLE_USER` and `CONSOLE_PASSWORD` for the RocketMQ Dashboard web UI | + +The Dashboard also needs the broker admin identity to talk to the cluster: it reads **`rocketmq-admin`** credentials from the broker ComponentDefinition via **`credentialVarRef`** (same username/password as the broker shard’s `rocketmq-admin` account). diff --git a/docs/en/preview/kubeblocks-for-zookeeper/03-architecture.mdx b/docs/en/preview/kubeblocks-for-zookeeper/03-architecture.mdx index 3184f48..f9a48cc 100644 --- a/docs/en/preview/kubeblocks-for-zookeeper/03-architecture.mdx +++ b/docs/en/preview/kubeblocks-for-zookeeper/03-architecture.mdx @@ -105,3 +105,11 @@ When a ZooKeeper ensemble member fails: 2. **Leader election** (if the lost member was the leader) — surviving members elect a new leader in milliseconds to seconds 3. **Write continuity** — as long as a quorum remains available, all write and read operations continue normally 4. **Pod recovery** — when the failed pod restarts, it reads its `myid` from the PVC, contacts the leader, and syncs any missed transactions before rejoining the ensemble + +## System Accounts + +KubeBlocks manages the following ZooKeeper system account. 
The password is auto-generated and stored in a Secret named `{cluster}-{component}-account-admin` (replace `{cluster}` and `{component}` with your Cluster metadata.name and the ZooKeeper component name, typically `zookeeper`). + +| Account | Role | Purpose | +|---------|------|---------| +| `admin` | Admin | Administrator user when ZooKeeper authentication is enabled (`ZOO_ENABLE_AUTH=yes`); credentials are injected into pods as `ZK_ADMIN_USER` and `ZK_ADMIN_PASSWORD` for authenticated client and administrative access | diff --git a/docs/en/release-1_0_2/kubeblocks-for-elasticsearch/03-architecture.mdx b/docs/en/release-1_0_2/kubeblocks-for-elasticsearch/03-architecture.mdx index b7163bd..eebfd74 100644 --- a/docs/en/release-1_0_2/kubeblocks-for-elasticsearch/03-architecture.mdx +++ b/docs/en/release-1_0_2/kubeblocks-for-elasticsearch/03-architecture.mdx @@ -41,18 +41,32 @@ Every Elasticsearch pod runs three main containers (plus three init containers o Each pod mounts its own **PVC** for the Elasticsearch data directory (`/usr/share/elasticsearch/data`), providing independent persistent storage per node. -## Node Roles +## Node roles (Elasticsearch) -Elasticsearch supports multiple node roles, and KubeBlocks maps each role to a dedicated Component: +Elasticsearch nodes are configured with **roles** (master-eligible, data, ingest, coordinating, etc.). The sections below describe what each role means for **capacity and HA**. 
-| Node Role | Responsibility | +| Node role | Responsibility | |-----------|----------------| | **Master-eligible** | Participates in leader election; manages cluster state, index mappings, and shard allocation | | **Data** | Stores shard data; handles indexing and search requests for its assigned shards | | **Ingest** | Pre-processes documents before indexing via ingest pipelines | | **Coordinating** (optional) | Routes client requests to the appropriate data nodes and aggregates results | -In smaller deployments, a single node type can hold all roles. For production, dedicated master, data, and ingest components improve stability and resource isolation. +In smaller deployments, one process can hold several roles. In production, splitting roles across nodes improves stability. + +## Topologies and component names (ClusterDefinition) + +In the **kubeblocks-addons** Elasticsearch chart, **`spec.topology`** selects a layout. KubeBlocks creates **one Component per entry** in that topology; component **names** are short labels (`master`, `dit`, `mdit`, …), while the **Elasticsearch role set** is defined inside the image/config for each layout. + +| Topology (`spec.topology`) | Components created | Notes | +|---------------------------|-------------------|--------| +| `single-node` | `mdit` | Single-node layout | +| `multi-node` (chart default) | `master`, `dit` | Split layout: dedicated `master` Component plus `dit` Component for the remaining node group | +| `m-dit` | `master`, `dit` | Same component names as `multi-node`; chart distinguishes layouts for ordering/defaults | +| `mdit` | `mdit` | Combined multi-role naming under one component | +| `m-d-i-t` | `m`, `d`, `i`, `t` | Dedicated components per role family (master / data / ingest / coordinating) | + +Service names look like `{cluster}-{component}-http` — the **`{component}`** segment is the KubeBlocks Component name above (for example `mdit`, `master`, `dit`), not the long English phrase “master-eligible”. 
Use your Cluster’s `status` or `kubectl get component -n ` to see the exact names for a running cluster. ## High Availability via Cluster Coordination diff --git a/docs/en/release-1_0_2/kubeblocks-for-kafka/03-architecture.mdx b/docs/en/release-1_0_2/kubeblocks-for-kafka/03-architecture.mdx index 20358a2..1649cf9 100644 --- a/docs/en/release-1_0_2/kubeblocks-for-kafka/03-architecture.mdx +++ b/docs/en/release-1_0_2/kubeblocks-for-kafka/03-architecture.mdx @@ -21,6 +21,10 @@ KubeBlocks supports three Kafka deployment topologies: The `*_monitor` variants add a standalone `kafka-exporter` Component that scrapes Kafka-specific metrics (consumer group lag, partition offsets, topic throughput) and exposes them on port 9308 for Prometheus. +:::note +**Configuration templates and `configs`:** the Kafka `ComponentDefinition` treats main config slots (for example **`kafka-configuration-tpl`**) as **externally managed** in current addon charts. When you create a `Cluster`, you must **wire those slots** by setting **`configs`** on the matching component (or sharding template) to ConfigMaps whose keys match the template file names — typically the ConfigMaps shipped with the addon in **`kb-system`**, or your own copies in the application namespace. If provisioning fails with a message about missing templates, compare your manifest to the **Kafka examples** in [kubeblocks-addons](https://github.com/apecloud/kubeblocks-addons/tree/main/examples/kafka) for the same chart version. +::: + --- ## Combined Architecture (combined / combined_monitor) diff --git a/docs/en/release-1_0_2/kubeblocks-for-milvus/03-architecture.mdx b/docs/en/release-1_0_2/kubeblocks-for-milvus/03-architecture.mdx index e0c9277..da80016 100644 --- a/docs/en/release-1_0_2/kubeblocks-for-milvus/03-architecture.mdx +++ b/docs/en/release-1_0_2/kubeblocks-for-milvus/03-architecture.mdx @@ -121,3 +121,13 @@ Only the `proxy` container exposes port 19530 (gRPC) for client traffic. 
All com | `{cluster}-proxy` | ClusterIP | 19530 (gRPC), 9091 (metrics/health) | proxy pods | Client applications (Milvus SDK) connect to the proxy on port 19530 (gRPC). Port 9091 is the metrics/health endpoint — it is not a client-facing REST API. The proxy is the single entry point — it handles authentication, routing, and result aggregation across worker components. + +## System Accounts + +In the Milvus add-on, only the **in-cluster MinIO object-storage** Component (ComponentDefinition `milvus-minio`) declares KubeBlocks **`systemAccounts`**. Other Milvus stack components in this add-on (for example **etcd**, **milvus**, **proxy**, **mixcoord**, DataNode, QueryNode, IndexNode) do **not** define `systemAccounts` in their ComponentDefinitions. If you use an external object store instead of the bundled MinIO, this managed account does not apply to that store. + +For the bundled MinIO component (typically named **`minio`** in `componentSpecs`, for example in standalone topology), KubeBlocks creates one account. Passwords are auto-generated unless overridden at the Cluster level. Credentials are stored in a Secret named **`{cluster}-minio-account-admin`** when the component name is `minio` (substitute your Cluster `metadata.name` and the MinIO component’s `name`). 
+ +| Account | Component (typical name) | Role | Purpose | +|---------|--------------------------|------|---------| +| `admin` | `minio` | Object store admin | MinIO root credentials; injected into MinIO pods as `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY` for S3-compatible access to buckets used by Milvus | diff --git a/docs/en/release-1_0_2/kubeblocks-for-redis/03-architecture.mdx b/docs/en/release-1_0_2/kubeblocks-for-redis/03-architecture.mdx index eadc0e6..d6f2a5f 100644 --- a/docs/en/release-1_0_2/kubeblocks-for-redis/03-architecture.mdx +++ b/docs/en/release-1_0_2/kubeblocks-for-redis/03-architecture.mdx @@ -11,16 +11,35 @@ import RedisClusterArchitectureDiagram from '@/components/RedisClusterArchitectu # Redis Architecture in KubeBlocks -KubeBlocks supports two distinct Redis architectures that serve different use cases: +The Redis **ClusterDefinition** exposes three patterns. They differ by topology name and whether **Redis Sentinel** is deployed: -| Architecture | Topology name | Use Case | -|---|---|---| -| **Sentinel** | `replication` (also: `standalone`, `replication-twemproxy`) | Single-shard HA; simple client compatibility; datasets that fit on one node | -| **Redis Cluster** | `cluster` | Horizontal write/read scaling; datasets too large for a single node; high-throughput workloads | +| Pattern | Topology name | Components | HA / failover | +|---|---|---|---| +| **Standalone** | `standalone` | `redis` only | No Sentinel; **no** automatic primary failover | +| **Sentinel (primary + replicas)** | `replication`, `replication-twemproxy` | `redis` + `redis-sentinel` | Sentinel quorum monitors the primary and drives failover | +| **Redis Cluster (sharded)** | `cluster` | Sharding (`shard`) | Gossip on the cluster bus; **no** Sentinel processes | --- -## Sentinel Architecture +## Standalone architecture + +Use **`topology: standalone`** when you want a **single Redis Component** and do **not** provision Sentinel. 
+ +``` +Cluster → Component (redis) → InstanceSet → Pod × N +``` + +- There is **no** `redis-sentinel` Component and **no** Sentinel quorum — the monitoring and failover flow in the [Sentinel architecture](#sentinel-architecture) section does **not** apply. +- Pod layout (Redis server + metrics sidecar, PVC per data pod) matches the **Redis data pods** description under Sentinel below. +- For HA with automatic failover on a single shard, use **`replication`** (or **`replication-twemproxy`** with Twemproxy), not `standalone`. + +--- + +## Sentinel architecture + +:::note +Everything in this section applies to **`replication`** and **`replication-twemproxy`** only. It does **not** apply to **`standalone`** (no Sentinel) or **`cluster`** (Redis Cluster / gossip, no Sentinel). +::: Redis Sentinel uses a dedicated set of Sentinel processes to monitor the Redis primary, detect failures, and coordinate automatic failover. All data lives on a single primary; replicas serve as hot standbys and optional read targets. @@ -188,7 +207,7 @@ For external access, per-pod NodePort or LoadBalancer services (`redis-advertise ## System Accounts -KubeBlocks manages the following Redis account for both architectures. The password is auto-generated and stored in a Secret named `{cluster}-{component}-account-default`. +KubeBlocks manages the following Redis account for all topology patterns on this page (`standalone`, Sentinel, and Redis Cluster). The password is auto-generated and stored in a Secret named `{cluster}-{component}-account-default`. | Account | Role | Purpose | |---------|------|---------| diff --git a/docs/en/release-1_0_2/kubeblocks-for-rocketmq/03-architecture.mdx b/docs/en/release-1_0_2/kubeblocks-for-rocketmq/03-architecture.mdx index 954f5af..544bb65 100644 --- a/docs/en/release-1_0_2/kubeblocks-for-rocketmq/03-architecture.mdx +++ b/docs/en/release-1_0_2/kubeblocks-for-rocketmq/03-architecture.mdx @@ -133,3 +133,16 @@ When a RocketMQ component fails: 3. 
**KubeBlocks pod restart** — KubeBlocks detects the failed pod and restarts it; the recovered pod resumes as master (brokerId=0) after the process starts 4. **Master re-registers** — the recovered master re-registers with all NameServer instances; topic routing is refreshed 5. **Slaves reconnect** — slaves re-establish the HA replication connection to the restored master on port 10912 and resync any missed log entries + +## System Accounts + +The RocketMQ add-on declares KubeBlocks **`systemAccounts`** on the **broker** and **Dashboard** ComponentDefinitions. NameServer and other components do not use this mechanism in the same way. Passwords are generated according to each account’s policy unless overridden on the Cluster. + +Secrets follow **`{cluster}-{component}-account-{accountName}`** — where `{component}` is each component’s **`name`** in the Cluster spec (for example **`dashboard`** for the Dashboard, and each **broker shard** component name such as **`broker-0`**, **`broker-1`**, …). + +| Account | Component (typical) | Role | Purpose | +|---------|---------------------|------|---------| +| `rocketmq-admin` | Per broker shard (`broker-*`) | Broker admin / ACL user | Injected into broker pods as `ROCKETMQ_USER` and `ROCKETMQ_PASSWORD` for broker authentication configuration | +| `console-admin` | `dashboard` | Dashboard login | Injected into Dashboard pods as `CONSOLE_USER` and `CONSOLE_PASSWORD` for the RocketMQ Dashboard web UI | + +The Dashboard also needs the broker admin identity to talk to the cluster: it reads **`rocketmq-admin`** credentials from the broker ComponentDefinition via **`credentialVarRef`** (same username/password as the broker shard’s `rocketmq-admin` account). 
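The per-shard Secret naming described above can be sketched with a small helper. The cluster name `my-rmq` and the two broker shard names are illustrative assumptions; only the `{cluster}-{component}-account-{accountName}` pattern comes from the text:

```shell
# Build a system-account Secret name from the documented pattern:
#   {cluster}-{component}-account-{accountName}
account_secret() {
  # usage: account_secret <cluster> <component> <account>
  printf '%s-%s-account-%s\n' "$1" "$2" "$3"
}

# Example assumptions: cluster "my-rmq" with two broker shards and a dashboard.
account_secret my-rmq broker-0 rocketmq-admin    # my-rmq-broker-0-account-rocketmq-admin
account_secret my-rmq broker-1 rocketmq-admin    # my-rmq-broker-1-account-rocketmq-admin
account_secret my-rmq dashboard console-admin    # my-rmq-dashboard-account-console-admin
```

Note that each broker shard gets its own Secret, so a sharded cluster has one `rocketmq-admin` Secret per shard component plus a single `console-admin` Secret for the Dashboard.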
diff --git a/docs/en/release-1_0_2/kubeblocks-for-zookeeper/03-architecture.mdx b/docs/en/release-1_0_2/kubeblocks-for-zookeeper/03-architecture.mdx index 3184f48..f9a48cc 100644 --- a/docs/en/release-1_0_2/kubeblocks-for-zookeeper/03-architecture.mdx +++ b/docs/en/release-1_0_2/kubeblocks-for-zookeeper/03-architecture.mdx @@ -105,3 +105,11 @@ When a ZooKeeper ensemble member fails: 2. **Leader election** (if the lost member was the leader) — surviving members elect a new leader in milliseconds to seconds 3. **Write continuity** — as long as a quorum remains available, all write and read operations continue normally 4. **Pod recovery** — when the failed pod restarts, it reads its `myid` from the PVC, contacts the leader, and syncs any missed transactions before rejoining the ensemble + +## System Accounts + +KubeBlocks manages the following ZooKeeper system account. The password is auto-generated and stored in a Secret named `{cluster}-{component}-account-admin` (replace `{cluster}` and `{component}` with your Cluster metadata.name and the ZooKeeper component name, typically `zookeeper`). 
+ +| Account | Role | Purpose | +|---------|------|---------| +| `admin` | Admin | Administrator user when ZooKeeper authentication is enabled (`ZOO_ENABLE_AUTH=yes`); credentials are injected into pods as `ZK_ADMIN_USER` and `ZK_ADMIN_PASSWORD` for authenticated client and administrative access | diff --git a/src/components/ClickhouseArchitectureDiagram.tsx b/src/components/ClickhouseArchitectureDiagram.tsx index a2f8c8f..1cbf630 100644 --- a/src/components/ClickhouseArchitectureDiagram.tsx +++ b/src/components/ClickhouseArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function ClickhouseArchitectureDiagram() { .ch-ha-diagram .dot-blue { background: #388bfd; } .ch-ha-diagram .dot-green { background: #3fb950; } .ch-ha-diagram .dot-purple { background: #a371f7; } - .ch-ha-diagram .dot-orange { background: #e3b341; } - .ch-ha-diagram .dot-teal { background: #56d4dd; } .ch-ha-diagram .dot-red { background: #f85149; } .ch-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function ClickhouseArchitectureDiagram() { .ch-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .ch-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .ch-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -128,49 +122,7 @@ export default function ClickhouseArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .ch-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .ch-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .ch-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .ch-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; 
border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .ch-ha-diagram .dcs-item span { color: #7d8590; } - .ch-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .ch-ha-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .ch-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .ch-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .ch-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .ch-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .ch-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .ch-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .ch-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .ch-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .ch-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .ch-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .ch-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .ch-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .ch-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; 
font-weight: 600; white-space: nowrap; } - .ch-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .ch-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .ch-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .ch-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .ch-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .ch-ha-diagram .legend { + .ch-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -312,125 +264,13 @@ export default function ClickhouseArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* ZooKeeper Coordination */} -
-
- - ClickHouse Keeper (ch-keeper) -
-
-
- CH Keeper :9181 (ZK-compatible TCP) · :9234 (Raft)
- replication queue · part checksums · DDL coordination -
-
- Supported roles leader / follower / observer
- Raft-based quorum (typically 3 nodes) · roleProbe on keeper pods only · observer is optional (not deployed by default in a standard 3-node ensemble) -
-
- Secret account-*
- admin credentials -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1One replica pod crashes
-
2CH Keeper detects lost connection; remaining replicas may continue serving depending on replication health and quorum
-
3KubeBlocks restarts the failed pod
-
4Recovered replica reconnects to CH Keeper, fetches missing data parts from peers
-
5ClusterIP service never changes — no role switch needed (all replicas are equivalent)
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
adminsuperuser (initAccount)
-
-
- -
{/* /sidebar */} {/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Sharding
- -
Component ×N shards
- -
InstanceSet
- -
Pods
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ CH server: no roleProbe (cmpd-ch defines no roles)
- ⚙ CH Keeper: roleProbe (leader/follower/observer) + switchover
- ⚙ Execute memberJoin / scale ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
CRD Resource
Client Traffic
Replica Pod (all equivalent)
-
ClickHouse Keeper Coordination
-
Failover Path
Persistent Storage
diff --git a/src/components/ElasticsearchArchitectureDiagram.tsx b/src/components/ElasticsearchArchitectureDiagram.tsx index 9b92e63..0fd550d 100644 --- a/src/components/ElasticsearchArchitectureDiagram.tsx +++ b/src/components/ElasticsearchArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function ElasticsearchArchitectureDiagram() { .es-ha-diagram .dot-blue { background: #388bfd; } .es-ha-diagram .dot-green { background: #3fb950; } .es-ha-diagram .dot-purple { background: #a371f7; } - .es-ha-diagram .dot-orange { background: #e3b341; } - .es-ha-diagram .dot-teal { background: #56d4dd; } .es-ha-diagram .dot-red { background: #f85149; } .es-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function ElasticsearchArchitectureDiagram() { .es-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .es-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .es-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -128,49 +122,7 @@ export default function ElasticsearchArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .es-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .es-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .es-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .es-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .es-ha-diagram .dcs-item span { color: #7d8590; } - .es-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .es-ha-diagram .failover-steps { 
display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .es-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .es-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .es-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .es-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .es-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .es-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .es-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .es-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .es-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .es-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .es-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .es-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .es-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .es-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .es-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .es-ha-diagram 
.crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .es-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .es-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .es-ha-diagram .legend { + .es-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -362,119 +314,7 @@ export default function ElasticsearchArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* ES Cluster Coordination */} -
-
- - ES Cluster Coordination -
-
-
- ConfigMap {'{scope}'}-config
- cluster settings · master nodes list -
-
- Master election
- quorum = (N/2)+1 nodes -
-
- Secret account-*
- xpack security credentials -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Node stops responding (heartbeat timeout)
-
2ES removes node from cluster state
-
3If master: quorum elects new master
-
4In-sync replica promoted to primary shard
-
5Shard rebalancing to restore replication factor
-
6KubeBlocks restarts failed pod via InstanceSet
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
elasticsuperuser
-
kbadmincluster admin
-
kbdataprotectionbackup
-
kbprobemonitor
-
kbmonitoringmetrics
-
-
- -
{/* /sidebar */} {/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ es-agent: lifecycle & config management
- ⚙ Execute scale / reconfigure ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
@@ -482,7 +322,6 @@ export default function ElasticsearchArchitectureDiagram() {
Master / REST Traffic
Data Pod
ES Cluster Coordination
-
Failover Path
Persistent Storage
diff --git a/src/components/EtcdArchitectureDiagram.tsx b/src/components/EtcdArchitectureDiagram.tsx index 0fdb506..6ec3ffd 100644 --- a/src/components/EtcdArchitectureDiagram.tsx +++ b/src/components/EtcdArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function EtcdArchitectureDiagram() { .etcd-ha-diagram .dot-blue { background: #388bfd; } .etcd-ha-diagram .dot-green { background: #3fb950; } .etcd-ha-diagram .dot-purple { background: #a371f7; } - .etcd-ha-diagram .dot-orange { background: #e3b341; } - .etcd-ha-diagram .dot-teal { background: #56d4dd; } .etcd-ha-diagram .dot-red { background: #f85149; } .etcd-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function EtcdArchitectureDiagram() { .etcd-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .etcd-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .etcd-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -128,49 +122,7 @@ export default function EtcdArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .etcd-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .etcd-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .etcd-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .etcd-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .etcd-ha-diagram .dcs-item span { color: #7d8590; } - .etcd-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .etcd-ha-diagram .failover-steps { display: flex; flex-direction: 
column; gap: 4px; margin-top: 8px; } - .etcd-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .etcd-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .etcd-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .etcd-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .etcd-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .etcd-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .etcd-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .etcd-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .etcd-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .etcd-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .etcd-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .etcd-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .etcd-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .etcd-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .etcd-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .etcd-ha-diagram 
.crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .etcd-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .etcd-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .etcd-ha-diagram .legend { + .etcd-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -313,128 +265,13 @@ export default function EtcdArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* Raft Consensus */} -
-
- - Raft Consensus -
-
-
- Leader lease
- TTL configurable · heartbeat interval -
-
- ConfigMap {'{scope}'}-config
- cluster member list -
-
- Secret account-*
- root credentials (if auth enabled) -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Leader Pod crashes
-
2Raft heartbeat timeout (election timeout ≈150-300ms)
-
3Follower starts election, increments term
-
4Majority votes → new leader elected
-
5roleProbe detects new leader
-
6Pod label role=leader updated
-
7Service Endpoints auto-switch
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
rootsuperuser
-
kbadminadmin
-
kbdataprotectionbackup
-
kbprobemonitor
-
-
- -
{/* /sidebar */} {/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: probe role every 1s
- ⚙ Update Pod label role=leader
- ⚙ Execute switchover / scale ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
CRD Resource
Leader / Client Traffic
Follower Pod
-
Raft Consensus
-
Failover Path
Persistent Storage
diff --git a/src/components/KafkaArchitectureDiagram.tsx b/src/components/KafkaArchitectureDiagram.tsx index 0600446..bb32413 100644 --- a/src/components/KafkaArchitectureDiagram.tsx +++ b/src/components/KafkaArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function KafkaArchitectureDiagram() { .kafka-ha-diagram .dot-blue { background: #388bfd; } .kafka-ha-diagram .dot-green { background: #3fb950; } .kafka-ha-diagram .dot-purple { background: #a371f7; } - .kafka-ha-diagram .dot-orange { background: #e3b341; } - .kafka-ha-diagram .dot-teal { background: #56d4dd; } .kafka-ha-diagram .dot-red { background: #f85149; } .kafka-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function KafkaArchitectureDiagram() { .kafka-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .kafka-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .kafka-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -128,49 +122,7 @@ export default function KafkaArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .kafka-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .kafka-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .kafka-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .kafka-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .kafka-ha-diagram .dcs-item span { color: #7d8590; } - .kafka-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .kafka-ha-diagram .failover-steps { display: 
flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .kafka-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .kafka-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .kafka-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .kafka-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .kafka-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .kafka-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .kafka-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .kafka-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .kafka-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .kafka-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .kafka-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .kafka-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .kafka-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .kafka-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .kafka-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - 
.kafka-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .kafka-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .kafka-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .kafka-ha-diagram .legend { + .kafka-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -334,128 +286,13 @@ export default function KafkaArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* HA Mechanism - KRaft */} -
-
- - HA Mechanism - (KRaft — no ZooKeeper) -
-
-
- KRaft metadata log
- replicated across controller-eligible nodes -
-
- ConfigMap {'{scope}'}-config
- broker config -
-
- Secret account-*
- SASL credentials -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Controller/Broker Pod crashes
-
2KRaft leader election (≈10s timeout)
-
3New controller elected via Raft quorum
-
4Partition leaders reassigned by controller
-
5ISR updated for affected partitions
-
6Clients reconnect to new partition leaders
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
adminsuperuser
-
kbdataprotectionbackup
-
kbprobemonitor
-
kbmonitoringmetrics
-
-
- -
{/* /sidebar */} {/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Scale / Reconfigure -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: probe role every 1s
- ⚙ Update Pod label role=controller
- ⚙ Execute scale / reconfigure ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
CRD Resource
Controller Pod (KRaft)
Broker Pod
-
KRaft Consensus
-
Failover Path
Persistent Storage
diff --git a/src/components/KafkaCombinedArchitectureDiagram.tsx b/src/components/KafkaCombinedArchitectureDiagram.tsx index a14c542..3e90b14 100644 --- a/src/components/KafkaCombinedArchitectureDiagram.tsx +++ b/src/components/KafkaCombinedArchitectureDiagram.tsx @@ -24,12 +24,8 @@ export default function KafkaCombinedArchitectureDiagram() { text-transform: uppercase; margin-bottom: 10px; display: flex; align-items: center; gap: 6px; } - .kafka-combined-diagram .main-area { display: flex; gap: 16px; align-items: flex-start; } + .kafka-combined-diagram .main-area { display: flex; align-items: flex-start; } .kafka-combined-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .kafka-combined-diagram .mgmt-sidebar { - width: 256px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } /* Client */ .kafka-combined-diagram .client-mini { @@ -115,44 +111,6 @@ export default function KafkaCombinedArchitectureDiagram() { display: flex; align-items: center; gap: 8px; } - /* Sidebar */ - .kafka-combined-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .kafka-combined-diagram .dcs-card { border-color: #6e40c944; background: #120d2a; } - .kafka-combined-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .kafka-combined-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #6e40c933; - background: #0d0820; font-size: 10px; color: #c084fc; line-height: 1.5; - } - .kafka-combined-diagram .dcs-item span { color: #7d8590; } - .kafka-combined-diagram .failover-card { border-color: #da363344; background: #1c0a0a; } - .kafka-combined-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .kafka-combined-diagram .step { display: flex; align-items: flex-start; gap: 6px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .kafka-combined-diagram .step-num { - width: 16px; height: 
16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - - /* Operator */ - .kafka-combined-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 14px 18px; - } - .kafka-combined-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .kafka-combined-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .kafka-combined-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .kafka-combined-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .kafka-combined-diagram .crd-chain { display: flex; align-items: center; gap: 5px; margin-top: 8px; flex-wrap: wrap; } - .kafka-combined-diagram .crd-chip { padding: 3px 9px; border-radius: 20px; border: 1px solid; font-size: 10px; font-weight: 600; white-space: nowrap; } - .kafka-combined-diagram .crd-chip.cluster { border-color:#a371f7; color:#d2a8ff; background:#2d1f5e; } - .kafka-combined-diagram .crd-chip.component { border-color:#3fb950; color:#7ee787; background:#1a3020; } - .kafka-combined-diagram .crd-chip.instanceset { border-color:#79c0ff; color:#79c0ff; background:#0d2035; } - .kafka-combined-diagram .crd-chip.pod { border-color:#e3b341; color:#e3b341; background:#302010; } - .kafka-combined-diagram .crd-arrow { color: #484f58; font-size: 12px; } .kafka-combined-diagram .legend { display: flex; gap: 16px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -236,7 +194,9 @@ export default function KafkaCombinedArchitectureDiagram() {
KRaft Quorum (port :9093) - all 3 nodes participate · Raft consensus for cluster metadata · active controller elected among them + + same kafka container on each node — not a separate metadata deployment · Raft consensus for cluster metadata · one active controller at a time +
🔗 @@ -245,96 +205,8 @@ export default function KafkaCombinedArchitectureDiagram() {
{/* /data-plane */} - - {/* Sidebar */} -
- -
-
- - KRaft Metadata - (__cluster_metadata topic) -
-
-
- Active controller:
one of the 3 combined nodes
all metadata writes go through it -
-
- Metadata log:
internal topic replicated
across all controller-eligible nodes -
-
- Quorum:
3 nodes → tolerates 1 failure
no external ZooKeeper needed -
-
-
- -
-
- - Failover Process -
-
-
1Node fails (broker + controller role lost)
-
2KRaft quorum detects leader loss → elects new active controller
-
3New controller reassigns partition leaders from ISR
-
4Clients refresh metadata → reconnect to new partition leaders
-
5KubeBlocks restarts failed pod; rejoins quorum on startup
-
-
- -
{/* /main-area */} -
-
- Management Plane · KubeBlocks Operator -
-
- -
-
-
- - KubeBlocks Operator - · single Component for all combined nodes; uniform pod spec -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Scale / Rolling Update -
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component (kafka-combine)
- -
InstanceSet
- -
Pod × N
-
-
-
-
Port Reference
-
- :9092 — client (producer/consumer)
- :9093 — controller quorum (KRaft)
- :9094 — internal broker replication
- :5556 — jmx-exporter (Prometheus scrape)
- :5555 — JMX_PORT (internal Java JMX) -
-
-
-
KubeBlocks Operator
Client Traffic (:9092)
diff --git a/src/components/MilvusArchitectureDiagram.tsx b/src/components/MilvusArchitectureDiagram.tsx index 872c15e..df51897 100644 --- a/src/components/MilvusArchitectureDiagram.tsx +++ b/src/components/MilvusArchitectureDiagram.tsx @@ -17,7 +17,6 @@ export default function MilvusArchitectureDiagram() { .milvus-ha-diagram .dot-blue { background: #388bfd; } .milvus-ha-diagram .dot-green { background: #3fb950; } .milvus-ha-diagram .dot-purple { background: #a371f7; } - .milvus-ha-diagram .dot-orange { background: #e3b341; } .milvus-ha-diagram .dot-teal { background: #56d4dd; } .milvus-ha-diagram .dot-red { background: #f85149; } .milvus-ha-diagram .card-title { @@ -31,10 +30,6 @@ export default function MilvusArchitectureDiagram() { .milvus-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .milvus-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .milvus-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -148,48 +143,6 @@ export default function MilvusArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .milvus-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .milvus-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .milvus-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .milvus-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .milvus-ha-diagram .dcs-item span { color: #7d8590; } - .milvus-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .milvus-ha-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; 
margin-top: 8px; } - .milvus-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .milvus-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .milvus-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .milvus-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .milvus-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .milvus-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .milvus-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .milvus-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .milvus-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .milvus-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .milvus-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .milvus-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .milvus-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .milvus-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .milvus-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .milvus-ha-diagram 
.crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .milvus-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .milvus-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } .milvus-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -376,119 +329,8 @@ export default function MilvusArchitectureDiagram() {
{/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* HA Mechanism */} -
-
- - etcd Coordination + S3 Storage -
-
-
- etcd
- metadata store · component discovery · leader election -
-
- MinIO/S3
- segment storage · vector index files -
-
- ConfigMap {'{scope}'}-config
- Milvus component topology -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1MixCoord pod crashes (single replica)
-
2KubeBlocks InstanceSet detects pod failure
-
3KubeBlocks restarts the MixCoord pod
-
4New MixCoord reloads all coordinator state from etcd
-
5Query/Data/Index nodes reconnect to restored MixCoord
-
6Cluster resumes serving requests
-
-
- - {/* Scalability */} -
-
- - Horizontal Scalability -
-
-
proxyscalable
-
querynodescalable
-
datanodescalable
-
indexnodescalable
-
mixcoordsingle replica
-
-
- -
{/* /sidebar */} {/* /main-area */} - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component ×5
- -
InstanceSet
- -
Pods
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ Inject serviceRef endpoints (etcd, Pulsar/Kafka, MinIO)
- ⚙ Restart pods on failure; MixCoord reloads from etcd
- ⚙ Execute scale ops per component independently
- ⚙ Manage multi-component topology ordering -
-
-
{/* Legend */}
@@ -496,8 +338,6 @@ export default function MilvusArchitectureDiagram() {
Coordinator Pod
Proxy / Entry Point
Worker Pod
-
etcd Coordination
-
Failover Path
Persistent Storage
diff --git a/src/components/MilvusStandaloneArchitectureDiagram.tsx b/src/components/MilvusStandaloneArchitectureDiagram.tsx index 270e875..f0ef83b 100644 --- a/src/components/MilvusStandaloneArchitectureDiagram.tsx +++ b/src/components/MilvusStandaloneArchitectureDiagram.tsx @@ -17,7 +17,6 @@ export default function MilvusStandaloneArchitectureDiagram() { .milvus-sa-diagram .dot-blue { background: #388bfd; } .milvus-sa-diagram .dot-purple { background: #a371f7; } .milvus-sa-diagram .dot-teal { background: #56d4dd; } - .milvus-sa-diagram .dot-orange { background: #e3b341; } .milvus-sa-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; text-transform: uppercase; margin-bottom: 10px; @@ -25,10 +24,6 @@ export default function MilvusStandaloneArchitectureDiagram() { } .milvus-sa-diagram .main-area { display: flex; gap: 20px; align-items: flex-start; } .milvus-sa-diagram .center-col { flex: 1; display: flex; flex-direction: column; gap: 0; } - .milvus-sa-diagram .right-sidebar { - width: 240px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; - } /* Client */ .milvus-sa-diagram .client-box { @@ -127,34 +122,7 @@ export default function MilvusStandaloneArchitectureDiagram() { font-size: 9px; color: #7d8590; } - /* Sidebar */ - .milvus-sa-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .milvus-sa-diagram .note-card { border-color: #e3b34144; background: #1a1505; } - .milvus-sa-diagram .note-items { display: flex; flex-direction: column; gap: 6px; margin-top: 8px; } - .milvus-sa-diagram .note-item { - padding: 5px 8px; border-radius: 5px; border: 1px solid #e3b34133; - background: #0d0900; font-size: 10px; color: #e3b341; line-height: 1.5; - } - .milvus-sa-diagram .note-item span { color: #7d8590; } - .milvus-sa-diagram .ha-card { border-color: #da363344; background: #1c0a0a; } - .milvus-sa-diagram .ha-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - 
.milvus-sa-diagram .ha-item { - padding: 5px 8px; border-radius: 5px; border: 1px solid #da363322; - background: #0d0505; font-size: 10px; color: #f85149; line-height: 1.5; - } - /* Operator */ - .milvus-sa-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 14px 18px; - } - .milvus-sa-diagram .crd-chain { display: flex; align-items: center; gap: 5px; margin-top: 8px; flex-wrap: wrap; } - .milvus-sa-diagram .crd-chip { padding: 3px 9px; border-radius: 20px; border: 1px solid; font-size: 10px; font-weight: 600; white-space: nowrap; } - .milvus-sa-diagram .crd-chip.cluster { border-color:#a371f7; color:#d2a8ff; background:#2d1f5e; } - .milvus-sa-diagram .crd-chip.component { border-color:#56d4dd; color:#56d4dd; background:#061515; } - .milvus-sa-diagram .crd-chip.instanceset { border-color:#7ee787; color:#7ee787; background:#1a3020; } - .milvus-sa-diagram .crd-chip.pod { border-color:#e3b341; color:#e3b341; background:#302010; } - .milvus-sa-diagram .crd-arrow { color: #484f58; font-size: 12px; } .milvus-sa-diagram .legend { display: flex; gap: 16px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -283,96 +251,13 @@ export default function MilvusStandaloneArchitectureDiagram() { {/* /center-col */} - {/* RIGHT SIDEBAR */} -
- -
-
- - Standalone Characteristics -
-
-
- Deployment:
3 Components in 1 Cluster (etcd + minio + milvus) -
-
- Process model:
All Milvus roles run in a single process — no inter-pod RPC -
-
- Scalability:
Not horizontally scalable — scale up by increasing pod resources -
-
- Use case:
Development, testing, demos, small vector datasets -
-
-
- -
-
- - HA Limitations -
-
-
Single Milvus pod = single point of failure
-
No automatic failover for the milvus process
-
KubeBlocks restarts the pod on crash (pod-level recovery only)
-
etcd and minio are also single replicas by default
-
-
- For production HA, use the Distributed topology instead. -
-
- -
{/* /main-area */} - {/* Separator */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* Operator */} -
-
-
- - KubeBlocks Operator - · provisions etcd → minio → milvus in order; manages lifecycle of all 3 Components -
-
CRD RESOURCE HIERARCHY (same for all 3 components)
-
-
Cluster
- -
Component (etcd)
- + -
Component (minio)
- + -
Component (milvus)
- -
InstanceSet
- -
Pod × 1
-
-
-
-
Provisioning Order
-
- ① etcd provisions first
- ② minio provisions
- ③ milvus provisions last
-    (reads etcd + minio config) -
-
-
-
KubeBlocks Operator
Milvus (all-in-one)
Storage Components (etcd / MinIO)
Persistent Storage
-
Single Point of Failure
diff --git a/src/components/MinioArchitectureDiagram.tsx b/src/components/MinioArchitectureDiagram.tsx index 71d0b86..5973d16 100644 --- a/src/components/MinioArchitectureDiagram.tsx +++ b/src/components/MinioArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function MinioArchitectureDiagram() { .minio-ha-diagram .dot-blue { background: #388bfd; } .minio-ha-diagram .dot-green { background: #3fb950; } .minio-ha-diagram .dot-purple { background: #a371f7; } - .minio-ha-diagram .dot-orange { background: #e3b341; } - .minio-ha-diagram .dot-teal { background: #56d4dd; } .minio-ha-diagram .dot-red { background: #f85149; } .minio-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function MinioArchitectureDiagram() { .minio-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .minio-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .minio-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -133,48 +127,6 @@ export default function MinioArchitectureDiagram() { border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; flex-wrap: wrap; text-align: center; } - .minio-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .minio-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .minio-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .minio-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .minio-ha-diagram .dcs-item span { color: #7d8590; } - .minio-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .minio-ha-diagram .failover-steps { display: flex; flex-direction: 
column; gap: 4px; margin-top: 8px; } - .minio-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .minio-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .minio-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .minio-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .minio-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .minio-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .minio-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .minio-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .minio-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .minio-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .minio-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .minio-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .minio-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .minio-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .minio-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .minio-ha-diagram 
.crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .minio-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .minio-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } .minio-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -320,116 +272,8 @@ export default function MinioArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* Distributed Mode Config */} -
-
- - Distributed Erasure Coding - (Distributed Mode Config) -
-
-
- Erasure set / EC:4+4 (configurable)
- no single point of failure -
-
- Bitrot protection / per-object checksums
- healing on access -
-
- ConfigMap {'{scope}'}-config
- server pool topology -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Node/drive failure detected
-
2Quorum check: ≥ N/2+1 drives available
-
3Read healing via parity reconstruction
-
4Background heal task re-encodes missing shards
-
5Node rejoins and re-syncs data
-
6Cluster returns to full redundancy
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
rootsuperuser (initAccount)
-
-
- -
{/* /sidebar */} {/* /main-area */} - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Scale / Expand -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 4
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: exec mc admin info → readwrite / notready
- ⚙ Volume expansion (online)
- ⚙ Execute horizontal scale ops
- ⚙ Manage root account Secret -
-
-
{/* Legend */}
@@ -437,7 +281,6 @@ export default function MinioArchitectureDiagram() {
CRD Resource
S3 API Traffic
Distributed Node (symmetric)
-
Auto-Healing
Persistent Storage
diff --git a/src/components/MongodbArchitectureDiagram.tsx b/src/components/MongodbArchitectureDiagram.tsx index 4e8e56a..08bc214 100644 --- a/src/components/MongodbArchitectureDiagram.tsx +++ b/src/components/MongodbArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function MongodbArchitectureDiagram() { .mongodb-ha-diagram .dot-blue { background: #388bfd; } .mongodb-ha-diagram .dot-green { background: #3fb950; } .mongodb-ha-diagram .dot-purple { background: #a371f7; } - .mongodb-ha-diagram .dot-orange { background: #e3b341; } - .mongodb-ha-diagram .dot-teal { background: #56d4dd; } .mongodb-ha-diagram .dot-red { background: #f85149; } .mongodb-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function MongodbArchitectureDiagram() { .mongodb-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .mongodb-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .mongodb-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -138,48 +132,6 @@ export default function MongodbArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .mongodb-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .mongodb-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .mongodb-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .mongodb-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .mongodb-ha-diagram .dcs-item span { color: #7d8590; } - .mongodb-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - 
.mongodb-ha-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .mongodb-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .mongodb-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .mongodb-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .mongodb-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .mongodb-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .mongodb-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .mongodb-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .mongodb-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .mongodb-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .mongodb-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .mongodb-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .mongodb-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .mongodb-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .mongodb-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .mongodb-ha-diagram 
.crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .mongodb-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .mongodb-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .mongodb-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } .mongodb-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -364,123 +316,8 @@ export default function MongodbArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* MongoDB Replica Set DCS */} -
-
- - MongoDB Replica Set - (K8s API) -
-
-
- ConfigMap {'{scope}'}-config
- replica set config -
-
- Endpoints {'{scope}'}
- primary lease · election -
-
- Secret account-*
- system account passwords -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Primary Pod crashes
-
2Replica Set heartbeat timeout (≈10s)
-
3Secondary initiates election
-
4Majority votes → new primary elected
-
5roleProbe detects role change
-
6Pod label role=primary updated
-
7Service Endpoints auto-switch
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
rootsuperuser
-
kbadminsuperuser
-
kbdataprotectionbackup
-
kbprobemonitor
-
kbmonitoringmetrics
-
kbreplicatorreplication
-
-
- -
{/* /sidebar */} {/* /main-area */} - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: exec /tools/syncerctl getrole in mongodb container
- ⚙ Update Pod label role=primary
- ⚙ Execute switchover / scale ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
@@ -488,7 +325,6 @@ export default function MongodbArchitectureDiagram() {
Primary / RW Traffic
Secondary / RO Traffic
Replica Set DCS
-
Failover Path
Persistent Storage
diff --git a/src/components/MongodbShardingArchitectureDiagram.tsx b/src/components/MongodbShardingArchitectureDiagram.tsx index 8cce00e..7265e2a 100644 --- a/src/components/MongodbShardingArchitectureDiagram.tsx +++ b/src/components/MongodbShardingArchitectureDiagram.tsx @@ -29,10 +29,6 @@ export default function MongodbShardingArchitectureDiagram() { /* Layout */ .mdb-shard-diagram .main-area { display: flex; gap: 16px; align-items: flex-start; } .mdb-shard-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .mdb-shard-diagram .mgmt-sidebar { - width: 256px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } /* Client */ .mdb-shard-diagram .client-box { @@ -151,46 +147,6 @@ export default function MongodbShardingArchitectureDiagram() { .mdb-shard-diagram .container-name { color: #e6edf3; font-weight: 600; } .mdb-shard-diagram .container-port { color: #7d8590; margin-left: auto; } - /* Sidebar */ - .mdb-shard-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .mdb-shard-diagram .routing-card { border-color: #a371f744; background: #120d2a; } - .mdb-shard-diagram .failover-card { border-color: #da363344; background: #1c0a0a; } - .mdb-shard-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .mdb-shard-diagram .step { display: flex; align-items: flex-start; gap: 6px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .mdb-shard-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .mdb-shard-diagram .routing-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .mdb-shard-diagram .routing-item { - padding: 5px 8px; border-radius: 6px; border: 1px solid #a371f733; - 
background: #0d0820; font-size: 10px; color: #d2a8ff; line-height: 1.5; - } - .mdb-shard-diagram .routing-item span { color: #7d8590; } - - /* Operator */ - .mdb-shard-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .mdb-shard-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .mdb-shard-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .mdb-shard-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .mdb-shard-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .mdb-shard-diagram .crd-chain { display: flex; align-items: center; gap: 5px; margin-top: 10px; flex-wrap: wrap; } - .mdb-shard-diagram .crd-chip { padding: 3px 9px; border-radius: 20px; border: 1px solid; font-size: 10px; font-weight: 600; white-space: nowrap; } - .mdb-shard-diagram .crd-chip.cluster { border-color:#a371f7; color:#d2a8ff; background:#2d1f5e; } - .mdb-shard-diagram .crd-chip.component { border-color:#56d4dd; color:#56d4dd; background:#061515; } - .mdb-shard-diagram .crd-chip.sharding { border-color:#3fb950; color:#7ee787; background:#1a3020; } - .mdb-shard-diagram .crd-chip.shard { border-color:#e3b341; color:#e3b341; background:#302010; } - .mdb-shard-diagram .crd-chip.instanceset { border-color:#79c0ff; color:#79c0ff; background:#0d2035; } - .mdb-shard-diagram .crd-arrow { color: #484f58; font-size: 12px; } - /* Legend */ .mdb-shard-diagram .legend { display: flex; gap: 16px; flex-wrap: wrap; justify-content: center; @@ -363,116 +319,8 @@ export default function MongodbShardingArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* Routing */} -
-
- - How Mongos Routes Queries -
-
-
- 1. Client sends query
to any mongos instance on :27017 -
-
- 2. Mongos reads chunk map
from config server CSRS -
-
- 3. Shard key hashed
→ determines target shard -
-
- 4. Mongos forwards
to that shard's primary -
-
- 5. Result merged
(for scatter-gather queries, all shards queried) -
-
-
- - {/* Failover */} -
-
- - Failover (per shard) -
-
-
1Shard primary pod fails
-
2Shard replica set election (≈10 s)
-
3Secondary with latest oplog elected
-
4role=primary label updated
-
5Mongos retries on new primary
-
-
- Config server failover follows the same replica set election process independently. -
-
- -
{/* /main-area */} - {/* Separator */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* Operator */} -
-
-
- - KubeBlocks Operator - · manages Sharding + Component resources independently; orchestrates provisioning order -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Sharding Controller - Shard scale in/out -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component (mongos)
- + -
Component (config-server)
- + -
Sharding (shard)
- -
Shard × N
- -
InstanceSet
- -
Pod × replicas
-
-
-
-
-
Startup Dependencies
-
- ① config-server (CSRS) must be ready
- ② shards register with CSRS
- ③ mongos requires reachable CSRS
-    before routing queries
- ⚙ Scale shards: add/remove shard,
-    KubeBlocks migrates chunks -
-
-
- {/* Legend */}
KubeBlocks Operator
@@ -480,7 +328,6 @@ export default function MongodbShardingArchitectureDiagram() {
Config Server (CSRS)
Shard Primary
Shard Secondary
-
Failover Path
Persistent Storage
diff --git a/src/components/MysqlArchitectureDiagram.tsx b/src/components/MysqlArchitectureDiagram.tsx index 726110b..3505769 100644 --- a/src/components/MysqlArchitectureDiagram.tsx +++ b/src/components/MysqlArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function MysqlArchitectureDiagram() { .mysql-ha-diagram .dot-blue { background: #388bfd; } .mysql-ha-diagram .dot-green { background: #3fb950; } .mysql-ha-diagram .dot-purple { background: #a371f7; } - .mysql-ha-diagram .dot-orange { background: #e3b341; } - .mysql-ha-diagram .dot-teal { background: #56d4dd; } .mysql-ha-diagram .dot-red { background: #f85149; } .mysql-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function MysqlArchitectureDiagram() { .mysql-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .mysql-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .mysql-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -128,49 +122,7 @@ export default function MysqlArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .mysql-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .mysql-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .mysql-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .mysql-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .mysql-ha-diagram .dcs-item span { color: #7d8590; } - .mysql-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .mysql-ha-diagram .failover-steps { display: 
flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .mysql-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .mysql-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .mysql-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .mysql-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .mysql-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .mysql-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .mysql-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .mysql-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .mysql-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .mysql-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .mysql-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .mysql-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .mysql-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .mysql-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .mysql-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - 
.mysql-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .mysql-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .mysql-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .mysql-ha-diagram .legend { + .mysql-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -350,131 +302,13 @@ export default function MysqlArchitectureDiagram() { {/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* MySQL Replication */} -
-
- - MySQL Replication - (async/semi-sync) -
-
-
- ConfigMap {'{scope}'}-config
- source host · binlog position -
-
- Endpoints {'{scope}'}
- primary detection · role probe -
-
- Secret account-*
- system account passwords -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Primary Pod crashes
-
2syncer roleProbe fails (syncerctl getrole) (≈30s)
-
3KubeBlocks marks primary unavailable
-
4Best replica promoted to primary
-
5Replicas reconnect to new primary
-
6Pod label role=primary updated
-
7Service Endpoints auto-switch
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
rootsuperuser
-
kbadminsuperuser
-
kbdataprotectionbackup
-
kbprobemonitor
-
kbmonitoringmetrics
-
kbreplicatorreplication
-
-
- -
{/* /sidebar */} {/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: probe role every 1s
- ⚙ Update Pod label role=primary
- ⚙ Execute switchover / scale ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
CRD Resource
Primary / RW Traffic
Replica Pod
-
MySQL Replication
-
Failover Path
Persistent Storage
diff --git a/src/components/MysqlMGRArchitectureDiagram.tsx b/src/components/MysqlMGRArchitectureDiagram.tsx index b298213..fd67798 100644 --- a/src/components/MysqlMGRArchitectureDiagram.tsx +++ b/src/components/MysqlMGRArchitectureDiagram.tsx @@ -31,10 +31,6 @@ export default function MysqlMGRArchitectureDiagram() { .mysql-mgr-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .mysql-mgr-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .mysql-mgr-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -148,54 +144,6 @@ export default function MysqlMGRArchitectureDiagram() { font-size: 10px; color: #7d8590; } - /* Sidebar cards */ - .mysql-mgr-diagram .sb-card { - border-radius: 10px; border: 1px solid #30363d; - background: #161b22; padding: 14px; - } - .mysql-mgr-diagram .sb-title { - font-size: 10px; font-weight: 700; letter-spacing: 1px; - text-transform: uppercase; margin-bottom: 10px; - } - .mysql-mgr-diagram .step { - display: flex; align-items: flex-start; gap: 8px; - font-size: 10px; color: #8b949e; line-height: 1.5; margin-bottom: 6px; - } - .mysql-mgr-diagram .step-num { - display: inline-flex; align-items: center; justify-content: center; - width: 16px; height: 16px; border-radius: 50%; - background: #21262d; color: #f0f6fc; - font-size: 9px; font-weight: 700; flex-shrink: 0; margin-top: 1px; - } - .mysql-mgr-diagram .info-row { - display: flex; align-items: flex-start; gap: 6px; - font-size: 10px; color: #8b949e; line-height: 1.5; margin-bottom: 5px; - } - .mysql-mgr-diagram .info-key { - color: #f0f6fc; font-weight: 600; min-width: 72px; flex-shrink: 0; - } - - /* CRD chain */ - .mysql-mgr-diagram .crd-section { - border-radius: 10px; border: 1px solid #30363d; - background: #161b22; padding: 14px; - } - .mysql-mgr-diagram .crd-chain { - display: flex; flex-direction: column; gap: 
6px; margin-top: 8px; - } - .mysql-mgr-diagram .crd-row { - display: flex; align-items: center; gap: 6px; - } - .mysql-mgr-diagram .crd-chip { - font-size: 10px; font-weight: 700; padding: 3px 8px; - border-radius: 5px; white-space: nowrap; - } - .mysql-mgr-diagram .crd-chip.cluster { background: #1a2a0a; color: #7ee787; border: 1px solid #2ea04344; } - .mysql-mgr-diagram .crd-chip.component { background: #0d1f38; color: #79c0ff; border: 1px solid #1f6feb44; } - .mysql-mgr-diagram .crd-chip.instanceset { background: #1a0d2e; color: #c084fc; border: 1px solid #6e40c944; } - .mysql-mgr-diagram .crd-chip.pod { background: #1c1400; color: #e3b341; border: 1px solid #e3b34144; } - .mysql-mgr-diagram .crd-arrow { color: #484f58; font-size: 12px; } - /* Legend */ .mysql-mgr-diagram .legend { display: flex; flex-wrap: wrap; gap: 12px; @@ -368,73 +316,6 @@ export default function MysqlMGRArchitectureDiagram() { - {/* RIGHT: sidebar */} -
- {/* Group Replication card */} -
-
⟳ Group Replication
-
- Mode: - Single-primary (default) — one PRIMARY accepts writes, secondaries replicate -
-
- Protocol: - Paxos-based GCS on port :33061 — ensures at-least-once delivery and consistent ordering -
-
- Certification: - Each transaction broadcast for conflict detection before commit -
-
- Quorum: - 3-member group tolerates 1 failure; 5-member tolerates 2 -
-
- - {/* Failover card */} -
-
⚡ Automatic Failover
-
1Primary pod becomes unreachable — group communication times out
-
2Remaining members detect expulsion via GCS
-
3Group elects a new PRIMARY from certified secondaries
-
4syncer exec roleProbe detects new PRIMARY → updates kubeblocks.io/role label
-
5ClusterIP service endpoints switch to new primary automatically
-
- - {/* CRD chain */} -
-
📦 Resource Hierarchy
-
-
-
Cluster
-
-
- -
Component (mysql)
-
-
- -
InstanceSet
-
-
- -
Pod × 3 (or N)
-
-
-
- Same hierarchy as semisync — the difference is the replication protocol inside each pod. -
-
- - {/* Port reference */} -
-
🔌 Port Reference
-
:3306MySQL client connections
-
:33061Group Replication communication (GCS)
-
syncerctlroleProbe — exec inside mysql container
-
:9104Prometheus mysqld_exporter
-
-
{/* Legend */} diff --git a/src/components/MysqlOrchestratorArchitectureDiagram.tsx b/src/components/MysqlOrchestratorArchitectureDiagram.tsx index d0057c3..152df7b 100644 --- a/src/components/MysqlOrchestratorArchitectureDiagram.tsx +++ b/src/components/MysqlOrchestratorArchitectureDiagram.tsx @@ -19,10 +19,6 @@ export default function MysqlOrchestratorArchitectureDiagram() { .mysql-orc-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .mysql-orc-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } /* Client */ .mysql-orc-diagram .client-mini { @@ -192,51 +188,6 @@ export default function MysqlOrchestratorArchitectureDiagram() { border-radius: 4px; border: 1px solid #21262d; } - /* Sidebar cards */ - .mysql-orc-diagram .sb-card { - border-radius: 10px; border: 1px solid #30363d; - background: #161b22; padding: 14px; - } - .mysql-orc-diagram .sb-title { - font-size: 10px; font-weight: 700; letter-spacing: 1px; - text-transform: uppercase; margin-bottom: 10px; - } - .mysql-orc-diagram .step { - display: flex; align-items: flex-start; gap: 8px; - font-size: 10px; color: #8b949e; line-height: 1.5; margin-bottom: 6px; - } - .mysql-orc-diagram .step-num { - display: inline-flex; align-items: center; justify-content: center; - width: 16px; height: 16px; border-radius: 50%; - background: #21262d; color: #f0f6fc; - font-size: 9px; font-weight: 700; flex-shrink: 0; margin-top: 1px; - } - .mysql-orc-diagram .info-row { - display: flex; align-items: flex-start; gap: 6px; - font-size: 10px; color: #8b949e; line-height: 1.5; margin-bottom: 5px; - } - .mysql-orc-diagram .info-key { - color: #f0f6fc; font-weight: 600; min-width: 68px; flex-shrink: 0; - } - - /* CRD chain */ - .mysql-orc-diagram .crd-section { - border-radius: 10px; border: 1px solid #30363d; - background: #161b22; padding: 14px; - } - .mysql-orc-diagram .crd-chain { display: flex; flex-direction: column; 
gap: 5px; margin-top: 8px; } - .mysql-orc-diagram .crd-row { display: flex; align-items: center; gap: 6px; } - .mysql-orc-diagram .crd-chip { - font-size: 10px; font-weight: 700; padding: 3px 8px; - border-radius: 5px; white-space: nowrap; - } - .mysql-orc-diagram .crd-chip.cluster { background: #1a2a0a; color: #7ee787; border: 1px solid #2ea04344; } - .mysql-orc-diagram .crd-chip.component { background: #0d1f38; color: #79c0ff; border: 1px solid #1f6feb44; } - .mysql-orc-diagram .crd-chip.comp-orc { background: #2a1800; color: #e3b341; border: 1px solid #5a3a0044; } - .mysql-orc-diagram .crd-chip.instanceset { background: #1a0d2e; color: #c084fc; border: 1px solid #6e40c944; } - .mysql-orc-diagram .crd-chip.pod { background: #1c1400; color: #e3b341; border: 1px solid #e3b34144; } - .mysql-orc-diagram .crd-arrow { color: #484f58; font-size: 12px; } - /* Legend */ .mysql-orc-diagram .legend { display: flex; flex-wrap: wrap; gap: 12px; @@ -427,56 +378,6 @@ export default function MysqlOrchestratorArchitectureDiagram() { - {/* RIGHT: sidebar */} -
- {/* Failover steps */} -
-
⚡ Failover by Orchestrator
-
1Orchestrator polls all MySQL pods; primary stops responding
-
2Orchestrator identifies replica with most advanced relay log
-
3Orchestrator promotes chosen replica and reconnects remaining replicas
-
4exec roleProbe (orchestrator-client) detects new master → updates kubeblocks.io/role label
-
5mysql-server has no roleSelector — load-balances to all pods; use orc-proxysql for write-only routing
-
- - {/* Orchestrator capabilities */} -
-
🔭 Orchestrator Features
-
Monitoring:Continuous replication topology polling via SHOW SLAVE STATUS
-
Web UI:Visual replication graph; manual promote / relocate operations
-
HTTP API:Web UI + programmatic topology management; Service port :80 (container :3000)
-
Recovery:Configurable hooks for pre/post failover actions
-
- - {/* CRD chain */} -
-
📦 Resource Hierarchy
-
-
-
MySQL Cluster
-
-
- -
Component (mysql)
-
-
- -
InstanceSet → Pod × N
-
-
-
Orchestrator Cluster
-
-
- -
Component (orchestrator)
-
-
- -
Pod × N
-
-
-
-
{/* Legend */} diff --git a/src/components/PgHaArchitectureDiagram.tsx b/src/components/PgHaArchitectureDiagram.tsx index 0e34b06..adee4f6 100644 --- a/src/components/PgHaArchitectureDiagram.tsx +++ b/src/components/PgHaArchitectureDiagram.tsx @@ -16,24 +16,14 @@ export default function PgHaArchitectureDiagram() { .pg-ha-diagram .dot { width: 8px; height: 8px; border-radius: 50%; display: inline-block; } .pg-ha-diagram .dot-blue { background: #388bfd; } .pg-ha-diagram .dot-green { background: #3fb950; } - .pg-ha-diagram .dot-purple { background: #a371f7; } - .pg-ha-diagram .dot-orange { background: #e3b341; } .pg-ha-diagram .dot-teal { background: #56d4dd; } - .pg-ha-diagram .dot-red { background: #f85149; } .pg-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; text-transform: uppercase; margin-bottom: 10px; display: flex; align-items: center; gap: 6px; } - .pg-ha-diagram .main-area { - display: flex; gap: 16px; align-items: flex-start; - } .pg-ha-diagram .data-plane { - flex: 1; display: flex; flex-direction: column; gap: 0; - } - .pg-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; + display: flex; flex-direction: column; gap: 0; } .pg-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; @@ -55,13 +45,6 @@ export default function PgHaArchitectureDiagram() { border-left: 5px solid transparent; border-right: 5px solid transparent; border-top: 7px solid #3fb950; } - .pg-ha-diagram .v-line-gray { background: linear-gradient(to bottom, #30363d88, #484f58); } - .pg-ha-diagram .v-line-gray::after { - content: ''; position: absolute; bottom: 0; left: 50%; - transform: translateX(-50%); - border-left: 5px solid transparent; border-right: 5px solid transparent; - border-top: 7px solid #484f58; - } .pg-ha-diagram .v-arrow-label { font-size: 9px; color: #484f58; letter-spacing: 1px; white-space: nowrap; } .pg-ha-diagram .services-block { 
border-radius: 12px; border: 1px solid #238636; @@ -72,25 +55,54 @@ export default function PgHaArchitectureDiagram() { } .pg-ha-diagram .svc-card { border-radius: 8px; border: 1px solid; padding: 10px 12px; } .pg-ha-diagram .svc-rw { border-color: #3fb950; background: #0a1a0a; } - .pg-ha-diagram .svc-hl { border-color: #30363d; background: #0d1117; } - .pg-ha-diagram .svc-name { font-size: 12px; font-weight: 700; margin-bottom: 4px; } - .pg-ha-diagram .svc-rw .svc-name { color: #3fb950; } - .pg-ha-diagram .svc-hl .svc-name { color: #7d8590; } + .pg-ha-diagram .svc-name { font-size: 12px; font-weight: 700; margin-bottom: 4px; color: #3fb950; } .pg-ha-diagram .svc-detail { font-size: 10px; color: #7d8590; line-height: 1.6; } .pg-ha-diagram .svc-tag { display: inline-block; font-size: 9px; font-weight: 700; letter-spacing: 1px; padding: 2px 6px; border-radius: 4px; text-transform: uppercase; margin-top: 4px; + background: #1a4a1a; color: #3fb950; } - .pg-ha-diagram .tag-green { background: #1a4a1a; color: #3fb950; } - .pg-ha-diagram .tag-gray { background: #21262d; color: #8b949e; } .pg-ha-diagram .section-label { - font-size: 10px; font-weight: 700; letter-spacing: 2px; - text-transform: uppercase; color: #7d8590; margin-bottom: 8px; + font-size: 10px; font-weight: 700; letter-spacing: 1.5px; + text-transform: uppercase; color: #7d8590; margin-bottom: 12px; + } + .pg-ha-diagram .section-label-teal { color: #56d4dd; } + + /* DCS teal arrow */ + .pg-ha-diagram .v-line-teal { background: linear-gradient(to bottom, #1b7c8388, #56d4dd); } + .pg-ha-diagram .v-line-teal::after { + content: ''; position: absolute; bottom: 0; left: 50%; + transform: translateX(-50%); + border-left: 5px solid transparent; border-right: 5px solid transparent; + border-top: 7px solid #56d4dd; + } + + /* DCS bottom block */ + .pg-ha-diagram .dcs-block { + border-radius: 12px; border: 1px solid #1b7c83; + background: #061515; padding: 14px; + } + .pg-ha-diagram .dcs-resources { + display: flex; 
gap: 8px; flex-wrap: wrap; margin-top: 8px; + } + .pg-ha-diagram .dcs-resource { + background: #0d1117; border-radius: 8px; padding: 8px 12px; + border: 1px solid #1b7c8344; font-size: 10px; flex: 1; min-width: 130px; } + .pg-ha-diagram .dcs-res-name { font-weight: 600; color: #56d4dd; display: block; margin-bottom: 2px; } + .pg-ha-diagram .dcs-res-note { color: #7d8590; font-size: 9px; } + .pg-ha-diagram .dcs-feature { + font-size: 9px; color: #7d8590; + padding: 6px 10px; background: #0d1117; + border-radius: 6px; border: 1px solid #21262d; + white-space: nowrap; + } + + /* Pods section */ .pg-ha-diagram .pods-section { border-radius: 12px; border: 1px solid #30363d; - background: #0d1117; padding: 16px; + background: #0d1117; padding: 14px; } .pg-ha-diagram .pods-grid { display: grid; grid-template-columns: repeat(3, 1fr); gap: 12px; margin-top: 10px; @@ -128,48 +140,6 @@ export default function PgHaArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .pg-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .pg-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .pg-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .pg-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .pg-ha-diagram .dcs-item span { color: #7d8590; } - .pg-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .pg-ha-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .pg-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .pg-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: 
#da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .pg-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .pg-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .pg-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .pg-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .pg-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .pg-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .pg-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .pg-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .pg-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .pg-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .pg-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .pg-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .pg-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .pg-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .pg-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .pg-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } .pg-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; 
justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -180,66 +150,62 @@ export default function PgHaArchitectureDiagram() {
- {/* ══ MAIN AREA: data plane (left) + sidebar (right) ══ */} -
- - {/* LEFT: Client → Service → Pods */} -
+
- {/* Client */} -
- - - - -
-
Application / Client
-
- Read/Write  pg-cluster-postgresql-postgresql:5432
- Connection Pool  pg-cluster-postgresql-postgresql:6432 (pgbouncer) -
+ {/* Client */} +
+ + + + +
+
Application / Client
+
+ Read/Write  pg-cluster-postgresql-postgresql:5432
+ Connection Pool  pg-cluster-postgresql-postgresql:6432 (pgbouncer)
+
- {/* Arrow: Client → Service */} -
-
-
-
- RW traffic → roleSelector: primary + {/* Arrow: Client → Service */} +
+
+
+ RW traffic → roleSelector: primary +
- {/* Services */} -
-
- - Kubernetes Services -
-
-
-
pg-cluster-postgresql-postgresql
-
- ClusterIP · :5432 / :6432
- selector: kubeblocks.io/role=primary
- Endpoints auto-switch with primary -
- ReadWrite + {/* Services */} +
+
+ + Kubernetes Services +
+
+
+
pg-cluster-postgresql-postgresql
+
+ ClusterIP · :5432 / :6432
+ selector: kubeblocks.io/role=primary
+ Endpoints auto-switch with primary
+ ReadWrite
+
- {/* Arrow: Service → Pods */} -
-
-
-
- → primary pod only + {/* Arrow: Service → Pods */} +
+
+
+ → primary pod only +
- {/* Pods */} -
-
Pods · Worker Nodes
-
+ {/* Pods */} +
+
Pods · Worker Nodes
+
{/* Primary */}
@@ -266,7 +232,7 @@ export default function PgHaArchitectureDiagram() {
🔍
-
dbctl (role probe sidecar)
+
dbctl (role probe)
:5001 /v1.0/getrole
@@ -306,7 +272,7 @@ export default function PgHaArchitectureDiagram() {
🔍
-
dbctl (role probe sidecar)
+
dbctl (role probe)
:5001 /v1.0/getrole
@@ -346,7 +312,7 @@ export default function PgHaArchitectureDiagram() {
🔍
-
dbctl (role probe sidecar)
+
dbctl (role probe)
:5001 /v1.0/getrole
@@ -372,134 +338,45 @@ export default function PgHaArchitectureDiagram() { 🔗 Headless service — stable pod DNS for internal use (replication, HA heartbeat, operator probes); not a client endpoint
-
- -
{/* /data-plane */} - - {/* RIGHT SIDEBAR */} -
- - {/* Patroni DCS */} -
-
- - Patroni DCS - (K8s API) -
-
-
- ConfigMap {'{scope}'}-config
- TTL 30s · loop 10s -
-
- ConfigMap {'{scope}'}
- leader lease · heartbeat -
-
- Secret account-*
- system account passwords -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Primary Pod crashes
-
2Patroni heartbeat timeout (≈30s)
-
3Replica acquires ConfigMap leader lease
-
4roleProbe detects role change
-
5Pod label role=primary updated
-
6Service Endpoints auto-switch
-
-
+
- {/* System Accounts */} -
-
- - System Accounts -
-
-
postgressuperuser
-
kbadminsuperuser
-
kbdataprotectionbackup
-
kbprobemonitor
-
kbmonitoringmetrics
-
kbreplicatorreplication
-
+ {/* Arrow: pods → DCS */} +
+
+
+ each Patroni agent reads/writes cluster state via the K8s API (Patroni REST API on :8008) +
-
{/* /sidebar */} -
{/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component + {/* DCS block */} +
+
Patroni DCS · K8s API
+
+
+ ConfigMap {'{scope}'}-config + cluster config · TTL 30s
-
- Workloads Controller - InstanceSet → Pods +
+ ConfigMap {'{scope}'} + leader lease · heartbeat
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
+
+ Secret account-* + system account passwords
+
Poll every 10s · TTL 30s
+
Leader election via K8s lock
+
Failover → service re-routes
-
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: probe role every 1s
- ⚙ Update Pod label role=primary
- ⚙ Execute switchover / scale ops
- ⚙ Manage SystemAccount Secrets -
-
-
+
{/* /data-plane */} {/* Legend */}
-
KubeBlocks Operator (control plane)
-
CRD Resource
Primary / RW Traffic
Replica Pod
-
Patroni DCS
-
Failover Path
+
Patroni DCS (K8s API)
Persistent Storage
diff --git a/src/components/QdrantArchitectureDiagram.tsx b/src/components/QdrantArchitectureDiagram.tsx index 5b8369e..10fd370 100644 --- a/src/components/QdrantArchitectureDiagram.tsx +++ b/src/components/QdrantArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function QdrantArchitectureDiagram() { .qdrant-ha-diagram .dot-blue { background: #388bfd; } .qdrant-ha-diagram .dot-green { background: #3fb950; } .qdrant-ha-diagram .dot-purple { background: #a371f7; } - .qdrant-ha-diagram .dot-orange { background: #e3b341; } - .qdrant-ha-diagram .dot-teal { background: #56d4dd; } .qdrant-ha-diagram .dot-red { background: #f85149; } .qdrant-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function QdrantArchitectureDiagram() { .qdrant-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .qdrant-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .qdrant-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -128,49 +122,7 @@ export default function QdrantArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .qdrant-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .qdrant-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .qdrant-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .qdrant-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .qdrant-ha-diagram .dcs-item span { color: #7d8590; } - .qdrant-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .qdrant-ha-diagram 
.failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .qdrant-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .qdrant-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .qdrant-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .qdrant-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .qdrant-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .qdrant-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .qdrant-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .qdrant-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .qdrant-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .qdrant-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .qdrant-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .qdrant-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .qdrant-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .qdrant-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .qdrant-ha-diagram .crd-chip.component { border-color: 
#79c0ff; color: #79c0ff; background: #1a3050; } - .qdrant-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .qdrant-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .qdrant-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .qdrant-ha-diagram .legend { + .qdrant-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -310,118 +262,7 @@ export default function QdrantArchitectureDiagram() {
{/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* HA Mechanism */} -
-
- - Raft Shard Consensus -
-
-
- ConfigMap {'{scope}'}-config
- cluster peer addresses · consensus config -
-
- Raft consensus
- per-shard leader election · peer synchronization -
-
- Secret account-*
- API key credentials -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Pod/shard leader crashes
-
2Raft election timeout (≈10s)
-
3Replica initiates election for affected shard
-
4New shard leader elected
-
5Readiness probe (curl /cluster + consensus check) confirms node rejoined
-
6Traffic redistributed to healthy nodes
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
api-keymain API key
-
read-only-keyread-only access
-
kbdataprotectionbackup
-
-
- -
{/* /sidebar */}
{/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ Readiness: curl /cluster · verify Raft consensus
- ⚙ Restart failed pods; inject peer FQDN list on startup
- ⚙ Execute scale ops (add/remove peer via Raft)
- ⚙ Manage SystemAccount Secrets (API key) -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
@@ -429,7 +270,6 @@ export default function QdrantArchitectureDiagram() {
Leader / RW Traffic
Replica Pod
Raft Consensus
-
Failover Path
Persistent Storage
diff --git a/src/components/RabbitmqArchitectureDiagram.tsx b/src/components/RabbitmqArchitectureDiagram.tsx index 2d4d3bf..0b47d82 100644 --- a/src/components/RabbitmqArchitectureDiagram.tsx +++ b/src/components/RabbitmqArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function RabbitmqArchitectureDiagram() { .rmq-ha-diagram .dot-blue { background: #388bfd; } .rmq-ha-diagram .dot-green { background: #3fb950; } .rmq-ha-diagram .dot-purple { background: #a371f7; } - .rmq-ha-diagram .dot-orange { background: #e3b341; } - .rmq-ha-diagram .dot-teal { background: #56d4dd; } .rmq-ha-diagram .dot-red { background: #f85149; } .rmq-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function RabbitmqArchitectureDiagram() { .rmq-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .rmq-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .rmq-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -128,49 +122,7 @@ export default function RabbitmqArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .rmq-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .rmq-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .rmq-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .rmq-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .rmq-ha-diagram .dcs-item span { color: #7d8590; } - .rmq-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .rmq-ha-diagram .failover-steps { display: flex; 
flex-direction: column; gap: 4px; margin-top: 8px; } - .rmq-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .rmq-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .rmq-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .rmq-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .rmq-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .rmq-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .rmq-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .rmq-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .rmq-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .rmq-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .rmq-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .rmq-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .rmq-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .rmq-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .rmq-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .rmq-ha-diagram 
.crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .rmq-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .rmq-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .rmq-ha-diagram .legend { + .rmq-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -328,126 +280,13 @@ export default function RabbitmqArchitectureDiagram() {
{/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* HA Mechanism */} -
-
- - Erlang Cluster + Quorum Queues - (HA Mechanism) -
-
-
- Erlang cookie / shared secret for cluster membership -
-
- ConfigMap {'{scope}'}-config
- rabbitmq.conf · enabled_plugins -
-
- Secret account-*
- user credentials -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Node Pod crashes or becomes unreachable
-
2Erlang net_ticktime timeout — cluster marks node down
-
3Quorum queues with that leader hold Raft election
-
4Surviving majority elects new queue leaders
-
5Producers/consumers reconnect to any remaining node
-
6KubeBlocks restarts pod; node rejoins Erlang cluster
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
guestdefault (disabled in prod)
-
kbadminmanagement
-
kbdataprotectionbackup
-
kbprobemonitor
-
-
- -
{/* /sidebar */}
{/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ memberLeave: drain node before scale-in
- ⚙ Execute scale / reconfigure ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
CRD Resource
AMQP Traffic / Client Service
Peer Node
-
Erlang Cluster
-
Failover Path
Persistent Storage
diff --git a/src/components/RedisArchitectureDiagram.tsx b/src/components/RedisArchitectureDiagram.tsx index abb9d17..119598b 100644 --- a/src/components/RedisArchitectureDiagram.tsx +++ b/src/components/RedisArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function RedisArchitectureDiagram() { .redis-ha-diagram .dot-blue { background: #388bfd; } .redis-ha-diagram .dot-green { background: #3fb950; } .redis-ha-diagram .dot-purple { background: #a371f7; } - .redis-ha-diagram .dot-orange { background: #e3b341; } - .redis-ha-diagram .dot-teal { background: #56d4dd; } .redis-ha-diagram .dot-red { background: #f85149; } .redis-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function RedisArchitectureDiagram() { .redis-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .redis-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .redis-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -140,49 +134,7 @@ export default function RedisArchitectureDiagram() { padding: 7px 10px; font-size: 10px; color: #56d4dd; line-height: 1.6; } .redis-ha-diagram .sentinel-pod-name { font-size: 11px; font-weight: 700; color: #c9d1d9; margin-bottom: 3px; } - .redis-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .redis-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .redis-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .redis-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .redis-ha-diagram .dcs-item span { color: #7d8590; } - .redis-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - 
.redis-ha-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .redis-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .redis-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .redis-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .redis-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .redis-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .redis-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .redis-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .redis-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .redis-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .redis-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .redis-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .redis-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .redis-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .redis-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .redis-ha-diagram .crd-chip.component { border-color: 
#79c0ff; color: #79c0ff; background: #1a3050; } - .redis-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .redis-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .redis-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .redis-ha-diagram .legend { + .redis-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -365,128 +317,13 @@ export default function RedisArchitectureDiagram() {
{/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* HA Mechanism - Redis Sentinel */} -
-
- - HA Mechanism - (Redis Sentinel) -
-
-
- Sentinel ConfigMap / sentinel.conf
- rebuilt on restart (emptyDir) -
-
- Endpoints {'{scope}'}
- master monitor · failover decision -
-
- Secret account-*
- default password -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Primary Pod crashes
-
2Sentinel detects down-after-milliseconds
-
3Sentinel quorum votes for failover (≥2 sentinels)
-
4Sentinel promotes best replica
-
5roleProbe detects role change
-
6Pod label role=primary updated
-
7Service Endpoints auto-switch
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
defaultmain account
-
kbdataprotectionbackup
-
kbprobemonitor
-
-
- -
{/* /sidebar */}
{/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: probe role every 1s
- ⚙ Update Pod label role=primary
- ⚙ Execute switchover / scale ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
CRD Resource
Primary / RW Traffic
Replica Pod
-
Redis Sentinel
-
Failover Path
Persistent Storage
diff --git a/src/components/RedisClusterArchitectureDiagram.tsx b/src/components/RedisClusterArchitectureDiagram.tsx index 012f7da..9233f9a 100644 --- a/src/components/RedisClusterArchitectureDiagram.tsx +++ b/src/components/RedisClusterArchitectureDiagram.tsx @@ -118,46 +118,6 @@ export default function RedisClusterArchitectureDiagram() { display: flex; gap: 16px; align-items: flex-start; margin-top: 0; } .redis-cluster-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .redis-cluster-diagram .mgmt-sidebar { - width: 256px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } - .redis-cluster-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .redis-cluster-diagram .hierarchy-card { border-color: #a371f744; background: #130d2a; } - .redis-cluster-diagram .failover-card { border-color: #da363344; background: #1c0a0a; } - .redis-cluster-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .redis-cluster-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .redis-cluster-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .redis-cluster-diagram .cluster-props { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .redis-cluster-diagram .prop-row { - padding: 4px 8px; border-radius: 5px; border: 1px solid #a371f722; - background: #0d0820; font-size: 10px; color: #d2a8ff; line-height: 1.5; - } - .redis-cluster-diagram .prop-row span { color: #7d8590; } - .redis-cluster-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - 
.redis-cluster-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .redis-cluster-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .redis-cluster-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .redis-cluster-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .redis-cluster-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .redis-cluster-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .redis-cluster-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .redis-cluster-diagram .crd-chip.sharding { border-color: #f0883e; color: #f0883e; background: #2a1200; } - .redis-cluster-diagram .crd-chip.shard { border-color: #e3b341; color: #e3b341; background: #302010; } - .redis-cluster-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .redis-cluster-diagram .crd-chip.pod { border-color: #79c0ff; color: #79c0ff; background: #0d2035; } - .redis-cluster-diagram .crd-arrow { color: #484f58; font-size: 14px; } .redis-cluster-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -370,112 +330,8 @@ export default function RedisClusterArchitectureDiagram() {
{/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* Cluster Properties */} -
-
- - Cluster Properties -
-
-
- Hash slots: 16384 total
- distributed evenly across shards -
-
- Min shards: 3 (for quorum)
- tolerates 1 full shard failure -
-
- Replication: async per shard
- primary → replicas within shard -
-
- No Sentinel: gossip protocol
- nodes self-manage cluster topology -
-
-
- - {/* Failover */} -
-
- - Shard Failover Process -
-
-
1Shard primary stops responding to gossip pings
-
2Other nodes mark primary as PFAIL (possible fail)
-
3Enough nodes agree → primary declared FAIL
-
4Shard replica requests votes from other primaries
-
5Replica wins majority → promoted to primary
-
6kubeblocks.io/role=primary label updated
-
7Cluster slot map updated; clients follow MOVED
-
-
- -
{/* /sidebar */}
{/* /main-area */} - {/* Separator */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* Operator */} -
-
-
- - KubeBlocks Operator - · manages Sharding resources; each shard is an independent Component -
-
-
- Apps Controller - Cluster / Sharding -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Scale shards / Rebalance -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Sharding
- -
Shard × N
- -
InstanceSet
- -
Pod × replicas
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ Manage Sharding topology changes
- ⚙ roleProbe: exec /tools/dbctl redis getrole in redis-cluster container
- ⚙ Scale shards in/out (rebalance slots)
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
@@ -483,7 +339,6 @@ export default function RedisClusterArchitectureDiagram() {
Shard Primary
Shard Replica
Gossip / Cluster Bus
-
Failover Path
Hash Slots
diff --git a/src/components/RedisStandaloneArchitectureDiagram.tsx b/src/components/RedisStandaloneArchitectureDiagram.tsx new file mode 100644 index 0000000..41a110d --- /dev/null +++ b/src/components/RedisStandaloneArchitectureDiagram.tsx @@ -0,0 +1,226 @@ +import React from 'react'; + +export default function RedisStandaloneArchitectureDiagram() { + return ( + <> + + +
+ + {/* ══ MAIN AREA: data plane (left) + sidebar (right) ══ */} +
+ + {/* LEFT: Client → Service → Pod */} +
+ + {/* Client */} +
+ + + + +
+
Application / Client
+
+ Read/Write  redis-standalone-redis-redis:6379
"redis-standalone" is an example cluster name +
+
+
+ + {/* Arrow: Client → Service */} +
+
+
+
+ RW traffic → single pod +
+ + {/* Service */} +
+
+ + Kubernetes Services +
+
+
redis-standalone-redis-redis
+
+ ClusterIP · :6379
+ selector: all redis pods (no role filter)
+ Direct connection to the single Redis instance +
+ ReadWrite +
+
+ + {/* Arrow: Service → Pod */} +
+
+
+
+ → redis-0 +
+ + {/* Pods */} +
+
Redis Pod · Worker Node
+ +
+
+ redis-0 + STANDALONE +
+
+
+ 🔴 +
+
redis (Redis Server)
+
:6379 · standalone mode · accepts all R/W
+
+
+
+ 🔍 +
+
[init] dbctl (→ /tools/dbctl)
+
copies dbctl binary for roleProbe
+
+
+
+ 📊 +
+
redis-exporter
+
:9121 metrics (Prometheus)
+
+
+
+
💾 PVC data-0 · 20Gi · RDB / AOF data directory
+
+ +
+ 🔗 + Headless service — stable pod DNS for operator probes and internal use; not a client endpoint +
+
+ +
{/* /data-plane */} + +
{/* /main-area */} + + {/* Legend */} +
+
KubeBlocks Operator (control plane)
+
CRD Resource
+
RW Traffic
+
Persistent Storage
+
+ +
+ + ); +} diff --git a/src/components/RocketmqArchitectureDiagram.tsx b/src/components/RocketmqArchitectureDiagram.tsx index 0fdcb30..655ec81 100644 --- a/src/components/RocketmqArchitectureDiagram.tsx +++ b/src/components/RocketmqArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function RocketmqArchitectureDiagram() { .rocketmq-ha-diagram .dot-blue { background: #388bfd; } .rocketmq-ha-diagram .dot-green { background: #3fb950; } .rocketmq-ha-diagram .dot-purple { background: #a371f7; } - .rocketmq-ha-diagram .dot-orange { background: #e3b341; } - .rocketmq-ha-diagram .dot-teal { background: #56d4dd; } .rocketmq-ha-diagram .dot-red { background: #f85149; } .rocketmq-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function RocketmqArchitectureDiagram() { .rocketmq-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .rocketmq-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .rocketmq-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -147,48 +141,6 @@ export default function RocketmqArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .rocketmq-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .rocketmq-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .rocketmq-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .rocketmq-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .rocketmq-ha-diagram .dcs-item span { color: #7d8590; } - .rocketmq-ha-diagram .failover-card { border-color: 
#da3633; background: #1c0a0a; } - .rocketmq-ha-diagram .failover-steps { display: flex; flex-direction: column; gap: 4px; margin-top: 8px; } - .rocketmq-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .rocketmq-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .rocketmq-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .rocketmq-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .rocketmq-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .rocketmq-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .rocketmq-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .rocketmq-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .rocketmq-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .rocketmq-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .rocketmq-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .rocketmq-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .rocketmq-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .rocketmq-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; 
background: #2d1f5e; } - .rocketmq-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .rocketmq-ha-diagram .crd-chip.instanceset { border-color: #7ee787; color: #7ee787; background: #1a3020; } - .rocketmq-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .rocketmq-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } .rocketmq-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; @@ -437,116 +389,8 @@ export default function RocketmqArchitectureDiagram() {
{/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* HA Mechanism */} -
-
- - ASYNC_MASTER/SLAVE + NameServer -
-
-
- NameServer
- stateless registry · producers/consumers discover broker addresses here -
-
- ASYNC_MASTER/SLAVE
- default mode · master (brokerId=0) replicates to slaves asynchronously on port 10912 -
-
- Secret account-*
- ACL credentials -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Master Broker Pod crashes
-
2Slaves detect missing master heartbeat; writes unavailable; slaves continue serving reads
-
3KubeBlocks restarts the failed pod
-
4Recovered pod starts as master (brokerId=0), re-registers with NameServer
-
5Slaves reconnect on port 10912 and resync missed log entries
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
adminsuperuser
-
kbdataprotectionbackup
-
kbprobemonitor
-
-
- -
{/* /sidebar */}
{/* /main-area */} - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component ×2
- -
InstanceSet
- -
Pods
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: exec /scripts/get-role.sh per pod
- ⚙ Update Pod label role=master (drives rolling update order)
- ⚙ Restart failed pods (no auto master election)
- ⚙ Execute switchover / scale ops -
-
-
{/* Legend */}
@@ -554,8 +398,6 @@ export default function RocketmqArchitectureDiagram() {
CRD Resource
Master Broker / Write Traffic
Slave Pod
-
ASYNC_MASTER/SLAVE Replication
-
Failover Path
Persistent Storage
diff --git a/src/components/ZookeeperArchitectureDiagram.tsx b/src/components/ZookeeperArchitectureDiagram.tsx index e7e615b..f959a29 100644 --- a/src/components/ZookeeperArchitectureDiagram.tsx +++ b/src/components/ZookeeperArchitectureDiagram.tsx @@ -17,8 +17,6 @@ export default function ZookeeperArchitectureDiagram() { .zk-ha-diagram .dot-blue { background: #388bfd; } .zk-ha-diagram .dot-green { background: #3fb950; } .zk-ha-diagram .dot-purple { background: #a371f7; } - .zk-ha-diagram .dot-orange { background: #e3b341; } - .zk-ha-diagram .dot-teal { background: #56d4dd; } .zk-ha-diagram .dot-red { background: #f85149; } .zk-ha-diagram .card-title { font-size: 11px; font-weight: 700; letter-spacing: 1.5px; @@ -31,10 +29,6 @@ export default function ZookeeperArchitectureDiagram() { .zk-ha-diagram .data-plane { flex: 1; display: flex; flex-direction: column; gap: 0; } - .zk-ha-diagram .mgmt-sidebar { - width: 260px; flex-shrink: 0; - display: flex; flex-direction: column; gap: 12px; padding-top: 4px; - } .zk-ha-diagram .client-mini { border-radius: 12px; border: 1px solid #30363d; background: #161b22; padding: 12px 16px; @@ -138,49 +132,7 @@ export default function ZookeeperArchitectureDiagram() { padding: 7px; border-radius: 8px; background: #0a1a14; border: 1px solid #238636; margin-top: 10px; font-size: 11px; color: #3fb950; } - .zk-ha-diagram .sidebar-card { border-radius: 10px; border: 1px solid; padding: 12px 14px; } - .zk-ha-diagram .dcs-card { border-color: #1b7c83; background: #0a1e20; } - .zk-ha-diagram .dcs-items { display: flex; flex-direction: column; gap: 5px; margin-top: 8px; } - .zk-ha-diagram .dcs-item { - padding: 5px 9px; border-radius: 6px; border: 1px solid #1b7c8355; - background: #061515; font-size: 10px; color: #56d4dd; line-height: 1.5; - } - .zk-ha-diagram .dcs-item span { color: #7d8590; } - .zk-ha-diagram .failover-card { border-color: #da3633; background: #1c0a0a; } - .zk-ha-diagram .failover-steps { display: flex; flex-direction: 
column; gap: 4px; margin-top: 8px; } - .zk-ha-diagram .step { display: flex; align-items: flex-start; gap: 7px; font-size: 10px; color: #cdd9e5; line-height: 1.5; } - .zk-ha-diagram .step-num { - width: 16px; height: 16px; border-radius: 50%; flex-shrink: 0; - background: #da363322; border: 1px solid #da363388; - display: flex; align-items: center; justify-content: center; - font-size: 9px; font-weight: 700; color: #f85149; margin-top: 1px; - } - .zk-ha-diagram .accounts-card { border-color: #d2992244; background: #1a1505; } - .zk-ha-diagram .accounts-grid { display: flex; flex-wrap: wrap; gap: 5px; margin-top: 8px; } - .zk-ha-diagram .acc-chip { - padding: 3px 8px; border-radius: 5px; - border: 1px solid #d2992233; background: #0d0d00; font-size: 10px; color: #e3b341; - } - .zk-ha-diagram .acc-chip span { color: #484f58; font-size: 9px; display: block; } - .zk-ha-diagram .operator-block { - flex: 1; border-radius: 12px; border: 1px solid #1f6feb; - background: #0d1f38; padding: 16px 20px; - } - .zk-ha-diagram .operator-controllers { display: flex; gap: 8px; margin-top: 8px; } - .zk-ha-diagram .ctrl-chip { - flex: 1; padding: 8px 10px; border-radius: 8px; border: 1px solid #1f6feb44; - background: #0a1628; font-size: 11px; color: #79c0ff; text-align: center; - } - .zk-ha-diagram .ctrl-chip .ctrl-name { font-weight: 700; font-size: 12px; display: block; margin-bottom: 2px; } - .zk-ha-diagram .ctrl-chip .ctrl-sub { font-size: 10px; color: #4a7ab5; } - .zk-ha-diagram .crd-chain { display: flex; align-items: center; gap: 6px; margin-top: 10px; flex-wrap: wrap; } - .zk-ha-diagram .crd-chip { padding: 4px 10px; border-radius: 20px; border: 1px solid; font-size: 11px; font-weight: 600; white-space: nowrap; } - .zk-ha-diagram .crd-chip.cluster { border-color: #a371f7; color: #d2a8ff; background: #2d1f5e; } - .zk-ha-diagram .crd-chip.component { border-color: #79c0ff; color: #79c0ff; background: #1a3050; } - .zk-ha-diagram .crd-chip.instanceset { border-color: #7ee787; 
color: #7ee787; background: #1a3020; } - .zk-ha-diagram .crd-chip.pod { border-color: #e3b341; color: #e3b341; background: #302010; } - .zk-ha-diagram .crd-arrow { color: #484f58; font-size: 14px; } - .zk-ha-diagram .legend { + .zk-ha-diagram .legend { display: flex; gap: 18px; flex-wrap: wrap; justify-content: center; padding-top: 8px; border-top: 1px solid #21262d; margin-top: 16px; } @@ -340,129 +292,13 @@ export default function ZookeeperArchitectureDiagram() {
{/* /data-plane */} - {/* RIGHT SIDEBAR */} -
- - {/* HA Mechanism */} -
-
- - ZAB Atomic Broadcast - (HA Mechanism) -
-
-
- myid file / unique server ID per pod
- stable ordinal -
-
- ConfigMap {'{scope}'}-config
- zoo.cfg · server list -
-
- Secret account-*
- superDigest credentials -
-
-
- - {/* Failover */} -
-
- - Failover Process -
-
-
1Leader Pod crashes
-
2ZAB heartbeat timeout (tickTime × initLimit)
-
3Follower starts leader election
-
4ZXID-based voting → new leader elected
-
5roleProbe detects new leader
-
6Pod label role=leader updated
-
7Service Endpoints auto-switch
-
-
- - {/* System Accounts */} -
-
- - System Accounts -
-
-
supersuperuser (digest auth)
-
kbadminadmin
-
kbdataprotectionbackup
-
kbprobemonitor
-
-
- -
{/* /sidebar */}
{/* /main-area */} - - {/* ══ SEPARATOR ══ */} -
-
- Management Plane · KubeBlocks Operator -
-
- - {/* ══ BOTTOM: Operator ══ */} -
-
-
- - KubeBlocks Operator - · watches & reconciles CRDs, drives creation and reconciliation of all above resources -
-
-
- Apps Controller - Cluster / Component -
-
- Workloads Controller - InstanceSet → Pods -
-
- Ops Controller - Switchover / Scale -
-
-
-
CRD RESOURCE HIERARCHY
-
-
Cluster
- -
Component
- -
InstanceSet
- -
Pod × 3
-
-
-
- -
-
Operator Responsibilities
-
- ⚙ Create / reconcile Pods, Services, PVCs
- ⚙ roleProbe: probe role every 1s
- ⚙ Update Pod label role=leader
- ⚙ Execute switchover / scale ops
- ⚙ Manage SystemAccount Secrets -
-
-
- {/* Legend */}
KubeBlocks Operator (control plane)
CRD Resource
Leader / Client Traffic
Follower Pod
-
ZAB Atomic Broadcast
-
Failover Path
Persistent Storage