distributed-cockroachdb/README.md

# Distributed CockroachDB Multi-Cluster with KubeSlice

This example demonstrates how to deploy a distributed CockroachDB cluster across multiple Kubernetes clusters using KubeSlice. CockroachDB is a distributed SQL database that provides strong consistency, horizontal scaling, and built-in resilience.

## Architecture Overview

The setup consists of:
- **3 Kubernetes worker clusters** (worker-1, worker-2, worker-3)
- **1 KubeSlice controller cluster**
- **CockroachDB nodes** distributed across the worker clusters
- **KubeSlice networking** connecting all clusters via a secure overlay network


## Setup Instructions

### Step 1: Infrastructure Setup (Optional)

If you need to set up KubeSlice infrastructure from scratch, use the topology templates:

```bash
# For Enterprise version
kubeslice-cli create topology -f kubeslice-cli-topology-template/kubeslice-cli-topology-template.yaml

# For OSS version
kubeslice-cli create topology -f kubeslice-cli-topology-template/kubeslice-cli-topology-oss-template.yaml
```
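Once the topology command finishes, a quick sanity check on the controller cluster is worthwhile (this assumes the default install namespaces used by kubeslice-cli):

```bash
# All controller and cert-manager pods should reach Running before continuing
kubectl get pods -n kubeslice-controller
kubectl get pods -n cert-manager
```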

### Step 2: Create the KubeSlice Project and Slice

1. **Create the project namespace on the controller cluster:**
```bash
kubectl create namespace kubeslice-cockroachdb-project
```

2. **Apply the slice configuration, also on the controller cluster:**
```bash
# For basic setup
kubectl apply -f cockroachdb-slice/cockroachdb-slice.yaml

# For load balancer setup (if you need external access)
kubectl apply -f cockroachdb-slice/cockroachdb-slice-lb.yaml
```
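On a full KubeSlice install, the project namespace is normally created by applying a Project resource on the controller cluster rather than by creating the namespace directly; a sketch (the `serviceAccount` entry here is illustrative, and fields may vary by KubeSlice release):

```bash
# Sketch: KubeSlice creates the kubeslice-cockroachdb-project namespace
# automatically when this Project resource is applied on the controller.
kubectl apply -f - <<'EOF'
apiVersion: controller.kubeslice.io/v1alpha1
kind: Project
metadata:
  name: cockroachdb-project
  namespace: kubeslice-controller
spec:
  serviceAccount:
    readWrite:
      - admin
EOF
```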

### Step 3: Deploy CockroachDB

Create the `cockroachdb` namespace on all worker clusters:
```bash
# On each worker cluster
kubectl create namespace cockroachdb
```

Then apply each cluster's CockroachDB deployment manifest:
```bash
# Apply on worker 1
kubectx worker-1
kubectl apply -f service-export/k8s-cluster-1.yaml
# Apply on worker 2
kubectx worker-2
kubectl apply -f service-export/k8s-cluster-2.yaml
# Apply on worker 3
kubectx worker-3
kubectl apply -f service-export/k8s-cluster-3.yaml
```

**Important**: Update the `--advertise-addr` and node names for each cluster:
- Worker-1: `cockroachdb-0`
- Worker-2: `cockroachdb-1`
- Worker-3: `cockroachdb-2`
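For illustration, the relevant portion of each node's start command might look like this (a hypothetical excerpt for worker-1; the actual manifests in `service-export/` are authoritative):

```bash
# Sketch for worker-1 (cockroachdb-0): advertise the slice DNS alias so nodes
# on the other clusters can reach it, and join all three nodes.
cockroach start --insecure \
  --advertise-addr=cockroachdb-0.cockroachdb.svc.cluster.local:26257 \
  --join=cockroachdb-0.cockroachdb.svc.cluster.local:26257,cockroachdb-1.cockroachdb.svc.cluster.local:26257,cockroachdb-2.cockroachdb.svc.cluster.local:26257 \
  --http-addr=0.0.0.0:8080
```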

### Step 4: Verify the Service Exports

The manifests applied in Step 3 contain the ServiceExport resources that enable cross-cluster communication. Confirm they were created on each cluster:

```bash
# On each worker cluster
kubectl get serviceexport -n cockroachdb
```

### Step 5: Initialize the CockroachDB Cluster

Once all nodes are running, initialize the cluster from any node:

```bash
# Connect to any CockroachDB pod
kubectl exec -it cockroachdb-0 -n cockroachdb -- /cockroach/cockroach init --insecure
```

## Verification

### Check Cluster Status
```bash
# Check nodes in the cluster
kubectl exec -it cockroachdb-0 -n cockroachdb -- /cockroach/cockroach node status --insecure

# List the IDs of active nodes
kubectl exec -it cockroachdb-0 -n cockroachdb -- /cockroach/cockroach node ls --insecure
```

### Test Database Connectivity
```bash
# Open a SQL shell on any node
kubectl exec -it cockroachdb-0 -n cockroachdb -- /cockroach/cockroach sql --insecure
```

Then run a few test queries from the SQL prompt:
```sql
CREATE DATABASE test;
USE test;
CREATE TABLE users (id SERIAL PRIMARY KEY, name STRING);
INSERT INTO users (name) VALUES ('Alice'), ('Bob');
SELECT * FROM users;
```

### Verify Cross-Cluster Communication
```bash
# Check service imports on each cluster
kubectl get serviceimport -n cockroachdb

# Test connectivity between nodes
kubectl exec -it cockroachdb-0 -n cockroachdb -- /cockroach/cockroach node status --insecure --host=cockroachdb-1.cockroachdb.svc.cluster.local
```
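The same check can be run against all three clusters in one pass without switching contexts; a minimal sketch, assuming your kubectl contexts are named `worker-1` through `worker-3` as above:

```bash
# Generate the per-cluster serviceimport checks; review, then pipe to sh to run
checks=$(for i in 1 2 3; do
  echo "kubectl --context worker-$i get serviceimport -n cockroachdb"
done)
echo "$checks"
```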

## Configuration Details

### Slice Configuration
- **Slice Name**: `cockroachdb-slice`
- **Subnet**: `192.168.0.0/16`
- **Gateway Type**: OpenVPN with Local CA
- **QoS**: Bandwidth control with 5120 Kbps ceiling, 2560 Kbps guaranteed

### Service Exports
Each cluster exports its CockroachDB service with:
- **gRPC/SQL Port**: 26257 (inter-node RPC and client SQL traffic)
- **HTTP Port**: 8080 (admin UI and API)
- **DNS Aliases**: `cockroachdb-{0,1,2}.cockroachdb.svc.cluster.local`
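The `{0,1,2}` shorthand above expands to three full DNS names, one per node; a quick expansion for reference:

```bash
# The three slice DNS names, expanded from the shorthand above
names=$(for i in 0 1 2; do
  echo "cockroachdb-$i.cockroachdb.svc.cluster.local"
done)
echo "$names"
```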

### Monitoring
- Access CockroachDB Admin UI: `kubectl port-forward svc/cockroachdb 8080:8080 -n cockroachdb`
- View logs: `kubectl logs -f cockroachdb-0 -n cockroachdb`
- Check slice status: `kubectl get sliceconfig -n kubeslice-cockroachdb-project` (run on the controller cluster)
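With the port-forward from the first bullet running, CockroachDB's standard HTTP health endpoints can be probed directly:

```bash
# Liveness and readiness checks on the admin HTTP port
curl -s http://localhost:8080/health
curl -s "http://localhost:8080/health?ready=1"
```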

## Cleanup

To remove the deployment:

```bash
# Delete CockroachDB resources
kubectl delete namespace cockroachdb # On each worker cluster

# Delete slice configuration
kubectl delete -f cockroachdb-slice/cockroachdb-slice.yaml

# Delete project (optional)
kubectl delete namespace kubeslice-cockroachdb-project
```
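The per-cluster deletion can also be looped over contexts (again assuming contexts `worker-1` through `worker-3`):

```bash
# Remove the cockroachdb namespace from every worker cluster
for ctx in worker-1 worker-2 worker-3; do
  kubectl --context "$ctx" delete namespace cockroachdb --ignore-not-found
done
```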

## References

- [CockroachDB Documentation](https://www.cockroachlabs.com/docs/)
- [KubeSlice Documentation](https://docs.kubeslice.io/)
- [KubeSlice GitHub](https://github.com/kubeslice)
- [CockroachDB Kubernetes Operator](https://github.com/cockroachdb/cockroach-operator)
---
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: cockroachdb-slice
  namespace: kubeslice-cockroachdb-project
spec:
  sliceSubnet: 192.168.0.0/16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
    sliceGatewayServiceType:
      - cluster: worker-1
        type: LoadBalancer
        protocol: TCP
      - cluster: worker-2
        type: LoadBalancer
        protocol: TCP
      - cluster: worker-3
        type: LoadBalancer
        protocol: TCP
  sliceIpamType: Local
  clusters:
    - worker-1
    - worker-2
    - worker-3
  qosProfileDetails:
    queueType: HTB
    priority: 1
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: cockroachdb
        clusters:
          - worker-1
          - worker-2
          - worker-3
    isolationEnabled: false # set to true to enable namespace isolation
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - worker-1
          - worker-2
          - worker-3
---
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: cockroachdb-slice-secure
  namespace: kubeslice-cockroachdb-project
spec:
  sliceSubnet: 192.168.0.0/16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
    sliceGatewayServiceType:
      - cluster: worker-1
        type: LoadBalancer
        protocol: TCP
      - cluster: worker-2
        type: LoadBalancer
        protocol: TCP
      - cluster: worker-3
        type: LoadBalancer
        protocol: TCP
  sliceIpamType: Local
  clusters:
    - worker-1
    - worker-2
    - worker-3
  qosProfileDetails:
    queueType: HTB
    priority: 0 # Higher priority for database traffic
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 10240 # Higher bandwidth for database replication
    bandwidthGuaranteedKbps: 5120
    dscpClass: AF31 # Higher QoS class for database traffic
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: cockroachdb
        clusters:
          - worker-1
          - worker-2
          - worker-3
      - namespace: cockroachdb-monitoring # Additional namespace for monitoring
        clusters:
          - worker-1
          - worker-2
          - worker-3
    isolationEnabled: true # Enable isolation for security
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - worker-1
          - worker-2
          - worker-3
      - namespace: cert-manager # Allow cert-manager for TLS certificates
        clusters:
          - worker-1
          - worker-2
          - worker-3
---
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: cockroachdb-slice
  namespace: kubeslice-cockroachdb-project
spec:
  sliceSubnet: 192.168.0.0/16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - worker-1
    - worker-2
    - worker-3
  qosProfileDetails:
    queueType: HTB
    priority: 1
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: cockroachdb
        clusters:
          - worker-1
          - worker-2
          - worker-3
    isolationEnabled: false # set to true to enable namespace isolation
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - worker-1
          - worker-2
          - worker-3
---
configuration:
  cluster_configuration:
    kube_config_path: /path/to/merged/kubeconfig/merged.config #{specify the kube config file to use for topology setup; for topology only}
    cluster_type: cloud #{optional: specify the type of cluster. Valid values are kind, cloud, data-center}
    controller:
      name: controller #{the user defined name of the controller cluster}
      context_name: k8s-cluster-1 #{the name of the context to use from kubeconfig file; for topology only}
      control_plane_address: https://35.243.149.48 #{the address of the control plane kube-apiserver. kubeslice-cli determines the address from kubeconfig}
    workers: #{specify the list of worker clusters}
      - name: worker-1 #{the user defined name of the worker cluster}
        context_name: k8s-cluster-1 #{the name of the context to use from the kubeconfig file; for topology only}
        control_plane_address: https://35.243.149.48 #{the address of the control plane kube-apiserver. kubeslice-cli determines the address from kubeconfig}
      - name: worker-2 #{the user defined name of the worker cluster}
        context_name: k8s-cluster-2 #{the name of the context to use from the kubeconfig file; for topology only}
        control_plane_address: https://35.231.51.208 #{the address of the control plane kube-apiserver. kubeslice-cli determines the address from kubeconfig}
      - name: worker-3 #{the user defined name of the worker cluster}
        context_name: k8s-cluster-3 #{the name of the context to use from the kubeconfig file; for topology only}
        control_plane_address: https://34.73.76.225 #{the address of the control plane kube-apiserver. kubeslice-cli determines the address from kubeconfig}
  kubeslice_configuration:
    project_name: cockroachdb-project #{the name of the KubeSlice Project}
  helm_chart_configuration:
    repo_alias: kubeslice #{The alias of the helm repo for KubeSlice Charts}
    repo_url: https://kubeslice.github.io/kubeslice/ #{The URL of the OSS Helm Charts for KubeSlice}
    cert_manager_chart:
      chart_name: cert-manager #{The name of the Cert Manager Chart}
      version: #{The version of the chart to use. Leave blank for latest version}
    controller_chart:
      chart_name: kubeslice-controller #{The name of the Controller Chart}
      version: #{The version of the chart to use. Leave blank for latest version}
      values: #(Values to be passed as --set arguments to helm install)
    worker_chart:
      chart_name: kubeslice-worker #{The name of the Worker Chart}
      version: #{The version of the chart to use. Leave blank for latest version}
      values: #(Values to be passed as --set arguments to helm install)
    ui_chart:
      chart_name: kubeslice-ui #{The name of the UI/Enterprise Chart}
      version: #{The version of the chart to use. Leave blank for latest version}
      values: #(Values to be passed as --set arguments to helm install)