This tutorial demonstrates how to share a PostgreSQL database across multiple Kubernetes clusters that are located in different public and private cloud providers.
This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.
- Overview
- Prerequisites
- Step 1: Access your Kubernetes clusters
- Step 2: Install Skupper on your Kubernetes clusters
- Step 3: Create your Kubernetes namespaces
- Step 4: Set up the demo
- Step 5: Create your sites
- Step 6: Link your sites
- Step 7: Deploy the PostgreSQL service
- Step 8: Expose PostgreSQL on the Virtual Application Network
- Step 9: Make the PostgreSQL database accessible to the public sites
- Step 10: Create pod with PostgreSQL client utilities
- Step 11: Create a database, a table and insert values
- Step 12: Access the product table from any site
- Cleaning up
- Summary
- Next steps
- About this example
In this tutorial, you will create a Virtual Application Network that enables communications across the public and private clusters. You will then deploy a PostgreSQL database instance to a private cluster and attach it to the Virtual Application Network. This enables clients on different public clusters attached to the Virtual Application Network to transparently access the database without additional networking setup (no VPN or SDN required).
- Access to at least one Kubernetes cluster, from any provider you choose.
- The kubectl command-line tool, version 1.15 or later (installation guide).
The basis for the demonstration is to depict the operation of a PostgreSQL database in a private cluster and the ability to access the database from clients resident on other public clusters. As an example, the cluster deployment might be comprised of:
- A private cloud cluster running on your local machine
- Two public cloud clusters running in public cloud providers
While the detailed steps are not included here, this demonstration can alternatively be performed with three separate namespaces on a single cluster.
Skupper is designed for use with multiple Kubernetes clusters.
The skupper and kubectl commands use your
kubeconfig and current context to select the cluster
and namespace where they operate.
This example uses multiple cluster contexts at once. The
KUBECONFIG environment variable tells skupper and kubectl
which kubeconfig to use.
For each cluster, open a new terminal window. In each terminal,
set the KUBECONFIG environment variable to a different path and
log in to your cluster.
Public 1 cluster:
export KUBECONFIG=$PWD/kubeconfigs/public1.config
<provider-specific login command>

Public 2 cluster:
export KUBECONFIG=$PWD/kubeconfigs/public2.config
<provider-specific login command>

Private 1 cluster:
export KUBECONFIG=$PWD/kubeconfigs/private1.config
<provider-specific login command>

Note: The login procedure varies by provider.
Using Skupper on Kubernetes requires the installation of the Skupper custom resource definitions (CRDs) and the Skupper controller.
For each cluster, use kubectl apply with the Skupper
installation YAML to install the CRDs and controller.
Public 1 cluster:
kubectl apply -f https://skupper.io/v2/install.yaml

Public 2 cluster:

kubectl apply -f https://skupper.io/v2/install.yaml

Private 1 cluster:

kubectl apply -f https://skupper.io/v2/install.yaml

The example application has different components deployed to different Kubernetes namespaces. To set up our example, we need to create the namespaces.
For each cluster, use kubectl create namespace and kubectl config set-context to create the namespace you wish to use and
set the namespace on your current context.
Public 1 cluster:
kubectl create namespace public1
kubectl config set-context --current --namespace public1

Public 2 cluster:
kubectl create namespace public2
kubectl config set-context --current --namespace public2

Private 1 cluster:
kubectl create namespace private1
kubectl config set-context --current --namespace private1

On your local machine, make a directory for this tutorial and clone the example repo:
Public 1 cluster:
cd ~/
mkdir pg-demo
cd pg-demo
git clone -b v2 https://github.com/skupperproject/skupper-example-postgresql.git

A Skupper Site is a location where your application workloads are running. Sites are linked together to form a network for your application.
Use the kubectl apply command to declaratively create sites in the Kubernetes
namespaces. This deploys the Skupper router. Then use kubectl get site to see
the outcome.
Note: If you are using Minikube, you need to start minikube tunnel before you configure skupper.
The public1 site definition sets linkAccess: default because the other two sites, public2 and private1,
will establish Skupper links to public1. This setting indicates that the public1 site accepts incoming
Skupper links from other sites, using the default ingress type for the target cluster (route on OpenShift, loadbalancer otherwise).
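For reference, a Site resource with link access enabled might look roughly like the following sketch (the actual site.yaml in the repo may differ; apiVersion and field names follow the Skupper v2 CRDs):

```yaml
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: public1
  namespace: public1
spec:
  # Accept incoming links using the default ingress type for this cluster
  linkAccess: default
```

The public2 and private1 site definitions would be the same shape minus linkAccess, since those sites only originate links.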
Public 1 cluster:
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/site.yaml

Public 2 cluster:
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public2/site.yaml

Private 1 cluster:
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/site.yaml

A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests.
Creating an AccessToken first requires an AccessGrant in the target namespace (public1).
We can then consume the AccessGrant's status to write an AccessToken and apply it to the
other clusters (public2 and private1) using kubectl apply.
Note: The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it.
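For reference, an AccessGrant might look roughly like this sketch (the field values here are assumptions; the repo's accessgrant.yaml may differ):

```yaml
apiVersion: skupper.io/v2alpha1
kind: AccessGrant
metadata:
  name: public1-grant
  namespace: public1
spec:
  redemptionsAllowed: 2   # assumed: one redemption each for public2 and private1
  expirationWindow: 15m   # assumed: grant expires 15 minutes after issue
```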
Public 1 cluster:
kubectl wait --for=condition=ready site/public1 --timeout 300s
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/accessgrant.yaml
kubectl wait --for=condition=ready accessgrant/public1-grant --timeout 300s
kubectl get accessgrant public1-grant -o go-template-file=~/pg-demo/skupper-example-postgresql/kubernetes/token.template > ~/public1.token

Public 2 cluster:
kubectl apply -f ~/public1.token

Sample output:
$ kubectl apply -f ~/public1.token
accesstoken.skupper.io/token-public1-grant created

Private 1 cluster:
kubectl apply -f ~/public1.token

Sample output:
$ kubectl apply -f ~/public1.token
accesstoken.skupper.io/token-public1-grant created

If your terminal sessions are on different machines, you may need
to use scp or a similar tool to transfer the token securely. By
default, tokens expire after a single use or 15 minutes after
being issued.
After creating the application router network, deploy the PostgreSQL service. The private1 cluster will be used to deploy the PostgreSQL server and the public1 and public2 clusters will be used to enable client communications to the server on the private1 cluster.
Private 1 cluster:
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/deployment-postgresql-svc.yaml

Sample output:
$ kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/deployment-postgresql-svc.yaml
secret/postgresql created
deployment.apps/postgresql created

Now that PostgreSQL is running in the private1 cluster, we need to expose it on your Virtual Application Network (VAN).
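A Connector that binds the PostgreSQL pods to a VAN routing key might look roughly like this sketch (the selector label and routing key are assumptions; see the repo's connector.yaml for the real values):

```yaml
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: postgresql
  namespace: private1
spec:
  routingKey: postgresql    # matches the Listeners on the public sites
  selector: app=postgresql  # assumed pod label on the PostgreSQL deployment
  port: 5432
```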
Private 1 cluster:
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/connector.yaml

Sample output:
$ kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/connector.yaml
connector.skupper.io/postgresql created

To make the PostgreSQL database accessible to the public1 and public2 sites, we need to define a Listener
on each site. Each Listener produces a Kubernetes service on its cluster, connecting clients to the database running on the private1 cluster.
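A Listener matching that Connector might look roughly like this sketch (values assumed; see the repo's listener.yaml):

```yaml
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: postgresql
spec:
  routingKey: postgresql  # matches the Connector on private1
  host: postgresql        # name of the Kubernetes service created on this cluster
  port: 5432
```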
Public 1 cluster:
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/listener.yaml
kubectl wait --for=condition=ready listener/postgresql --timeout 300s

Sample output:
$ kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/listener.yaml
listener.skupper.io/postgresql created

Public 2 cluster:
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public2/listener.yaml
kubectl wait --for=condition=ready listener/postgresql --timeout 300s

Sample output:
$ kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public2/listener.yaml
listener.skupper.io/postgresql created

Create a pod named pg-shell on each of the public clusters. This pod will be used to
communicate with the PostgreSQL database from the public1 and public2 clusters.
Public 1 cluster:
kubectl run pg-shell --image quay.io/skupper/simple-pg \
--env="PGUSER=postgres" \
--env="PGPASSWORD=skupper" \
--env="PGHOST=postgresql" \
--command sleep infinity

Sample output:
$ kubectl run pg-shell --image quay.io/skupper/simple-pg \
--env="PGUSER=postgres" \
--env="PGPASSWORD=skupper" \
--env="PGHOST=postgresql" \
--command sleep infinity
pod/pg-shell created

Public 2 cluster:
kubectl run pg-shell --image quay.io/skupper/simple-pg \
--env="PGUSER=postgres" \
--env="PGPASSWORD=skupper" \
--env="PGHOST=postgresql" \
--command sleep infinity

Sample output:
$ kubectl run pg-shell --image quay.io/skupper/simple-pg \
--env="PGUSER=postgres" \
--env="PGPASSWORD=skupper" \
--env="PGHOST=postgresql" \
--command sleep infinity
pod/pg-shell created

Now that we can access the PostgreSQL database from both public sites, let's create a database called markets, then create a table named product and load it with some data.
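The SQL files used below are assumed to look roughly like the following sketch (the actual columns and rows live in the repo's sql/table.sql and sql/data.sql and may differ):

```sql
-- table.sql (assumed schema)
CREATE TABLE product (
    id    SERIAL PRIMARY KEY,
    name  TEXT NOT NULL,
    price NUMERIC(10, 2)
);

-- data.sql (assumed rows; the sample output shows four INSERTs)
INSERT INTO product (name, price) VALUES ('apple', 1.00);
INSERT INTO product (name, price) VALUES ('banana', 0.50);
INSERT INTO product (name, price) VALUES ('cherry', 3.25);
INSERT INTO product (name, price) VALUES ('plum', 2.10);
```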
Public 1 cluster:
kubectl exec pg-shell -- createdb -e markets
kubectl exec -i pg-shell -- psql -d markets < ~/pg-demo/skupper-example-postgresql/sql/table.sql
kubectl exec -i pg-shell -- psql -d markets < ~/pg-demo/skupper-example-postgresql/sql/data.sql

Sample output:
$ kubectl exec pg-shell -- createdb -e markets
kubectl exec -i pg-shell -- psql -d markets < ~/pg-demo/skupper-example-postgresql/sql/table.sql
kubectl exec -i pg-shell -- psql -d markets < ~/pg-demo/skupper-example-postgresql/sql/data.sql
SELECT pg_catalog.set_config('search_path', '', false);
CREATE DATABASE markets;
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1

Now that the data has been added, read it back from both the public1 and public2 sites.
Public 1 cluster:
echo "SELECT * FROM product;" | kubectl exec -i pg-shell -- psql -d markets

Public 2 cluster:
echo "SELECT * FROM product;" | kubectl exec -i pg-shell -- psql -d markets

Restore your cluster environment by removing the resources created in the demonstration. On each cluster, delete the demo resources and the Virtual Application Network.
Public 1 cluster:
kubectl delete pod pg-shell --now
kubectl delete -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/

Public 2 cluster:
kubectl delete pod pg-shell --now
kubectl delete -f ~/public1.token -f ~/pg-demo/skupper-example-postgresql/kubernetes/public2/

Private 1 cluster:
kubectl delete -f ~/public1.token -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/

Through this example, we demonstrated how Skupper enables secure access to a PostgreSQL database hosted in a private Kubernetes cluster, without exposing it to the public internet.
By deploying Skupper in each namespace, we established a Virtual Application Network (VAN), which allowed the PostgreSQL service to be securely shared across clusters. The database was made available exclusively within the VAN, enabling applications in the public1 and public2 clusters to access it seamlessly, as if it were running locally in their own namespaces.
This approach not only simplifies multi-cluster communication but also preserves strict network boundaries, eliminating the need for complex VPNs or firewall changes.
Check out the other examples on the Skupper website.
This example was produced using Skewer, a library for documenting and testing Skupper examples.
Skewer provides utility functions for generating the README and
running the example steps. Use the ./plano command in the project
root to see what is available.
To quickly stand up the example using Minikube, try the ./plano demo
command.