This documentation provides instructions on how to start up a Kubernetes cluster on Google Cloud Platform for dictyBase applications.
To begin, first clone this repository:
git clone https://github.com/dictybase-docker/cluster-ops.git
We use asdf to install and manage the versions of the binaries required for managing the cluster. Install asdf by following the documentation below.
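If you do not already have asdf, the documented install for classic (pre-0.16) asdf on a bash setup is roughly the following sketch. The branch pin is an example, not a project requirement; consult the asdf documentation for your shell:

```shell
# Clone asdf if it is not already present (the version pin is illustrative).
ASDF_DIR="${HOME}/.asdf"
if [ ! -d "${ASDF_DIR}" ]; then
  git clone https://github.com/asdf-vm/asdf.git "${ASDF_DIR}" --branch v0.14.0 || true
fi

# Load asdf into the current shell session if the clone succeeded.
if [ -f "${ASDF_DIR}/asdf.sh" ]; then
  . "${ASDF_DIR}/asdf.sh"
fi
echo "asdf dir: ${ASDF_DIR}"
```

For permanent use, the same sourcing line goes into your shell's rc file (e.g. `~/.bashrc`).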
just is used to run the various commands defined in the project. These include commands for setting up Google Cloud Platform resources and initializing the kubernetes cluster with kops.
First, install the just plugin for asdf:
asdf plugin add just
Now, to install just, simply run:
asdf install just
Now that just is installed, we can run a single just recipe to install the remaining binaries. This will install the specific versions of the tools defined in the Justfile:
just install-asdf-plugins
This command will install the following binaries:
Environment variables for the project are managed by direnv, which loads the variables defined in the .envrc file at the root of the project.
This file will be created automatically when running the set-env-var just recipe later on, so there is no need to create it yourself.
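As an illustration only (the real file is written by the recipe), a generated .envrc would contain a single export that direnv loads whenever you enter the project directory. The sketch below simulates this in a temporary directory and sources the file directly to show the effect; in normal use, direnv does the sourcing after a one-time `direnv allow`:

```shell
# Simulate a generated .envrc in a scratch directory (illustrative only).
workdir="$(mktemp -d)"
cat > "${workdir}/.envrc" <<'EOF'
export GOOGLE_APPLICATION_CREDENTIALS="${PWD}/credentials/sa-manager.json"
EOF

# direnv would source this automatically on cd; here we do it by hand:
cd "${workdir}"
. ./.envrc
echo "${GOOGLE_APPLICATION_CREDENTIALS}"
```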
You must obtain a key for the Service Account Manager service account from the owner of the Google Cloud project where the cluster will be set up.
The Service Account Manager service account is needed to:
- create all other service accounts needed to create and manage the cluster
- enable the required Google Cloud APIs to create and manage the cluster
The project owner must run:
just create-sa-manager <project_id>
This will create a service account named sa-manager and save a JSON key file for the service account in their ./credentials directory.
Have the project owner send the key file to you and save it as ./credentials/sa-manager.json.
Then, set the GOOGLE_APPLICATION_CREDENTIALS environment variable by running:
just set-env-var GOOGLE_APPLICATION_CREDENTIALS "${PWD}/credentials/sa-manager.json"
Google Application Default Credentials (ADC) are used by the Go Google Cloud client libraries to authenticate requests to your Google Cloud project. For service account keys, setting an environment variable is the prescribed way to configure ADC.
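A quick sanity check that ADC will find a usable key: the variable must point at a readable service-account JSON file. The sketch below writes a dummy, truncated key into a hypothetical credentials_demo directory purely to illustrate the expected structure (a real key is downloaded from GCP and also contains a private key, never hand-written):

```shell
# Dummy key file showing the field structure ADC expects (illustrative only).
mkdir -p credentials_demo
cat > credentials_demo/sa-manager.json <<'EOF'
{
  "type": "service_account",
  "project_id": "my-project",
  "client_email": "sa-manager@my-project.iam.gserviceaccount.com"
}
EOF
export GOOGLE_APPLICATION_CREDENTIALS="${PWD}/credentials_demo/sa-manager.json"

# ADC requires a readable service-account JSON at this path:
test -r "${GOOGLE_APPLICATION_CREDENTIALS}" && \
  grep -q '"type": "service_account"' "${GOOGLE_APPLICATION_CREDENTIALS}" && \
  echo "key file looks like a service account key"
```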
From here, you will be able to continue with the cluster setup on your own.
A single command can be used to initialize the required Google Cloud services and set up the kops cluster.
Running the following:
just init-kops-cluster <project_id> <bucket_name>
will execute the steps listed below. There is no need to run them individually; they are preserved here for documentation.
To create the Kubernetes cluster and run all the dictyBase applications, certain Google Cloud APIs must be enabled first.
Running the following command will enable the APIs listed in enabled_apis.txt:
just gcp-api enable-apis <project_id> gcs-files/apis/enabled_apis.txt
While not required for running the cluster, it is helpful to disable unneeded Google Cloud APIs to prevent unwanted charges to the GCP account.
Running the following command will disable unnecessary Google Cloud APIs using the predefined list:
just gcp-api disable-apis <project_id> gcs-files/apis/disable_enabled_apis.txt
The kops cluster creator service account is a dedicated service account with the necessary permissions to create and manage the Kubernetes cluster using the kops tool.
Create this service account with:
just gcp-sa create-sa <project_id> kops-cluster-creator gcs-files/roles-permissions/kops-cluster-creator-roles.txt credentials/kops-cluster-creator.json
This will create a service account named kops-cluster-creator with the roles defined in gcs-files/roles-permissions/kops-cluster-creator-roles.txt and save the JSON key file to credentials/kops-cluster-creator.json.
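Under the hood, this recipe presumably wraps the standard gcloud IAM calls. A hedged sketch of the equivalent steps follows; the project ID is a placeholder, and the gcloud commands only run when gcloud and the roles file are actually present:

```shell
PROJECT_ID="my-project"                # placeholder project ID
SA_NAME="kops-cluster-creator"
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
ROLES_FILE="gcs-files/roles-permissions/kops-cluster-creator-roles.txt"

if command -v gcloud >/dev/null 2>&1 && [ -f "${ROLES_FILE}" ]; then
  # Create the account, grant each role from the roles file, download a key.
  gcloud iam service-accounts create "${SA_NAME}" --project "${PROJECT_ID}"
  while read -r role; do
    gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
      --member "serviceAccount:${SA_EMAIL}" --role "${role}"
  done < "${ROLES_FILE}"
  gcloud iam service-accounts keys create credentials/kops-cluster-creator.json \
    --iam-account "${SA_EMAIL}"
fi
echo "${SA_EMAIL}"
```

This is a sketch of what the recipe likely does, not a substitute for it; prefer the just recipe, which also keeps the roles list and key path consistent with the rest of the project.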
After creating the kops-cluster-creator key, you will need to update the value of the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to it.
Run:
export GOOGLE_APPLICATION_CREDENTIALS="${PWD}/credentials/kops-cluster-creator.json"
Now the Go Google Cloud client libraries will use the kops-cluster-creator service account key.
The create-kops-cluster recipe sets up the state store bucket and initializes the Kubernetes cluster:
just create-kops-cluster <project_id> <bucket_name>
Applications are deployed to the Kubernetes cluster using Pulumi, an infrastructure-as-code tool. Pulumi allows us to define, deploy, and manage cloud resources using familiar programming languages like Go.
Create the Pulumi manager service account and point Pulumi at its key:
just gcp-sa create-sa <project_id> pulumi-manager gcs-files/roles-permissions/pulumi-manager-roles.txt credentials/pulumi-manager.json
export PULUMI_GCP_CREDENTIALS="${PWD}/credentials/pulumi-manager.json"
Next, create a Google Cloud KMS key used to encrypt secrets in a Pulumi project's stack:
just gcp-kms create-keyring-and-key <project-id> <keyring-name> <key-name> credentials/pulumi-manager.json <location>
Then, set the secrets provider:
export PULUMI_SECRET_PROVIDER=<GCLOUD_KMS_KEY>
Arguments:
- location: Optional. The Google Cloud region where the keyring and key will be created. Defaults to "us-central1"
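Assuming the project follows Pulumi's standard secrets-provider convention, the PULUMI_SECRET_PROVIDER value above would be a gcpkms:// URL built from the key's resource name. A sketch with placeholder names matching the recipe's arguments:

```shell
# Placeholders corresponding to the create-keyring-and-key arguments:
PROJECT_ID="my-project"
KEYRING="pulumi-keyring"
KEY="pulumi-key"
LOCATION="us-central1"

# Pulumi's documented Google Cloud KMS secrets-provider URL format:
export PULUMI_SECRET_PROVIDER="gcpkms://projects/${PROJECT_ID}/locations/${LOCATION}/keyRings/${KEYRING}/cryptoKeys/${KEY}"
echo "${PULUMI_SECRET_PROVIDER}"
```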
The following command sets up a Google Cloud Storage bucket to store Pulumi state:
just gcp-pulumi pulumi-gcs-setup credentials/pulumi-manager.json <pulumi_bucket_name> <lifecycle_config> <location>
Arguments:
- pulumi_bucket_name: Name of the GCS bucket to create for storing Pulumi state
- lifecycle_config: Optional. Path to a lifecycle configuration file for the bucket (controls object retention/deletion policies)
- location: Optional. The Google Cloud region where the bucket will be created. Defaults to "us-central1"
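The lifecycle configuration file is a standard GCS lifecycle policy. In the JSON form accepted by gsutil/gcloud storage it might look like the following; the 30-day age threshold is an arbitrary example, not a project default:

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 30}
    }
  ]
}
```

This example deletes objects older than 30 days; tune the condition to how much Pulumi state history you want to retain.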
Initialize stack:
just gcp-pulumi new-stack-from <folder> <stack> <from-stack>
Example:
just gcp-pulumi new-stack-from graphql_server production staging
This would initialize a new stack called production in the graphql_server project, copying the configuration from the staging stack if it exists.
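The recipe presumably wraps the Pulumi CLI, whose `pulumi stack init` command supports copying configuration from an existing stack. A hedged sketch; the names are the placeholders from the example above, and nothing runs unless pulumi and the project directory are actually present:

```shell
STACK="production"        # placeholders from the example above
FROM_STACK="staging"
PROJECT_DIR="graphql_server"

if command -v pulumi >/dev/null 2>&1 && [ -d "${PROJECT_DIR}" ]; then
  # `pulumi stack init` can seed the new stack's config from an existing one:
  (cd "${PROJECT_DIR}" && pulumi stack init "${STACK}" --copy-config-from "${FROM_STACK}")
fi
echo "${STACK} copied from ${FROM_STACK}"
```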
Create the project resources for the desired stack:
just gcp-pulumi create-resource <folder> <stack>
For convenience, we provide two just recipes that simplify the Pulumi setup and deployment process:
The initialize-pulumi recipe combines steps 1-4 of the Application Deployment with Pulumi section into a single command:
just initialize-pulumi <project_id> <keyring_name> <key_name> <bucket_name> [location]
Arguments:
- project_id: Your GCP project ID
- keyring_name: Name for the KMS keyring to create
- key_name: Name for the KMS key to create
- bucket_name: Name for the GCS bucket to store Pulumi state
- location: Optional. Google Cloud region. Defaults to "us-central1"
This command will:
- Create the Pulumi Manager service account with necessary permissions
- Set the PULUMI_GCP_CREDENTIALS environment variable
- Create a KMS keyring and key for Pulumi secrets encryption
- Initialize the Pulumi state store in GCS
The pulumi-init-and-deploy recipe combines the Pulumi environment setup with deploying the initial resources:
just pulumi-init-and-deploy <stack> <from-stack> <project_id> <keyring_name> <key_name> <bucket_name> [location]
Arguments:
- stack: Name of the stack to create
- from-stack: Name of the existing stack to copy configuration from
- project_id: Your GCP project ID
- keyring_name: Name for the KMS keyring to create
- key_name: Name for the KMS key to create
- bucket_name: Name for the GCS bucket to store Pulumi state
- location: Optional. Google Cloud region. Defaults to "us-central1"
This command will:
- Set up the complete Pulumi environment (steps 1-4)
- Set the PULUMI_SECRET_PROVIDER environment variable
- Create resources for all projects listed in the specified resource files
The initial resources are defined in the initial-resources.txt file and include:
- ArangoDB single instance
- ArangoDB database creation
- MinIO object storage
- CloudNative PostgreSQL operator
- Backup secrets
- Event messenger
The database and storage management resources are defined in the database-and-storage-resources.txt file and include:
- ArangoDB Backup
- ArangoDB Data Loader
- ArangoDB Operation
- CloudNative PostgreSQL cluster
- Velero Installation
- Redis Standalone
- Storage Class