This directory contains a Helm chart to deploy Loculus instances for several purposes.
The Helm variable `environment` reflects those purposes:

- `local`: Running locally with exposed ports
- `server`: Running on a server with a domain name
For development, follow the k3d instructions lower down the page.
Install Helm and use Traefik for ingress.
Create a long-lived managed database: [to be documented as part of: #793]
Create your own configuration by copying the loculus/values.yaml file and editing it as appropriate.
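For example (a sketch; this assumes you are working from the repository root, and my-values.yaml is a placeholder name for your own values file):

```sh
# Copy the default values as a starting point for your own configuration
cp kubernetes/loculus/values.yaml my-values.yaml
```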
Install the Helm chart:
```sh
helm install loculus kubernetes/loculus -f my-values.yaml
```

For local development, install k3d and helm. We also recommend installing k9s to inspect cluster resources.
We deploy to Kubernetes via the ../deploy.py script. It requires Python 3.9 or higher and the packages pyyaml and requests. To create a virtual environment with the required dependencies, run:

```sh
python3 -m venv .venv
source .venv/bin/activate
pip install requests pyyaml
```

NOTE: On macOS, make sure that you have configured enough RAM in Docker; we recommend 8GB.
To create a local cluster and install the chart, run:

```sh
../deploy.py cluster --dev
../deploy.py helm --dev
```

Then start the backend and the website locally. Note that by default the deploy script starts a Loculus deployment without preprocessing and ingest; to add them, pass the --enablePreprocessing and --enableIngest flags. To run either of these components locally you will need to use the generated configs.
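For example, to run a local deployment that includes both optional components (a sketch; the flag names come from the note above, and combining them in one invocation is an assumption):

```sh
# Deploy locally with preprocessing and ingest enabled
../deploy.py helm --dev --enablePreprocessing --enableIngest
```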
The deploy.py script wraps the most important k3d and helm commands.
Check the help for more information:
```sh
../deploy.py --help
```

Basic cluster management should be done with this script.
Use kubectl to interact with the running cluster at full power (e.g. accessing individual pods, getting logs, etc.).
Create a cluster that doesn't expose the ports of the backend and the website:
```sh
../deploy.py cluster --dev
```

Install the chart with some port forwarding disabled to link to local manual runs of the backend and website:

```sh
../deploy.py helm --dev
```

Start the website and the backend locally. Check the README of the backend and the website for more information on how to do that.
Check whether the services are already deployed (it might take some time to start, especially for the first time):
```sh
kubectl get pods
```

If something goes wrong,

```sh
kubectl get events
```

might help to see the reason.
Redeploy after changing the Helm chart:
```sh
../deploy.py upgrade
```

You can also delete the cluster with:

```sh
../deploy.py cluster --delete
```

With helm-based commands you can supply a custom values YAML file with `--values [file.yaml]`.
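For instance (a sketch, reusing the my-values.yaml file created earlier as the custom values file):

```sh
# Install the chart locally with a custom values file
../deploy.py helm --dev --values my-values.yaml
```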
There is a local environment intended for E2E testing in GitHub Actions.
It can also be used locally (though note the caveats below for ARM-based Mac systems).
Create a cluster with ports for all services exposed:
```sh
../deploy.py cluster
```

Install the chart to deploy the services:

```sh
../deploy.py helm --branch [your_branch]
```

ArgoCD will aim to build preview instances for any open PR with the preview label. It may take 5 minutes for an instance to appear. The preview will appear at [branch_name].loculus.org. Long branch names are shortened, and some special characters are not supported. You can find the exact URL in the ArgoCD UI: https://argocd.k3s.pathoplexus.org/ (login details are on Slack).
The preview is intended to simulate the full backend and associated containers. It may be necessary to update this directory when changes are made to how containers need to be deployed. If you would like to test your changes on a persistent DB, add `developmentDatabasePersistence: true` to your values.yaml.
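That is, your values file would contain (a minimal sketch; the key is taken verbatim from the sentence above):

```yaml
# Enable a persistent development database for the preview instance
developmentDatabasePersistence: true
```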
We do not currently support branch names containing characters that can't appear in domain names, with the exception of '/' and '_' (see kubernetes/appset.yaml for details).
For preview instances this repo contains sealed secrets that allow the loculus-bot to access the GitHub container registry and (separately) the GitHub repository. These are encrypted such that they can only be decrypted on our cluster but are cluster-wide so can be used in any namespace.
Create a secret, for example like this:
```sh
kubectl create secret generic my-secret --from-literal=accessKey=<secret> --from-literal=secretKey=<secret> --dry-run=client -o yaml > my-secret.yaml
```
This will create a my-secret.yaml file. Now, ensure that you have correctly configured kubectl to point to the preview cluster. Then seal your secret like this:
```sh
kubeseal --scope cluster-wide --format=yaml < my-secret.yaml > my-sealed-secret.yaml
```
You now have a file my-sealed-secret.yaml with `spec.encryptedData` in it. You can now add this encryptedData to the values.yaml under `secrets.<yoursecretname>`. See values_preview_server.yaml for examples.
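As a rough sketch, the resulting values.yaml entry might look like the following (the exact schema should be copied from values_preview_server.yaml; the key names and encrypted strings here are placeholders):

```yaml
secrets:
  my-secret:
    encryptedData:
      accessKey: AgBf3k... # placeholder; paste the value produced by kubeseal
      secretKey: AgC91x... # placeholder; paste the value produced by kubeseal
```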
To access the remote cluster without sshing to the containing machine, you need to set up your kubeconfig file.
You can get the kubeconfig file from the server by sshing to the server and running:
```sh
sudo kubectl config view --raw
```

However, this configuration will specify the server as 127.0.0.1, which you need to replace with the real IP of the server you SSHed to.
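One way to do both steps at once (a sketch; user@my-server and the hostname in the sed expression are placeholders for your actual server):

```sh
# Fetch the kubeconfig from the server and swap the loopback address
# for the server's real address, saving the result locally
ssh user@my-server 'sudo kubectl config view --raw' \
  | sed 's/127\.0\.0\.1/my-server.example.org/' \
  > remote-kubeconfig.yaml
```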
You need to add each of the clusters, users, and contexts to your local ~/.kube/config file. You can change the user/cluster/context names, but the context must contain the correct user and cluster names.
The key information to add is the client-certificate-data and client-key-data for the user, and the certificate-authority-data and server for the cluster.
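For reference, a kubeconfig entry has roughly this shape (a sketch with placeholder names and data; see the kubeconfig docs for the authoritative format):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-remote-cluster
    cluster:
      certificate-authority-data: <base64-ca-data>
      server: https://my-server.example.org:6443
users:
  - name: my-remote-user
    user:
      client-certificate-data: <base64-cert-data>
      client-key-data: <base64-key-data>
contexts:
  - name: my-remote-context
    context:
      cluster: my-remote-cluster # must match the cluster name above
      user: my-remote-user # must match the user name above
current-context: my-remote-context
```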
You can then switch between contexts, first listing them with:
```sh
kubectl config get-contexts
```

And then switching with:

```sh
kubectl config use-context [context_name]
```

You can confirm that you are connected to the correct cluster with:

```sh
kubectl cluster-info
```

See the kubeconfig docs for more information.
You can find frequently used kubectl commands in the KUBECTL_FAQ.md file.
If a deployment fails, you can use kubectl to get more information. For example, to see the status of the pods:
```sh
kubectl get pods
```

Or to see the events, which might give you more information about why a pod failed to start:

```sh
kubectl get events
```

If you are on macOS, you need to give Docker Desktop sufficient system resources, otherwise local deployments will fail with warnings such as node.kubernetes.io/disk-pressure and FreeDiskSpaceFailed.
As of March 2024, you need to give at least 3GB of RAM (6GB recommended) and 75GB of (virtual) disk space (100-150GB recommended). You can do this in the Docker Desktop settings under Resources > Advanced.