# Local Cluster Deployment
This section provides a step-by-step guide to deploying the system in a local cluster environment using Docker. This setup is ideal for development, testing, and small-scale production environments.
Prerequisites:

- Docker
- Docker Compose
- Python 3
- AWS CLI
- Base container image for the Springtail service (more on this in the "Building a Base Image" section below)
- Check out the `springtail` repository to your local machine.
The startup flow of the local cluster is:

```mermaid
flowchart TD
    Z[Start Mock AWS Service]
    Y[Start Redis Service]
    X[Start Local Primary DB]
    Z --> A
    Y --> A
    X --> A
    A[Build Springtail Package] --> G[Upload Package to Local AWS S3]
    G --> F[Launch Bootstrap Container: <br>set secrets, set Redis config, amend shared Env files, etc.]
    F -- read shared Env files / Redis config --> M[Launch Ingestion Container]
    F -- read shared Env files / Redis config ---> N[Launch Proxy Container]
    F -- read shared Env files / Redis config --> Q[Launch FDW Container]
    M --> J[Join & Wait]
    N --> J
    Q --> J
```
## Building a Base Image

The Springtail service needs a set of dependencies to run. These dependencies are packaged into a base container image, which is then used to build the service containers. The local cluster assumes the existence of such a base image.
All the steps below assume you are at the root of the `springtail` repository.
In the `local-cluster/env` directory, there is an `env.setup` file that defines the name of the base image to use:

```
IMAGE=local-cluster-img:latest
```

So you may want to build an image with the tag `local-cluster-img:latest` before starting the local cluster:
```shell
$ docker build -t local-cluster-img:latest -f local-cluster/Dockerfile.base .
```

## Starting the Cluster

- First, build a springtail package (you can reuse an existing package if you have one):
```shell
$ ./cluster build-package <out-dir>
```

- Start the local cluster, specifying the full path of the package on your local machine (the tarball file generated in the previous step):
```shell
$ ./cluster up <full-package-path>
```

- It takes a few minutes (usually less than two) to start the cluster. You can check the status with:
```shell
$ ./cluster status
```

Once all containers are up and running, you can shell into any container. For example:
```shell
$ ./cluster sh <name>
```

The container names are:

- `proxy`
- `ingestion`
- `fdw`
- `controller`
By default, the springtail-coordinator service is not started. You can start it (inside a container) with:

```shell
$ systemctl start springtail-coordinator
```

To tear down the Springtail containers:
```shell
$ ./cluster down
```

To tear down everything, including the supporting services (Redis, the local AWS mock, and the local DB):
```shell
$ ./cluster down all
```

## Under the Hood

This section explains what happens under the hood when you run the above commands.
The `./cluster build-package <out-dir>` command does the following:
- Starts a temporary build container from the image identified by `BASE_BUILDER_IMAGE_TAG` in the `cluster` script, which resides in the DevSupport ECR repo. If that image is not available locally, it will try to pull it from the remote repo, which requires AWS SSO to be configured.
- Saves the package into the `<out-dir>` directory on the host machine. The package is a tarball file named `springtail-<date-version>-<system-settings-gitsha>.tar.gz`.
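The naming scheme can be sketched as follows. Note that the exact date format and the source of the git SHA are assumptions for illustration; the real build script may compose the name differently:

```shell
# Compose a package file name of the form
# springtail-<date-version>-<system-settings-gitsha>.tar.gz
# Date format and SHA source are illustrative assumptions.
DATE_VERSION="$(date +%Y%m%d)"
GITSHA="abc1234"   # would normally come from something like: git rev-parse --short HEAD
PACKAGE="springtail-${DATE_VERSION}-${GITSHA}.tar.gz"
echo "$PACKAGE"
```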
The `./cluster up <full-package-path>` command does the following:
- Sets the `PACKAGE_FILE_NAME` environment variable to the package file name, making it available to the build process of all the containers.
- Starts the supporting local services: the AWS mock, Redis, and a local PostgreSQL database serving as the primary DB. It then uploads the package to the local mock S3 service.
- Launches a bootstrap container that sets up shared environment files, secrets, and Redis configuration. Specifically, the bootstrap process adds new env vars to the `./env/.env` file.
- Finally, launches the main service containers: proxy, ingestion, and fdw, which pick up all the env files in the `./env` directory.
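For illustration, the exported variable might look like this (the file name below is hypothetical, standing in for the tarball produced by `build-package`):

```shell
# Hypothetical package name for illustration only; the real value
# is derived from the package path passed to ./cluster up.
export PACKAGE_FILE_NAME=springtail-20240115-abc1234.tar.gz
echo "$PACKAGE_FILE_NAME"
```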
The bootstrap container runs before any service container is started. It does the following:

- Sets up shared environment files in the `./env` directory, which will be picked up by all service containers;
- Sets up Redis configuration, creating a default user and password, and saving the config to `./env/redis.env`;
- Sets up secrets, creating a self-signed certificate for HTTPS, and saving the secrets to `./env/secrets.env`.
In summary, this is setting up the supporting environment for the service containers to run.
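As a minimal sketch of the self-signed-certificate step, the bootstrap does something conceptually like the following. The file names and certificate subject here are assumptions, not the actual values the bootstrap uses:

```shell
# Generate a throwaway self-signed certificate for HTTPS.
# Output paths and the subject are illustrative assumptions.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=localhost"
ls server.key server.crt
```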
For each service container, upon startup, supervisord runs an `init-services` script that does the following:
- Downloads the S3 package and extracts it, moving the coordinator to the `/opt/springtail` directory as the bootstrap coordinator;
- Makes sure all necessary directories exist, creating them if needed;
- For `fdw` specifically, triggers an Ansible script to customize PostgreSQL.

This is akin to EC2's `userdata` script (cloud-init), which is used to initialize a container.
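The download-and-extract step can be sketched like this. To keep the sketch self-contained, a locally created dummy tarball stands in for the real S3 download, and `./opt-springtail` stands in for `/opt/springtail`:

```shell
# Self-contained stand-in for the init-services extract step.
# A dummy tarball replaces the S3 download; paths are illustrative.
PACKAGE=springtail-pkg.tar.gz
DEST=./opt-springtail          # stands in for /opt/springtail

# Create a dummy package so the sketch runs anywhere.
mkdir -p pkg && echo coordinator-stub > pkg/coordinator
tar -czf "$PACKAGE" -C pkg coordinator

# Make sure the destination exists, then extract into it.
mkdir -p "$DEST"
tar -xzf "$PACKAGE" -C "$DEST"
ls "$DEST"
```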