Repo for deploying a "Hello World" Rails app in Minikube, connected to PostgreSQL and Redis. The different components are monitored using Prometheus.
The app structure was generated using the following Rails command:
rails new sw_hello_world -d postgresql --minimal --skip-bundle
Key folders/files:
- app: Rails app code, including the MVC pieces.
- manifests: application helm chart and value files for remote helm charts
- manifests/sw-hello-world: app helm chart
- manifests/postgres_exporter.yaml: k8s manifest for a postgres-exporter for Prometheus.
- manifests/postgresql_values.yaml: values file for the PostgreSQL Bitnami helm chart
- manifests/prometheus_values.yaml: values file for the Prometheus Bitnami helm chart. Contains the alerting rules too.
- manifests/redis_values.yaml: values file for the Redis Bitnami helm chart
- Minikube (used version v1.36.0)
- kubectl (used Client Version: v1.33.2, Kustomize Version: v5.6.0, Server Version: v1.33.1)
- Helm (used version v3.18.3)
Please make sure the pre-reqs are met.
Running the pipeline.sh script under the script directory starts Minikube and deploys all of the components, including the monitoring piece.
cd script
./pipeline.sh
For details on the script and the different steps, check the file.
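The concrete steps live in script/pipeline.sh; as a rough sketch only (release names, chart names, flags, and paths here are assumptions, not taken from the actual script), the deployment boils down to something like:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical outline only -- script/pipeline.sh is the source of truth.
# Release and chart names below are assumptions.
deploy_all() {
  minikube start

  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm upgrade --install postgresql bitnami/postgresql -f ../manifests/postgresql_values.yaml
  helm upgrade --install redis bitnami/redis -f ../manifests/redis_values.yaml
  helm upgrade --install prometheus bitnami/prometheus -f ../manifests/prometheus_values.yaml

  # Deploy the postgres-exporter and the application chart.
  kubectl apply -f ../manifests/postgres_exporter.yaml
  helm upgrade --install sw-hello-world ../manifests/sw-hello-world
}
```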
For the purpose of our task, the minikube service command exposes the Service component built for our application:
minikube service sw-hello-world --url
Two localhost URLs will show up: one for the application itself and one for the Prometheus exporter.
Copy the first one and paste it into your browser. Note: the port changes on each run of this command.
For example:
http://127.0.0.1:51680
http://127.0.0.1:51680/greetings/helloworld
As with the app, to access the Prometheus server, just use the URL generated by running:
minikube service prometheus-server --url
To access the Prometheus Alertmanager server, just use the URL generated by running:
minikube service prometheus-alertmanager --url
Prometheus is used for monitoring the different components of our environment.
Alerts are configured in manifests/prometheus_values.yaml
Note: the configured rules do not represent an exhaustive list of what can be monitored; much more could be set up. For the purpose of this task, and to respect the timeboxing used for developing it, not all critical/useful metrics are monitored.
- PostgreSQL
- Redis
- Puma server
- Kube
Monitored metrics:
- UP/DOWN for all instances
- PostgreSQL connection count
- PostgreSQL slow query count
- PVC high usage (the PVC is used for persisting PostgreSQL data within k8s)
- Redis memory usage
- Redis connected client count
- Puma request count
- Puma backlog count: number of established but unaccepted connections
- Puma running worker thread count
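As an illustration of what backs these metrics, PromQL expressions along these lines could be graphed or alerted on (metric names are assumptions based on common exporters; check your exporters' actual output):

```promql
# Metric names below are assumptions from common exporters.
up                          # UP/DOWN per scraped instance
pg_stat_activity_count      # PostgreSQL connections (postgres-exporter)
redis_memory_used_bytes     # Redis memory usage (redis-exporter)
redis_connected_clients     # Redis connected clients (redis-exporter)
```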
Note: dashboarding in Grafana allows us to set up graphs for these metrics and other ones as needed.
Alerting is configured in manifests/prometheus_values.yaml
Alerts are set up on:
- down instances
- PostgreSQL connections above 90%
- PostgreSQL slow queries (more than 0)
- PVC usage over 80%
- Redis memory usage above 90%
- Redis connections over 100
- Puma connections over 100
- Puma backlog count (number of established but unaccepted connections) over 20
- Puma running worker thread count equal to the maximum possible count
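As a hedged sketch of what one of these rules could look like (the exact nesting depends on the chart's values layout, and the names here are illustrative; manifests/prometheus_values.yaml holds the real rules):

```yaml
# Illustrative rule group only -- see manifests/prometheus_values.yaml
# for the actual alerting rules used in this repo.
groups:
  - name: availability
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Instance {{ $labels.instance }} is down"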
- Use Grafana with Prometheus for graphing and dashboarding.
- Use a secret manager, e.g. HashiCorp Vault; no hardcoded passwords/secrets.
- Use an artifact/image registry.
- Use different k8s namespaces for the app piece and the monitoring piece.
- Add monitoring rules to monitor resource usage by Pods/containers as well as by the k8s cluster nodes (here only one for Minikube).
- Configure an HPA (Horizontal Pod Autoscaler) for the app, to scale the number of pods based on resource usage for example.
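A minimal sketch of what such an HPA could look like, assuming the app Deployment is named sw-hello-world (an assumption; adjust to the actual chart's resource names):

```yaml
# Hypothetical example -- not part of the repo's manifests.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sw-hello-world
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sw-hello-world   # assumed Deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that resource requests must be set on the app's containers for CPU-utilization-based scaling to work.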