
Terraform EKS Fullstack Demo


Prerequisites

Before deploying the EKS Fullstack Demo App, ensure the following tools are installed and configured on your local machine.

| Tool | Minimum Version | Description |
|---|---|---|
| Terraform | ≥ 1.13.3 | Infrastructure as Code tool used to provision AWS resources and manage the EKS cluster. |
| AWS CLI | ≥ 2.0 | Command-line interface for managing AWS accounts and credentials. Must be configured with aws configure. |
| kubectl | ≥ 1.32 | CLI for interacting with the Kubernetes cluster created by Terraform. |
| Kustomize | ≥ 5.7.1 | Tool for customizing Kubernetes YAML configurations; used for managing environment-specific manifests. |
| Helm | ≥ 3.16 | Kubernetes package manager used to install and manage Helm charts on the EKS cluster. |

Additional Requirements

  • An AWS account with sufficient IAM permissions (EKS, EC2, IAM, VPC, CloudFormation).
  • Configured AWS credentials on your local machine (~/.aws/credentials).
  • SSH key pair available in AWS (if you plan to access worker nodes).
  • Internet access for Terraform and Helm providers to pull remote modules and charts.
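A quick way to confirm the prerequisite tools are present before starting (a convenience sketch; it only checks that each CLI is on PATH, not its version):

```shell
# Report which of the required CLIs are available on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "MISSING: $1"
  fi
}

for tool in terraform aws kubectl kustomize helm; do
  check_tool "$tool"
done
```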

Getting Started

Follow these steps to set up and deploy the EKS Fullstack Demo App.

Configuring the Infrastructure

A.1. Clone the Repository
git clone git@github.com:yurll/terraform-eks-fullstack-demo.git
cd terraform-eks-fullstack-demo
A.2. Prepare the Variables File

The repository includes template files named terraform.tfvars.example with placeholder values. Copy these files to terraform.tfvars and update the variables with your actual configuration.

cp terraform/envs/mgmt/terraform.tfvars.example terraform/envs/mgmt/terraform.tfvars
cp terraform/envs/dev/terraform.tfvars.example terraform/envs/dev/terraform.tfvars
A.3. Create the S3 Bucket for Terraform State

Terraform uses an S3 bucket to store its state file. This bucket must exist before running terraform init.

To ensure uniqueness, append a random suffix to your bucket name.

You can generate a random suffix and create the bucket with the following commands:

export AWS_ACCOUNT_ID=<your-account-id>
export REGION="eu-west-1"
export ENVIRONMENT="dev"
export TFSTATE_BUCKET="demo-terraform-eks-fullstack-$(openssl rand -hex 3)"

echo "Creating bucket: $TFSTATE_BUCKET"

aws s3api create-bucket \
  --bucket "$TFSTATE_BUCKET" \
  --region "$REGION" \
  --create-bucket-configuration LocationConstraint="$REGION"

aws s3api put-bucket-versioning \
  --bucket "$TFSTATE_BUCKET" \
  --versioning-configuration Status=Enabled

echo "Bucket $TFSTATE_BUCKET created and versioning enabled."
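S3 bucket names must be globally unique, 3 to 63 characters long, and use only lowercase letters, digits, and hyphens (dots are allowed but best avoided). A small local check before calling the API, as a sketch:

```shell
# Validate an S3 bucket name locally: 3-63 chars of lowercase letters,
# digits, and hyphens, starting and ending with a letter or digit.
valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'
}

if valid_bucket_name "$TFSTATE_BUCKET"; then
  echo "OK: $TFSTATE_BUCKET"
else
  echo "Invalid bucket name: $TFSTATE_BUCKET"
fi
```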

After creation, update the value of tfstate_bucket_name in your terraform.tfvars files with the generated $TFSTATE_BUCKET.
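Rather than editing by hand, the substitution can be scripted. This sketch assumes each tfvars file contains a line of the form `tfstate_bucket_name = "..."`:

```shell
# Rewrite the tfstate_bucket_name line of a tfvars file in place.
# (GNU sed; on macOS use: sed -i '' ...)
set_bucket_name() {
  sed -i "s|^tfstate_bucket_name.*|tfstate_bucket_name = \"${TFSTATE_BUCKET}\"|" "$1"
}

# Apply to both environments:
# set_bucket_name terraform/envs/mgmt/terraform.tfvars
# set_bucket_name terraform/envs/dev/terraform.tfvars
```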

A.4. Initialize Terraform

Initialize Terraform to download all required providers and modules, pointing the S3 backend at the state bucket created above:

terraform -chdir=terraform/envs/mgmt init -backend-config="bucket=${TFSTATE_BUCKET}"
terraform -chdir=terraform/envs/dev init -backend-config="bucket=${TFSTATE_BUCKET}"

Deploy Management Environment

B.1. Plan Infrastructure Changes for the Management Environment

Preview the resources that Terraform will create:

terraform -chdir=terraform/envs/mgmt plan
B.2. Apply the Configuration for the Management Environment

Provision all resources defined in Terraform:

terraform -chdir=terraform/envs/mgmt apply

Confirm with yes when prompted.

After completion, Terraform will output key details such as the EKS cluster name and kubeconfig file path.

DNS Configuration and VPN Access

After successful deployment, Terraform will output important connection details. Among them, you will find the following values:

  • zone_ns — the list of Name Server (NS) records for your hosted zone
  • openvpn_get_config_command — the command to download your OpenVPN client configuration file
Update DNS NS Records

Copy the list of Name Servers (zone_ns) from the Terraform output:

Example:

zone_ns = tolist([
  "ns-1025.awsdns-00.org",
  "ns-1562.awsdns-03.co.uk",
  "ns-515.awsdns-00.net",
  "ns-69.awsdns-08.com",
])

Go to your domain registrar's DNS management console and update the NS records for your domain to match these values. DNS changes may take up to 24 hours to propagate.
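Once the registrar change is in, you can check the delegation from the command line. This sketch uses `dig` (from the dnsutils/bind-utils package) and a hypothetical domain; substitute your own:

```shell
DOMAIN_NAME="example.com"   # replace with the domain you delegated

echo "Checking NS delegation for ${DOMAIN_NAME}"
if command -v dig >/dev/null 2>&1; then
  # The answers should match the zone_ns list from the Terraform output.
  dig +short +time=2 +tries=1 NS "${DOMAIN_NAME}" @1.1.1.1 || echo "query failed (no network?)"
else
  echo "dig not installed"
fi
```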

Download the OpenVPN Configuration

Use the command provided in your Terraform output to download the VPN configuration file. Example:

aws s3 cp s3://bastion-abcd1234-openvpn-backup/openvpn-config/client.ovpn .

Import this file into your OpenVPN client. You can then securely access internal resources such as the EKS cluster or private services.

Configuring the Dev-to-Management Connection

To resolve the chicken-and-egg problem, we deploy the Dev VPC first, then uncomment the VPC Peering section for Management together with its data source.

Deploy DEV VPC
terraform -chdir=terraform/envs/dev plan -target module.vpc
terraform -chdir=terraform/envs/dev apply -target module.vpc
Create VPC Peering between MGMT and DEV

Uncomment VPC_PEERING sections within terraform/envs/mgmt/data.tf and terraform/envs/mgmt/network.tf files, then deploy terraform again:

terraform -chdir=terraform/envs/mgmt plan
terraform -chdir=terraform/envs/mgmt apply

Deploy Dev Environment

B.1. Plan Infrastructure Changes for the Dev Environment

Preview the resources that Terraform will create:

terraform -chdir=terraform/envs/dev plan
B.2. Apply the Configuration for the Dev Environment

Provision all resources defined in Terraform:

terraform -chdir=terraform/envs/dev apply

Confirm with yes when prompted.

After completion, Terraform will output key details such as the EKS cluster name and kubeconfig file path.

You can now export some parameters that will be needed in later steps:

export VPC_ID=<vpc_id>
export RDS_CREDENTIALS_SECRET_ARN=<rds_credentials_secret_arn>
export DOMAIN_NAME=<domain_name>
export ACM_CERTIFICATE_ARN=<acm_certificate_arn>
export EKS_CLUSTER_NAME=<eks_cluster_name>
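These values can also be read straight from the Terraform outputs instead of being copied by hand. The output names below are assumptions derived from the export list above; confirm them with `terraform -chdir=terraform/envs/dev output`:

```shell
# Read a raw Terraform output from the dev environment.
tf_out() { terraform -chdir=terraform/envs/dev output -raw "$1"; }

# Assumed output names -- verify against your actual outputs:
# export VPC_ID=$(tf_out vpc_id)
# export RDS_CREDENTIALS_SECRET_ARN=$(tf_out rds_credentials_secret_arn)
# export DOMAIN_NAME=$(tf_out domain_name)
# export ACM_CERTIFICATE_ARN=$(tf_out acm_certificate_arn)
# export EKS_CLUSTER_NAME=$(tf_out eks_cluster_name)
```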

Configuring the EKS Cluster

C.1. Connect to the Cluster

Once the cluster is ready, update your local kubeconfig (the cluster name appears in the Terraform output as eks_cluster_name):

EKS_CLUSTER_NAME=<your-cluster-name>
aws eks update-kubeconfig --region $REGION --name $EKS_CLUSTER_NAME

Verify connectivity:

kubectl get nodes

You should see a list of worker nodes in the Ready state.

C.2. Deploy Core Helm Charts

After the EKS cluster is created and your kubectl context is configured, deploy the core Helm components required for full functionality of the demo application.

These include:

  • external-dns — manages DNS records automatically based on Kubernetes resources.
  • aws-load-balancer-controller — provisions AWS Load Balancers for Kubernetes Services.
  • secrets-store-csi-driver — mounts external secrets into pods from AWS Secrets Manager or other providers.
kubectl create namespace $ENVIRONMENT

external-dns:

helm repo add --force-update external-dns https://kubernetes-sigs.github.io/external-dns/
helm upgrade --install external-dns external-dns/external-dns --namespace ${ENVIRONMENT} \
  --set provider.name=aws \
  --set aws.zoneType=public \
  --set policy=upsert-only \
  --set txtOwnerId=stocks-demo \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ENVIRONMENT}-eks-external-dns-role"

aws-load-balancer-controller:

helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=${EKS_CLUSTER_NAME} \
  --set serviceAccount.create=false \
  --set vpcId=${VPC_ID} \
  --set serviceAccount.name=aws-load-balancer-controller

csi-secrets-store:

helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system

Install the AWS Secrets and Configuration Provider (ASCP):

kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
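A quick sanity check that the driver and provider pods came up (assumes your kubectl context points at the new cluster; the label selectors are the defaults used by the chart and the ASCP installer, and may differ in your versions):

```shell
if command -v kubectl >/dev/null 2>&1; then
  # Both the CSI driver and the AWS provider run as DaemonSets in kube-system.
  kubectl get pods -n kube-system -l app=secrets-store-csi-driver || echo "cluster not reachable"
  kubectl get pods -n kube-system -l app=csi-secrets-store-provider-aws || echo "cluster not reachable"
else
  echo "kubectl not installed"
fi
```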

Application Deployment

D.1. Render Kustomize Manifests

Before applying your Kustomize overlays, ensure all environment variables are exported.

Many manifests in this repository (e.g. k8s/base/*.yaml) contain placeholders such as ${RDS_ENDPOINT} that will be substituted before applying them to the cluster.

Render all templates by substituting variables into new files:

for file in $(find k8s/overlays/templates -name "*.yaml"); do
  envsubst < "$file" > "k8s/overlays/${ENVIRONMENT}/$(basename "$file")"
done
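Note that envsubst replaces every ${VAR} reference it finds, which can clobber placeholders intended for Kubernetes itself. Passing an explicit variable list restricts substitution to the names you intend. A small demonstration (the variable names are illustrative):

```shell
export RDS_ENDPOINT="db.example.internal"

if command -v envsubst >/dev/null 2>&1; then
  # Only ${RDS_ENDPOINT} is replaced; ${UNRELATED} passes through untouched.
  printf 'host: ${RDS_ENDPOINT}\nkeep: ${UNRELATED}\n' | envsubst '${RDS_ENDPOINT}'
else
  echo "envsubst not installed (package: gettext)"
fi
```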
D.2. Apply the Kustomize Overlay
kubectl apply -k k8s/overlays/dev/

After successful application, verify that your services, ingresses, and deployments are active:

kubectl get all -A
kubectl get ingress -A

You can then verify that the app is available in your browser.

Note that, for security reasons, all patch- files in the k8s/overlays/dev folder are ignored by git.

TODOs:

  • [ ] Move external-dns IAM to MGMT, create the Helm release within kube-system, and attach only the dev- policy to the role
