Before deploying the EKS Fullstack Demo App, ensure the following tools are installed and configured on your local machine.
| Tool | Minimum Version | Description |
|---|---|---|
| Terraform | ≥ 1.13.3 | Infrastructure as Code tool used to provision AWS resources and manage the EKS cluster. |
| AWS CLI | ≥ 2.0 | Command-line interface for managing AWS accounts and credentials. Must be configured with aws configure. |
| kubectl | ≥ 1.32 | CLI for interacting with the Kubernetes cluster created by Terraform. |
| Kustomize | ≥ 5.7.1 | Tool for customizing Kubernetes YAML configurations; used for managing environment-specific manifests. |
| Helm | ≥ 3.16 | Kubernetes package manager used to install and manage Helm charts on the EKS cluster. |
- An AWS account with sufficient IAM permissions (EKS, EC2, IAM, VPC, CloudFormation).
- Configured AWS credentials on your local machine (~/.aws/credentials).
- SSH key pair available in AWS (if you plan to access worker nodes).
- Internet access for Terraform and Helm providers to pull remote modules and charts.
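To confirm the installed tools meet the minimum versions, you can run a quick check (all of these are standard version flags, though their output formats vary between releases):

```bash
# Print the installed version of each required tool.
terraform version
aws --version
kubectl version --client
kustomize version
helm version --short
```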
Follow these steps to set up and deploy the EKS Fullstack Demo App.
```bash
git clone git@github.com:yurll/terraform-eks-fullstack-demo.git
cd terraform-eks-fullstack-demo
```

The repository includes template files named terraform.tfvars.example with placeholder values.
Copy these files to terraform.tfvars and update the variables with your actual configuration:
```bash
cp terraform/envs/mgmt/terraform.tfvars.example terraform/envs/mgmt/terraform.tfvars
cp terraform/envs/dev/terraform.tfvars.example terraform/envs/dev/terraform.tfvars
```

Terraform uses an S3 bucket to store its state file.
This bucket must exist before running terraform init.
To ensure uniqueness, append a random suffix to your bucket name.
You can generate a random suffix and create the bucket with the following commands:
```bash
export AWS_ACCOUNT_ID=<your-account-id>
export REGION="eu-west-1"
export ENVIRONMENT="dev"
export TFSTATE_BUCKET="demo-terraform-eks-fullstack-$(openssl rand -hex 3)"
echo "Creating bucket: $TFSTATE_BUCKET"
aws s3api create-bucket \
--bucket "$TFSTATE_BUCKET" \
--region "$REGION" \
--create-bucket-configuration LocationConstraint="$REGION"
aws s3api put-bucket-versioning \
--bucket "$TFSTATE_BUCKET" \
--versioning-configuration Status=Enabled
echo "Bucket $TFSTATE_BUCKET created and versioning enabled."After creation, update the value of tfstate_bucket_name in your example variable file (e.g. terraform.tfvars.example) with the generated $TFSTATE_BUCKET.
Initialize Terraform in each environment to download all required providers and modules, passing the state bucket created above as backend configuration:
```bash
terraform -chdir=terraform/envs/mgmt init -backend-config="bucket=${TFSTATE_BUCKET}"
terraform -chdir=terraform/envs/dev init -backend-config="bucket=${TFSTATE_BUCKET}"
```

Preview the resources that Terraform will create:
```bash
terraform -chdir=terraform/envs/mgmt plan
```

Provision all resources defined in Terraform:
```bash
terraform -chdir=terraform/envs/mgmt apply
```

Confirm with yes when prompted.
After completion, Terraform will output key details such as the EKS cluster name and kubeconfig file path.
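To list the outputs again at any time, re-run Terraform's output command (shown here for the mgmt environment; the same works for dev):

```bash
# Print all outputs recorded in the state.
terraform -chdir=terraform/envs/mgmt output
```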
Among these outputs, you will find the following values:
- zone_ns — the list of Name Server (NS) records for your hosted zone
- openvpn_get_config_command — the command to download your OpenVPN client configuration file
Copy the list of Name Servers (zone_ns) from the Terraform output:
Example:
```
zone_ns = tolist([
"ns-1025.awsdns-00.org",
"ns-1562.awsdns-03.co.uk",
"ns-515.awsdns-00.net",
"ns-69.awsdns-08.com",
])
```

Go to your domain registrar's DNS management console and update the NS records for your domain to match these values. Propagation of DNS changes may take up to 24 hours.
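To check whether the delegation has propagated, you can query the NS records directly (replace example.com with your domain; dig ships with most systems' DNS utilities):

```bash
# The answers should match the zone_ns values from the Terraform output.
dig +short NS example.com
```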
Use the command provided in your Terraform output to download the VPN configuration file. Example:

```bash
aws s3 cp s3://bastion-abcd1234-openvpn-backup/openvpn-config/client.ovpn .
```

Import this file into your OpenVPN client. You can then securely access internal resources such as the EKS cluster or private services.
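On Linux you can also connect from the command line instead of importing the file into a GUI client, assuming the openvpn package is installed:

```bash
# Start a VPN session with the downloaded profile (runs in the foreground).
sudo openvpn --config client.ovpn
```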
To resolve "Chicken or the egg" question we have to deploy Dev VPC first, then uncomment VPC Peering sectionf for Managemt. Alongside, withh a data source
```bash
terraform -chdir=terraform/envs/dev plan -target module.vpc
terraform -chdir=terraform/envs/dev apply -target module.vpc
```

Uncomment the VPC_PEERING sections within the terraform/envs/mgmt/data.tf and terraform/envs/mgmt/network.tf files, then deploy Terraform again:
```bash
terraform -chdir=terraform/envs/mgmt plan
terraform -chdir=terraform/envs/mgmt apply
```

Preview the resources that Terraform will create:
```bash
terraform -chdir=terraform/envs/dev plan
```

Provision all resources defined in Terraform:
```bash
terraform -chdir=terraform/envs/dev apply
```

Confirm with yes when prompted.
After completion, Terraform will output key details such as the EKS cluster name and kubeconfig file path.
Here you can export some necessary parameters that will be useful in later steps:
```bash
export VPC_ID=<vpc_id>
export RDS_CREDENTIALS_SECRET_ARN=<rds_credentials_secret_arn>
export DOMAIN_NAME=<domain_name>
export ACM_CERTIFICATE_ARN=<acm_certificate_arn>
export EKS_CLUSTER_NAME=<eks_cluster_name>
```
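If the Terraform output names match these placeholders (an assumption; run terraform -chdir=terraform/envs/dev output to see the actual names), you can populate the variables without copy-pasting:

```bash
# Read each value straight from the dev environment's outputs.
export VPC_ID=$(terraform -chdir=terraform/envs/dev output -raw vpc_id)
export RDS_CREDENTIALS_SECRET_ARN=$(terraform -chdir=terraform/envs/dev output -raw rds_credentials_secret_arn)
export DOMAIN_NAME=$(terraform -chdir=terraform/envs/dev output -raw domain_name)
export ACM_CERTIFICATE_ARN=$(terraform -chdir=terraform/envs/dev output -raw acm_certificate_arn)
export EKS_CLUSTER_NAME=$(terraform -chdir=terraform/envs/dev output -raw eks_cluster_name)
```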
Once the cluster is ready, update your local kubeconfig (the cluster name can be found in the Terraform output as eks_cluster_name):

```bash
aws eks update-kubeconfig --region $REGION --name $EKS_CLUSTER_NAME
```

Verify connectivity:
```bash
kubectl get nodes
```

You should see a list of worker nodes in the Ready state.
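If the nodes are still joining, you can block until they report Ready (standard kubectl; the timeout is an arbitrary choice):

```bash
# Wait up to five minutes for every node to become Ready.
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```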
After the EKS cluster is created and your kubectl context is configured, deploy the core Helm components required for full functionality of the demo application.
These include:
- external-dns — manages DNS records automatically based on Kubernetes resources.
- aws-load-balancer-controller — provisions AWS Load Balancers for Kubernetes Services.
- secrets-store-csi-driver — mounts external secrets into pods from AWS Secrets Manager or other providers.
```bash
kubectl create namespace $ENVIRONMENT
```

external-dns:

```bash
helm repo add --force-update external-dns https://kubernetes-sigs.github.io/external-dns/
helm upgrade --install external-dns external-dns/external-dns --namespace ${ENVIRONMENT} \
--set provider.name=aws \
--set aws.zoneType=public \
--set policy=upsert-only \
--set txtOwnerId=stocks-demo \
--set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${ENVIRONMENT}-eks-external-dns-role"
```
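To confirm the controller came up, you can watch its rollout; the deployment name external-dns follows the chart's default naming, so adjust it if you changed the release name:

```bash
# Wait for the external-dns deployment to become available.
kubectl -n ${ENVIRONMENT} rollout status deployment/external-dns
```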
aws-load-balancer-controller:

```bash
helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=${EKS_CLUSTER_NAME} \
--set serviceAccount.create=false \
--set vpcId=${VPC_ID} \
--set serviceAccount.name=aws-load-balancer-controller
```
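Note that serviceAccount.create=false assumes Terraform already created the aws-load-balancer-controller service account with the required IAM role annotation. A quick rollout check (the deployment name matches the Helm release name used above):

```bash
# Wait for the controller deployment to become available.
kubectl -n kube-system rollout status deployment/aws-load-balancer-controller
```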
csi-secrets-store:

```bash
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
--namespace kube-system
```

Install the AWS Provider and Config Provider (ASCP):
```bash
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
```
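To verify the provider is running, you can check its DaemonSet (the name below is what the upstream installer manifest currently creates; confirm against the manifest if the command finds nothing):

```bash
# The ASCP runs as a DaemonSet alongside the CSI driver in kube-system.
kubectl -n kube-system get daemonset csi-secrets-store-provider-aws
```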
Before applying your Kustomize overlays, ensure all environment variables are exported.
Many manifests in this repository (e.g. k8s/base/*.yaml) contain placeholders such as ${RDS_ENDPOINT} that will be substituted before applying them to the cluster.
Render all templates by substituting variables into new files:
```bash
for file in $(find k8s/overlays/templates -name "*.yaml"); do
  envsubst < "$file" > "k8s/overlays/${ENVIRONMENT}/$(basename "$file")"
done
```

Apply the rendered overlay and inspect the resulting resources:

```bash
kubectl apply -k k8s/overlays/dev/
kubectl get all -A
kubectl get ingress -A
```

You can then verify that the app is available in your browser.
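Once DNS has propagated and the load balancer is serving traffic, you can also probe the endpoint from the shell; the hostname below is illustrative, so use whatever host your ingress actually defines:

```bash
# Expect an HTTP 200 or a redirect once the app is reachable.
curl -I "https://app.${DOMAIN_NAME}"
```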
Please note that, for security reasons, all patch- files in the k8s/overlays/dev folder are ignored by git.
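In practice this corresponds to a .gitignore rule along these lines (illustrative; check the repository's actual .gitignore), so locally rendered patches containing secrets are never committed:

```
k8s/overlays/dev/patch-*
```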
TODO:

- [ ] Move the external-dns IAM role to MGMT, create the Helm release within kube-system, and attach only the dev- policy to the role.