As a DevOps engineer in a modern tech company, you're tasked with deploying a scalable microservices architecture that demonstrates containerization, service orchestration, and database persistence. This project showcases a complete intelligence pipeline with Node.js Express, Go Gin, and PostgreSQL working together in a containerized environment.
- Containerization fundamentals with Docker
- Microservices architecture design
- Service orchestration with Docker Compose
- Database persistence and volume management
- Health checks and service dependencies
- Environment-based configuration management
- Cross-service communication patterns
Set up environment variables for secure configuration management
```bash
# Copy environment template
cp env.example .env

# Configure your environment variables
# - Database credentials
# - Service ports
# - Security settings
```

Configure docker-compose.yml for all the services, and configure PostgreSQL with a persistent volume and initialization scripts in the database directory.
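The wiring described above can be sketched in docker-compose.yml roughly as follows. This is only a sketch: the `postgres:16` image tag and the `./go-server` and `./nodejs-server` build contexts are assumptions, so adjust them to the repository's actual layout.

```yaml
version: "3.8"

services:
  postgres-db:
    image: postgres:16            # assumed image tag
    env_file: .env
    ports:
      - "${POSTGRES_PORT}:5432"
    volumes:
      # Bind mount onto the EBS-backed directory for persistence
      - /mnt/xvdb/postgres-data:/var/lib/postgresql/data
      # Init script runs automatically on first startup
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped

  go-server:
    build: ./go-server            # assumed build context
    ports:
      - "${GO_PORT}:8086"
    restart: unless-stopped

  nodejs-server:
    build: ./nodejs-server        # assumed build context
    ports:
      - "${NODE_PORT}:3000"
    depends_on:
      postgres-db:
        condition: service_healthy
    restart: unless-stopped
```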
```sql
-- Automatic table creation on container startup
CREATE TABLE IF NOT EXISTS process_logs (
    id SERIAL PRIMARY KEY,
    time TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    processing_time INTERVAL NOT NULL
);
```

Spin up an EC2 instance on AWS and, as the root user, install the required tooling: Docker, git, and docker-compose.
```bash
# Update system
yum update -y                 # For Amazon Linux
# OR
apt update && apt upgrade -y  # For Ubuntu

# Install Docker
yum install docker -y
# OR
apt install docker.io -y

# Start Docker
systemctl start docker
systemctl enable docker

# Install docker-compose (release assets use a lowercase OS name)
curl -L "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Install git
yum install git -y
# OR
apt install git -y

# Confirm installation
docker --version
docker-compose --version
```
Format the attached EBS volume and mount it onto a directory on the EC2 instance for database storage. In the example below, the volume xvdb is mounted on the /mnt/xvdb directory.
```bash
# Check the attached volume (lists block devices, mounted or not)
lsblk
# Format the volume attached to the EC2 instance
sudo mkfs.ext4 /dev/xvdb
# Mount it on a directory
sudo mkdir -p /mnt/xvdb
sudo mount /dev/xvdb /mnt/xvdb
# Verify the mount
df -h
```
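Note that a mount done this way does not survive a reboot. To remount automatically, you can add an /etc/fstab entry; the fragment below is illustrative, assuming the device stays at /dev/xvdb and is formatted ext4:

```
/dev/xvdb  /mnt/xvdb  ext4  defaults,nofail  0  2
```

The nofail option lets the instance boot even if the volume is detached.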
Create a deploy user and set up its permissions, since it will be responsible for deployments (good practice, though you can also do everything as the root user).
```bash
# Add a user
sudo adduser deploy
# Add the user to the docker and wheel groups (to run docker and sudo commands)
sudo usermod -aG docker deploy
sudo usermod -aG wheel deploy
# Log in as deploy
su - deploy
```
Set up an SSH key:

```bash
# Generate an SSH key pair
ssh-keygen
cd ~/.ssh
```

In authorized_keys, remove any restrictions prepended to the key entry so the deploy user can open a normal SSH session.
Give the postgres service user permission on the mounted directory (for example /mnt/xvdb/postgres-data) so that it can access and write into it. You can find its user ID once the container is created via docker-compose.
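With the bind mount above, the container writes as its internal postgres user, which in the official image defaults to UID/GID 999. That default is an assumption here; confirm with `docker-compose exec postgres-db id postgres` once the stack is up. Run as root, or prefix each command with sudo:

```shell
# Create the data directory on the mounted volume
mkdir -p /mnt/xvdb/postgres-data
# Hand it to the container's postgres user (UID/GID 999 is the official
# image default; replace with the ID reported inside your container)
chown -R 999:999 /mnt/xvdb/postgres-data
# Restrict access to the database files
chmod 700 /mnt/xvdb/postgres-data
```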
```bash
# Log in as the deploy user
su - deploy
# Go to the home directory and clone the repository
cd ~
git clone https://github.com/HarshSharma0801/The-Containerized-Intelligence-Pipeline.git
# Cd into the repository and set up the env file
cd The-Containerized-Intelligence-Pipeline
vim .env

# Start docker compose
docker-compose up -d --build
# Verify using
docker ps
```
- Create `.github/workflows/deploy.yml`
- Define GitHub Secrets:
  - `EC2_SSH_KEY` -> private SSH key of the deploy user, found in its `~/.ssh` folder (needed to SSH into EC2 as deploy without a password)
  - `EC2_HOST` -> EC2 host, found in the EC2 dashboard
  - `EC2_USER` -> deploy
  - `ENV_FILE` -> environment file contents (check the example env file in the repository)
- Push to the main branch and watch the Actions tab
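A minimal sketch of what `.github/workflows/deploy.yml` could look like, assuming the third-party appleboy/ssh-action for the SSH step; the script body simply mirrors the manual deploy commands above, so adapt it to your actual pipeline:

```yaml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to EC2 over SSH
        uses: appleboy/ssh-action@v1.0.3   # assumed action version
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/The-Containerized-Intelligence-Pipeline
            git pull origin main
            echo "${{ secrets.ENV_FILE }}" > .env
            docker-compose up -d --build
```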
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│   Client/User   │────▶│   Node.js API   │────▶│    Go Compute   │
│                 │     │     Gateway     │     │     Service     │
│   Port: ANY     │     │   Port: 3000    │     │   Port: 8086    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │
                                 ▼
                        ┌─────────────────┐
                        │   PostgreSQL    │
                        │    Database     │
                        │   Port: 5432    │
                        └─────────────────┘
```
```
# Node.js service health
GET http://localhost:3000/health

# Go service health
GET http://localhost:8086/health

# Trigger computation pipeline
GET http://localhost:3000/calculate
```

Response:
```json
{
  "result": {
    "time": 0.05234,
    "operation": "prime_calculation",
    "processedAt": "2024-01-15T10:30:00Z"
  },
  "processingTime": 150,
  "timestamp": "2024-01-15T10:30:00.000Z"
}
```

```sql
CREATE TABLE process_logs (
    id SERIAL PRIMARY KEY,
    time TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
    processing_time INTERVAL NOT NULL
);
```

| Variable | Default | Description |
|---|---|---|
| `NODE_PORT` | 3000 | Node.js API Gateway port |
| `GO_PORT` | 8086 | Go computation service port |
| `POSTGRES_USER` | postgres | Database username |
| `POSTGRES_PASSWORD` | password | Database password |
| `POSTGRES_DB` | logs | Database name |
| `POSTGRES_PORT` | 5432 | Database port |
- nodejs-server: http://localhost:3000
- go-server: http://localhost:8086
- postgres-db: localhost:5432
PostgreSQL data is persisted using Docker volumes:
- Volume Mount: `/mnt/xvdb/postgres-data:/var/lib/postgresql/data`
- Init Scripts: `./database/init.sql` automatically executed on first run
```yaml
nodejs-server:
  depends_on:
    postgres-db:
      condition: service_healthy
    go-server:
      condition: service_healthy
```

All services include health checks:
- Interval: 30 seconds
- Timeout: 10 seconds
- Retries: 3 attempts
- Restart Policy: unless-stopped
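In docker-compose.yml, these settings map to a per-service `healthcheck` block. A sketch for the database service follows; the `pg_isready` probe command is an assumption, so adjust it to whatever check your services actually expose:

```yaml
postgres-db:
  restart: unless-stopped
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 30s
    timeout: 10s
    retries: 3
```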
```bash
# Start all services (rebuilding images)
docker-compose up --build

# Stop all services
docker-compose down

# Stop all services and remove volumes
docker-compose down -v

# All services
docker-compose logs

# Specific service
docker-compose logs nodejs-server
docker-compose logs go-server
docker-compose logs postgres-db

# Scale Go service for higher computation load
docker-compose up --scale go-server=3
```

- Port conflicts: Ensure ports 3000, 8086, and 5432 are available
- Environment variables: Verify the `.env` file exists and is properly configured
- Docker permissions: Ensure the Docker daemon is running and accessible
- Volume permissions: Check filesystem permissions for the PostgreSQL data directory
```bash
# Check container status
docker-compose ps

# Inspect container logs
docker-compose logs -f [service-name]

# Execute commands in running containers
docker-compose exec nodejs-server sh
docker-compose exec go-server sh
docker-compose exec postgres-db psql -U postgres -d logs
```

This containerized intelligence pipeline demonstrates modern DevOps practices with microservices, containerization, and automated orchestration, making it a solid starting point for learning deployment fundamentals and scaling strategies.