A serverless image processing system that automatically processes images uploaded to S3, using AWS Lambda and Pillow to create multiple variants: compressed versions, format conversions, and thumbnails.
```
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    Upload S3     │────▶│ Lambda Function  │────▶│   Processed S3   │
│      Bucket      │     │  + Pillow Layer  │     │      Bucket      │
└──────────────────┘     └──────────────────┘     └──────────────────┘
         │                        │                        │
         │               ┌──────────────────┐              │
         │               │    CloudWatch    │              │
         └──────────────▶│       Logs       │◀─────────────┘
                         └──────────────────┘
```
- Image Upload → User uploads an image to the S3 upload bucket
- S3 Event Trigger → S3 automatically triggers the Lambda function
- Image Processing → Lambda processes the image using the Pillow library
- Multiple Variants → Creates compressed, WebP, PNG, and thumbnail versions
- Storage → Saves all variants to the processed S3 bucket
- Logging → Detailed processing logs in CloudWatch
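The trigger-to-processing hand-off can be sketched with a minimal event parser (a hypothetical simplification, not the project's actual `lambda_function.py`):

```python
import urllib.parse

def parse_s3_event(event: dict) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from an S3 put-event payload."""
    pairs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in event payloads (spaces become '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        pairs.append((bucket, key))
    return pairs

sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "upload-bucket"},
                "object": {"key": "photos/my+image.jpg"}}}
    ]
}
print(parse_s3_event(sample_event))  # [('upload-bucket', 'photos/my image.jpg')]
```

The real handler would loop over these pairs, download each object with boto3, and run the Pillow pipeline described below.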
- **Serverless Architecture** - No servers to manage, pay only for usage
- **Multiple Image Formats** - JPEG, PNG, WebP support
- **Smart Resizing** - Automatic resizing for large images (max 4096px)
- **Compression Options** - Multiple quality levels (60%, 85%)
- **Thumbnail Generation** - 300x300 thumbnails created automatically
- **Format Conversion** - Convert between JPEG, PNG, and WebP
- **Detailed Logging** - Comprehensive CloudWatch logging
- **Secure** - Private S3 buckets with proper IAM policies
- **Infrastructure as Code** - Complete Terraform deployment
- **Docker-based Layer** - Cross-platform Pillow layer building
- **Cost Effective** - Pay only for actual processing time
```
Day-18/
├── lambda/                       # Lambda function code
│   ├── lambda_function.py        # Main processing logic
│   └── lambda_function_basic.py  # Fallback without PIL
├── scripts/                      # Deployment and build scripts
│   ├── build_layer_simple.sh     # Build Pillow layer (recommended)
│   ├── build_layer_docker.ps1    # PowerShell version
│   ├── deploy.sh                 # Full deployment script
│   └── destroy.sh                # Cleanup script
├── terraform/                    # Infrastructure as Code
│   ├── main.tf                   # Core AWS resources
│   ├── variables.tf              # Input variables
│   ├── outputs.tf                # Output values
│   ├── data-sources.tf           # Data sources
│   └── pillow_layer.zip          # Generated layer (after build)
├── README.md                     # This file
└── .gitignore                    # Git ignore rules
```
- AWS CLI configured with appropriate permissions
- Terraform >= 1.0
- Docker (for building Pillow layer)
- Git Bash (Windows) or Bash (Linux/Mac)
```bash
git clone https://github.com/VanshShah174/serverless-image-processor-terraform-lambda.git
cd serverless-image-processor-terraform-lambda
```

```bash
aws configure
# Enter your AWS Access Key ID, Secret Access Key, and region
```

**Option A: Simple Build (Recommended)**

```bash
cd scripts
./build_layer_simple.sh
```

**Option B: PowerShell (Windows)**

```powershell
cd scripts
.\build_layer_docker.ps1
```

```bash
# Full deployment
./deploy.sh

# Or manual deployment
cd ../terraform
terraform init
terraform plan
terraform apply
```

```bash
# Upload a test image
aws s3 cp your-image.jpg s3://your-upload-bucket-name/

# Check processed results
aws s3 ls s3://your-processed-bucket-name/

# View logs
aws logs describe-log-groups --log-group-name-prefix "/aws/lambda"
```

| Variable | Description | Type | Default |
|---|---|---|---|
| `aws_region` | AWS region for deployment | string | `"us-east-1"` |
| `project_name` | Project name prefix | string | `"image-processor"` |
| `environment` | Environment name | string | `"dev"` |
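These variables can also be set without editing the defaults, for example via a `terraform.tfvars` file (hypothetical values shown):

```hcl
# terraform.tfvars -- values here override the defaults in variables.tf
aws_region   = "eu-west-1"
project_name = "my-image-processor"
environment  = "prod"
```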
Edit `terraform/variables.tf` to customize:

```hcl
variable "project_name" {
  description = "Name prefix for all resources"
  type        = string
  default     = "my-image-processor"
}

variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}
```

Supported input formats:

- JPEG/JPG
- PNG
- BMP
- TIFF
- WebP

For each uploaded image `photo.jpg`, the system creates:

- `photo_compressed_abc123.jpg` - High-quality (85%) compressed JPEG
- `photo_low_abc123.jpg` - Lower-quality (60%) compressed JPEG
- `photo_webp_abc123.webp` - WebP format (85% quality)
- `photo_png_abc123.png` - PNG format (lossless)
- `photo_thumbnail_abc123.jpg` - 300x300 thumbnail
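This naming scheme can be sketched as a small helper (illustrative only; the actual Lambda may derive the `abc123` segment differently):

```python
import os
import uuid

def variant_key(original_key: str, suffix: str, ext: str) -> str:
    """Build an output object key like photo_thumbnail_abc123.jpg."""
    base, _ = os.path.splitext(os.path.basename(original_key))
    uid = uuid.uuid4().hex[:6]  # short unique id segment, e.g. 'abc123'
    return f"{base}_{suffix}_{uid}{ext}"

print(variant_key("uploads/photo.jpg", "thumbnail", ".jpg"))
```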
- Smart Resizing - Images larger than 4096px are automatically resized
- Format Conversion - RGBA/LA images converted to RGB for JPEG compatibility
- Optimization - All outputs are optimized for size
- Metadata Preservation - Original filename and processing info stored in S3 metadata
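The smart-resizing rule shrinks the longest side to 4096px while preserving aspect ratio. As a pure-function sketch (inside the Lambda, Pillow's `Image.thumbnail` does this work):

```python
MAX_DIM = 4096  # images larger than this on either side are downscaled

def fit_within(width: int, height: int, max_dim: int = MAX_DIM) -> tuple[int, int]:
    """Compute the target size for an oversized image, keeping aspect ratio."""
    longest = max(width, height)
    if longest <= max_dim:
        return width, height  # already small enough, leave untouched
    scale = max_dim / longest
    return max(1, round(width * scale)), max(1, round(height * scale))

print(fit_within(8000, 6000))  # (4096, 3072)
```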
| Resource | Usage | Estimated Cost |
|---|---|---|
| Lambda Execution | 1000 invocations/month (1GB, 30s avg) | ~$0.20 |
| Lambda Requests | 1000 requests/month | ~$0.20 |
| S3 Storage | 10GB stored images | ~$0.25 |
| S3 Requests | PUT/GET requests | ~$0.10 |
| CloudWatch Logs | 1GB logs/month | ~$0.50 |
| Data Transfer | Minimal | ~$0.05 |
| **Total Estimated** | | **~$1.30/month** |
- Lifecycle Policies - Archive old processed images to IA/Glacier
- Log Retention - Set CloudWatch log retention to 7-30 days
- Right-sizing - Adjust Lambda memory based on actual usage
- Batch Processing - Process multiple images in single invocation if possible
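The lifecycle-policy idea can be sketched in Terraform (hypothetical resource names; not part of the current `main.tf`):

```hcl
# Hypothetical: archive processed variants to cheaper storage classes over time
resource "aws_s3_bucket_lifecycle_configuration" "processed" {
  bucket = aws_s3_bucket.processed.id

  rule {
    id     = "archive-old-variants"
    status = "Enabled"

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = 90
      storage_class = "GLACIER"
    }
  }
}
```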
Modify `lambda/lambda_function.py` to add custom processing:

```python
# Add custom variants
variants = [
    {'format': 'JPEG', 'quality': 95, 'suffix': 'high'},
    {'format': 'JPEG', 'quality': 50, 'suffix': 'mobile'},
    {'format': 'WEBP', 'quality': 90, 'suffix': 'webp-high'},
    # Add your custom variants
]
```

```bash
# Development
terraform workspace new dev
terraform apply -var="environment=dev"

# Production
terraform workspace new prod
terraform apply -var="environment=prod"
```

Build the layer with additional libraries:

```bash
# Modify build_layer_simple.sh
docker exec $CONTAINER_ID bash -c "
pip install Pillow==10.4.0 opencv-python-headless boto3 -t /tmp/python/lib/python3.12/site-packages/
"
```

Monitor these key metrics:
- Lambda Duration - Processing time per image
- Lambda Errors - Failed processing attempts
- Lambda Invocations - Total processing requests
- S3 Object Count - Images processed over time
**1. PIL Import Error**

```
[ERROR] Runtime.ImportModuleError: No module named 'PIL'
```

Solution: Rebuild the Lambda layer using `./build_layer_simple.sh`

**2. Lambda Timeout**

```
[ERROR] Task timed out after 60.00 seconds
```

Solution: Increase the timeout in `terraform/main.tf`:

```hcl
resource "aws_lambda_function" "image_processor" {
  timeout = 300  # 5 minutes
}
```

**3. Memory Issues**

```
[ERROR] Runtime exited with error: signal: killed
```

Solution: Increase the memory allocation:

```hcl
resource "aws_lambda_function" "image_processor" {
  memory_size = 2048  # 2GB
}
```

View Lambda logs:

```bash
aws logs tail /aws/lambda/image-processor-dev-processor --follow
```

Test the Lambda function:

```bash
aws lambda invoke --function-name image-processor-dev-processor \
  --payload '{"test": true}' response.json
```

- Private S3 Buckets - No public access allowed
- IAM Least Privilege - Lambda has minimal required permissions
- Encryption - S3 server-side encryption enabled
- VPC Support - Can be deployed in VPC for additional security
- Resource Tagging - All resources properly tagged for governance
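As an illustration of the private-bucket posture, an S3 public access block in Terraform looks like this (bucket resource name assumed, not taken from the project's `main.tf`):

```hcl
# Hypothetical: block all public access on the upload bucket
resource "aws_s3_bucket_public_access_block" "upload" {
  bucket                  = aws_s3_bucket.upload.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```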
- Memory Allocation - Optimized for image processing (1024MB default)
- Timeout - Set to 60 seconds for large images
- Runtime - Python 3.12 for best performance
- Layer Caching - Pillow layer cached across invocations
- Transfer Acceleration - Can be enabled for faster uploads
- Multipart Upload - Automatic for large files
- Versioning - Enabled for data protection
- Lifecycle Policies - Can be added for cost optimization
```bash
# Test Lambda function locally
cd lambda
python -m pytest test_lambda_function.py
```

```bash
# Upload test images
aws s3 cp test-images/ s3://your-upload-bucket/ --recursive

# Verify processing
aws s3 ls s3://your-processed-bucket/
```

```bash
# Upload multiple images simultaneously
for i in {1..10}; do
  aws s3 cp test-image.jpg s3://your-upload-bucket/test-$i.jpg &
done
wait
```

```yaml
name: Deploy Image Processor

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
      - name: Build Layer
        run: ./scripts/build_layer_simple.sh
      - name: Deploy
        run: |
          cd terraform
          terraform init
          terraform apply -auto-approve
```

```bash
# Using script
./scripts/destroy.sh

# Or manually
cd terraform
terraform destroy
```

```bash
# Remove unused Docker images
docker system prune -a
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter issues or have questions:
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: vanshshah174@gmail.com
- Video Processing - Add support for video thumbnail generation
- AI Integration - Image recognition and tagging
- API Gateway - REST API for direct uploads
- Web Interface - Simple upload interface
- Batch Processing - Process existing S3 objects
- Custom Watermarks - Add watermark support
- EXIF Processing - Preserve/modify image metadata
⭐ Star this repository if it helped you build awesome serverless image processing! ⭐
