Base FastAPI project for general REST API applications, built with clean architecture, best practices, and clean code. For real production use, additional security layers, hardened Docker Compose configuration, Nginx tuning, and further hardening are recommended.
- Tech Stack
- Prerequisites
- Installation & Setup
- Usage
- Database Management
- Project Structure
- Code Quality & Formatting
- API Endpoints
- Deployment
- Monitoring & Logging
- Configuration Files
- Troubleshooting
- Contributing
- License
- Author
- Python 3.12+ - Modern Python with latest language features and performance improvements
- FastAPI - High-performance async web framework with automatic OpenAPI documentation
- PostgreSQL 17 - Robust, production-grade relational database management system
- aiokafka - Asynchronous Kafka client for event streaming and producer/consumer workflows
- Nginx - High-performance reverse proxy and load balancer
- Docker - Containerized, reproducible deployment across environments
- SQLAlchemy 2.0 - Async ORM and query builder for Python with full PostgreSQL support
- Redis - In-memory data store used for caching and session management
- Alembic - Lightweight database migration tool for SQLAlchemy schema version control
- Pytest - Comprehensive testing framework for unit and integration tests
- Nginx - Reverse proxy with custom error pages and logging
- Grafana - Metrics visualization and monitoring dashboards
- Loki - Log aggregation and querying system
- Promtail - Log collector and shipper to Loki
- Docker Compose - Multi-container orchestration
- Database Migrations - Automated schema version control with Alembic
- PostgreSQL Integration - Production-ready database with persistent storage
- Argon2 - Standard secure password hashing
- Reverse Proxy - Nginx with environment-based configuration
- JWT Authentication - Stateless token-based authentication and authorization
- Centralized Logging - Grafana/Loki stack for log aggregation and visualization
- RESTful API - Clean and scalable API architecture following Clean Architecture principles
- Environment-based Configuration - Separate dev/prod settings with .env files
- Custom Error Pages - Branded error handling through Nginx
- Log Management - Nginx access and error logs with Promtail collection
- Hot Reload - Development mode with automatic code reloading
- Database Shell Access - Direct PostgreSQL psql command-line interface
- Code Quality Tools - Automated formatting and linting
Before starting, ensure you have the following installed:
- Docker - Container platform
- Docker Compose - Multi-container orchestration
- Make - Build automation
- Git - Version control
Optional: Python 3.12+ if you prefer running the app without Docker.
```bash
git clone https://github.com/Victor-Zarzar/fastapi-clean-architecture
cd fastapi-clean-architecture
zed .
```
Copy the example environment file and configure your credentials:
```bash
cp .env-example .env.dev
```
Key configurations needed:
- Database: PostgreSQL credentials (user, password, database)
- API Settings: Host, port, and other application settings
- Environment: Development or production mode
Important: Never commit your `.env.dev` or `.env.prod` files to version control. They should be in `.gitignore`.
```bash
make build-dev
```
This will:
- Build the Docker image with tag `api-cost-map-web-api:1.0.0`
- Set up the PostgreSQL database container
- Configure networking between services
View all available Make commands:
```bash
make help
```
Start the development server (port 8000):
```bash
make up-dev
```
Access the API at:
- API: `http://localhost:8000`
- API Documentation: `http://localhost:8000/docs`
- Alternative Docs: `http://localhost:8000/redoc`
```bash
make build-dev        # Build development Docker image
make up-dev           # Start development server with hot reload
make down-dev         # Stop development server
make logs-dev         # View development logs in real-time
make test             # Run tests with pytest
make format           # Format code with Ruff
make lint             # Lint code with pylint
make shell            # Access container bash shell
make migrate          # Run database migrations
make access-db-local  # Access PostgreSQL shell directly
```
Access the database with the credentials from your `.env.dev`:
- Url: postgresql+psycopg://example...
- Host: localhost
- Port: 5432
- Database: costdb
- User: admin
- Password: pass
```bash
make clean      # Clean local environment and containers
make clean-all  # Remove all containers, volumes, and images
```
View logs:
```bash
make logs-dev   # Development logs
make logs-prod  # Production logs
```
Or directly with Docker:
```bash
docker logs -f api-cost-map
docker logs -f postgresql-server
```
Run the test suite:
```bash
make test
```
Or manually with Docker:
```bash
docker compose -f docker-compose.dev.yaml exec web pytest
```
FastAPI provides automatic interactive API documentation:
- Swagger UI: `http://localhost:8000/docs`
- ReDoc: `http://localhost:8000/redoc`
This project uses Alembic for database schema version control.
Create a new migration:
```bash
# Inside the container
docker exec -it api-cost-map alembic revision --autogenerate -m "Description of changes"
```
Apply pending migrations:
```bash
make migrate
# or
docker exec -it api-cost-map python -m alembic upgrade head
```
Roll back the last migration:
```bash
docker exec -it api-cost-map python -m alembic downgrade -1
```
Access the PostgreSQL shell:
```bash
make access-db-local
```
This opens an interactive PostgreSQL session where you can run SQL queries directly.
Example queries:
```sql
-- List all tables in the current schema
\dt

-- Describe a table structure
\d your_table_name

-- Query data
SELECT * FROM your_table_name LIMIT 10;

-- Quit the session
\q
```
api-cost-map/
├── app/
│ ├── api/
│ │ └── v1/
│ │ └── endpoints/
│ │ ├── auth.py # Authentication routes
│ │ ├── user.py # User management routes
│ │ ├── cost.py # Cost management routes
│ │ └── ...
│ ├── repository/
│ │ ├── user_repository.py # User data access layer
│ │ └── cost_repository.py # Cost data access layer
│ ├── services/
│ │ ├── auth_service.py # Authentication business logic
│ │ ├── user_service.py # User business logic
│ │ └── cost_service.py # Cost business logic
│ ├── schemas/ # Pydantic request/response models
│ ├── models/ # SQLAlchemy ORM models
│ ├── core/
│ │ ├── config.py # App settings and environment config
│ │ ├── exceptions.py # Custom exception handlers
│ │ └── security.py # Auth utilities (JWT, hashing)
│ └── db/
│ └── database.py # Database session and engine setup
├── alembic/ # Database migrations
│ ├── versions/ # Auto-generated migration files
│ └── env.py # Alembic environment configuration
├── nginx/ # Nginx reverse proxy
│ ├── nginx.conf.template # Config template with env vars
│ └── errors/ # Custom branded error pages
├── grafana/ # Grafana monitoring
│ └── provisioning/ # Auto-provisioned datasources & dashboards
├── logs/ # Application logs (gitignored)
│ └── nginx/ # Nginx access and error logs
├── tests/ # Test suite
│ ├── test_api.py # API endpoint integration tests
│ └── test_services.py # Service layer unit tests
├── docker-compose.dev.yaml # Development orchestration
├── docker-compose.prod.yaml # Production orchestration
├── Dockerfile # Development image
├── Dockerfile.prod # Production image (optimized)
├── entrypoint.sh # Container startup & migration script
├── loki-config.yml # Loki log aggregation configuration
├── promtail-config.yml # Promtail log collection configuration
├── requirements.txt # Python dependencies
├── .env.example # Environment variables template
├── .env.dev # Development environment (not in git)
├── .env.prod # Production environment (not in git)
├── alembic.ini # Alembic CLI configuration
├── pyproject.toml # Ruff, pylint, and project metadata
├── Makefile # Build and task automation
└── README.md # Project documentation
This project uses Ruff for fast Python linting and code formatting. The configuration is defined in pyproject.toml:
```toml
[tool.ruff]
line-length = 88
target-version = "py312"

[tool.ruff.format]
quote-style = "double"
indent-style = "space"

[tool.ruff.lint]
select = ["E", "W", "F", "I", "B", "UP"]
ignore = ["E501"]
fixable = ["ALL"]
unfixable = []
```
Configuration details:
- Line length: 88 characters (Black compatible)
- Target version: Python 3.12
- Quote style: Double quotes for strings
- Linting rules:
  - `E`, `W` - pycodestyle errors and warnings
  - `F` - Pyflakes
  - `I` - isort (import sorting)
  - `B` - flake8-bugbear
  - `UP` - pyupgrade (modern Python syntax)
- Ignored rules: `E501` (line too long)
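One of the classic bugs the `B` (flake8-bugbear) rules catch is the shared mutable default argument (rule B006). A minimal sketch of the pitfall and the fix it steers you toward:

```python
def append_bad(item, bucket=[]):  # B006: the default list is created once
    # Every call without an explicit bucket mutates the SAME list object.
    bucket.append(item)
    return bucket


def append_good(item, bucket=None):  # the idiomatic fix
    if bucket is None:
        bucket = []  # a fresh list on every call
    bucket.append(item)
    return bucket


first = append_bad(1)
second = append_bad(2)  # same list object as `first`, now [1, 2]
safe = append_good(1)   # independent list, [1]
```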
All code quality tools run inside the Docker container:
```bash
# Format code with Ruff
make format

# Lint code with pylint
make lint
```
Or directly with Docker Compose:
```bash
# Format code
docker compose -f docker-compose.dev.yaml exec web ruff format app

# Lint code
docker compose -f docker-compose.dev.yaml exec web pylint app
```
- POST `/api/v1/auth/signup` - Create new user
- POST `/api/v1/auth/signin` - Sign in
- POST `/api/v1/auth/signout` - Sign out
- POST `/api/v1/auth/refresh` - Refresh access token
- POST `/api/v1/auth/verify-email` - Verify email
- POST `/api/v1/auth/resend-verification-email` - Resend verification email
- POST `/api/v1/auth/forgot-password` - Forgot password
- POST `/api/v1/auth/reset-password` - Reset password
- POST `/api/v1/auth/change-password` - Change password

- GET `/api/v1/costs` - List all costs
- POST `/api/v1/costs` - Create new cost
- GET `/api/v1/costs/{cost_id}` - Get cost by ID
- DELETE `/api/v1/costs/{cost_id}` - Delete cost (Admin only)

- GET `/api/v1/users/me` - Get current user
- GET `/api/v1/users` - List all users
- GET `/api/v1/users/{user_id}` - Get user by ID

- GET `/api/v1/admin` - Get admin info (Admin only)
- GET `/api/v1/health` - Health check (Admin only)

- POST `/api/v1/kafka/publish` - Publish Kafka message
Depending on your configuration, endpoints may require authentication. Check the API documentation for details:
- Development: `http://localhost:8000/docs`
- Production (via Nginx): `http://localhost/docs`
Build and run the production container:
```bash
# Start production environment
make up-prod

# Check logs
make logs-prod

# Stop when needed
make down-prod
```
The production environment includes:
- FastAPI Application - Main API service (port 8006 internal)
- PostgreSQL Database - Persistent data storage (port 5432)
- Nginx - Reverse proxy (port 80)
- Grafana - Monitoring dashboard (port 3000)
- Loki - Log aggregation (port 3100)
- Promtail - Log collection from Nginx
Once deployed, access the services at:
- API: `http://localhost` (via Nginx reverse proxy)
- API Docs: `http://localhost/docs`
- Grafana Dashboard: `http://localhost:3000`
- PostgreSQL: `localhost:5432`
- Loki API: `http://localhost:3100`
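A quick way to sanity-check the services above after deployment is a small reachability probe. This is a hedged sketch (the URLs mirror the list above, the helper name is made up); it only reports whether each endpoint answers HTTP, not whether the application behind it is healthy:

```python
from urllib.error import URLError
from urllib.request import urlopen


def is_up(url: str, timeout: float = 1.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx/3xx status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError, ValueError):
        # connection refused, DNS failure, bad URL, timeout -> not reachable
        return False


services = {
    "api": "http://localhost/docs",
    "grafana": "http://localhost:3000",
    "loki": "http://localhost:3100/ready",
}

status = {name: is_up(url) for name, url in services.items()}
```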
- Use strong database credentials
- Configure proper backup strategies for PostgreSQL
- Set up monitoring alerts in Grafana
- Use environment-specific configuration files
- Consider using Docker secrets for sensitive data
- Implement proper SSL/TLS certificates for Nginx
- Configure log retention policies in Loki
- Set up automated database backups
Access Grafana at http://localhost:3000 to visualize metrics and logs.
Default credentials (configure in .env.prod):
- Check your environment file for credentials
Features:
- Pre-configured Loki datasource
- Nginx access and error log visualization
- Real-time log streaming
- Log filtering and searching
- Custom dashboard creation
Loki aggregates logs from multiple sources:
Log Sources:
- Nginx access logs (`/var/log/nginx/access.log`)
- Nginx error logs (`/var/log/nginx/error.log`)
- Application logs (via Promtail configuration)
Querying Logs:
Access Loki directly at http://localhost:3100 or through Grafana's Explore interface.
Example LogQL queries:
```logql
# All nginx logs
{job="nginx"}

# Error logs only
{job="nginx"} |= "error"

# Parse JSON logs and re-format each line
{job="nginx"} | json | line_format "{{.timestamp}} {{.message}}"
```
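Loki also serves these queries over its HTTP API at `/loki/api/v1/query_range`, where the LogQL string is passed URL-encoded in the `query` parameter. A minimal sketch of building such a request URL (no network call is made; the base URL matches the Loki port used in this stack, and the helper name is made up):

```python
from urllib.parse import urlencode

LOKI_BASE = "http://localhost:3100"  # Loki port from this stack


def build_query_url(logql: str, limit: int = 100) -> str:
    """Build a Loki query_range URL; the LogQL must be URL-encoded."""
    params = urlencode({"query": logql, "limit": limit})
    return f"{LOKI_BASE}/loki/api/v1/query_range?{params}"


url = build_query_url('{job="nginx"} |= "error"')
```

Fetching that URL (with `curl` or any HTTP client) returns a JSON payload of matching log streams, which is what Grafana's Explore interface does under the hood.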
Promtail collects logs and ships them to Loki. Configuration location:
./promtail-config.yml
Monitored paths:
- `/var/log/nginx` - Nginx logs mounted from host
Nginx logs are stored in:
./logs/nginx/access.log # HTTP access logs
./logs/nginx/error.log # Nginx error logs
These logs are automatically collected by Promtail and sent to Loki.
Nginx serves custom error pages from:
./nginx/errors/
Configure error pages in nginx.conf.template to maintain branding during errors.
Location: ./nginx/nginx.conf.template
Environment variables used:
- `${BACKEND_HOST}` - FastAPI service hostname (default: `web`)
- `${BACKEND_PORT}` - FastAPI service port (default: `8006`)
Example configuration:
```nginx
upstream backend {
    server ${BACKEND_HOST}:${BACKEND_PORT};
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Location: `./loki-config.yml`
Key settings:
- Log retention period
- Storage configuration
- Ingestion limits
Location: ./promtail-config.yml
Configure:
- Log file paths
- Labels and metadata
- Loki server endpoint
Location: ./grafana/provisioning/
Auto-provisioned:
- Datasources (Loki)
- Dashboards (if configured)
- Alert rules (if configured)
Database not accessible:
- Ensure PostgreSQL container is running: `docker ps`
- Check database credentials in `.env.prod`
- Verify port 5432 is not in use by another service
- Check Docker network connectivity
Migration errors:
- Ensure database is initialized: `make access-db-local`
- Check Alembic configuration in `alembic.ini`
- Review migration files in `alembic/versions/`
502 Bad Gateway:
- Verify FastAPI container is running: `docker ps | grep web`
- Check backend configuration in `nginx.conf.template`
- Ensure `BACKEND_HOST` and `BACKEND_PORT` are correct
- View Nginx error logs: `cat logs/nginx/error.log`
Custom error pages not showing:
- Verify error page files exist in `./nginx/errors/`
- Check Nginx configuration for `error_page` directives
- Restart Nginx: `docker restart api-cost-map-nginx`
Grafana not accessible:
- Ensure Grafana container is running: `docker ps | grep grafana`
- Check port 3000 availability: `lsof -i :3000`
- Verify credentials in `.env.prod`
Logs not appearing in Grafana:
- Check Promtail is running: `docker ps | grep promtail`
- Verify Loki datasource is configured in Grafana
- Check Promtail configuration: `cat promtail-config.yml`
- Ensure log files exist: `ls -la logs/nginx/`
- Test Loki directly: `curl http://localhost:3100/ready`
Loki storage issues:
- Check disk space: `df -h`
- Review Loki configuration for retention settings
- Check Loki logs: `docker logs loki`
Port already in use:
```bash
# Check what's using port 80
lsof -i :80

# Or port 5432 for PostgreSQL
lsof -i :5432

# Or port 3000 for Grafana
lsof -i :3000
```
Clean restart:
```bash
make clean
make build-dev
make up-dev
```
Containers can't communicate:
- Verify all containers are on `app-network`: `docker network inspect app-network`
- Check Docker Compose network configuration
- Restart Docker Compose: `make down-prod && make up-prod`
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the project
- Create your feature branch
- Commit your changes
- Push to the branch
- Open a Pull Request
Report issues at: https://github.com/Victor-Zarzar/fastapi-clean-architecture
- Write tests for new features
- Follow PEP 8 style guide
- Update documentation as needed
- Ensure all tests pass before submitting PR
- Create migrations for database schema changes
- Run `make format` and `make lint` before committing
This project is licensed under the MIT License - see the LICENSE file for details.
Victor Zarzar - @Victor-Zarzar
Project Link: https://github.com/Victor-Zarzar/fastapi-clean-architecture