A professional-grade, asynchronous log collection and processing system built with a modern microservices architecture.
Lighthouse is a distributed system designed to handle high-volume log data without blocking main application threads. It uses a Producer-Consumer pattern where log entries are ingested via an API and queued in a high-speed message broker (Redis) before being processed and persisted by an independent worker.
## Architecture

The project follows a decoupled architecture to ensure high availability:
- Producer (FastAPI): Receives logs and instantly pushes them to the queue.
- Message Broker (Redis): Acts as a resilient buffer, storing logs in a FIFO (First-In, First-Out) structure.
- Consumer (Worker): A standalone background process that retrieves logs from the queue and saves them to permanent storage.
## Tech Stack

- Python 3.11
- FastAPI (High-performance web framework)
- Redis (In-memory data structure store / Message Broker)
- Docker & Docker Compose (Containerization and orchestration)
- Pydantic (Data validation and settings management)
## Learning Outcomes

This project was a significant milestone in my development as a computer engineer. Key takeaways include:
- Asynchronous Processing: Implementing FastAPI's `BackgroundTasks` to handle operations outside the request-response cycle.
- Distributed Systems: Learning how to decouple services using a message broker like Redis.
- Containerization: Managing multi-container environments using Docker Compose.
- Logic & Flow: Handling data serialization/deserialization with JSON and Python dictionaries.
- Debugging: Solving real-world connection and import issues in a containerized environment.
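The background-task idea above can be shown with the standard library alone: the "endpoint" returns immediately while a separate thread drains the queue, much as FastAPI's `BackgroundTasks` defers work until after the response is sent. This is a stand-in sketch, not the project's actual code:

```python
import json
import queue
import threading

tasks: queue.Queue = queue.Queue()
processed = []

def worker() -> None:
    # Background consumer: runs outside the request/response path.
    while True:
        item = tasks.get()
        if item is None:        # sentinel used only to stop the demo
            break
        processed.append(json.loads(item))
        tasks.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

def handle_request(entry: dict) -> str:
    # The "endpoint": enqueue and return instantly; processing happens later.
    tasks.put(json.dumps(entry))
    return "accepted"

status = handle_request({"level": "INFO", "message": "login"})
tasks.join()                    # wait for background processing (demo only)
tasks.put(None)
t.join()
```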
## Getting Started

To spin up the entire infrastructure:

```bash
docker-compose up --build
```

The API is the entry point where logs are received and validated.
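A `docker-compose.yml` matching this three-service layout might look like the following. The service names, ports, commands, and image tag here are assumptions for illustration, not the project's actual file:

```yaml
services:
  api:                # FastAPI producer
    build: .
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
    ports:
      - "8000:8000"
    depends_on:
      - redis
  worker:             # standalone background consumer
    build: .
    command: python app/worker.py
    depends_on:
      - redis
  redis:              # message broker / FIFO buffer
    image: redis:7-alpine
```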

Logs successfully queued in Redis, verified via `redis-cli`.

Standalone worker service pulling logs from Redis and processing them.
Access API Docs: http://localhost:8000/docs
## Running the Worker Locally
```bash
python app/worker.py
```

Developed by Emine, Computer Engineer