- Docker & Docker Compose installed
- Python 3.10+ installed
- At least 16GB RAM available
```bash
git clone <repo-url>
cd samp
# Run the automated quick start
./scripts/quick_start.sh
```

- Dashboard: http://localhost:8501 📊
- API: http://localhost:8000/docs 🔧
- Monitoring: http://localhost:3000 📈

That's it! The platform is now running with sample data.
```bash
# Copy environment file
cp .env.example .env
# Edit with your API keys (optional for demo)
nano .env
```

Required for production:

```bash
# Add at least one financial data provider
ALPHA_VANTAGE_API_KEY="your_alpha_vantage_key"
FINNHUB_API_KEY="your_finnhub_key"

# Database passwords
POSTGRES_PASSWORD="your_postgres_password"
REDIS_PASSWORD="your_redis_password"
```

```bash
# Start all services
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f
```

```bash
# Health check
curl http://localhost:8000/health

# Expected response:
# {"status": "healthy", "timestamp": "2024-01-01T00:00:00Z"}
```

- 📊 Streamlit Dashboard - Real-time market data visualization
- 🔧 FastAPI Backend - RESTful API for all operations
- 📈 Grafana Monitoring - System metrics and alerts
- ⚡ Prometheus - Metrics collection
- 🗃️ PostgreSQL - Metadata and configuration storage
- 🚀 Redis - High-speed caching layer
- 📨 Kafka + Zookeeper - Real-time message streaming
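A script can gate later steps on the `/health` endpoint shown above. A minimal sketch, assuming the JSON response shape from the curl example (the helper names here are ours, not part of the platform API):

```python
import json
import urllib.request


def is_healthy(payload: str) -> bool:
    """Return True if a /health response body reports a healthy status."""
    try:
        body = json.loads(payload)
    except json.JSONDecodeError:
        return False
    return body.get("status") == "healthy"


def check_service(url: str = "http://localhost:8000/health") -> bool:
    """Fetch the health endpoint and parse its response; False on any error."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return is_healthy(resp.read().decode())
    except OSError:
        return False
```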
- Ingestion: Multi-source market data collection
- ETL: Real-time processing with Delta Lake
- ML: Anomaly detection with MLflow
- Features: Technical indicators calculation
- Monitoring: Complete observability stack
Visit http://localhost:8501 and explore:
- Market Data tab - Live price charts
- Technical Analysis - RSI, MACD, Bollinger Bands
- Portfolio - Position tracking and P&L
- Alerts - Price and volume notifications
- System - Health and performance metrics
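For intuition about the RSI shown in the Technical Analysis tab, here is a self-contained sketch using simple averages of gains and losses (the platform's own implementation may use Wilder smoothing; this is illustrative only):

```python
def rsi(prices: list[float], period: int = 14) -> float:
    """Relative Strength Index over the last `period` price changes,
    using simple (non-Wilder) averages of gains and losses."""
    if len(prices) < period + 1:
        raise ValueError("need at least period + 1 prices")
    # Changes between the last `period + 1` consecutive prices
    changes = [b - a for a, b in zip(prices[-period - 1:], prices[-period:])]
    avg_gain = sum(c for c in changes if c > 0) / period
    avg_loss = sum(-c for c in changes if c < 0) / period
    if avg_loss == 0:
        return 100.0  # all gains: maximum RSI
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A strictly rising series yields RSI 100; balanced gains and losses yield 50.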
```bash
# Via API
curl -X POST http://localhost:8000/ingestion/sources \
  -H "Content-Type: application/json" \
  -d '{
    "source": "alpha_vantage",
    "symbols": ["AAPL", "GOOGL", "MSFT", "TSLA"],
    "enabled": true
  }'

# Via Dashboard
# Go to Settings -> Data Sources -> Add Source
```

```bash
# Start ingestion for specific symbols
curl -X POST http://localhost:8000/ingestion/start \
  -H "Content-Type: application/json" \
  -d '{"sources": ["alpha_vantage"], "symbols": ["AAPL"]}'
```

- Grafana: http://localhost:3000 (admin/admin)
  - Pre-built dashboards for all components
  - Real-time metrics and alerts
- Prometheus: http://localhost:9090
  - Raw metrics and query interface
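The ingestion endpoints above accept small JSON bodies, which a client can build and sanity-check before POSTing. A sketch assuming the payload shape from the curl examples (the source allow-list and validation rules here are ours, not enforced by the API):

```python
import json

# Assumed set of configured providers; extend to match your sources config
VALID_SOURCES = {"alpha_vantage", "finnhub"}


def build_ingestion_request(sources: list[str], symbols: list[str]) -> str:
    """Build the JSON body for POST /ingestion/start, with basic checks."""
    unknown = set(sources) - VALID_SOURCES
    if unknown:
        raise ValueError(f"unknown sources: {sorted(unknown)}")
    if not symbols:
        raise ValueError("at least one symbol is required")
    # Normalize ticker symbols to upper case before sending
    return json.dumps({"sources": sources, "symbols": [s.upper() for s in symbols]})
```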
- Open dashboard at http://localhost:8501
- Select symbols in sidebar (AAPL, GOOGL, MSFT, TSLA)
- View real-time charts with technical indicators
- Set up price alerts for key levels
- Monitor portfolio performance
```bash
# Train anomaly detection models
curl -X POST http://localhost:8000/ml/train \
  -H "Content-Type: application/json" \
  -d '{
    "model_type": "ensemble",
    "symbols": ["AAPL"],
    "lookback_days": 30
  }'

# Check training progress
curl http://localhost:8000/ml/models/status
```

```bash
# Get technical indicators
curl "http://localhost:8000/features/indicators?symbol=AAPL&indicators=RSI,MACD,BB"

# Batch feature retrieval
curl -X POST http://localhost:8000/features/batch \
  -H "Content-Type: application/json" \
  -d '{
    "symbols": ["AAPL", "GOOGL"],
    "features": ["sma_20", "rsi_14", "macd"],
    "start_date": "2024-01-01",
    "end_date": "2024-01-31"
  }'
```

- Data Sources: `config/ingestion/sources_config.yaml`
- ETL Pipeline: `config/etl/streaming_config.yaml`
- ML Models: `config/ml/model_configs.yaml`
- Dashboard: `config/dashboard/dashboard_config.yaml`
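Training via `/ml/train` runs asynchronously, so a client typically polls `/ml/models/status` until training reaches a terminal state. A sketch with the fetch function injected, so the loop can be exercised offline (the `state` values here are assumptions, not a documented response schema):

```python
import time
from typing import Callable


def wait_for_training(
    fetch_status: Callable[[], dict],
    timeout_s: float = 300.0,
    interval_s: float = 1.0,
) -> dict:
    """Poll a status fetcher until it reports a terminal state.

    `fetch_status` should return a dict such as {"state": "running"},
    {"state": "completed"}, or {"state": "failed"}.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.get("state") in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("training did not finish in time")
```

In production the fetcher would wrap an HTTP GET of `/ml/models/status`; injecting it keeps the retry logic testable.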
```bash
# Development vs Production
QUANTSTREAM_ENV=development

# Logging
LOG_LEVEL=INFO
DEBUG=true

# Performance tuning
SPARK_EXECUTOR_MEMORY=2g
KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
```

```bash
# Check logs
docker-compose logs <service-name>

# Common issues:
# 1. Port conflicts - change ports in docker-compose.yml
# 2. Memory issues - increase Docker memory limits
# 3. Permission issues - check file permissions
```
```bash
# Check if the service is running
docker-compose ps quantstream-api

# Check the health endpoint
curl http://localhost:8000/health

# View API logs
docker-compose logs -f quantstream-api
```
```bash
# Check if ingestion is running
curl http://localhost:8000/ingestion/status

# Start ingestion manually
curl -X POST http://localhost:8000/ingestion/start

# Check data in the database
docker-compose exec postgres psql -U quantstream -c "SELECT COUNT(*) FROM quotes;"
```
```bash
# Check resource usage
docker stats

# Reduce Spark memory in .env:
SPARK_EXECUTOR_MEMORY=1g
SPARK_DRIVER_MEMORY=1g

# Restart services
docker-compose restart
```
```bash
# Pull latest changes
git pull origin main

# Rebuild and restart
docker-compose down
docker-compose build --no-cache
docker-compose up -d
```
```bash
# Database backup
docker-compose exec postgres pg_dump -U quantstream quantstream > backup.sql

# Configuration backup
tar -czf config_backup.tar.gz config/ .env
```
```bash
# Stop and remove all data
docker-compose down -v

# Remove images (optional)
docker-compose down --rmi all

# Start fresh
./scripts/quick_start.sh
```

- Check the logs: `docker-compose logs -f`
- Health status: `curl http://localhost:8000/health`
- Documentation: SETUP.md for detailed setup
- API Docs: http://localhost:8000/docs
- Grafana Metrics: http://localhost:3000
Once you have the platform running:
- Configure API Keys - Add real financial data providers
- Customize Dashboards - Modify charts and indicators
- Train ML Models - Use your own data for anomaly detection
- Set Up Alerts - Configure notifications for important events
- Scale to Production - Use Kubernetes deployment
Happy trading! 📈