TombStoneDash/TrashAlert

TrashAlert API

A FastAPI-based service for managing trash pickup schedules with crowdsourced data.

🚀 Production Deployment

Ready to deploy to production? TrashAlert supports modern cloud deployment:

  • API: Deploy to Railway with managed PostgreSQL
  • Admin Dashboard: Deploy to Vercel
  • CI/CD: Automated testing and deployment with GitHub Actions
  • Monitoring: Built-in uptime monitoring and health checks

Quick Deploy: Run ./scripts/deploy-production.sh

📚 Full Guide: See PRODUCTION_DEPLOYMENT.md for detailed instructions.

Features

  • POST /report: Submit crowdsourced trash pickup reports
  • GET /lookup: Look up trash pickup schedules for an address
  • Consensus Algorithm: Automatically verifies crowdsourced data once there are ≥3 reports with ≥67% agreement
  • Multi-source Data: Merges crowdsourced and official data with intelligent prioritization

Quick Start

1. Install Dependencies

pip install -r requirements.txt

2. Initialize Database

python init_db.py

This creates a SQLite database with sample addresses.

3. Start the API Server

uvicorn app.main:app --reload

The API will be available at http://localhost:8000

4. Run Tests

In a new terminal:

python test_api.py

Docker Deployment

The easiest way to run TrashAlert in production is using Docker. This provides a complete deployment with API, worker services, and nginx reverse proxy.

One-Command Deployment

make docker-up

This command will:

  • Build Docker images for the API and worker services
  • Initialize the SQLite database automatically
  • Start the FastAPI application
  • Start nginx as a reverse proxy
  • Start a worker container for scheduled pipeline tasks

The API will be available at http://localhost (via nginx on port 80).

Docker Management Commands

# Start all containers
make docker-up

# Stop all containers
make docker-down

# View logs from all containers
make docker-logs

# Check container status
make docker-ps

# Rebuild containers from scratch
make docker-rebuild

Manual Docker Compose Usage

If you prefer to use docker-compose directly:

# Build and start all services
docker-compose up --build -d

# View logs
docker-compose logs -f

# Stop all services
docker-compose down

# Check container status
docker-compose ps

Services

The Docker deployment includes:

  • api: FastAPI application (exposed via nginx)
  • worker: Background worker for scheduled pipeline tasks
  • nginx: Reverse proxy (ports 80/443)
  • certbot: Automatic SSL certificate renewal (optional)

Data Persistence

Data is persisted in Docker volumes:

  • ./data: SQLite database
  • ./logs: Application logs

Testing the Docker Deployment

Once containers are running, test the API:

# Health check
curl http://localhost/health

# Lookup endpoint
curl "http://localhost/lookup?address=1122%20Palmview%20Ave,%20El%20Centro,%20CA"

# Submit a report
curl -X POST http://localhost/report \
  -H "Content-Type: application/json" \
  -d '{
    "address": "1122 Palmview Ave, El Centro, CA",
    "trash_day": "WED",
    "recycling_day": "FRI"
  }'

Running Worker Tasks

To run pipeline tasks in the worker container:

# Execute a command in the worker container
docker-compose exec worker python scripts/data_collection/schedule_runner.py --all

# Run the full pipeline
docker-compose exec worker python scripts/run_full_pipeline.py --city "El Centro"

Switching to PostgreSQL

The Docker deployment currently uses SQLite for simplicity. To switch to PostgreSQL:

  1. Add a PostgreSQL service to docker-compose.yml
  2. Update app/database.py with PostgreSQL connection string
  3. Install psycopg2-binary in requirements.txt
  4. Update SQLALCHEMY_DATABASE_URL environment variable

Example PostgreSQL service:

postgres:
  image: postgres:15-alpine
  environment:
    POSTGRES_DB: trashalert
    POSTGRES_USER: trashalert
    POSTGRES_PASSWORD: ${DB_PASSWORD}
  volumes:
    - postgres-data:/var/lib/postgresql/data
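The matching change in app/database.py can be sketched as follows. The credentials mirror the compose example above; the exact variable names used in app/database.py are assumptions for illustration:

```python
import os

# Fall back to the existing SQLite file when no URL is provided; in Docker,
# set SQLALCHEMY_DATABASE_URL to point at the postgres service, e.g.
# postgresql://trashalert:<DB_PASSWORD>@postgres:5432/trashalert
DATABASE_URL = os.environ.get(
    "SQLALCHEMY_DATABASE_URL",
    "sqlite:///./trashalert.db",
)

def make_engine_kwargs(url: str) -> dict:
    # SQLite needs check_same_thread=False for FastAPI's threaded access;
    # PostgreSQL needs no special connect_args.
    if url.startswith("sqlite"):
        return {"connect_args": {"check_same_thread": False}}
    return {}
```

With this in place, the same code path works for both backends and the switch is purely an environment-variable change.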

API Endpoints

POST /report

Submit a crowdsourced trash pickup report.

Request:

{
  "address": "1122 Palmview Ave, El Centro, CA",
  "trash_day": "WED",
  "recycling_day": "FRI",
  "green_day": null,
  "user_hash": "optional-stable-id"
}

Response:

{
  "success": true,
  "message": "Report submitted successfully",
  "address_id": 1,
  "normalized_address": "1122 PALMVIEW AVE, EL CENTRO, CA",
  "consensus": {
    "trash_day": "WED",
    "recycling_day": "FRI",
    "green_day": null,
    "reports_count": 3,
    "trash_agreement_ratio": 1.0,
    "recycling_agreement_ratio": 1.0,
    "green_agreement_ratio": 0.0,
    "is_verified": true
  }
}
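The normalized_address in the response is the uppercased, whitespace-collapsed form of the input. A minimal sketch of that normalization (the real logic lives in app/utils.py, whose exact rules are not shown here):

```python
import re

def normalize_address(address: str) -> str:
    # Collapse runs of whitespace and uppercase the result, matching the
    # "1122 PALMVIEW AVE, EL CENTRO, CA" form shown in the response above.
    return re.sub(r"\s+", " ", address.strip()).upper()
```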

GET /lookup

Look up trash pickup schedule for an address.

Request:

GET /lookup?address=1122 Palmview Ave, El Centro, CA

Response:

{
  "address": "1122 Palmview Ave, El Centro, CA",
  "normalized_address": "1122 PALMVIEW AVE, EL CENTRO, CA",
  "trash_day": "WED",
  "recycling_day": "FRI",
  "green_day": null,
  "source": "CROWD_VERIFIED",
  "consensus_reports_count": 3,
  "consensus_agreement_ratio": 1.0,
  "lat": 32.792,
  "lon": -115.563
}

Source Priority:

  1. CROWD_VERIFIED - Crowdsourced data with ≥3 reports and ≥67% agreement
  2. OFFICIAL - Official GIS/government data
  3. UNKNOWN - No data available
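The priority order above can be sketched as a small resolver; the field names here are illustrative assumptions, not the service's actual internals:

```python
def resolve_source(consensus, official_trash_day):
    """Pick the schedule source using the documented priority.

    consensus: dict like {"is_verified": True, "trash_day": "WED"}, or None.
    official_trash_day: the day from GIS/government data, or None.
    """
    if consensus and consensus.get("is_verified"):
        return "CROWD_VERIFIED", consensus.get("trash_day")
    if official_trash_day is not None:
        return "OFFICIAL", official_trash_day
    return "UNKNOWN", None
```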

GET /stats

Get database statistics.

Response:

{
  "total_addresses": 3,
  "total_reports": 10,
  "total_consensus": 2,
  "verified_consensus": 2
}

Data Model

Address

  • Stores normalized addresses with coordinates
  • Contains official pickup schedules (from GIS/rules)

CrowdReport

  • Individual user-submitted reports
  • Tracks user_hash to prevent spam
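Tracking user_hash lets the consensus count at most one report per user. A minimal sketch of that deduplication, under the assumed policy that a user's most recent report wins:

```python
def dedupe_reports(reports):
    """Keep at most one report per user_hash (the last one submitted).

    reports: list of dicts with an optional 'user_hash', ordered oldest
    first. Reports without a user_hash are kept as-is.
    """
    by_user = {}
    anonymous = []
    for r in reports:
        uh = r.get("user_hash")
        if uh is None:
            anonymous.append(r)
        else:
            by_user[uh] = r  # later reports overwrite earlier ones
    return anonymous + list(by_user.values())
```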

CrowdConsensus

  • Aggregated consensus from multiple reports
  • Automatically calculated and verified
  • Verification requires:
    • At least 3 reports
    • At least 67% agreement ratio

Consensus Algorithm

The consensus algorithm:

  1. Aggregates all reports for an address
  2. Finds the most common value for each pickup day type
  3. Calculates agreement ratios (% of reports agreeing with consensus)
  4. Marks as verified if:
    • Total reports ≥ 3
    • Average agreement ratio ≥ 0.67
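The steps above can be sketched as follows (key names are illustrative; the real implementation lives in the app code):

```python
from collections import Counter

def compute_consensus(reports):
    """reports: list of dicts with optional 'trash_day',
    'recycling_day', 'green_day' values."""
    result = {"reports_count": len(reports)}
    ratios = []
    for field in ("trash_day", "recycling_day", "green_day"):
        values = [r[field] for r in reports if r.get(field)]
        if values:
            # Most common value wins; ratio = share of reports agreeing.
            day, votes = Counter(values).most_common(1)[0]
            ratio = votes / len(values)
            ratios.append(ratio)
        else:
            day, ratio = None, 0.0
        result[field] = day
        result[f"{field}_agreement"] = ratio
    avg = sum(ratios) / len(ratios) if ratios else 0.0
    # Verified when total reports >= 3 and average agreement >= 0.67.
    result["is_verified"] = len(reports) >= 3 and avg >= 0.67
    return result
```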

Example Usage

Submit Reports

# Report 1
curl -X POST http://localhost:8000/report \
  -H "Content-Type: application/json" \
  -d '{
    "address": "1122 Palmview Ave, El Centro, CA",
    "trash_day": "WED",
    "recycling_day": "FRI",
    "user_hash": "user_001"
  }'

# Report 2
curl -X POST http://localhost:8000/report \
  -H "Content-Type: application/json" \
  -d '{
    "address": "1122 Palmview Ave, El Centro, CA",
    "trash_day": "WED",
    "recycling_day": "FRI",
    "user_hash": "user_002"
  }'

# Report 3 (reaches verification threshold)
curl -X POST http://localhost:8000/report \
  -H "Content-Type: application/json" \
  -d '{
    "address": "1122 Palmview Ave, El Centro, CA",
    "trash_day": "WED",
    "recycling_day": "FRI",
    "user_hash": "user_003"
  }'

Lookup Address

curl "http://localhost:8000/lookup?address=1122%20Palmview%20Ave,%20El%20Centro,%20CA"

Project Structure

TrashAlert/
├── app/
│   ├── __init__.py
│   ├── main.py          # FastAPI application
│   ├── models.py        # Database models
│   ├── schemas.py       # Pydantic schemas
│   ├── database.py      # Database connection
│   └── utils.py         # Utility functions
├── init_db.py           # Database initialization
├── test_api.py          # Test/demo script
├── requirements.txt     # Python dependencies
└── README.md           # This file

Development

Interactive API Documentation

FastAPI provides automatic interactive documentation:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc

Database

The application uses SQLite by default (trashalert.db). To use PostgreSQL or another database, modify app/database.py.

License

MIT

TrashAlert Pipeline

A scalable data pipeline for collecting and processing trash pickup information across multiple cities.

Overview

This pipeline collects address data from OpenStreetMap and processes it for use in the TrashAlert system. It's designed to scale from a single city to hundreds of cities across the United States.

Pipeline Architecture

The pipeline consists of several modular scripts that work together:

  1. fetch_city_boundaries.py - Fetches city boundaries from OpenStreetMap
  2. build_subdivisions.py - Builds subdivision/neighborhood data for each city
  3. fetch_addresses_osm.py - Fetches address data from OpenStreetMap
  4. sample_addresses_per_city.py - Samples addresses per city (up to a configurable limit)
  5. run_full_pipeline.py - Orchestrates the entire pipeline
  6. bulk_import_pipeline.py - Multi-city bulk importer with checkpoints and monitoring (NEW!)

Bulk Import Pipeline (Recommended)

The bulk import pipeline is a production-ready system for importing OSM data across multiple cities automatically:

Features:

  • ✅ Automatic multi-city processing from cities.yaml
  • ✅ Rate limiting and retry logic for Overpass API
  • ✅ Resumable checkpoints for error recovery
  • ✅ Database-backed progress tracking
  • ✅ Real-time dashboard for monitoring
  • ✅ Comprehensive error handling
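The rate-limiting and retry behavior around Overpass API calls can be sketched as a generic backoff helper; the delays and attempt counts here are illustrative, not the pipeline's actual settings:

```python
import time

def retry_with_backoff(fn, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on exception with exponential backoff.

    Useful around Overpass API requests, which are rate limited and can
    fail transiently under load.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```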

Quick Start:

# 1. Run database migration (one time only)
python scripts/migrate_pipeline_tables.py

# 2. Import all cities
python scripts/bulk_import_pipeline.py --all

# 3. Monitor progress at:
# http://localhost:8000/pipeline-status.html

For detailed documentation, see: docs/BULK_IMPORT_PIPELINE.md

Configuration

Cities are configured in config/cities.yaml. Each city entry includes:

- name: City Name
  state: State Name
  state_abbr: XX
  country: USA
  has_official_pickup_zones: true/false
  pickup_zone_data_source: "URL or note"
  notes: "Additional context"
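Assuming config/cities.yaml has been parsed (e.g. with pyyaml) into a list of dicts shaped like the entry above, the state filtering behind the --state flag can be sketched as:

```python
def filter_cities(cities, state=None):
    """Filter parsed city entries by state name or abbreviation."""
    if state is None:
        return list(cities)
    s = state.lower()
    return [
        c for c in cities
        if c.get("state", "").lower() == s
        or c.get("state_abbr", "").lower() == s
    ]
```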

Usage

Running the Full Pipeline

Process all cities:

python scripts/run_full_pipeline.py --all

Process a specific city:

python scripts/run_full_pipeline.py --city "Brawley, California"
# or
python scripts/run_full_pipeline.py --city "Brawley"

Process all cities in a state:

python scripts/run_full_pipeline.py --state CA
# or
python scripts/run_full_pipeline.py --state California

Skip certain pipeline steps:

python scripts/run_full_pipeline.py --city "Brawley" \
  --skip-boundaries --skip-subdivisions

Running Individual Scripts

Each script can be run independently with the same filtering options:

Sample addresses:

# All cities
python scripts/sample_addresses_per_city.py

# One city
python scripts/sample_addresses_per_city.py --only "Brawley, California"

# One state
python scripts/sample_addresses_per_city.py --state CA

# Custom sample size
python scripts/sample_addresses_per_city.py --only "Brawley" --max-per-city 100

Fetch boundaries:

python scripts/fetch_city_boundaries.py --only "San Diego, California"

Build subdivisions:

python scripts/build_subdivisions.py --state CA

Fetch addresses:

python scripts/fetch_addresses_osm.py --only "Brawley"

Adding New Cities

  1. Edit config/cities.yaml and add a new city entry
  2. Run the pipeline for that city:
    python scripts/run_full_pipeline.py --city "New City, State"

Output

The pipeline generates the following data:

  • data/boundaries/*.geojson - City boundary GeoJSON files
  • data/subdivisions/*.json - Subdivision/neighborhood data
  • data/addresses_osm_raw.csv - Raw address data from OpenStreetMap
  • data/addresses_sampled_50_per_city.csv - Sampled addresses (default: 50 per city)

Requirements

pip install pyyaml requests pandas

Example: Pipeline for One City

# Run the complete pipeline for Brawley
python scripts/run_full_pipeline.py --city "Brawley"

# Output shows:
# - Cities processed: Brawley, CA
# - Steps completed: 4
# - Data statistics per city
# - Summary with timing information

Future Enhancements

  • Normalization step for address standardization
  • Database loading functionality
  • Integration with official city pickup zone data
  • Support for international cities (currently US-only)

TrashAlert

Crowdsourced trash collection day lookup for California cities

TrashAlert is a pilot project to help residents quickly find their trash collection day through community-driven data. Instead of navigating complex city websites or calling municipal offices, users can look up their address and see when trash is collected based on real observations from their neighbors.

🎯 Project Goal

Build a reliable, crowdsourced trash schedule database for San Diego and Imperial Valley cities, demonstrating that community data can be more accurate and up-to-date than official sources.

🏗️ Project Status

Current Phase: Data Pipeline Development (Phase 1)

  • ✅ OpenStreetMap address extraction
  • ✅ Address sampling script (50 per city)
  • ✅ Sample data generation for testing
  • ⏳ Database schema design
  • ⏳ API development
  • ⏳ Web interface

🌟 Overview

The Problem

Finding your trash collection day is harder than it should be:

  • City websites are confusing or outdated
  • Schedules vary by neighborhood/subdivision
  • Route changes aren't communicated well
  • New residents don't know where to look

The Solution

TrashAlert uses crowdsourcing to build a reliable schedule database:

  1. Users report when they observe trash collection
  2. System calculates consensus from multiple reports
  3. Confidence scores indicate reliability
  4. Self-correcting as more data comes in

Key Features

  • 📍 Address-based lookup: Enter your address, get your schedule
  • 👥 Crowdsourced data: Community observations, not outdated records
  • 🎯 Confidence scores: Know how reliable each schedule is
  • 🔄 Self-updating: Automatically adapts to schedule changes
  • 🗺️ Geographic sampling: Ensures coverage across subdivisions

🏛️ Architecture

High-Level Data Flow

City Config → City Boundaries → OSM Query → Address Extraction
                                                    ↓
                                            Subdivision Detection
                                                    ↓
                                          Sampling (50 per city)
                                                    ↓
                                              Database
                                                    ↓
                                           API Endpoints
                                                    ↓
                                          Web Interface
                                                    ↓
                                        User Reports
                                                    ↓
                                      Consensus Calculation
                                                    ↓
                                      Updated Schedules

System Components

  1. Data Collection Layer

    • City boundary definitions
    • OpenStreetMap address queries
    • Subdivision detection
    • Geographic sampling
  2. Database Layer

    • PostgreSQL with PostGIS
    • Cities, addresses, reports, schedules
    • Spatial indexing for location queries
  3. API Layer (planned)

    • RESTful API with FastAPI/Flask
    • Address lookup endpoints
    • Report submission
    • Schedule queries
  4. Crowdsourcing Engine (planned)

    • Consensus algorithm
    • Confidence scoring
    • Conflict detection
    • Quality metrics
  5. Client Interface (planned)

    • Web application
    • Mobile app (future)

For detailed architecture, see docs/architecture.md.

🔄 Data Flow

Initial Setup Flow

1. City Configuration
   ├─ Define city name and boundaries
   └─ Load boundary GeoJSON

2. OSM Address Extraction
   ├─ Query Overpass API with city boundary
   ├─ Extract: house_number, street, subdivision, lat, lon
   └─ Save to addresses_osm_raw.csv

3. Address Sampling
   ├─ Load raw addresses
   ├─ Remove null coordinates and duplicates
   ├─ Sample up to 50 per city (stratified by subdivision)
   └─ Save to addresses_sampled_50_per_city.csv

4. Database Ingestion (planned)
   ├─ Load sampled addresses
   ├─ Geocode and validate
   └─ Insert into addresses table

User Interaction Flow (Planned)

1. User Lookup
   User enters address → Search API → Return consensus schedule + confidence

2. User Report
   User observes collection → Submit report → Store in database →
   Recalculate consensus → Update schedule

3. Consensus Calculation
   Collect all reports for address → Filter by recency →
   Calculate weighted scores → Determine consensus day →
   Compute confidence score → Update trash_schedules table

For detailed data flow, see docs/architecture.md.

🚀 Setup Instructions

Prerequisites

  • Python: 3.11 or higher
  • Git: For version control
  • PostgreSQL: 14+ with PostGIS extension (for production)
  • pip: Python package manager

Installation

  1. Clone the repository

    git clone https://github.com/yourusername/TrashAlert.git
    cd TrashAlert
  2. Create a virtual environment

    python3 -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies (when requirements.txt is created)

    pip install -r requirements.txt

    Current dependencies (to be added to requirements.txt):

    • pandas - Data manipulation
    • geopandas - Geospatial data processing
    • shapely - Geometric operations
    • requests - HTTP requests for OSM API
    • sqlalchemy - Database ORM (future)
    • psycopg2-binary - PostgreSQL adapter (future)
    • fastapi - API framework (future)
    • uvicorn - ASGI server (future)
  4. Set up environment variables (future)

    cp .env.example .env
    # Edit .env with your configuration
  5. Initialize the database (future)

    # Create database
    createdb trashalert
    
    # Run migrations
    alembic upgrade head

Configuration

Configuration files will be in config/:

  • cities.json - City definitions and boundaries
  • database.yml - Database connection settings
  • api.yml - API configuration

📖 Usage

Current Scripts

1. Generate Sample Raw Data

Creates realistic test data for development:

python scripts/create_sample_raw_data.py

Output: data/addresses_osm_raw.csv

  • Generates 445 sample addresses across 6 cities
  • Includes subdivisions for San Diego
  • Adds test cases: duplicates, null coordinates

2. Sample Addresses Per City

Samples up to 50 addresses per city with geographic distribution:

python scripts/sample_addresses_per_city.py

Input: data/addresses_osm_raw.csv
Output: data/addresses_sampled_50_per_city.csv

Features:

  • Stratified sampling across subdivisions
  • Removes duplicates and null coordinates
  • Ensures geographic diversity
  • Logs sampling statistics
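The stratified sampling across subdivisions can be sketched as follows (the real script also removes duplicates and null coordinates and logs statistics):

```python
import random
from collections import defaultdict

def stratified_sample(addresses, max_per_city=50, seed=42):
    """Sample up to max_per_city addresses, spread across subdivisions.

    addresses: list of dicts with 'city' and optional 'subdivision' keys.
    """
    rng = random.Random(seed)
    by_city = defaultdict(lambda: defaultdict(list))
    for a in addresses:
        by_city[a["city"]][a.get("subdivision") or ""].append(a)

    sampled = []
    for city, groups in by_city.items():
        picked = []
        # Shuffle each subdivision pool, then round-robin across pools
        # until the per-city cap is reached.
        pools = [rng.sample(g, len(g)) for g in groups.values()]
        while len(picked) < max_per_city and any(pools):
            for pool in pools:
                if pool and len(picked) < max_per_city:
                    picked.append(pool.pop())
        sampled.extend(picked)
    return sampled
```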

Example Output:

2025-11-16 10:00:00 - INFO - Loading raw addresses from data/addresses_osm_raw.csv
2025-11-16 10:00:00 - INFO - Loaded 445 addresses from 6 cities
2025-11-16 10:00:00 - INFO - San Diego: sampled 50 from 200 addresses across 8 subdivisions
2025-11-16 10:00:00 - INFO - El Centro: sampled 50 from 80 addresses across 5 subdivisions
2025-11-16 10:00:00 - INFO - Calexico: sampled 50 from 65 addresses (no subdivisions)
2025-11-16 10:00:00 - INFO - ✓ Done! Sampled dataset saved to data/addresses_sampled_50_per_city.csv

Full Pipeline (Future)

1. Setup a New City

# Add city to config/cities.json
python scripts/add_city.py --name "Carlsbad" --state "CA"

# Fetch addresses from OSM
python scripts/fetch_osm_addresses.py --city "Carlsbad"

# Sample addresses
python scripts/sample_addresses.py --city "Carlsbad"

# Load into database
python scripts/load_to_db.py --city "Carlsbad"

2. Run the API Server

# Development server
uvicorn app.main:app --reload --port 8000

# Production server
gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker

API will be available at: http://localhost:8000
API documentation: http://localhost:8000/docs

3. Run Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=app --cov-report=html

# Run specific test file
pytest tests/test_consensus.py

📁 Project Structure

TrashAlert/
├── README.md                 # This file
├── .gitignore               # Git ignore rules
├── requirements.txt         # Python dependencies (to be created)
├── setup.py                 # Package setup (to be created)
│
├── data/                    # Data files
│   ├── addresses_osm_raw.csv                # Raw OSM address data
│   └── addresses_sampled_50_per_city.csv    # Sampled addresses
│
├── scripts/                 # Data processing scripts
│   ├── create_sample_raw_data.py       # Generate test data
│   ├── sample_addresses_per_city.py    # Sample addresses
│   ├── fetch_osm_addresses.py          # Fetch from OSM (future)
│   ├── load_to_db.py                   # Load to database (future)
│   └── add_city.py                     # Add new city (future)
│
├── app/                     # Application code (future)
│   ├── __init__.py
│   ├── main.py             # FastAPI app entry point
│   ├── config.py           # Configuration management
│   ├── database.py         # Database connection
│   │
│   ├── models/             # SQLAlchemy models
│   │   ├── __init__.py
│   │   ├── city.py
│   │   ├── address.py
│   │   ├── report.py
│   │   └── schedule.py
│   │
│   ├── api/                # API routes
│   │   ├── __init__.py
│   │   ├── addresses.py
│   │   ├── reports.py
│   │   └── schedules.py
│   │
│   ├── services/           # Business logic
│   │   ├── __init__.py
│   │   ├── consensus.py    # Consensus algorithm
│   │   ├── geocoding.py    # Address geocoding
│   │   └── validation.py   # Data validation
│   │
│   └── utils/              # Utility functions
│       ├── __init__.py
│       └── geo.py          # Geospatial helpers
│
├── tests/                   # Test suite (future)
│   ├── __init__.py
│   ├── test_consensus.py
│   ├── test_api.py
│   └── test_models.py
│
├── docs/                    # Documentation
│   ├── architecture.md      # System architecture
│   ├── data_model.md        # Database schema
│   ├── crowdsourcing.md     # Consensus algorithm
│   └── api.md              # API documentation (future)
│
├── config/                  # Configuration files (future)
│   ├── cities.json         # City definitions
│   ├── database.yml        # Database config
│   └── api.yml             # API config
│
├── migrations/              # Database migrations (future)
│   └── alembic/            # Alembic migration files
│
└── web/                     # Frontend (future)
    ├── public/
    ├── src/
    └── package.json

📚 Documentation

Available Documentation

  • docs/architecture.md: System architecture
  • docs/data_model.md: Database schema
  • docs/crowdsourcing.md: Consensus algorithm

Future Documentation

  • API Reference: Endpoint documentation with examples
  • Deployment Guide: Production deployment instructions
  • Contributing Guide: How to contribute to the project
  • User Guide: How to use the web interface

🧪 Testing

Current Testing

Manual testing with sample data:

# Generate sample data
python scripts/create_sample_raw_data.py

# Run sampling script
python scripts/sample_addresses_per_city.py

# Verify output
head -20 data/addresses_sampled_50_per_city.csv

Future Testing

Automated test suite:

# Unit tests
pytest tests/test_consensus.py
pytest tests/test_models.py

# Integration tests
pytest tests/test_api.py

# End-to-end tests
pytest tests/test_e2e.py

🔍 Data Sources

Current Data

  • Sample Data: Generated test data for development
    • 6 cities: San Diego, El Centro, Calexico, Brawley, Imperial, Holtville
    • 445 addresses total (200 in San Diego, varying amounts in others)
    • Includes subdivisions where applicable

Future Data Sources

  • OpenStreetMap: Real address data via Overpass API
  • City Boundaries: GeoJSON from OpenStreetMap or city open data portals
  • Official Schedules: Where available from city websites
  • User Reports: Crowdsourced observations

🛠️ Technology Stack

Current

  • Python 3.11+: Core language
  • Pandas: Data processing
  • Standard Library: CSV handling, logging

Planned

Backend:

  • FastAPI: Modern Python web framework
  • SQLAlchemy: ORM for database operations
  • PostgreSQL: Database with PostGIS extension
  • Alembic: Database migrations
  • Pydantic: Data validation

Data Processing:

  • GeoPandas: Geospatial data analysis
  • Shapely: Geometric operations
  • Requests: HTTP client for OSM API

Frontend (future):

  • React: UI framework
  • Leaflet: Interactive maps
  • Tailwind CSS: Styling

Infrastructure:

  • Docker: Containerization
  • Nginx: Reverse proxy
  • GitHub Actions: CI/CD

🗺️ Roadmap

Phase 1: Data Pipeline ✅ (In Progress)

  • Sample data generation
  • Address sampling script
  • Database schema implementation
  • OSM data fetching script
  • Data ingestion pipeline

Phase 2: Core API (Q1 2026)

  • FastAPI project setup
  • Database models (SQLAlchemy)
  • CRUD operations
  • Consensus algorithm implementation
  • API endpoints

Phase 3: Web Interface (Q2 2026)

  • React app setup
  • Address search UI
  • Schedule display
  • Report submission form
  • Confidence indicators

Phase 4: Beta Launch (Q3 2026)

  • Deploy to cloud platform
  • Load real OSM data for pilot cities
  • User testing
  • Bug fixes and refinements
  • Documentation updates

Phase 5: Expansion (Q4 2026)

  • Add more California cities
  • Mobile app (React Native)
  • Advanced features (reminders, etc.)
  • Integration with official city APIs

🤝 Contributing

Contributions are welcome! This is an early-stage project with lots of opportunities to help.

How to Contribute

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/your-feature
  3. Make your changes
  4. Add tests (when test suite is set up)
  5. Commit: git commit -m "Add your feature"
  6. Push: git push origin feature/your-feature
  7. Open a Pull Request

Areas Needing Help

  • Database schema refinement
  • API development
  • Frontend development
  • Testing and quality assurance
  • Documentation improvements
  • Data collection for new cities

📄 License

[To be determined - recommend MIT or Apache 2.0]

🙏 Acknowledgments

  • OpenStreetMap: For providing free, open address data
  • PostGIS: For powerful geospatial database capabilities
  • FastAPI: For the excellent Python web framework
  • Community Contributors: Everyone who reports trash days!

Note: This is a pilot project. Schedules may not be 100% accurate. Always verify with your local waste management provider for official information.
