
⚡ WattWatch — Intelligent Energy Waste Detection System

Camera-based AI system that detects occupancy and monitors appliance states (lights, fans, monitors) to prevent energy waste in real time.


📖 Table of Contents

  1. What is WattWatch?
  2. How It Works — System Overview
  3. AI Models Used
  4. Project Structure
  5. Key Features
  6. Installation & Setup
  7. Configuration
  8. Running the System
  9. Dashboard (Frontend)
  10. API Endpoints
  11. Energy Metrics Explained
  12. Privacy & Anonymization
  13. Alert System
  14. Roboflow Model Training Guide

πŸ” What is WattWatch?

WattWatch is an AI-powered energy monitoring system for smart buildings, offices, and classrooms. It uses a combination of:

  • Computer Vision (YOLOv8) to detect if people are present in a room
  • Custom Roboflow ML Models to detect the ON/OFF state of lights, ceiling fans, and monitors
  • A real-time React dashboard to visualize room-level energy waste and send alerts

The core idea is simple: if no one is in the room but appliances are still ON → that's energy waste. WattWatch automates this detection, calculates the cost in real time, and alerts facility managers via WhatsApp/SMS.


🧠 How It Works — System Overview

                          ┌──────────────────────┐
  IP Camera / Webcam ──▶  │   FastAPI Backend    │
                          │   (api/main.py)      │
                          └─────────┬────────────┘
                                    │ Each Frame
                    ┌───────────────┼───────────────────┐
                    ▼               ▼                   ▼
           ┌──────────────┐ ┌───────────────┐ ┌─────────────────┐
           │  YOLOv8n.pt  │ │ Roboflow API  │ │ Privacy Filter  │
           │ (Person Det.)│ │ (3 ML Models) │ │ (Face Blur)     │
           └──────┬───────┘ └───────┬───────┘ └────────┬────────┘
                  │                 │                   │
                  ▼                 ▼                   │
           Person Count    Light/Fan/Monitor            │
                           ON or OFF status             │
                  │                 │                   │
                  └─────────────────▼───────────────────┘
                                    │
                        ┌───────────▼──────────────┐
                        │   Room State Engine      │
                        │   AlertManager           │
                        │   MicrozoneTracker       │
                        └───────────┬──────────────┘
                                    │
                   ┌────────────────▼────────────────┐
                   │  WebSocket Stream to Dashboard  │
                   │  React (Vite) Frontend          │
                   └─────────────────────────────────┘

Frame Processing Pipeline (per room per frame)

  1. Frame Capture — from IP camera stream or webcam
  2. Person Detection — YOLOv8 detects humans → count of people
  3. Appliance Detection — Roboflow API checks if Light/Fan/Monitor is ON or OFF (every N frames to reduce cost)
  4. Privacy Anonymization — faces are auto-blurred using Haar cascade + pixelation before storage
  5. Microzone Tracking — frame split into a 4×4 grid, per-zone occupancy tracked for heatmaps
  6. Waste Detection — person_count == 0 AND any appliance is ON → "WASTE" state
  7. AlertManager — debounced alerts sent via Twilio SMS / WhatsApp after a configurable delay
  8. WebSocket Push — annotated frame + all metadata streamed to the dashboard in real time
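The per-frame loop above can be sketched in pure Python with the detectors stubbed out (the function names and `detectors`/`state` dictionaries here are illustrative, not the actual `src/` API):

```python
def process_frame(frame, frame_idx, detectors, state, frame_skip=20):
    """One pass of the per-room pipeline (illustrative stubs, not the real code)."""
    person_count = detectors["person"](frame)               # step 2: YOLOv8 count
    if frame_idx % frame_skip == 0:                         # step 3: throttle API calls
        state["appliances"] = detectors["appliances"](frame)
    frame = detectors["privacy"](frame)                     # step 4: face blur
    # step 6: empty room with anything still ON is waste
    waste = person_count == 0 and "ON" in state["appliances"].values()
    return frame, person_count, waste
```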

🤖 AI Models Used

WattWatch uses 4 models in total. Here is a complete breakdown:


Model 1: YOLOv8n — Human / Person Detection (Primary Model in Use)

| Property | Value |
| --- | --- |
| Framework | Ultralytics YOLOv8 |
| Model File | yolov8n.pt (yolov8s.pt also available) |
| Task | Object Detection — Person class only (class_id = 0 from the COCO dataset) |
| Config Key | config.yaml → model.name |
| Default Confidence | 0.25 |
| Where it runs | Locally on your CPU/GPU |
| Purpose | Counts how many people are in the room |

yolov8n.pt is the currently active model (the nano variant — fastest). yolov8s.pt (the small variant) is also present for higher accuracy at the cost of speed.

How to switch: Edit config.yaml:

model:
  name: yolov8s.pt   # switch to small model for better accuracy

Code location: src/detector.py → YOLODetector class
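A minimal standalone version of the person-count step might look like this (assuming the ultralytics package; the real wrapper lives in src/detector.py and will differ in detail):

```python
def count_people(frame, weights="yolov8n.pt", conf=0.25):
    """Count persons in a frame with YOLOv8 (COCO class 0 = person)."""
    from ultralytics import YOLO  # lazy import: heavy optional dependency

    model = YOLO(weights)  # in a real loop, load once and reuse
    # restrict inference to the person class at the configured confidence
    results = model(frame, classes=[0], conf=conf, verbose=False)
    return len(results[0].boxes)
```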


Model 2 (Roboflow): Light ON/OFF Detector

| Property | Value |
| --- | --- |
| Platform | Roboflow — Hosted Serverless Inference |
| Model ID | coms-room-light-63vyv/1 |
| Task | Classification / Detection — is the light ON or OFF? |
| Training Data | Custom-labeled room images with lights on/off (trained on Roboflow) |
| API Endpoint | https://serverless.roboflow.com |
| Config Key | config.yaml → appliance.roboflow.light_model |
| Purpose | Detects if the ceiling/room light is switched ON or OFF |

Response parsing logic (from src/appliance_status.py):

  • If the predicted class contains "on", "light", "glow", "lamp", "bright", or "tube" → Status: ON
  • If the class contains "off" → Status: OFF

Model 3 (Roboflow): Ceiling Fan ON/OFF Detector

| Property | Value |
| --- | --- |
| Platform | Roboflow — Hosted Serverless Inference |
| Model ID | ceiling-fan-detection-epfsk/1 |
| Task | Detection — is the ceiling fan spinning (ON) or stopped (OFF)? |
| Training Data | Custom-labeled ceiling fan images (trained on Roboflow) |
| API Endpoint | https://serverless.roboflow.com |
| Config Key | config.yaml → appliance.roboflow.fan_model |
| Purpose | Detects the rotational state of ceiling fans |

Response parsing logic:

  • If the class contains "on", "fan", "spinning", "ceiling", or "rotor" → Status: ON
  • If the class contains "off" → Status: OFF

Model 4 (Roboflow): Monitor / Display ON/OFF Detector

| Property | Value |
| --- | --- |
| Platform | Roboflow — Hosted Serverless Inference |
| Model ID | monitor_detection-uj19t-zqnlq/1 |
| Task | Detection — is the monitor/screen turned ON or OFF? |
| Training Data | Custom-labeled monitor images (trained on Roboflow) |
| API Endpoint | https://serverless.roboflow.com |
| Config Key | config.yaml → appliance.roboflow.monitor_model |
| Purpose | Detects if desktop monitors are left powered on in empty rooms |

Response parsing logic:

  • If the class contains "on", "active", "display", "monitor", "screen", or "power" → Status: ON
  • Otherwise → Status: OFF
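The three keyword rules above condense into one small helper (a sketch of the described logic, not the exact code in src/appliance_status.py):

```python
ON_KEYWORDS = {
    "light":   ("on", "light", "glow", "lamp", "bright", "tube"),
    "fan":     ("on", "fan", "spinning", "ceiling", "rotor"),
    "monitor": ("on", "active", "display", "monitor", "screen", "power"),
}

def parse_status(appliance, predicted_class):
    """Map a Roboflow predicted class name to an ON/OFF status string."""
    cls = predicted_class.lower()
    if "off" in cls:      # check "off" first: "light-off" also contains "light"
        return "OFF"
    if any(word in cls for word in ON_KEYWORDS[appliance]):
        return "ON"
    return "OFF"          # unknown classes default to OFF

# parse_status("light", "tube-light-on")  -> "ON"
# parse_status("fan", "fan-off")          -> "OFF"
```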

Which Model is Active Right Now?

| Model | Active? | Notes |
| --- | --- | --- |
| yolov8n.pt | ✅ Yes | Configured in config.yaml, runs locally |
| yolov8s.pt | ❌ No | Available on disk but not selected |
| Roboflow Light | ✅ Yes | Called every frame_skip=20 frames |
| Roboflow Fan | ✅ Yes | Called every frame_skip=20 frames |
| Roboflow Monitor | ✅ Yes | Called every frame_skip=20 frames |
| MLApplianceDetector (MobileNetV2) | ❌ No | Fallback only; requires models/appliance_classifier.pt, which is not present |

Summary: The system currently uses YOLOv8n for person detection and the 3 Roboflow hosted models for appliance status. All 3 Roboflow calls are made in parallel (via ThreadPoolExecutor) to minimize latency.
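The parallel fan-out looks roughly like this, with the three detector functions stubbed out (the real calls go through the Roboflow inference-sdk client):

```python
from concurrent.futures import ThreadPoolExecutor

def check_all_appliances(frame, detectors):
    """Run the three appliance checks concurrently and collect their statuses.

    detectors maps a name ("light"/"fan"/"monitor") to a callable(frame)
    that returns "ON" or "OFF".
    """
    with ThreadPoolExecutor(max_workers=len(detectors)) as pool:
        futures = {name: pool.submit(fn, frame) for name, fn in detectors.items()}
        # result() blocks until each call finishes; total latency ~= slowest call
        return {name: future.result() for name, future in futures.items()}
```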


πŸ“ Project Structure

watt-watch/
│
├── main.py                    # CLI entry point (detect/live/benchmark/calibrate)
├── config.yaml                # Master configuration file
├── requirements.txt           # Python dependencies
├── setup.py                   # Package setup
├── yolov8n.pt                 # YOLOv8 Nano model (person detection) ← ACTIVE
├── yolov8s.pt                 # YOLOv8 Small model (alternative, not active)
│
├── src/                       # Core Python source code
│   ├── __init__.py
│   ├── detector.py            # YOLOv8 person detection wrapper
│   ├── tracker.py             # Centroid-based multi-person tracker
│   ├── appliance_status.py    # Roboflow API calls (Light/Fan/Monitor)
│   ├── appliance_detector.py  # Rule-based fallback detector (brightness/edge analysis)
│   ├── ml_appliance_detector.py # MobileNetV2 local ML detector (optional fallback)
│   ├── alert_manager.py       # Waste event tracking + Twilio SMS/WhatsApp alerts
│   ├── microzone.py           # 4×4 grid zone tracking + heatmap generation
│   ├── privacy_filter.py      # Face detection (Haar cascade) + anonymization
│   ├── intensity_calibrator.py # Room brightness threshold calibration
│   ├── smoothing.py           # Temporal smoothing for detection signals
│   ├── preprocessing.py       # Frame preprocessing utilities
│   ├── model_utils.py         # Model download and path utilities
│   ├── mqtt_manager.py        # MQTT publish/subscribe for IoT integration
│   ├── utils.py               # FPS counter, video extractor, JSON logger
│   └── database/              # SQLite database layer
│
├── api/
│   └── main.py                # FastAPI backend (~75KB) — WebSocket, REST API
│
├── dashboard-vite/            # React + Vite frontend dashboard
│   ├── src/
│   │   ├── App.jsx            # Main dashboard component (~920 lines)
│   │   ├── App.css            # Dashboard styling
│   │   └── main.jsx
│   ├── package.json
│   └── vite.config.js
│
├── scripts/
│   ├── download_samples.py    # Download sample test videos
│   ├── extract_frames.py      # Extract frames from videos
│   └── migrate_json_to_sqlite.py
│
├── configs/                   # Additional configuration files
├── data/                      # Test clips and raw data
│   └── clips/                 # occupied.mp4, empty.mp4, quiet-reader.mp4
├── output/                    # Detection results, JSON logs
├── logs/                      # FPS logs, appliance debug logs
├── models/                    # Optional local ML model files
├── docs/                      # Documentation
├── tests/                     # Unit tests
│
├── ENERGY_METRICS.md          # Detailed energy calculation documentation
├── test_detection.py          # Manual detection tests
└── test_appliance.py          # Manual appliance detection tests

✨ Key Features

| Feature | Description |
| --- | --- |
| 🧍 Person Detection | YOLOv8n detects and counts people in real time |
| 💡 Light Detection | Roboflow model classifies room lights as ON/OFF |
| 🌀 Fan Detection | Roboflow model detects spinning/stopped ceiling fans |
| 🖥️ Monitor Detection | Roboflow model detects powered-on/off monitors |
| ⚡ Energy Waste Alerts | SMS/WhatsApp alerts when the room is empty but appliances are ON |
| 🔒 Privacy First | Automatic face anonymization (pixelation/blur) before any storage |
| 🗺️ Microzone Heatmap | 4×4 grid zone tracking shows where people congregate |
| 📊 Cost Calculation | Real-time cost/hour and cumulative waste cost in ₹ or $ |
| 🎛️ Calibration Studio | Per-room brightness threshold tuning via visual dashboard |
| 📡 Multi-Room Support | Monitor up to 2 IP camera rooms simultaneously |
| 🗄️ SQLite Logging | All waste events persisted in a SQLite database |
| 🌐 WebSocket Streaming | Live annotated frames pushed to the dashboard |

πŸ› οΈ Installation & Setup

Prerequisites

  • Python 3.9 or higher
  • Node.js 18+ (for dashboard)
  • A Roboflow account with API key
  • (Optional) CUDA GPU for faster YOLO inference

Step 1 — Clone & Install Python Dependencies

git clone <your-repo-url>
cd watt-watch

pip install -r requirements.txt

The key packages installed:

ultralytics>=8.0.0       # YOLOv8 (person detection)
opencv-python>=4.8.0     # Video processing
torch>=2.0.0             # Deep learning backend
inference-sdk>=1.0.0     # Roboflow API client
fastapi>=0.104.0         # Backend API server
uvicorn>=0.24.0          # ASGI server
websockets>=12.0         # Real-time streaming
pyyaml>=6.0              # Config file parsing

Step 2 β€” Configure API Keys

Open config.yaml and set your Roboflow API key:

appliance:
  roboflow:
    api_key: YOUR_ROBOFLOW_API_KEY_HERE
    light_model: coms-room-light-63vyv/1
    fan_model: ceiling-fan-detection-epfsk/1
    monitor_model: monitor_detection-uj19t-zqnlq/1

Step 3 — (Optional) Configure Twilio for Alerts

alerts:
  twilio:
    enabled: true
    account_sid: YOUR_TWILIO_ACCOUNT_SID
    auth_token: YOUR_TWILIO_AUTH_TOKEN
    from_number: '+1xxxxxxxxxx'
    to_number: '+91xxxxxxxxxx'

Step 4 — Install Dashboard Dependencies

cd dashboard-vite
npm install

βš™οΈ Configuration

All system behavior is controlled by config.yaml. Key sections:

# ── Model selection ──────────────────────────────────
model:
  name: yolov8n.pt          # Switch to yolov8s.pt for higher accuracy
  confidence_threshold: 0.25

# ── Detection settings ───────────────────────────────
detection:
  frame_skip: 1             # Process every frame (increase for speed)
  min_confidence: 0.25

# ── Appliance wattage for cost calculation ───────────
appliance:
  enabled: true
  frame_skip: 20            # Run Roboflow every 20 frames
  wattage:
    light: 40               # Watts per light bulb
    ceiling_fan: 65         # Watts per ceiling fan
    monitor: 35             # Watts per monitor
  electricity_rate: 0.12    # USD per kWh
  electricity_rate_inr: 6.5 # INR per kWh

# ── Alert debouncing ─────────────────────────────────
alerts:
  initial_delay_seconds: 60    # Wait 60s before first alert
  repeat_interval_seconds: 600 # Repeat alert every 10 min

# ── Privacy settings ─────────────────────────────────
privacy:
  enabled: true
  blur_method: pixelate     # Options: pixelate, gaussian, solid
  blur_level: 99

# ── Microzone grid ───────────────────────────────────
microzone:
  enabled: true
  rows: 4
  cols: 4
  decay: 0.98               # Heatmap decay factor
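The microzone settings above imply a simple decaying heatmap update, which can be sketched in pure Python (the real implementation is in src/microzone.py; the function signature here is illustrative):

```python
def update_heatmap(heatmap, centroids, frame_w, frame_h,
                   rows=4, cols=4, decay=0.98):
    """Fade old activity, then bump the grid cell under each person centroid."""
    for r in range(rows):
        for c in range(cols):
            heatmap[r][c] *= decay          # old hotspots cool off over time
    for x, y in centroids:
        r = min(int(y * rows / frame_h), rows - 1)  # pixel -> grid row
        c = min(int(x * cols / frame_w), cols - 1)  # pixel -> grid column
        heatmap[r][c] += 1.0
    return heatmap
```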

🚀 Running the System

Option A — CLI Commands (for testing/video processing)

Process a video file:

python main.py detect data/test_clip.mp4

Run live webcam detection:

python main.py live

Run live on a specific camera:

python main.py live --camera 0

Run benchmark on test clips:

python main.py benchmark

Run intensity calibration on a room:

python main.py calibrate data/test_clip.mp4 --room classroom_1 --samples 30

Check calibration status:

python main.py calibrate --status

Process image (single frame):

python main.py detect test_img.jpg --output result.jpg

Option B — Full System (Backend API + Dashboard)

Step 1: Start the FastAPI Backend

cd api
uvicorn main:app --host 0.0.0.0 --port 8000 --reload

Backend runs at: http://localhost:8000
API docs available at: http://localhost:8000/docs

Step 2: Start the React Dashboard

cd dashboard-vite
npm run dev

Dashboard runs at: http://localhost:5173

Step 3: Connect a Camera

In the dashboard, enter your IP camera stream URL (e.g., http://192.168.0.154:8080/video) and click CONNECT.


📊 Dashboard (Frontend)

The dashboard (built with React + Vite) has 5 tabs:

1. MONITOR Tab

  • Live video feed from up to 2 IP cameras
  • Person count, Light/Fan/Monitor status displayed per room
  • WASTE_DETECTED alert banner when room is empty with appliances ON
  • Privacy mode toggle (GHOST_MODE) — enables/disables face blur
  • Real-time energy load and cumulative waste cost

2. SUMMARY Tab

  • Annual energy projections — kWh/day, savings in INR/year, CO₂/year
  • Last 30 days savings report
  • Per-room breakdown with cost and CO₂ metrics

3. PRIVACY Tab

  • Privacy measures status (face anonymization, data retention)
  • Stakeholder compliance commitments
  • Data retention policy overview

4. CALIBRATE Tab (Luminance Studio)

  • Visual real-time brightness meter for selected room
  • Dark / Medium threshold sliders for day and night modes
  • Drag sliders to tune thresholds and commit changes to config.yaml
  • Shows classification: DARK / MEDIUM / BRIGHT based on live feed

5. DATABASE Tab

  • Browse the SQLite database schema
  • View raw table data (waste events, detection logs)
  • Export and inspect historical energy waste records

🔌 API Endpoints

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/camera/connect | Connect a room camera (start streaming) |
| POST | /api/camera/disconnect | Disconnect a room camera |
| WS | /ws/stream/{room_id} | WebSocket for live frame streaming |
| GET | /api/energy/metrics | Current energy metrics per room |
| GET | /api/energy/dashboard | Annual projections and 30-day summary |
| GET | /api/alerts/events | Recent waste alert events |
| GET | /api/alerts/status | Room status + waste duration |
| GET | /api/calibration | Get current threshold calibration |
| POST | /api/calibration | Update brightness thresholds |
| GET | /api/privacy/assurance | Privacy compliance report |
| GET | /api/database/info | Database statistics |
| GET | /api/database/schema | Database table schema |
| GET | /api/database/rows/{table} | Browse table rows |

💰 Energy Metrics Explained

How cost is calculated:

estimated_watts = (40W if Light is ON) + (65W if Fan is ON) + (35W if Monitor is ON)

cost_per_hour     = (estimated_watts / 1000) × electricity_rate  # in USD
cost_per_hour_inr = (estimated_watts / 1000) × 6.5               # in INR

cumulative_cost = cost_per_hour × (waste_duration_seconds / 3600)

Waste State Definition:

is_waste = (person_count == 0) AND (light == "ON" OR fan == "ON" OR monitor == "ON")

Annual Projections:

kwh_per_day = estimated_watts × 24 / 1000
inr_per_year = kwh_per_day × 365 × electricity_rate_inr
co2_per_year_kg = kwh_per_day × 365 × co2_factor (0.71 kg/kWh)
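With the default wattages and tariff from config.yaml, the formulas above amount to:

```python
WATTAGE = {"light": 40, "ceiling_fan": 65, "monitor": 35}  # watts, from config.yaml

def estimated_watts(light_on, fan_on, monitor_on):
    """Sum the wattage of every appliance currently ON."""
    return (WATTAGE["light"] * light_on
            + WATTAGE["ceiling_fan"] * fan_on
            + WATTAGE["monitor"] * monitor_on)

def cumulative_cost(watts, waste_seconds, rate_per_kwh=0.12):
    """Cost of running `watts` for the waste duration at the given tariff."""
    cost_per_hour = watts / 1000 * rate_per_kwh
    return cost_per_hour * (waste_seconds / 3600)

def is_waste(person_count, light, fan, monitor):
    return person_count == 0 and "ON" in (light, fan, monitor)

# Example: empty room, light + fan left on for one hour at $0.12/kWh
# estimated_watts(True, True, False) -> 105
# cumulative_cost(105, 3600)         -> ~$0.0126
```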

See ENERGY_METRICS.md for the complete calculation documentation.


🔒 Privacy & Anonymization

WattWatch is designed to be privacy-first in compliance with institutional requirements:

  • Haar cascade face detection runs every N frames
  • Detected faces are pixelated (or Gaussian-blurred) with generous padding to obscure the entire head region
  • Raw images are NEVER stored by default (privacy.storage.save_raw: false)
  • Only anonymized thumbnails are saved (for alert evidence)
  • All processing happens locally — no raw video leaves the machine

Privacy configuration:

privacy:
  blur_method: pixelate   # pixelate / gaussian / solid
  pixelate_blocks: 12     # More blocks = finer pixelation
  blur_level: 99          # For gaussian mode
  skip_frames: 3          # Re-detect faces every 3 frames
  storage:
    save_raw: false        # NEVER store raw video
    save_anonymized: false # Only enable for auditing
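A numpy-only sketch of the pixelation step (the project itself uses OpenCV; the block count corresponds to pixelate_blocks above):

```python
import numpy as np

def pixelate(region, blocks=12):
    """Mosaic a face crop by averaging each cell of a blocks x blocks grid."""
    h, w = region.shape[:2]
    out = region.copy()
    for by in range(blocks):
        for bx in range(blocks):
            y0, y1 = by * h // blocks, (by + 1) * h // blocks
            x0, x1 = bx * w // blocks, (bx + 1) * w // blocks
            if y1 > y0 and x1 > x0:  # skip empty cells when region < blocks
                out[y0:y1, x0:x1] = region[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```

Fewer blocks produce a coarser, more anonymous mosaic; more blocks preserve more detail.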

🚨 Alert System

The AlertManager watches each room for waste conditions:

  1. Waste detected → starts a timer
  2. After initial_delay_seconds (default: 60s) → fires the first alert
  3. If waste continues, repeats every repeat_interval_seconds (default: 600s = 10 min)
  4. When the room is occupied or appliances are OFF → resets the timer
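The debounce cycle above fits in a small class (a sketch of the behavior, not the actual AlertManager; the injectable clock is only there to make it testable):

```python
import time

class AlertDebouncer:
    def __init__(self, initial_delay=60, repeat_interval=600, clock=time.monotonic):
        self.initial_delay = initial_delay
        self.repeat_interval = repeat_interval
        self.clock = clock
        self.waste_since = None   # when the current waste episode started
        self.last_alert = None    # when we last fired an alert

    def update(self, is_waste):
        """Call once per tick; returns True when an alert should fire."""
        now = self.clock()
        if not is_waste:          # room occupied again / appliances off: reset
            self.waste_since = self.last_alert = None
            return False
        if self.waste_since is None:
            self.waste_since = now
        if self.last_alert is None:
            if now - self.waste_since >= self.initial_delay:
                self.last_alert = now
                return True       # first alert after the initial delay
        elif now - self.last_alert >= self.repeat_interval:
            self.last_alert = now
            return True           # repeat alert while waste continues
        return False
```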

Alert channels:

  • Twilio SMS — text message to the facility manager
  • Twilio WhatsApp — WhatsApp template message with room name and duration
  • SQLite Database — event persisted to data/wattwatch.db
  • JSON fallback — events saved to output/waste_events.json

Alert message format:

⚠️ WATTWATCH ALERT
Energy waste detected in Room 101!
Duration: 5.2 mins
Lights: ON, Fans: ON, Mon: OFF
Please check the facility.

πŸ‹οΈ Roboflow Model Training Guide

The 3 Roboflow models (light, fan, monitor) were trained using Roboflow's platform. Here's how they were set up:

Steps to train your own models:

  1. Create a Roboflow account at app.roboflow.com
  2. Create a new project → select Object Detection or Classification
  3. Upload images:
     • For the light model: images of your room with the light ON and OFF
     • For the fan model: images of ceiling fans spinning (ON) and still (OFF)
     • For the monitor model: images of monitors powered ON and OFF
  4. Annotate → draw bounding boxes and assign class labels:
     • Light model classes: light-on, light-off (or similar)
     • Fan model classes: fan-on, fan-off
     • Monitor model classes: monitor-on, monitor-off
  5. Train → use Roboflow's auto-train feature (YOLOv8 recommended)
  6. Get the model ID → from the Roboflow dashboard, copy the workspace/project/version string
  7. Update config.yaml:

appliance:
  roboflow:
    api_key: YOUR_API_KEY
    light_model: YOUR-WORKSPACE/YOUR-LIGHT-PROJECT/1
    fan_model: YOUR-WORKSPACE/YOUR-FAN-PROJECT/1
    monitor_model: YOUR-WORKSPACE/YOUR-MONITOR-PROJECT/1

Current models in use:

| Appliance | Roboflow Model ID |
| --- | --- |
| Light | coms-room-light-63vyv/1 |
| Ceiling Fan | ceiling-fan-detection-epfsk/1 |
| Monitor | monitor_detection-uj19t-zqnlq/1 |

Tip: The more diverse your training images (different rooms, lighting conditions, angles), the more accurate your model will be.


🧪 Testing

Test person detection on a single image:

python test_detection.py

Test appliance detection (light/fan) on a test image:

python test_appliance.py

Run detection with max frames limit:

python main.py detect data/clips/occupied.mp4 --max-frames 100

πŸ“ Logging & Output

| File | Contents |
| --- | --- |
| output/detections.json | Per-frame detection results (JSON) |
| output/waste_events.json | Waste alert event log (JSON) |
| output/appliance_status.json | Appliance ON/OFF history per frame |
| output/benchmark_results.json | Benchmark test results |
| logs/fps.log | Frame-by-frame FPS log |
| logs/appliance_debug.log | Raw Roboflow API response debug log |
| data/wattwatch.db | SQLite database (all events + detections) |
| data/alerts/*.jpg | Anonymized thumbnails for waste events |

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/my-feature
  3. Commit changes: git commit -m 'Add my feature'
  4. Push: git push origin feature/my-feature
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License.


πŸ™ Acknowledgements
