Short: An end-to-end computer-vision pipeline that trains a YOLO model to detect vehicles, applies lightweight tracking+counting in video, computes queue length/density, and adapts traffic-signal green time using a simple control rule. This README explains repository layout, how to set up and run the notebooks, the core logic (detection → tracking → queue → adaptive timing), and common troubleshooting tips.
This project demonstrates a practical pipeline for traffic monitoring and adaptive signal control using an object detector (YOLOv8) and simple rule-based control logic. The goal is to show how detection outputs can feed a small control law that changes the green-light duration based on measured queue length or vehicle density.
It is intentionally kept reproducible and modular so each team member can work on: dataset preparation / model training / inference & logic / visualization.
- Train a YOLOv8 detector on a traffic dataset (cars, buses, trucks, bikes).
- Run inference on video frames to get bounding boxes and class labels.
- Track objects over frames (ID assignment) and compute counts & queue lengths.
- Compute adaptive green-light duration using a linear control rule.
- Save annotated output video and simple plots (queue length vs time).
Adaptive-Traffic-Signal-YOLO/
│
├── 01_Model_Training.ipynb # Traffic_YOLO.ipynb (training + dataset steps)
├── 02_Traffic_Logic.ipynb # ITCS.ipynb (inference, tracking, queue logic)
│
├── data/ # optional: ignored by git if large
│ ├── test_video.mp4
│ └── best.pt # trained model (optional to store here)
│
├── runs/ # annotated videos, plots (tracked in git via .gitkeep)
│ ├── annotated_output.mp4
│ ├── counts_log.csv
│ ├── detections.json
│ └── queue_plot.png
│
├── .gitignore
├── requirements.txt
├── README.md
└── LICENSE
Note: keep `data/` and `runs/` in `.gitignore` by default to avoid pushing large files to GitHub.
- Create & activate a Python venv (or use conda).
- Install requirements: `pip install -r requirements.txt`.
- Create a `.env` with your API key(s) and ensure `.env` is in `.gitignore`.
python -m venv .venv
# Windows
.venv\Scripts\activate
# macOS / Linux
source .venv/bin/activate
pip install -r requirements.txt

Add python-dotenv, ultralytics, roboflow, opencv-python, matplotlib, numpy, pandas, and any tracker libraries you used (e.g., sort or opencv-contrib-python) to `requirements.txt`.
Create .env at repo root containing keys (example for Roboflow):
ROBOFLOW_API_KEY=your_key_here
Load it in notebooks with python-dotenv:
from dotenv import load_dotenv
import os
load_dotenv()
api_key = os.getenv('ROBOFLOW_API_KEY')

Do not commit `.env`; add `.env` to `.gitignore`.
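With the key loaded, a Roboflow dataset download cell typically looks like the sketch below. The workspace name, project slug, and version number are placeholders; substitute your own from the Roboflow dashboard:

```python
import os
from dotenv import load_dotenv
from roboflow import Roboflow

load_dotenv()
rf = Roboflow(api_key=os.getenv('ROBOFLOW_API_KEY'))

# 'your-workspace', 'traffic-detection', and version 1 are placeholders.
project = rf.workspace('your-workspace').project('traffic-detection')
dataset = project.version(1).download('yolov8')  # exports images/labels + data.yaml
print(dataset.location)  # path to the downloaded dataset folder
```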
If your model file (best.pt) > 100MB, enable Git LFS and track .pt:
git lfs install
git lfs track "*.pt"
git add .gitattributes

Alternatively, upload the model to Google Drive, Kaggle, or Roboflow and place a download snippet in the notebook.
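For Google Drive hosting, a minimal download cell might look like this (assumes `gdown` is added to `requirements.txt`; the file ID below is a placeholder):

```python
import gdown

# Placeholder: replace with the shareable file ID of your uploaded best.pt.
FILE_ID = 'YOUR_DRIVE_FILE_ID'
gdown.download(f'https://drive.google.com/uc?id={FILE_ID}', 'data/best.pt', quiet=False)
```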
- `01_Model_Training.ipynb`: dataset download & training
  - Download the dataset (Roboflow or local). If using Roboflow, ensure `.env` contains the key.
  - Inspect the dataset and adjust `data.yaml` if necessary.
  - Run training with YOLOv8 (Ultralytics API). Save `best.pt`.
- `02_Traffic_Logic.ipynb`: inference, tracking, adaptive logic
  - Place `test_video.mp4` in `data/` (or change the path in the notebook).
  - Load `best.pt` (or a public weights URL).
  - Run inference frame-by-frame, pass detections to the tracker, compute queue metrics, and save the annotated video + plots (a minimal sketch follows below).
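A minimal sketch of that frame-by-frame loop, assuming Ultralytics YOLOv8 and OpenCV (tracking and queue logic omitted here; see the core-logic section below):

```python
import cv2
from ultralytics import YOLO

model = YOLO('data/best.pt')                 # trained weights
cap = cv2.VideoCapture('data/test_video.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('runs/annotated_output.mp4',
                      cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, imgsz=640, conf=0.4, verbose=False)
    out.write(results[0].plot())             # frame with boxes/labels drawn

cap.release()
out.release()
```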
There are clearly marked `# CONFIG` cells where you set paths, weights, and parameters like `BASE_GREEN`, `K` (gain), and `QUEUE_THRESHOLD`.
- Why YOLOv8? Fast, production-friendly, and easy to train with the Ultralytics API. It fits well for video pipelines where per-frame latency matters.
- Dataset: use a Roboflow export or a custom dataset with `train/val/test` splits and a `data.yaml` file specifying classes and paths.
- Hyperparams to watch: `imgsz`, `batch`, `epochs`, and augmentation. For small datasets, use transfer learning by loading a pretrained backbone (a training sketch follows the tips below).
Tips:

- Start with `epochs=50` and observe mAP; reduce if overfitting.
- Use the `val` set for early stopping.
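For reference, a minimal training cell using the Ultralytics API might look like this (paths and hyperparameters are starting points, not the project's exact settings):

```python
from ultralytics import YOLO

# Start from a pretrained backbone (transfer learning) rather than from scratch.
model = YOLO('yolov8n.pt')

# data.yaml points at the train/val/test folders and lists the class names.
results = model.train(data='data.yaml', epochs=50, imgsz=640, batch=16)

# The best checkpoint is written to runs/detect/train*/weights/best.pt;
# copy it to data/best.pt for the inference notebook.
```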
- Detections: the model returns `[x1, y1, x2, y2, conf, cls]` per object per frame.
- Tracker: we use a lightweight online tracker (e.g., SORT, ByteTrack-like logic, or OpenCV's `cv2.TrackerCSRT_create()` wrappers). The tracker maintains persistent IDs so you can count unique vehicles crossing zones.
- Counting & queue:
  - Define a counting line or a polygonal zone in frame coordinates.
  - When a tracked object crosses the line in the direction of interest, increment counts.
  - Compute queue length as either:
    - the number of vehicles inside a pre-defined queue zone, or
    - the sum of distances of vehicles from the stop-line, normalized by lane length (for a density estimate).
- Implementation detail (pseudo), with concrete helper sketches after the control-rule notes below:
for frame in video:
    detections = model(frame)
    tracks = tracker.update(detections)
    for t in tracks:
        if crosses_count_line(t):
            counts += 1
    queue_len = len([t for t in tracks if in_queue_zone(t)])
    green_time = BASE_GREEN + K * queue_len

A simple linear control rule is used here:

$$ GreenTime = \max(MIN\_GREEN,\ Base + K \cdot QueueLength) $$
`Base` is the baseline green time (configurable). `K` is the tunable gain (seconds per vehicle). Suggested starting values: `Base=10s`, `K=2s/vehicle`. `MIN_GREEN` and `MAX_GREEN` enforce safety/time limits.
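For concreteness, here is a minimal sketch of the helpers the pseudocode assumes, plus the clamped rule. The queue polygon, counting-line height, and the track's `box`/`prev_box` fields are illustrative placeholders, not fixed APIs; adapt them to your tracker and frame geometry:

```python
import numpy as np
import cv2

# Placeholder geometry: measure these coordinates in your own video frames.
QUEUE_ZONE = np.array([[100, 400], [540, 400], [540, 700], [100, 700]], dtype=np.float32)
COUNT_LINE_Y = 450  # y-coordinate of the counting line, in pixels

def centroid(box):
    """Center point of an [x1, y1, x2, y2] box."""
    x1, y1, x2, y2 = box[:4]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def in_queue_zone(track):
    """True if the track's current centroid lies inside the queue polygon."""
    return cv2.pointPolygonTest(QUEUE_ZONE, centroid(track.box), False) >= 0

def crosses_count_line(track):
    """True if the centroid crossed the counting line since the last frame
    (assumes the track keeps its previous box; downward crossings only)."""
    _, y_prev = centroid(track.prev_box)
    _, y_now = centroid(track.box)
    return y_prev < COUNT_LINE_Y <= y_now

def compute_green_time(queue_len, base=10, k=2, min_green=7, max_green=45):
    """Linear control rule, clamped to the configured safety limits."""
    return min(max_green, max(min_green, base + k * queue_len))
```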
Why this simple rule? It's explainable, robust, and easy to implement for a demo. For production, consider PID or RL-based controllers.
- Save per-frame queue length to a CSV: `timestamp, queue_len, green_time`.
- Plot `queue_len` and `green_time` over time to validate controller responsiveness (see the logging sketch below).
- Metrics to track: detection mAP, false positives/negatives on key frames, average queue-length reduction, and safety constraints (e.g., min wait time).
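A possible logging/plotting cell, assuming a `rows` list that the video loop fills with one dict per frame (the keys below mirror the CSV columns):

```python
import pandas as pd
import matplotlib.pyplot as plt

rows = []  # inside the video loop, append e.g.:
# rows.append({'timestamp': frame_idx / fps, 'queue_len': queue_len, 'green_time': green_time})

df = pd.DataFrame(rows, columns=['timestamp', 'queue_len', 'green_time'])
df.to_csv('runs/counts_log.csv', index=False)

# Overlaying both series shows whether green time tracks demand.
fig, ax = plt.subplots()
ax.plot(df['timestamp'], df['queue_len'], label='queue_len (vehicles)')
ax.plot(df['timestamp'], df['green_time'], label='green_time (s)')
ax.set_xlabel('time (s)')
ax.legend()
fig.savefig('runs/queue_plot.png', dpi=150)
```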
- Recommended: host model weights on Google Drive / a Kaggle dataset / Roboflow and add a download cell in the training notebook. Avoid pushing large `.pt` or `.mp4` files to GitHub unless using Git LFS.
- Provide a `data/README.md` with instructions for team members to download the model and test video.
- `ModuleNotFoundError: roboflow` → `pip install roboflow` and add it to `requirements.txt`.
- KeyError / None when loading the env key → ensure `.env` is in the project root and `load_dotenv()` is called before `os.getenv()`.
- `RuntimeError: CUDA out of memory` → lower `imgsz` or `batch` size, or run on CPU for the demo (slower).
- `git push` rejected (file too large) → the file is >100MB; use Git LFS or remove the file and host it externally.
- Tracker IDs switch frequently → reduce detection noise by raising the detection confidence threshold and enabling NMS; consider a stronger tracker.
- Keep `data/` and `runs/` ignored. Store small demo assets in `assets/`.
- Use branches for features: `feature/training`, `feature/inference`, `feature/visuals`.
- Add `CODEOWNERS` or a short `CONTRIBUTING.md` describing the role of each member and the code-review process.
Suggested short blurb for GitHub description: Adaptive-Traffic-Signal-YOLO — YOLOv8-based traffic detection + lightweight tracking for adaptive signal timing. Group project.
CONFIG = {
'weights_path': 'data/best.pt',
'video_path': 'data/test_video.mp4',
'output_path': 'output/annotated.mp4',
'imgsz': 640,
'conf_thresh': 0.4,
'iou_thresh': 0.5,
'BASE_GREEN': 10, # seconds
'K': 2, # seconds per vehicle
'MIN_GREEN': 7,
'MAX_GREEN': 45
}

- Use GitHub Desktop: create the repo, copy files, commit with the message `Initial commit: notebooks + README`, and publish.
If you'd like to improve this project, here are a few ideas:
- Replace linear rule with a PID controller or RL agent for green-time optimization.
- Add multi-lane support and per-lane queue estimation.
- Replace tracker with a ReID-based tracker for long-term ID stability.
This repository uses the MIT license. See LICENSE file.
Group members: add GitHub handles in CONTRIBUTORS.md or below:
- Member 1 — @Rishy-09
- Member 2 — @MoHiT05os
- Member 3 — @shubh-bhateja
- Member 4 — @Ishika0424