# 🛡️ Real-Time Weapon Detection System for Security Surveillance

This project is a computer vision system designed to detect weapons in video streams or files. It is intended to enhance public safety by identifying dangerous objects such as firearms in real-time or pre-recorded surveillance footage.

---

## 🎯 Target Weapons for Detection

The system can detect the following:

- **Primary Weapon Classes (Required)**
  - 🔫 **Firearms**: handguns, pistols, rifles.
  - ✅ **No Weapon**: normal or safe scenarios.

> **Note:** The system is trained to minimize false alarms from harmless objects, such as toy guns.

---

## 🔧 Project Modules

### 1️⃣ `video_inference.py`

- **Purpose:** Processes a video and produces an output video with red bounding boxes around detected weapons.
- **How it works:**
  - Loads the trained Faster R-CNN model (`detector_epochX.pth`).
  - Reads a video file frame by frame.
  - Runs object detection on each frame.
  - Draws bounding boxes and class labels on detected weapons.
  - Saves the resulting video as `.avi` or `.mp4`.
- **Configurable paths:**

  ```python
  input_video_path = "input_video.mp4"            # Set your original video here
  output_video_path = "output.avi"                # Video with weapon detections
  checkpoint_path = "models/detector_epoch3.pth"  # Trained model path
  ```
- **Run Command:**

  ```bash
  python src/video_inference.py
  ```
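
The confidence-filtering step of the loop above (keep a detection only if its score clears a threshold, then draw it) can be sketched as follows. This is an illustrative helper, not the script's actual code: `filter_detections`, the label strings, and the 0.5 threshold are all assumptions.

```python
# Illustrative sketch of the per-frame filtering step. The helper name,
# label strings, and threshold are assumptions, not the script's real code.

def filter_detections(boxes, scores, labels, threshold=0.5):
    """Keep only detections whose confidence exceeds the threshold."""
    kept = []
    for box, score, label in zip(boxes, scores, labels):
        if score >= threshold and label != "no_weapon":
            kept.append({"box": box, "score": score, "label": label})
    return kept

# Example: two candidate detections, one below the threshold.
detections = filter_detections(
    boxes=[(10, 20, 110, 220), (300, 40, 360, 90)],
    scores=[0.92, 0.31],
    labels=["firearm", "firearm"],
)
# Only the high-confidence box would be drawn on the frame.
```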

---

### 2️⃣ `extract_weapon_frames.py`

* **Purpose:** Extracts frames from a video in which weapons are detected and saves them as images with bounding boxes applied.
* **How it works:**

  * Loads the trained Faster R-CNN model.
  * Reads the input video frame by frame.
  * For frames with weapons detected above a confidence threshold, saves the frame to a folder.
  * Names each frame with the timestamp in seconds at which the weapon appears.
* **Configurable paths:**

  ```python
  input_video_path = "input_video.mp4"            # Original video path
  frames_output_dir = "weapon_frames/"            # Folder to save extracted frames
  checkpoint_path = "models/detector_epoch3.pth"  # Trained model
  ```
* **Run Command:**

  ```bash
  python src/extract_weapon_frames.py
  ```
* **Output:** Each frame image contains red boxes around weapons and is named `frame_XXs.jpg`, where `XX` is the timestamp in seconds.
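
The timestamp-based naming described above boils down to converting a frame index and the video's frame rate into whole seconds. A minimal sketch, assuming a hypothetical helper name (the script's own code may differ):

```python
# Sketch of the frame-naming convention for extracted frames.
# frame_filename is a hypothetical helper, not the script's actual function.

def frame_filename(frame_index: int, fps: float) -> str:
    """Map a frame index to a name like frame_12s.jpg (timestamp in whole seconds)."""
    seconds = int(frame_index / fps)
    return f"frame_{seconds}s.jpg"

print(frame_filename(360, 30))  # frame 360 of a 30 fps video -> frame_12s.jpg
```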

---

### 3️⃣ `train_detector.py`

* **Purpose:** Trains the Faster R-CNN model on your annotated dataset.
* **How it works:**

  * Reads images and JSON annotations from `data/train` and `data/val`.
  * Creates a PyTorch dataset using `JsonDetectionDataset`.
  * Trains a Faster R-CNN model.
  * Saves checkpoints periodically and after each epoch.
* **Run Command (paths & params are passed as flags):**

  ```bash
  python src/train_detector.py --data-dir data --epochs 5 --batch-size 2 --device cuda
  ```
* **Resume Training from Checkpoint:**

  ```bash
  python src/train_detector.py --data-dir data --epochs 5 --batch-size 2 --device cuda --resume models/checkpoint_epoch2_batch1000.pth
  ```
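
Mid-epoch checkpoints follow the `checkpoint_epoch{E}_batch{B}.pth` pattern shown in the resume command above, so the epoch and batch to resume from can be recovered from the filename. A sketch under that assumption (the parsing helper is hypothetical; the real script may read these values from the saved checkpoint dictionary instead):

```python
import re

# Hypothetical helper: recover epoch/batch from a checkpoint filename such as
# models/checkpoint_epoch2_batch1000.pth. The actual script may instead store
# these values inside the checkpoint itself.

def parse_checkpoint_name(path: str):
    match = re.search(r"checkpoint_epoch(\d+)_batch(\d+)\.pth$", path)
    if match is None:
        raise ValueError(f"Unrecognized checkpoint name: {path}")
    return int(match.group(1)), int(match.group(2))

epoch, batch = parse_checkpoint_name("models/checkpoint_epoch2_batch1000.pth")
# epoch 2, batch 1000: training would continue from this point.
```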

---

## ⚙️ Installation & Setup

### Installation

```bash
# Clone the repository (into a weapon-detection folder)
git clone https://github.com/Pushkkaarr/weapon-detection-system.git weapon-detection

# Navigate to the project folder
cd weapon-detection

# Install dependencies
pip install -r requirements.txt
```

---

## 🗂️ Directory Structure

```
weapon-detection/
│
├─ src/
│  ├─ train_detector.py          # Training script
│  ├─ video_inference.py         # Weapon detection on videos
│  ├─ extract_weapon_frames.py   # Extract frames with weapons
│  ├─ evaluate_model.py          # Evaluate accuracy
│  └─ dataset.py                 # JsonDetectionDataset class
│
├─ data/
│  ├─ train/images/              # Training images
│  ├─ train/labels/              # JSON annotations
│  ├─ val/images/                # Validation images
│  └─ val/labels/                # JSON annotations
│
├─ models/                       # Saved model checkpoints
├─ weapon_frames/                # Extracted frames with weapons
└─ README.md
```
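
Given the layout above, each image under `images/` presumably has a same-named JSON annotation under the sibling `labels/` folder. A sketch of that pairing, assuming that naming convention holds (the file name is made up for illustration; see `src/dataset.py` for how `JsonDetectionDataset` actually matches images to annotations):

```python
from pathlib import Path

# Assumed convention: data/train/images/foo.jpg is annotated by
# data/train/labels/foo.json. Check src/dataset.py for the real matching logic.

def label_path_for(image_path: str) -> str:
    img = Path(image_path)
    return str(img.parent.parent / "labels" / (img.stem + ".json"))

print(label_path_for("data/train/images/scene_001.jpg"))
# -> data/train/labels/scene_001.json
```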

---

## 📝 How to Update Paths

* **Videos:** Change `input_video_path` in `video_inference.py` and `extract_weapon_frames.py` to the video you want to process.
* **Output folder:** Change `output_video_path` or `frames_output_dir` in the scripts.
* **Trained Model:** Ensure the correct checkpoint is loaded (`detector_epochX.pth`).

---

## 🚀 Running the System

1. **Train the model (optional if already trained):**

   ```bash
   python src/train_detector.py --data-dir data --epochs 5 --batch-size 2 --device cuda
   ```

2. **Evaluate model accuracy:**

   ```bash
   python src/evaluate_model.py
   ```

3. **Detect weapons in video:**

   ```bash
   python src/video_inference.py
   ```

4. **Extract frames with detected weapons:**

   ```bash
   python src/extract_weapon_frames.py
   ```

---

## 📂 Output

* **Demo video:** shows the video processing pipeline and the model in action (the video is compressed, hence the low quality).

https://github.com/user-attachments/assets/95e69b89-df52-4771-8398-c9480013dfc0

* **Weapon Detection Video:** Saved at `output.avi` or your configured path in `video_inference.py`.

https://github.com/user-attachments/assets/1b8fa097-8a50-4812-a261-809f0ff177b0

* **Weapon Frames:** Saved in `weapon_frames/` with names like `frame_12s.jpg` for the frame at 12 seconds where a weapon is detected.

  <img width="1688" height="833" alt="image" src="https://github.com/user-attachments/assets/b0ee3316-799f-4e18-b0a8-d1f550edcb95" />