A comprehensive computer vision project for detecting and tracking cricket-related objects in videos using YOLOv8. This system can detect and track:
- Ball - Cricket ball detection and tracking
- Stumps - Wicket stumps identification
- Players - Cricket players on the field
- Umpires - Match officials detection
- Multi-Object Detection: Simultaneous detection of 4 different cricket objects
- Real-time Tracking: Uses ByteTrack for consistent object tracking across frames
- Video Processing: Full video annotation with bounding boxes and labels
- Statistics: Detailed object count statistics and frame-by-frame analysis
- Easy Annotation: Interactive tool for labeling training data
- Flexible Training: Custom YOLOv8 training pipeline with validation
```
├── frame_extract.py       # Extract frames from video files
├── label_tool.py          # Interactive annotation tool for labeling frames
├── train_yolov8.py        # Train YOLOv8 model on annotated dataset
├── annotate_video.py      # Process videos with trained model
├── requirements.txt       # Python dependencies
├── cricket_dataset/       # Dataset directory
│   ├── images/            # Training and validation images
│   │   ├── train/         # Training images
│   │   └── val/           # Validation images
│   ├── labels/            # YOLO format annotations
│   │   ├── train/         # Training labels
│   │   └── val/           # Validation labels
│   └── cricket.yaml       # Dataset configuration
└── annotated_videos/      # Output directory for processed videos
```
- Python 3.8 or higher
- CUDA-compatible GPU (recommended for training)
```bash
git clone https://github.com/yourusername/cricket-object-detection-yolov8.git
cd cricket-object-detection-yolov8
```

Create and activate a virtual environment:

```bash
python -m venv cricketenv

# On Windows:
cricketenv\Scripts\activate
# On macOS/Linux:
source cricketenv/bin/activate
```

Install the dependencies:

```bash
pip install -r requirements.txt
```

The YOLOv8 base models are downloaded automatically on first run, so no manual pre-download is required.

Extract frames from your cricket videos for annotation:

```bash
python frame_extract.py
```

- Place your video files in the project directory
- The script extracts frames at regular intervals
- Frames are saved to the `cricket_dataset/images/` directory
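The "regular intervals" sampling can be sketched as follows. This is a hypothetical illustration of the interval logic, not the actual `frame_extract.py` code; the real script presumably reads frames with OpenCV's `VideoCapture`:

```python
def sample_indices(total_frames: int, fps: float, every_sec: float = 1.0) -> list:
    """Return the frame indices to keep when saving one frame every
    `every_sec` seconds of video."""
    step = max(1, round(fps * every_sec))
    return list(range(0, total_frames, step))

# In frame_extract.py the kept frames would then be read with
# cv2.VideoCapture and written out with cv2.imwrite. At 25 fps and a
# one-second interval, a 100-frame clip keeps frames 0, 25, 50, 75.
print(sample_indices(100, 25.0))  # → [0, 25, 50, 75]
```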
Use the interactive annotation tool to label objects:
```bash
python label_tool.py
```

Annotation Controls:
- 'n': Next image
- 'p': Previous image
- '0': Select Ball class
- '1': Select Stumps class
- '2': Select Player class
- '3': Select Umpire class
- 's': Save current annotations
- 'c': Clear all annotations for current image
- 'q': Quit annotation tool
How to Annotate:
- Select a class (0-3)
- Click and drag to draw bounding boxes around objects
- Save regularly with 's'
- Navigate between images with 'n' and 'p'
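Each saved annotation becomes one line per bounding box in the standard YOLO text format (class id plus normalised centre and size). A minimal sketch of that conversion, assuming pixel coordinates from the click-and-drag step:

```python
# Class ids match the annotation tool's keys: 0=Ball, 1=Stumps, 2=Player, 3=Umpire.
def to_yolo_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (top-left x1,y1 to bottom-right x2,y2)
    into a YOLO label line: 'class cx cy w h', all normalised to [0, 1]."""
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A ball drawn from (0, 0) to (100, 50) on a 200x100 frame:
print(to_yolo_line(0, 0, 0, 100, 50, 200, 100))
# → 0 0.250000 0.250000 0.500000 0.500000
```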
Train your custom object detection model:
```bash
python train_yolov8.py
```

Training Features:
- Automatic train/validation split
- Progress monitoring with metrics
- Model checkpoints saved automatically
- Best model saved to `cricket_detection/training_run/weights/best.pt`
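A training run of this shape can be reproduced with the Ultralytics API along the following lines. The argument values here are illustrative assumptions, not necessarily the defaults hard-coded in `train_yolov8.py`:

```python
# Illustrative hyperparameters; the actual values live in train_yolov8.py.
TRAIN_ARGS = dict(
    data="cricket_dataset/cricket.yaml",  # dataset config shipped in this repo
    epochs=100,
    imgsz=640,
    batch=16,             # reduce this if you hit CUDA out-of-memory errors
    project="cricket_detection",
    name="training_run",  # best weights land in cricket_detection/training_run/weights/best.pt
)

def run_training(weights="yolov8n.pt"):
    from ultralytics import YOLO  # deferred import: heavy dependency
    model = YOLO(weights)         # base model, downloaded automatically if missing
    return model.train(**TRAIN_ARGS)
```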
Annotate new videos using your trained model:
```bash
python annotate_video.py
```

Output Features:
- Bounding boxes around detected objects
- Class labels and confidence scores
- Object tracking IDs
- Frame-by-frame statistics
- Progress monitoring during processing
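Under the hood this combines detection with ByteTrack tracking. A sketch of the per-frame loop, assuming the Ultralytics `model.track` API and the weight path produced by the training step:

```python
CLASS_NAMES = {0: "Ball", 1: "Stumps", 2: "Player", 3: "Umpire"}

def track_video(video_path, weights="cricket_detection/training_run/weights/best.pt"):
    """Yield (frame_index, class_name, track_id) for every tracked detection."""
    from ultralytics import YOLO  # deferred import: heavy dependency
    model = YOLO(weights)
    # stream=True yields one result per frame; ByteTrack keeps IDs stable
    results = model.track(source=video_path, tracker="bytetrack.yaml", stream=True)
    for i, result in enumerate(results):
        boxes = result.boxes
        if boxes.id is None:      # tracker has not assigned IDs on this frame yet
            continue
        for cls, tid in zip(boxes.cls.tolist(), boxes.id.tolist()):
            yield i, CLASS_NAMES[int(cls)], int(tid)
```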
The system is designed to detect:
- Ball: Small, fast-moving cricket ball
- Stumps: Wooden wicket posts
- Players: Cricket players in various poses
- Umpires: Match officials on the field
Edit `cricket_dataset/cricket.yaml` to modify:
- Dataset paths
- Class names
- Training parameters
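For reference, a sketch of the fields an Ultralytics dataset config like `cricket.yaml` typically contains; treat the exact paths as assumptions about this repo's layout:

```python
# Fields follow the Ultralytics dataset-config schema; paths are assumed.
DATASET_CONFIG = {
    "path": "cricket_dataset",   # dataset root
    "train": "images/train",     # training images, relative to root
    "val": "images/val",         # validation images, relative to root
    "names": {0: "Ball", 1: "Stumps", 2: "Player", 3: "Umpire"},
}

def write_config(dest="cricket_dataset/cricket.yaml"):
    import yaml  # PyYAML, listed in requirements.txt
    with open(dest, "w") as f:
        yaml.safe_dump(DATASET_CONFIG, f, sort_keys=False)
```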
Modify training parameters in `train_yolov8.py`:
- Epochs, batch size, learning rate
- Image size, confidence threshold
- Validation split ratio
- Saved to the `annotated_videos/` directory
- Include bounding boxes, labels, and tracking IDs
- Real-time statistics overlay
- Model performance metrics
- Validation accuracy
- Loss curves and mAP scores
- Fork the repository
- Create a feature branch (`git checkout -b feature/new-feature`)
- Commit your changes (`git commit -am 'Add new feature'`)
- Push to the branch (`git push origin feature/new-feature`)
- Create a Pull Request
Anuj Dev Singh
- Project Creator & Lead Developer
- Python 3.8+
- OpenCV
- PyTorch
- Ultralytics YOLOv8
- Supervision (for tracking)
- NumPy
- PyYAML
This project is licensed under the MIT License - see the LICENSE file for details.
- CUDA Out of Memory: Reduce batch size in training
- Video Not Opening: Check video codec compatibility
- Model Not Found: Ensure training completed successfully
- Annotation Tool Issues: Check OpenCV installation
- Open an issue on GitHub
- Check existing issues for solutions
- Ensure all dependencies are correctly installed
- Real-time camera feed processing
- Advanced tracking algorithms
- Mobile app integration
- Web-based annotation interface
- Automated player action recognition
- Match statistics generation
Note: This project requires significant computing resources for training. For best results, use a CUDA-compatible GPU.