# AgStream - Plant Classification Pipeline for NVIDIA Edge Devices

## 🌾 Product Description

AgStream is a **real-time inference pipeline** for crop and weed classification, optimized for NVIDIA Jetson devices. The system demonstrates end-to-end model optimization from PyTorch → ONNX → TensorRT deployment with **DeepStream 6.4**.

**Key Capabilities:**
- **Real-time Processing:** Classify RTSP video streams with metadata extraction
- **Edge Deployment:** Optimized for NVIDIA Jetson platforms
- **Model Optimization:** PyTorch → ONNX → TensorRT conversion
- **Agricultural Focus:** 83 crop and weed categories from the CropAndWeed dataset

---

## 🔧 Hardware & Software Requirements

### Target Platform
- **Device:** NVIDIA Jetson Orin Nano (Developer Kit)
- **JetPack:** 6.2 [L4T 36.4.3]
- **DeepStream SDK:** 6.4 (Triton multi-arch)
- **CUDA:** 12.6
- **TensorRT:** 10.3
- **Memory:** 8 GB RAM
- **OpenCV:** 4.8.0 (GPU support depends on build)

---

## 📊 Performance Metrics

Evaluation used the PyTorch models on CPU, with inputs resized to 256×256.

### Classification Accuracy (CropAndWeed Dataset)

| Model     | 83-Class | Binary (Crop/Weed) | 9-Class | 24-Class | Model Size |
|-----------|----------|--------------------|---------|----------|------------|
| MobileNet | 67.2%    | 85.2%              | 84.6%   | 43.6%    | 28.3 MB    |
| ResNet18  | 67.2%    | 82.1%              | 81.5%   | 41.0%    | 135 MB     |
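The hierarchical accuracies in the table come from remapping the 83 fine-grained labels onto coarser groupings before scoring. A minimal sketch of the idea; the mapping below is a hypothetical fragment, not the real CropAndWeed hierarchy:

```python
# Hypothetical fine-label -> binary-group mapping (illustrative only).
FINE_TO_BINARY = {
    "sugar_beet": "crop",
    "maize": "crop",
    "chamomile": "weed",
    "thistle": "weed",
}

def binary_accuracy(preds, labels):
    """Score predictions after collapsing fine labels to crop/weed."""
    hits = sum(
        FINE_TO_BINARY[p] == FINE_TO_BINARY[t]
        for p, t in zip(preds, labels)
    )
    return hits / len(labels)

# A fine-grained confusion (thistle vs. chamomile) still counts as
# correct at the binary level, which is why coarser accuracies are higher.
preds = ["sugar_beet", "thistle", "maize"]
labels = ["sugar_beet", "chamomile", "maize"]
print(binary_accuracy(preds, labels))  # → 1.0
```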

### Inference Latency (CPU)

| Model     | Average Latency | Std Dev |
|-----------|-----------------|---------|
| MobileNet | 55.1 ms         | 26.7 ms |
| ResNet18  | 84.9 ms         | 51.4 ms |
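Latency figures like these can be gathered with a simple timing loop; a sketch, with `run_inference` as a placeholder for a single model forward pass:

```python
import statistics
import time

def run_inference():
    # Stand-in workload; replace with model(input_tensor) in practice.
    sum(i * i for i in range(10_000))

# Time repeated runs and report mean / standard deviation in ms.
latencies_ms = []
for _ in range(50):
    start = time.perf_counter()
    run_inference()
    latencies_ms.append((time.perf_counter() - start) * 1000)

mean_ms = statistics.mean(latencies_ms)
std_ms = statistics.stdev(latencies_ms)
print(f"avg={mean_ms:.1f}ms std={std_ms:.1f}ms")
```

In practice a few warm-up iterations are usually discarded before timing, since the first passes include one-time allocation costs.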

> ⚠ Accuracy improves with hierarchical classification (fewer classes).

---

## 🚀 Quick Start

### 1. Environment Setup

```bash
bash scripts/run_dev_jetson.sh
```

### 2. Run Pipeline

```bash
# Start RTSP server (terminal 1)
python src/rtsp/rtsp_server.py

# Run classification pipeline (terminal 2)
python src/deepstream/pipelines/deepstream_pipeline_cpu.py
# or
python src/deepstream/pipelines/deepstream_pipeline_gpu.py

# Optional: run metadata extraction
python src/deepstream/pipelines/access_metadata.py
```

---

## 🧠 Research and Development

### 1. Model Conversion & Optimization

```bash
# Export PyTorch to ONNX
python scripts/export_to_onnx.py resnet18
python scripts/export_to_onnx.py mobilenet
# TensorRT engine generation is automatic
```

### 2. Performance Benchmarking

```bash
python src/deepstream/speed_benchmark.py
```

---

## 🎯 Pipeline Architecture

**RTSP Stream → H.264 Decode → Video Convert → Stream Mux → AI Inference (TensorRT) → OSD Overlay → JPEG Encode → Frame Output**

**Processing Details:**
- Input: 256×256 RGB frames from RTSP
- Normalization: mean = [0.5, 0.5, 0.5], std = [0.25, 0.25, 0.25]
- Batch Size: 1 (real-time)
- Precision: FP16 (default; configurable)
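For the DeepStream `nvinfer` element, that normalization translates into `net-scale-factor` and `offsets` (nvinfer computes `y = net-scale-factor * (x - offset)` on 0–255 pixel values, so std 0.25 gives 1/(255 × 0.25) ≈ 0.0156863 and mean 0.5 gives 127.5). A hedged config sketch; the model path is hypothetical:

```ini
[property]
# hypothetical model path
onnx-file=models/mobilenet.onnx
# 1 = classifier
network-type=1
# 2 = FP16 precision
network-mode=2
batch-size=1
# 0 = RGB input
model-color-format=0
infer-dims=3;256;256
# (x/255 - 0.5) / 0.25  ==  0.0156863 * (x - 127.5)
net-scale-factor=0.0156863
offsets=127.5;127.5;127.5
```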

---

## 📁 Project Structure

* `src/` – Pipeline logic, inference modules, conversion scripts, evaluation
* `models/` – Trained models (PyTorch, ONNX, TensorRT)
* `scripts/` – Execution and export scripts
* `env/` – Environment setup per target (Jetson / CPU)
* `configs/` – Configuration files for the pipeline and models
* `assets/` – Test data (images, videos)
* `docs/` – Documentation

---

## 🔬 Technical Details

- Dataset: CropAndWeed (WACV 2023), 83 categories
- Training: PyTorch
- Export: PyTorch → ONNX (opset 17)
- Optimization: ONNX → TensorRT
- Deployment: DeepStream Python API
- Container: `nvcr.io/nvidia/deepstream-l4t:6.4-triton-multiarch`, Python 3.10, OpenCV 4.11 with CUDA

**Development Focus:**
- Model optimization & performance analysis
- Edge deployment & real-time inference
- End-to-end video processing & metadata extraction
- Benchmarking: latency & throughput

**Code Quality:**

```bash
isort src/ && black src/ && flake8 src/
```

**Model Evaluation:**

```bash
python src/evaluation/run_evaluation.py
python src/evaluation/run_hierarchical_evaluation.py
```

---

⭐ If you found this project useful, consider giving it a star!