The repository is for the paper: Robotic Plot-Scale Peanut Yield Estimation using Transformer-based Image Stitching and Detection
Fig. 1: Diagram of the proposed peanut phenotyping workflow involving four stages: data collection, training dataset generation, model training, and phenotypic trait extraction.
Fig. 2: The procedure of the image stitching algorithm using LoFTR.
Fig. 3: Illustration of the improved RT-DETR detector. (a) Overview of the customized RT-DETR detector; (b) ResNet18-FasterBlock backbone; (c) Upsampling based on DySample; (d) ADown module for downsampling.
Fig. 4: Illustration of plot-scale pod counting.
Clone the repository to your local machine:
git clone https://github.com/UGA-BSAIL/Plot-scale-peanut-counting.git
Create a virtual environment and install the required packages:
conda create -n rt-detr-peanut python=3.8
conda activate rt-detr-peanut
pip install ultralytics
pip install scikit-learn
pip install kornia
We modified the original YOLOv8 repository to support additional modules (yolov8-BerryNet\ultralytics\nn\extra_modules). To make ultralytics point to the modified repository, uninstall the pip-installed package:
pip uninstall ultralytics
The paper releases a dataset for training and validating the peanut detection model, available on Kaggle:
We provide a script to stitch the sequential images based on the LoFTR matching method.
python script/image_stitching/loftr-stitching-gpu.py
Parameters:
- folder_path = '/path/to/image_folder'
- Folder structure: folder_path/sequences_folder/image_1, image_2, ...
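The geometric core of sequential stitching is estimating a homography between each adjacent image pair from matched keypoints (supplied by LoFTR in the script above) and chaining the transforms back to the first frame. The sketch below illustrates that step with a plain NumPy DLT solver; it is not the repository's implementation, and the function names are our own.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography mapping src -> dst via the DLT algorithm.

    src, dst: (N, 2) arrays of matched keypoints (N >= 4), e.g. from LoFTR.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def chain_homographies(pairwise):
    """Compose per-pair homographies so every frame maps into frame 0's plane."""
    chained = [np.eye(3)]
    for H in pairwise:
        chained.append(chained[-1] @ H)
    return chained
```

With the chained transforms, each frame can be warped onto a common canvas to form the stitched plot image.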
We provide two scripts to stitch the sequential images of single/double views.
* script/image_stitching/metashape_single_view.py
* image_stitching/metashape-stitching_left_right.py
Open Metashape and load the script to process multiple plots.
Parameters:
- frame_path = '/path/to/image_folder'
- save_path = '/path/to/save_orthomosaic_folder'
The model architecture of the customized RT-DETR is defined in customized_rtdetr/ultralytics/cfg/models/rt-detr/rtdetr-resnet18-FasterBlock-ADown-Dysample.yaml.
To train the model, run the script train-detr-r18-fasterBlock-ADown-Dysample-peanut-1280.py (or the 640 variant) under the customized_rtdetr folder:
cd customized_rtdetr
python train-detr-r18-fasterBlock-ADown-Dysample-peanut-1280.py
Before running the script, modify the dataset path and the model configuration file path in the script. You can try other yaml files for different model architectures.
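Training scripts built on ultralytics typically point to a dataset YAML in the standard Ultralytics format. The fragment below is an illustrative placeholder, not the repository's actual file; adjust the paths and class names to your own data layout.

```yaml
# Hypothetical dataset config in the Ultralytics format (placeholder paths).
path: /path/to/peanut-dataset   # dataset root
train: images/train             # training images, relative to path
val: images/val                 # validation images, relative to path
nc: 1                           # number of classes
names:
  0: peanut_pod
```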
The pre-trained models are available at weight:
- customized_rtdetr:
- yolov8:
For model inference, run the script plot-scale_detection.py under the script folder:
python script/plot-scale_detection/plot-scale_detection.py
Parameters:
- model_path = " " # path to the trained detection model
- image_folder = " " # path to the image folder
- save_path = " " # path to the save folder
If you find this work or code useful, please cite:
@article{li2025plot,
  title={Plot-scale peanut yield estimation using a phenotyping robot and transformer-based image analysis},
  author={Li, Zhengkun and Xu, Rui and Brown, Nino and Tillman, Barry L and Li, Changying},
  journal={Smart Agricultural Technology},
  volume={12},
  pages={101154},
  year={2025},
  publisher={Elsevier}
}
@inproceedings{li2024robotic,
  title={Robotic Plot-scale Peanut Counting and Yield Estimation using LoFTR-based Image Stitching and Improved RT-DETR},
  author={Li, Zhengkun and Xu, Rui and Li, Changying and Tillman, Barry and Brown, Nino},
  booktitle={2024 ASABE Annual International Meeting},
  pages={1},
  year={2024},
  organization={American Society of Agricultural and Biological Engineers}
}



