Pretrained yolov5s weights yield poorer performance than randomly initialized weights for the same number of epochs (100).
When trained for 250 epochs, on both SpriteNet dataset versions 2 and 5, pretrained weights yielded better performance than randomly initialized ones.
The SSD MobileNet V2 FPNLite 640x640 model from https://github.com/satmonkey/MeteorDL was considered, but upon inspection of its performance on the COCO dataset and its parameter count, it was found to be inferior to the yolov5 model series from https://github.com/ultralytics/yolov5.
yolov5 was used rather than v8 because v5 focuses more on deployment on edge devices.
For spritenetv5, the maxpixel variants achieved performance similar to the model trained on the original detection images.
Stretched images (aspect ratio 1:1) yield better results.
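The 1:1 stretch can be sketched with a plain nearest-neighbour resize in numpy (a real pipeline would use something like cv2.resize; the 640x640 target size here is an assumption, not taken from the notes):

```python
import numpy as np

def stretch_to_square(img, size=640):
    # nearest-neighbour resize that ignores aspect ratio, i.e. the image is
    # stretched to 1:1 rather than letterboxed (padded to preserve ratio)
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size  # source row index for each output row
    cols = np.arange(size) * w // size  # source col index for each output col
    return img[rows][:, cols]

frame = np.zeros((480, 640), dtype=np.uint8)  # hypothetical 4:3 camera frame
print(stretch_to_square(frame).shape)  # (640, 640)
```

The stretch trades geometric distortion for full use of the network's input resolution, which these notes found to help.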
yolov5 determines the best epoch by calculating which one has the highest weighted combination of metrics (precision, recall, mAP@0.5, mAP@0.5:0.95): https://github.com/ultralytics/yolov5/issues/8701
A custom fitness function can be created, example below.
yolov5/utils/metrics.py: line 17
from:
w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
to:
w = [0.5, 0.5, 0.0, 0.0] # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
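For context, the weight vector above is used inside yolov5's fitness helper as a dot product with the per-epoch metrics; a minimal numpy sketch of that weighted combination (the metric values in the example are made up):

```python
import numpy as np

def fitness(x, w=(0.0, 0.0, 0.1, 0.9)):
    # x: array of shape (n, 4), columns [P, R, mAP@0.5, mAP@0.5:0.95]
    # returns one weighted score per row; the epoch with the highest
    # score is saved as best.pt
    return (np.asarray(x)[:, :4] * np.asarray(w)).sum(1)

# two hypothetical epochs: the second has lower P/R but higher mAP@0.5:0.95
metrics = [[0.9, 0.8, 0.6, 0.4],
           [0.8, 0.7, 0.6, 0.5]]
print(fitness(metrics))  # default weights favour the second epoch
```

With the modified weights [0.5, 0.5, 0.0, 0.0], the same comparison would favour the first epoch, since only precision and recall would count.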
Running validation on the model trained on spritenet-maxpixel-v2-pretrained gave similar results with custom and default thresholds.
yolo11 is being tested as a replacement for yolov5 since it reports greater mAP on the COCO dataset with lower latency.
Training with yolo11m took 45 min on a dataset of around 2300 images.
yolo11 models seem to offer no improvement in training results compared to the yolov5s model; they often tend to have slightly worse results.
A custom yolov5 architecture seems unnecessary, as the model adjusts itself based on the number of classes.
yolov5m training took 41 min on a dataset of around 2300 images.
yolo11s took around 25 min, with slightly worse performance than yolov5s.
Currently both the maxpixel and detection-only datasets achieve similar performance. The expected outcome was that detection images would do better than maxpixel ones, as there is much less clutter (non-zero-value pixels) in the image. But it seems that moving objects (clouds) can create bright streaks on detection images, which the model can mistake for sprites.
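Assuming "maxpixel" here means a per-pixel maximum composited over a stack of video frames (the usual meaning in meteor-detection pipelines; not stated explicitly in these notes), the operation is a one-liner in numpy, and the example shows how a moving bright feature accumulates into a streak:

```python
import numpy as np

def maxpixel(frames):
    # per-pixel maximum over a stack of frames: each output pixel keeps the
    # brightest value it ever had during the clip
    return np.max(np.stack(frames), axis=0)

frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
frames[0][1, 1] = 200  # a bright object drifting one pixel per frame
frames[1][1, 2] = 210
frames[2][1, 3] = 220
print(maxpixel(frames)[1])  # the object's path survives as a streak
```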
ultralytics output:
PRO TIP 💡 Replace 'model=yolov5s.pt' with new 'model=yolov5su.pt'.
YOLOv5 'u' models are trained with https://github.com/ultralytics/ultralytics and feature improved performance vs standard YOLOv5 models trained with https://github.com/ultralytics/yolov5.
Analyzing the number of stars on sprite images is currently halted, as there are nights on which the software didn't properly detect stars.
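A crude stand-in for the star counting (a hypothetical sketch; the actual detection software's method is not described in these notes) is to count bright pixels that are strict local maxima of their 3x3 neighbourhood:

```python
import numpy as np

def count_stars(img, thresh=100):
    # count pixels brighter than thresh that also dominate their 8 neighbours;
    # a rough point-source detector, nothing like a real star extractor
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, constant_values=0)
    h, w = img.shape
    # stack the 8 shifted neighbour views of every pixel
    neigh = np.stack([padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    return int(np.sum((img > thresh) & (img > neigh.max(axis=0))))

sky = np.zeros((16, 16))
sky[3, 4] = 180   # two synthetic point sources
sky[10, 12] = 150
print(count_stars(sky))  # 2
```

Nights where a detector like this returns near zero would be the ones to exclude from the star-count analysis.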