Evaluation isn't working correctly #8

@thepate94227

Description

Hi,
thank you for your code. It helped me a lot. I used part of your notebook file mask_rcnn.ipynb, converted it to a .py file, and split it into two files: train and load+evaluation. Everything works so far, except the evaluation, which isn't working at all.

This is the part of the code that doesn't work:

import numpy as np

predictions = extra_utils.compute_multiple_per_class_precision(
    model, inference_config, dataset_test,
    number_of_images=60, iou_threshold=0.5)
complete_predictions = []

for shape in predictions:
    complete_predictions += predictions[shape]
    print("Test", type(shape))
    print("{} ({}): {}".format(shape, len(predictions[shape]), np.mean(predictions[shape])))

print("--------")
print("average: {}".format(np.mean(complete_predictions)))

When I use that part of the code, this is the only output I get:

Test <class 'str'>
knot (60): 0.0
--------
average: 0.0

My test set contains 60 images, and it takes over 5 minutes for the loop to finish, but this is the only print and I get an average of 0.
Why is that?
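To rule out the loop itself, I re-ran just the aggregation step with made-up values (the class names and numbers here are hypothetical), and it behaves as expected, so the zeros must already be coming out of compute_multiple_per_class_precision:

```python
import numpy as np

# Hypothetical per-class precision values, just to check the aggregation loop
predictions = {"knot": [1.0, 0.0], "loop": [0.5]}
complete_predictions = []

for shape in predictions:
    complete_predictions += predictions[shape]
    print("{} ({}): {}".format(shape, len(predictions[shape]), np.mean(predictions[shape])))

print("average: {}".format(np.mean(complete_predictions)))  # average: 0.5
```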

Also, in your code you sometimes use model.find_last()[1], but the [1] seems to be wrong. When I load my model with it, I get errors; when I remove the [1], it works fine.
If you need my whole code, I will copy it here.
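For reference, this is how I made the loading tolerant of both behaviours. My assumption is that older versions of find_last() returned a (log_dir, checkpoint_path) tuple, while the version I have returns just the checkpoint path, so indexing with [1] breaks; a small helper (hypothetical name) normalizes both cases:

```python
def latest_checkpoint(found):
    """Normalize the return value of model.find_last():
    if it is a (log_dir, checkpoint_path) tuple, take the path;
    if it is already a path string, return it unchanged."""
    if isinstance(found, tuple):
        return found[1]
    return found

# Usage (model is assumed to be a loaded MaskRCNN instance):
# model.load_weights(latest_checkpoint(model.find_last()), by_name=True)
```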
