These things are low priority given the current schedule.
- When training VGG on CUB-200, the training and validation loss decrease, but the training and validation accuracy remain stagnant. This can be reproduced by running `py train_baseline.py --gpus 1 --precision 16 --dataset cub200 --arch vgg --batch_size 32 --overfit_batches 10`.
- Training on CelebA is not implemented. CelebA is a multi-label classification task (40 binary attributes per image), so changes are needed in the pipeline, which currently supports only ordinary multi-class classification.
Completing these is low priority since CIFAR is predominantly used in the paper, and reproducing at least some experiments for each attack is more important.
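For reference, the core change for CelebA is swapping the multi-class head and loss for multi-label equivalents. A minimal sketch of the idea in PyTorch follows; the feature dimension, variable names, and batch shapes are illustrative assumptions, not taken from the repo's code:

```python
# Sketch: adapting a multi-class pipeline to CelebA's 40 binary attributes
# (multi-label classification). Names and the 512-dim feature size are
# illustrative assumptions, not the repo's actual code.
import torch
import torch.nn as nn

NUM_ATTRS = 40  # CelebA annotates 40 binary attributes per image

# Multi-class: Linear(..., num_classes) + CrossEntropyLoss.
# Multi-label: Linear(..., NUM_ATTRS) + BCEWithLogitsLoss (one sigmoid per attribute).
head = nn.Linear(512, NUM_ATTRS)
criterion = nn.BCEWithLogitsLoss()

features = torch.randn(8, 512)                          # dummy backbone features
targets = torch.randint(0, 2, (8, NUM_ATTRS)).float()   # 0/1 attribute labels

logits = head(features)
loss = criterion(logits, targets)

# Accuracy is also per-attribute: threshold each sigmoid at 0.5
# independently, rather than taking an argmax over classes.
preds = (torch.sigmoid(logits) > 0.5).float()
accuracy = (preds == targets).float().mean()
```

Metrics and any label-loading code would need the same multi-label treatment; everything else in the training loop can stay as is.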