PyTorch 1.6 has native support for automatic mixed precision (AMP) training: https://pytorch.org/blog/pytorch-1.6-released/
Should we take advantage of this? In particular, the memory savings from mixed precision would let us fit larger batches, which would be nice for encoder and synthesizer training.
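For reference, this is roughly what the AMP pattern from that release looks like in a training loop. The model, data, and hyperparameters below are placeholders, not our actual encoder or synthesizer code; the point is just the `autocast`/`GradScaler` wrapping, which degrades to a no-op on CPU-only machines via `enabled=False`.

```python
import torch
from torch import nn

# Only enable AMP when a GPU is present; with enabled=False the
# autocast/GradScaler calls below are no-ops, so the same loop runs on CPU.
use_amp = torch.cuda.is_available()
device = "cuda" if use_amp else "cpu"

# Placeholder model and loss; the real encoder/synthesizer would slot in here.
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for step in range(3):
    x = torch.randn(8, 16, device=device)
    y = torch.randn(8, 4, device=device)
    optimizer.zero_grad()
    # Forward pass runs ops in fp16 where it is numerically safe.
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = loss_fn(model(x), y)
    # Scale the loss to avoid fp16 gradient underflow; scaler.step()
    # unscales gradients before the optimizer update.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The nice part is that adopting this is mostly mechanical: wrap the forward pass, route `backward()` and the optimizer step through the scaler, and the rest of the training code stays unchanged.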