This is a modification of "Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations", adapted to the decompression problem.
The model is trained on the CSTR VCTK Corpus.
First, run change_bit_rate.sh on VCTK-Corpus to compress the wavs to 8 kbps mp3. Then run mp3_to_wav.sh to decode the mp3s back to wav. The second step can be omitted if you can generate spectrograms directly from mp3.
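The two preprocessing steps could be sketched as follows, assuming ffmpeg is installed; the directory names and the exact flags here are illustrative, and the repo's change_bit_rate.sh and mp3_to_wav.sh may differ in detail.

```shell
#!/bin/sh
# Sketch of the two preprocessing steps (assumes ffmpeg is on PATH).
# Directory layout is an assumption, not the repo's actual structure.

SRC_DIR="VCTK-Corpus/wav48"   # original VCTK wavs (assumed location)
MP3_DIR="mp3_8kbps"           # step 1 output: 8 kbps mp3s
WAV_DIR="wav_decoded"         # step 2 output: wavs decoded from mp3

mkdir -p "$MP3_DIR" "$WAV_DIR"

# Step 1: compress each wav to an 8 kbps mp3.
for f in "$SRC_DIR"/*.wav; do
  [ -e "$f" ] || continue                 # skip if no wavs are present
  base=$(basename "$f" .wav)
  ffmpeg -y -loglevel error -i "$f" -b:a 8k "$MP3_DIR/$base.mp3"
done

# Step 2: decode the mp3s back to wav (omit if you can work from mp3 directly).
for f in "$MP3_DIR"/*.mp3; do
  [ -e "$f" ] || continue
  base=$(basename "$f" .mp3)
  ffmpeg -y -loglevel error -i "$f" "$WAV_DIR/$base.wav"
done
```

The round trip through 8 kbps mp3 produces the degraded inputs, while the original wavs serve as the restoration targets.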
The code is similar to the base repo but has been moved to Jupyter notebooks.
Download the pretrained model (found in the AutoVC repo) and move it to the implementation directory. I use code from r9y9's wavenet_vocoder for spectrogram generation and synthesis.