# HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
In our paper, we proposed HiFi-GAN: a GAN-based model capable of generating high-fidelity speech efficiently. We provide our implementation and pretrained models as open source in this repository.
## Abstract
Several recent works on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms.
Although such methods improve the sampling efficiency and memory usage,
their sample quality has not yet reached that of autoregressive and flow-based generative models.
In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis.
As speech audio consists of sinusoidal signals with various periods,
we demonstrate that modeling the periodic patterns of audio is crucial for enhancing sample quality.
A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method
demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than
real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen
speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times
faster than real-time on CPU with comparable quality to an autoregressive counterpart.
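To put the reported speedups in wall-clock terms, a short arithmetic sketch (speedup figures taken from the abstract above; the helper function is ours, not part of the repository):

```python
# What "N times faster than real-time" means in wall-clock terms.
SAMPLE_RATE = 22_050  # Hz, the output audio rate reported in the abstract

def ms_per_second_of_audio(speedup):
    """Wall-clock milliseconds needed to synthesize 1 s of audio
    at a given real-time speedup factor."""
    return 1000.0 / speedup

v100_ms = ms_per_second_of_audio(167.9)  # V1 generator on a single V100 GPU
cpu_ms = ms_per_second_of_audio(13.4)    # small-footprint version on CPU

assert 5.9 < v100_ms < 6.0   # ~5.96 ms of compute per second of audio
assert 74 < cpu_ms < 75      # ~74.6 ms of compute per second of audio

# Equivalently, the GPU figure corresponds to roughly 3.7 million
# output samples generated per wall-clock second:
samples_per_wall_second = 167.9 * SAMPLE_RATE
assert samples_per_wall_second > 3_700_000
```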
Visit our demo website for audio samples.
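The observation that speech is built from sinusoids with various periods motivates the paper's multi-period discriminator, which reshapes the 1-D waveform into a 2-D array of width `p` so that samples one period apart line up column-wise before 2-D convolutions are applied. A minimal sketch of that reshaping (the helper name is ours, not the repository's, and real configs use reflection padding rather than the zero padding shown here):

```python
import math

def reshape_by_period(x, p):
    """Pad a 1-D signal to a multiple of period p, then view it as a
    2-D grid of rows of length p, as the multi-period discriminator
    does before applying 2-D convolutions."""
    pad = (-len(x)) % p          # zero padding for brevity
    x = x + [0.0] * pad
    return [x[i:i + p] for i in range(0, len(x), p)]

# A pure tone whose period equals p lines up column-wise after reshaping:
period = 8
tone = [math.sin(2 * math.pi * t / period) for t in range(32)]
rows = reshape_by_period(tone, period)

# every row is (numerically) identical because the tone repeats
# every `period` samples
assert all(abs(a - b) < 1e-9 for r in rows for a, b in zip(r, rows[0]))
```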
## Training

Download and extract the LJSpeech dataset, and move all wav files to `LJSpeech-1.1/wavs`. Then run:

```
python train.py --config config_v1.json
```

To train the V2 or V3 generator, replace `config_v1.json` with `config_v2.json` or `config_v3.json`.

Checkpoints and a copy of the configuration file are saved in the `cp_hifigan` directory by default. You can change the path by adding the `--checkpoint_path` option.
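Each config file bundles the generator architecture and the training hyperparameters. As a rough orientation, the V1 config contains fields along these lines (values quoted from memory of the repository; treat `config_v1.json` in the repo as authoritative):

```json
{
  "batch_size": 16,
  "learning_rate": 0.0002,
  "upsample_rates": [8, 8, 2, 2],
  "upsample_initial_channel": 512,
  "resblock_kernel_sizes": [3, 7, 11],
  "segment_size": 8192,
  "num_mels": 80,
  "hop_size": 256,
  "sampling_rate": 22050
}
```

Note that the upsample rates multiply out to 256, matching the mel hop size, so the generator turns each mel frame into exactly one hop's worth of waveform samples.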
Figure: validation loss during training with the V1 generator.
## Pretrained Models

You can also use the pretrained models we provide. Download the pretrained models; the details of each folder are as follows:
Folder Name | Generator | Dataset | Fine-Tuned |
---|---|---|---|
LJ_V1 | V1 | LJSpeech | No |
LJ_V2 | V2 | LJSpeech | No |
LJ_V3 | V3 | LJSpeech | No |
LJ_FT_T2_V1 | V1 | LJSpeech | Yes (Tacotron2) |
LJ_FT_T2_V2 | V2 | LJSpeech | Yes (Tacotron2) |
LJ_FT_T2_V3 | V3 | LJSpeech | Yes (Tacotron2) |
VCTK_V1 | V1 | VCTK | No |
VCTK_V2 | V2 | VCTK | No |
VCTK_V3 | V3 | VCTK | No |
UNIVERSAL_V1 | V1 | Universal | No |
We provide the universal model with discriminator weights that can be used as a base for transfer learning to other datasets.
## Fine-Tuning

1. Generate mel-spectrograms in numpy format using Tacotron2 with teacher forcing. The file name of each generated mel-spectrogram should match its audio file, with the extension changed to `.npy`:
   - Audio file: `LJ001-0001.wav`
   - Mel-spectrogram file: `LJ001-0001.npy`
2. Create the `ft_dataset` folder and copy the generated mel-spectrogram files into it.
3. Run:

```
python train.py --fine_tuning True --config config_v1.json
```

For other command line options, please refer to the training section.
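When preparing mel-spectrogram files for fine-tuning, the naming convention and the expected mel length can be sanity-checked with a short script. This is our own sketch, not part of the repository; the hop size of 256 is taken from the 22.05 kHz configs, so verify it against the config file you train with:

```python
from pathlib import Path

HOP_SIZE = 256  # hop length used by the 22.05 kHz configs (verify against your config)

def mel_filename(wav_path):
    """Name a fine-tuning mel file after its audio file,
    swapping the extension for .npy."""
    return Path(wav_path).with_suffix(".npy").name

def expected_mel_frames(num_samples, hop_size=HOP_SIZE):
    """Approximate number of mel frames for a clip, one frame per hop
    (exact framing depends on padding)."""
    return num_samples // hop_size

assert mel_filename("LJ001-0001.wav") == "LJ001-0001.npy"
# a 1-second, 22,050-sample clip yields roughly 86 frames at hop size 256
assert expected_mel_frames(22_050) == 86
```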
## Inference from wav file

1. Make a `test_files` directory and copy wav files into it.
2. Run:

```
python inference.py --checkpoint_file [generator checkpoint file path]
```

Generated wav files are saved in `generated_files` by default. You can change the path by adding the `--output_dir` option.

## Inference for end-to-end speech synthesis

1. Make a `test_mel_files` directory and copy the generated mel-spectrogram files into it.
2. Run:

```
python inference_e2e.py --checkpoint_file [generator checkpoint file path]
```

Generated wav files are saved in `generated_files_from_mel` by default. You can change the path by adding the `--output_dir` option.

## Acknowledgements

We referred to WaveGlow, MelGAN, and Tacotron2 to implement this.