Real-Time High-Fidelity Speech Synthesis without GPU
In our recent paper, we propose WG-WaveNet, a fast, lightweight, and high-quality waveform generation model. WG-WaveNet is composed of a compact flow-based model and a post-filter. The two components are jointly trained by maximizing the likelihood of the training data and optimizing loss functions defined in the frequency domain. Because the flow-based model is heavily compressed, WG-WaveNet requires far less computation than other waveform generation models during both training and inference; despite this compression, the post-filter maintains the quality of the generated waveform. Our PyTorch implementation can be trained using less than 8 GB of GPU memory and generates audio samples at a rate of more than 960 kHz on an NVIDIA 1080Ti GPU. Furthermore, even when synthesizing on a CPU, the proposed method generates 44.1 kHz speech waveforms 1.2 times faster than real time. Experiments also show that the quality of the generated audio is comparable to that of other methods.
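The "loss functions defined in the frequency domain" are spectral losses computed between generated and ground-truth waveforms. The repository's implementation is not reproduced here; the following is a minimal PyTorch sketch of a multi-resolution STFT loss of that kind, with the `(fft_size, hop_size, win_size)` triples chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def stft_loss(pred, target, fft_size, hop_size, win_size):
    """Spectral convergence + log STFT-magnitude loss at one resolution."""
    window = torch.hann_window(win_size, device=pred.device)

    def magnitude(x):
        spec = torch.stft(x, fft_size, hop_length=hop_size, win_length=win_size,
                          window=window, return_complex=True)
        return spec.abs().clamp(min=1e-7)  # clamp avoids log(0)

    p, t = magnitude(pred), magnitude(target)
    spectral_convergence = torch.norm(t - p, p="fro") / torch.norm(t, p="fro")
    log_magnitude = F.l1_loss(torch.log(p), torch.log(t))
    return spectral_convergence + log_magnitude

def multi_resolution_stft_loss(pred, target):
    """Average the single-resolution loss over several STFT configurations.

    pred and target are (batch, samples) waveforms. The resolutions below
    are illustrative, not the configuration used in the paper.
    """
    resolutions = [(512, 128, 512), (1024, 256, 1024), (2048, 512, 2048)]
    losses = [stft_loss(pred, target, *r) for r in resolutions]
    return sum(losses) / len(losses)
```

In the paper, frequency-domain losses of this kind are optimized jointly with the flow model's likelihood objective.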
Visit the demo page for audio samples.
Download the LJ Speech dataset. In this example it is placed in data/.
For training, run this:

```bash
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models>
```

To continue training from a pretrained model, add --ckpt_pth:

```bash
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models> --ckpt_pth=<pth/to/pretrained/model>
```

To save training logs, add --log_dir:

```bash
python3 train.py --data_dir=<dir/to/dataset> --ckpt_dir=<dir/to/models> --log_dir=<dir/to/logs>
```
For inference, run this:

```bash
python3 inference.py --ckpt_pth=<pth/to/model> --src_pth=<pth/to/src/wavs> --res_pth=<pth/to/save/wavs>
```
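One rough way to sanity-check the faster-than-real-time claim on your own hardware is to time inference.py over a folder of source wavs and divide the wall-clock time by the total duration of the generated audio. The sketch below is not part of the repository: the directories and checkpoint path are placeholders, and the measured time includes one-off costs such as model loading.

```python
# Rough real-time-factor (RTF) check: wall-clock time of inference.py
# divided by the duration of the audio it writes.
import pathlib
import subprocess
import time

import torchaudio

src_dir, res_dir = "wavs/src", "wavs/out"   # placeholder directories
ckpt = "ckpt/checkpoint.pt"                 # placeholder checkpoint path

start = time.perf_counter()
subprocess.run(["python3", "inference.py",
                f"--ckpt_pth={ckpt}",
                f"--src_pth={src_dir}",
                f"--res_pth={res_dir}"], check=True)
elapsed = time.perf_counter() - start

# Sum the duration of every generated wav.
total_audio = 0.0
for wav in pathlib.Path(res_dir).glob("*.wav"):
    info = torchaudio.info(str(wav))
    total_audio += info.num_frames / info.sample_rate

rtf = elapsed / total_audio
print(f"{total_audio:.1f}s of audio in {elapsed:.1f}s (RTF {rtf:.2f}; "
      "values below 1.0 are faster than real time)")
```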
Work in progress.
We will combine this vocoder with Tacotron2. More information and a Colab demo will be released here.