Implementations of GANs in PyTorch for Pokemon image generation
This repository is an ongoing implementation of shallow GAN architectures to generate Pokemon images using PyTorch.
You can install the project requirements as follows:
git clone https://github.com/frgfm/PokeGAN.git
pip install -r PokeGAN/requirements.txt
or install it as a package:
pip install git+https://github.com/frgfm/PokeGAN.git
There are two available training scripts: main.py for classic DCGAN training, and progan.py for ProGAN training. You can use the --help flag to get more advanced usage instructions.
usage: main.py [-h] [--size SIZE] [--device DEVICE] [--lr LR] [--dropout DROPOUT] [--z-size Z_SIZE]
               [--latent-size LATENT_SIZE] [--wd WEIGHT_DECAY] [--ls LABEL_SMOOTHING]
               [--noise NOISE] [--swap SWAP] [-b BATCH_SIZE] [--epochs EPOCHS] [-j WORKERS]
               [--output-file OUTPUT_FILE]
               data_path

Pokemon GAN Training

positional arguments:
  data_path             path to dataset folder

optional arguments:
  -h, --help            show this help message and exit
  --size SIZE           Image size to produce (default: 64)
  --device DEVICE       device (default: 0)
  --lr LR               initial learning rate (default: 0.001)
  --dropout DROPOUT     dropout rate (default: 0.3)
  --z-size Z_SIZE       number of features fed to the generator (default: 96)
  --latent-size LATENT_SIZE
                        latent feature map size (default: 4)
  --wd WEIGHT_DECAY, --weight-decay WEIGHT_DECAY
                        weight decay (default: 0)
  --ls LABEL_SMOOTHING, --label-smoothing LABEL_SMOOTHING
                        label smoothing (default: 0.1)
  --noise NOISE         Norm of the noise added to labels (default: 0.1)
  --swap SWAP           Probability of swapping labels (default: 0.03)
  -b BATCH_SIZE, --batch-size BATCH_SIZE
                        batch size (default: 32)
  --epochs EPOCHS       number of total epochs to run (default: 400)
  -j WORKERS, --workers WORKERS
                        number of data loading workers (default: 16)
  --output-file OUTPUT_FILE
                        path where to save (default: ./gan.pth)
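For instance, assuming your Pokemon images sit in a local ./data/pokemon folder (a hypothetical path), a training run could look like:

python main.py ./data/pokemon --size 64 -b 32 --epochs 400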
Similar to DCGAN, but with weights initialized from a normal distribution rather than a uniform one. The discriminator and the generator have mirrored architectures for downsampling and upsampling. InstanceNorm was tried in place of BatchNorm, but the latter proved more effective.
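As an illustration of that initialization scheme, a minimal PyTorch sketch (not necessarily the exact code or values used in this repository) could look like:

import torch.nn as nn

def weights_init(m):
    # Convolution weights drawn from a normal distribution instead of the default uniform init
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    # BatchNorm affine parameters: scale around 1, shift at 0
    elif isinstance(m, nn.BatchNorm2d):
        nn.init.normal_(m.weight, mean=1.0, std=0.02)
        nn.init.zeros_(m.bias)

Both networks can then be initialized recursively with generator.apply(weights_init) and discriminator.apply(weights_init).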
Source: Progressive Growing of GANs for Improved Quality, Stability, and Variation, ICLR 2018
Using the idea suggested by ProGAN, the implementation includes a progressive training scheme: you provide the sequence of image sizes to train on ([16, 32, 64] for instance) along with the training settings of each stage ([dict(lr=5e-4, nb_epochs=100), dict(lr=2e-4, nb_epochs=200)] for instance).
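As a rough sketch of how such a scheme can be driven (the train_stage and grow calls in the comments are hypothetical placeholders, not this repository's API, and the third schedule entry is made up for the example):

img_sizes = [16, 32, 64]
schedules = [dict(lr=5e-4, nb_epochs=100),
             dict(lr=2e-4, nb_epochs=200),
             dict(lr=2e-4, nb_epochs=200)]

for img_size, stage in zip(img_sizes, schedules):
    # Each stage trains at a fixed resolution with its own settings,
    # after which the networks are grown by one up/down-sampling block, e.g.:
    # train_stage(generator, discriminator, img_size=img_size, **stage)
    # generator.grow(); discriminator.grow()
    print(f"stage {img_size}x{img_size}: lr={stage['lr']}, epochs={stage['nb_epochs']}")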
Several tricks were tested to improve training, among them the label-related options exposed by the CLI (label smoothing, label noise and random label swapping); a sketch of these is given below.
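A minimal sketch of what those label tricks could look like when building the discriminator's targets for real images (the values mirror the CLI defaults; this is illustrative, not the repository's exact implementation):

import torch

def real_targets(batch_size, smoothing=0.1, noise=0.1, swap=0.03):
    # Label smoothing: use 1 - smoothing instead of hard 1s for real images
    targets = torch.full((batch_size,), 1.0 - smoothing)
    # Label noise: add a small random perturbation to each target
    targets += noise * torch.rand(batch_size)
    # Label swapping: flip a small fraction of real labels to fake
    targets[torch.rand(batch_size) < swap] = 0.0
    return targets

These targets can then be fed to a binary cross-entropy loss against the discriminator's output on real images.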
Other tricks are planned to be implemented soon.
Generated samples and gradient flow plots are provided for each experiment; the samples of the last experiment exhibit mode collapse.
Regarding issues, use the following format for the title:
[Topic] Your Issue name
Example:
[State saving] Add a feature to automatically save and load model states
Distributed under the MIT License. See LICENSE for more information.