Project author: nashory

Project description:
:fire::fire: PyTorch implementation of "Progressive growing of GANs (PGGAN)" :fire::fire:
Language: Python
Repository: git://github.com/nashory/pggan-pytorch.git
Created: 2017-11-13T05:01:40Z
Community: https://github.com/nashory/pggan-pytorch

License: MIT License

PyTorch Implementation of “Progressive growing GAN (PGGAN)”

PyTorch implementation of PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION
YOUR CONTRIBUTION IS INVALUABLE FOR THIS PROJECT :)


What’s different from official paper?

  • original: trans(G) -> trans(D) -> stab / my code: trans(G) -> stab -> trans(D) -> stab
  • no use of a NIN layer: unnecessary layers (like low-resolution blocks) are automatically flushed out as the network grows.
  • used torch.nn.utils.weight_norm for the to_rgb_layer of the generator.
  • No need to prepare the CelebA data; just come with your own dataset :)
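The weight-normalization point above refers to the reparameterization w = g * v / ||v|| (Salimans & Kingma, 2016) that torch.nn.utils.weight_norm applies to a layer's weights. A minimal pure-Python sketch of that math (illustrative only, not the repo's layer code):

```python
import math

def weight_norm(v, g):
    """Reparameterize a weight vector as w = g * v / ||v||:
    the direction comes from v, the magnitude from the scalar g."""
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

w = weight_norm([3.0, 4.0], g=2.0)
print(w)  # [1.2, 1.6] -- note ||w|| == g == 2.0
```

Decoupling magnitude from direction this way stabilizes optimization, which is why it is applied to the generator's to-RGB layers.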

How to use?

[step 1.] Prepare dataset
The author of progressive GAN released the CelebA-HQ dataset, and support for it is being worked on in the upstream branch this repo was forked from. For this version, just make sure that all images are children of the folder you declare in config.py. One warning: if you use multiple classes, they should look similar, or training will produce poor results.

  ---------------------------------------------
  The training data folder should look like:
  <train_data_root>
  |--Your Folder
     |--image 1
     |--image 2
     |--image 3 ...
  ---------------------------------------------
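A quick way to sanity-check the layout above is to walk the training root and count the images in each immediate subfolder (folder and file names below are placeholders, not the repo's actual data):

```python
import os
import tempfile

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def count_images(train_data_root):
    """Count image files in each immediate subfolder of the root,
    matching the <train_data_root>/<class_folder>/<image> layout."""
    counts = {}
    for folder in sorted(os.listdir(train_data_root)):
        path = os.path.join(train_data_root, folder)
        if not os.path.isdir(path):
            continue
        counts[folder] = sum(
            1 for f in os.listdir(path)
            if os.path.splitext(f)[1].lower() in IMAGE_EXTS
        )
    return counts

# build a throwaway tree mirroring the structure shown above
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "YourFolder"))
for name in ("image1.jpg", "image2.jpg", "image3.png"):
    open(os.path.join(root, "YourFolder", name), "w").close()

print(count_images(root))  # {'YourFolder': 3}
```

If a class folder reports zero images, the loader will have nothing to train on for that class.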

[step 2.] Prepare environment using virtualenv

  • you can easily set up the PyTorch (v0.3) and TensorFlow environment using virtualenv.
  • CAUTION: if you have trouble installing PyTorch, install it manually using pip. [PyTorch Install]
  • take your time with the install, and make sure all dependencies of PyTorch, as well as TensorFlow, are installed.

    $ virtualenv --python=python2.7 venv
    $ . venv/bin/activate
    $ pip install -r requirements.txt
    $ conda install pytorch torchvision -c pytorch

[step 3.] Run training

  • edit config.py to change parameters. (don’t forget to change the path to the training images.)
  • specify which GPU devices are to be used, and change the “n_gpu” option in config.py to enable multi-GPU training.
  • run and enjoy!
  (example)
  If using a single GPU (device_id = 0):
  $ vim config.py --> change "n_gpu=1"
  $ CUDA_VISIBLE_DEVICES=0 python trainer.py
  If using multiple GPUs (device_id = 1,3,7):
  $ vim config.py --> change "n_gpu=3"
  $ CUDA_VISIBLE_DEVICES=1,3,7 python trainer.py
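Note that n_gpu must agree with how many device ids you export: in the multi-GPU example, three ids (1,3,7) mean n_gpu=3. A small helper (hypothetical, not part of the repo) can derive the value from the environment variable so the two never drift apart:

```python
import os

def n_gpu_from_env(default=1):
    """Derive the number of usable GPUs from CUDA_VISIBLE_DEVICES.
    Unset falls back to `default`; an empty value means no GPUs."""
    devices = os.environ.get("CUDA_VISIBLE_DEVICES")
    if devices is None:
        return default
    devices = devices.strip()
    return len(devices.split(",")) if devices else 0

os.environ["CUDA_VISIBLE_DEVICES"] = "1,3,7"
print(n_gpu_from_env())  # 3
```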

[step 4.] Display on tensorboard (skip this part for now)

  • you can check the results on tensorboard.


  $ tensorboard --logdir repo/tensorboard --port 8888
  Then open <host_ip>:8888 in your browser.

[step 5.] Generate fake images using linear interpolation

  $ CUDA_VISIBLE_DEVICES=0 python generate_interpolated.py
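As the script name suggests, the fake images come from linearly interpolating between two latent vectors z0 and z1 and feeding each intermediate latent to the generator. A minimal pure-Python sketch of the interpolation itself (vector size and step count are arbitrary here):

```python
def lerp(z0, z1, t):
    """Linearly interpolate elementwise between two latent vectors."""
    return [(1.0 - t) * a + t * b for a, b in zip(z0, z1)]

def interpolation_path(z0, z1, steps):
    """Return `steps` latents evenly spaced from z0 to z1, inclusive."""
    return [lerp(z0, z1, i / (steps - 1)) for i in range(steps)]

path = interpolation_path([0.0, 0.0], [1.0, 2.0], steps=5)
print(path[2])  # midpoint: [0.5, 1.0]
```

Passing each latent in `path` through the trained generator yields the smooth morphing sequence between two fake faces.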

Experimental results

Results at higher resolutions (larger than 256x256) will be updated soon.

Generated Images







Loss Curve


To-Do List (will be implemented soon)

  • Support WGAN-GP loss
  • training-resuming functionality
  • loading the CelebA-HQ dataset (for 512x512 and 1024x1024 training)

Compatibility

  • CUDA v8.0 (if you don’t have it, don’t worry)
  • Tesla P40 (you may need more than 12GB of memory; if not, please adjust the batch_table in dataloader.py)
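The batch_table adjustment mentioned above usually means shrinking the entries of a resolution-to-batch-size mapping: as progressive growing raises the resolution, each sample uses more memory, so the batch must shrink. A hypothetical table in the spirit of the one in dataloader.py (the exact values there may differ):

```python
# hypothetical resolution -> batch size table; halve entries at the
# higher resolutions if you run out of GPU memory
batch_table = {4: 64, 8: 64, 16: 32, 32: 16, 64: 8, 128: 4, 256: 2}

def batch_size_for(resolution):
    """Look up the batch size for the current training resolution."""
    return batch_table[resolution]

print(batch_size_for(64))  # 8
```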

Acknowledgement


Author

MinchulShin, @nashory

Contributors

DeMarcus Edwards, @Djmcflush
MakeDirtyCode, @MakeDirtyCode
Yuan Zhao, @yuanzhaoYZ
zhanpengpan, @szupzp