Code for "Layered Neural Rendering for Retiming People in Video."
This repository contains training code for the examples in the SIGGRAPH Asia 2020 paper “Layered Neural Rendering for Retiming People in Video.”
This is not an officially supported Google product.
This code has been tested with PyTorch 1.4 and Python 3.8.
Install the required packages with pip:

pip install -r requirements.txt

or create a conda environment:

conda env create -f environment.yml
Download an example dataset (e.g. "reflection"):

bash ./datasets/download_data.sh reflection

or pass "all" to download every example dataset:

bash ./datasets/download_data.sh all
Download the pretrained keypoint-to-UV model:

bash ./scripts/download_kp2uv_model.sh

The pretrained model will be saved at ./checkpoints/kp2uv/latest_net_Kp2uv.pth.
Then prepare the IUV maps for the downloaded data:

bash datasets/prepare_iuv.sh ./datasets/reflection
To train a model on a video (e.g. "reflection"), run:

python train.py --name reflection --dataroot ./datasets/reflection --gpu_ids 0,1

Intermediate training results are saved to ./checkpoints/reflection/web/index.html.
You can find more scripts in the scripts directory, e.g. run_${VIDEO}.sh, which combines data processing, training, and saving layer results for a video.
Note: Training runs in two stages. It first trains the main model for --num_epochs epochs at batch size --batch_size, and then trains the upsampling module for --num_epochs_upsample epochs at batch size --batch_size_upsample. To skip the upsampling stage, set --num_epochs_upsample 0. The upsampling module operates at a higher resolution, so adjust --batch_size_upsample accordingly.
To save the layer results for a trained model, run:

python test.py --name reflection --dataroot ./datasets/reflection --do_upsampling

The results will be saved to ./results/reflection/test_latest/. The --do_upsampling flag uses the results of the upsampling module; if the upsampling module hasn't been trained (--num_epochs_upsample 0), remove this flag.

To train on your own video, you will have to preprocess the data:
mkdir ./datasets/my_video && cd ./datasets/my_video
mkdir rgb && ffmpeg -i video.mp4 rgb/%04d.png
Resize the video to 256x448 and save the frames in my_video/rgb_256, and resize the video to 512x896 and save the frames in my_video/rgb_512.
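The resizing step above can be scripted with any image tool (e.g. ffmpeg's scale filter). Below is a minimal sketch using Pillow, which is an assumption (it is not listed as a project dependency); the (width, height) argument order is PIL's convention, and the exact 448-pixel width for rgb_256 is inferred from halving the 512x896 target:

```python
from pathlib import Path

from PIL import Image  # assumption: Pillow is installed (pip install Pillow)


def resize_frames(src_dir, dst_dir, size):
    """Resize every PNG frame in src_dir to `size` (width, height) into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(src_dir).glob("*.png")):
        Image.open(frame).resize(size, Image.LANCZOS).save(dst / frame.name)


# Half- and full-resolution frames for training; the dimensions here
# are assumptions based on the 512x896 size mentioned above.
resize_frames("datasets/my_video/rgb", "datasets/my_video/rgb_256", (448, 256))
resize_frames("datasets/my_video/rgb", "datasets/my_video/rgb_512", (896, 512))
```

If the frames were extracted with ffmpeg as above, the same directory layout (one PNG per frame, zero-padded names) carries over unchanged.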
Detect person keypoints in each frame and save them to my_video/keypoints.json. Create my_video/metadata.json following these instructions. If your video has camera motion, compute per-frame homographies and save them to my_video/homographies.txt.
See scripts/run_cartwheel.sh for a training example with camera motion, and see ./datasets/cartwheel/homographies.txt for formatting.

Note: Videos that are suitable for our method have the following attributes:
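If you compute the per-frame homographies yourself, the core estimation step is the standard direct linear transform (DLT). The sketch below is a minimal NumPy-only illustration, assuming you already have matched points between a frame and the reference frame (in practice these would come from feature matching with outlier rejection, e.g. RANSAC in OpenCV); consult ./datasets/cartwheel/homographies.txt for the exact output format the training code expects.

```python
import numpy as np


def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H @ src (DLT).

    src_pts, dst_pts: (N, 2) arrays of matched points, N >= 4,
    in general position (no 4 points collinear).
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on h.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so that H[2, 2] == 1
```

For real footage you would estimate one homography per frame against a common reference frame and write them out in the cartwheel example's format.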
If you use this code for your research, please cite the following paper:

@inproceedings{lu2020,
  title={Layered Neural Rendering for Retiming People in Video},
  author={Lu, Erika and Cole, Forrester and Dekel, Tali and Xie, Weidi and Zisserman, Andrew and Salesin, David and Freeman, William T and Rubinstein, Michael},
  booktitle={SIGGRAPH Asia},
  year={2020}
}
This code is based on pytorch-CycleGAN-and-pix2pix.