CompenNet++: End-to-end Full Projector Compensation (ICCV’19)
PyTorch implementation of CompenNet++. Also see the journal version.
For more info please refer to our ICCV'19 paper, high-res supplementary material (~180M) and the CompenNet++ benchmark dataset (~11G).
Clone this repo:

```
git clone https://github.com/BingyaoHuang/CompenNet-plusplus
cd CompenNet-plusplus
```
Install required packages by typing

```
pip install -r requirements.txt
```
Download the CompenNet++ benchmark dataset and extract it to `data/`.
Start visdom by typing

```
visdom
```

Once visdom is successfully started, visit `http://localhost:8097` (train locally) or `http://serverhost:8097` (train remotely).
Open `main.py` and set which GPUs to use. An example is shown below, where we use GPUs 0, 2 and 3 to train the model.

```python
# expose only GPUs 0, 2 and 3 to PyTorch
os.environ['CUDA_VISIBLE_DEVICES'] = '0, 2, 3'
# the visible GPUs are re-indexed from 0, so device_ids 0, 1, 2 refer to physical GPUs 0, 2, 3
device_ids = [0, 1, 2]
```
Run `main.py` to start training and testing:

```
cd src/python
python main.py
```
The training and testing results are saved to `log/%Y-%m-%d_%H_%M_%S.txt` after training.
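For reference, the placeholders in the log file name are strftime fields; a minimal sketch of how such a timestamped name can be generated (the exact code in `main.py` may differ):

```python
import time

# assumption: the log name is the run's start time, e.g. log/2019-10-27_14_30_05.txt
log_name = time.strftime('log/%Y-%m-%d_%H_%M_%S.txt')
print(log_name)
```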
To apply CompenNet++ to your own setup, first adjust the camera and projector such that the brightest projected input image (plain white `data/ref/img_0125.png`) slightly overexposes the camera-captured image. Similarly, the darkest projected input image (plain black `data/ref/img_0001.png`) slightly underexposes the camera-captured image. This allows the projector dynamic range to cover the full camera dynamic range.
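A quick way to sanity-check the exposure is to measure the fraction of saturated pixels in the two captures; a minimal sketch, assuming 8-bit grayscale captures and placeholder file names (the acceptable fractions are setup-dependent and not prescribed by the paper):

```python
import cv2
import numpy as np

def saturation_fractions(path):
    # fraction of fully dark (0) and fully bright (255) pixels in an 8-bit capture
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return (img == 0).mean(), (img == 255).mean()

# captured plain white: a small fraction of pixels should clip at 255
_, over = saturation_fractions('cam_white.png')   # placeholder path
# captured plain black: a small fraction of pixels should clip at 0
under, _ = saturation_fractions('cam_black.png')  # placeholder path
print(f'overexposed: {over:.2%}, underexposed: {under:.2%}')
```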
Create a setup data directory `data/light[n]/pos[m]/[surface]` (we refer to it as `data_root`), where `[n]` and `[m]` are the lighting and pose setup indices, respectively, and `[surface]` is the projection surface's texture name.
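For example, a small sketch assembling `data_root` for one setup (the index values and surface name are illustrative, not from the dataset):

```python
import os

n, m, surface = 1, 2, 'stripes'  # illustrative lighting index, pose index and surface name
data_root = os.path.join('data', f'light{n}', f'pos{m}', surface)
os.makedirs(os.path.join(data_root, 'cam', 'raw', 'ref'), exist_ok=True)
print(data_root)  # data/light1/pos2/stripes
```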
Project and capture the plain black image `data/ref/img_0001.png` and the plain white image `data/ref/img_0125.png` for projector FOV mask detection later. Then, save the captured images to `data_root/cam/raw/ref/img_0001.png` (`img_0125.png`).
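The FOV mask itself can be obtained by thresholding the difference between the captured white and black images; a minimal OpenCV sketch (not necessarily the repo's exact implementation, and `data_root` is a placeholder):

```python
import os
import cv2

data_root = 'data/light1/pos1/stripes'  # placeholder setup directory
white = cv2.imread(os.path.join(data_root, 'cam/raw/ref/img_0125.png'), cv2.IMREAD_GRAYSCALE)
black = cv2.imread(os.path.join(data_root, 'cam/raw/ref/img_0001.png'), cv2.IMREAD_GRAYSCALE)

# pixels inside the projector FOV are much brighter under plain white
diff = cv2.absdiff(white, black)
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# close small holes caused by dark surface texture
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```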
Project and capture the gray image `data/ref/img_gray.png`. Then, save the captured image to `data_root/cam/raw/ref/img_0126.png`.
Project and capture the images in `data/train` and `data/test`. Then, save the captured images to `data_root/cam/raw/train` and `data_root/cam/raw/test`, respectively.
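Assuming the captured images keep the projected images' file names, a quick consistency check like the sketch below can catch missing captures (both paths are placeholders):

```python
import os

proj_dir = 'data/train'                              # projected input images
cam_dir = 'data/light1/pos1/stripes/cam/raw/train'   # captured counterparts (placeholder data_root)

missing = [f for f in sorted(os.listdir(proj_dir))
           if not os.path.exists(os.path.join(cam_dir, f))]
print(f'{len(missing)} captured training images missing')
```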
Update `loadData` in `trainNetwork.py` accordingly. Then, affine transform the images in `data/test` to the optimal displayable area and save the transformed images to `data_root/cam/raw/desire/test`. Refer to model testing below.
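A minimal sketch of such an affine transform with OpenCV (the destination points defining the optimal displayable area are setup-specific placeholders, as are the file names):

```python
import cv2
import numpy as np

img = cv2.imread('test_input.png')  # an image from data/test (placeholder path)
h, w = img.shape[:2]

# map the full image corners to the optimal displayable area (placeholder corners)
src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
dst = np.float32([[80, 60], [w - 81, 60], [80, h - 61]])

A = cv2.getAffineTransform(src, dst)
warped = cv2.warpAffine(img, A, (w, h))
cv2.imwrite('desire_test.png', warped)  # save under data_root/cam/raw/desire/test
```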
Note that other than `ref/img_0001.png`, `ref/img_0125.png` and `ref/img_gray.png`, the rest of the plain color images are used by the original TPS w/ SL method; we don't need them to train CompenNet++. Similarly, `data_root/cam/raw/sl` and `data_root/cam/warpSL` are only used by the two-step methods.
Please cite these papers in your publications if this code helps your research:

```
@inproceedings{huang2019compennet++,
    author = {Huang, Bingyao and Ling, Haibin},
    title = {CompenNet++: End-to-end Full Projector Compensation},
    booktitle = {IEEE International Conference on Computer Vision (ICCV)},
    month = {October},
    year = {2019}
}

@inproceedings{huang2019compennet,
    author = {Huang, Bingyao and Ling, Haibin},
    title = {End-To-End Projector Photometric Compensation},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}
}
```
The PyTorch implementation of SSIM loss is modified from Po-Hsun-Su/pytorch-ssim.
The PyTorch implementation of TPS warping is modified from cheind/py-thin-plate-spline.
We thank the anonymous reviewers for valuable and inspiring comments and suggestions.
We thank the authors of the colorful textured sampling images.
This software is freely available for non-profit, non-commercial use, and may be redistributed under the conditions in the license.