# Canonical Appearance Transformations
Code to accompany our paper “How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change”.
Rename `ethl1/2` to `ethl1/2_static`. Update the local paths in `tools/make_ethl_real_sync.py` and run `python3 tools/make_ethl_real_sync.py` to generate a synchronized copy of the real sequences.
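For intuition, synchronizing two sequence streams is typically done by nearest-timestamp association. The sketch below illustrates that general technique only; the function name, threshold, and timestamps are made up and are not the actual contents of `tools/make_ethl_real_sync.py`:

```python
import bisect

def associate(times_a, times_b, max_dt=0.02):
    """Match each timestamp in times_a to the nearest timestamp in times_b.

    Returns (i, j) index pairs whose time difference is at most max_dt.
    Both lists are assumed sorted, as in typical sequence index files.
    """
    pairs = []
    for i, t in enumerate(times_a):
        j = bisect.bisect_left(times_b, t)
        # Candidates: the neighbour on each side of the insertion point.
        best = min(
            (j2 for j2 in (j - 1, j) if 0 <= j2 < len(times_b)),
            key=lambda j2: abs(times_b[j2] - t),
        )
        if abs(times_b[best] - t) <= max_dt:
            pairs.append((i, best))
    return pairs

# Toy example: one stream at ~30 Hz, the other slightly offset.
rgb_times = [0.000, 0.033, 0.066, 0.100]
depth_times = [0.002, 0.034, 0.070, 0.099, 0.150]
print(associate(rgb_times, depth_times))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```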
Update the local paths in `run_cat_ethl/vkitti.py` and run `python3 run_cat_ethl/vkitti.py` to start training.
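Loosely, training teaches a model to map images taken under varying illumination back to a canonical appearance. The toy sketch below fits a single global gain/bias correction by gradient descent; the paper's model is a deep encoder-decoder, and all numbers here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
canonical = rng.uniform(0.0, 1.0, size=(8, 8))   # target appearance
observed = 0.5 * canonical + 0.2                  # same scene, darker lighting

# Learn an affine correction g*x + b that restores the canonical image,
# by gradient descent on the mean squared reconstruction error.
g, b = 1.0, 0.0
lr = 0.5
for _ in range(2000):
    pred = g * observed + b
    err = pred - canonical
    g -= lr * 2.0 * np.mean(err * observed)
    b -= lr * 2.0 * np.mean(err)

print(round(g, 2), round(b, 2))  # → 2.0 -0.4 (inverse of the 0.5x + 0.2 change)
```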
Run `tensorboard --port [port] --logdir [path]` to start the visualization server, where `[port]` should be replaced by a numeric value (e.g., 60006) and `[path]` by your local results directory. Navigate to `localhost:[port]` and watch the action.
Update the local paths in `make_localization_data.py` and run `python3 make_localization_data.py [dataset]` to compile the model outputs into a `localization_data` directory.
Update the local paths in `run_localization_[dataset].py` and run `python3 run_localization_[dataset].py [rgb,cat]` to compute VO and localization results using either the original RGB or CAT-transformed images.
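As a toy illustration of why the RGB vs. CAT comparison is interesting: direct methods score alignment by photometric error, which an illumination change corrupts even at the correct alignment. In the 1-D numpy sketch below, simple per-image normalization stands in for the learned canonical transformation; the data is synthetic and this is not the repo's pipeline:

```python
import numpy as np

def normalize(x):
    """Zero-mean, unit-variance normalization: a crude stand-in for mapping
    an image to a canonical appearance (the paper's CAT is a learned,
    much richer transformation)."""
    return (x - x.mean()) / x.std()

rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 1.0, size=100)   # image row under canonical lighting
live = 0.4 * reference + 0.3                  # same scene, different illumination

# Photometric (RMS) error at the TRUE alignment:
raw_err = np.sqrt(np.mean((live - reference) ** 2))
cat_err = np.sqrt(np.mean((normalize(live) - normalize(reference)) ** 2))

# The raw error is large (~0.17) purely because of lighting; after the
# canonical transform it is essentially zero, so it reflects geometry only.
print(raw_err, cat_err)
```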
Localization errors can then be computed with the `compute_localization_errors.py` script, which generates CSV files and several plots. Update the local paths and run `python3 compute_localization_errors.py [dataset]`.
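The error computation is of this general kind: compare estimated poses against ground truth, summarize, and dump per-pose rows to CSV. The sketch below is hypothetical (made-up 2-D trajectories and column names, not the actual script's format):

```python
import csv
import io
import numpy as np

# Hypothetical 2-D trajectories: estimated vs. ground-truth positions.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1], [3.0, 0.1]])
est = np.array([[0.0, 0.0], [1.1, 0.0], [2.0, 0.3], [2.9, 0.1]])

# Per-pose translational error and its RMS, the kind of summary
# statistic a localization evaluation typically reports.
errors = np.linalg.norm(est - gt, axis=1)
rmse = float(np.sqrt(np.mean(errors ** 2)))

# One row per pose, as a CSV output might contain.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["pose", "trans_err_m"])
for i, e in enumerate(errors):
    writer.writerow([i, f"{e:.3f}"])

print(f"RMSE: {rmse:.3f} m")  # prints "RMSE: 0.122 m"
```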
If you use this code in your research, please cite:
```
@article{2018_Clement_Learning,
  author = {Lee Clement and Jonathan Kelly},
  journal = {{IEEE} Robotics and Automation Letters},
  link = {https://arxiv.org/abs/1709.03009},
  title = {How to Train a {CAT}: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change},
  year = {2018}
}
```