Author: EloiZ
Repository: git://github.com/EloiZ/awesome_explainable_driving.git
awesome_explainable_driving
A curated list of papers on explainability and interpretability of self-driving models
Most of the references below are organized and discussed in the following survey:
- Explainability of vision-based autonomous driving systems: Review and challenges (submitted to IJCV), Eloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord [arxiv]
Table of Contents
Saliency maps
- Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car (2017, arxiv), Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Lawrence Jackel, Urs Muller [arxiv]
- VisualBackProp: Efficient Visualization of CNNs for Autonomous Driving (2018, ICRA), Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Larry Jackel, Urs Muller, Karol Zieba [arxiv]
- Interpretable learning for self-driving cars by visualizing causal attention (2017, ICCV), Jinkyu Kim, John Canny [arxiv]
- Conditional Affordance Learning for Driving in Urban Environments (2018, CoRL), Axel Sauer, Nikolay Savinov, Andreas Geiger [arxiv]
- Interpretable Self-Attention Temporal Reasoning for Driving Behavior Understanding (2020, ICASSP), Yi-Chieh Liu, Yung-An Hsieh, Min-Hung Chen, Chao-Han Huck Yang, Jesper Tegner, Yi-Chang James Tsai [arxiv]
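The saliency methods above attribute a driving model's output to input pixels. As a minimal illustrative sketch (not the method of any specific paper here), a finite-difference saliency map for a hypothetical toy steering model could look like:

```python
import numpy as np

def toy_steering_model(image):
    # Hypothetical stand-in for a driving network: the "steering angle"
    # is a fixed weighted sum of pixel intensities.
    weights = np.random.default_rng(0).normal(size=image.shape)
    return float((weights * image).sum())

def saliency_map(model, image, eps=1e-3):
    """Finite-difference saliency: |d output / d pixel| for each pixel."""
    base = model(image)
    sal = np.zeros_like(image)
    for idx in np.ndindex(image.shape):
        perturbed = image.copy()
        perturbed[idx] += eps           # nudge one pixel
        sal[idx] = abs(model(perturbed) - base) / eps
    return sal

image = np.ones((4, 4))                 # tiny "camera frame" for illustration
sal = saliency_map(toy_steering_model, image)
print(sal.shape)                        # one importance score per pixel
```

Real systems compute the gradient in one backward pass rather than by perturbation; this sketch only shows what a saliency map is: a per-pixel sensitivity of the output.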
Counterfactual interventions and causal inference
- Explaining How a Deep Neural Network Trained with End-to-End Learning Steers a Car (2017, arxiv), Mariusz Bojarski, Philip Yeres, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Lawrence Jackel, Urs Muller [arxiv]
- Who Make Drivers Stop? Towards Driver-centric Risk Assessment: Risk Object Identification via Causal Inference (2020, IROS), Chengxi Li, Stanley H. Chan, Yi-Ting Chen [arxiv]
- ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst (2019, Robotics: Science and Systems), Mayank Bansal, Alex Krizhevsky, Abhijit Ogale [arxiv]
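Counterfactual intervention methods ask whether a prediction would change if part of the scene were removed or altered. A minimal, hypothetical sketch of such a test (the brake model and object masks are illustrative, not taken from the papers above):

```python
import numpy as np

def toy_brake_model(scene):
    # Hypothetical classifier: brake if any "object" pixel (value > 0)
    # lies in the near half of the scene (rows >= 2).
    return bool((scene[2:] > 0).any())

def causal_objects(scene, object_masks):
    """Return the object masks whose removal flips the model's decision."""
    original = toy_brake_model(scene)
    causes = []
    for mask in object_masks:
        counterfactual = scene * (1 - mask)   # intervene: erase the object
        if toy_brake_model(counterfactual) != original:
            causes.append(mask)
    return causes

scene = np.zeros((4, 4))
scene[3, 1] = 1.0                             # one object in the near half
pedestrian = np.zeros((4, 4)); pedestrian[3, 1] = 1
far_car = np.zeros((4, 4)); far_car[0, 0] = 1
causes = causal_objects(scene, [pedestrian, far_car])
print(len(causes))   # only erasing the near object flips "brake" -> "no brake"
```

The object whose removal flips the decision is reported as the risk object, which is the driver-centric causal-inference idea behind work like the risk object identification paper above.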
Representation
- DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars (2018, ICSE), Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray [arxiv]
Evaluation
- ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst (2019, Robotics: Science and Systems), Mayank Bansal, Alex Krizhevsky, Abhijit Ogale [arxiv]
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention (2020, IROS), Simon Hecker, Dengxin Dai, Alexander Liniger, Luc Van Gool [arxiv]
- DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars (2018, ICSE), Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray [arxiv]
Attention maps
- Interpretable learning for self-driving cars by visualizing causal attention (2017, ICCV), Jinkyu Kim, John Canny [arxiv]
- Deep Object-Centric Policies for Autonomous Driving (2019, ICRA), Dequan Wang, Coline Devin, Qi-Zhi Cai, Fisher Yu, Trevor Darrell [arxiv]
- Attentional Bottleneck: Towards an Interpretable Deep Driving Network (2020, arxiv), Jinkyu Kim, Mayank Bansal [arxiv]
- Learning Accurate and Human-Like Driving using Semantic Maps and Attention (2020, IROS), Simon Hecker, Dengxin Dai, Alexander Liniger, Luc Van Gool [arxiv]
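The attention-based models above weight spatial features with a learned distribution that can be read out as an interpretable heat map. A minimal sketch of softmax spatial attention pooling (shapes, the query vector, and the 4x4 grid are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(features, query):
    """Soft attention over an H*W grid of feature vectors.

    features: (H*W, D) spatial features from a CNN backbone
    query:    (D,) learned query vector
    Returns the pooled feature and the attention map (the "explanation").
    """
    scores = features @ query              # (H*W,) relevance scores
    alpha = softmax(scores)                # attention weights, sum to 1
    pooled = alpha @ features              # (D,) weighted feature summary
    return pooled, alpha.reshape(4, 4)     # reshape to an H x W heat map

rng = np.random.default_rng(0)
features = rng.normal(size=(16, 8))        # 4x4 grid of 8-dim features
query = rng.normal(size=8)
pooled, attn_map = attention_pool(features, query)
print(attn_map.shape, round(attn_map.sum(), 6))  # (4, 4) 1.0
```

Because the weights are a probability distribution over image regions, the heat map directly shows which regions the controller attended to, which is what makes these models self-explanatory.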
Output interpretability
Natural language explanations
Datasets