# Adversarial attacks on deep reinforcement learning

Project for the course Advanced Deep Learning for Robotics, group tum-adlr-ws20-03.
## Setup

Some requirements only work with Python 3.7, so create and activate a conda environment pinned to that version:

```bash
conda create -n adlr python=3.7
conda activate adlr
```

Install the pip libraries from the requirements file:

```bash
pip install -r requirements.txt
```

Install PyTorch via conda:

```bash
conda install pytorch torchvision torchaudio -c pytorch
```
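To confirm that PyTorch installed correctly into the environment, a quick check:

```bash
python -c "import torch; print(torch.__version__)"
```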
Clone and install [ma-gym](https://github.com/koulanurag/ma-gym), which provides the `PongDuel-v0` environment. At the same level as the project repository, do:

```bash
git clone https://github.com/koulanurag/ma-gym.git
cd ma-gym
pip install -e .
```
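As a sanity check that the environment is available, the snippet below (a minimal sketch following ma-gym's documented usage) creates `PongDuel-v0` and plays one random step; the multi-agent API returns one observation, reward, and done flag per agent:

```python
import gym
import ma_gym  # noqa: F401 -- importing registers the multi-agent envs with gym

env = gym.make('PongDuel-v0')
obs_n = env.reset()                                  # one observation per agent
obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
print(len(obs_n), reward_n, done_n)
env.close()
```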
## Scripts

The Python scripts in the `src/scripts` folder reproduce training and evaluation. Settings for these scripts can be adjusted inside the scripts themselves:
- `src/scripts/train_selfplay`
- `src/scripts/train_selfplay_adversarial_policy`
- `src/scripts/evaluate_model`
- `src/scripts/evaluate_observation_attack`
- `src/scripts/render_match`
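For example (assuming the scripts are plain Python files run from the repository root; adjust the invocation to your layout):

```bash
python src/scripts/train_selfplay.py
```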
## main

- `src/scripts/train`
- `src/scripts/test`
Run `python main.py --env 'PongDuel-v0'` with the built-in arguments to reproduce our training and testing results. Use `--obs_opp both` for feature-based observations or `--obs_img both` for image-based observations of the agents. Use `--evaluate` to test trained agents and `--render` to visualize agents playing Pong. For more details, run `python main.py -h`. Some examples you may want to try out:
```bash
python main.py --env 'PongDuel-v0' --obs_opp both
python main.py --env 'PongDuel-v0' --obs_img both --evaluate --render --fgsm p1 --plot_fgsm
python main.py --env 'PongDuel-v0' --obs_opp both --fgsm_training
```
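The `--fgsm` and `--fgsm_training` options relate to the fast gradient sign method (FGSM). For intuition, the core of an FGSM attack on a policy's observations, in the spirit of Huang et al. (2017), looks roughly like the sketch below; this is a minimal illustration, not the project's exact implementation, and `policy_net`, `obs`, and `epsilon` are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_observation(policy_net: torch.nn.Module,
                     obs: torch.Tensor,
                     epsilon: float = 0.01) -> torch.Tensor:
    """Perturb a batched observation to degrade the policy's preferred action."""
    obs = obs.clone().detach().requires_grad_(True)
    q_values = policy_net(obs)              # shape (1, num_actions)
    best_action = q_values.argmax(dim=1)    # action the clean policy would take
    # Raise the loss of the preferred action, then step along the gradient sign.
    loss = F.cross_entropy(q_values, best_action)
    loss.backward()
    return (obs + epsilon * obs.grad.sign()).detach()
```

Adversarial training (`--fgsm_training`) typically feeds such perturbed observations back into the training loop so the agent learns to act robustly under the attack.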
## Project structure

- `models`: trained models that were created as part of the project
- `src`: Python source code root directory
  - `agents`
  - `attacks`
  - `common`
  - `scripts`
  - `selfplay`
  - `tests`