Deep Q-Learning Networks vs. Policy Gradient Learning in OpenAI Gym's Pong Environment
This is the final project for CSCI 379 (Intro to AI & CogSci, Spring 2017).
Given the raw pixels from a game of Pong, a mechanism for measuring wins and losses,
and a hard-coded opponent agent, can we train an agent that could potentially
beat a human opponent, and what is the best way to do so?
Our approach tests Deep Q-Learning Networks (DQN) against Policy Gradient (PG) learning
to see which algorithm and architecture learns best.
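At a high level, the two methods differ in what they learn from each rollout: DQN regresses its Q-values toward a one-step Bellman target, while PG (REINFORCE) weights the gradient of each action's log-probability by the discounted return that followed it. The sketch below is illustrative only, not our training code; the reset-on-score trick in the return calculation is a Pong-specific convention.

import numpy as np

gamma = 0.99  # discount factor used by both methods

# DQN: regress Q(s, a) toward the one-step Bellman target.
def dqn_target(reward, next_q_values, done):
    # next_q_values: Q-value estimates for every action in the next state
    return reward + (0.0 if done else gamma * np.max(next_q_values))

# PG (REINFORCE): compute the discounted return following each timestep;
# these returns weight the log-probability gradients of the actions taken.
def discounted_returns(rewards):
    returns = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:  # Pong-specific: a point was scored, reset the sum
            running = 0.0
        running = running * gamma + rewards[t]
        returns[t] = running
    return returns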
Our DQN agent can be run from the ./dqn directory with the command:
python main.py --env_name=Pong-v0 --is_train=True --display=True
This will run the program on the Pong environment with Training Mode and Rendering
turned on.
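Both agents interact with Pong through OpenAI Gym. For orientation, here is a minimal sketch of the 2017-era gym loop that any Pong agent, ours included, is built around; the random action is just a placeholder for a learned policy.

import gym

env = gym.make("Pong-v0")
observation = env.reset()               # 210x160x3 RGB frame
done = False
while not done:
    env.render()                        # equivalent to --display=True
    action = env.action_space.sample()  # placeholder: random policy
    observation, reward, done, info = env.step(action)
env.close()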
Our PG agent can be run from the ./pg directory with the command:
python3 yashPong.py
This will run the agent with Training Mode and Rendering turned on by default;
these settings can be changed in the code.
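For reference, Karpathy's original code (from which our PG agent is derived) preprocesses each frame by cropping away the score and borders, downsampling by a factor of two, and binarizing the result into a 6400-element vector; the exact details in yashPong.py may differ. A sketch of that preprocessing:

import numpy as np

def prepro(frame):
    frame = frame[35:195]        # crop to the playing field
    frame = frame[::2, ::2, 0]   # downsample by 2, keep one color channel
    frame[frame == 144] = 0      # erase background (type 1)
    frame[frame == 109] = 0      # erase background (type 2)
    frame[frame != 0] = 1        # paddles and ball become 1
    return frame.astype(np.float64).ravel()  # flatten to 80*80 = 6400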
The code for our DQN approach is modified from an existing implementation by devsisters.
The original repository can be found here.
Likewise, the code for our PG approach is modified from existing code by Dr. Andrej
Karpathy. The original code can be found here.