PyTorch implementation of Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks
This is an unofficial implementation of the paper Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks.
I implemented it in PyTorch to reproduce results similar to those reported in the paper. You can find the checkpoint of the pretrained model here.
This code is written in Python; dependencies include:
mkdir -p data squad
wget http://nlp.stanford.edu/data/glove.840B.300d.zip -O ./data/glove.840B.300d.zip
unzip ./data/glove.840B.300d.zip -d ./data
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json -O ./squad/train-v1.1.json
wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json -O ./squad/dev-v1.1.json
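The downloaded SQuAD v1.1 files are nested JSON. As a rough sketch of the structure the preprocessing step has to walk (the field names below follow the public SQuAD v1.1 schema; the tiny inline sample and the helper `iter_examples` are illustrative, not code from this repository):

```python
import json

# A tiny inline sample in the SQuAD v1.1 schema; the real files are much larger.
sample = {
    "data": [
        {
            "title": "Example",
            "paragraphs": [
                {
                    "context": "PyTorch is an open source machine learning library.",
                    "qas": [
                        {
                            "id": "q1",
                            "question": "What kind of library is PyTorch?",
                            "answers": [
                                {"text": "machine learning", "answer_start": 26}
                            ],
                        }
                    ],
                }
            ],
        }
    ]
}

def iter_examples(squad):
    """Yield (context, question, answer_text, answer_start) tuples."""
    for article in squad["data"]:
        for para in article["paragraphs"]:
            for qa in para["qas"]:
                for ans in qa["answers"]:
                    yield (para["context"], qa["question"],
                           ans["text"], ans["answer_start"])

examples = list(iter_examples(sample))
context, question, answer, start = examples[0]
# answer_start is a character offset into the context
print(context[start:start + len(answer)])  # "machine learning"
```

For the real files, replace the inline dict with `json.load(open("./squad/train-v1.1.json"))`.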
cd data
python process_data.py
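Preprocessing is expected to build a vocabulary and embedding matrix from the downloaded GloVe file. As an illustration of the GloVe text format (each line is a token followed by 300 space-separated floats), here is a minimal parsing sketch; this is not the actual code in process_data.py:

```python
import numpy as np

def load_glove(lines, dim=300):
    """Parse GloVe-format lines ("token v1 v2 ... v_dim") into a dict."""
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        token, values = parts[0], parts[1:]
        if len(values) != dim:  # skip malformed lines
            continue
        vectors[token] = np.asarray(values, dtype=np.float32)
    return vectors

# Toy 3-dimensional example; glove.840B.300d uses dim=300.
toy_lines = ["the 0.1 0.2 0.3", "question 0.4 0.5 0.6"]
emb = load_glove(toy_lines, dim=3)
print(emb["question"])  # vector for "question"
```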
You might need to change the configuration in config.py.
If you want to train, set train = True and set the GPU device in config.py.
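The exact fields of config.py depend on the repository, but a module-level flag pattern like the following is typical; the names besides `train` are illustrative assumptions, so check the real file:

```python
# config.py -- illustrative sketch; field names other than `train` are assumptions.
train = True          # True to train, False to run inference from a checkpoint
device = "cuda:0"     # GPU device to use; set "cpu" to run without a GPU
batch_size = 32       # adjust to fit your GPU memory
```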
Note that the evaluation scripts in qgevalcap require Python 2.
cd qgevalcap
python2 eval.py --out_file prediction_file --src_file src_file --tgt_file target_file
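The evaluation compares generated questions against reference questions with n-gram overlap metrics such as BLEU. As a minimal sketch of the core idea, here is a clipped n-gram precision function; this is an illustration only, not the qgevalcap implementation, which also applies a brevity penalty and averages over multiple n-gram orders:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision of a candidate string against one reference."""
    cand_tokens = candidate.split()
    ref_tokens = reference.split()
    cand_ngrams = Counter(tuple(cand_tokens[i:i + n])
                          for i in range(len(cand_tokens) - n + 1))
    ref_ngrams = Counter(tuple(ref_tokens[i:i + n])
                         for i in range(len(ref_tokens) - n + 1))
    # Each candidate n-gram is credited at most as often as it appears in the reference.
    overlap = sum(min(count, ref_ngrams[gram]) for gram, count in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

pred = "what is the capital of france"
gold = "what is the capital city of france"
print(ngram_precision(pred, gold, n=1))  # 1.0 -- every predicted unigram appears
print(ngram_precision(pred, gold, n=2))  # 0.8 -- 4 of 5 predicted bigrams appear
```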