UNION: An Unreferenced Metric for Evaluating Open-ended Story Generation
Automatic evaluation metric described in the paper UNION: An UNreferenced MetrIc for Evaluating Open-eNded Story Generation (EMNLP 2020). Please refer to the Paper List for more information about Open-eNded Language Generation (ONLG) tasks; hopefully it will help you learn more about this field.
The code is written with the TensorFlow library. To use the program, the following prerequisites need to be installed. We train UNION on the following platform:
Execute the following commands:

```bash
cd ./Data
python3 ./get_vocab.py your_mode
python3 ./gen_train_data.py your_mode
```

`your_mode` is `roc` for the ROCStories corpus or `wp` for the WritingPrompts dataset. Then the vocabulary summary, with each entry's POS tag and mention frequency, can be found under `ROCStories/ini_data/entity_vocab.txt` or `WritingPrompts/ini_data/entity_vocab.txt`, and the constructed negative samples with the corresponding human-written stories can be found under `ROCStories/train_data` or `WritingPrompts/train_data`.
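`gen_train_data.py` is the actual implementation of the negative-sample construction. As a rough illustration only, the sketch below shows simplified versions of two of the paper's perturbations, negation alteration and sentence reordering; the helper names are hypothetical, not the repository's API:

```python
# Illustrative sketch only -- gen_train_data.py is the real implementation.
# These helpers are hypothetical simplifications of two perturbations
# used to construct negative samples: negation alteration and reordering.
import random

def load_negation_words(path="./Data/negation.txt"):
    # negation.txt lists one manually collected negation word per line.
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def alter_negation(tokens, negation_words):
    # Flip the logic of a sentence: drop an existing negation word,
    # or insert one if none is present.
    if any(t in negation_words for t in tokens):
        return [t for t in tokens if t not in negation_words]
    pos = random.randrange(len(tokens) + 1)
    return tokens[:pos] + [random.choice(sorted(negation_words))] + tokens[pos:]

def reorder_sentences(sentences):
    # Break discourse coherence by shuffling the sentence order.
    shuffled = sentences[:]
    while len(sentences) > 1 and shuffled == sentences:
        random.shuffle(shuffled)
    return shuffled

print(reorder_sentences(["tom packed his bag .", "he left for the airport ."]))
print(" ".join(alter_negation("he was ready".split(), {"not", "never"})))
```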
Then, to train UNION, execute the following command:

```bash
python3 ./run_union.py --data_dir your_data_dir \
    --output_dir ./model/union \
    --task_name train \
    --init_checkpoint ./model/uncased_L-12_H-768_A-12/bert_model.ckpt
```

`your_data_dir` is `./Data/ROCStories` or `./Data/WritingPrompts`, and `./model/uncased_L-12_H-768_A-12/bert_model.ckpt` is the pre-trained BERT-Base (uncased) checkpoint.
To evaluate stories with a trained model, execute the following command:

```bash
python3 ./run_union.py --data_dir your_data_dir \
    --output_dir ./model/output \
    --task_name pred \
    --init_checkpoint your_model_name
```

`your_data_dir` is `./Data/ROCStories` or `./Data/WritingPrompts`, and `your_model_name` is `./model/union_roc/union_roc` or `./model/union_wp/union_wp`. If you want to evaluate your custom texts, you only need to change your file format into ours (a conversion sketch follows the checkpoint table below).
The fine-tuned checkpoints can be downloaded from the following links:

| Dataset | Fine-tuned Model |
| --- | --- |
| ROCStories | THUcloud; GoogleDrive |
| WritingPrompts | THUcloud; GoogleDrive |
The UNION scores of the stories under `your_data_dir/ant_data` will be found under the output directory `./model/output`.
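For custom texts, the expected input is the annotation file format described in the Data Instruction below (`Story ID ||| Story ||| Seven Annotated Scores`). The following is a hypothetical sketch of the conversion; the `./Data/Custom` path and the placeholder scores are assumptions for when you only need UNION scores and have no human annotation:

```python
# Hypothetical sketch: convert custom stories into the repository's
# "Story ID ||| Story ||| Seven Annotated Scores" annotation format.
# The ./Data/Custom path and the dummy scores are assumptions, not
# part of the repository.
import os

stories = [
    "tom packed his bag . he left for the airport early .",
    "the knight drew his sword . the dragon was already gone .",
]

os.makedirs("./Data/Custom/ant_data", exist_ok=True)
with open("./Data/Custom/ant_data/ant_data.txt", "w", encoding="utf-8") as f:
    for i, story in enumerate(stories):
        dummy_scores = " ".join(["1"] * 7)  # placeholder annotation scores
        f.write(f"{i} ||| {story} ||| {dummy_scores}\n")
```

If the directory mirrors the layout shown in the data tree below, pointing `--data_dir` at `./Data/Custom` should then work for prediction.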
To compute the correlation with human judgments, execute the following command:

```bash
python3 ./correlation.py your_mode
```

Then the correlation between the human judgments under `your_data_dir/ant_data` and the metric scores under `your_data_dir/metric_output` will be output. The figures under `./figure` show the relation between metric scores and human judgments for the ROCStories corpus.
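As a rough illustration of what `correlation.py` computes (not the repository's implementation), the sketch below correlates one metric's scores with averaged human judgments; it assumes the metric file holds one score per line, aligned with the annotated stories:

```python
# Minimal sketch, not the repository's implementation: correlate one
# metric's scores with averaged human judgments. Assumes union.txt holds
# one score per line, aligned with the stories in ant_data.txt.
from scipy.stats import kendalltau, pearsonr, spearmanr

def load_metric_scores(path):
    with open(path, encoding="utf-8") as f:
        return [float(line.strip()) for line in f if line.strip()]

def load_human_scores(path):
    # Average the seven annotated scores per story (see the format below).
    scores = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            annotations = line.strip().split(" ||| ")[-1].split()
            scores.append(sum(map(float, annotations)) / len(annotations))
    return scores

human = load_human_scores("./Data/ROCStories/ant_data/ant_data.txt")
metric = load_metric_scores("./Data/ROCStories/metric_output/union.txt")
for name, fn in [("Pearson", pearsonr), ("Spearman", spearmanr), ("Kendall", kendalltau)]:
    value, _ = fn(human, metric)
    print(f"{name}: {value:.3f}")
```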
./Data
├── `negation.txt`             # manually constructed negation word vocabulary
├── `conceptnet_antonym.txt`   # triples with antonym relations extracted from ConceptNet
├── `conceptnet_entity.csv`    # entities acquired from ConceptNet
├── `ROCStories`
│   ├── `ant_data`             # sampled stories and corresponding human annotation
│   │   ├── `ant_data.txt`     # only the binary annotation: reasonable (1) or unreasonable (0)
│   │   ├── `ant_data_all.txt` # annotation for specific error types: reasonable (0), repeated plots (1), bad coherence (2), conflicting logic (3), chaotic scenes (4), and others (5)
│   │   ├── `reference.txt`    # human-written stories with the same leading context as the annotated stories
│   │   ├── `reference_ipt.txt`
│   │   └── `reference_opt.txt`
│   ├── `ini_data`             # original dataset for training/validation/testing
│   │   ├── `train.txt`
│   │   ├── `dev.txt`
│   │   ├── `test.txt`
│   │   └── `entity_vocab.txt` # generated by `get_vocab.py`: all entities with their tagged POS and mention frequency in the dataset
│   ├── `train_data`           # negative samples and corresponding human-written stories for training, constructed by `gen_train_data.py`
│   │   ├── `train_human.txt`
│   │   ├── `train_negative.txt`
│   │   ├── `dev_human.txt`
│   │   ├── `dev_negative.txt`
│   │   ├── `test_human.txt`
│   │   └── `test_negative.txt`
│   └── `metric_output`        # scores of different metrics, which can be used to replicate the correlations in Table 5 of the paper
│       ├── `bleu.txt`
│       ├── `bleurt.txt`
│       ├── `ppl.txt`          # flip the sign of Perplexity to obtain the result for *minus* Perplexity
│       ├── `union.txt`
│       ├── `union_recon.txt`  # the ablated model without the reconstruction task
│       └── ...
└── `WritingPrompts`
    └── ...
`ant_data.txt` and `ant_data_all.txt` are formatted as `Story ID ||| Story ||| Seven Annotated Scores`. `ant_data_all.txt` is only available for the ROCStories corpus; for the WritingPrompts dataset it is the same as `ant_data.txt`.
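For instance, a short parsing sketch for these files, assuming fields are separated by ` ||| ` and the seven scores are whitespace-separated (the exact separators are an assumption here, so check the actual files):

```python
# A short sketch for reading the annotation files, assuming fields are
# separated by " ||| " and the seven scores are whitespace-separated.
from collections import Counter

def read_ant_data(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            story_id, story, raw_scores = line.strip().split(" ||| ")
            yield story_id, story, [int(s) for s in raw_scores.split()]

# For ant_data_all.txt, tally how often each error type (0-5) was chosen.
label_counts = Counter()
for _, _, scores in read_ant_data("./Data/ROCStories/ant_data/ant_data_all.txt"):
    label_counts.update(scores)
print(label_counts)
```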
Please kindly cite our paper if the paper and the code are helpful.
```bibtex
@misc{guan2020union,
    title={UNION: An Unreferenced Metric for Evaluating Open-ended Story Generation},
    author={Jian Guan and Minlie Huang},
    year={2020},
    eprint={2009.07602},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```