A text classification model based on textGCN and the WikiData knowledge graph
This repository extends Ken Gu’s PyTorch implementation of “Graph Convolutional Networks for Text Classification” (AAAI 2019) by introducing doc2doc edges from the WikiData knowledge graph into the word-document graph.
$ python main.py
The following flags are supported:

- `--show_eval`: Prints all evaluation metrics to the console
- `--plot`: Plots textKGCN embeddings, training curves, and recent model performance
- `--word-window-size`: Specifies the window size used for the model (default: 15)
- `--use_edge_weights`: Defines whether edge weights should be used
- `--method`: Selects the doc2doc edge weighting method (`count`, `idf`, or `idf_wiki`)
- `--threshold`: Filter threshold for doc2doc edges (default: 2)
- `--no_wiki`: Disables doc2doc edges to run the base textGCN model
- `--debug`: Activates debug mode (changes the number of epochs)
- `--version`: Specifies the version of filtered relations
- `--drop_out`: Performs random drop-out on the doc2doc edges

Other configuration options can be set in `config.py`.
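For example, the following call trains the full model with `idf`-weighted doc2doc edges and prints the evaluation metrics (the flag combination is illustrative):

```sh
$ python main.py --show_eval --method idf --threshold 2 --word-window-size 15
```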
The code runs with Python 3.6.
All dependencies can be installed automatically with the following command (for CPU usage only; macOS and Linux only):
sh install_dependencies.sh
The script assumes that `python3.6` and `pip` are installed. It is recommended to install all dependencies into a separate Python environment (a minimal example follows the dependency list). These dependencies will be installed:
torch==1.6.0
torchvision==0.7.0
torch-cluster==1.5.7
torch-scatter==2.0.5
torch-sparse==0.6.7
torch-spline-conv==1.2.0
torch-geometric==1.6.1
klepto==0.1.9
sklearn, matplotlib, seaborn, pytz, pandas, spacy, nltk, lxml
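To set up such a separate environment, a plain `venv` works; a minimal sketch (the environment name `env` is arbitrary):

```sh
python3.6 -m venv env
source env/bin/activate
sh install_dependencies.sh
```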
The spaCy `en` model will be installed automatically when you run the bash script. Otherwise, it can be installed manually with `python -m spacy download en`.
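To verify that the model is available, a quick check like the following can help (a minimal sketch using the spaCy 2.x shortcut name `en`, which matches the Python 3.6 setup):

```python
import spacy

try:
    # spacy.load("en") resolves the shortcut link created by `spacy download en`.
    nlp = spacy.load("en")
except OSError:
    raise SystemExit("spaCy 'en' model missing; run: python -m spacy download en")

print(nlp("A quick smoke test."))  # prints the processed Doc
```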
These datasets are already included and pre-processed:

- `r8` and `r52`
- `ohsumed`
- `20NG`
- `MR`

The `r8_small` dataset is a small subset of `r8` and is only intended for debugging purposes.
The following steps must be performed to include a custom dataset:

1. Add the dataset name (`[dataset_name]`) to the dataset section in `config.py`.
2. Create `[dataset_name]_labels.txt` and `[dataset_name]_sentences.txt` in the `_data/corpus/[dataset_name]` directory. Each line of these files should correspond to one document and its label, respectively (see the example after this list).
3. Run `python prep_data.py` to generate the `[dataset_name]_sentences_clean.txt` and `[dataset_name]_vocab.txt` files.
4. Run `python prep_graph.py` to start the knowledge graph mapping process (may take several hours). This step maps the dataset to the WikiData knowledge graph, which is required to build the doc2doc edges.
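For a hypothetical dataset named `example`, the two input files could look like this (contents are made up for illustration):

```
_data/corpus/example/example_sentences.txt
    the team won the championship game last night
    the central bank raised interest rates again

_data/corpus/example/example_labels.txt
    sports
    finance
```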
In order to run the model, the dataset must first be mapped to the WikiData knowledge graph. When you run `python prep_graph.py`, the doc2doc edges are generated by analyzing all relations between all documents, and all results are saved automatically.
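The `--method` options can be pictured roughly as follows. This is a minimal sketch, not the repository's implementation: it assumes each document has already been mapped to a list of WikiData relation IDs, and all names here are illustrative.

```python
import math
from collections import Counter
from itertools import combinations

def doc2doc_edges(doc_relations, method="count", threshold=2):
    """doc_relations: one list of WikiData relation IDs per document."""
    n_docs = len(doc_relations)
    # Document frequency of each relation (how many documents mention it).
    df = Counter(r for rels in doc_relations for r in set(rels))
    edges = {}
    for i, j in combinations(range(n_docs), 2):
        shared = set(doc_relations[i]) & set(doc_relations[j])
        if method == "count":
            weight = len(shared)  # raw number of shared relations
        else:  # "idf"-style weighting: rare relations count for more
            weight = sum(math.log(n_docs / df[r]) for r in shared)
        if weight >= threshold:  # mirrors the --threshold filter
            edges[(i, j)] = weight
    return edges

# Tiny example: three documents described by WikiData relation IDs.
docs = [["P31", "P17"], ["P31", "P106"], ["P17", "P31"]]
print(doc2doc_edges(docs, method="count", threshold=1))
```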
Models and model information are stored in the `_logs` directory of each dataset directory. All training metrics are stored in `_data/results_log` as CSV files.
To see average metrics split by the model parameters, run `python analyze_results.py --dataset [dataset_name]`.
Graph Convolutional Networks for Text Classification.
Liang Yao, Chengsheng Mao, Yuan Luo.
AAAI, 2019. (Paper)