Project author: saminens

Project description:
Conversational Question Answering on Clinical Text
Primary language: Python
Project address: git://github.com/saminens/Medi-CoQA.git
Created: 2020-06-18T08:52:07Z
Project community: https://github.com/saminens/Medi-CoQA



Medi-CoQA

Conversational Question Answering on Clinical Text

Docker

You can find the Dockerfile in the repository. To reproduce the environment, build a Docker image locally and run the code inside the container.

Note: This is a GPU based image.

```bash
## build the image
docker build -t transformers-coqa .
## run the image
docker run -it transformers-coqa
## run the code
cd transformer-coqa && \
. run.sh
```

Non-Docker Alternative

Install packages:

`pip install -r requirements.txt`

Install the language model for spaCy:

`python3 -m spacy download en`
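
As a quick sanity check that the model is installed (note: newer spaCy releases replace the `en` shortcut with `en_core_web_sm`), something like the following should run without errors:

```python
# Quick check that the spaCy English model loads.
# Older spaCy releases accept the "en" shortcut; newer ones need "en_core_web_sm".
import spacy

nlp = spacy.load("en")  # or spacy.load("en_core_web_sm") on newer spaCy
doc = nlp("Patient denies chest pain or shortness of breath.")
print([token.text for token in doc])
```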

Data

Download the dataset from [CoQA](https://stanfordnlp.github.io/coqa/):

  • `coqa-train-v1.0.json` for training
  • `coqa-dev-v1.0.json` for evaluation
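
If you want to peek at the data before training, the snippet below prints the first passage and its question/answer turns. It assumes the standard CoQA v1.0 layout (a top-level `data` list whose entries contain `story`, `questions`, and `answers`); check the downloaded file if the field names differ.

```python
# Inspect the CoQA training file (assumes the standard CoQA v1.0 schema).
import json

with open("data/coqa-train-v1.0.json") as f:
    coqa = json.load(f)

passage = coqa["data"][0]
print(passage["story"][:200])  # start of the context passage
for q, a in zip(passage["questions"], passage["answers"]):
    print(q["turn_id"], q["input_text"], "->", a["input_text"])
```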
Run-train

  1. Place `coqa-train-v1.0.json` and `coqa-dev-v1.0.json` in the same folder, e.g. `data/`
  2. Run the code using the command `./run.sh` in a terminal
  3. Or run `run_coqa.py` directly:

```bash
python3 run_coqa.py --model_type albert \
    --model_name_or_path albert-base-v2 \
    --do_train \
    --do_eval \
    --data_dir data/ \
    --train_file coqa-train-v1.0.json \
    --predict_file coqa-dev-v1.0.json \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir albert-output/ \
    --do_lower_case \
    --per_gpu_train_batch_size 8 \
    --max_grad_norm -1 \
    --weight_decay 0.01 \
    --threads 8 \
    --fp16
```
Shell scripts to reproduce the models, e.g. ClinicalBERT:

```bash
# ClinicalBERT
. ./run_clinicalbert.sh
# ALBERT
. ./run.sh
```

The estimated training and evaluation time for the albert-base and ClinicalBERT (`run_clinicalbert`) models on the CoQA task is around 3 hours on an AWS g4dn.2xlarge instance.

Run-evaluate

After you get the prediction files, you can evaluate on your test set separately.
The evaluation script is provided by CoQA.
To evaluate, run:

```bash
python3 evaluate.py --data-file <path_to_dev-v1.0.json> --pred-file <path_to_predictions>
```
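
For intuition, the EM/F1 numbers reported below follow the usual SQuAD/CoQA-style token-overlap metric. The sketch below shows the core token-level F1 computation; it is a simplified illustration, not the official `evaluate.py`, which additionally handles multiple reference answers and yes/no/unknown cases.

```python
# Simplified token-level F1 between a predicted and a gold answer
# (illustration only; use the official CoQA evaluate.py for real scores).
import re
import string
from collections import Counter


def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def f1_score(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


print(f1_score("the left atrium", "left atrium"))  # 1.0 after normalization
```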

Results

Some common parameters:

`adam_epsilon=1e-08, data_dir='data/', do_lower_case=True, doc_stride=128, fp16=True, history_len=2, learning_rate=3e-05, max_answer_length=30, max_grad_norm=-1.0, max_query_length=64, max_seq_length=512, per_gpu_eval_batch_size=8, seed=42, train_file='coqa-train-v1.0.json', warmup_steps=2000, weight_decay=0.01, num_train_epochs=2, threads=8`

Best results:
| Model | EM | F1 | Parameters |
| --- | --- | --- | --- |
| albert-base-v2 | 71.5 | 81.0 | per_gpu_train_batch_size=8 |
| ClinicalBERT | 63.8 | 73.7 | per_gpu_train_batch_size=8 |

Parameters

Here we explain some important parameters; the full list of training parameters can be found in `run_coqa.py`.

| Param name | Default value | Details |
| --- | --- | --- |
| model_type | None | Type of model, e.g. ClinicalBERT, ALBERT |
| model_name_or_path | None | Path to a pre-trained model, or a model name listed above. |
| output_dir | None | The output directory where the model checkpoints and predictions will be written. |
| data_dir | None | The directory where the training and evaluation data (JSON files) are placed; if None, the root directory is used. |
| train_file | coqa-train-v1.0.json | The training file. |
| predict_file | coqa-dev-v1.0.json | The evaluation file. |
| max_seq_length | 512 | The maximum total input sequence length after WordPiece tokenization. |
| doc_stride | 128 | When splitting a long document into chunks, the stride length to take between chunks. |
| max_query_length | 64 | The maximum number of tokens for the question. Longer questions are truncated to this length. |
| do_train | False | Whether to run training. |
| do_eval | False | Whether to run evaluation on the CoQA dev set. |
| evaluate_during_training | False | Run evaluation during training at each logging step. |
| do_lower_case | False | Set this flag if you are using an uncased model. |
| per_gpu_train_batch_size | 8 | Batch size per GPU/CPU for training. |
| learning_rate | 3e-5 | The initial learning rate for Adam. |
| gradient_accumulation_steps | 1 | Number of update steps to accumulate before performing a backward/update pass. |
| weight_decay | 0.01 | Weight decay (optional to change). |
| num_train_epochs | 2 | Total number of training epochs to perform. |
| warmup_steps | 2000 | Linear warmup over warmup_steps. This should not be too small (e.g. 200), which may lead to a low score. |
| history_len | 2 | Number of previous question-answer turns to keep as conversation history. |
| logging_steps | 50 | Log every X update steps. |
| threads | 8 | Number of CPU threads used for parallel processing. |
| fp16 | True | Use 16-bit (half-precision) floating point, which speeds up training. |
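
To make `max_seq_length` and `doc_stride` concrete, here is a minimal, self-contained illustration of how a long token sequence is split into overlapping windows. It is a simplified sketch of the general technique, not the repo's exact featurization code (which also accounts for the question tokens and special tokens).

```python
# Simplified sketch of doc_stride chunking: a long token sequence is split into
# windows of at most max_seq_length tokens, advancing by doc_stride each time,
# so consecutive windows overlap. Not the repo's exact featurization code.
def chunk_tokens(tokens, max_seq_length=512, doc_stride=128):
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + max_seq_length])
        if start + max_seq_length >= len(tokens):
            break
        start += doc_stride
    return chunks


document = [f"tok{i}" for i in range(1000)]
windows = chunk_tokens(document)
print(len(windows), [len(w) for w in windows])  # 5 windows; only the last is shorter
```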

Model explanation

The following is an overview of the whole repo structure. We keep the structure similar to the transformers SQuAD fine-tuning example, and we use the transformers library for loading pre-trained models and for the model implementation.

```
├── data
│   ├── coqa-dev-v1.0.json      # CoQA validation dataset
│   └── coqa-train-v1.0.json    # CoQA training dataset
├── metrics
│   └── coqa_metrics.py         # Compute and save the predictions, run evaluation, and get the final score
├── processors
│   ├── coqa.py                 # Data processing: create examples from the raw dataset, convert examples into features
│   └── utils.py                # Data processor for sequence classification datasets
├── evaluate.py                 # Script used to run the evaluation only; see the Run-evaluate section above
├── model
│   ├── Layers.py               # Multiple LinearLayer classes used in the downstream QA tasks
│   ├── modeling_albert.py      # Core ALBERT model class; adds the downstream QA architecture on top of the pre-trained ALBERT model from the transformers library
│   ├── modeling_auto.py        # Generic class that instantiates one of the question answering model classes; BERT-like models have similar inputs and outputs, so this keeps the code clean and speeds up development and testing (see the corresponding class in the transformers library)
│   └── modeling_bert.py        # Core BERT model class; includes all the architecture for the downstream QA tasks
├── README.md                   # The instructions you are reading now
├── requirements.txt            # The requirements for reproducing our results
├── run_coqa.py                 # Main function script
├── run.sh                      # Run training with the default setting (ALBERT)
├── run_clinicalbert.sh         # Run training with the default setting (ClinicalBERT)
└── utils
    └── tools.py                # Function used to calculate the number of model parameters
```
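
For reference, counting trainable parameters (as `utils/tools.py` does) typically boils down to something like the sketch below; the actual helper name in the repo may differ.

```python
# Count trainable parameters of a PyTorch model (sketch of what utils/tools.py
# likely does; the actual function name in the repo may differ).
def count_trainable_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```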

The following are detailed descriptions of some core scripts:

  • run_coqa.py: This script is the main function script for training and evaluation.
    1. Defines all system parameters and a few training parameters
    2. Sets up CUDA, GPU, distributed training, logging, and all random seeds
    3. Instantiates and initializes the corresponding model config, tokenizer, and pre-trained model
    4. Calculates the number of trainable parameters
    5. Defines and executes the training and evaluation functions
  • coqa.py: This script contains the functions and classes used for data preprocessing.
    1. Defines the data structures CoqaExamples, CoqaFeatures and CoqaResult
    2. Defines the CoqaProcessor class, which processes the raw data into examples. It implements get_raw_context_offsets to add word offsets, find_span_with_gt to find the best answer span, _create_examples to convert a single conversation (context and QA pairs) into CoqaExamples, and get_examples to run _create_examples in parallel
    3. Defines coqa_convert_example_to_features to convert CoqaExamples into CoqaFeatures, and coqa_convert_examples_to_features to run coqa_convert_example_to_features in parallel
  • modeling_albert.py: This script contains the core ALBERT class and the related downstream CoQA architecture.
    1. Loads the pre-trained ALBERT model from the transformers library
    2. Builds the downstream CoQA task architecture on top of the ALBERT last hidden state and pooler output, producing the training loss during training and the start, end, yes, no, and unknown logits for prediction (a minimal sketch follows below)
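
As a minimal sketch of that last point (not the repo's actual `modeling_albert.py`, which is more elaborate), a CoQA-style head on top of a pre-trained ALBERT from the transformers library could look like this:

```python
# Minimal sketch of a CoQA-style head on top of pre-trained ALBERT
# (illustrative only; the repo's modeling_albert.py is more elaborate).
import torch.nn as nn
from transformers import AlbertModel


class AlbertForCoqaSketch(nn.Module):
    def __init__(self, model_name="albert-base-v2"):
        super().__init__()
        self.albert = AlbertModel.from_pretrained(model_name)
        hidden = self.albert.config.hidden_size
        self.span_head = nn.Linear(hidden, 2)   # per-token start/end logits
        self.class_head = nn.Linear(hidden, 3)  # yes / no / unknown logits

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.albert(input_ids,
                              attention_mask=attention_mask,
                              token_type_ids=token_type_ids)
        sequence_output = outputs[0]  # last hidden state, (batch, seq_len, hidden)
        pooled_output = outputs[1]    # pooler output, (batch, hidden)
        start_logits, end_logits = self.span_head(sequence_output).split(1, dim=-1)
        class_logits = self.class_head(pooled_output)
        return start_logits.squeeze(-1), end_logits.squeeze(-1), class_logits
```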

References

  1. coqa-baselines
  2. transformers
  3. bert4coqa
  4. transformers-coqa