Project author: XiaoYee

Project description:
Code for NAACL 2019 paper: Adversarial Category Alignment Network for Cross-domain Sentiment Classification
Primary language: Python
Repository: git://github.com/XiaoYee/ACAN.git
Created: 2019-07-04T09:21:47Z
Project community: https://github.com/XiaoYee/ACAN

License: MIT License

ACAN

Code for NAACL 2019 paper: “Adversarial Category Alignment Network for Cross-domain Sentiment Classification” (pdf)

Dataset & pretrained word embeddings

You can download the datasets (amazon-benchmark) at [Download]. The zip file should be decompressed and put in the root directory.

Download the pretrained GloVe vectors [glove.840B.300d.zip]. Decompress the zip file and put the txt file in the root directory.
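
The embedding file is plain text with one token per line followed by its 300-dimensional vector. As a rough illustration only (the repository has its own embedding reader), it can be parsed into a Python dictionary as below; note that a few tokens in glove.840B.300d.txt contain spaces, so the vector is taken as the last 300 fields:

  # Illustrative sketch only; not the project's own loader.
  import numpy as np

  def load_glove(path, dim=300, vocab=None):
      """Read a GloVe text file into a {word: np.ndarray} dict."""
      embeddings = {}
      with open(path) as f:
          for line in f:
              parts = line.rstrip().split(" ")
              # Some tokens contain spaces, so take the last `dim` fields as the vector.
              word = " ".join(parts[:-dim])
              if vocab is not None and word not in vocab:
                  continue
              embeddings[word] = np.asarray(parts[-dim:], dtype="float32")
      return embeddings

  # e.g. vectors = load_glove("glove.840B.300d.txt")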

Train & evaluation

All arguments and hyper-parameters, together with their default values, are defined in train_batch.py.

Under code/, use the following command for training any source-target pair from the amazon benchmark:

  CUDA_VISIBLE_DEVICES="0" python train_batch.py \
  --emb ../glove.840B.300d.txt \
  --dataset amazon \
  --source $source \
  --target $target \
  --n-class 2 \
  --lamda1 -0.1 --lamda2 0.1 --lamda3 5 --lamda4 1.5 \
  --epochs 30

where --emb is the path to the pre-trained word embeddings. $source and $target are domains from the amazon benchmark, both in ['book', 'dvd', 'electronics', 'kitchen']. --n-class is the number of output classes; it is set to 2 since we only consider binary classification (positive or negative) on this dataset. All other hyper-parameters are left at their defaults.
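
To run all source-target pairs in one go, a small driver script can loop over the four domains and invoke the training script for each pair. The sketch below is a convenience wrapper and not part of the repository; it assumes it is launched from code/ and that the training script is train_batch.py as referenced above:

  # Convenience sketch (not part of the repo): train on every source/target pair.
  import itertools
  import os
  import subprocess

  DOMAINS = ["book", "dvd", "electronics", "kitchen"]
  env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")

  for source, target in itertools.permutations(DOMAINS, 2):
      cmd = ["python", "train_batch.py",
             "--emb", "../glove.840B.300d.txt",
             "--dataset", "amazon",
             "--source", source,
             "--target", target,
             "--n-class", "2",
             "--lamda1", "-0.1", "--lamda2", "0.1",
             "--lamda3", "5", "--lamda4", "1.5",
             "--epochs", "30"]
      print("Running: " + " ".join(cmd))
      subprocess.check_call(cmd, env=env)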

Dependencies

The code was only tested in the following environment:

  • Python 2.7
  • Keras 2.1.2
  • tensorflow 1.4.1
  • numpy 1.13.3
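
To confirm that your environment matches the tested configuration above before training, a quick version check such as the following (a convenience snippet, not part of the repository) can help:

  # Print installed versions to compare against the tested configuration above.
  from __future__ import print_function  # keeps the output readable on Python 2.7
  import sys
  import numpy
  import tensorflow
  import keras

  print("Python     :", sys.version.split()[0])  # tested with 2.7
  print("Keras      :", keras.__version__)       # tested with 2.1.2
  print("TensorFlow :", tensorflow.__version__)  # tested with 1.4.1
  print("NumPy      :", numpy.__version__)       # tested with 1.13.3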

Cite

If you use the code, please cite the following paper:

  @InProceedings{qu-etal-2019-adversarial,
    author    = {Qu, Xiaoye and Zou, Zhikang and Cheng, Yu and Yang, Yang and Zhou, Pan},
    title     = {Adversarial Category Alignment Network for Cross-domain Sentiment Classification},
    booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics},
    year      = {2019},
    publisher = {Association for Computational Linguistics}
  }