Project author: taishan1994

Project description: Pytorch implementation of graph attention network
Language: Python
Project address: git://github.com/taishan1994/pytorch_gat.git
Created: 2020-09-13T03:24:23Z
Project community: https://github.com/taishan1994/pytorch_gat


pytorch_gat

Pytorch implementation of graph attention network:

Paper address: Graph Attention Networks (Veličković et al., ICLR 2018): https://arxiv.org/abs/1710.10903

The implementation is based on the official code of the graph attention network.
The goal is not to match the performance of the original code, but to deepen the understanding of TensorFlow and PyTorch.
If you want better performance, you can refer to:

Official implementation: https://github.com/PetarV-/GAT

Another pytorch implementation: https://github.com/Diego999/pyGAT

Keras implementation: https://github.com/danielegrattarola/keras-gat

You can learn how to convert TensorFlow code to PyTorch code from here:

https://i.cnblogs.com/posts/edit;postId=13659274
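
As a quick illustration of what such a conversion looks like (this snippet is illustrative and not taken from this repository), a TensorFlow 1.x dense layer and a typical PyTorch counterpart might be written as follows; the feature sizes are just the Cora dimensions used below:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # TensorFlow 1.x builds the layer inside the graph:
    #   h = tf.layers.dense(x, units=8, activation=tf.nn.elu)
    # In PyTorch the layer is created once, with an explicit input size,
    # and then called on the data in forward().
    dense = nn.Linear(in_features=1433, out_features=8)  # 1433 = Cora feature dimension
    x = torch.randn(2708, 1433)                          # 2708 = number of Cora nodes
    h = F.elu(dense(x))
    print(h.shape)  # torch.Size([2708, 8])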

Introduction

utils.py: Reads and preprocesses the data.

layer.py: The graph attention layer (a minimal sketch of such a layer is shown after this list).

model.py: The graph attention network model.

main.py: Training, validation and testing.
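
For orientation, here is a minimal sketch of what a single attention head of a GAT layer can look like in PyTorch, following the formulation in the paper (LeakyReLU-activated scores a^T [Wh_i || Wh_j] followed by a softmax over neighbors). The class and argument names are illustrative and do not come from layer.py in this repository:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GraphAttentionHead(nn.Module):
        """A single attention head of a graph attention layer."""
        def __init__(self, in_features, out_features, alpha=0.2):
            super().__init__()
            self.W = nn.Linear(in_features, out_features, bias=False)
            # the attention vector a is split into a "self" part and a "neighbor" part,
            # so that a^T [Wh_i || Wh_j] = a_self^T Wh_i + a_neigh^T Wh_j
            self.a_self = nn.Linear(out_features, 1, bias=False)
            self.a_neigh = nn.Linear(out_features, 1, bias=False)
            self.leakyrelu = nn.LeakyReLU(alpha)

        def forward(self, x, adj):
            # x: [N, in_features] node features, adj: [N, N] adjacency (with self-loops)
            h = self.W(x)                                            # [N, out_features]
            e = self.leakyrelu(self.a_self(h) + self.a_neigh(h).T)   # [N, N] raw scores
            e = e.masked_fill(adj == 0, float('-inf'))               # mask non-edges
            attn = F.softmax(e, dim=1)                               # attention coefficients
            return F.elu(attn @ h)                                   # aggregated node features

    # toy usage: 2708 Cora nodes, 1433 input features, 8 hidden units per head
    layer = GraphAttentionHead(1433, 8)
    out = layer(torch.randn(2708, 1433), torch.eye(2708))
    print(out.shape)  # torch.Size([2708, 8])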

You can run it through:

    python main.py

Results

I did not refer to the other PyTorch implementation.
To make it easier to compare my code with the TensorFlow version,
the code is structured to match the TensorFlow implementation.

The following is the result of running the official code:

    Dataset: cora
    ----- Opt. hyperparams -----
    lr: 0.005
    l2_coef: 0.0005
    ----- Archi. hyperparams -----
    nb. layers: 1
    nb. units per layer: [8]
    nb. attention heads: [8, 1]
    residual: False
    nonlinearity: <function elu at 0x7f1b7507af28>
    model: <class 'models.gat.GAT'>
    (2708, 2708)
    (2708, 1433)
    epoch: 1
    Training: loss = 1.94574, acc = 0.14286 | Val: loss = 1.93655, acc = 0.13600
    epoch: 2
    Training: loss = 1.94598, acc = 0.15714 | Val: loss = 1.93377, acc = 0.14800
    epoch: 3
    Training: loss = 1.94945, acc = 0.14286 | Val: loss = 1.93257, acc = 0.19600
    epoch: 4
    Training: loss = 1.93438, acc = 0.24286 | Val: loss = 1.93172, acc = 0.22800
    epoch: 5
    Training: loss = 1.93199, acc = 0.17143 | Val: loss = 1.93013, acc = 0.36400
    ......
    epoch: 674
    Training: loss = 1.23833, acc = 0.49286 | Val: loss = 1.01357, acc = 0.81200
    Early stop! Min loss: 1.010906457901001 , Max accuracy: 0.8219999074935913
    Early stop model validation loss: 1.3742048740386963 , accuracy: 0.8219999074935913
    Test loss: 1.3630210161209106 ; Test accuracy: 0.8219999074935913
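
If you want to mirror the optimization hyperparameters above in PyTorch, lr and l2_coef map naturally onto Adam's lr and weight_decay. A minimal, illustrative setup (the `model` below is just a stand-in, not the repository's model.py) could look like:

    import torch
    import torch.nn as nn

    model = nn.Linear(1433, 7)  # stand-in for the GAT model defined in model.py
    # lr 0.005 and l2_coef 0.0005 from the log correspond to Adam's lr and weight_decay
    optimizer = torch.optim.Adam(model.parameters(), lr=0.005, weight_decay=0.0005)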

The following is the result of running my code:

    (2708, 2708)
    (2708, 1433)
    Number of training nodes: 140
    Number of validation nodes: 500
    Number of test nodes: 1000
    epoch:001,TrainLoss:7.9040,TrainAcc:0.0000,ValLoss:7.9040,ValAcc:0.0000
    epoch:002,TrainLoss:7.9040,TrainAcc:0.0000,ValLoss:7.9039,ValAcc:0.1920
    epoch:003,TrainLoss:7.9039,TrainAcc:0.0714,ValLoss:7.9039,ValAcc:0.1600
    epoch:004,TrainLoss:7.9038,TrainAcc:0.1000,ValLoss:7.9039,ValAcc:0.1020
    ......
    epoch:2396,TrainLoss:7.0191,TrainAcc:0.8929,ValLoss:7.4967,ValAcc:0.7440
    epoch:2397,TrainLoss:7.0400,TrainAcc:0.8786,ValLoss:7.4969,ValAcc:0.7580
    epoch:2398,TrainLoss:7.0188,TrainAcc:0.8929,ValLoss:7.4974,ValAcc:0.7580
    epoch:2399,TrainLoss:7.0045,TrainAcc:0.9071,ValLoss:7.4983,ValAcc:0.7620
    epoch:2400,TrainLoss:7.0402,TrainAcc:0.8714,ValLoss:7.4994,ValAcc:0.7620
    TestLoss:7.4805,TestAcc:0.7700
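
The 140/500/1000 split above is the standard Cora split, and accuracy is evaluated only on the nodes selected by each mask. A minimal sketch of such a masked accuracy computation (with random stand-in data, not the repository's utils.py) is:

    import torch

    def masked_accuracy(logits, labels, mask):
        # accuracy over the nodes selected by a boolean mask (e.g. the 1000 test nodes)
        pred = logits[mask].argmax(dim=1)
        return (pred == labels[mask]).float().mean().item()

    # illustrative shapes for Cora: 2708 nodes, 7 classes
    logits = torch.randn(2708, 7)
    labels = torch.randint(0, 7, (2708,))
    test_mask = torch.zeros(2708, dtype=torch.bool)
    test_mask[-1000:] = True
    print(masked_accuracy(logits, labels, test_mask))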

The figures below show the results.

Loss changes with epochs: (figure)

Accuracy changes with epochs: (figure)

Dimensionality reduction visualization of the test results: (figure)
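
The last figure is a 2D projection of the test-node outputs. One common way to produce such a plot (assuming scikit-learn's t-SNE and matplotlib; the repository may use a different method) is:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    # embeddings: model outputs for the test nodes; here random data stands in for them
    embeddings = np.random.randn(1000, 7)
    labels = np.random.randint(0, 7, 1000)

    coords = TSNE(n_components=2).fit_transform(embeddings)  # project to 2D
    plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=10, cmap='tab10')
    plt.title('2D projection of test-node outputs')
    plt.show()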