Project author: phamquiluan

Project description:
Pretrained weights of Residual Attention Network on ImageNet
Language: Python
Repository: git://github.com/phamquiluan/ResidualAttentionNetwork.git
Created: 2020-03-07T05:46:41Z
Project community: https://github.com/phamquiluan/ResidualAttentionNetwork

License:



ImageNet training in PyTorch - Residual Attention Network


This repository implements training of the Residual Attention Network on the ImageNet dataset and provides pretrained weights.

Install

    pip install 'git+ssh://git@github.com/phamquiluan/ResidualAttentionNetwork.git@v0.2.0'

Quickstart

    import torch
    from resattnet import resattnet56

    m = resattnet56(in_channels=3, num_classes=10)  # pretrained weights are loaded automatically

    tensor = torch.Tensor(1, 3, 224, 224)
    output = m(tensor)

    print(output.shape)  # torch.Size([1, 10])
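The model returns raw logits, one per class. To turn them into a probability distribution and a predicted class index, apply softmax and argmax. A minimal sketch, using a hand-written logits tensor in place of the model output above so it runs without downloading any weights:

```python
import torch

# Stand-in for the (1, 10) logits tensor produced by resattnet56 above.
logits = torch.tensor([[0.1, 2.5, -1.0, 0.3, 0.0, 1.2, -0.5, 0.8, 0.2, -2.0]])

probs = torch.softmax(logits, dim=1)  # normalize logits into probabilities
pred = probs.argmax(dim=1)            # index of the highest-probability class

print(pred.item())   # 1, the class with the largest logit
```

With the real model, replace `logits` with `output` from the quickstart; the rest is unchanged.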

Pretrained Download

Download the resattnet56 weights pretrained on ImageNet-1K: link

Evaluation: Acc@1 77.024, Acc@5 93.574
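The Acc@1 / Acc@5 figures above are top-k accuracies: the fraction of validation images whose true label is the single highest-scoring prediction (top-1) or among the five highest-scoring predictions (top-5). A minimal pure-Python sketch of that metric, on hypothetical toy data:

```python
def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest scores."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores in this row
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Toy example: 4 samples, 6 classes.
scores = [
    [0.1, 0.9, 0.0, 0.0, 0.0, 0.0],   # true label 1 -> top-1 hit
    [0.4, 0.3, 0.2, 0.1, 0.0, 0.0],   # true label 2 -> top-5 hit only
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],   # true label 5 -> top-1 hit
    [0.6, 0.1, 0.1, 0.1, 0.0, 0.0],   # true label 3 -> top-5 hit only
]
labels = [1, 2, 5, 3]

print(topk_accuracy(scores, labels, 1))  # 0.5
print(topk_accuracy(scores, labels, 5))  # 1.0
```

In practice the same computation is done on GPU tensors (e.g. with `torch.topk`), but the logic is identical.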

Training

To train a model, run main.py with the desired model architecture and the path to the ImageNet dataset:

    python main.py -a resattnet56 [imagenet-folder with train and val folders]

Multi-processing Distributed Data Parallel Training

You should always use the NCCL backend for multi-processing distributed training since it currently provides the best distributed training performance.

Single node, multiple GPUs:

    python main.py -a resattnet56 --dist-url 'tcp://127.0.0.1:FREEPORT' --dist-backend 'nccl' --multiprocessing-distributed --world-size 1 --rank 0 [imagenet-folder with train and val folders]