MobileNetV3 in PyTorch with ImageNet pretrained models
This is a PyTorch implementation of MobileNetV3 architecture as described in the paper Searching for MobileNetV3.
Some details may differ from the original paper; discussion and corrections are welcome.
MobileNetV3-Large:

| | Madds | Parameters | Top1-acc | Pretrained Model |
| --- | --- | --- | --- | --- |
| Official 1.0 | 219 M | 5.4 M | 75.2% | - |
| Official 0.75 | 155 M | 4 M | 73.3% | - |
| Ours 1.0 | 224 M | 5.48 M | 72.8% | - |
| Ours 0.75 | 148 M | 3.91 M | - | - |
MobileNetV3-Small:

| | Madds | Parameters | Top1-acc | Pretrained Model |
| --- | --- | --- | --- | --- |
| Official 1.0 | 66 M | 2.9 M | 67.4% | - |
| Official 0.75 | 44 M | 2.4 M | 65.4% | - |
| Ours 1.0 | 63 M | 2.94 M | 67.4% | [google drive] |
| Ours 0.75 | 46 M | 2.38 M | - | - |
Pretrained models are still training …
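To sanity-check the Parameters column in the tables above, you can count the trainable parameters of the models directly. The snippet below is a minimal sketch that assumes the `mobilenetv3` constructor from this repo (used in the usage example below); the Madds column would additionally require a FLOPs counter.

```python
# Minimal sketch: count trainable parameters to compare against the tables.
# Assumes the model constructor `mobilenetv3` from this repo's mobilenetv3.py.
from mobilenetv3 import mobilenetv3

def count_parameters(model):
    # Sum the number of elements of every trainable tensor.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

for mode in ('large', 'small'):
    net = mobilenetv3(mode=mode)
    print('{}: {:.2f} M parameters'.format(mode, count_parameters(net) / 1e6))
```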
Usage:

```python
# PyTorch 1.0.1
import torch
from mobilenetv3 import mobilenetv3  # model definition in this repo (mobilenetv3.py)

# large
net_large = mobilenetv3(mode='large')

# small
net_small = mobilenetv3(mode='small')
state_dict = torch.load('mobilenetv3_small_67.4.pth.tar')
net_small.load_state_dict(state_dict)
```
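As a quick smoke test, you can run a forward pass on a dummy input; the sketch below assumes `net_small` is the pretrained model loaded above. Real images need the ImageNet preprocessing shown in the next section.

```python
# Sketch: forward-pass check with the pretrained small model (`net_small` from above).
import torch

net_small.eval()
with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)        # dummy batch of one 224x224 RGB image
    logits = net_small(x)                  # (1, 1000) ImageNet class scores
    print(logits.argmax(dim=1).item())     # predicted class index
```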
I used the following code for data pre-processing on ImageNet:
```python
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# traindir, valdir, batch_size and n_worker are set elsewhere in the training script
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
input_size = 224

train_loader = torch.utils.data.DataLoader(
    datasets.ImageFolder(
        traindir,
        transforms.Compose([
            transforms.RandomResizedCrop(input_size),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            normalize,
        ])),
    batch_size=batch_size, shuffle=True,
    num_workers=n_worker, pin_memory=True)

val_loader = torch.utils.data.DataLoader(
    datasets.ImageFolder(
        valdir,
        transforms.Compose([
            # resize shorter side to input_size / 0.875, then center-crop
            transforms.Resize(int(input_size / 0.875)),
            transforms.CenterCrop(input_size),
            transforms.ToTensor(),
            normalize,
        ])),
    batch_size=batch_size, shuffle=False,
    num_workers=n_worker, pin_memory=True)
```
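To reproduce the Top1-acc numbers in the tables, a simple evaluation loop over `val_loader` is enough. The sketch below assumes `net_small` already holds the pretrained weights loaded earlier.

```python
# Sketch: top-1 accuracy over the ImageNet validation loader defined above.
# Assumes `net_small` already holds the pretrained weights.
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net_small = net_small.to(device).eval()

correct, total = 0, 0
with torch.no_grad():
    for images, targets in val_loader:
        images, targets = images.to(device), targets.to(device)
        preds = net_small(images).argmax(dim=1)    # predicted class per image
        correct += (preds == targets).sum().item()
        total += targets.size(0)

print('top-1 accuracy: {:.2%}'.format(correct / total))
```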