Project author: M3DV

Project description:
[MICCAI'20] AlignShift: Bridging the Gap of Imaging Thickness in 3D Anisotropic Volumes

Language: Python
Project address: git://github.com/M3DV/AlignShift.git
Created: 2020-05-05T05:15:53Z
Project community: https://github.com/M3DV/AlignShift

License: Apache License 2.0



DeepLesion Codebase for Universal Lesion Detection

This repository contains the codebase for our two papers, A3D (MICCAI'21) and AlignShift (MICCAI'20), which achieve state-of-the-art performance on DeepLesion for universal lesion detection.

  • Asymmetric 3D Context Fusion for Universal Lesion Detection (MICCAI’21)

  • AlignShift: Bridging the Gap of Imaging Thickness in 3D Anisotropic Volumes (MICCAI’20, early accepted)

Code structure

  • nn
    The core implementation of the AlignShift, TSM, and A3D convolutions, including the operators, models, and 2D-to-3D/AlignShift/TSM model converters.
    • operators: A3DConv, AlignShiftConv, TSMConv.
    • converters.py: converters that turn 2D models into their 3DConv/AlignShiftConv/TSMConv/A3DConv counterparts.
    • models: Native AlignShift/TSM/A3DConv models.
  • deeplesion
    The experiment code is based on mmdetection; this directory contains the components used with mmdetection.
  • mmdet: a duplication of mmdetection with our new models registered.

Installation

  • git clone this repository
  • pip install -e .
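
For reference, the two steps spelled out as shell commands (the repository URL is the project address above; the directory name after cloning is assumed to be AlignShift):

    git clone https://github.com/M3DV/AlignShift.git
    cd AlignShift
    pip install -e .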

The code requires only a common Python environment for machine learning. It was tested with:

  • Python 3 (>=3.6)
  • PyTorch==1.3.1
  • numpy==1.18.5, pandas==0.25.3, scikit-learn==0.22.2, Pillow==8.0.1, fire, scikit-image

Higher (or lower) versions should also work (perhaps with minor modifications).
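
If convenient, the tested versions can be installed in one step; this is simply the dependency list above turned into a pip command (fire and scikit-image are left unpinned, as in the list):

    pip install torch==1.3.1 numpy==1.18.5 pandas==0.25.3 scikit-learn==0.22.2 Pillow==8.0.1 fire scikit-image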

Convert a 2D model into 3D with a single line of code

    import torch
    import torchvision
    from converter import Converter
    from alignshift import AlignShiftConv

    # m is a standard pytorch model
    m = torchvision.models.resnet18(True)

    alignshift_conv_cfg = dict(conv_type=AlignShiftConv,
                               n_fold=8,
                               alignshift=True,
                               inplace=True,
                               ref_spacing=0.2,
                               shift_padding_zero=True)
    m = Converter(m,
                  alignshift_conv_cfg,
                  additional_forward_fts=['thickness'],
                  skip_first_conv=True,
                  first_conv_input_channles=1)

    # after conversion, m uses AlignShiftConv and is capable of processing 3D volumes
    batch_size, in_channels, D, H, W = 2, 1, 7, 256, 256  # example sizes; in_channels matches first_conv_input_channles
    x = torch.rand(batch_size, in_channels, D, H, W)
    thickness = torch.rand(batch_size, 1)
    out = m(x, thickness)
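
The converters are described above as also producing TSMConv/A3DConv counterparts. A minimal sketch for the TSM case, assuming the config dict forwards its remaining keys to the chosen conv_type (the n_fold/tsm keys are taken from the TSMConv signature below; the exact accepted keys should be checked against nn/converters.py):

    import torch
    import torchvision
    from converter import Converter
    from nn.operators import TSMConv

    # select the TSM operator instead of AlignShiftConv
    tsm_conv_cfg = dict(conv_type=TSMConv, n_fold=8, tsm=True)
    m = Converter(torchvision.models.resnet18(True),
                  tsm_conv_cfg,
                  skip_first_conv=True,
                  first_conv_input_channles=1)
    # TSMConv takes no thickness input (see the operator usage below),
    # so no additional_forward_fts are passed here
    out = m(torch.rand(2, 1, 7, 256, 256))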

Usage of AlignShiftConv/TSMConv/A3DConv operators

    import torch
    from nn.operators import AlignShiftConv, TSMConv, A3DConv

    batch_size, D, H, W = 2, 7, 64, 64  # example sizes
    x = torch.rand(batch_size, 3, D, H, W)
    thickness = torch.rand(batch_size, 1)

    # AlignShiftConv to process 3D volumes
    conv = AlignShiftConv(in_channels=3, out_channels=10, kernel_size=3, padding=1, n_fold=8, alignshift=True, ref_thickness=2.0)
    out = conv(x, thickness)

    # TSMConv to process 3D volumes
    conv = TSMConv(in_channels=3, out_channels=10, kernel_size=3, padding=1, n_fold=8, tsm=True)
    out = conv(x)

    # A3DConv to process 3D volumes
    conv = A3DConv(in_channels=3, out_channels=10, kernel_size=3, padding=1, dimension=3)
    out = conv(x)
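
Since all three operators above use kernel_size=3 with padding=1, the spatial and depth extents are preserved, so each call should return a tensor of shape (batch_size, 10, D, H, W). A quick sanity check:

    assert out.shape == (batch_size, 10, D, H, W)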

Usage of native AlignShiftConv/TSMConv models

    import torch
    from nn.models import DenseNetCustomTrunc3dAlign, DenseNetCustomTrunc3dTSM

    net = DenseNetCustomTrunc3dAlign(num_classes=3)
    B, C_in, D, H, W = (1, 3, 7, 256, 256)
    input_3d = torch.rand(B, C_in, D, H, W)
    thickness = torch.rand(B, 1)
    output_3d = net(input_3d, thickness)
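
The import above also brings in DenseNetCustomTrunc3dTSM, which is constructed the same way. A minimal sketch of running it for inference, assuming that, like TSMConv, the TSM variant takes no thickness input (its forward signature should be checked in nn/models):

    import torch
    from nn.models import DenseNetCustomTrunc3dTSM

    net = DenseNetCustomTrunc3dTSM(num_classes=3)
    net.eval()
    with torch.no_grad():
        input_3d = torch.rand(1, 3, 7, 256, 256)
        output_3d = net(input_3d)  # no thickness argument, by assumption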

How to run the experiments

  • Training

    ./deeplesion/train_dist.sh ${mmdetection script} ${dist training GPUS}

    • Train AlignShiftConv models

      ./deeplesion/train_dist.sh ./deeplesion/mconfig/densenet_align.py 2

    • Train TSMConv models

      ./deeplesion/train_dist.sh ./deeplesion/mconfig/densenet_tsm.py 2

    • Train A3DConv models

      ./deeplesion/train_dist.sh ./deeplesion/mconfig/densenet_a3d.py 2

  • Evaluation

    ./deeplesion/eval.sh ${mmdetection script} ${checkpoint path}

    ./deeplesion/eval.sh ./deeplesion/mconfig/densenet_align.py ./deeplesion/model_weights/alignshift_7slice.pth

Citation

If you find this project useful, please cite the following papers:

  1. Jiancheng Yang, Yi He, Kaiming Kuang, Zudi Lin, Hanspeter Pfister, Bingbing Ni. "Asymmetric 3D Context Fusion for Universal Lesion Detection". International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2021.
  2. Jiancheng Yang, Yi He, Xiaoyang Huang, Jingwei Xu, Xiaodan Ye, Guangyu Tao, Bingbing Ni. "AlignShift: Bridging the Gap of Imaging Thickness in 3D Anisotropic Volumes". International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2020.

or use the BibTeX entries below:

    @inproceedings{yang2021asymmetric,
      title={Asymmetric 3D Context Fusion for Universal Lesion Detection},
      author={Yang, Jiancheng and He, Yi and Kuang, Kaiming and Lin, Zudi and Pfister, Hanspeter and Ni, Bingbing},
      booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)},
      pages={571--580},
      year={2021},
      organization={Springer}
    }

    @inproceedings{yang2020alignshift,
      title={AlignShift: bridging the gap of imaging thickness in 3D anisotropic volumes},
      author={Yang, Jiancheng and He, Yi and Huang, Xiaoyang and Xu, Jingwei and Ye, Xiaodan and Tao, Guangyu and Ni, Bingbing},
      booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)},
      pages={562--572},
      year={2020},
      organization={Springer}
    }