Project author: sraashis

Project description:
PyTorch implementation of the paper https://www.frontiersin.org/articles/10.3389/fcomp.2020.00035/full
Primary language: Jupyter Notebook
Project URL: git://github.com/sraashis/deepdyn.git
Created: 2017-09-11T19:33:05Z
Project community: https://github.com/sraashis/deepdyn

License: MIT License



Implementation of Deep Dynamic Networks for Retinal Vessel Segmentation (https://arxiv.org/abs/1903.07803)

A PyTorch-based framework for medical image processing with convolutional neural networks.

It includes an example of U-Net segmentation on the DRIVE dataset [1]. The DRIVE dataset is composed of 40 retinal fundus images.

Update Jul 30, 2020: Please check out a better, pip-installable version of the framework with an example here.

Required dependencies

We need the python3, numpy, pandas, pytorch, torchvision, matplotlib, and Pillow packages:

    pip install -r deepdyn/assets/requirements.txt

Flow

Project Structure

Dataset check

Original image and its respective ground-truth image. The ground truth is a binary image in which each vessel pixel is white (255) and each background pixel is black (0).
Sample DRIVE image
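
As a quick sanity check, one can verify that a ground-truth file really is binary. A minimal sketch, assuming the data layout used in the Dirs configuration below; 21_manual1.gif is one of the standard DRIVE ground-truth files:

    import numpy as np
    from PIL import Image

    # Path follows the 'truth' entry of the Dirs configuration below.
    truth = np.array(Image.open('data/DRIVE/manual/21_manual1.gif'))

    print(np.unique(truth))                # expected: [  0 255]
    print('vessel pixels:', int((truth == 255).sum()))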

Unet

Usage

Example main.py

    import testarch.unet as unet
    import testarch.unet.runs as r_unet
    import testarch.miniunet as mini_unet
    import testarch.miniunet.runs as r_miniunet
    import torchvision.transforms as tmf

    transforms = tmf.Compose([
        tmf.ToPILImage(),
        tmf.ToTensor()
    ])

    if __name__ == "__main__":
        unet.run([r_unet.DRIVE], transforms)
        mini_unet.run([r_miniunet.DRIVE], transforms)

Here the testarch.unet.runs file contains a predefined configuration, DRIVE, with all the necessary parameters:

    import os
    import numpy as np  # needed by the 'dparm' lambda below

    sep = os.sep

    DRIVE = {
        'Params': {
            'num_channels': 1,
            'num_classes': 2,
            'batch_size': 4,
            'epochs': 250,
            'learning_rate': 0.001,
            'patch_shape': (388, 388),
            'patch_offset': (150, 150),
            'expand_patch_by': (184, 184),
            'use_gpu': True,
            'distribute': True,
            'shuffle': True,
            'log_frequency': 5,
            'validation_frequency': 1,
            'mode': 'train',
            'parallel_trained': False,
        },
        'Dirs': {
            'image': 'data' + sep + 'DRIVE' + sep + 'images',
            'mask': 'data' + sep + 'DRIVE' + sep + 'mask',
            'truth': 'data' + sep + 'DRIVE' + sep + 'manual',
            'logs': 'logs' + sep + 'DRIVE' + sep + 'UNET',
            'splits_json': 'data' + sep + 'DRIVE' + sep + 'splits'
        },
        'Funcs': {
            'truth_getter': lambda file_name: file_name.split('_')[0] + '_manual1.gif',
            'mask_getter': lambda file_name: file_name.split('_')[0] + '_mask.gif',
            'dparm': lambda x: np.random.choice(np.arange(1, 101, 1), 2)
        }
    }
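
For instance, with the standard DRIVE file naming, the getter lambdas map an input image name to its companion files. A quick illustrative check, not part of the framework:

    # The same lambdas as in 'Funcs', applied to a typical DRIVE file name.
    truth_getter = lambda file_name: file_name.split('_')[0] + '_manual1.gif'
    mask_getter = lambda file_name: file_name.split('_')[0] + '_mask.gif'

    print(truth_getter('21_training.tif'))  # 21_manual1.gif
    print(mask_getter('21_training.tif'))   # 21_mask.gif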

Similarly, the testarch.miniunet.runs file contains a predefined configuration, DRIVE, with all the necessary parameters.
NOTE: Make sure it picks up the probability maps from the logs of the previous run.

    import os

    sep = os.sep

    DRIVE = {
        'Params': {
            'num_channels': 2,
            'num_classes': 2,
            'batch_size': 4,
            'epochs': 100,
            'learning_rate': 0.001,
            'patch_shape': (100, 100),
            'expand_patch_by': (40, 40),
            'use_gpu': True,
            'distribute': True,
            'shuffle': True,
            'log_frequency': 20,
            'validation_frequency': 1,
            'mode': 'train',
            'parallel_trained': False
        },
        'Dirs': {
            'image': 'data' + sep + 'DRIVE' + sep + 'images',
            'image_unet': 'logs' + sep + 'DRIVE' + sep + 'UNET',
            'mask': 'data' + sep + 'DRIVE' + sep + 'mask',
            'truth': 'data' + sep + 'DRIVE' + sep + 'manual',
            'logs': 'logs' + sep + 'DRIVE' + sep + 'MINI-UNET',
            'splits_json': 'data' + sep + 'DRIVE' + sep + 'splits'
        },
        'Funcs': {
            'truth_getter': lambda file_name: file_name.split('_')[0] + '_manual1.gif',
            'mask_getter': lambda file_name: file_name.split('_')[0] + '_mask.gif'
        }
    }
  • num_channels: Input channels to the CNN. We feed only the green channel to the U-Net.
  • num_classes: Output classes from the CNN. We have two: vessel and background.
  • patch_shape, expand_patch_by: The U-Net takes a 388×388 patch but also looks at an extra 184 pixels in each dimension, making the input 572×572. We mirror the image if we run into image edges while expanding. So a 572×572 patch goes in and a 388×388×2 output comes out (see the sketch after this list).
  • patch_offset: Offset between two consecutive input patches; the overlap gives us more data.
  • distribute: Uses all GPUs in parallel if set to True. [WARN] torch.cuda.set_device(1) must not be called if this is set to True.
  • shuffle: Shuffle the training data after every epoch.
  • log_frequency: Print a log line with average scores after this many batches. No rocket science :).
  • validation_frequency: Run validation after this many epochs. We also persist the best-performing model.
  • mode: train/test.
  • parallel_trained: Whether a resumed model was trained in parallel.
  • logs: Directory for all logs.
  • splits_json: A directory of JSON files, each listing files under the keys 'train', 'test', and 'validation'. https://github.com/sraashis/deepdyn/blob/master/utils/auto_split.py takes a folder with all the images and generates such a file automatically. This is handy for k-fold cross-validation: we just have to generate k such JSON files and put them in the splits_json folder.
  • truth_getter, mask_getter: Custom functions that map an input image to its ground truth and mask respectively.
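
The patch arithmetic above can be sketched with plain NumPy. This is only an illustration of the idea, not the framework's actual extraction code; the shapes follow the U-Net configuration (388×388 core patches expanded by 184 pixels per dimension to 572×572, with mirror padding at the edges):

    import numpy as np

    patch_shape = (388, 388)
    expand_by = (184, 184)  # total extra pixels per dimension (92 on each side)

    def expanded_patch(image, row, col):
        # Cut a patch whose 388x388 core starts at (row, col), expanded
        # symmetrically; mirror-pad first so patches at the image border
        # still have full context.
        pr, pc = expand_by[0] // 2, expand_by[1] // 2
        padded = np.pad(image, ((pr, pr), (pc, pc)), mode='reflect')
        return padded[row:row + patch_shape[0] + 2 * pr,
                      col:col + patch_shape[1] + 2 * pc]

    image = np.zeros((584, 565))         # a DRIVE fundus image is 584x565 pixels
    patch = expanded_patch(image, 0, 0)  # core patch at the top-left corner
    print(patch.shape)                   # (572, 572) = 388 + 184 per dimension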

Sample log

    workstation$ python main.py
    Total Params: 31042434
    ### SPLIT FOUND: data/DRIVE/splits/UNET-DRIVE.json Loaded
    Patches: 135
    Patches: 9
    Patches: 9
    Patches: 9
    Patches: 9
    Patches: 9
    Training...
    Epochs[1/40] Batch[5/34] loss:0.72354 pre:0.326 rec:0.866 f1:0.473 acc:0.833
    Epochs[1/40] Batch[10/34] loss:0.34364 pre:0.584 rec:0.638 f1:0.610 acc:0.912
    Epochs[1/40] Batch[15/34] loss:0.22827 pre:0.804 rec:0.565 f1:0.664 acc:0.939
    Epochs[1/40] Batch[20/34] loss:0.19549 pre:0.818 rec:0.629 f1:0.711 acc:0.947
    Epochs[1/40] Batch[25/34] loss:0.17726 pre:0.713 rec:0.741 f1:0.727 acc:0.954
    Epochs[1/40] Batch[30/34] loss:0.16564 pre:0.868 rec:0.691 f1:0.770 acc:0.946
    Running validation..
    21_training.tif PRF1A [0.66146, 0.37939, 0.4822, 0.93911]
    39_training.tif PRF1A [0.79561, 0.28355, 0.41809, 0.93219]
    37_training.tif PRF1A [0.78338, 0.47221, 0.58924, 0.94245]
    35_training.tif PRF1A [0.83836, 0.45788, 0.59228, 0.94534]
    38_training.tif PRF1A [0.64682, 0.26709, 0.37807, 0.92416]
    Score improved: 0.0 to 0.49741 BEST CHECKPOINT SAVED
    Epochs[2/40] Batch[5/34] loss:0.41760 pre:0.983 rec:0.243 f1:0.389 acc:0.916
    Epochs[2/40] Batch[10/34] loss:0.27762 pre:0.999 rec:0.025 f1:0.049 acc:0.916
    Epochs[2/40] Batch[15/34] loss:0.25742 pre:0.982 rec:0.049 f1:0.093 acc:0.886
    Epochs[2/40] Batch[20/34] loss:0.23239 pre:0.774 rec:0.421 f1:0.545 acc:0.928
    Epochs[2/40] Batch[25/34] loss:0.23667 pre:0.756 rec:0.506 f1:0.607 acc:0.930
    Epochs[2/40] Batch[30/34] loss:0.19529 pre:0.936 rec:0.343 f1:0.502 acc:0.923
    Running validation..
    21_training.tif PRF1A [0.95381, 0.45304, 0.6143, 0.95749]
    39_training.tif PRF1A [0.84353, 0.48988, 0.6198, 0.94837]
    37_training.tif PRF1A [0.8621, 0.60001, 0.70757, 0.95665]
    35_training.tif PRF1A [0.86854, 0.64861, 0.74263, 0.96102]
    38_training.tif PRF1A [0.93073, 0.28781, 0.43966, 0.93669]
    Score improved: 0.49741 to 0.63598 BEST CHECKPOINT SAVED
    ...
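
The "Score improved ... BEST CHECKPOINT SAVED" lines reflect a standard keep-the-best-model pattern tied to validation_frequency. A minimal sketch of that idea in PyTorch; the helpers train_one_epoch and validate and the checkpoint path are illustrative, not the framework's actual API:

    import torch

    def train_with_best_checkpoint(model, train_one_epoch, validate,
                                   epochs, validation_frequency, checkpoint_path):
        # Persist weights only when the validation score improves,
        # mirroring the 'BEST CHECKPOINT SAVED' lines in the log above.
        best_score = 0.0
        for epoch in range(1, epochs + 1):
            train_one_epoch(model)
            if epoch % validation_frequency == 0:
                score = validate(model)  # e.g. average F1 over the validation images
                if score > best_score:
                    print('Score improved:', best_score, 'to', score,
                          'BEST CHECKPOINT SAVED')
                    best_score = score
                    torch.save(model.state_dict(), checkpoint_path)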

Results

The network was trained for 40 epochs with 15 training images, 5 validation images, and 20 test images.
Training_Loss
Training_Scores
The figures above show the training cross-entropy loss, F1, and accuracy.
Precision-Recall color-Map
The figure above is the precision-recall map for training and validation respectively, with color indicating the training iteration.
Validation_scores
The figure above shows the validation F1 and accuracy.
Test scores and result
The figure on the left shows the scores on the test set after training and validation.
The one on the right is the segmentation result on one of the test images.

Thank you! ❤

References

  1. J. Staal, M. Abramoff, M. Niemeijer, M. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging 23, 501–509 (2004).
  2. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in MICCAI (2015).
  3. Dynamic Deep Networks for Retinal Vessel Segmentation, https://arxiv.org/abs/1903.07803.