Automatic Irregular Texture Detection in Brain MRI without Human Supervision
LOTS-IM-GPU is a fast, fully automatic, and unsupervised method for detecting irregular textures of white matter hyperintensities (WMH) on brain FLAIR MRI. Unlike other recently proposed methods for WMH segmentation, LOTS-IM-GPU does not need any manual labelling of WMH. Instead, it only needs brain masks to exclude non-brain tissues (e.g. an ICV mask, CSF mask, NAWM mask, or cortical mask).
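For intuition, the sketch below (not the package's actual code; the file names and the nibabel dependency are only assumptions for illustration) shows how such masks restrict the computation to brain tissue:

import nibabel as nib  # assumed here for NIfTI I/O; not mandated by LOTS-IM-GPU
import numpy as np

# Illustrative file names; LOTS-IM-GPU reads the actual paths from its CSV input file.
flair = nib.load("FLAIR.nii.gz").get_fdata()
icv   = nib.load("ICV.nii.gz").get_fdata()   # intracranial volume mask (1 = inside skull)
csf   = nib.load("CSF.nii.gz").get_fdata()   # cerebrospinal fluid mask (1 = CSF)

# Keep only voxels inside the ICV mask and outside the CSF mask.
brain_tissue = flair * (icv > 0) * (csf == 0)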
Notes:
Releases are versioned by date and version number, i.e. dd/mm/yyyy (va.b.c).
If you find this work interesting and helpful for your work/research, please cite our publications below.
These instructions will get you a copy of the software up and running on your local machine or virtual environment for development and testing purposes. Please clone/download the project from:
https://github.com/febrianrachmadi/lots-iam-gpu
The easiest way to use the LOTS-IM method is to get the Docker image and run it as a Docker container. You can get the Docker image via this link. The Docker image for this project is built for Python3. Below are the libraries installed in the Docker image, in case you need them.
First of all, make sure that NVIDIA's CUDA Toolkit is installed on your local machine. Please install the NVIDIA CUDA Toolkit version compatible with your GPU and OS.
You have two options to install the project on your machine:
Note: Please make sure that Python3 is installed on your local machine before continuing.
We provide .yml files in the environments folder which can be used to create a virtual environment for running LOTS-IM-GPU on Ubuntu 16.04/Windows. This is especially useful if you want to run the software on Windows. Below is the list of the provided .yml files.
To use the provided environments, you have to install either Anaconda Navigator or miniconda for python3:
NOTE: A GUI workspace is provided by Jupyter Notebook, which can be started via either Anaconda Navigator's GUI or miniconda's command line (by calling jupyter notebook after importing and activating the virtual environment).
After installing Anaconda/miniconda, you can import the provided environments by following these instructions:
conda env create -f environments/linux_iam_gpu_jynb_env.yml
After importing the environment file, you should be able to see the imported environment's name (in Anaconda Navigator, see the Home > Applications on or Environments tabs; in miniconda, call conda env list). You should now be able to activate/deactivate (i.e. load/unload) the virtual environment by following these instructions:
source activate IAM_GPU_LINUX_mini
By activating the provided environment, you should be able to run the project (provided you have installed the CUDA Toolkit on your machine). To deactivate (i.e. unload) an active environment in the terminal, call source deactivate.
If you need more help with Anaconda Navigator or miniconda, please see the Anaconda Navigator or miniconda documentation.
If you would like to run LOTS-IM-GPU directly on your machine, you can do so by installing all of the required libraries locally. If you are not sure how, please follow the instructions below.
1. Install python3 (instructions).
2. Install pip3 for python3:
sudo apt-get install python3-setuptools
sudo easy_install3 pip
3. Install miniconda for python3 (instructions) and update it:
conda update -n base conda
4. Install the required libraries via conda:
conda install --file environments/requirements_conda.txt
5. Install the required libraries via pip3:
pip3 install -r environments/requirements_pip3.txt
Anaconda Navigator (Jupyter Notebook/GUI): Please follow the instructions below to run the software via Anaconda Navigator.
1. Open Anaconda Navigator from the Start menu (Windows) or by calling anaconda-navigator in a terminal (Ubuntu 16.04).
2. Choose Home > Applications on > IAM_GPU_LINUX_jynb > jupyter notebook > Launch (Ubuntu 16.04) or Home > Applications on > IAM_GPU_WIN > jupyter notebook > Launch (Windows).
3. Open the LOTS_IM_GPU_FUNction_release.ipynb Jupyter Notebook file.
4. Choose Kernel > Change kernel > IAM_GPU_LINUX_jynb (only for Linux/Ubuntu 16.04).
5. Choose Kernel > Restart & Run All. Note: You can run each cell one-by-one by selecting a cell and then clicking the >| Run button.
6. See the results in the folder specified by the output_filedir variable.
Miniconda (command line): Please follow the instructions below to run the software via miniconda/command line.
1. Open a Terminal (Linux/Ubuntu 16.04) or Anaconda Prompt from the Start menu (Windows).
2. Activate the virtual environment by calling source activate IAM_GPU_LINUX_mini (Ubuntu 16.04) or activate IAM_GPU_WIN (Windows).
3. Run python lots_im_gpu.py in the terminal.
4. See the results in the folder specified by the output_filedir variable.
Local Machine (Linux/command line): Please follow the instructions below to run the software via the command line (Ubuntu 16.04). Please make sure that all required libraries have been installed.
1. Run python lots_im_gpu.py in the terminal.
2. See the results in the folder specified by the output_filedir variable.
The software will automatically create the new folder named in the output_filedir variable. Some parameters are concatenated at the end of the folder's name to make it unique; the default naming follows the convention experiment_name_number_of_samples (e.g. LOTS_IM_results_mini_64 when 64 target patch samples are used). Inside this folder, each subject will have its own folder holding the results produced by the LOTS-IM method. To change the experiment's name and the number of samples, please see Section 2.3. Changing Software's Parameters.
Inside the experiment's folder, each patient/MRI data will have its own folder. By default, it contains 6 sub-folders:
- four sub-folders containing intermediary results in .mat format (Matlab), and
- two sub-folders containing JPEG visualisation files.
The final results are saved as NIfTI files (.nii.gz):
- IAM_GPU.nii.gz: the original irregularity map values, and
- IAM_GPU_GloballyNormalised.nii.gz: the final irregularity map (i.e. after global normalisation and penalty).
By default, there are nine parameters that can be easily changed by the user (listed below).
## General output full path (note to user: you can change this variable)
output_filedir = "/mnt/storage/MRI_dataset/LOTS_IM_results_mini"
## Set setting of LOTS-IM's calculation
## Default: True --> 3D LOTS-IM (volume-based calculation)
## Otherwise (False): 2D LOTS-IM (slice based calculation)
set_3d = True
## Size of source and target patches.
## Must be in the form of python's list data structure.
## Default: patch_size = [1,2,4,8]
patch_size = [1,2,4,8]
## Weights for blending the age maps produced by different sizes of source/target patches
## Must be in the form of python's list data structure.
## Its length must be the same as 'patch_size' variable.
## Default: blending_weights = [0.65,0.2,0.1,0.05]
blending_weights = [0.65,0.2,0.1,0.05]
## Used only for automatic calculation over all numbers of samples
## NOTE: Smaller number of samples makes computation faster (please refer to the manuscript).
## Samples used for IAM calculation
## Default: num_samples_all = [512]
num_samples_all = [64]
## Uncomment line below and comment line above if you want to run all different number of samples
# num_samples_all = [64, 128, 256, 512, 1024, 2048]
## Weight of distance function to blend maximum difference and average difference between source
## and target patches. Default: alpha=0.5. Input value should be between 0 and 1 (i.e. floating).
alpha = 0.5
## Thresholds the target patches to prevent including patches containing hyper-intensities.
## Default: thrsh_patches = None.
## You might want to try: thrsh_patches = 0.05
thrsh_patches = None
## Save JPEG outputs
## Default: save_jpeg = True
save_jpeg = True
## Delete all intermediary files/folders, saving some space on the hard disk drive.
## Default: delete_intermediary = True
delete_intermediary = True
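The constraints noted in the comments above (blending weights summing to 1 and matching patch_size in length, alpha between 0 and 1, samples drawn from the supported counts) can be verified with a short helper like the sketch below. This is a hypothetical convenience function, not part of the released code:

def check_parameters(patch_size, blending_weights, alpha, num_samples_all):
    """Hypothetical sanity check for the user-changeable LOTS-IM-GPU parameters."""
    assert len(blending_weights) == len(patch_size), \
        "blending_weights must have the same length as patch_size"
    assert abs(sum(blending_weights) - 1.0) < 1e-6, \
        "blending_weights must sum to 1"
    assert 0.0 <= alpha <= 1.0, "alpha must be between 0 and 1"
    assert all(n in [64, 128, 256, 512, 1024, 2048] for n in num_samples_all), \
        "num_samples_all entries must be one of the supported sample counts"

check_parameters([1, 2, 4, 8], [0.65, 0.2, 0.1, 0.05], 0.5, [64])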
Users can change these parameters in the lots_im_parameters.py file or in the second active cell of the LOTS_IM_GPU_FUNction_release.ipynb file (Jupyter Notebook users only) before running the software.
Important notes: some further explanation of the changeable parameters follows.
- output_filedir: Its value should follow the convention ../output_path/name_of_experiment.
- set_3d: Controls the setting of LOTS-IM's calculation, i.e. 2D (slice-based) or 3D (volume-based). The default is True.
- patch_size: Controls the sizes of the source/target patches used in the computation. The default value is the python list [1,2,4,8], i.e. 1 x 1, 2 x 2, 4 x 4, and 8 x 8 source/target patches. If the user inputs only one number (e.g. [2]), LOTS-IM-GPU will compute using 2 x 2 source/target patches only. NOTE: Feel free to use a different number of source/target patch sizes, but with values other than these four it is not guaranteed that the software will finish the computation without trouble.
- blending_weights: Controls the weights used for blending the irregularity maps produced by the different sizes of source/target patches. The weights must be in the form of a python list, sum to 1, and have the same length as the patch_size variable.
- num_samples_all: A list of the numbers of randomly sampled target patches to be used in the LOTS-IM-GPU calculation. Only a fixed, limited set of values is available: 64, 128, 256, 512, 1024, and 2048. These numbers are chosen to make GPU memory management easier.
- alpha: Weight of the distance function blending the maximum difference and the average difference between source and target patches. Default: alpha = 0.5. The input value should be between 0 and 1 (i.e. floating point). The distance function currently used is d = alpha * |max(s - t)| + (1 - alpha) * |mean(s - t)|, where d is the distance value, s is a source patch, and t is a target patch.
- thrsh_patches: Thresholds the target patches during the extraction phase to prevent the inclusion of patches containing prior knowledge of WMH (early automatic detection/estimation using a 95% confidence interval (CI)). Note: this only works for 3D LOTS-IM.
- save_jpeg: Input False if you do not want to save the JPEG visualisation files.
- delete_intermediary: Input True if you want to delete all intermediary files, which saves some space on the hard disk drive.

A CSV file is used to list all input data to be processed by the LOTS-IM-GPU method. The default name of the CSV file is IAM_GPU_pipeline_test_v2.csv. Feel free to edit it or to make a new CSV input file.
IMPORTANT NOTE IN LOTS-IM v1.0.0: the main function (lots_im_function_compute) takes a 3D NumPy array as its input.

| Path to MRI's folder | Name of MRI data | Path to FLAIR NIfTI file | Path to ICV NIfTI file | Path to CSF NIfTI file | Path to NAWM NIfTI file (optional) | Path to cortical NIfTI file (optional) |
|---|---|---|---|---|---|---|
| /dir/…/MRIdatabase/ | MRI001 | /dir/…/MRIdatabase/MRI001/FLAIR.nii.gz | /dir/…/MRIdatabase/MRI001/ICV.nii.gz | /dir/…/MRIdatabase/MRI001/CSF.nii.gz | /dir/…/MRIdatabase/MRI001/NAWM.nii.gz | /dir/…/MRIdatabase/MRI001/Cortex.nii.gz |
| /dir/…/MRIdatabase/ | MRI002 | /dir/…/MRIdatabase/MRI002/FLAIR.nii.gz | /dir/…/MRIdatabase/MRI002/ICV.nii.gz | /dir/…/MRIdatabase/MRI002/CSF.nii.gz | /dir/…/MRIdatabase/MRI002/NAWM.nii.gz | /dir/…/MRIdatabase/MRI002/Cortex.nii.gz |
| … | … | … | … | … | … | … |
| /dir/…/MRIdatabase/ | MRInnn | /dir/…/MRIdatabase/MRInnn/FLAIR.nii.gz | /dir/…/MRIdatabase/MRInnn/ICV.nii.gz | /dir/…/MRIdatabase/MRInnn/CSF.nii.gz | /dir/…/MRIdatabase/MRInnn/NAWM.nii.gz | /dir/…/MRIdatabase/MRInnn/Cortex.nii.gz |
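As an illustrative sketch, an input file with the column order shown in the table above could be generated as follows. The output file name is a placeholder, the subject paths are illustrative, and the header-less layout is an assumption; compare your file against the provided IAM_GPU_pipeline_test_v2.csv:

import csv

# Illustrative subject list; columns follow the order shown in the table above.
rows = [
    ["/dir/MRIdatabase/", "MRI001",
     "/dir/MRIdatabase/MRI001/FLAIR.nii.gz",
     "/dir/MRIdatabase/MRI001/ICV.nii.gz",
     "/dir/MRIdatabase/MRI001/CSF.nii.gz",
     "/dir/MRIdatabase/MRI001/NAWM.nii.gz",    # optional
     "/dir/MRIdatabase/MRI001/Cortex.nii.gz"], # optional
]

with open("my_input_list.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)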
The main function of LOTS-IM-GPU is located in LOTS_IM_GPU_FUNction.py and is named lots_im_function_compute. You can view the function's help by calling help(lots_im_function_compute) inside a Python kernel.
The key idea of LOTS-IM is to treat hyperintensities on FLAIR MRI as irregular textures, as in [1]. There are at least four steps to complete LOTS-IM's computation, which are described below.
To understand LOTS-IM's computation, one should understand the two different types of patches: source patches and target patches. Source patches are the non-overlapping grid patches of brain tissue (i.e. inside the intracranial volume (ICV) mask and outside the cerebrospinal fluid (CSF) mask). Target patches, in contrast, are randomly sampled from all possible overlapping patches of the same slice (these also need to be inside the ICV mask and outside the CSF mask). An irregularity value is then calculated between all source patches and some of the target patches using a distance function, creating irregularity maps for all slices of the MR image. A simplified sketch of this per-slice computation is given below.
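The sketch below illustrates this idea for a single slice in plain NumPy. It is a simplified, CPU-only illustration with a hypothetical function name, not the released GPU implementation: non-overlapping source patches on a grid, randomly sampled overlapping target patches, and the distance function described in the parameter notes above.

import numpy as np

def irregularity_map_slice(slice_img, brain_mask, patch=4, num_samples=64, alpha=0.5):
    """Simplified, CPU-only sketch of per-slice LOTS-IM computation (illustrative)."""
    rng = np.random.default_rng(0)
    h, w = slice_img.shape

    # Randomly sample target patches from all possible overlapping patches
    # whose area lies fully inside the brain mask.
    targets = []
    while len(targets) < num_samples:
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        if brain_mask[y:y + patch, x:x + patch].all():
            targets.append(slice_img[y:y + patch, x:x + patch])
    targets = np.stack(targets)  # shape: (num_samples, patch, patch)

    # Compare every non-overlapping grid (source) patch against the sampled targets.
    im = np.zeros((h, w))
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if not brain_mask[y:y + patch, x:x + patch].all():
                continue
            s = slice_img[y:y + patch, x:x + patch]
            diff = s[None] - targets
            # d = alpha * |max(s - t)| + (1 - alpha) * |mean(s - t)|
            d = (alpha * np.abs(diff.reshape(num_samples, -1).max(axis=1))
                 + (1 - alpha) * np.abs(diff.mean(axis=(1, 2))))
            # Simplistic aggregation; the released method aggregates per-target
            # distances into a single irregularity value differently.
            im[y:y + patch, x:x + patch] = d.mean()
    return im

In the full method this is repeated for every patch size in patch_size, and the resulting maps are blended using blending_weights.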
Because of the nature of LOTS-IM's computation, hundreds (if not thousands) of source patches need to be compared with thousands of target patches. Thus, a GPU implementation of LOTS-IM is needed to speed up the computation. Based on our experiments, implementing LOTS-IM on a GPU speeds up the computation by over 13.5 times without any drawback in the quality of the results. A summary of the flow of LOTS-IM-GPU's computation can be seen in Figure 1 below. If you are interested in learning more about LOTS-IM-GPU, feel free to read the extended explanation and experiments in our publications.
Figure 1: Flow of one slice of MRI data processed by LOTS-IM-GPU.
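To illustrate why this computation maps well onto a GPU, the all-sources-versus-all-targets comparison can be written as a single batched array operation. The sketch below uses cupy purely for illustration; it is not necessarily the library used by the released implementation:

import cupy as cp  # illustrative only; assumes a CUDA-capable GPU

# sources: (n_src, p*p) and targets: (n_tgt, p*p) flattened patches on the GPU.
def batched_distances(sources, targets, alpha=0.5):
    diff = sources[:, None, :] - targets[None, :, :]  # (n_src, n_tgt, p*p)
    d_max = cp.abs(diff.max(axis=2))                  # |max(s - t)| per pair
    d_mean = cp.abs(diff.mean(axis=2))                # |mean(s - t)| per pair
    return alpha * d_max + (1 - alpha) * d_mean       # (n_src, n_tgt)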
The biggest difference between LOTS-IM-GPU and other WMH segmentation methods lies in their respective outputs. Most WMH segmentation methods produce, for every voxel, the probability of it being a white matter hyperintensity (WMH) (i.e. a voxel has a high probability value if its chance of being WMH is high). On the other hand, LOTS-IM-GPU produces irregularity values for all voxels, which express each voxel's irregularity (i.e. its level of damage) compared to the other voxels in the brain. Thus, LOTS-IM-GPU produces richer information about WMH than other WMH segmentation methods, especially since many WMH segmentation methods cut off the probability values to produce a binary segmentation. A visualisation of LOTS-IM-GPU and other WMH segmentation methods can be seen in Figure 2 below.
Figure 2: Visualisation of the probabilistic values of WMH produced by LOTS-IM compared to other methods, namely DeepMedic, a U-Net based deep neural network, LST-LGA, and minimum variance quantization with 100 levels (MVQ-100). DeepMedic and U-Net are supervised deep neural network methods, whereas LOTS-IM(-GPU), LST-LGA, and MVQ-100 are unsupervised methods.
As shown in Figure 2, the irregularity map (IM) has an important characteristic: it retains more texture information than a probability map or binary mask of WMH. This is very helpful for simulating the regression and progression of WMH (see the figures below). Please see our paper for a full explanation of the proposed algorithm and its discussion.
Figure 3: Simulation of WMH regression using irregularity map.
Figure 4: Simulation of WMH progression using irregularity map.
The code is implemented so that any user can change the number of target patch samples used for the irregularity map calculation in LOTS-IM-GPU. The relation between the number of target patch samples and the speed/quality of the results is depicted in Figure 5 below. The mean DSC values depicted were produced on 60 MRI scans (manually labelled by an expert) from the ADNI database.
Figure 5: Speed versus quality of different number of target patch samples used in the LOTS-IM-GPU.
In recent years, the development of unsupervised methods for detecting hyperintensities in brain MRI has been slower than that of supervised methods, especially since state-of-the-art deep neural networks came into use in biomedical image processing and analysis. However, we believe that unsupervised methods have their own place in biomedical image analysis because of their independence. Whereas supervised methods depend on the quality and amount of expert-labelled data similar to the sample to be processed, LOTS-IM-GPU and other unsupervised methods do not need training and are independent of imaging protocols and sample characteristics.
By making LOTS-IM-GPU publicly available, we hope that it can be used as a new baseline method for unsupervised WMH segmentation. We will keep updating LOTS-IM-GPU in the near future, as well as making it more modular so that different pre-/post-processing steps can be easily incorporated by users. Please feel free to ask any questions or give feedback to improve LOTS-IM-GPU.
Best wishes,
Febrian Rachmadi
This project is licensed under the BSD 3-Clause License - see the LICENSE file for details.
Funds from the Indonesia Endowment Fund for Education (LPDP) of the Ministry of Finance, Republic of Indonesia, and the Row Fogo Charitable Trust (Grant No. BRO-D.FID3668413) (MCVH) are gratefully acknowledged.