The easiest way to use this network is with Anaconda. Once Anaconda is installed, creating an environment from the unet-cpu.yml file will install and set up most of the dependencies. Some pip versions cannot find the keras-contrib package, so it has been removed from the .yml file; we will install it directly from git after creating the conda environment. First, create a conda environment from one of the provided .yml files. If you are using a CPU:
$ conda env create -f unet-cpu.yml
If you intend to use a GPU, this network has been tested with CUDA 10.0 and 10.1 using Keras 2.2.4 and the tensorflow-gpu 1.13.1 backend. Other versions of CUDA, Keras, and tensorflow-gpu may work, but we do not guarantee it. With either CUDA 10.0 or 10.1 installed on your system, create the GPU conda environment from the provided keras.yml file:
$ conda env create -f keras.yml
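In either case, activate the new environment before installing anything else into it (the environment name is set at the top of the corresponding .yml file):
$ conda activate <environment-name>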
If any packages cannot be found, feel free to remove them from unet-cpu.yml and install them manually with conda or pip. Next, we must install the keras-contrib package from GitHub:
$ pip install git+https://www.github.com/keras-team/keras-contrib.git
The final prerequisites are the convert3d and greedy command-line tools provided by ITK-SNAP, which can be downloaded at: http://www.itksnap.org/pmwiki/pmwiki.php?n=Downloads.SNAP3
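As a quick sanity check, the short Python sketch below verifies that both tools are on your PATH (it assumes the binaries are installed under their usual names, c3d and greedy):

import shutil

# Report where the ITK-SNAP command-line tools resolve on PATH, if at all.
# The binary names c3d (convert3d) and greedy are assumed here.
for tool in ("c3d", "greedy"):
    path = shutil.which(tool)
    print(tool, "->", path if path else "NOT FOUND")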
Sample training scripts are provided in train_tissue_w_template.py and train_skullstrip.py. The config dictionary in each script holds all the training parameters, and the experiment configuration is saved alongside the model in a .json file. The only real change you'll need to make to train on your own data is to update the read_split function so it can find image-segmentation pairs on your hard drive. The images are read, cropped, interpolated, standardized, and serialized into an hdf5 file for streaming data into GPU RAM during training.
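For illustration, a hypothetical read_split might look something like the sketch below; the directory layout, filenames, and exact return type expected by the training script are assumptions you will need to adapt.

import glob
import os

def read_split(base_dir):
    # Hypothetical example: pair each T1 with a segmentation stored next to
    # it, assuming a layout like ${base_dir}/${ACCESSION}/T1/T1.nii.gz and
    # ${base_dir}/${ACCESSION}/T1/tissue_seg.nii.gz. Adapt to your data.
    pairs = []
    for t1 in sorted(glob.glob(os.path.join(base_dir, "*", "T1", "T1.nii.gz"))):
        seg = os.path.join(os.path.dirname(t1), "tissue_seg.nii.gz")
        if os.path.exists(seg):
            pairs.append((t1, seg))
    return pairs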
To follow the methods of our manuscript, the inputs should be a skullstripped T1 image and a spatial prior, which is generated by registering the OASIS atlas to patient T1 space. For ease of use, the atlas, its tissue segmentation, and a registration script based on command-line tools from ITK-SNAP have been provided. To create the spatial prior for an image, run:
$ ./register_to_template.sh /path/to/T1 /path/to/output
This will create a spatial prior for the T1 image at /path/to/T1 and save it to /path/to/output.
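For example, with the ${ACCESSION}/${MODALITY}/${MODALITY}.nii.gz layout used in the skullstripping examples below (paths are illustrative):
$ ./register_to_template.sh ../patients/9146424/T1/T1.nii.gz ../patients/9146424/T1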
T1 images can be skullstripped with an in-house or publicly available tool, or by this network, provided you have a trained model (see Skull Stripping below).
Inference can be performed on the test set specified in split.csv using:
$ python run_test_cases.py
You will need to specify in the program which experiment configuration to load. It is also good practice to modify the config's "data_file", "training_file", and "validation_file" for different experiments. If you edited the read_split function in train_tissue_w_template.py, you will need to modify it here too. This program must be able to find the skullstripped T1 images and patient-specific spatial priors to run.
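As a minimal sketch of the idea (the config filename and the way it is loaded are illustrative; only the three keys above come from this README):

import json

# Load the experiment configuration that training saved alongside the model
# (the filename here is illustrative).
with open("tissue_experiment_config.json") as f:
    config = json.load(f)

# Point each experiment at its own files so runs do not overwrite each other
# (filenames are illustrative).
config["data_file"] = "experiment_2_data.h5"
config["training_file"] = "experiment_2_training_ids.pkl"
config["validation_file"] = "experiment_2_validation_ids.pkl"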
If you have trained both a skullstripping model and a tissue segmentation model, you may generate a tissue segmentation from a raw T1 image with:
$ ./preprocess_and_predict_tissue.sh /path/to/T1 \
    /path/to/tissue_model \
    /path/to/skullstripping_model \
    $identifier \
    /path/to/output
This will generate the spatial prior for the T1 image at /path/to/T1, skullstrip it with the model at /path/to/skullstripping_model, segment the tissues with the model at /path/to/tissue_model, and save the tissue segmentation to /path/to/output/${identifier}_prediction.nii.gz.
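For example (all paths and model filenames are illustrative):
$ ./preprocess_and_predict_tissue.sh ../patients/9146424/T1/T1.nii.gz \
    models/tissue_model.h5 \
    models/skullstrip_model.h5 \
    9146424 \
    ../patients/9146424
which would save the result to ../patients/9146424/9146424_prediction.nii.gz.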
Before skullstripping is possible, a model must be trained for the given modality. It is also possible to include all modalities in the training set to make a modality-independent segmenter.
The Python program for prediction is predict_single_case.py. Help for this program is available by typing:
$ python predict_single_case.py -h
However, there are two skullstripping wrappers provided:
- skullstrip_image.sh is used for skullstripping a single image. This script assumes no directory structure and is thus much easier to use. We just need to tell it which file to skullstrip and give it a unique identifier so that the output filename is unique; it is easiest to feed it the accession number. This program has three possible inputs:
  i) input filename (required)
  ii) unique identifying [Accession] number (required)
  iii) output filename (optional)
If the output file is not specified, the output will be saved in the present working directory with the naming convention ${ACCESSION}_${MODALITY}_skullstripped.nii.gz. An example call might be:
$ ./skullstrip_image.sh ../patients/9146424/FLAIR/FLAIR.nii.gz 9146424 ../patients/9146424/FLAIR/FLAIR_skullstripped.nii.gz
- skullstrip.sh takes as input the base directory where studies can be found, assuming the file naming convention ${BASEDIR}/${ACCESSION}/${MODALITY}/${MODALITY}.nii.gz. It will skull strip the modality specified in the script (currently set to FLAIR) for all patients in that directory. An example call might be:
$ ./skullstrip.sh ../patients
where the folder ../patients holds subdirectories for studies organized as ${ACCESSION}/${MODALITY}/${MODALITY}.nii.gz.
To skull strip a different modality, simply change the MOD variable in the second line of either bash script from FLAIR to the desired modality and use it as before.
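For example, assuming the variable is set by a plain shell assignment, changing the second line from
MOD=FLAIR
to
MOD=T1
would skull strip T1 images instead.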