HackEye Segmentation was a project to segment structures within human MRI scans. We were specifically interested in segmenting three structures: the left eye (LE), the right eye (RE), and the brain. To do this we trained a convolutional neural network on a dataset of MRI scans and their corresponding fiducial points.
We were provided a dataset of 981 subjects. Each subject had a T1- and a T2-weighted image, a segmentation mask of the brain, and an fcsv file containing fiducial points that were generated by a pre-existing algorithm.
# Example files for a single subject:
It is important to know that these points represent the physical-space location of each fiducial in a Right Anterior Superior (RAS) coordinate frame, whereas our T1, T2, and segmentation images were in Left Posterior Superior (LPS) coordinates.
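Since RAS and LPS differ only in the direction of the first two axes, moving a point between the two conventions amounts to negating its x and y coordinates. A minimal sketch:

```python
def ras_to_lps(point):
    """Convert a physical-space point from RAS to LPS coordinates.

    RAS (Right, Anterior, Superior) and LPS (Left, Posterior, Superior)
    differ only in the sign convention of the first two axes, so the
    conversion negates x and y and leaves z unchanged.
    """
    x, y, z = point
    return (-x, -y, z)

# A fiducial at RAS (12.5, -30.0, 40.0) lands at LPS (-12.5, 30.0, 40.0)
print(ras_to_lps((12.5, -30.0, 40.0)))
```

Applying the function twice returns the original point, which makes it easy to sanity-check.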
# The fiducials we used to create masks for the left eye and right eye
We used an image processing library called SimpleITK to manipulate our images and pandas to manipulate the fcsv files. We used the LE and RE fiducial points from each subject's fcsv file to generate labels for the left and right eyes via radial expansion from each fiducial combined with intensity thresholding. We then added the eye and brain masks together to get our final label mask. The T1 and T2 images had already been intensity normalized and bias-field corrected, so we didn't worry too much about cleaning them.
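The labeling step can be sketched roughly as follows. This assumes a Slicer-style fcsv layout (`#` comment header, x/y/z in columns 1–3, label in column 11) and an axis-aligned image; the file content, 3 mm radius, and NumPy-based sphere construction are illustrative stand-ins for the project's actual SimpleITK calls:

```python
import io

import numpy as np
import pandas as pd

# Illustrative Slicer-style .fcsv content; real files came from the dataset.
FCSV_TEXT = """# Markups fiducial file version = 4.10
# CoordinateSystem = 0
# columns = id,x,y,z,ow,ox,oy,oz,vis,sel,lock,label,desc,associatedNodeID
node_0,12.5,-30.0,40.0,0,0,0,1,1,1,0,LE,,
node_1,-12.5,-30.0,40.0,0,0,0,1,1,1,0,RE,,
"""

def read_fiducials(fcsv_text):
    """Return {label: (x, y, z)} from Slicer-style fcsv text.

    Assumes '#' header comments and the standard Slicer column order,
    where x, y, z sit in columns 1-3 and the label in column 11.
    """
    df = pd.read_csv(io.StringIO(fcsv_text), comment="#", header=None)
    return {r[11]: (r[1], r[2], r[3]) for r in df.itertuples(index=False)}

def sphere_mask(shape, spacing, origin, center, radius_mm):
    """Boolean mask of voxels within radius_mm of a physical-space center.

    Assumes an axis-aligned image (identity direction matrix); in the
    project this mapping was handled by SimpleITK's physical/index tools.
    """
    grids = np.indices(shape).astype(float)
    dist2 = sum(((grids[i] * spacing[i] + origin[i]) - center[i]) ** 2
                for i in range(3))
    return dist2 <= radius_mm ** 2

fiducials = read_fiducials(FCSV_TEXT)
x, y, z = fiducials["LE"]
center_lps = (-x, -y, z)  # RAS -> LPS: negate the first two axes
mask = sphere_mask(shape=(20, 20, 20), spacing=(1.0, 1.0, 1.0),
                   origin=(-22.5, 20.0, 30.0), center=center_lps,
                   radius_mm=3.0)
# The project further intersected such spheres with an intensity
# threshold on the T1/T2 images before merging with the brain mask.
```

The resulting boolean array can be cast to an integer label and summed with the provided brain mask to produce the final multi-class label image.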
We used NiftyNet, a deep learning framework designed specifically for medical imaging and built on TensorFlow, to train our model, evaluate its performance, and run inference on new data. We trained the model for 30,000 iterations overnight, which took approximately 8 hours.
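NiftyNet is driven by an INI configuration file passed to its command-line applications. A rough sketch of the kind of file involved — section and key names follow NiftyNet's documented config format, but the paths, network choice, and hyperparameters here are illustrative assumptions, not the project's actual settings:

```ini
[T1]
path_to_search = ./data/t1
filename_contains = t1
interp_order = 3

[label]
path_to_search = ./data/labels
filename_contains = label
interp_order = 0

[NETWORK]
name = highres3dnet

[TRAINING]
max_iter = 30000
lr = 0.001

[SEGMENTATION]
image = T1
label = label
num_classes = 4
```

Training would then be launched with something like `net_segment train -c config.ini`, and inference on new subjects with `net_segment inference -c config.ini`. The four classes correspond to background, LE, RE, and brain.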
Validation & Visualization
We used holdout validation, keeping a set of subjects aside during training, to ensure that our model didn't overfit during the training process.
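Holdout validation here just means reserving a fixed fraction of subjects that the optimizer never sees, and tracking loss on that set alongside training loss. A minimal sketch; the 80/20 ratio, seed, and subject-ID format are illustrative:

```python
import random

def holdout_split(subject_ids, val_fraction=0.2, seed=42):
    """Shuffle subjects reproducibly and reserve a fraction for validation."""
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    n_val = max(1, int(len(ids) * val_fraction))
    return ids[n_val:], ids[:n_val]  # (train, validation)

train_ids, val_ids = holdout_split([f"sub-{i:03d}" for i in range(981)])
print(len(train_ids), len(val_ids))  # 785 196
```

Splitting at the subject level (rather than the slice level) matters: slices from the same subject are highly correlated, and letting them straddle the split would make validation scores look better than they are.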
TODO: include graphics of validation results
Below is a picture of the results we were able to achieve: segmentations of images that the model had never seen before.