With no methodological innovation other than a carefully designed training procedure, our ResNet model achieved an AUC of 0.955 (0.953–0.956) on a combined test set of 61,007 images from different public datasets, which is in line with, or even better than, the results reported in the literature for more complex deep learning models.
We experimentally validate whether coarse-to-fine models are more appropriate than one-stage models for segmenting the optic disc and the optic cup in color fundus images. We observed that one-stage models trained with a sufficient amount of data can perform much better than coarse-to-fine approaches.
We designed a method to summarize hemodynamic parameters obtained from 0D simulations so that they can be applied to glaucoma detection. We observed a certain degree of correlation between glaucoma and these hemodynamic features.
We developed a simple linear regression model that estimates the hyperparameters of a fully connected CRF model for blood vessel segmentation in fundus images.
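The idea of regressing CRF hyperparameters from image statistics can be sketched as follows. This is a minimal illustration, not the protocol used in our work: the per-image descriptors, the number of hyperparameters, and the synthetic linear relation are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical setup: each fundus image is summarized by 4 simple statistics
# (e.g. contrast and vessel-density measures), and the targets are 2 CRF
# energy hyperparameters (e.g. pairwise kernel weights). The linear relation
# below is synthetic, used only to make the sketch self-contained.
rng = np.random.default_rng(0)
image_descriptors = rng.random((50, 4))                          # 50 images, 4 stats each
crf_hyperparams = image_descriptors @ rng.random((4, 2)) + 0.1   # 2 hyperparameters per image

# Fit the regressor on (descriptor, hyperparameter) pairs
model = LinearRegression().fit(image_descriptors, crf_hyperparams)

# Predict hyperparameters for an unseen image from its descriptors alone
predicted = model.predict(image_descriptors[:1])
```

At test time, this replaces a per-image grid search over CRF hyperparameters with a single cheap regression evaluation.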
We use pretrained VGG-S and OverFeat architectures as feature extractors for glaucoma detection in fundus images, obtaining an AUC of nearly 0.8 without fine-tuning the networks.
We present an extensive description and evaluation of our method for blood vessel segmentation in fundus images based on a discriminatively trained fully connected conditional random field model.