With no methodological innovation other than a carefully designed training procedure, our ResNet model achieved an AUC of 0.955 (0.953 - 0.956) on a combined test set of 61,007 images from different public datasets, which is in line with, or even better than, the results reported in the literature for more complex deep learning models.
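The AUC and its interval can be estimated as sketched below. This is a minimal illustration, not the paper's evaluation code: `auc` is a rank-based (Mann-Whitney) estimate that ignores ties, and `bootstrap_ci` is a generic percentile bootstrap; the function names and parameters are our own.

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC (Mann-Whitney U statistic); ties are ignored
    for simplicity. labels: 0/1 array, scores: predicted probabilities."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # resample with replacement
        if labels[idx].min() == labels[idx].max():
            continue                          # resample must contain both classes
        stats.append(auc(labels[idx], scores[idx]))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

In practice a library implementation such as `sklearn.metrics.roc_auc_score` would be used for the point estimate.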
We experimentally validate whether coarse-to-fine models are more appropriate than one-stage models for segmenting the optic disc and the optic cup in color fundus images. We observed that one-stage models trained with a sufficient amount of data can perform much better than coarse-to-fine approaches.
We introduce an augmented target loss function framework for photoreceptor layer segmentation that penalizes errors in the central area of each B-scan. This significantly improves performance with respect to standard loss functions.
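One simple way to realize such a penalty is to scale a per-pixel loss by a weight map that peaks at the center of the B-scan. The sketch below is a hypothetical illustration, not the framework from the paper: the Gaussian column weighting and the `sigma_frac` parameter are our own assumptions.

```python
import numpy as np

def central_weight_map(height, width, sigma_frac=0.25):
    """Gaussian weights along the horizontal axis of a B-scan,
    peaking at the central column (hypothetical weighting scheme)."""
    cols = np.arange(width)
    center = (width - 1) / 2.0
    sigma = sigma_frac * width
    w = np.exp(-0.5 * ((cols - center) / sigma) ** 2)
    return np.tile(w, (height, 1))            # same weight for every row

def weighted_bce(pred, target, weights, eps=1e-7):
    """Per-pixel binary cross-entropy scaled by the weight map,
    so errors near the center of the scan cost more."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * bce).sum() / weights.sum())
```

The same idea carries over directly to a framework loss such as PyTorch's `BCELoss` with its `weight` argument.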
We introduce a multitask Y-shaped neural network that simultaneously segments the FAZ in FA images and predicts a distance map. This extra branch helps to improve results on clinical routine images.
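Training the auxiliary branch requires a distance-map target derived from each binary FAZ mask. The brute-force sketch below shows one common construction (distance of each foreground pixel to the nearest background pixel); it is illustrative only, and in practice `scipy.ndimage.distance_transform_edt` would do this efficiently.

```python
import numpy as np

def distance_map(mask):
    """Euclidean distance from each foreground pixel to the nearest
    background pixel (brute force, for illustration). mask: binary
    2-D array with 1 inside the FAZ; it must contain some background."""
    out = np.zeros(mask.shape)
    bg = np.argwhere(mask == 0)               # background coordinates
    for i, j in np.argwhere(mask == 1):
        d = np.sqrt(((bg - (i, j)) ** 2).sum(axis=1))
        out[i, j] = d.min()
    return out
```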
We designed a method to summarize the hemodynamic parameters obtained from 0D simulations so that they can be used for glaucoma detection. We observed some correlation between these hemodynamic features and glaucoma.
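A summarization step of this kind typically reduces each simulated waveform to a few scalar descriptors. The sketch below is a hypothetical example of such features (mean, peak, trough, pulsatility index), not the feature set used in the paper.

```python
import numpy as np

def summarize_waveform(signal):
    """Reduce a simulated hemodynamic time series (e.g. blood flow
    over one cardiac cycle from a 0D model) to scalar features.
    Hypothetical feature set, for illustration only."""
    peak, trough, mean = np.max(signal), np.min(signal), np.mean(signal)
    return {
        "mean": float(mean),
        "peak": float(peak),
        "trough": float(trough),
        "pulsatility": float((peak - trough) / mean),  # pulsatility index
    }
```

The resulting feature vectors can then be fed to a standard classifier for glaucoma detection.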