With no methodological innovation other than a carefully designed training procedure, our ResNet model achieved an AUC = 0.955 (0.953 - 0.956) on a combined test set of 61,007 images from different public datasets, which matches or exceeds the results reported in the literature by more complex deep learning models.
We experimentally validated whether coarse-to-fine models are more appropriate than one-stage models for segmenting the optic disc and the optic cup in color fundus images. We observed that one-stage models trained with the right amount of data can perform much better than coarse-to-fine approaches.
We introduced a fully automated approach that segments the photoreceptor layer, evaluates its thickness, and tracks potential disruptions using an ensemble of deep neural networks.
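The core of such a pipeline can be sketched as follows: per-model probability maps are fused by averaging, and layer thickness is then read off the resulting binary mask column by column. The function names, the averaging strategy, and the 0.5 threshold are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def ensemble_segment(prob_maps):
    """Fuse an ensemble's per-model probability maps into one binary mask.

    prob_maps: list of (H, W) arrays in [0, 1], one per network.
    Averaging followed by a 0.5 threshold is one common fusion choice
    (an assumption here, not necessarily the authors' exact scheme).
    """
    mean_prob = np.mean(prob_maps, axis=0)
    return mean_prob >= 0.5

def layer_thickness(mask):
    """Per-column thickness (in pixels) of a binary layer mask.

    Columns with no layer pixels (candidate disruptions) get thickness 0,
    which is what makes disruption tracking possible downstream.
    """
    return mask.sum(axis=0)
```

In this sketch, a disruption simply shows up as a run of zero-thickness columns, so tracking reduces to locating those runs across consecutive B-scans.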
Anna introduced a new mathematical approach for dimensionality reduction, which we incorporated into loss functions to augment target information and improve performance.
We posed a multiclass segmentation task as a single multitask model with binary segmentation targets. Our results indicate that this approach might be useful for dealing with "sandwiched" structures.
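The reformulation above can be illustrated with two small helpers: one splits a multiclass label map into independent binary targets (one per head of the multitask model), and one fuses the binary predictions back into a single label map. The threshold-then-argmax fusion and the treatment of class 0 as background are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def to_binary_targets(label_map, num_classes):
    """Split a multiclass label map (H, W) into per-class binary masks.

    Each non-background class becomes an independent binary segmentation
    target, so a single multitask model can predict all of them with
    separate heads. Class 0 is assumed to be background.
    """
    return np.stack([(label_map == c) for c in range(1, num_classes)])

def merge_binary_predictions(binary_probs):
    """Fuse per-class binary probabilities (C-1, H, W) into one label map.

    Pixels where no class exceeds 0.5 stay background (label 0); otherwise
    the most confident class wins. This resolves overlaps that can occur
    when "sandwiched" structures are predicted independently.
    """
    best = binary_probs.argmax(axis=0) + 1
    confident = binary_probs.max(axis=0) >= 0.5
    return np.where(confident, best, 0)
```

Because each head only has to separate its own structure from everything else, thin structures enclosed between two others are no longer forced to compete with their neighbors inside a single softmax.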
We introduced an augmented target loss function framework for photoreceptor layer segmentation that penalizes errors in the central area of each B-scan. It significantly improves performance with respect to standard loss functions.
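A minimal sketch of such a centrally-weighted penalty follows: a column-wise weight map up-weights the central region of the B-scan, and a weighted binary cross-entropy applies it. The weight values, the central fraction, and the choice of cross-entropy are illustrative hyperparameters and assumptions, not the paper's exact formulation.

```python
import numpy as np

def central_weight_map(width, center_frac=0.5, center_weight=2.0):
    """Column-wise weight vector that up-weights the center of a B-scan.

    `center_frac` and `center_weight` are assumed hyperparameters: the
    central `center_frac` of columns (where the fovea typically lies)
    contributes `center_weight` times more to the loss.
    """
    w = np.ones(width)
    lo = int(width * (1 - center_frac) / 2)
    hi = width - lo
    w[lo:hi] = center_weight
    return w

def weighted_bce(pred, target, weights, eps=1e-7):
    """Binary cross-entropy with per-column weights, broadcast over rows."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((bce * weights).mean())
```

With this weighting, an identical mistake costs more when it falls in the central columns than at the lateral edges, which steers the network toward the clinically relevant region.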