With no methodological innovation other than a carefully designed training procedure, our ResNet model achieved an AUC of 0.955 (0.953 - 0.956) on a combined test set of 61,007 images from different public datasets, which is in line with, or even better than, the results reported in the literature by more complex deep learning models.
We experimentally validate whether coarse-to-fine models are preferable to one-stage models for segmenting the optic disc and the optic cup in color fundus images. We observed that one-stage models trained with a sufficient amount of data can perform much better than coarse-to-fine approaches.
We introduce an augmented target loss function framework for photoreceptor layer segmentation that penalizes errors in the central area of each B-scan, significantly improving performance with respect to standard loss functions.
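The idea of penalizing errors in the central area of a B-scan can be sketched as a spatially weighted loss. The snippet below is a minimal illustration, not the paper's actual augmented target loss framework: the weight map (`central_weight_map`), its parameters (`center_weight`, `frac`), and the choice of binary cross-entropy are all assumptions made for the example.

```python
import numpy as np

def central_weight_map(height, width, center_weight=2.0, frac=0.5):
    """Per-pixel weights emphasizing the central columns of a B-scan.

    Hypothetical scheme: pixels inside the central `frac` of the image
    width get weight `center_weight`; all other pixels get weight 1.
    """
    weights = np.ones((height, width))
    half = int(width * frac / 2)
    mid = width // 2
    weights[:, mid - half:mid + half] = center_weight
    return weights

def weighted_bce(y_true, y_prob, weights, eps=1e-7):
    """Pixel-wise binary cross-entropy scaled by a spatial weight map."""
    y_prob = np.clip(y_prob, eps, 1 - eps)
    ce = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    # Normalize by the total weight so the loss stays comparable
    # across different weight maps.
    return float(np.sum(weights * ce) / np.sum(weights))
```

With this weighting, a segmentation error in the central columns contributes more to the loss than the same error near the lateral edges of the B-scan.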