We reformulated a multiclass segmentation task as a single multitask model with binary segmentation targets. Our results indicate that this approach might be useful for dealing with "sandwiched" structures.
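The idea above can be sketched as a target construction step: a multiclass label map is split into one binary mask per foreground class, so each task head solves an independent foreground/background problem instead of competing inside a single softmax. A minimal sketch with a hypothetical 4-class retinal label map (not the actual data from this work):

```python
import numpy as np

# Hypothetical label map (0 = background, 1-3 = retinal layers); class 2 is
# "sandwiched" between classes 1 and 3, sharing a boundary with both.
labels = np.array([
    [0, 1, 1, 0],
    [2, 2, 2, 2],
    [0, 3, 3, 0],
])

# One binary segmentation target per foreground class: each multitask head
# learns its own foreground/background decision for one structure.
binary_targets = {c: (labels == c).astype(np.uint8) for c in (1, 2, 3)}
```

Each head can then be trained with an ordinary binary loss, and thin or adjacent structures no longer suppress each other's gradients.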
We introduced an augmented target loss function framework for photoreceptor layer segmentation that penalizes errors in the central area of each B-scan. It significantly improves performance with respect to standard loss functions.
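A center-weighted loss of this kind can be illustrated by scaling a pixel-wise binary cross-entropy by a per-column weight map. The fraction of columns considered "central" and the weight value below are illustrative assumptions, not the hyperparameters from the original work:

```python
import numpy as np

def center_weighted_bce(pred, target, center_frac=0.5, center_weight=2.0, eps=1e-7):
    """Binary cross-entropy with extra weight on the central columns of a B-scan.

    `center_frac` and `center_weight` are hypothetical values for illustration.
    """
    h, w = target.shape
    weights = np.ones(w)
    lo = int(w * (1 - center_frac) / 2)
    weights[lo:w - lo] = center_weight          # upweight the central band
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((bce * weights).mean())        # weights broadcast over rows

pred = np.full((4, 8), 0.9)                     # toy predictions
target = np.ones((4, 8))                        # toy ground truth
loss = center_weighted_bce(pred, target)
```

With this weighting, an error of equal magnitude costs more in the central band than near the lateral edges of the scan.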
We introduced a multitask Y-shaped neural network that simultaneously segments the FAZ in FA images and predicts a distance map. This auxiliary branch improves results on routine clinical images.
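The auxiliary regression target can be sketched as a distance map derived from the segmentation mask: each pixel stores its Euclidean distance to the nearest foreground pixel. The brute-force function below is a toy stand-in (in practice something like `scipy.ndimage.distance_transform_edt` would be used) and the one-pixel FAZ mask is purely illustrative:

```python
import numpy as np

def distance_map(mask):
    """Euclidean distance from each pixel to the nearest foreground pixel.

    Brute-force version for illustration only; assumes the mask is non-empty.
    """
    fg = np.argwhere(mask)                      # foreground coordinates
    h, w = mask.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sqrt(((fg - (i, j)) ** 2).sum(axis=1)).min()
    return out

faz = np.zeros((5, 5), dtype=np.uint8)
faz[2, 2] = 1                                   # hypothetical tiny FAZ region
dmap = distance_map(faz)
```

Regressing such a map forces the shared encoder to learn spatial context around the FAZ boundary, which is what makes the extra branch helpful on noisier clinical images.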
We developed a Bayesian U-Net model for photoreceptor layer segmentation in OCT that predicts epistemic uncertainty maps highlighting potential areas of error in the segmentation.
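A common way to obtain such epistemic uncertainty maps is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and take the per-pixel variance as uncertainty. The sketch below uses a toy linear "network" with hypothetical weights rather than a U-Net, just to show the sampling pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, n_samples=50, p_drop=0.5):
    """Monte Carlo dropout: average of stochastic passes = prediction,
    variance across passes = epistemic uncertainty map."""
    preds = []
    for _ in range(n_samples):
        keep = rng.random(w.shape) > p_drop        # random dropout mask
        logits = x @ (w * keep / (1 - p_drop))     # scaled stochastic pass
        preds.append(1 / (1 + np.exp(-logits)))    # sigmoid probabilities
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

x = rng.normal(size=(10, 4))    # 10 "pixels", 4 features each (toy data)
w = rng.normal(size=(4, 1))     # hypothetical weights
mean_pred, uncertainty = mc_dropout_predict(x, w)
```

High-variance pixels flag regions where the model's prediction is unstable, which is how the uncertainty maps highlight likely segmentation errors.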
We used CycleGANs to translate OCT images from one vendor's device to another's. This improves the cross-vendor performance of fluid segmentation models trained on one vendor and evaluated on the other.
We proposed a deep learning methodology to predict retinal sensitivity from OCT volumes.
We designed a method to summarize hemodynamic parameters obtained from 0D simulations so that they can be applied to glaucoma detection. We observed some correlation between glaucoma and these hemodynamic features.
We developed a simple linear regression model that estimates the hyperparameters of a fully-connected CRF model for blood vessel segmentation in fundus images.
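The regression step can be sketched with a least-squares fit from per-image features to the CRF hyperparameters that worked best on each training image, replacing a per-image grid search. The features, hyperparameter dimensions, and linear relationship below are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 30 training images, 3 image-level features each
# (e.g. contrast statistics), and the 2 CRF pairwise-kernel hyperparameters
# found to work best for each image.
features = rng.normal(size=(30, 3))
true_map = np.array([[2.0, 0.5], [-1.0, 1.5], [0.3, -0.7]])
best_hyperparams = features @ true_map + 0.01 * rng.normal(size=(30, 2))

# Least-squares fit: for an unseen image, predict good CRF hyperparameters
# directly from its features.
coeffs, *_ = np.linalg.lstsq(features, best_hyperparams, rcond=None)
predicted = features @ coeffs
```

The appeal of a linear model here is that it is cheap, interpretable, and needs very few training images compared with tuning the CRF per image.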
We used pretrained VGG-S and OverFeat architectures as feature extractors for glaucoma detection in fundus photographs, achieving almost 0.8 AUC without fine-tuning the networks.
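The AUC figure quoted above can be computed from classifier scores via the rank-sum (Mann-Whitney U) formulation: the probability that a randomly chosen positive scores higher than a randomly chosen negative. The scores and labels below are made-up examples, not results from this work:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via pairwise comparisons; ties count half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical glaucoma scores from a classifier on frozen CNN features.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])
auc = roc_auc(scores, labels)
```

This brute-force version is quadratic in the number of samples; for large evaluation sets a rank-based implementation such as `sklearn.metrics.roc_auc_score` is preferable.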
We introduced a discriminatively trained fully-connected conditional random field model for blood vessel segmentation in retinal images.