Recent publications

Segmenting anatomical structures such as the photoreceptor layer in retinal optical coherence tomography (OCT) scans is challenging in pathological scenarios. Supervised deep learning models trained with standard loss functions usually characterize only the most common disease appearance in a training set, resulting in suboptimal performance and poor generalization when dealing with unseen lesions. In this paper we propose to overcome this limitation by means of an augmented target loss function framework. We introduce a novel amplified-target loss that explicitly penalizes errors within the central area of the input images, based on the observation that most of the challenging disease appearance is usually located in this area. We experimentally validated our approach using a data set of OCT scans of patients with macular diseases. We observed increased performance compared to models trained only with standard losses. Our proposed loss function helps the segmentation model to better distinguish photoreceptors in highly pathological scenarios.
In OMIA 2019
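The core idea of the amplified-target loss, as described above, is a spatial weight map that up-weights errors in the central region of each B-scan. The sketch below illustrates this recipe with a weighted binary cross-entropy; the function names, the `center_frac` and `amplification` hyperparameters, and the rectangular weight shape are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def amplified_target_weights(height, width, center_frac=0.5, amplification=4.0):
    """Per-pixel weight map that amplifies errors in the central columns
    of a B-scan, where challenging pathology tends to appear.
    center_frac and amplification are illustrative hyperparameters."""
    weights = np.ones((height, width), dtype=np.float32)
    half = int(width * center_frac / 2)
    mid = width // 2
    weights[:, mid - half:mid + half] = amplification
    return weights

def amplified_bce(pred, target, weights, eps=1e-7):
    """Binary cross-entropy with the spatial weight map applied."""
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float((weights * bce).mean())
```

With this weighting, a segmentation error on a central pixel contributes several times more to the loss than the same error near the image border, steering the model's capacity toward the pathological central area.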

Diagnosis and treatment guidance are aided by detecting relevant biomarkers in medical images. Although supervised deep learning can perform accurate segmentation of pathological areas, it is limited by requiring a-priori definitions of these regions, large-scale annotations, and a representative patient cohort in the training set. In contrast, anomaly detection is not limited to specific definitions of pathologies and allows for training on healthy samples without annotation. Anomalous regions can then serve as candidates for biomarker discovery. Knowledge about normal anatomical structure provides implicit information for detecting anomalies. We propose to take advantage of this property using Bayesian deep learning, based on the assumption that epistemic uncertainties will correlate with anatomical deviations from a normal training set. A Bayesian U-Net is trained on a well-defined healthy environment using weak labels of healthy anatomy produced by existing methods. At test time, we capture epistemic uncertainty estimates of our model using Monte Carlo dropout. A novel post-processing technique is then applied to exploit these estimates and transfer their layered appearance to smooth blob-shaped segmentations of the anomalies. We experimentally validated this approach on retinal optical coherence tomography (OCT) images, using weak labels of retinal layers. Our method achieved a Dice index of 0.789 on an independent anomaly test set of age-related macular degeneration (AMD) cases. The resulting segmentations allowed very high accuracy for separating healthy and diseased cases with late wet AMD, dry geographic atrophy (GA), diabetic macular edema (DME) and retinal vein occlusion (RVO). Finally, we qualitatively observed that our approach can also detect other deviations in normal scans, such as cut edge artifacts.
In IEEE Transactions on Medical Imaging.
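The test-time recipe described above, running several stochastic forward passes with dropout active and treating the disagreement between passes as epistemic uncertainty, can be sketched as follows. The aggregation below (predictive entropy and sample variance over T foreground-probability maps) is a common general formulation; it does not reproduce the paper's specific post-processing, and the function name and shapes are assumptions for illustration.

```python
import numpy as np

def mc_dropout_uncertainty(prob_samples):
    """Aggregate T stochastic forward passes (dropout kept active at test
    time) into a mean prediction and per-pixel uncertainty estimates.
    prob_samples: array of shape (T, H, W) with foreground probabilities."""
    mean = prob_samples.mean(axis=0)
    # Predictive entropy of the mean prediction (binary case).
    eps = 1e-7
    p = np.clip(mean, eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    # Sample variance across passes: another common epistemic proxy.
    variance = prob_samples.var(axis=0)
    return mean, entropy, variance
```

Pixels where the stochastic passes disagree (e.g. half of them predict foreground, half background) receive high entropy and variance; these high-uncertainty regions are the candidates that the paper's post-processing turns into blob-shaped anomaly segmentations.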

Purpose: In this paper we propose to apply generative adversarial neural networks trained with a cycle-consistency loss, or CycleGANs, to improve realism in ultrasound (US) simulation from Computed Tomography (CT) scans. Methods: A ray-casting US simulation approach is used to generate intermediate synthetic images from abdominal CT scans. Then, an unpaired set of these synthetic and real US images is used to train CycleGANs with two alternative architectures for the generator, a U-Net and a ResNet. These networks are finally used to translate ray-casting based simulations into more realistic synthetic US images. Results: Our approach was evaluated both qualitatively and quantitatively. A user study performed by two experts in US imaging shows that both networks significantly improve realism with respect to the original ray-casting algorithm (p << 0.001), with the ResNet model performing better than the U-Net. Conclusion: Applying CycleGANs allows us to obtain better synthetic US images of the abdomen. These preliminary results pave the way towards efficient patient-specific US simulation for low-cost training of medical doctors and radiologists.
In International Journal of Computer Assisted Radiology and Surgery, IJCARS.
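The cycle-consistency loss mentioned above is the term that makes unpaired training possible: translating a ray-casting simulation to the real-US domain and back should recover the original image, and vice versa. The sketch below shows the standard L1 cycle term under that assumption; the generators themselves are omitted, and the function name and the usual weight `lam=10.0` are illustrative defaults, not values confirmed by the paper.

```python
import numpy as np

def cycle_consistency_loss(x_sim, x_rec_sim, y_real, y_rec_real, lam=10.0):
    """Standard CycleGAN cycle term: L1 distance between each image and
    its round-trip reconstruction through both generators.
    x_rec_sim = F(G(x_sim)), y_rec_real = G(F(y_real)) in the usual notation."""
    forward = np.abs(x_sim - x_rec_sim).mean()
    backward = np.abs(y_real - y_rec_real).mean()
    return float(lam * (forward + backward))
```

During training this term is added to the two adversarial losses; a perfect round-trip reconstruction drives it to zero, while any information lost in translation is penalized.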

Glaucoma is one of the leading causes of irreversible but preventable blindness in working age populations. Color fundus photography (CFP) is the most cost-effective imaging modality to screen for retinal disorders. However, its application to glaucoma has been limited to the computation of a few related biomarkers such as the vertical cup-to-disc ratio. Deep learning approaches, although widely applied for medical image analysis, have not been extensively used for glaucoma assessment due to the limited size of the available data sets. Furthermore, the lack of a standardized benchmarking strategy makes it difficult to compare existing methods in a uniform way. In order to overcome these issues we set up the Retinal Fundus Glaucoma Challenge, REFUGE, held in conjunction with MICCAI 2018. The challenge consisted of two primary tasks, namely optic disc/cup segmentation and glaucoma classification. As part of REFUGE, we have publicly released a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one. We have also built an evaluation framework to ease and ensure fairness in the comparison of different models, encouraging the development of novel techniques in the field. Twelve teams qualified and participated in the online challenge. This paper summarizes their methods and analyzes their corresponding results. In particular, we observed that two of the top-ranked teams outperformed two human experts in the glaucoma classification task. Furthermore, the segmentation results were in general consistent with the ground truth annotations, with complementary outcomes that can be further exploited by ensembling the results.
In Medical Image Analysis.
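The vertical cup-to-disc ratio mentioned above is directly computable from the optic disc/cup segmentations that the REFUGE challenge evaluates: it is the ratio of the vertical extents of the cup and disc regions. The sketch below assumes binary masks of identical shape; the function name is illustrative, not part of the REFUGE evaluation code.

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio (vCDR) from binary segmentation masks:
    vertical extent (in rows) of the cup divided by that of the disc."""
    def vertical_extent(mask):
        rows = np.where(mask.any(axis=1))[0]
        return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)
    disc_height = vertical_extent(disc_mask)
    if disc_height == 0:
        raise ValueError("empty disc mask")
    return vertical_extent(cup_mask) / disc_height
```

A larger vCDR (cup occupying more of the disc vertically) is the classical fundus-photography indicator of glaucoma risk, which is why accurate disc and cup segmentation feeds directly into the classification task.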


  • An amplified-target loss approach for photoreceptor layer segmentation in pathological OCT scans

    Details PDF

  • Foveal Avascular Zone Segmentation in Clinical Routine Fluorescein Angiographies Using Multitask Learning


  • On Orthogonal Projections for Dimension Reduction and Applications in Augmented Target Loss Functions for Learning Problems

    Details PDF

  • Multiclass segmentation as multitask learning for drusen segmentation in retinal optical coherence tomography

    Details PDF

  • Exploiting Epistemic Uncertainty of Anatomy Segmentation for Anomaly Detection in Retinal OCT

    Details PDF

  • Improving realism in patient-specific abdominal Ultrasound simulation using CycleGANs

    Details PDF

  • U2-Net: A Bayesian U-Net model with epistemic uncertainty feedback for photoreceptor layer segmentation in pathological OCT scans

    Details PDF Slides

  • Using CycleGANs for effectively reducing image variability across OCT devices and improving retinal fluid segmentation

    Details PDF

  • Towards a glaucoma risk index based on simulated hemodynamics from fundus images

    Details PDF Code Dataset Poster

  • Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Details PDF Dataset

  • An Ensemble Deep Learning Based Approach for Red Lesion Detection in Fundus Images

    Details PDF Code Project

  • A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images

    Details PDF Code Dataset Project

  • Proliferative Diabetic Retinopathy Characterization based on Fractal Features: Evaluation on a Publicly Available Data Set

    Details Code Dataset Project

  • Convolutional neural network transfer for automated glaucoma identification

    Details PDF Code Dataset

  • Assessment of image features for vessel wall segmentation in intravascular ultrasound images

    Details PDF Code

  • Learning fully-connected CRFs for blood vessel segmentation in retinal images

    Details PDF Code Dataset Project Poster

  • REFUGE Challenge: A Unified Framework for Evaluating Automated Methods for Glaucoma Assessment from Fundus Photographs

    Details PDF

Recent & Upcoming Talks

Recent Posts


We are interviewing candidates to apply for the EVC 2019 scholarships for the promotion of scientific vocations from the CIN (Consejo Universitario Nacional).


I’m moving back to Argentina to rejoin PLADEMA as a CONICET-funded Assistant Researcher.


Our paper with Santiago Vitale, Emmanuel Iarussi and Ignacio Larrabide on US simulation using CycleGANs was accepted for publication in the International Journal of Computer Assisted Radiology and Surgery.


Our paper with Rhona Asgari on simultaneous outer retinal layers and drusen segmentation was accepted at MICCAI 2019!


Our two papers on automated retinal OCT image analysis using deep learning have been accepted for presentation at ISBI 2019.



Ultrasound simulation

We are developing a low-cost ultrasound simulator for training clinicians in US without requiring a sonographer.

Automated retinal OCT analysis

We develop automated tools for retinal OCT image analysis.

Automated fundus image analysis

We develop machine learning based tools for automated fundus image analysis.


I have been a Teaching Assistant in the following courses at UNICEN (Argentina):


Feel free to contact me if you have any questions about my research!