We develop a deep neural network that automatically extracts contextual features from sketch patches, trained on 3D models rendered with non-photorealistic techniques. Our method finds dense correspondences between real-world sketches.
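As an illustration of the matching step only, the sketch below tiles two images into patches and matches each patch of one sketch to its nearest neighbor in the other by cosine similarity. The raw flattened patches stand in for the learned contextual descriptors; all function names and parameters here are hypothetical, not the paper's implementation.

```python
import numpy as np

def extract_patches(img, size=8, stride=8):
    """Tile a sketch into flattened patches (placeholder descriptors).

    In the actual method the descriptors come from a trained network;
    a raw flattened patch is only an illustrative stand-in.
    """
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(patches, dtype=float), coords

def dense_correspondences(desc_a, desc_b):
    """Match each patch descriptor of sketch A to its nearest
    patch descriptor of sketch B by cosine similarity."""
    a = desc_a / (np.linalg.norm(desc_a, axis=1, keepdims=True) + 1e-9)
    b = desc_b / (np.linalg.norm(desc_b, axis=1, keepdims=True) + 1e-9)
    sim = a @ b.T          # pairwise cosine similarities
    return sim.argmax(axis=1)
```

With identical inputs, each patch should match itself, which is a quick sanity check of the matcher.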
We introduce an augmented target loss function framework for photoreceptor layer segmentation that penalizes errors in the central area of each B-scan, significantly improving performance over standard loss functions.
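A minimal sketch of the idea of penalizing central errors more heavily: a per-pixel weight map boosts the loss in the central columns of a B-scan, and a weighted cross-entropy averages over it. The `boost` and `frac` parameters and both function names are illustrative assumptions, not the paper's exact augmented target loss.

```python
import numpy as np

def central_weight_map(width, height, boost=2.0, frac=0.5):
    """Per-pixel weights that emphasize the central `frac` of
    columns of a B-scan by a factor of `boost` (hypothetical values)."""
    w = np.ones((height, width))
    lo = int(width * (1 - frac) / 2)
    hi = width - lo
    w[:, lo:hi] *= boost
    return w

def weighted_bce(pred, target, weights, eps=1e-7):
    """Pixel-wise binary cross-entropy scaled by the weight map."""
    pred = np.clip(pred, eps, 1 - eps)
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * ce).mean())
```

Under this weighting, the same-magnitude error costs more when it falls in the central columns than at the periphery, which is the intended bias.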
We introduce a multitask Y-shaped neural network that simultaneously segments the FAZ in FA images and predicts a distance map. This auxiliary branch improves results on clinical routine images.
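One way to train such a two-branch network is a combined objective: cross-entropy for the segmentation head plus a regression term for the distance-map head. The sketch below assumes binary cross-entropy, mean squared error, and a weighting factor `alpha`; these choices are illustrative stand-ins, not necessarily the paper's losses.

```python
import numpy as np

def multitask_loss(seg_pred, seg_true, dist_pred, dist_true,
                   alpha=0.5, eps=1e-7):
    """Joint loss for a Y-shaped network: segmentation BCE plus an
    auxiliary distance-map MSE, weighted by `alpha` (assumed value)."""
    p = np.clip(seg_pred, eps, 1 - eps)
    bce = -(seg_true * np.log(p)
            + (1 - seg_true) * np.log(1 - p)).mean()
    mse = ((dist_pred - dist_true) ** 2).mean()
    return float(bce + alpha * mse)
```

The auxiliary term acts as a regularizer: even when the segmentation is nearly correct, a poor distance prediction still contributes gradient signal through `alpha * mse`.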