Sotirios Tsaftaris (University of Edinburgh)
Wed 28 Nov 2018, 16:00 - 17:00
JCMB 5323

If you have a question about this talk, please contact: Kostas Zygalakis (kzygalak)

Medical imaging data are typically accompanied by additional information (e.g. the clinical history of the patient). At the same time, magnetic resonance imaging exams, for example, typically contain more than one image modality: they show the same anatomy under different acquisition strategies, each revealing different pathophysiological information. Detection of disease, segmentation of anatomy and other classical analysis tasks can benefit from a multimodal approach that leverages information shared across the sources yet preserves information unique to each (and critical for diagnosis). It is no surprise that radiologists analyse data in this fashion, reviewing the exam as a whole. Yet, when aiming to automate analysis tasks, we still treat different image modalities in isolation and tend to ignore additional (non-image) information. In this talk, I will view a modality as information extracted from the same imaging data (multimodal in the source) or from different imaging exams. I will discuss how different architectural choices can solve key problems in learning from structural information, as a means to reduce the need for annotation. I will also show how deep neural networks can learn latent embeddings suitable for multimodal processing. I will conclude by highlighting challenges in improving our understanding and optimisation of neural networks that we can tackle by working across disciplines.
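
As a purely illustrative sketch (not taken from the talk; all names, architectures and sizes are hypothetical), the following Python/PyTorch snippet shows one simple way to realise the kind of multimodal latent embedding mentioned above: two encoders map two MRI modalities of the same anatomy into a shared latent space, an alignment term pulls the two latents together, and per-modality decoders preserve modality-specific detail.

    # Hypothetical sketch: shared latent space across two image modalities.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self, in_ch=1, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self, latent_dim=64, out_ch=1, size=32):
            super().__init__()
            self.out_ch, self.size = out_ch, size
            self.net = nn.Linear(latent_dim, out_ch * size * size)

        def forward(self, z):
            return self.net(z).view(-1, self.out_ch, self.size, self.size)

    # One encoder/decoder pair per modality, sharing a latent space.
    enc_t1, enc_t2 = Encoder(), Encoder()
    dec_t1, dec_t2 = Decoder(), Decoder()

    t1 = torch.randn(4, 1, 32, 32)   # toy batch of T1-weighted slices
    t2 = torch.randn(4, 1, 32, 32)   # corresponding T2-weighted slices

    z1, z2 = enc_t1(t1), enc_t2(t2)
    loss = (
        nn.functional.mse_loss(dec_t1(z1), t1)   # keep modality-specific detail
        + nn.functional.mse_loss(dec_t2(z2), t2)
        + nn.functional.mse_loss(z1, z2)         # align the shared anatomy in latent space
    )
    loss.backward()

The alignment term is only one possible design choice; in practice the shared and modality-specific factors would be separated more carefully, as the talk abstract suggests.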