Paper Title

Multimodal and multicontrast image fusion via deep generative models

Authors

Giovanna Maria Dimitri
Simeon Spasov

Article Type

Research Article

Issue

Volume: 88 | Pages: 146-160

Published On

December, 2022

Abstract

Recently, it has become progressively more evident that classic diagnostic labels are unable to accurately and reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses such as depression and anxiety disorders, or behavioural phenotypes such as aggression and antisocial personality. Patient heterogeneity can be better described and conceptualized by grouping individuals into novel categories based on empirically derived sections of intersecting continua that span both across and beyond traditional categorical borders. In this context, neuroimaging data (i.e. the set of images resulting from functional/metabolic (e.g. functional magnetic resonance imaging, functional near-infrared spectroscopy, or positron emission tomography) and structural (e.g. computed tomography, T1-, T2-, PD- or diffusion-weighted magnetic resonance imaging) acquisitions) carry a wealth of spatiotemporally resolved information about each patient's brain. However, these data are usually heavily collapsed a priori through procedures which are not learned as part of model training, and are consequently not optimized for the downstream prediction task. This is because every individual participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by a deep genotypic and phenotypic characterization, hence posing formidable computational challenges. In this paper we design and validate a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks (which result in a 20-fold decrease in parameter utilization), in order to a) fuse multiple 3D neuroimaging modalities on a voxel-wise level, b) efficiently convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain good generalizability and minimal information loss. As a proof of concept, we test our architecture on the well-characterized Human Connectome Project database (n = 974 healthy subjects), demonstrating that our latent embeddings can be clustered into easily separable subject strata which, in turn, map onto different phenotypical information (including organic, neuropsychological and personality variables) that was not included in the embedding creation process. The ability to extract meaningful and separable phenotypic information from brain images alone can aid in creating multi-dimensional biomarkers able to chart spatio-temporal trajectories, which may correspond to different pathophysiological mechanisms that traditional data analysis approaches cannot identify. In turn, this may aid in predicting disease evolution as well as drug response, hence supporting mechanistic disease understanding and empowering clinical trials.
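
To make the architectural idea more concrete, the minimal PyTorch sketch below illustrates how separable 3D convolutional blocks and per-modality encoder branches could be combined to map several volumetric modalities into a joint latent embedding. It is an illustration under assumed design choices, not the authors' published implementation: the class names (SeparableConv3d, ModalityEncoder), channel sizes, latent dimension and concatenation-based fusion are all hypothetical.

import torch
import torch.nn as nn

class SeparableConv3d(nn.Module):
    """Depthwise-separable 3D convolution: a per-channel (depthwise) spatial
    convolution followed by a 1x1x1 pointwise convolution. This factorization
    is the standard route to the kind of parameter reduction the abstract
    attributes to separable convolutional blocks."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=padding, groups=in_ch)
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class ModalityEncoder(nn.Module):
    """One encoder branch per 3D imaging modality; the low-dimensional
    outputs of all branches are fused into a joint latent embedding."""
    def __init__(self, in_ch=1, latent_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            SeparableConv3d(in_ch, 16), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            SeparableConv3d(16, 32), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.to_latent = nn.Linear(32, latent_dim)

    def forward(self, x):                      # x: (batch, channels, D, H, W)
        return self.to_latent(self.features(x).flatten(1))

# Example: volumes from two modalities of the same subjects are encoded
# separately and their embeddings concatenated, giving a per-subject vector
# that could then be clustered into subject strata.
t1_encoder, fmri_encoder = ModalityEncoder(), ModalityEncoder()
t1 = torch.randn(2, 1, 32, 32, 32)             # toy T1-weighted volumes
fmri = torch.randn(2, 1, 32, 32, 32)           # toy functional-derived maps
joint_embedding = torch.cat([t1_encoder(t1), fmri_encoder(fmri)], dim=1)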
