Multi-modal Neuroimaging Data Fusion via Latent Space Learning for Alzheimer's Disease Diagnosis


Zhou T., Thung K., Liu M., Shi F., Zhang C., Shen D.

1st International Workshop on PRedictive Intelligence in MEdicine (PRIME), Granada, Spain, 16 September 2018, vol. 11121, pp. 76-84

  • Publication Type: Conference Paper / Full-Text Paper
  • Volume: 11121
  • DOI: 10.1007/978-3-030-00320-3_10
  • City: Granada
  • Country: Spain
  • Pages: pp. 76-84
  • İstanbul Teknik Üniversitesi Affiliated: No

Abstract

Recent studies have shown that fusing multi-modal neuroimaging data can improve the performance of Alzheimer's Disease (AD) diagnosis. However, most existing methods simply concatenate features from each modality without appropriate consideration of the correlations among modalities. Moreover, existing methods often perform feature selection (or fusion) and classifier training as two independent steps, ignoring the fact that these pipelined steps are highly related to each other. Furthermore, methods that make predictions with a single classifier may not be able to address the heterogeneity of AD progression. To address these issues, we propose a novel AD diagnosis framework based on latent space learning with ensemble classifiers, which integrates latent representation learning and the learning of multiple diversified classifiers into a unified framework. To this end, we first project the neuroimaging data from different modalities into a common latent space and impose a joint sparsity constraint on the concatenated projection matrices. We then map the learned latent representations into the label space to learn multiple diversified classifiers and aggregate their predictions to obtain the final classification result. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that our method outperforms other state-of-the-art methods.
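
The two-stage pipeline described in the abstract (a shared latent space with a joint-sparsity penalty on the stacked projection matrices, followed by an ensemble of diversified classifiers whose predictions are aggregated) can be illustrated with the minimal sketch below. This is not the authors' implementation: the alternating updates, the iterative-reweighting treatment of the L2,1 penalty, the bootstrap-based classifier diversification, and all names and hyper-parameters (learn_latent_space, train_ensemble, ensemble_predict, k, lam, n_iters) are assumptions made for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def learn_latent_space(X_list, k=10, lam=0.1, n_iters=50, eps=1e-6, seed=0):
    """Illustrative sketch: alternately update a shared latent code Z (k x n)
    and per-modality projections P_m (d_m x k) so that X_m ~= P_m @ Z, with an
    L2,1 (row-sparsity) penalty on the vertically stacked projections handled
    by iterative reweighting."""
    n = X_list[0].shape[1]
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((k, n))
    P_list = [rng.standard_normal((X.shape[0], k)) for X in X_list]
    for _ in range(n_iters):
        # Update each P_m with Z fixed; the reweighted ridge term
        # lam / (2 * ||p_i||) approximates the L2,1 penalty row by row.
        row_norms = np.linalg.norm(np.vstack(P_list), axis=1) + eps
        ZZt = Z @ Z.T
        offset = 0
        for m, X in enumerate(X_list):
            d_m = X.shape[0]
            w = lam / (2.0 * row_norms[offset:offset + d_m])
            P_new = np.empty((d_m, k))
            for i in range(d_m):
                P_new[i] = np.linalg.solve(ZZt + w[i] * np.eye(k), Z @ X[i])
            P_list[m] = P_new
            offset += d_m
        # Update Z with all P_m fixed (joint least squares over modalities).
        A = sum(P.T @ P for P in P_list) + eps * np.eye(k)
        B = sum(P.T @ X for P, X in zip(P_list, X_list))
        Z = np.linalg.solve(A, B)
    return Z, P_list


def train_ensemble(Z, y, n_classifiers=5, seed=0):
    """Fit several logistic-regression classifiers on bootstrap resamples of
    the latent codes; resampling is one simple way to diversify them."""
    rng = np.random.default_rng(seed)
    n = Z.shape[1]
    models = []
    for _ in range(n_classifiers):
        idx = rng.choice(n, size=n, replace=True)
        models.append(LogisticRegression(max_iter=1000).fit(Z[:, idx].T, y[idx]))
    return models


def ensemble_predict(models, Z):
    """Average the classifiers' class probabilities and take the argmax."""
    probs = np.mean([m.predict_proba(Z.T) for m in models], axis=0)
    return probs.argmax(axis=1)
```

As a usage sketch, with two hypothetical feature matrices X_mri and X_pet of shape (d_m, n) and a label array y of length n, one would call learn_latent_space([X_mri, X_pet]) to obtain the shared codes Z, then train_ensemble(Z, y) and ensemble_predict(models, Z) to produce the aggregated prediction.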