Convolutional Attention Network for MRI-based Alzheimer's Disease Classification and its Interpretability Analysis


Türkan Y., Tek F. B.

6th International Conference on Computer Science and Engineering, UBMK 2021, Ankara, Türkiye, 15 - 17 September 2021, pp.151-156

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI Number: 10.1109/ubmk52708.2021.9558882
  • City of Publication: Ankara
  • Country of Publication: Türkiye
  • Page Numbers: pp.151-156
  • Keywords: 3D Convolutional networks, 3D Gradient-Weighted Class Activation Mapping, 3D Ultrametric Contour Map, Alzheimer's disease, Attention, Interpretability, MRI, Occlusion, SHAP
  • Affiliated with Istanbul Technical University: No

Abstract

© 2021 IEEE. Neuroimaging techniques, such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET), help to identify Alzheimer's disease (AD). These techniques generate large-scale, high-dimensional, multimodal neuroimaging data that is time-consuming and difficult to interpret and classify. Therefore, interest in deep learning approaches for the classification of 3D structural MRI brain scans has grown rapidly. In this study, we improved the 3D VGG model proposed by Korolev et al. [2]: we increased the number of filters in the 3D convolutional layers and added an attention mechanism for better classification. We compared the performance of the proposed approaches for classifying Alzheimer's disease versus mild cognitive impairment and normal cohorts on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and observed that both accuracy and area under the curve (AUC) improved with the proposed models. However, deep neural networks are black boxes whose predictions require further explanation for medical use. We therefore compared the 3D-data interpretation capabilities of the proposed models using four interpretability methods: Occlusion, 3D Ultrametric Contour Map, 3D Gradient-Weighted Class Activation Mapping (Grad-CAM), and SHapley Additive exPlanations (SHAP). We observed that the explanation results differed across network models and data classes.
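
To make the described approach concrete, below is a minimal PyTorch sketch of a VGG-style 3D convolutional network with a simple voxel-wise attention gate. This is not the authors' exact architecture: the layer counts, filter widths, and the attention design (a 1x1x1 convolution followed by a sigmoid) are illustrative assumptions chosen only to show how an attention mechanism can be inserted after the 3D convolutional feature extractor.

import torch
import torch.nn as nn

class ConvBlock3D(nn.Module):
    """Conv3d -> BatchNorm3d -> ReLU -> MaxPool3d, VGG style."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
        )

    def forward(self, x):
        return self.block(x)

class SpatialAttention3D(nn.Module):
    """Hypothetical attention gate: a 1x1x1 conv producing a voxel-wise weight in [0, 1]."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # reweight feature maps voxel-wise

class AttentionVGG3D(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            ConvBlock3D(1, 32),   # single-channel structural MRI input
            ConvBlock3D(32, 64),
            ConvBlock3D(64, 128),
        )
        self.attention = SpatialAttention3D(128)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):  # x: (B, 1, D, H, W)
        f = self.attention(self.features(x))
        return self.classifier(self.pool(f).flatten(1))

# Usage example with an arbitrary input size:
# logits = AttentionVGG3D()(torch.randn(2, 1, 96, 96, 96))  # -> (2, 2)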
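Of the four interpretability methods compared, occlusion is the simplest to illustrate. The sketch below, a generic occlusion-sensitivity routine rather than the paper's specific configuration, slides a zero-filled cube over the input volume and records the drop in the predicted class probability; the patch size, stride, and zero baseline are assumptions.

import torch

@torch.no_grad()
def occlusion_map_3d(model, volume, target_class, patch=16, stride=8):
    """volume: (1, 1, D, H, W) tensor; returns a (D, H, W) relevance map."""
    model.eval()
    base = torch.softmax(model(volume), dim=1)[0, target_class]
    _, _, D, H, W = volume.shape
    heat = torch.zeros(D, H, W)
    count = torch.zeros(D, H, W)
    for z in range(0, D - patch + 1, stride):
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                occluded = volume.clone()
                occluded[..., z:z+patch, y:y+patch, x:x+patch] = 0
                p = torch.softmax(model(occluded), dim=1)[0, target_class]
                # A large probability drop means the occluded region mattered.
                heat[z:z+patch, y:y+patch, x:x+patch] += (base - p)
                count[z:z+patch, y:y+patch, x:x+patch] += 1
    return heat / count.clamp(min=1)  # average over overlapping patches

Regions with high values in the returned map are those whose removal most reduces the model's confidence, which is how occlusion-based explanations localize class-relevant brain areas.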