Cross-View Self-Similarity Using Shared Dictionary Learning for Cervical Cancer Staging


Bnouni N., Rekik I., Rhim M. S., Ben Amara N. E.

IEEE ACCESS, vol. 7, pp. 30079-30088, 2019 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 7
  • Publication Date: 2019
  • DOI: 10.1109/access.2019.2902654
  • Journal Name: IEEE ACCESS
  • Indexed in: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Pages: pp. 30079-30088
  • İstanbul Teknik Üniversitesi Affiliated: Yes

Abstract

Dictionary learning (DL) has gained wide popularity for solving computer vision and medical image analysis problems. However, to the best of our knowledge, it has not been used for cervical tumor staging. More importantly, there has been very little work on how to aggregate different interactions across data views using DL. As a contribution, we propose a novel cross-view self-similarity low-rank shared dictionary learning (CVSS-LRSDL) framework, which introduces three major contributions to medical image-based cervical cancer staging: (1) leveraging the complementarity of axial and sagittal T2-weighted magnetic resonance (T2w-MR) views for cervical cancer diagnosis, (2) introducing self-similarity (SS) patches for DL training, which capture the unidirectional interaction from a source view to a target view, and (3) extracting features that are shared across tumor grades, as well as grade-specific features, using the CVSS-LRSDL learning approach. For the first and second contributions, given an input patch in the source view (axial T2w-MR images), we generate its SS patches within a fixed neighborhood in the target view (sagittal T2w-MR images). Specifically, we produce a unidirectional patch-wise SS mapping from the source to the target view, based on the mutual and complementary information between both views. As for the third contribution, we represent each subject by the weighted distance matrix between views, which is used to train our DL-based classifier to output the label of a new testing subject. Overall, our framework outperformed several DL-based multi-label classification methods trained using (i) patch intensities, (ii) SS single-view patches, and (iii) weighted SS single-view patches. We evaluated our CVSS-LRSDL framework on 15 T2w-MRI sequences with axial and sagittal views. CVSS-LRSDL significantly (p < 0.05) outperformed several comparison methods, achieving an average accuracy of 81.73% for cervical cancer staging.
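As a rough illustration of the patch-wise SS step described in the abstract, the sketch below searches a fixed neighborhood of a target (sagittal) view for the candidate patches closest to a given source (axial) patch. This is a minimal sketch, not the authors' implementation: the function name, the Euclidean distance criterion, and all defaults (patch size, neighborhood size, number of matches) are assumptions made for illustration only.

```python
import numpy as np

def cross_view_ss_patches(source_patch, target_view, center,
                          patch_size=8, neighborhood=16, k=5):
    """Return positions and distances of the k target-view patches most
    similar to `source_patch`, searched within a fixed neighborhood
    around `center` (row, col). All parameter choices are illustrative."""
    ci, cj = center
    half = neighborhood // 2
    rows = range(max(0, ci - half),
                 min(target_view.shape[0] - patch_size, ci + half) + 1)
    cols = range(max(0, cj - half),
                 min(target_view.shape[1] - patch_size, cj + half) + 1)
    positions, candidates = [], []
    for i in rows:
        for j in cols:
            positions.append((i, j))
            candidates.append(target_view[i:i + patch_size,
                                          j:j + patch_size].ravel())
    candidates = np.asarray(candidates, dtype=float)
    # Rank candidate patches by Euclidean distance to the source patch.
    dists = np.linalg.norm(candidates - source_patch.ravel(), axis=1)
    order = np.argsort(dists)[:k]
    return [positions[t] for t in order], dists[order]
```

For example, `cross_view_ss_patches(axial[10:18, 10:18], sagittal, center=(12, 12))` would return the five sagittal-view patch positions most similar to that axial patch; the resulting cross-view distances could then feed a weighted distance matrix of the kind the paper uses to represent each subject.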