Sparse coding based classifier ensembles in supervised and active learning scenarios for data classification

Tuysuzoglu G., Yaslan Y.

EXPERT SYSTEMS WITH APPLICATIONS, vol.91, pp.364-373, 2018 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 91
  • Publication Date: 2018
  • DOI Number: 10.1016/j.eswa.2017.09.024
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.364-373
  • Istanbul Technical University Affiliated: Yes


Sparse coding and dictionary learning have recently gained great interest in signal, image, and audio processing applications by representing each problem instance with a sparse set of atoms. This also makes it possible to obtain different representations of feature sets in machine learning problems; thus, different feature views for classifier ensembles can be obtained using sparse coding. At the same time, unlabelled data is now abundant, and active learning methods with both single classifiers and classifier ensembles have received great interest. In this study, the Random Subspace Dictionary Learning (RDL) and Bagging Dictionary Learning (BDL) algorithms are examined, learning ensembles of dictionaries over feature and instance subspaces, respectively. In addition, ensembles of dictionaries are evaluated within an active learning framework as promising models, named the Active Random Subspace Dictionary Learning (ARDL) and Active Bagging Dictionary Learning (ABDL) algorithms. The active learning methods are compared with their Support Vector Machine counterparts. Experiments on eleven datasets from the UCI and OpenML repositories have shown that selecting instance and feature subspaces for the dictionary learning model increases the number of correctly classified instances for most of the data sets, while SVM retains superiority over all of the applied models. Furthermore, using an active learner generally increases the chance of improved classification performance as the number of iterations increases. (C) 2017 Elsevier Ltd. All rights reserved.
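To make the random-subspace-ensemble idea concrete, the sketch below is a minimal illustration (not the authors' exact RDL algorithm): each ensemble member draws a random feature subspace, stores per-class dictionaries whose atoms are simply the class's training vectors restricted to that subspace, and classifies a test point by reconstruction error, with a majority vote across members. Least-squares coding stands in here for a true sparse solver such as OMP; all function and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_subspace_dictionaries(X, y, n_views=7, n_subspace_feats=5, atoms_per_class=3):
    """Assumed sketch of a random-subspace dictionary ensemble: each member
    picks a random feature subspace and stores a small per-class dictionary
    (columns are randomly chosen training vectors of that class, restricted
    to the subspace)."""
    ensemble = []
    for _ in range(n_views):
        feats = rng.choice(X.shape[1], size=n_subspace_feats, replace=False)
        dicts = {}
        for c in np.unique(y):
            Xc = X[y == c]
            idx = rng.choice(len(Xc), size=atoms_per_class, replace=False)
            dicts[c] = Xc[idx][:, feats].T  # atoms as columns
        ensemble.append((feats, dicts))
    return ensemble

def classify(ensemble, x):
    """Each member codes x over every class dictionary (least-squares coding
    stands in for a true sparse coder) and votes for the class with the
    smallest reconstruction error; the majority vote decides."""
    votes = []
    for feats, dicts in ensemble:
        xv = x[feats]
        errs = {}
        for c, D in dicts.items():
            a, *_ = np.linalg.lstsq(D, xv, rcond=None)
            errs[c] = np.linalg.norm(xv - D @ a)
        votes.append(min(errs, key=errs.get))
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

# Toy data: class 0 lives in features 0-3, class 1 in features 4-7.
X = np.zeros((40, 8))
X[:20, :4] = rng.uniform(0.5, 1.5, size=(20, 4))   # class 0 block
X[20:, 4:] = rng.uniform(0.5, 1.5, size=(20, 4))   # class 1 block
y = np.array([0] * 20 + [1] * 20)

ens = fit_subspace_dictionaries(X, y)
test0 = np.concatenate([rng.uniform(0.5, 1.5, size=4), np.zeros(4)])
test1 = np.concatenate([np.zeros(4), rng.uniform(0.5, 1.5, size=4)])
print(classify(ens, test0))  # class-0-like point → 0
print(classify(ens, test1))  # class-1-like point → 1
```

Because each class's atoms occupy a disjoint feature block in this toy setup, the wrong class's dictionary can never reduce the reconstruction error below the norm of the test point, so every member votes correctly; on real data the errors are merely smaller for the right class, which is where the ensemble vote helps.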