Integration of regularized l1 tracking and instance segmentation for video object tracking


Gurkan F., Günsel Kalyoncu B.

NEUROCOMPUTING, vol.423, pp.284-300, 2021 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 423
  • Publication Date: 2021
  • DOI: 10.1016/j.neucom.2020.09.072
  • Journal Name: NEUROCOMPUTING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, zbMATH
  • Page Numbers: pp.284-300
  • İstanbul Teknik Üniversitesi Affiliated: Yes

Abstract

We introduce a tracking-by-detection method that integrates a deep object detector with a particle filter tracker under a regularization framework in which the tracked object is represented by a sparse dictionary. A novel observation model that establishes consensus between the detector and the tracker is formulated, enabling the dictionary to be updated under the guidance of the deep detector. This yields an efficient representation of the object appearance throughout the video sequence and hence improves robustness to occlusion and pose changes. The proposed tracker employs a 7D affine state vector formulated to output deformed object bounding boxes, which significantly increases robustness to scale changes. Performance evaluation has been carried out on a subset of challenging VOT2016 and VOT2018 benchmark video sequences covering the 80 object classes of COCO. Numerical results demonstrate that the introduced tracker, L1DPF-M, achieves comparable robustness while outperforming state-of-the-art trackers in success rate, with improvements at IoU-th = 0.5 of 11% and 9% on the VOT2016 and VOT2018 sequences used, respectively.
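The abstract describes the pipeline only at a high level. The sketch below is a minimal, NumPy-only illustration of how a detector–tracker consensus of this kind could be wired into a particle filter: particles carry an affine state, the observation likelihood blends agreement with a detector box (IoU term) and fidelity to a sparse appearance dictionary, and the best state is a weighted mean followed by resampling. The 7D state layout, the alpha weighting, the ridge surrogate used in place of the l1 coding step, and the mocked detector output are all assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a tracking-by-detection particle filter in the spirit of
# L1DPF-M. All constants, the 7D state layout, and the detector mock are
# illustrative assumptions; the l1-regularized coding step is replaced by a
# closed-form ridge surrogate so the sketch stays dependency-free.
import numpy as np

rng = np.random.default_rng(0)
N_PART, STATE_DIM, PATCH = 200, 7, 16 * 16
NOISE = np.array([2.0, 2.0, 0.02, 0.02, 0.01, 0.005, 0.005])  # per-dim std (assumed)


def state_to_box(s, base_w=40.0, base_h=40.0):
    """Map an (assumed) 7D affine state to an axis-aligned box for the IoU term."""
    cx, cy, sx, sy = s[0], s[1], np.exp(s[2]), np.exp(s[3])
    w, h = base_w * sx, base_h * sy
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])


def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = np.maximum(a[:2], b[:2])
    x2, y2 = np.minimum(a[2:], b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)


def crop_features(frame, box):
    """Stand-in for warping the frame into a fixed-size, normalized template patch."""
    h, w = frame.shape
    x1, y1, x2, y2 = np.clip(box, 0, [w - 1, h - 1, w - 1, h - 1]).astype(int)
    patch = frame[y1:y2 + 1, x1:x2 + 1]
    ys = np.linspace(0, patch.shape[0] - 1, 16).astype(int)
    xs = np.linspace(0, patch.shape[1] - 1, 16).astype(int)
    v = patch[np.ix_(ys, xs)].ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-9)


def sparse_error(patch, D, lam=0.1):
    """Ridge surrogate for l1-regularized coding against the appearance dictionary D."""
    coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ patch)
    return np.linalg.norm(patch - D @ coef)


def likelihood(state, det_box, D, frame, alpha=0.5):
    """Consensus-style observation model: detector agreement + appearance fit."""
    box = state_to_box(state)
    return (alpha * iou(box, det_box)
            + (1 - alpha) * np.exp(-sparse_error(crop_features(frame, box), D)))


# --- toy run over synthetic frames with a mocked detector --------------------
frame = rng.random((240, 320))
D = rng.random((PATCH, 10)); D /= np.linalg.norm(D, axis=0)        # appearance dictionary
particles = np.zeros((N_PART, STATE_DIM)); particles[:, :2] = [160.0, 120.0]

for t in range(5):
    det_box = np.array([140.0 + t, 100.0 + t, 180.0 + t, 140.0 + t])  # mocked detection
    particles += rng.normal(0.0, NOISE, size=particles.shape)         # propagate
    w = np.array([likelihood(s, det_box, D, frame) for s in particles])
    w /= w.sum()
    est = state_to_box(w @ particles)                                  # weighted mean state
    particles = particles[rng.choice(N_PART, size=N_PART, p=w)]        # resample
    print(f"frame {t}: estimated box {np.round(est, 1)}")
```

In a real implementation the mocked detector box would come from the deep detector, the dictionary columns would be object templates updated when detector and tracker agree, and the full 7D affine warp (not just the translation/scale components used above) would shape the output bounding box.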