Human Semantic Parsing for Person Re-identification

Kalayeh M. M., Başaran E., Gokmen M., Kamaşak M. E., Shah M.

31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Utah, United States of America, 18 - 23 June 2018, pp. 1062-1071

  • Publication Type: Conference Paper / Full Text
  • Doi Number: 10.1109/cvpr.2018.00117
  • City: Utah
  • Country: United States of America
  • Page Numbers: pp.1062-1071


Person re-identification is a challenging task, mainly due to factors such as background clutter, pose, illumination, and camera viewpoint variations. These elements hinder the extraction of robust and discriminative representations, preventing different identities from being successfully distinguished. To improve representation learning, local features are usually extracted from human body parts. However, the common practice for this has relied on bounding-box part detection. In this paper, we propose to adopt human semantic parsing, which, owing to its pixel-level accuracy and its ability to model arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing into person re-identification and not only considerably outperforms its baseline counterpart, but also achieves state-of-the-art performance. We also show that, by employing a simple yet effective training strategy, standard deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification and operating solely on the full image, can dramatically outperform the current state of the art. Our proposed methods improve state-of-the-art person re-identification on Market-1501 [48] by ~17% in mAP and ~6% in rank-1, on CUHK03 [24] by ~4% in rank-1, and on DukeMTMC-reID [50] by ~24% in mAP and ~10% in rank-1.
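The core idea of replacing bounding-box part detection with pixel-level parsing can be sketched as mask-weighted pooling: each body-part probability map weights the convolutional feature map before spatial aggregation, so arbitrary contours contribute instead of rigid rectangles. The sketch below is a minimal illustration under assumed shapes, not the authors' exact SPReID implementation; the function name and normalization choice are hypothetical.

```python
import numpy as np

def mask_weighted_pooling(features, part_masks):
    """Pool a conv feature map into per-part descriptors using
    pixel-level semantic parsing masks (illustrative sketch only,
    not the authors' exact SPReID implementation).

    features:   (H, W, C) convolutional feature map
    part_masks: (K, H, W) soft probability maps, one per body part
    returns:    (K, C) one pooled descriptor per part
    """
    H, W, C = features.shape
    K = part_masks.shape[0]
    flat_feats = features.reshape(H * W, C)      # (HW, C)
    flat_masks = part_masks.reshape(K, H * W)    # (K, HW)
    # Normalize each mask to sum to 1 so the result is a weighted
    # average; the epsilon guards against an all-zero mask.
    weights = flat_masks / np.maximum(
        flat_masks.sum(axis=1, keepdims=True), 1e-8)
    return weights @ flat_feats                  # (K, C)

# Sanity check: with uniform masks, every part descriptor reduces
# to the plain global average of the feature map.
feats = np.random.rand(8, 4, 16)
masks = np.ones((3, 8, 4))
parts = mask_weighted_pooling(feats, masks)
```

A soft mask makes this differentiable in the mask values as well, which is why pixel-level parsing composes cleanly with end-to-end training, unlike hard rectangular crops.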