Contrastive learning based facial action unit detection in children with hearing impairment for a socially assistive robot platform

Gurpinar C., Takır Ş., Biçer E., ULUER P., ARICA N., Köse H.

IMAGE AND VISION COMPUTING, vol.128, 2022 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 128
  • Publication Date: 2022
  • Doi Number: 10.1016/j.imavis.2022.104572
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, Biotechnology Research Abstracts, Computer & Applied Sciences, INSPEC
  • Keywords: Contrastive learning, Facial action unit detection, Child-robot interaction, Transfer learning, Domain adaptation, Covariate shift, Emotion recognition, Network
  • Istanbul Technical University Affiliated: Yes


This paper presents a contrastive learning-based facial action unit detection system for children with hearing impairments, to be used on a socially assistive humanoid robot platform. The spontaneous facial data of children with hearing impairments was collected during an interaction study with the Pepper humanoid robot and a tablet-based game. Since the collected dataset contains only a limited number of instances, a novel domain adaptation extension is applied to improve facial action unit detection performance, using well-known labelled datasets of adults and children. Furthermore, since facial action unit detection is a multi-label classification problem, a new smoothing parameter, beta, is introduced to adjust the contribution of similar samples to the loss function of the contrastive learning. The results show that the domain adaptation approach using children's data (CAFE) performs better than using adults' data (DISFA). In addition, using the smoothing parameter beta leads to a significant improvement in recognition performance. (c) 2022 Elsevier B.V. All rights reserved.
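The abstract describes a smoothing parameter beta that down-weights the contribution of partially similar samples in a supervised contrastive loss for multi-label AU targets. The paper's exact formulation is not given here; the sketch below is a hypothetical illustration in plain NumPy, assuming that pairs with identical label sets act as full-weight positives while pairs with only partial label overlap contribute with a weight scaled by beta (the Jaccard-overlap weighting is an assumption made for this example):

```python
import numpy as np

def beta_contrastive_loss(embeddings, labels, beta=0.5, temperature=0.1):
    """Illustrative beta-smoothed supervised contrastive loss.

    embeddings : (n, d) float array of sample representations.
    labels     : (n, k) binary array of active action units per sample.
    beta       : smoothing factor for partially matching label sets
                 (hypothetical role, modeled on the abstract's description).
    """
    # L2-normalize and compute temperature-scaled cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature

    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # log-softmax over all other samples (standard InfoNCE denominator)
        mask = np.arange(n) != i
        logits = sim[i][mask]
        log_prob = logits - np.log(np.exp(logits).sum())

        for j_idx, j in enumerate(np.arange(n)[mask]):
            inter = np.logical_and(labels[i], labels[j]).sum()
            union = np.logical_or(labels[i], labels[j]).sum()
            if union == 0 or inter == 0:
                continue  # no shared active AU: treat as a plain negative
            # identical label sets get full weight; partial overlaps are
            # smoothed by beta times their Jaccard similarity
            w = 1.0 if inter == union else beta * inter / union
            loss -= w * log_prob[j_idx]
            count += 1
    return loss / max(count, 1)
```

With beta = 0 only exact label matches act as positives; raising beta lets samples that share some (but not all) active AUs also pull together in the embedding space, which is the qualitative effect the abstract attributes to the smoothing parameter.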