Emotion Recognition Using Facial Attribute Analysis


Albarazi Y., Bıçakcı K.

INTERNATIONAL GRADUATE RESEARCH SYMPOSIUM - IGRS’23, İstanbul, Turkey, 2 - 05 May 2023, pp.1

  • Publication Type: Conference Paper / Summary Text
  • City: İstanbul
  • Country: Turkey
  • Page Numbers: pp.1
  • Istanbul Technical University Affiliated: Yes

Abstract



Yüsra Albarazi1, Kemal Bıçakcı1
1 Istanbul Technical University, Informatics Institute

albarazi20@itu.edu.tr,  kemalbicakci@itu.edu.tr

 


Under unconstrained conditions, analyzing facial features from images has always been challenging due to unfavorable factors such as scale, noise, illumination, occlusion, and pose. Attributes associated with the face, including age, gender, identity, and emotional state, can be determined by analyzing the features the face exhibits. The study of how individuals feel has applications in many areas, such as psychology, neuroscience, security, and surveillance. Integration with detection and alignment algorithms can improve the efficiency of neural networks and, consequently, the model's accuracy. This article aims to assess the robustness of a pre-trained facial attribute analysis model and its capability to accurately predict facial expressions in the biometric images embedded in ID cards and in selfie pictures, using Sefik Ilkin Serengil's "DeepFace" library, in order to conduct further analysis and make improvements in future studies.

The model employs a convolutional neural network (CNN) architecture with 12 layers in total, of which five are convolution layers and three are fully connected layers. The output layer has seven nodes corresponding to the classes Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral. Since biometric images must be taken with a closed-mouth smile or a natural expression, we divided these nodes into two groups: the first contains the Neutral and Happy nodes, while the second contains all other nodes. In this work, we used biometric images and selfies that had previously been collected at Istanbul Technical University; the dataset includes 1032 biometric images and 3704 selfie images of different people. In the first phase of this work, the faces in the biometric images and selfies are detected and aligned using the RetinaFace algorithm, and the pre-trained model is then used to analyze and predict the emotions in the detected and aligned biometric and selfie images as two distinct tests. The experiments showed the robustness of facial attribute analysis predictions: we achieved an accuracy of 94.42% for biometric images, a 26% improvement over the accuracy that Sefik Ilkin Serengil reported for non-biometric images, and similarly a 24% improvement over our selfie images.
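The pipeline above can be sketched with the public DeepFace API: RetinaFace handles detection and alignment, and the pre-trained emotion model produces the seven-class prediction, which is then mapped onto the paper's two groups. This is a minimal illustrative sketch, not the authors' exact code; the image path and the grouping helper's name are assumptions.

```python
def to_binary_group(dominant_emotion: str) -> str:
    """Map DeepFace's seven emotion classes onto the two groups used in
    this study: Neutral/Happy (valid for biometric photos) vs. all others.
    The group names here are illustrative, not taken from the paper."""
    if dominant_emotion.lower() in {"neutral", "happy"}:
        return "neutral_or_happy"
    return "other"


def predict_emotion(img_path: str) -> str:
    """Detect and align the face with the RetinaFace backend, then
    predict its dominant emotion with DeepFace's pre-trained model."""
    from deepface import DeepFace  # heavy dependency, imported locally

    results = DeepFace.analyze(
        img_path=img_path,
        actions=["emotion"],
        detector_backend="retinaface",  # detection + alignment step
    )
    # DeepFace returns one result dict per detected face
    return results[0]["dominant_emotion"]
```

A call such as `to_binary_group(predict_emotion("biometric_sample.jpg"))` (hypothetical path) would then yield the binary label used to score a test image against the closed-mouth-smile/natural-expression requirement.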

This study was funded by the Scientific Research Projects Coordination Unit of Istanbul Technical University (İTÜ BAP). Project No 43647.

Keywords: Facial attribute analysis; CNN; RetinaFace; DeepFace; Detection; Alignment.