Learn to synthesize and synthesize to learn


Bozorgtabar B., Rad M. S., Ekenel H. K., Thiran J.

COMPUTER VISION AND IMAGE UNDERSTANDING, vol.185, pp.1-11, 2019 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 185
  • Publication Date: 2019
  • DOI: 10.1016/j.cviu.2019.04.010
  • Journal Name: COMPUTER VISION AND IMAGE UNDERSTANDING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.1-11
  • Affiliated with Istanbul Technical University: Yes

Abstract

Attribute-guided face image synthesis aims to manipulate attributes on a face image. Most existing methods for image-to-image translation can either perform only a fixed translation between any two image domains using a single attribute, or they require training data with the attributes of interest for each subject. Consequently, these methods can only train one specific model for each pair of image domains, which limits their ability to deal with more than two domains. Another disadvantage is that they often suffer from mode collapse, which degrades the quality of the generated images. To overcome these shortcomings, we propose an attribute-guided face image generation method that uses a single model capable of synthesizing multiple photo-realistic face images conditioned on the attributes of interest. In addition, we adopt the proposed model to increase the realism of simulated face images while preserving the face characteristics. Compared to existing models, the synthetic face images generated by our method exhibit good photorealistic quality on several face datasets. Finally, we demonstrate that the generated facial images can be used for synthetic data augmentation and improve the performance of a classifier for facial expression recognition.
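
To illustrate the general idea of conditioning a single generator on a vector of target attributes (rather than training one model per domain pair), below is a minimal PyTorch-style sketch. It is not the paper's architecture: the class name AttributeGuidedGenerator, the layer sizes, and the attribute count are illustrative assumptions; the sketch only shows how an attribute vector can be tiled over the spatial grid and fused with the input face so one model handles multiple attribute domains.

# Minimal sketch (illustrative, not the paper's model): an attribute-conditioned
# image-to-image generator. The target attribute vector is broadcast to spatial
# maps and concatenated with the input face image, so a single network can
# translate a face toward any combination of attributes.
import torch
import torch.nn as nn


class AttributeGuidedGenerator(nn.Module):
    def __init__(self, n_attrs: int, base_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + n_attrs, base_channels, kernel_size=7, padding=3),
            nn.InstanceNorm2d(base_channels, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(base_channels, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, 3, kernel_size=7, padding=3),
            nn.Tanh(),  # output image in [-1, 1]
        )

    def forward(self, image: torch.Tensor, attrs: torch.Tensor) -> torch.Tensor:
        # Tile the attribute vector over the image grid and fuse it with the input.
        b, _, h, w = image.shape
        attr_maps = attrs.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.net(torch.cat([image, attr_maps], dim=1))


if __name__ == "__main__":
    gen = AttributeGuidedGenerator(n_attrs=5)
    faces = torch.randn(4, 3, 128, 128)                 # batch of face images
    target_attrs = torch.randint(0, 2, (4, 5)).float()  # desired attribute codes
    fake_faces = gen(faces, target_attrs)
    print(fake_faces.shape)  # torch.Size([4, 3, 128, 128])

In the same spirit, synthetic data augmentation as described in the abstract would amount to generating faces with the desired expression attributes and appending them to the training set of the expression classifier; the training loop itself is standard and omitted here.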