Zero-shot Facial Expression Recognition with Multi-label Label Propagation

Abstract

Facial expression recognition classifies a face image into one of several discrete emotional categories. Many emotional classes, exclusive or non-exclusive, are needed to describe the varied and nuanced meanings conveyed by facial expressions. However, it is almost impossible to enumerate all emotional categories and collect adequate annotated samples for each one. To this end, we propose a zero-shot learning framework with multi-label label propagation (Z-ML$^2$P). Z-ML$^2$P is built on existing multi-class datasets annotated with several basic emotions, and it can infer the presence of new emotion labels via a learned semantic space. To evaluate the proposed method, we collect a multi-label FER dataset, FaceME. Experimental results on FaceME and two other FER datasets demonstrate that the Z-ML$^2$P framework outperforms state-of-the-art zero-shot learning methods in recognizing both seen and unseen emotions.
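To make the core idea concrete, below is a minimal sketch of propagating scores from seen (basic) emotion labels to unseen ones through a semantic label space. This is not the authors' Z-ML$^2$P implementation; the function name, the toy embeddings, and the similarity-weighted propagation rule are all illustrative assumptions.

```python
# Illustrative sketch only: propagate seen-label scores to unseen labels
# via cosine similarity in a semantic embedding space. All names and
# values here are hypothetical, not the paper's actual method.
import numpy as np

def propagate_labels(scores_seen, emb_seen, emb_unseen, alpha=0.5):
    """Estimate scores for unseen emotion labels from seen-label scores.

    scores_seen : (n_seen,)     classifier scores for the seen emotions
    emb_seen    : (n_seen, d)   semantic embeddings of the seen labels
    emb_unseen  : (n_unseen, d) semantic embeddings of the unseen labels
    alpha       : scaling weight on the propagated evidence (assumed)
    """
    def normalize(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)

    # Cosine similarity between each unseen and each seen label embedding.
    sim = normalize(emb_unseen) @ normalize(emb_seen).T  # (n_unseen, n_seen)

    # Keep positive similarities and row-normalize, so each unseen label's
    # score is a convex combination of the seen-label scores.
    weights = np.maximum(sim, 0.0)
    weights /= weights.sum(axis=1, keepdims=True) + 1e-12

    return alpha * (weights @ scores_seen)

# Toy example: two seen emotions, one unseen emotion, 3-d embeddings.
emb_seen = np.array([[1.0, 0.0, 0.0],    # e.g. "happy"
                     [0.0, 1.0, 0.0]])   # e.g. "sad"
emb_unseen = np.array([[0.9, 0.1, 0.0]]) # e.g. "amused", close to "happy"
scores_seen = np.array([0.8, 0.1])       # classifier output for one face
print(propagate_labels(scores_seen, emb_seen, emb_unseen))
```

Because propagation only needs label embeddings, not labeled images, new emotion categories can be scored at test time without retraining, which is the appeal of the zero-shot setting described above.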

Publication
Proceedings of the Asian Conference on Computer Vision (ACCV), 2018. [Oral]
Jiabei Zeng
Associate Professor