Multi-task Learning of Emotion Recognition and Facial Action Unit Detection with Adaptively Weights Sharing Network

Abstract

Emotion recognition and facial action unit (AU) detection are the two most prevalent tasks in facial expression analysis. Since the two tasks are highly correlated, in this paper we perform emotion recognition and AU detection simultaneously in a multi-task learning framework so that the tasks benefit from each other. To achieve this, we propose an Adaptively Weights Sharing Network (AWS-Net) that automatically learns where and to what extent each task should borrow information from the other, by placing an AWS-Unit after each layer pair of the two tasks' networks. The proposed AWS-Net is end-to-end trainable on data annotated with only emotions or only AUs. Experimental results on several facial expression recognition (FER) datasets demonstrate that AWS-Net improves the performance of both single-task models (emotion recognition and AU detection) and outperforms other state-of-the-art multi-task learning strategies in FER.
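To illustrate the idea of an adaptive sharing unit placed between a layer pair of two task networks, here is a minimal, hedged sketch (not the authors' code; the class name `SharingUnit` and the `alpha` initialization are assumptions for illustration): a learnable 2×2 mixing matrix decides how much each task's features borrow from the other, with identity mixing meaning no sharing.

```python
import numpy as np

class SharingUnit:
    """Hypothetical soft feature-sharing unit, in the spirit of an AWS-Unit.

    Holds a 2x2 mixing matrix W; in training, W would be learned jointly
    with the two task networks so the model decides where and how much
    to share. Here W is fixed for demonstration.
    """

    def __init__(self, alpha=0.9):
        # Initialize near identity: each task mostly keeps its own
        # features and borrows only a little from the other task.
        self.W = np.array([[alpha, 1.0 - alpha],
                           [1.0 - alpha, alpha]])

    def forward(self, feat_emotion, feat_au):
        # Element-wise linear combination of the two tasks' feature maps.
        mixed_emotion = self.W[0, 0] * feat_emotion + self.W[0, 1] * feat_au
        mixed_au = self.W[1, 0] * feat_emotion + self.W[1, 1] * feat_au
        return mixed_emotion, mixed_au

# With alpha = 1.0 the unit reduces to the identity (no sharing at all).
unit = SharingUnit(alpha=1.0)
feat_e = np.ones((2, 2))
feat_a = np.zeros((2, 2))
mixed_e, mixed_a = unit.forward(feat_e, feat_a)
```

In a full network one such unit would follow each layer pair, so sharing can vary by depth: early layers might mix heavily (low-level facial features are common to both tasks), while later layers stay task-specific.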

Publication
IEEE International Conference on Image Processing (ICIP), 2019. [Oral]
Jiabei Zeng
Associate Professor