Joost Van Der Weijer
Mon 13 Jun 2016, 15:00 - 16:00
4.31/33, IF

If you have a question about this talk, please contact: Steph Smith (ssmith32)

In the first part of this talk I will discuss two aspects of facial expression analysis. Firstly, I address the problem of estimating high-level semantic labels for videos of people by analysing their facial expressions. This is a weakly supervised learning problem: we do not have access to frame-by-frame facial gesture annotations, only weak labels at the video level. We cast this as a Multi-Instance Learning (MIL) problem and propose a novel MIL method, Regularized Multi-Concept MIL, to solve it. Secondly, I will discuss learning Action Unit recognition from databases labelled according to the six universal facial expressions. To do so, we propose a novel learning framework: Hidden-Task Learning. Here the hidden task is Action Unit recognition and the visible task is facial expression recognition. To learn the hidden task automatically, we make use of prior knowledge from empirical psychological studies that provide statistical correlations between Action Units and universal facial expressions.
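For illustration, here is a minimal Python/NumPy sketch of the weakly supervised multi-instance setting described above, where each video is a "bag" of frames and only the bag-level label is observed. The feature dimension, the linear instance scorer, and max-pooling over frames are illustrative stand-ins, not the Regularized Multi-Concept MIL method presented in the talk.

```python
import numpy as np

def video_score(frame_features, weights, bias):
    """Score a video (a 'bag' of frames) by pooling per-frame instance scores.

    In the weakly supervised MIL setting only the video-level label is known.
    Max-pooling encodes the classic MIL assumption: a positive bag contains
    at least one positive instance (here, at least one telling frame).
    """
    instance_scores = frame_features @ weights + bias  # one score per frame
    return instance_scores.max()

# Toy usage: three videos with different numbers of frames, 16-D frame features.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(n, 16)) for n in (40, 25, 60)]
labels = np.array([1, 0, 1])            # weak labels, available per video only
w, b = rng.normal(size=16), 0.0

for video, label in zip(videos, labels):
    print(label, video_score(video, w, b) > 0)
```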

In the second part of the talk I will investigate the use of color for visual tracking. Many state-of-the-art visual trackers either rely on luminance information alone or use only simple color representations for image description. The desired color feature should be computationally efficient and possess a degree of photometric invariance while maintaining high discriminative power. Our evaluation suggests that color attributes provide superior performance for visual tracking. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision, while running at more than 100 frames per second.
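As a rough illustration, the sketch below shows how a color-attribute (color-name) feature map might be computed for an image patch: each RGB pixel is quantized and looked up in a precomputed table of probabilities over the 11 basic color names. The 32-bins-per-channel quantization follows the published color-naming work, but the table here is replaced by a random placeholder so the snippet is self-contained; it is a sketch of the feature, not the full tracker.

```python
import numpy as np

# Placeholder for the precomputed RGB-to-color-name lookup table
# (32*32*32 RGB bins x 11 color names). In practice this table comes from
# the color-naming literature; here it is random so the sketch runs.
color_name_table = np.random.dirichlet(np.ones(11), size=32 * 32 * 32)

def color_attributes(patch_rgb):
    """Map an HxWx3 uint8 RGB patch to an HxWx11 color-attribute map.

    Each pixel is quantized onto a 32x32x32 RGB grid and looked up in the
    table, giving per-pixel probabilities of the 11 basic color names
    (black, blue, brown, grey, green, orange, pink, purple, red, white, yellow).
    """
    r, g, b = (patch_rgb[..., c].astype(np.int64) // 8 for c in range(3))
    index = r + 32 * g + 32 * 32 * b        # flat index into the 32^3 grid
    return color_name_table[index]          # shape (H, W, 11)

# Toy usage: the 11-D maps replace raw intensity as the feature channels
# fed to a correlation-filter style tracker.
patch = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(color_attributes(patch).shape)        # (64, 64, 11)
```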