Taku Komura
Thu 21 May 2015, 13:00 - 14:00
IF 4.31/4.33

If you have a question about this talk, please contact: Steph Smith (ssmith32)

I will describe two topics we have worked on recently. First, I will describe the use of Deep Convolutional Neural Networks for motion synthesis, recognition and interpolation. Deep Convolutional Neural Networks have advantages in computing local features produced by different body parts as well as in exploiting temporal correspondence. For each body-part trajectory, we produce temporal filters that convolve the movement along the timeline. Various types of body-part trajectories are extracted automatically after optimization. The layered neural networks can further extract nonlinear features that are difficult to obtain with linear approaches such as PCA. We show that, after training the network on the full CMU human motion database, the hidden units in the upper layers correspond to meaningful local movements that can be observed in humans. Compared to kernel-based techniques, which can only use a small amount of data for training, the proposed method can make use of the entire set of movements.

Second, I will present our work on using features of the open space between objects for recognition and synthesis of new scenes. We compute features such as the bisector surface and the spherical harmonics representation of the spherical depth image, and use them to match the open space around objects. Given the matched results, we can swap in objects from an example scene to produce new scenes. We show that our method can be applied to indoor scene synthesis and to populating scenes with human characters.
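The temporal filtering of body-part trajectories described above can be illustrated with a minimal sketch: convolving a single joint coordinate over time with a 1-D temporal filter. The filter weights and trajectory values here are purely illustrative, not the learned filters from the talk; in the actual system the filters are learned during network training.

```python
import numpy as np

def temporal_conv(trajectory, kernel):
    """Convolve a 1-D joint trajectory with a temporal filter (valid padding)."""
    return np.convolve(trajectory, kernel, mode="valid")

# Illustrative example: one joint's x-coordinate over 8 frames,
# filtered with a hand-picked low-pass temporal kernel.
traj = np.array([0.0, 0.2, 0.1, 0.4, 0.3, 0.6, 0.5, 0.8])
kernel = np.array([0.25, 0.5, 0.25])  # assumed filter; a trained network learns these
features = temporal_conv(traj, kernel)
print(features.shape)  # (6,)
```

In a convolutional layer, one such filter would be applied across every body-part trajectory, producing a feature map over the timeline; stacking layers then yields the nonlinear features mentioned above.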
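The bisector surface mentioned for the second topic is, geometrically, the set of points equidistant from two objects. A toy sketch of that idea, using point clouds as stand-in object geometry (the actual work uses richer representations and spherical depth images), might look like this:

```python
import numpy as np

def dist_to_cloud(p, cloud):
    """Distance from point p to the nearest point of a point cloud."""
    return np.min(np.linalg.norm(cloud - p, axis=1))

def near_bisector(samples, cloud_a, cloud_b, tol=0.05):
    """Keep sample points roughly equidistant from both clouds (toy bisector test)."""
    keep = [p for p in samples
            if abs(dist_to_cloud(p, cloud_a) - dist_to_cloud(p, cloud_b)) < tol]
    return np.array(keep)

# Two single-point "objects" on the x-axis; their bisector is the plane x = 0.
a = np.array([[1.0, 0.0, 0.0]])
b = np.array([[-1.0, 0.0, 0.0]])
samples = np.array([[0.0, 0.0, 0.0],
                    [0.5, 0.0, 0.0],
                    [0.01, 0.3, 0.0]])
sel = near_bisector(samples, a, b)
print(len(sel))  # 2: the two samples near the x = 0 plane
```

Features computed on this in-between region (rather than on the objects themselves) are what allow the open space around objects to be matched across scenes.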