Daniel Holden
Thu 03 Mar 2016, 12:45 - 13:45
4.31/33, IF

If you have a question about this talk, please contact: Steph Smith (ssmith32)

We present a framework to synthesise character movements from high-level parameters while respecting a manifold of human motion learned from a large motion capture dataset. The learned motion manifold, represented by the hidden units of a convolutional autoencoder, encodes motion data as sparse components that can be combined to produce a wide range of complex movements. To map from high-level parameters to the motion manifold, we stack a deep feedforward neural network on top of this autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, so the user can switch between feedforward networks according to the desired interface without re-training the motion manifold. Once motion is generated, it can be edited by performing optimisation in the space of the motion manifold, which allows kinematic constraints to be imposed, or the style of the motion to be transformed, while the motion remains natural. As a result, the system can produce smooth, natural motion sequences without any manual pre-processing of the data.
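
For concreteness, the following is a minimal sketch of the stacked architecture described above, assuming PyTorch; the layer sizes, kernel widths, and the parameterisation of the control curve are illustrative placeholders, not the settings used in the talk.

    import torch
    import torch.nn as nn

    class MotionAutoencoder(nn.Module):
        """1D convolutional autoencoder over time; the hidden units
        form the learned motion manifold."""
        def __init__(self, n_dof=70, n_hidden=256):
            super().__init__()
            # Encoder: joint channels convolved along the time axis,
            # then pooled to half the frame rate.
            self.encoder = nn.Sequential(
                nn.Conv1d(n_dof, n_hidden, kernel_size=25, padding=12),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            # Decoder: upsample back to the original frame rate and
            # map hidden units back to joint channels.
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2),
                nn.Conv1d(n_hidden, n_dof, kernel_size=25, padding=12),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    class ControlNetwork(nn.Module):
        """Feedforward network mapping high-level parameters (e.g. a
        trajectory curve) to hidden units of the autoencoder; trained
        separately, with the manifold fixed."""
        def __init__(self, n_params=7, n_hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_params, n_hidden, kernel_size=45, padding=22),
                nn.ReLU(),
                nn.Conv1d(n_hidden, n_hidden, kernel_size=25, padding=12),
            )

        def forward(self, params):
            return self.net(params)

    # Synthesis: high-level parameters -> manifold -> motion,
    # decoded through the frozen autoencoder.
    autoencoder = MotionAutoencoder()
    control = ControlNetwork()
    params = torch.randn(1, 7, 120)   # hypothetical control curve at half frame rate
    motion = autoencoder.decoder(control(params))   # (1, 70, 240) motion frames

Because the control network writes directly into the manifold space, swapping in a differently parameterised control network requires no change to the autoencoder, which matches the independent training described above.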
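Continuing the sketch above, editing by optimisation in manifold space might look like the following. Here foot_position is a hypothetical stand-in for forward kinematics, and the reconstruction penalty is one plausible way to keep the optimised hidden units near the learned manifold; the talk's actual formulation may differ.

    # Freeze the autoencoder; only the hidden units H are optimised.
    for p in autoencoder.parameters():
        p.requires_grad_(False)

    H = control(params).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([H], lr=0.01)

    target = torch.zeros(1, 3, 240)   # hypothetical end-effector trajectory

    def foot_position(motion):
        # Stand-in for forward kinematics: select the 3 DOFs of one joint.
        return motion[:, :3, :]

    for _ in range(100):
        optimizer.zero_grad()
        motion = autoencoder.decoder(H)
        # Kinematic constraint: match the target end-effector trajectory.
        constraint = ((foot_position(motion) - target) ** 2).mean()
        # Manifold term: penalise motion the autoencoder cannot reconstruct.
        manifold = ((autoencoder(motion) - motion) ** 2).mean()
        (constraint + manifold).backward()
        optimizer.step()

    edited = autoencoder.decoder(H).detach()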