Yordan Hristov
Thu 29 Mar 2018, 12:45 - 14:00
IF 4.31/4.33

If you have a question about this talk, please contact: Allison Kruk (v1atayl6)

Pastries will be available

Title

Disentanglement Learning and Interpretable Symbol Grounding

Abstract

The talk will be a progress update on my thesis work and will discuss the use of disentangled data representations for interpretable symbol grounding. A disentangled data representation is one in which the underlying generative factors of variation, responsible for key attributes of the data, are explicitly factorised during the learning process. Current state-of-the-art approaches for learning such representations rely on no supervision and, as a result, may disentangle factors of variation that are unimportant with respect to a given task. We hypothesise that by using coarse labels/symbols, and by borrowing ideas from prototype and case-based learning, we can perform more focused disentanglement, targeting only the factors that are relevant to grounding those symbols.
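To make the idea concrete, below is a minimal, hypothetical sketch (not the speaker's actual method) of how coarse labels could steer disentanglement: a beta-VAE-style objective is augmented with a symbol-grounding term that acts only on a small subset of latent dimensions. All names, dimensions, and weightings are illustrative assumptions.

```python
# Hypothetical sketch: weakly supervised disentanglement with coarse symbols.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10, n_symbols=3, hidden=256, symbol_dims=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))
        # Coarse symbols are grounded in only the first few latent dimensions,
        # so only those factors are pressured to carry label information.
        self.symbol_dims = symbol_dims
        self.clf = nn.Linear(symbol_dims, n_symbols)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return self.dec(z), mu, logvar, self.clf(z[:, :self.symbol_dims])

def loss_fn(x, y, x_hat, mu, logvar, logits, beta=4.0, gamma=1.0):
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)          # reconstruction
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    symbol = F.cross_entropy(logits, y)                                # coarse-label grounding
    return recon + beta * kl + gamma * symbol                          # beta-VAE + symbol term
```

In this sketch, the beta-weighted KL term encourages factorised latents as in unsupervised disentanglement, while the gamma-weighted symbol term ties a designated subset of latent dimensions to the coarse labels, focusing disentanglement on task-relevant factors.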