Pavlos Andreadis
Thu 10 Mar 2016, 12:45 - 13:45
4.31/33, IF

If you have a question about this talk, please contact: Steph Smith (ssmith32)

Preference elicitation is the process of learning a decision maker's model of choice under non-strategic uncertainty. Starting from the hypothesis that the choice models of real human decision makers admit varying levels of coarseness, we wish to construct learning and inference algorithms that achieve preference elicitation more quickly and efficiently than techniques that do not model this structure. A key technical issue is identifying the user's own representation of the solution space, which we approach by reasoning about possible partitions of the original outcome space. We argue that learning a model in this way, and making inferences using beliefs over the user's individual representation of the solution space, can yield significant performance benefits, as measured for instance by the number of queries needed to make recommendations.

I will present an instantiation of this methodology, including the learning of a model of coarse preferences through decision-tree-based representations. I will also present preliminary results on a synthetic data set from a ride-sharing scenario, and on real human choice data from the Sushi dataset, showing that preference elicitation with our coarseness-aware model is faster, and no less accurate, than a baseline that ignores this structure.
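As a rough illustration of why coarseness can reduce query counts, the sketch below treats a user as indifferent within the cells of a partition over outcomes, so pairwise elicitation only needs to compare one representative outcome per cell. The toy ride-sharing attributes, the decision-tree-style partition rule, and the simulated query model are all illustrative assumptions, not the actual algorithm from the talk:

```python
from itertools import combinations

# Toy ride-sharing outcomes: (price_band, detour_band), 16 in total.
outcomes = [(p, d) for p in range(4) for d in range(4)]

def cell(outcome):
    """Decision-tree-style rule assigning each outcome to a partition cell."""
    price, detour = outcome
    if price <= 1:
        return "cheap"
    if detour <= 1:
        return "fast"
    return "other"

# Hidden preference over cells (lower rank = more preferred),
# used only to simulate the user's answers to pairwise queries.
cell_rank = {"cheap": 0, "fast": 1, "other": 2}

def query(a, b):
    """Simulated user: is outcome a strictly preferred to outcome b?"""
    return cell_rank[cell(a)] < cell_rank[cell(b)]

# Ranking raw outcomes by exhaustive pairwise queries costs one query per pair.
flat_queries = len(list(combinations(outcomes, 2)))  # C(16, 2) = 120

# Under the coarse model, one representative outcome per cell suffices.
reps = {}
for o in outcomes:
    reps.setdefault(cell(o), o)
coarse_queries = len(list(combinations(reps.values(), 2)))  # C(3, 2) = 3

# Rank the cells by the number of pairwise wins of their representatives.
order = sorted(reps, key=lambda c: sum(query(reps[c], r) for r in reps.values()),
               reverse=True)

print(flat_queries, coarse_queries, order)  # 120 3 ['cheap', 'fast', 'other']
```

The saving here is purely structural: once the partition is known (or believed with high probability), queries that distinguish outcomes within the same cell carry no information and can be skipped.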