Alex Keros, Chenyang Zhao, Henrique Ferrolho
Thu 23 May 2019, 12:45 - 14:00
IF, 4.31/33

If you have a question about this talk, please contact: Jodie Cameron (jcamero9)

Alex Keros

Title:

Monte Carlo integration in rendering: k-d tree sampling and persistent homology based summary statistics of stochastic integrators

Abstract:

Monte Carlo integrators provide a powerful tool for the approximation of complex, high dimensional integrals, often found in physically based rendering applications. The sample distribution used to approximate an integral drastically influences the error convergence and the quality of a rendered scene. Thus, it is necessary to devise efficient sampling strategies, as well as summary statistics, that accurately characterize and rank sampling methods in terms of error behaviour. We propose a k-d tree based approach to sampling, along with a topological summary statistic based on persistent homology. We evaluate our methods against popular samplers and the widely used summary statistic of Discrepancy, within the context of physically based rendering scenarios.
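
As a rough illustration of the setting only (not the k-d tree sampler or the persistent-homology statistic from the talk), the sketch below estimates a toy 2-D integral with independent uniform samples and with stratified (jittered) samples; the integrand, sample counts, and samplers are stand-ins chosen to show how the sample distribution affects the Monte Carlo error.

```python
# Minimal sketch: how the sample distribution affects Monte Carlo error.
# The integrand and samplers are illustrative stand-ins, not the talk's method.
import numpy as np

rng = np.random.default_rng(0)

def integrand(p):
    # f(x, y) = sin(pi x) sin(pi y) on [0, 1]^2; exact integral is (2 / pi)^2.
    return np.sin(np.pi * p[:, 0]) * np.sin(np.pi * p[:, 1])

def uniform_samples(n):
    # Independent uniform samples: error decays as O(n^{-1/2}).
    return rng.random((n, 2))

def jittered_samples(n_per_axis):
    # Stratified (jittered) samples: one random sample per cell of a regular
    # grid, which typically reduces variance for smooth integrands.
    grid = np.stack(np.meshgrid(np.arange(n_per_axis), np.arange(n_per_axis)), axis=-1)
    cells = grid.reshape(-1, 2)
    return (cells + rng.random(cells.shape)) / n_per_axis

def mc_estimate(samples):
    # Monte Carlo estimate of the integral over the unit square (volume 1).
    return integrand(samples).mean()

exact = (2.0 / np.pi) ** 2
for n in (16, 64, 256):
    err_uniform = abs(mc_estimate(uniform_samples(n * n)) - exact)
    err_jittered = abs(mc_estimate(jittered_samples(n)) - exact)
    print(f"{n * n:6d} samples  uniform err {err_uniform:.2e}  jittered err {err_jittered:.2e}")
```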

Chenyang Zhao

Title:

Investigating Generalisation in Continuous Deep Reinforcement Learning

Abstract:

Deep Reinforcement Learning has shown great success in a variety of control tasks. However, it is unclear how close we are to the vision of putting Deep RL into practice to solve real-world problems. In particular, the common practice in the field is to train policies on largely deterministic simulators and to evaluate algorithms through training performance alone, without a train/test distinction to ensure models generalise and are not overfitted. Moreover, it is not standard practice to check for generalisation under domain shift, although robustness to such system changes between training and testing would be necessary for real-world Deep RL control, for example, in robotics. In this paper we study these issues by first characterising the sources of uncertainty that pose generalisation challenges in Deep RL. We then provide a new benchmark and thorough empirical evaluation of generalisation challenges for state-of-the-art Deep RL methods. In particular, we show that, if generalisation is the goal, then the common practice of evaluating algorithms based on their training performance leads to the wrong conclusions about algorithm choice. Finally, we evaluate several techniques for improving generalisation and draw conclusions about the most robust techniques to date.
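
The sketch below is a hypothetical illustration of the train/test distinction under domain shift described in the abstract. The point-mass environment, the hand-coded controller standing in for a trained policy, and the shifted parameter values are all invented for illustration and are not the benchmark or algorithms from the paper.

```python
# Hypothetical illustration of evaluating a fixed policy under domain shift:
# train on nominal dynamics, test on shifted dynamics parameters.
import numpy as np

rng = np.random.default_rng(0)

class PointMassEnv:
    """Toy 1-D point mass: the agent pushes a mass towards the origin."""
    def __init__(self, mass=1.0, friction=0.1, dt=0.05):
        self.mass, self.friction, self.dt = mass, friction, dt

    def reset(self):
        self.pos, self.vel = rng.uniform(-1.0, 1.0), 0.0
        return np.array([self.pos, self.vel])

    def step(self, force):
        accel = (force - self.friction * self.vel) / self.mass
        self.vel += accel * self.dt
        self.pos += self.vel * self.dt
        reward = -abs(self.pos)  # closer to the origin is better
        return np.array([self.pos, self.vel]), reward

def policy(obs):
    # Fixed proportional-derivative controller standing in for a trained policy.
    return -5.0 * obs[0] - 2.0 * obs[1]

def evaluate(env, episodes=20, horizon=100):
    returns = []
    for _ in range(episodes):
        obs, total = env.reset(), 0.0
        for _ in range(horizon):
            obs, reward = env.step(policy(obs))
            total += reward
        returns.append(total)
    return np.mean(returns)

# "Training" environment: fixed nominal dynamics, as in a deterministic simulator.
print("train return:", evaluate(PointMassEnv(mass=1.0, friction=0.1)))
# "Test" environment: shifted dynamics parameters, probing generalisation.
print("test  return:", evaluate(PointMassEnv(mass=1.8, friction=0.4)))
```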

Henrique Ferrolho

Title:

Robust Dynamic Trajectory Optimization with Direct Transcription

Abstract:

Motion Planning is a wide and well-studied area in robotics. What is the difference between a kinematic and a dynamic motion plan? How can we plan a robust motion for a robot? And what are we "robust" to, for that matter? In this talk, I will frame the continuous Dynamic Trajectory Optimization problem as a discrete Nonlinear Optimization problem by means of Direct Transcription. In addition, I will explain how it is possible to represent some of the system constraints as convex sets of inequalities and how these, in turn, can be used as a robustness metric (with respect to external disturbances) to augment the objective function of the nonlinear optimization problem.
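
As a rough sketch of the transcription step only (the robot dynamics, convex constraint sets, and robustness metric from the talk are not reproduced), the example below discretises a 1-D double-integrator trajectory into knot points, imposes explicit-Euler defect constraints as equalities, and hands the resulting nonlinear program to an off-the-shelf solver; all dimensions and parameters are illustrative.

```python
# Minimal direct-transcription sketch for a 1-D double integrator.
import numpy as np
from scipy.optimize import minimize

N, dt = 20, 0.1    # number of knot points and time step
n_x, n_u = 2, 1    # state (position, velocity) and control dimensions

def unpack(z):
    # Decision vector stacks all states followed by all controls.
    x = z[: N * n_x].reshape(N, n_x)
    u = z[N * n_x :].reshape(N, n_u)
    return x, u

def objective(z):
    # Minimise control effort along the trajectory.
    _, u = unpack(z)
    return dt * np.sum(u ** 2)

def defects(z):
    # Dynamics as equality constraints: x[k+1] = x[k] + dt * f(x[k], u[k]),
    # with f(x, u) = (velocity, acceleration) for the double integrator.
    x, u = unpack(z)
    xdot = np.column_stack([x[:-1, 1], u[:-1, 0]])
    return (x[1:] - x[:-1] - dt * xdot).ravel()

def boundary(z):
    # Start at rest at position 0, end at rest at position 1.
    x, _ = unpack(z)
    return np.concatenate([x[0] - [0.0, 0.0], x[-1] - [1.0, 0.0]])

z0 = np.zeros(N * (n_x + n_u))
result = minimize(
    objective,
    z0,
    method="SLSQP",
    constraints=[{"type": "eq", "fun": defects}, {"type": "eq", "fun": boundary}],
)
x_opt, u_opt = unpack(result.x)
print("success:", result.success, " final state:", x_opt[-1])
```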