Dr Subramanian Ramamoorthy
Thu 25 Jan 2018, 12:45 - 14:00
4.31/33, IF

If you have a question about this talk, please contact: Steph Smith (ssmith32)

This talk concentrates on a paradigm for specifying the task of an autonomous robot in which a human co-worker provides parts of the specification incrementally (e.g., through a process of dialogue) and informally (e.g., using natural language). This is likely how realistic interactions between robots and everyday human users will unfold in the near future. However, it poses many problems for the way motion planning and policy learning are typically handled, including the issue of grounding symbols in the task specification to quantitative percepts the robot can actually use, and the harder issue of inferring what the user might have meant.

Dr. Ramamoorthy will report on some recent work in this direction. First, he will describe a method for processing a raw stream of cross-modal signals to produce segmented objects and their association with high-level concepts such as colour and shape. He will then show preliminary results from ongoing work on mapping linguistic instructions directly to constructs in the sampling-based motion planning pipeline.

He will conclude with results from his recent work on program induction, discussing how the explainability benefits of the programmatic representations learnt through these methods could be combined with the above-mentioned approaches to physical symbol grounding.


Dr. Subramanian Ramamoorthy is a Reader in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is an Executive Committee Member of the Edinburgh Centre for Robotics. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh, and has been a Visiting Professor at Stanford University and the University of Rome “La Sapienza”. His research focuses on autonomous robotics, robot learning, and the emerging problems of achieving explainability and ensuring safety in autonomous learning systems.