Raluca Scona
Thu 12 May 2016, 12:45 - 13:45
4.31/33

If you have a question about this talk, please contact: Steph Smith (ssmith32)

A humanoid robot is expected to perform tasks in human-engineered environments: it must operate tools designed for humans and navigate by walking, climbing stairs and opening doors. In this scenario, the problem addressed in this work is dense mapping and localisation for a humanoid robot within a room-sized environment. The robot is equipped with proprioceptive sensors providing kinematic and inertial information, as well as exteroceptive sensors such as a stereo camera pair. Challenges for a typical dense visual mapping method in this setting include locally loopy trajectories in which the robot repeatedly observes the same objects and surfaces, as well as a dynamic environment. In addition, fast motions introduce motion blur into the camera images, and the robot may also observe parts of its own body during operation. To overcome some of these challenges, the approach is to fuse proprioceptive data from kinematic and inertial sensors within a dense visual SLAM method to produce more accurate position estimates and higher-quality maps.
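
To make the fusion idea concrete, below is a minimal sketch (not the speaker's actual system) of how a proprioceptive odometry estimate from leg kinematics and an IMU might be combined with a dense visual tracking estimate for one frame-to-frame step. The function name se3_blend, the confidence-weighted blend in the Lie algebra, and the idea of down-weighting vision when a frame is blurred are assumptions for illustration only.

```python
# Sketch: confidence-weighted fusion of a proprioceptive relative pose with a
# dense visual registration result. Assumed names and weighting scheme; a real
# system would estimate the weights from sensor covariances and image quality.
import numpy as np
from scipy.linalg import expm, logm


def se3_blend(T_prop, T_vis, w_vis):
    """Blend two relative SE(3) transforms in the Lie algebra.

    T_prop : 4x4 relative pose from leg kinematics + IMU integration
    T_vis  : 4x4 relative pose from dense visual registration
    w_vis  : weight in [0, 1] given to the visual estimate
             (lowered when e.g. motion blur or self-observation is detected)
    """
    # Express the visual estimate relative to the proprioceptive one,
    # scale that correction in the tangent space, and apply it.
    delta = logm(np.linalg.inv(T_prop) @ T_vis)
    return T_prop @ expm(w_vis * np.real(delta))


if __name__ == "__main__":
    # Proprioceptive estimate: a 10 cm forward step.
    T_prop = np.eye(4)
    T_prop[0, 3] = 0.10

    # Visual estimate: slightly longer translation plus a small yaw.
    yaw = 0.02
    T_vis = np.eye(4)
    T_vis[:2, :2] = [[np.cos(yaw), -np.sin(yaw)],
                     [np.sin(yaw), np.cos(yaw)]]
    T_vis[0, 3] = 0.12

    # A blurred frame: trust vision less and stay closer to kinematics.
    print(se3_blend(T_prop, T_vis, w_vis=0.3))
```

In this toy example the fused pose is pulled only 30% of the way toward the visual estimate, which mimics the abstract's motivation: when the camera data degrades, kinematic and inertial information keeps the position estimate stable.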