Dragos Stanciu
Thu 02 Jun 2016, 12:45 - 13:45

If you have a question about this talk, please contact: Steph Smith (ssmith32)

Modern computer vision algorithms focus on analysing sequences of images captured by conventional visible-light cameras. While these sensors provide high spatial resolution, their main drawback is that the captured image sequences typically contain information that is redundant for robot navigation, wasting precious resources (e.g. processing time, memory) and limiting real-time processing. Inspired by the visual receptors found in nature, scientists have developed silicon-retina chips such as the Dynamic Vision Sensor (DVS), which transmits events asynchronously, and only at pixel locations where contrast changes are detected. Such sensors offer high temporal resolution (1 μs), but they have low spatial resolution and suffer from noise. In this talk, I will present ongoing work on a pipeline for DVS motion tracking and scene mapping. Application areas include high-speed unmanned aerial vehicles and virtual reality, where tracking fast motions is essential.
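The event-driven principle behind the DVS can be illustrated with a minimal sketch: an event is commonly represented as a tuple (x, y, timestamp, polarity), emitted whenever the log-intensity change at a pixel exceeds a contrast threshold. The function below is a hypothetical frame-difference simulator of this behaviour (the function name and the threshold value are illustrative assumptions, not part of the talk or the actual sensor pipeline):

```python
import numpy as np

def frame_diff_events(prev_frame, curr_frame, t, threshold=0.2):
    """Emit DVS-style events wherever the log-intensity change between
    two frames exceeds the contrast threshold.

    Returns a list of (x, y, t, polarity) tuples, with polarity +1 for
    a brightness increase and -1 for a decrease.
    """
    eps = 1e-6  # avoid log(0) on dark pixels
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    events = []
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    for x, y in zip(xs, ys):
        polarity = 1 if diff[y, x] > 0 else -1
        events.append((int(x), int(y), t, polarity))
    return events
```

Unchanged pixels generate no events at all, which is the source of the bandwidth and processing savings mentioned above: only contrast changes are transmitted.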