Unsupervised Visual Ego-motion Learning for Robots


Khalilbayli F., Bayram B., İNCE G.

Proceedings of the International Conference on Computer Science and Engineering (UBMK 2019), 1 - 4 January 2019

  • Publication Type: Conference Paper / Full-Text Conference Paper
  • DOI: 10.1109/ubmk.2019.8907192
  • Keywords: Image Processing, Robotics, Visual Ego-motion Estimation, Unsupervised Learning, Sensor Fusion
  • İstanbul Teknik Üniversitesi Affiliated: Yes

Abstract

The goal of an autonomous robot is to navigate without a human operator despite changing conditions and mobile objects in its environment. To achieve this, both the dynamic state of the robot and the 3D structure of the environment must be known. When a continuous image sequence is discretized, three types of motion can be observed: 1) a mobile camera (or a robot, as in this study) capturing a static scene, 2) independently moving objects in front of a static camera, or 3) a camera and independent objects in the environment moving simultaneously at any given time. One of the challenges is the presence of one or multiple mobile objects in the environment whose velocity and direction are independent of the environment. In this work, unsupervised visual ego-motion estimation for robots using stereo video captured by a camera mounted on the robot is introduced. In addition, audio perception is utilized to support visual ego-motion estimation through sensor fusion, in order to identify the source of the motion. We have verified the effectiveness of our approach by conducting three different experiments in which motion of both the robot and an object, only the robot, and only an object was present.
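
The abstract does not include implementation details, so the following is only a minimal illustrative sketch of classical frame-to-frame visual ego-motion estimation with OpenCV, shown as a geometric baseline for the kind of problem the paper addresses. The camera matrix K, file names, and parameter values here are assumptions for illustration; the paper's unsupervised learning model and audio-based sensor fusion are not reproduced.

```python
# Minimal sketch (assumption, not the paper's method): classical frame-to-frame
# ego-motion estimation between two consecutive camera images using OpenCV.
import cv2
import numpy as np

def estimate_ego_motion(prev_img, curr_img, K):
    """Estimate relative rotation R and unit-norm translation t between two frames.

    prev_img, curr_img: grayscale images from the robot's camera
    K: 3x3 camera intrinsic matrix (assumed known from calibration)
    """
    orb = cv2.ORB_create(2000)                       # feature detector/descriptor
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(curr_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects correspondences on independently moving objects, so the
    # recovered motion is dominated by the camera (robot) ego-motion.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is only defined up to scale; stereo depth resolves the scale

if __name__ == "__main__":
    # Hypothetical intrinsics and frame files, for illustration only.
    K = np.array([[700.0,   0.0, 320.0],
                  [  0.0, 700.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    prev_img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    curr_img = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    R, t = estimate_ego_motion(prev_img, curr_img, K)
    print("Rotation:\n", R, "\nTranslation (unit norm):\n", t)
```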