Object manipulation with a variable-stiffness robotic mechanism using deep neural networks for visual semantics and load estimation


Bayraktar E., Yiğit C. B., Boyraz Baykaş P.

NEURAL COMPUTING & APPLICATIONS, vol.32, no.13, pp.9029-9045, 2020 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 32 Issue: 13
  • Publication Date: 2020
  • DOI: 10.1007/s00521-019-04412-5
  • Journal Name: NEURAL COMPUTING & APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, Index Islamicus, INSPEC, zbMATH
  • Page Numbers: pp.9029-9045
  • Affiliated with İstanbul Teknik Üniversitesi: Yes

Abstract

In recent years, computer vision applications in robotics have advanced toward human-like visual perception and scene/context understanding. Following this aspiration, in this study we explored whether object manipulation performance can be improved by connecting the visual recognition of objects to their physical attributes, such as weight and center of gravity (CoG). To develop and test this idea, an object manipulation platform was built comprising a robotic arm, a depth camera fixed at the top center of the workspace, encoders embedded in the robotic arm mechanism, and microcontrollers for position and force control. Since both the visual recognition and force estimation algorithms use deep learning principles, the test set-up was named Deep-Table. The objects used in the manipulation tests are everyday items commonly found on modern office desktops. Visual object localization and recognition are performed in two distinct branches by deep convolutional neural network architectures. We present five of the possible cases in the experiments, each with a different level of available information on object weight and CoG. The results confirm that, using our algorithm, the robotic arm can successfully move different types of objects, varying from several grams (an empty bottle) to around 250 g (a ceramic cup), without failure or tipping. The proposed method also shows that connecting object recognition with load estimation and contact-point information further improves performance, characterized by smoother motion.
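
The abstract's core idea, linking a recognized object class to physical-attribute priors (weight, CoG) that then shape the manipulation command, can be illustrated with a minimal sketch. This is not the authors' implementation; all class names, prior values, and the force/stiffness heuristics below are hypothetical placeholders used only to show the information flow from recognition to manipulation parameters.

```python
# Minimal sketch (assumed, not the authors' code): a recognition result is
# mapped to weight/CoG priors, which in turn set rough grip-force and
# stiffness parameters for the variable-stiffness mechanism.
from dataclasses import dataclass


@dataclass
class LoadPrior:
    weight_g: float        # expected object weight in grams (assumed value)
    cog_offset_mm: float   # expected CoG offset from the grasp axis, in mm


# Hypothetical priors for everyday desktop objects.
LOAD_PRIORS = {
    "empty_bottle": LoadPrior(weight_g=15.0, cog_offset_mm=5.0),
    "ceramic_cup":  LoadPrior(weight_g=250.0, cog_offset_mm=20.0),
    "stapler":      LoadPrior(weight_g=120.0, cog_offset_mm=10.0),
}


def manipulation_params(label: str, confidence: float):
    """Map a recognized class to rough manipulation parameters.

    Returns (grip_force_N, stiffness_scale); both grow with the expected
    load, and conservative defaults are used when the class is unknown or
    the recognition confidence is low. Purely illustrative heuristics.
    """
    prior = LOAD_PRIORS.get(label)
    if prior is None or confidence < 0.5:
        return 5.0, 0.8                        # conservative fallback
    grip_force = 2.0 + 0.03 * prior.weight_g   # scale force with expected weight
    stiffness = min(1.0, 0.4 + prior.cog_offset_mm / 50.0)
    return grip_force, stiffness


if __name__ == "__main__":
    # e.g. the recognition branch reports "ceramic_cup" with 0.92 confidence
    print(manipulation_params("ceramic_cup", 0.92))
```

In the actual system described by the abstract, such priors would be refined online by the load-estimation network and the measured contact point, rather than taken from a fixed table as in this sketch.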