This work is part of a project on sign language tutoring through an imitation-based interactive game, iSign [1]. An assistive social humanoid robot, R3, accompanies deaf children during the interaction game. The robot interacts with the children using visual modules, including sign recognition and sign generation. This paper focuses on an upper-torso self-collision detection system for the humanoid robot R3, which is used in the sign generation step of the game. Three approaches based on the arm joint positions and orientations are implemented and their results are presented: a neuro-fuzzy, a multi-neuro-fuzzy, and a multi-neural-network approach.
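To make the learning-based formulation concrete, the following is a minimal sketch of a collision classifier trained on arm joint configurations. The joint names, angle ranges, the toy labeling rule, and the single-layer model are all illustrative assumptions for this sketch; the paper's actual training data, network architectures, and fuzzy components are not reproduced here.

```python
import numpy as np

# Toy self-collision classifier: predict whether an arm joint
# configuration collides with the torso. Joint names, ranges, and
# the labeling rule below are hypothetical, not the robot R3's real ones.
rng = np.random.default_rng(0)

def label(q):
    # Hypothetical rule: large combined shoulder/elbow flexion -> collision.
    return (q[:, 0] + 0.5 * q[:, 1] > 1.5).astype(float)

# Synthetic joint angles in radians: [shoulder_pitch, elbow_flex, wrist_roll]
X = rng.uniform(-1.0, 2.0, size=(2000, 3))
y = label(X)

# Single-layer logistic regression trained by batch gradient descent;
# a stand-in for the multi-neural-network approach described above.
w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted collision probability
    grad_w = X.T @ (p - y) / len(y)   # gradient of mean log-loss w.r.t. w
    grad_b = np.mean(p - y)           # gradient w.r.t. bias
    w -= lr * grad_w
    b -= lr * grad_b

# Training accuracy on the synthetic configurations.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y))
```

In a real system, each candidate sign trajectory would be checked pose-by-pose against such a classifier before the robot executes it.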