23rd Signal Processing and Communications Applications Conference (SIU), Malatya, Turkey, 16-19 May 2015, pp. 1586-1589
This paper proposes a method for classifying whether a viewer watching the screen from a fixed distance is involved in the screened content or not. This is achieved by integrating head location and head movement features; head-movement-based classification is activated when head location detection fails. 2-D feature vectors comprising the amplitude and angle of flow vectors extracted by the SIFT flow algorithm are used for motion classification. Head location is represented by 3-D location and area features computed with the Viola-Jones face detector. The Pointing 04 database is used as the training dataset for head movement estimation, while the recorded real video is used for head location detection. Both processes employ the recorded real video frames as the test dataset. Test results demonstrate that the motion features alone provide 67% accuracy, while decision fusion increases the head involvement classification performance up to 71%.
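A minimal sketch of this kind of pipeline is given below, assuming OpenCV. Since SIFT flow is not provided by OpenCV, dense Farneback optical flow stands in for it here, and loc_clf / mot_clf denote hypothetical pre-trained classifiers for the location and motion feature vectors; the fallback fusion rule follows the paper's description that motion-based classification is used when face detection fails.

```python
import cv2
import numpy as np

# Viola-Jones face detector shipped with OpenCV (Haar cascade).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def motion_features(prev_gray, gray):
    """2-D motion feature: mean amplitude and mean angle of the flow field.
    Farneback dense optical flow is used as a stand-in for SIFT flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return np.array([mag.mean(), ang.mean()])


def head_location_features(gray):
    """Location/area feature (center x, center y, box area) of the largest
    detected face, or None if detection fails."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return np.array([x + w / 2.0, y + h / 2.0, float(w * h)])


def classify_involvement(loc_feat, mot_feat, loc_clf, mot_clf):
    """Fallback fusion: use the location-based classifier when a face is
    found, otherwise fall back to the motion-based classifier.
    loc_clf and mot_clf are hypothetical classifiers with a predict() method."""
    if loc_feat is not None:
        return loc_clf.predict(loc_feat.reshape(1, -1))[0]
    return mot_clf.predict(mot_feat.reshape(1, -1))[0]
```

The two feature extractors operate per frame pair; any standard classifier trained on the respective feature sets can be plugged in for loc_clf and mot_clf.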