Developing object tracking techniques that are robust to blur, scale changes, occlusion, and illumination changes is a challenging problem in many applications. Recently, many deep learning algorithms for visual object tracking have been proposed. These algorithms mostly perform object detection without exploiting the temporal information in the video. Nevertheless, they achieve high object detection accuracy as a result of extensive training. Particle filtering, on the other hand, can track objects with lower computational complexity and without any training, provided that the state transition and observation models are formulated appropriately. In this paper, the tracking performance of two deep visual object trackers (Faster R-CNN and Mask R-CNN) and a variable-rate, color-based particle filter is evaluated on the OTB-50, VOT 2016, and VOT 2017 datasets. The strengths and weaknesses of both approaches are examined. It is concluded that the deep learning methods outperform particle filtering under occlusion and scale changes, whereas particle filtering is more robust to illumination changes and blur. Integrating the two approaches improves object tracking accuracy.
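The variable-rate, color-based particle filter evaluated in the paper is not specified in this abstract. As a rough illustration of the general idea only, the sketch below implements a minimal bootstrap (SIR) particle filter tracking a 2D target position, with a random-walk state transition model and a Gaussian observation likelihood; the particle count, noise scales, and resampling threshold are illustrative assumptions, not the authors' settings, and a real color-based tracker would replace the Gaussian likelihood with a color-histogram similarity measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500,
                    trans_sigma=2.0, obs_sigma=3.0):
    """Bootstrap (SIR) particle filter for a 2D random-walk target.

    observations: (T, 2) array of noisy position measurements.
    Returns a (T, 2) array of posterior-mean position estimates.
    """
    # Initialize particles around the first observation.
    particles = observations[0] + rng.normal(0, obs_sigma, (n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in observations:
        # State transition model: random walk with Gaussian noise.
        particles = particles + rng.normal(0, trans_sigma, particles.shape)
        # Observation model: Gaussian likelihood of the measurement
        # (a color-based tracker would score particles by histogram
        # similarity between the candidate patch and the target model).
        sq_err = np.sum((particles - z) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * sq_err / obs_sigma ** 2)
        weights /= weights.sum()
        # Posterior-mean estimate of the target position.
        estimates.append(weights @ particles)
        # Resample when the effective sample size drops too low.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Synthetic demo: a target drifting diagonally, observed with noise.
truth = np.cumsum(np.ones((50, 2)), axis=0)       # true trajectory
obs = truth + rng.normal(0, 3.0, truth.shape)     # noisy measurements
est = particle_filter(obs)
rmse = np.sqrt(np.mean(np.sum((est - truth) ** 2, axis=1)))
```

Because the filter carries the full posterior forward between frames, it exploits exactly the temporal information that per-frame detectors such as Faster R-CNN and Mask R-CNN ignore, at a fraction of their computational cost.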