We propose an effective combination of discriminative and generative tracking approaches that draws on the strengths of both. Our algorithm exploits the discriminative power of Faster R-CNN to generate target-specific region proposals. A new proposal distribution is formulated that incorporates both the dynamic model of the moving object and the detection hypotheses produced by the deep network. We construct a generative appearance model from the region proposals and perform tracking through sequential Bayesian filtering with a variable rate color particle filter (VRCPF). Results on the CVPR2013 benchmark data set demonstrate that interleaving the tracker and detector effectively updates the target distribution, significantly improving robustness to illumination changes, scale changes, fast motion, and occlusion.
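The core idea of mixing a dynamic model with detection hypotheses in the proposal distribution can be illustrated with a toy one-dimensional particle filter. This is a minimal sketch under simplifying assumptions, not the paper's implementation: the state is a scalar position, the mixture weight `alpha`, the noise scales, and the Gaussian stand-in for the color-histogram likelihood are all hypothetical illustrative choices.

```python
import math
import random

random.seed(0)

def mixture_proposal(particles, detections, alpha=0.7,
                     motion_std=2.0, det_std=1.0):
    """Sample new particles from a mixture: with probability alpha,
    propagate via the dynamic model (random walk); otherwise sample
    near a detector hypothesis. Toy stand-in for a detector-informed
    proposal distribution."""
    new = []
    for x in particles:
        if detections and random.random() > alpha:
            d = random.choice(detections)
            new.append(random.gauss(d, det_std))
        else:
            new.append(random.gauss(x, motion_std))
    return new

def likelihood(x, observation, obs_std=1.5):
    # Gaussian placeholder for an appearance (e.g. color-histogram) likelihood
    return math.exp(-0.5 * ((x - observation) / obs_std) ** 2)

def resample(particles, weights):
    # Multinomial resampling proportional to the importance weights
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

# Simulated sequence: the target moves from 2.0 toward 10.0 and a
# (hypothetical) detector fires near the true position at each step.
particles = [random.gauss(0.0, 5.0) for _ in range(500)]
for obs in [2.0, 4.0, 6.0, 8.0, 10.0]:
    detections = [obs + random.gauss(0.0, 0.5)]  # simulated detector output
    particles = mixture_proposal(particles, detections)
    weights = [likelihood(x, obs) for x in particles]
    particles = resample(particles, weights)

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # posterior mean should settle near 10
```

Sampling a fraction of particles around detections lets the filter recover from drift after occlusion or fast motion, since the proposal is no longer tied solely to the previous posterior.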