We propose an object tracking method for Wide Area Motion Imagery (WAMI) video sequences that models tracking as a regularization problem through sparse representation of the aerial video content. The proposed tracker, L1Dpct, employs particle filter tracking and, unlike existing methods, integrates a deep-learning-based object detector into the regularization scheme to improve tracking performance. To enhance robustness to occlusion and scale changes, L1Dpct monitors the state propagation, the sparsity level, and the representation capability of the model, and uses feedback from the detector to update the observation model of the particle filter. L1Dpct also incrementally updates the dictionary of the sparse representation, which enables efficient modeling of appearance changes arising from illumination variation and fast motion. Numerical results on the commonly used VIVID and UAV123 datasets show that L1Dpct significantly outperforms state-of-the-art trackers in terms of precision rate and success rate.
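To make the sparse-representation step of the abstract concrete, the sketch below shows the standard L1-tracker formulation that such methods build on: each candidate image patch proposed by the particle filter is sparsely coded over a template dictionary by solving an l1-regularized least-squares problem, and the candidate with the smallest reconstruction error is preferred. This is a minimal illustration using ISTA (iterative soft-thresholding), with all function names and parameter values chosen here for illustration; it is not the authors' L1Dpct implementation, which additionally incorporates detector feedback and incremental dictionary updates.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(y, D, lam=0.01, n_iter=200):
    """Solve min_c 0.5*||y - D c||^2 + lam*||c||_1 with ISTA (illustrative)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)           # gradient of the smooth term
        c = soft_threshold(c - grad / L, lam / L)
    return c

def score_candidates(candidates, D, lam=0.01):
    """Reconstruction error per candidate patch; lower means a better match
    to the template dictionary, so the best particle minimizes this score."""
    errors = []
    for y in candidates:
        c = sparse_code(y, D, lam)
        errors.append(float(np.linalg.norm(y - D @ c) ** 2))
    return np.array(errors)
```

In a particle-filter tracker, `candidates` would be vectorized patches sampled around the predicted state, and the particle weights would be derived from these reconstruction errors (e.g., via a Gaussian likelihood); the dictionary `D` would then be incrementally refreshed with well-tracked patches to absorb appearance change.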