© 2022 SPIE. With the widespread use of thermal cameras in fields such as medicine, the military, surveillance, astronomy, and fire detection, image distortion caused by the structure of thermal sensors has become an important problem. Because every detector (pixel) in the sensor responds differently even when fed the same signal, a correction is necessary for good imaging; this correction is known as non-uniformity correction (NUC). Because the response of each detector drifts slowly and randomly over time, a one-time, factory calibration of the array is not enough, and traditional methods are not sufficient for operational use. Calibration-based approaches are undesirable because of the shutter sound in uncooled thermal imagers and the time gaps they introduce during imaging with cooled thermal imagers. Scene-based approaches, on the other hand, are not preferred due to their high computational cost or rather unrealistic assumptions about the scene. In this study, we propose a deep learning based approach for both cooled and uncooled thermal imagers. We created various thermal datasets to train models for temporal noise in both cooled and uncooled thermal imagers and compared the results. In a thermal system, many operations such as NUC, BPR, and IOP are applied in sequence, from the raw detector output to the final output shown to the user; our deep learning model accounts for the entirety of these operations. We also show that optical artifacts and distortions can be eliminated using deep learning, and we demonstrate this with different system architectures that are suitable for embedded systems.
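As background for the calibration-based NUC that the abstract contrasts against, a classical correction is a per-pixel linear (two-point) model fitted from two uniform reference scenes. The sketch below is purely illustrative and not from the paper: the array size, gain/offset maps, and function names are assumptions, and it models an ideally linear detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 8x8 focal-plane array: each pixel has its own gain and offset
# (the non-uniformity), so even a flat scene shows fixed-pattern noise.
true_gain = 1.0 + 0.1 * rng.standard_normal((8, 8))
true_offset = 5.0 * rng.standard_normal((8, 8))

def detector_response(scene):
    """Raw readout: per-pixel linear distortion of the incoming flux."""
    return true_gain * scene + true_offset

# Two-point calibration: observe two uniform (blackbody) references.
t_low, t_high = 100.0, 200.0
y_low = detector_response(np.full((8, 8), t_low))
y_high = detector_response(np.full((8, 8), t_high))

# Solve per pixel for a correction gain/offset mapping raw -> true flux.
corr_gain = (t_high - t_low) / (y_high - y_low)
corr_offset = t_low - corr_gain * y_low

def nuc(raw):
    """Apply the per-pixel linear non-uniformity correction."""
    return corr_gain * raw + corr_offset

scene = np.full((8, 8), 150.0)
corrected = nuc(detector_response(scene))
print(np.allclose(corrected, scene))  # True for a truly linear detector
```

Because, as noted above, each pixel's response drifts slowly and randomly over time, such calibration maps go stale, which is what motivates shutter-based recalibration and, in turn, the scene-based and learning-based alternatives discussed in this work.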