Enhancing the abilities of service robots is important for expanding what they can achieve in everyday manipulation tasks. It is equally essential, however, that they can determine what they cannot achieve, a need that arises when anomalies occur during task execution. These situations should be detected and identified so that the robot can recover from them. Identification requires a deeper time-series analysis of onboard sensor readings to track and relate anomaly indicators, since some indicators may be perceived long before an anomaly is detected. These readings are usually taken asynchronously and must be fused effectively for correct interpretation. In this paper, we propose a multimodal long short-term memory (LSTM)-based anomaly identification approach that fuses real-time visual, auditory, and proprioceptive observations during everyday object manipulation tasks. Models of anomaly symptoms are first learned and then used to classify anomalies in real time. We first provide a comparative analysis of our method against hidden Markov models (HMMs), conditional random fields (CRFs), and gated recurrent units (GRUs) on a Baxter robot executing everyday object manipulation scenarios. We then analyze the impact of each modality and of various feature extraction techniques on identification performance. We show that our method can identify anomalies by capturing long-term dependencies between anomaly indicators. The results indicate that the LSTM-based method outperforms the closest baseline by 2% in F-score (0.92) when classifying anomalies that occur at run time.
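The abstract notes that sensor readings arrive asynchronously and must be fused before classification, but does not spell out the alignment step. The sketch below illustrates one common way to do this, a zero-order hold that carries each modality's most recent sample forward to a shared set of timestamps; the function name `align_streams` and the example readings (vision, audio, torque) are hypothetical and not taken from the paper.

```python
from bisect import bisect_right

def align_streams(streams, times):
    """Zero-order-hold alignment of asynchronous sensor streams.

    streams: dict mapping modality name -> list of (timestamp, value)
             pairs, each list sorted by timestamp.
    times:   target timestamps at which to build fused observations.
    Returns one dict per target timestamp, holding each modality's
    most recent value at (or before) that time, or None if no sample
    has arrived yet.
    """
    fused = []
    for t in times:
        obs = {}
        for name, samples in streams.items():
            stamps = [s for s, _ in samples]
            i = bisect_right(stamps, t)  # first sample strictly after t
            obs[name] = samples[i - 1][1] if i > 0 else None
        fused.append(obs)
    return fused

# Hypothetical asynchronous readings at different rates
streams = {
    "vision": [(0.00, "cup_visible"), (0.33, "cup_tilting"), (0.66, "cup_fallen")],
    "audio":  [(0.05, 0.1), (0.15, 0.1), (0.25, 0.9), (0.35, 0.8)],
    "torque": [(0.00, 1.2), (0.20, 1.3), (0.40, 0.4)],
}
fused = align_streams(streams, times=[0.1, 0.3, 0.5])
# Each entry in `fused` is a synchronized observation suitable as one
# timestep of input to a sequence model such as an LSTM.
```

The resulting fixed-rate sequence of fused observations is the kind of input a recurrent classifier consumes one timestep at a time; interpolation or learned alignment are alternatives to the simple hold used here.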