© 2022 IEEE. The problem of generating 3D facial animation from audio is drawing attention because of its demand for creating artificial characters in games and movies. Several studies in the literature address this problem; however, the generated facial animations are still far from realistic. In this work, we represent faces with the Facial Action Coding System (FACS) and collect a 37-minute dataset. We develop convolutional and transformer-based models. We observe that the trained model can generate animations usable in video games and virtual reality applications, even for novel speakers whose audio never appeared in the training data.