The aim of this paper is to compare the effect of feature selection methods on emotion recognition from speech and song. Emotion recognition consists of signal processing, feature extraction, and classification steps. Many recent studies have focused on features common to speech and song and have used a sub-task classification approach for these systems. In this paper, speech and song data are merged and processed together in order to focus on the feature selection phase. The Autoencoder, Relief-F, and Chi-Square selection methods are applied to increase classification accuracy. Although the selection methods produce similar results, using the Relief-F method together with Mel Frequency Cepstral Coefficient (MFCC) features outperforms the previously achieved accuracy rates.
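To illustrate the feature selection step, the following is a minimal sketch of Relief-style feature scoring on a feature matrix (e.g., MFCC-derived features). It is a simplified binary-class variant of the Relief family, not the paper's implementation; the function name, parameters, and toy data are all illustrative assumptions.

```python
import numpy as np

def relief_scores(X, y, n_iters=None, rng=None):
    """Score features by how well they separate classes (Relief-style sketch).

    For each sampled instance, the weight of each feature is increased by its
    distance to the nearest miss (other class) and decreased by its distance
    to the nearest hit (same class). Higher score = more discriminative.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    n_iters = n if n_iters is None else n_iters
    # Scale each feature to [0, 1] so per-feature differences are comparable.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span
    w = np.zeros(d)
    for i in rng.choice(n, size=n_iters, replace=False):
        dist = np.abs(Xs - Xs[i]).sum(axis=1)   # L1 distances to instance i
        dist[i] = np.inf                        # exclude the instance itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest other-class
        w += np.abs(Xs[i] - Xs[miss]) - np.abs(Xs[i] - Xs[hit])
    return w / n_iters
```

On a toy matrix where only the first feature separates the two classes, the first feature receives the higher score, so ranking features by this score and keeping the top-k performs the selection.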