Classification of nonlinearly separable data with nonlinear support vector machines (SVMs) is often a difficult task, largely because a suitable kernel type must be chosen. Moreover, to achieve optimal classification performance with a nonlinear SVM, the kernel and its parameters must be determined in advance. In this paper, we propose a new classification method, called support vector selection and adaptation (SVSA), which is applicable to both linearly and nonlinearly separable data without requiring any kernel choice. The method consists of two steps: selection and adaptation. In the selection step, the support vectors are first obtained by a linear SVM; these support vectors are then classified with the K-nearest neighbor method, and those that are misclassified are rejected. In the adaptation step, the remaining support vectors are iteratively adapted with respect to the training data to generate the reference vectors. Afterward, the test data are classified by 1-nearest neighbor with the reference vectors. The SVSA method was applied to synthetic data, multisource Colorado data, post-earthquake remote sensing data, and hyperspectral data. The experimental results show that SVSA is competitive with the traditional nonlinear SVM on both linearly and nonlinearly separable data.
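The two steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn for the linear SVM and the K-nearest neighbor filter, and it uses an LVQ1-style update rule as one plausible reading of "iteratively adapted with respect to the training data"; the function names, learning-rate schedule, and hyperparameters (`k`, `epochs`, `lr0`) are illustrative choices, not from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def svsa_fit(X, y, k=5, epochs=20, lr0=0.1):
    """Sketch of SVSA training: selection then adaptation."""
    # Selection step, part 1: support vectors of a linear SVM.
    svm = SVC(kernel="linear", C=1.0).fit(X, y)
    sv, sv_y = svm.support_vectors_, y[svm.support_]
    # Selection step, part 2: reject support vectors that the
    # K-nearest neighbor classifier (trained on all training data)
    # misclassifies.
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    keep = knn.predict(sv) == sv_y
    refs, refs_y = sv[keep].copy(), sv_y[keep]
    # Adaptation step (assumed LVQ1-style rule): move the nearest
    # reference vector toward a training sample of the same class,
    # away from a sample of a different class, with a decaying rate.
    rng = np.random.default_rng(0)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(refs - X[i], axis=1))
            sign = 1.0 if refs_y[j] == y[i] else -1.0
            refs[j] += sign * lr * (X[i] - refs[j])
    return refs, refs_y

def svsa_predict(refs, refs_y, X):
    """Classify test samples by 1-nearest neighbor to the reference vectors."""
    d = np.linalg.norm(X[:, None, :] - refs[None, :, :], axis=2)
    return refs_y[np.argmin(d, axis=1)]
```

On a nonlinearly separable toy set (e.g. two interleaving half-moons), the adapted reference vectors give the 1-NN rule a piecewise-linear decision boundary, which is how the method handles data a single linear SVM cannot separate.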