Learning algorithms can suffer a performance bias when data sets have only a small number of training examples for one or more classes. In this scenario, learning methods can produce deceptively good-looking overall results even when classification performance on the important minority class is poor. This paper compares two Genetic Programming (GP) approaches to classification with unbalanced data. The first adapts the fitness function to evolve classifiers with good classification ability on both the minority and majority classes. The second uses a multi-objective approach to simultaneously evolve a Pareto front (or set) of classifiers along the minority-class/majority-class trade-off surface. Our results show that classifiers with good classification ability were evolved across a range of binary classification tasks with unbalanced data.
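To make the performance-bias problem concrete, the following is a minimal sketch (not the paper's actual fitness function) of a class-balanced fitness measure: rather than raw accuracy, which a trivial majority-class predictor can inflate, it averages the per-class accuracies so that errors on the minority class count equally. All names and data here are illustrative assumptions.

```python
def balanced_fitness(predictions, labels):
    """Mean of per-class accuracies for a binary task
    (0 = majority class, 1 = minority class). Returns a value in [0, 1]."""
    per_class = []
    for cls in (0, 1):
        # Indices of examples belonging to this class.
        idx = [i for i, y in enumerate(labels) if y == cls]
        # Fraction of this class predicted correctly.
        correct = sum(1 for i in idx if predictions[i] == cls)
        per_class.append(correct / len(idx))
    return sum(per_class) / len(per_class)

# A trivial "always predict majority" classifier reaches 90% raw accuracy
# on a 9:1 unbalanced set, but only 0.5 balanced fitness.
labels = [0] * 9 + [1]
majority_preds = [0] * 10
print(balanced_fitness(majority_preds, labels))  # → 0.5
```

Under such a measure, a classifier that ignores the minority class scores no better than chance, which illustrates why adapting the fitness function can steer evolution toward classifiers that perform well on both classes.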