© 2022 IEEE. Designing visual content and characters for games is a time-consuming task, even for experienced designers and illustrators. Most game companies and developers use procedural methods to automate the design process, but the visual content produced by these algorithms is limited in variety. In this paper, we propose using Generative Adversarial Networks (GANs) for visual content production. Two different RPG and DnD visual image datasets were collected from the internet for training, and 6 different GAN models were trained on them. In 3 of the 18 experiments, transfer learning was used because of the limited dataset sizes. The Fréchet Inception Distance (FID) metric was used to compare the model results. SNGAN was the most successful model on both datasets. Moreover, the transfer learning approach (WGAN-GP, BigGAN) outperformed training from scratch.
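As background for the evaluation metric mentioned above: FID fits a Gaussian to Inception-v3 features of real and generated images and measures the Fréchet distance between the two Gaussians, FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). The sketch below (an illustration, not the authors' evaluation code) computes this distance from two feature matrices with NumPy/SciPy; the synthetic feature arrays are hypothetical stand-ins for real Inception features.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feat_a, feat_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    FID = ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2*sqrtm(Ca @ Cb))
    feat_a, feat_b: (n_samples, n_features) arrays of image features.
    """
    mu_a, cov_a = feat_a.mean(axis=0), np.cov(feat_a, rowvar=False)
    mu_b, cov_b = feat_b.mean(axis=0), np.cov(feat_b, rowvar=False)
    diff = mu_a - mu_b
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary parts from numerical noise
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Hypothetical features: "good" samples match the real distribution,
# "bad" samples are shifted, so their FID should be larger.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(2048, 16))
fake_good = rng.normal(0.0, 1.0, size=(2048, 16))
fake_bad = rng.normal(3.0, 1.0, size=(2048, 16))
fid_good = frechet_distance(real, fake_good)
fid_bad = frechet_distance(real, fake_bad)
print(fid_good < fid_bad)  # lower FID means closer to the real distribution
```

A lower FID indicates generated images whose feature statistics are closer to the real data, which is why it is used here to rank the GAN variants.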