Optimization of casting parameters is essential for product quality in foundries. Today, new approaches such as artificial neural networks are increasingly used to optimize process parameters. In this study, a neural network model has been developed to control grain size in aluminum casting alloys. Important grain refinement parameters, namely casting temperature, holding time, and Ti addition level, were used as inputs to the model. The network architecture was optimized to a single hidden layer of 6 neurons, trained for 241 cycles with a quasi-Newton algorithm. The mean absolute percentage error between experimental measurements and model predictions was 0.99%, and the R² value was calculated as 99.2%. The minimum grain size was measured at a casting temperature of 680 °C, a Ti addition of 0.25%, and a holding time of 25 min. Good agreement was found between the experimental measurements and the artificial neural network predictions.
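The network described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's MLPRegressor with the L-BFGS solver (a quasi-Newton method), a single hidden layer of 6 neurons, and the three inputs named in the abstract. The training data below is synthetic and purely illustrative; the trend (finer grains with more Ti and longer holding) is an assumption made only so the example runs end to end.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic inputs: [casting temperature (deg C), Ti addition (%), holding time (min)]
X = np.column_stack([
    rng.uniform(660, 720, 200),    # casting temperature
    rng.uniform(0.05, 0.30, 200),  # Ti addition level
    rng.uniform(5, 30, 200),       # holding time
])
# Illustrative grain-size response (um), assumed for this sketch only
y = 400 - 600 * X[:, 1] - 3 * X[:, 2] + 0.2 * (X[:, 0] - 680) \
    + rng.normal(0, 5, 200)

# Standardize inputs, as is typical before neural network training
scaler = StandardScaler()
Xs = scaler.fit_transform(X)

# Single hidden layer with 6 neurons, quasi-Newton (L-BFGS) training
model = MLPRegressor(hidden_layer_sizes=(6,), solver="lbfgs",
                     max_iter=500, random_state=0)
model.fit(Xs, y)

# Predict grain size at the abstract's reported optimum conditions
pred = model.predict(scaler.transform([[680.0, 0.25, 25.0]]))
print(f"predicted grain size: {pred[0]:.1f} um")
```

In practice the model would be trained on the measured grain sizes, and the fit quality reported via MAPE and R² on held-out data, mirroring the error metrics quoted in the abstract.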