Neuromorphic systems are expected to provide a new computational paradigm that allows energy-efficient, intelligent systems to be implemented easily. One way of achieving this goal is to design processors based on Spiking Neural Networks (SNNs). Here, we introduce an architecture for realizing the Izhikevich neuron model that eases the hardware implementation of large-scale neural models. By applying a folding method, we ensure that multiple operations of the same type are performed by a single computing unit in a time-multiplexed manner. In this way, we obtain a design that uses hardware resources more efficiently, particularly by saving multipliers, and allows more neurons to be implemented on the hardware. Moreover, the architecture eliminates the need to allocate additional resources for implementing the synaptic dynamics of the neurons. Finally, to demonstrate the effectiveness of the proposed architecture, a simple cerebellar granular layer structure is implemented on an FPGA.
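For reference, the dynamics being realized in hardware are those of the standard Izhikevich model. The following is a minimal software sketch using Euler integration with textbook regular-spiking parameters; it illustrates the neuron dynamics only, not the paper's fixed-point, folded, time-multiplexed datapath, and the function names and parameter values are illustrative assumptions rather than anything taken from the architecture itself.

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One Euler step of the standard Izhikevich model.

    v: membrane potential (mV), u: recovery variable, I: input current.
    Returns (v, u, spiked). Parameters a-d are the common
    regular-spiking values from Izhikevich's original formulation.
    """
    v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u_new = u + dt * a * (b * v - u)
    if v_new >= 30.0:              # spike: reset v and bump u
        return c, u_new + d, True
    return v_new, u_new, False

def simulate(I=10.0, steps=1000):
    """Drive one neuron with constant current; return the spike count."""
    v, u = -65.0, -13.0            # resting state (u = b * v)
    spikes = 0
    for _ in range(steps):
        v, u, fired = izhikevich_step(v, u, I)
        spikes += fired
    return spikes

print(simulate())                  # constant suprathreshold input yields repeated spikes
```

In a folded hardware realization, the multiplications inside `izhikevich_step` are exactly the operations that would be shared by one multiplier across many neurons, with per-neuron state (`v`, `u`) cycled through it in time-multiplexed fashion.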