20th IEEE Computer Society Annual Symposium on VLSI (ISVLSI), virtual conference, 07-09 July 2021, pp. 402-405
In this paper, we propose an efficient method to realize the convolution layers of convolutional neural networks (CNNs). Inspired by the fully-connected neural network architecture, we introduce an efficient computation approach to implement convolution operations. To reduce hardware complexity, we also implement the convolutional layers under a time-multiplexed architecture in which computing resources are reused across the multiply-accumulate (MAC) blocks. A comprehensive evaluation of convolution layers shows that, compared to the conventional MAC-based method, our proposed method reduces dissipated power by up to 97% and computation time by up to 50%.
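To make the time-multiplexed idea concrete, the following is a minimal software sketch, not the authors' hardware design: a 2-D convolution in which a single MAC operation is reused once per "cycle" for every tap of the kernel window, rather than instantiating one multiplier per tap. All function names here are illustrative assumptions.

```python
def mac(acc, a, b):
    # One multiply-accumulate step: acc + a * b.
    # In hardware, this models a single shared MAC block.
    return acc + a * b

def conv2d_time_multiplexed(image, kernel):
    # Valid (no-padding) 2-D convolution computed by reusing one MAC
    # unit sequentially, mimicking a time-multiplexed datapath.
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            acc = 0
            # The same MAC block handles every tap of the window,
            # one per time step, instead of kh*kw parallel multipliers.
            for i in range(kh):
                for j in range(kw):
                    acc = mac(acc, image[r + i][c + j], kernel[i][j])
            out[r][c] = acc
    return out

# Small example: 3x3 input, 2x2 diagonal kernel
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ker = [[1, 0],
       [0, 1]]
print(conv2d_time_multiplexed(img, ker))  # [[6, 8], [12, 14]]
```

The trade-off this sketch illustrates is the one the abstract describes: serializing the window onto one shared MAC block lowers hardware cost (fewer multipliers, less dissipated power) at the price of more clock cycles per output, which the paper's computation scheme then works to recover.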