Efficient Hardware Implementation of Convolution Layers Using Multiply-Accumulate Blocks

Nojehdeh M. E., Parvin S., Altun M.

20th IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Virtual Event, 07 - 09 July 2021, pp.402-405

  • Publication Type: Conference Paper / Full Text
  • DOI Number: 10.1109/isvlsi51109.2021.00079
  • Page Numbers: pp.402-405
  • Istanbul Technical University Affiliated: Yes


In this paper, we propose an efficient method to realize a convolution layer of convolutional neural networks (CNNs). Inspired by the fully-connected neural network architecture, we introduce an efficient computation approach to implement convolution operations. Also, to reduce hardware complexity, we implement convolutional layers under a time-multiplexed architecture in which computing resources are reused across the multiply-accumulate (MAC) blocks. A comprehensive evaluation of convolution layers shows that, compared to the conventional MAC-based method, our proposed method achieves up to 97% and 50% reductions in dissipated power and computation time, respectively.