A Study on Hardware-Aware Training Techniques for Feedforward Artificial Neural Networks

Parvin S., Altun M.

20th IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Virtual Conference, 07 - 09 July 2021, pp.406-411

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/isvlsi51109.2021.00080
  • Page Numbers: pp.406-411
  • Keywords: artificial neural networks, hardware-aware training, parallel and time-multiplexed architecture, weightset approximation
  • Istanbul Technical University Affiliated: Yes


This paper presents hardware-aware training techniques for efficient hardware implementation of feedforward artificial neural networks (ANNs). First, we investigate the effect of weight initialization on the hardware implementation of a trained ANN on a chip, and show that our unorthodox initialization technique can yield better area efficiency than state-of-the-art weight initialization techniques. Second, we propose training based on large floating-point values: at the end of training, the algorithm obtains a weight-set consisting of integer numbers simply by ceiling/flooring the large floating-point values. Third, the large floating-point training algorithm is integrated with a weight and bias approximation module that approximates a weight-set while optimizing the ANN for accuracy, in order to find a weight-set that is efficient for hardware realization. At the end of training, this integrated module generates the weight-set with the minimum hardware cost for that specific initialized weight-set. All the introduced algorithms are included in our toolbox, ZAAL. The trained ANNs are then realized in hardware as constant-multiplication designs, using parallel and time-multiplexed architectures, in TSMC 40 nm technology with Cadence tools.
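The ceiling/flooring step described in the abstract can be illustrated with a minimal sketch: after training drives the weights toward large floating-point magnitudes, each weight is mapped to whichever of its floor or ceiling is closer, yielding an integer weight-set with small relative error. This is a hypothetical helper for illustration only, not the authors' ZAAL implementation.

```python
import math

def integer_weight_set(weights):
    """Quantize trained floating-point weights to integers by taking,
    for each weight, whichever of floor/ceil is closer to the original
    value (illustrative sketch, not the ZAAL toolbox code)."""
    quantized = []
    for w in weights:
        lo, hi = math.floor(w), math.ceil(w)
        # choose the rounding direction with the smaller deviation
        quantized.append(hi if hi - w < w - lo else lo)
    return quantized

# Large-magnitude trained weights lose little relative precision
# when floored/ceiled to integers.
print(integer_weight_set([127.8, -64.1, 255.49]))  # → [128, -64, 255]
```

The relative quantization error per weight is at most 1/|w|, which is why training toward large floating-point values keeps the accuracy loss of the integer weight-set small.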