A Study on Hardware-Aware Training Techniques for Feedforward Artificial Neural Networks


Parvin S., Altun M.

20th IEEE Computer Society Annual Symposium on VLSI (ISVLSI), Electronic Network (virtual), 7 - 9 July 2021, pp. 406-411

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/isvlsi51109.2021.00080
  • Country of Publication: Electronic Network (virtual)
  • Pages: pp. 406-411
  • Keywords: artificial neural networks, hardware-aware training, parallel and time-multiplexed architecture, weight-set approximation
  • Istanbul Technical University Affiliated: Yes

Abstract

This paper presents hardware-aware training techniques for the efficient hardware implementation of feedforward artificial neural networks (ANNs). First, we investigate the effect of weight initialization on the on-chip hardware implementation of the trained ANN. We show that our unorthodox initialization technique can yield better area efficiency than state-of-the-art weight initialization techniques. Second, we propose training based on large floating-point values: at the end of training, the algorithm obtains a weight-set consisting of integers simply by ceiling or flooring the large floating-point values. Third, the large floating-point training algorithm is integrated with a weight and bias approximation module that approximates a weight-set while optimizing the ANN for accuracy, in order to find a weight-set that is efficient for hardware realization. At the end of training, this integrated module generates the weight-set with the minimum hardware cost for that specific initialized weight-set. All the introduced algorithms are included in our toolbox, ZAAL. The trained ANNs are then realized in hardware under constant-multiplication design, using parallel and time-multiplexed architectures in TSMC 40 nm technology with Cadence tools.
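The integerization step described above can be sketched as follows. This is a hypothetical illustration, not the ZAAL implementation: it assumes the trained weights are large floating-point values and converts each one to an integer by flooring or ceiling, whichever is closer, as the abstract describes.

```python
import numpy as np

def integerize(weights):
    """Convert large floating-point weights to an integer weight-set
    by flooring or ceiling each value, whichever is nearer.
    (Illustrative sketch only; the actual ZAAL module also accounts
    for hardware cost, which is not modeled here.)"""
    floored = np.floor(weights)
    ceiled = np.ceil(weights)
    # Pick floor or ceil per element based on which is closer;
    # ties go to floor.
    use_floor = (weights - floored) <= (ceiled - weights)
    return np.where(use_floor, floored, ceiled).astype(int)

# Example: a small hypothetical weight-set after large floating-point training
w = np.array([123.7, -45.2, 8.5])
print(integerize(w))  # [124 -45   8]
```

Because the training deliberately drives weights toward large magnitudes, the relative error introduced by this rounding is small, which is why the integer weight-set can preserve accuracy while enabling constant-multiplication hardware.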