Energy-Efficient Hardware Implementation of Fully Connected Artificial Neural Networks Using Approximate Arithmetic Blocks



Esmali Nojehdeh M., Altun M.

Circuits, Systems, and Signal Processing, vol. 42, no. 9, pp. 5428-5452, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 42 Issue: 9
  • Publication Date: 2023
  • DOI: 10.1007/s00034-023-02363-w
  • Journal Name: Circuits, Systems, and Signal Processing
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Communication Abstracts, Compendex, zbMATH
  • Page Numbers: pp. 5428-5452
  • Keywords: Approximate adder, Approximate multiplier, Artificial neural network (ANN), Multiply accumulate (MAC)
  • İstanbul Teknik Üniversitesi Affiliated: Yes

Abstract

In this paper, we explore the efficient hardware implementation of feedforward artificial neural networks (ANNs) using approximate adders and multipliers. Because a fully parallel architecture requires a large area, the ANNs are implemented under a time-multiplexed architecture in which computing resources are reused in the multiply-accumulate (MAC) blocks. Efficient hardware is obtained by replacing the exact adders and multipliers in the MAC blocks with approximate ones, taking the hardware accuracy into account. Additionally, an algorithm is proposed to determine the approximation level of the multipliers and adders based on the expected accuracy. As applications, the MNIST and SVHN datasets are considered. To examine the efficiency of the proposed method, various ANN architectures and structures are realized. Experimental results show that ANNs designed using the proposed approximate multiplier occupy a smaller area and consume less energy than those designed using prominent approximate multipliers proposed previously. It is also observed that using both approximate adders and multipliers yields up to 50% and 10% reductions in the energy consumption and area of the ANN design, respectively, with only a small deviation in, or even better, hardware accuracy compared to exact adders and multipliers.
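The core idea, a time-multiplexed MAC whose exact adder and multiplier are swapped for approximate units with a tunable approximation level, can be illustrated with a short bit-level simulation. The sketch below is a minimal C model under stated assumptions, not the paper's circuits: it uses a generic lower-part-OR adder (LOA) and a truncation-based shift-and-add multiplier, two common textbook approximate blocks, and a hypothetical parameter `k` (number of approximated low-order bits) stands in for the approximation level that the paper's accuracy-driven algorithm would select.

```c
/* Minimal behavioral sketch of a MAC built from generic approximate
 * arithmetic units. The LOA adder and truncated multiplier are common
 * textbook designs, NOT necessarily the circuits proposed in the paper;
 * the parameter k is a hypothetical stand-in for the approximation
 * level chosen by the paper's accuracy-driven algorithm. Assumes k < 32. */
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

/* Lower-part-OR adder (simplest LOA variant): the k low bits are OR-ed
 * instead of added, so no carry propagates out of the approximate part. */
static uint32_t loa_add(uint32_t a, uint32_t b, unsigned k)
{
    uint32_t low_mask = (1u << k) - 1u;
    uint32_t low  = (a | b) & low_mask;               /* approximate part */
    uint32_t high = (a & ~low_mask) + (b & ~low_mask); /* exact part      */
    return high | low;
}

/* Truncated shift-and-add multiplier: the k least-significant columns of
 * the partial-product array are discarded before summation. */
static uint32_t trunc_mul(uint16_t a, uint16_t b, unsigned k)
{
    uint32_t keep = ~((1u << k) - 1u);
    uint32_t p = 0;
    for (unsigned i = 0; i < 16; ++i)
        if (b & (1u << i))
            p = loa_add(p, ((uint32_t)a << i) & keep, k);
    return p;
}

/* Time-multiplexed neuron MAC: a single multiplier and a single adder are
 * reused across all n inputs, mirroring the resource reuse in the abstract. */
static uint32_t neuron_mac(const uint16_t *x, const uint16_t *w,
                           unsigned n, unsigned k)
{
    uint32_t acc = 0;
    for (unsigned i = 0; i < n; ++i)
        acc = loa_add(acc, trunc_mul(x[i], w[i], k), k);
    return acc;
}

int main(void)
{
    uint16_t x[4] = {13, 7, 200, 91};  /* toy inputs and weights */
    uint16_t w[4] = {3, 25, 2, 11};
    uint32_t exact = 0;
    for (unsigned i = 0; i < 4; ++i)
        exact += (uint32_t)x[i] * w[i];
    for (unsigned k = 0; k <= 8; k += 4) /* k = 0 reproduces the exact MAC */
        printf("k=%u  approx=%" PRIu32 "  exact=%" PRIu32 "\n",
               k, neuron_mac(x, w, 4, k), exact);
    return 0;
}
```

The sketch also hints at why such blocks save area and energy in hardware: OR-ing the low bits removes the carry chain from the approximate part of the adder, and truncation removes entire partial-product columns from the multiplier, so the circuit shrinks while the output error grows gracefully with k.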