TALIPOT: Energy-Efficient DNN Booster Employing Hybrid Bit Parallel-Serial Processing in MSB-First Fashion

Karadeniz M. B., Altun M.

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, vol.41, no.8, pp.2714-2727, 2022 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 41 Issue: 8
  • Publication Date: 2022
  • Doi Number: 10.1109/tcad.2021.3110747
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Metadex, Civil Engineering Abstracts
  • Page Numbers: pp.2714-2727
  • Keywords: Energy consumption, Adders, Clocks, Arithmetic, Optimization, Delays, Deep learning, Activation rounding, ASIC, deep neural network (DNN), hardware accelerator, hybrid number representation (HNR), ACCELERATOR
  • Istanbul Technical University Affiliated: Yes


We propose TALIPOT, a novel hybrid bit parallel-serial processing technique that reduces the energy consumption of deep neural networks (DNNs). TALIPOT works as a computation booster and has an inherent ability to adjust the tradeoff between the accuracy and the energy consumption of DNNs. The core principle of TALIPOT is keeping the number of serial bits the same throughout the entire computing process, so energy is spent effectively without extra waiting times between the inputs and outputs of the computing blocks (adders, multipliers, and activation blocks). To achieve this, we implement activation rounding, which scales down the accumulation of parallel bits at the output of the hidden layers of the DNN. TALIPOT operates in most significant bit first (MSB-first) fashion, so the most valuable bit information is obtained first between the layers of the DNN, which ensures that activation rounding is performed accurately and efficiently. Thanks to this method, we optimize the operating accuracy/energy point by cutting off output bits as soon as the desired accuracy is reached. Simulations using the MNIST and CIFAR-10 datasets show that TALIPOT outperforms state-of-the-art computation techniques in terms of energy consumption. TALIPOT performs MNIST classification with an energy efficiency of 25.3 TOPS/W and an accuracy of 98.2% in an ASIC environment using a 40 nm CMOS process.
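The MSB-first bit-serial idea with bit cutoff can be illustrated with a small software model. This is a hedged sketch only: the function name and parameters below are illustrative and do not correspond to the paper's actual ASIC datapath; it merely shows how streaming activation bit-planes from the most significant bit downward lets computation stop early, trading a bounded accuracy loss for fewer operations (a software proxy for energy).

```python
# Illustrative model (not the TALIPOT hardware): evaluate a dot product
# by streaming activation bits MSB-first and optionally cutting off LSBs.

def msb_first_dot(weights, activations, total_bits=8, kept_bits=8):
    """Approximate dot(weights, activations) using MSB-first bit-planes.

    `activations` are unsigned integers in [0, 2**total_bits).
    Keeping fewer bit-planes means less work but a coarser result,
    mirroring the accuracy/energy tradeoff described in the abstract.
    """
    acc = 0
    # Walk bit positions from the MSB down, stopping after kept_bits planes.
    for bit in range(total_bits - 1, total_bits - 1 - kept_bits, -1):
        # Partial sum contributed by this bit-plane of every activation.
        plane = sum(w * ((a >> bit) & 1) for w, a in zip(weights, activations))
        acc += plane << bit  # weight the plane by its bit significance
    return acc

ws = [3, -1, 2, 5]
xs = [200, 17, 96, 255]
exact = sum(w * x for w, x in zip(ws, xs))          # 2050
full = msb_first_dot(ws, xs, total_bits=8, kept_bits=8)   # equals exact
rough = msb_first_dot(ws, xs, total_bits=8, kept_bits=4)  # 4 MSB planes only
```

Because the most significant planes carry the largest share of the result, the truncated value stays close to the exact one, which is why an MSB-first order makes early cutoff (and activation rounding on the accumulated result) effective.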