A low-cost compensated approximate multiplier for Bfloat16 data processing on convolutional neural network inference

Bibliographic Details
Published in: ETRI Journal, Vol. 43, no. 4, pp. 684-693
Main Author: Kim, HyunJin
Format: Journal Article
Language: English
Published: Electronics and Telecommunications Research Institute (ETRI), 01.08.2021
ISSN: 1225-6463, 2233-7326
DOI: 10.4218/etrij.2020-0370


Summary: This paper presents a low-cost two-stage approximate multiplier for bfloat16 (brain floating-point) data processing. For cost-efficient approximate multiplication, the first stage implements Mitchell's algorithm, which performs the approximate multiplication using only two adders. The second stage compensates for the first-stage error with an exact multiplication: it multiplies the error terms and adds the truncated product to the final output. The low-cost multiplications in both stages reduce hardware costs significantly while keeping relative errors low. We apply our approximate multiplier to convolutional neural network (CNN) inference, where well-known pre-trained models on the ImageNet database show only small accuracy drops. Therefore, our design enables low-cost CNN inference systems with high test accuracy.
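
As an illustration of the two-stage scheme, the sketch below models both stages in Python over positive normal values: the first stage applies Mitchell's approximation log2(1+x) ≈ x, so the mantissa product reduces to an addition, and the second stage multiplies truncated error terms, following the standard error analysis of Mitchell's algorithm. The floating-point modelling, the error-term selection, and the truncation width t are illustrative assumptions, not the paper's fixed-point bfloat16 datapath.

import math

def decompose(v: float):
    # Write a positive normal value as v = 2**k * (1 + x), 0 <= x < 1.
    m, e = math.frexp(v)  # v = m * 2**e with 0.5 <= m < 1
    return e - 1, 2.0 * m - 1.0

def mitchell_multiply(a: float, b: float) -> float:
    # First stage: Mitchell's algorithm. log2(1+x) ~= x reduces the
    # product to two additions (exponents, mantissa fractions).
    ka, xa = decompose(a)
    kb, xb = decompose(b)
    s = xa + xb
    if s < 1.0:
        return 2.0 ** (ka + kb) * (1.0 + s)
    return 2.0 ** (ka + kb + 1) * s  # carry into the exponent when s >= 1

def compensated_multiply(a: float, b: float, t: int = 3) -> float:
    # Second stage (sketch): the exact product 2**(ka+kb)*(1+xa)*(1+xb)
    # exceeds Mitchell's result by 2**(ka+kb)*xa*xb when xa + xb < 1 and
    # by 2**(ka+kb)*(1-xa)*(1-xb) otherwise, so a small exact multiplier
    # on truncated error terms recovers most of the loss. The t-bit
    # truncation width is an assumption, not the paper's exact choice.
    ka, xa = decompose(a)
    kb, xb = decompose(b)
    ea, eb = (xa, xb) if xa + xb < 1.0 else (1.0 - xa, 1.0 - xb)

    def q(x: float) -> float:
        return math.floor(x * 2 ** t) / 2 ** t  # keep t fraction bits

    return mitchell_multiply(a, b) + 2.0 ** (ka + kb) * q(ea) * q(eb)

# Example: 6 * 6 = 36. Mitchell alone returns 32; the compensated
# version restores 36 exactly because the error terms fit in t bits.
print(mitchell_multiply(6.0, 6.0), compensated_multiply(6.0, 6.0))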
Bibliography: Funding information
This research was supported by the research fund of Dankook University in 2018.