GeCo: Classification Restricted Boltzmann Machine Hardware for On-Chip Semisupervised Learning and Bayesian Inference

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 31, No. 1, pp. 53-65
Main Authors: Yi, Wooseok; Park, Junki; Kim, Jae-Joon
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2019.2899386

More Information
Summary: The probabilistic Bayesian inference of real-time input data is becoming more popular, and the importance of semisupervised learning is growing. We present a classification restricted Boltzmann machine (ClassRBM)-based hardware accelerator with on-chip semisupervised learning and Bayesian inference capability. ClassRBM is a specific type of Markov network that can perform classification tasks and reconstruct its input data. ClassRBM has several advantages in terms of hardware implementation compared to backpropagation-based neural networks; however, its accuracy is relatively low compared to backpropagation-based learning. To improve the accuracy of ClassRBM, we propose the multi-neuron-per-class (multi-NPC) voting scheme. We also reveal that the contrastive divergence (CD) algorithm, which is commonly used to train RBMs, shows poor performance on this multi-NPC ClassRBM. As an alternative, we propose an asymmetric contrastive divergence (ACD) training algorithm that improves the accuracy of multi-NPC ClassRBM. With the ACD learning algorithm, ClassRBM operates as a combination of Markov chain training and Bayesian inference. The experimental results on a field-programmable gate array (FPGA) board for the Modified National Institute of Standards and Technology (MNIST) data set confirm that the inference accuracy of the proposed ACD algorithm is 5.82% higher for a supervised learning case and 12.78% higher for a 1% labeled semisupervised learning case than that of the conventional CD algorithm. Also, the GeCo ver.2 hardware implemented on a Xilinx ZCU102 FPGA board was 349.04 times faster than the C simulation on a CPU.
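To make the abstract's terminology concrete, below is a minimal NumPy sketch of a ClassRBM with a multi-neuron-per-class output layer, trained with plain CD-1. It only illustrates the model structure the abstract names; the paper's ACD algorithm, its semisupervised training flow, and the FPGA datapath are not reproduced. Layer sizes, the learning rate, and the free-energy-based voting rule used for classification are illustrative assumptions, not the authors' settings.

```python
# Minimal ClassRBM sketch with a multi-NPC output layer, trained by CD-1.
# Assumptions (not from the paper): layer sizes, learning rate, voting rule.
import numpy as np

rng = np.random.default_rng(0)

N_VIS, N_HID = 784, 256          # MNIST-sized visible layer (assumption)
N_CLASS, NPC = 10, 4             # 4 output neurons per class (multi-NPC, assumption)
N_OUT = N_CLASS * NPC

W = 0.01 * rng.standard_normal((N_HID, N_VIS))   # hidden-visible weights
U = 0.01 * rng.standard_normal((N_HID, N_OUT))   # hidden-class weights
b = np.zeros(N_VIS)                              # visible bias
c = np.zeros(N_HID)                              # hidden bias
d = np.zeros(N_OUT)                              # class-unit bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

def labels_to_units(y):
    """Replicate a class label across its NPC output neurons."""
    out = np.zeros(N_OUT)
    out[y * NPC:(y + 1) * NPC] = 1.0
    return out

def cd1_step(x0, y0, lr=0.01):
    """One CD-1 update on a single (input, label) pair."""
    global W, U, b, c, d
    t0 = labels_to_units(y0)
    # Positive phase: hidden activation given data and label units.
    ph0 = sigmoid(c + W @ x0 + U @ t0)
    h0 = sample(ph0)
    # Negative phase: reconstruct visible and class units, then hidden.
    x1 = sample(sigmoid(b + W.T @ h0))
    t1 = sample(sigmoid(d + U.T @ h0))
    ph1 = sigmoid(c + W @ x1 + U @ t1)
    # Parameter updates from the positive/negative statistics.
    W += lr * (np.outer(ph0, x0) - np.outer(ph1, x1))
    U += lr * (np.outer(ph0, t0) - np.outer(ph1, t1))
    b += lr * (x0 - x1)
    c += lr * (ph0 - ph1)
    d += lr * (t0 - t1)

def classify(x):
    """Score each class by the negative free energy of (x, class k),
    with the class represented by all NPC of its output neurons."""
    scores = np.zeros(N_CLASS)
    for k in range(N_CLASS):
        t = labels_to_units(k)
        act = c + W @ x + U @ t
        scores[k] = d @ t + np.sum(np.logaddexp(0.0, act))  # softplus
    return int(np.argmax(scores))

# Toy usage: random binary "images" with random labels.
for _ in range(100):
    x = (rng.random(N_VIS) < 0.2).astype(float)
    y = int(rng.integers(N_CLASS))
    cd1_step(x, y)
print(classify((rng.random(N_VIS) < 0.2).astype(float)))
```

In this sketch, the multi-NPC idea appears only as NPC replicated output units per class whose contributions are summed at classification time; the paper's actual voting scheme and the asymmetry that ACD introduces between the positive and negative phases should be taken from the article itself.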