Tiny Federated Learning with Bayesian Classifiers

Bibliographic Details
Published in: Proceedings of the IEEE International Symposium on Industrial Electronics (Online), pp. 1 - 6
Main Authors: Xiong, Ning; Punnekkat, Sasikumar
Format: Conference Proceeding
Language: English
Published: IEEE, 19.06.2023
ISSN: 2163-5145
DOI: 10.1109/ISIE51358.2023.10228115

Summary: Tiny machine learning (TinyML) represents an emerging research direction that aims to realize machine learning on Internet of Things (IoT) devices. Current TinyML research focuses mainly on supporting the deployment of deep learning models on microprocessors, while the models themselves are trained on high-performance computers or clouds. However, in resource- and time-constrained IoT contexts, it is more desirable to perform data analytics and learning tasks directly on edge devices, for crucial benefits such as increased energy efficiency, reduced latency, and lower communication cost. To address this challenge, this paper proposes a tiny federated learning algorithm for learning Bayesian classifiers from distributed tiny data storage, referred to as TFL-BC. In TFL-BC, Bayesian learning is executed in parallel across multiple edge devices using local (tiny) training data, and the learning results from the local devices are subsequently aggregated via a central node to obtain the final classification model. Experiments on a set of benchmark datasets demonstrate that the algorithm produces final aggregated models that outperform single tiny Bayesian classifiers, and that the result of tiny federated learning of Bayesian classifiers is independent of the number of data partitions used to generate the distributed local training data.
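
The summary describes the TFL-BC structure only at a high level: Bayesian learning runs in parallel on the edge devices' local data, and a central node aggregates the local results into the final classifier. The aggregation rule itself is not given in this record, so the following Python sketch is an assumption, not the paper's actual procedure: it realizes the idea for a Gaussian naive Bayes classifier by having each device report per-class counts, feature sums, and squared feature sums, which the central node merges into class priors, means, and variances. The function names (local_statistics, aggregate, predict) are hypothetical.

```python
import numpy as np

def local_statistics(X, y, n_classes):
    """Sufficient statistics computed on one edge device from its local (tiny) data."""
    n_features = X.shape[1]
    counts = np.zeros(n_classes)
    sums = np.zeros((n_classes, n_features))
    sq_sums = np.zeros((n_classes, n_features))
    for c in range(n_classes):
        Xc = X[y == c]                      # samples of class c held by this device
        counts[c] = Xc.shape[0]
        sums[c] = Xc.sum(axis=0)
        sq_sums[c] = (Xc ** 2).sum(axis=0)
    return counts, sums, sq_sums

def aggregate(stats_list, var_smoothing=1e-9):
    """Central node: merge device statistics into priors, means, and variances.

    Assumes every class occurs on at least one device, so class counts are nonzero."""
    counts = sum(s[0] for s in stats_list)
    sums = sum(s[1] for s in stats_list)
    sq_sums = sum(s[2] for s in stats_list)
    priors = counts / counts.sum()
    means = sums / counts[:, None]
    variances = sq_sums / counts[:, None] - means ** 2 + var_smoothing
    return priors, means, variances

def predict(X, priors, means, variances):
    """Classify samples with the aggregated Gaussian naive Bayes model."""
    log_prior = np.log(priors)                                    # shape (C,)
    # Per-sample, per-class Gaussian log-likelihood summed over features.
    ll = -0.5 * (np.log(2 * np.pi * variances)[None, :, :]
                 + (X[:, None, :] - means[None, :, :]) ** 2
                 / variances[None, :, :]).sum(axis=2)             # shape (N, C)
    return np.argmax(log_prior[None, :] + ll, axis=1)
```

Under this assumed realization, the central node merges sufficient statistics rather than averaging finished models, so the aggregated classifier coincides with the one that would be trained on the pooled data regardless of how that data is partitioned; this is consistent with, though not necessarily identical to, the partition-independence result reported in the summary.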