On the generalization ability of distributed online learners

Bibliographic Details
Published in: 2012 IEEE International Workshop on Machine Learning for Signal Processing, pp. 1-6
Main Authors: Towfic, Z. J., Chen, J., Sayed, A. H.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2012
ISBN: 1467310247, 9781467310246
ISSN: 1551-2541
DOI: 10.1109/MLSP.2012.6349778

More Information
Summary: We propose a fully-distributed stochastic-gradient strategy based on diffusion adaptation techniques. We show that, for strongly convex risk functions, the excess risk at every node decays at the rate of O(1/(Ni)), where N is the number of learners and i is the iteration index. In this way, the distributed diffusion strategy, which relies only on local interactions, achieves the same convergence rate as centralized strategies that have access to all data from the nodes at every iteration. We also show that every learner improves its excess risk relative to the non-cooperative mode of operation, in which each learner operates independently of the others.
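As a rough illustration of the diffusion strategy summarized above, the sketch below implements an adapt-then-combine (ATC) diffusion update for a distributed least-squares problem: each node first takes a stochastic-gradient step on its local risk using one fresh sample, then averages the intermediate estimates of its neighbors. The ring topology, uniform combination weights, step size, and noise level are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# N learners cooperate to estimate a common d-dimensional model w_star
# from streaming data (all values here are illustrative assumptions).
N, d, mu, iters = 10, 5, 0.01, 2000
w_star = rng.standard_normal(d)

# Ring topology with uniform combination weights: column k of A holds
# the weights node k assigns to the estimates it receives from itself
# and its two neighbors; each column sums to one.
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[l % N, k] = 1.0 / 3.0

w = np.zeros((N, d))  # current estimate at each node

for i in range(iters):
    psi = np.empty_like(w)
    for k in range(N):
        # Adaptation step: one fresh local sample, one stochastic-gradient
        # step on the local least-squares risk.
        x = rng.standard_normal(d)                    # regressor
        y = x @ w_star + 0.1 * rng.standard_normal()  # noisy response
        grad = (w[k] @ x - y) * x                     # instantaneous gradient
        psi[k] = w[k] - mu * grad
    # Combination step: each node fuses its neighbors' intermediate
    # estimates; this local exchange is the only inter-node communication.
    w = A.T @ psi

print("mean squared deviation across nodes:",
      np.mean(np.sum((w - w_star) ** 2, axis=1)))

Running this drives every node's estimate toward w_star, and averaging over more learners (larger N) reduces the per-node deviation, consistent with the O(1/(Ni)) excess-risk behavior the summary describes.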