On the Asymptotic Sample Complexity of HGR Maximal Correlation Functions in Semi-supervised Learning

Bibliographic Details
Published in: 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 879-886
Main Authors: Xu, Xiangxiang; Huang, Shao-Lun
Format: Conference Proceeding
Language: English
Published: IEEE, 01.09.2019
DOI: 10.1109/ALLERTON.2019.8919892

Summary: The Hirschfeld-Gebelein-Rényi (HGR) maximal correlation has been shown to be useful in many machine learning applications, where the alternating conditional expectation (ACE) algorithm is widely adopted to estimate the HGR maximal correlation functions from data samples. In this paper, we consider the asymptotic sample complexity of estimating the HGR maximal correlation functions in semi-supervised learning, where both labeled and unlabeled data samples are used for the estimation. First, we propose a generalized ACE algorithm to deal with the unlabeled data samples. Then, we develop a mathematical framework to characterize the learning errors between the maximal correlation functions computed from the true distribution and the functions estimated by the generalized ACE algorithm. We establish analytical expressions for the error exponents of the learning errors, which indicate the number of training samples required for estimating the HGR maximal correlation functions by the generalized ACE algorithm. Moreover, building on these theoretical results, we investigate sampling strategies for the different types of samples in semi-supervised learning under a total sampling budget constraint, and develop an optimal sampling strategy that maximizes the error exponent of the learning error. Finally, numerical simulations are presented to support our theoretical results.
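
As background, the standard ACE iteration for fully labeled samples alternates the conditional-expectation updates f(x) <- E[g(Y) | X = x] and g(y) <- E[f(X) | Y = y], normalizing each function to zero mean and unit variance after every step. The sketch below illustrates this baseline for discrete data via the empirical joint distribution; it is not the paper's generalized ACE algorithm (which additionally exploits unlabeled samples), and all names (ace_hgr, n_iter, seed) are illustrative assumptions.

    import numpy as np

    def ace_hgr(x, y, n_iter=100, seed=0):
        """Estimate the leading HGR maximal correlation and feature functions
        f(x), g(y) from paired discrete samples via alternating conditional
        expectations on the empirical joint distribution."""
        xs, x_idx = np.unique(x, return_inverse=True)
        ys, y_idx = np.unique(y, return_inverse=True)
        p_xy = np.zeros((xs.size, ys.size))
        np.add.at(p_xy, (x_idx, y_idx), 1.0)   # empirical joint counts
        p_xy /= p_xy.sum()                     # empirical joint distribution
        p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

        g = np.random.default_rng(seed).standard_normal(ys.size)
        for _ in range(n_iter):
            f = (p_xy @ g) / p_x               # f(x) = E[g(Y) | X = x]
            f -= p_x @ f                       # zero mean under p_x
            f /= np.sqrt(p_x @ f ** 2)         # unit variance under p_x
            g = (p_xy.T @ f) / p_y             # g(y) = E[f(X) | Y = y]
            g -= p_y @ g                       # zero mean under p_y
            g /= np.sqrt(p_y @ g ** 2)         # unit variance under p_y
        rho = f @ p_xy @ g                     # estimate of E[f(X) g(Y)]
        return rho, f[x_idx], g[y_idx]

For instance, rho, f_s, g_s = ace_hgr(x_samples, y_samples) returns the estimated maximal correlation together with the per-sample values of the feature functions; the iteration is a power method on the conditional-expectation operator, so rho converges to the top correlation for a generic random initialization.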