Online gradient descent algorithms for functional data learning
| Published in | Journal of Complexity, Vol. 70, p. 101635 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Elsevier Inc., 01.06.2022 |
| ISSN | 0885-064X, 1090-2708 |
| DOI | 10.1016/j.jco.2021.101635 |
| Summary: | The functional linear model is a widely and fruitfully applied framework for regression problems, including those with intrinsically infinite-dimensional data. Online gradient descent methods, despite their demonstrated power in processing online or large-scale data, have not been well studied for learning with functional data. In this paper, we study reproducing kernel-based online learning algorithms for functional data and derive convergence rates for the expected excess prediction risk under both online and finite-horizon settings of the step sizes. It is well understood that nontrivial uniform convergence rates for the estimation task depend on the regularity of the slope function. Surprisingly, the convergence rates we derive for the prediction task require no regularity assumption on the slope function. Our analysis reveals an intrinsic difference between the estimation task and the prediction task in functional data learning. |
|---|---|
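The record stops at the abstract, so as a rough illustration of the kind of update the abstract describes, here is a minimal NumPy sketch of reproducing kernel-based online gradient descent for the functional linear model y_t = ⟨β, X_t⟩_{L²} + ε_t, discretized on a uniform grid. The function name `online_fda_gd`, the 1/√t step-size schedule, and the quadrature details are illustrative assumptions, not the paper's exact algorithm or rates.

```python
import numpy as np

def online_fda_gd(X, y, kernel, grid, step=lambda t: 1.0 / np.sqrt(t)):
    """Sketch of kernel-based online gradient descent for the functional
    linear model y_t = <beta, X_t>_{L^2} + noise, discretized on `grid`.

    X      : (T, m) array, each row a covariate curve sampled on the grid
    y      : (T,) responses
    kernel : vectorized callable K(u, v) for the reproducing kernel
    grid   : (m,) uniform grid on the domain of the curves
    Returns the final slope estimate f_T sampled on the grid.
    """
    m = len(grid)
    du = grid[1] - grid[0]                    # uniform-grid quadrature weight
    U, V = np.meshgrid(grid, grid, indexing="ij")
    K = kernel(U, V)                          # kernel Gram matrix on the grid
    f = np.zeros(m)                           # initialize f_1 = 0
    for t, (x, yt) in enumerate(zip(X, y), start=1):
        pred = du * x @ f                     # <f_t, X_t>_{L^2} via Riemann sum
        # RKHS gradient step: f_{t+1} = f_t - eta_t (pred - y_t) L_K X_t,
        # where (L_K x)(u) = int K(u, v) x(v) dv, here approximated by du*K@x
        f = f - step(t) * (pred - yt) * (du * K @ x)
    return f

if __name__ == "__main__":
    # Toy usage (hypothetical data): rough Brownian-motion-like covariates
    # and slope beta(s) = sin(pi s), with the min kernel K(u, v) = min(u, v).
    grid = np.linspace(0.0, 1.0, 101)
    rng = np.random.default_rng(0)
    X = np.cumsum(rng.standard_normal((2000, 101)), axis=1) / 10.0
    beta = np.sin(np.pi * grid)
    y = (grid[1] - grid[0]) * X @ beta + 0.1 * rng.standard_normal(2000)
    f_hat = online_fda_gd(X, y, np.minimum, grid)
```

Note that the sketch evaluates the expected excess prediction risk only implicitly through `pred`; the distinction the abstract draws is that such prediction error can decay without smoothness assumptions on β, even when the estimation error ‖f_t − β‖ does not.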