Generalization of Hamiltonian algorithms
The paper proves generalization results for a class of stochastic learning algorithms. The method applies whenever the algorithm generates an absolutely continuous distribution relative to some a-priori measure and the Radon-Nikodym derivative has subgaussian concentration. Applications are bounds for the Gibbs algorithm and randomizations of stable deterministic algorithms, as well as PAC-Bayesian bounds with data-dependent priors.
| Format | Journal Article |
|---|---|
| Language | English |
| Published | 23.05.2024 |
| DOI | 10.48550/arxiv.2405.14469 |