Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem
We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: A sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.
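As background for the abstract's terminology (this note is not part of the record), "algorithmic stability" is usually formalized as uniform stability in the sense of Bousquet and Elisseeff; a standard statement of that definition is sketched below, and the paper may work with a variant of it.

```latex
% Background sketch only: the standard uniform-stability definition
% (Bousquet & Elisseeff, 2002); the paper may use a weaker variant.
% An algorithm $A$ mapping a training set $S$ of size $n$ to a hypothesis
% $A_S$ is $\beta_n$-uniformly stable if removing any single training
% point changes the loss at every test point $z$ by at most $\beta_n$:
\[
  \sup_{S,\, i,\, z}\;
  \bigl|\, \ell(A_S, z) - \ell\bigl(A_{S^{\setminus i}}, z\bigr) \,\bigr|
  \;\le\; \beta_n ,
\]
% where $S^{\setminus i}$ is $S$ with its $i$-th sample removed. Stability
% is meaningful when $\beta_n \to 0$ as $n \to \infty$.
```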
| Published in | IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 1, pp. 187-193 |
|---|---|
| Main Authors | Huan Xu, Constantine Caramanis, Shie Mannor |
| Format | Journal Article | 
| Language | English | 
| Published | Los Alamitos, CA: IEEE Computer Society (The Institute of Electrical and Electronics Engineers, Inc.), 01.01.2012 |
| ISSN | 0162-8828, 1939-3539, 2160-9292 |
| DOI | 10.1109/TPAMI.2011.177 | 
| Summary: | We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: A sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse. |
|---|---|
| ISSN: | 0162-8828, 1939-3539, 2160-9292 |
| DOI: | 10.1109/TPAMI.2011.177 |
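To make the summarized claim concrete, here is a minimal numerical sketch (not from the paper): it fits ℓ1-regularized (Lasso) and ℓ2-regularized (ridge) regression on a small synthetic problem, then swaps out a single training sample and measures how much the learned coefficients move. The data model, regularization strengths, and scikit-learn estimators are illustrative assumptions; the paper's instability result for Lasso is a worst-case statement, so a single random draw only illustrates the quantities involved, not the theorem itself.

```python
# Illustrative sketch only: compares sparsity and single-sample sensitivity
# of Lasso (l1) vs. ridge (l2) regression. All choices below (data model,
# alphas, scikit-learn estimators) are assumptions for illustration, not
# the paper's construction.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, d = 40, 20
X = rng.standard_normal((n, d))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)   # two nearly identical features
w_true = np.zeros(d)
w_true[0] = 2.0                                      # only feature 0 carries signal
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Leave-one-out style perturbation: replace a single training sample,
# mirroring the kind of change used in stability definitions.
X2, y2 = X.copy(), y.copy()
X2[0] = rng.standard_normal(d)
X2[0, 1] = X2[0, 0] + 0.01 * rng.standard_normal()
y2[0] = X2[0] @ w_true + 0.1 * rng.standard_normal()

for name, model in [("Lasso (l1)", Lasso(alpha=0.1)),
                    ("Ridge (l2)", Ridge(alpha=1.0))]:
    w_a = model.fit(X, y).coef_.copy()
    w_b = model.fit(X2, y2).coef_.copy()
    print(f"{name}: nonzero coefficients = {np.count_nonzero(np.abs(w_a) > 1e-8)}, "
          f"coefficient shift after one-sample swap = {np.linalg.norm(w_a - w_b):.4f}")
```

Typically, the Lasso fit keeps only a few nonzero coefficients (sparsity) while the ridge fit keeps all of them (no sparsity); the printed coefficient shift is the quantity whose worst-case behavior the paper analyzes.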