Projected Newton-type Methods in Machine Learning
          | Published in | Optimization for Machine Learning p. 305 | 
|---|---|
| Main Authors | , , | 
| Format | Book Chapter | 
| Language | English | 
| Published | United States: The MIT Press, 30.09.2011 |
| Subjects | |
| ISBN | 026201646X; 9780262016469 |
| DOI | 10.7551/mitpress/8996.003.0013 | 
| Summary: | We study Newton-type methods for solving the optimization problem $\mathop{\min}\limits_x f(x) + r(x)$, subject to $x \in \Omega$, (11.1) where $f: \mathbb{R}^n \to \mathbb{R}$ is twice continuously differentiable and convex; $r: \mathbb{R}^n \to \mathbb{R}$ is continuous and convex, but not necessarily differentiable everywhere; and $\Omega$ is a simple convex constraint set. This formulation is general and captures numerous problems in machine learning, especially where $f$ corresponds to a loss and $r$ to a regularizer (a concrete instance is sketched after this record). Let us, however, defer concrete examples of (11.1) until we have developed some theoretical background. We propose to solve (11.1) via Newton-type methods, a certain class of second-order methods that are known to often work well for |
|---|---|
| ISBN: | 026201646X; 9780262016469 |
| DOI: | 10.7551/mitpress/8996.003.0013 |
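To make the formulation (11.1) from the summary concrete, the sketch below instantiates it as a nonnegativity-constrained, $\ell_1$-regularized least-squares problem: $f(x) = \tfrac{1}{2}\|Ax - b\|^2$, $r(x) = \lambda \|x\|_1$, and $\Omega = \{x : x \ge 0\}$. It is solved here with a plain projected gradient iteration as a first-order stand-in; this is an illustrative assumption of this note, not the projected Newton-type methods developed in the chapter, and the names `A`, `b`, `lam`, and `solve_nonneg_lasso` are hypothetical.

```python
# Minimal sketch of one instance of problem (11.1) -- not the chapter's own code.
#   f(x)  = 0.5 * ||A x - b||^2      (twice differentiable, convex loss)
#   r(x)  = lam * ||x||_1            (continuous, convex, nonsmooth regularizer)
#   Omega = {x : x >= 0}             (simple convex constraint set)
# On Omega, ||x||_1 reduces to sum(x), so a basic projected gradient step applies.
import numpy as np

def solve_nonneg_lasso(A, b, lam, iters=500):
    """Projected gradient for min_x 0.5*||Ax - b||^2 + lam*sum(x) s.t. x >= 0."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam      # gradient of f + r restricted to Omega
        x = np.maximum(x - step * grad, 0)  # Euclidean projection onto Omega
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))
    x_true = np.maximum(rng.standard_normal(10), 0)
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    print(np.round(solve_nonneg_lasso(A, b, lam=0.1), 3))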