SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization
| Main Authors | , , , , |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 27.02.2024 |
| Subjects | |
| DOI | 10.48550/arxiv.2402.17902 |
| Summary: | Neural network pruning is a key technique towards engineering large yet scalable, interpretable, and generalizable models. Prior work on the subject has developed largely along two orthogonal directions: (1) differentiable pruning for efficiently and accurately scoring the importance of parameters, and (2) combinatorial optimization for efficiently searching over the space of sparse models. We unite the two approaches, both theoretically and empirically, to produce a coherent framework for structured neural network pruning in which differentiable pruning guides combinatorial optimization algorithms to select the most important sparse set of parameters. Theoretically, we show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group sparse optimization, and prove that for a wide class of nonconvex regularizers, the global optimum is unique, group-sparse, and provably yields an approximate solution to a sparse convex optimization problem. The resulting algorithm that we propose, SequentialAttention++, advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets. |
|---|---|
| DOI: | 10.48550/arxiv.2402.17902 |
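
The sketch below is not the authors' SequentialAttention++ implementation; it only illustrates, under assumed names (`block_importance`, `select_blocks`, a toy weight matrix), the general recipe the abstract describes: a differentiable, attention-style importance score over parameter blocks guiding a combinatorial (top-k) selection of which blocks to keep in a block-sparsification setting.

```python
# Illustrative sketch only: differentiable block scores guiding a
# combinatorial selection step. Names and the softmax/top-k choices are
# assumptions for illustration, not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)

def block_importance(logits):
    """Differentiable scoring: softmax over per-block logits,
    an attention-style relaxation of a hard block mask."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def select_blocks(scores, k):
    """Combinatorial step: keep the k highest-scoring blocks."""
    return np.argsort(scores)[-k:]

# Toy weight matrix partitioned into column blocks (block sparsification).
num_blocks, block_size, rows = 8, 4, 16
W = rng.normal(size=(rows, num_blocks * block_size))

# Assumed: per-block logits that would normally be trained jointly with W
# under a group-sparsity-inducing (nonconvex) regularizer.
logits = rng.normal(size=num_blocks)
scores = block_importance(logits)

kept = select_blocks(scores, k=3)
mask = np.zeros(num_blocks, dtype=bool)
mask[kept] = True

# Apply the block mask: zero out all columns of unselected blocks.
W_pruned = W.copy()
for b in range(num_blocks):
    if not mask[b]:
        W_pruned[:, b * block_size:(b + 1) * block_size] = 0.0

print("kept blocks:", sorted(kept.tolist()))
print("nonzero columns:", int((np.abs(W_pruned).sum(axis=0) > 0).sum()))
```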