Sub-Linear Time Support Recovery for Compressed Sensing Using Sparse-Graph Codes
| Published in | IEEE Transactions on Information Theory, Vol. 65, No. 10, pp. 6580–6619 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.10.2019 |
| ISSN | 0018-9448; 1557-9654 |
| DOI | 10.1109/TIT.2019.2921757 |
| Summary: | We study the support recovery problem for compressed sensing, where the goal is to reconstruct the sparsity pattern of a high-dimensional $K$-sparse signal $\mathrm{x} \in \mathbb{R}^{N}$, as well as the corresponding sparse coefficients, from low-dimensional linear measurements with and without noise. Our key contribution is a new compressed sensing framework built on a new family of carefully designed sparse measurement matrices with minimal measurement costs, together with a low-complexity recovery algorithm. Specifically, the measurement matrix in our framework is designed by sparsification through capacity-approaching sparse-graph codes, so that the sparse coefficients can be recovered efficiently in a few iterations by performing simple error decoding over the observations. We formally connect this general recovery problem with sparse-graph decoding in packet communication systems, and analyze our framework in terms of measurement cost, computational complexity, and recovery performance. Specifically, we show that in the noiseless setting, our framework can recover any arbitrary $K$-sparse signal in $O(K)$ time using $2K$ measurements asymptotically, with a vanishing error probability. In the noisy setting, when the sparse coefficients take values in a finite and quantized alphabet, our framework can achieve the same goal in time $O(K\log(N/K))$ using $O(K\log(N/K))$ measurements obtained from a measurement matrix with entries in $\{-1, 0, 1\}$. When the sparsity $K$ is sub-linear in the signal dimension, i.e., $K = O(N^{\delta})$ for some $0 < \delta < 1$, our results are order-optimal in terms of measurement cost and run-time, both of which are sub-linear in the signal dimension $N$. The sub-linear measurement cost and run-time can also be achieved with continuous-valued sparse coefficients, with a slight increase in the logarithmic factors. More specifically, in the continuous alphabet setting, when $K = O(N^{\delta})$ and the magnitudes of all the sparse coefficients are bounded below by a positive constant, our algorithm can recover an arbitrarily large $(1-p)$-fraction of the support of the sparse signal using $O(K\log(N/K)\log\log(N/K))$ measurements and $O(K\log^{1+r}(N/K))$ run-time, where $r$ is an arbitrarily small constant. For each recovered sparse coefficient, we can achieve $O(\epsilon)$ error for an arbitrarily small constant $\epsilon$. In addition, if the magnitudes of all the sparse coefficients are upper bounded by $O(K^{c})$ for some constant $c < 1$, then we are able to provide a strong $\ell_{1}$ recovery guarantee for the estimated signal $\widehat{\mathrm{x}}$: $\|\widehat{\mathrm{x}} - \mathrm{x}\|_{1} \le \kappa \|\mathrm{x}\|_{1}$, where the constant $\kappa$ can be arbitrarily small. This offers the desired scalability of our framework, potentially enabling real-time or near-real-time processing for massive datasets featuring sparsity, which are relevant to a multitude of practical applications. |
|---|---|
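To make the peeling-style recovery described in the summary concrete, below is a minimal Python sketch of the noiseless setting in the spirit of sparse-graph-code decoding: each signal index is hashed into a few measurement bins, each bin stores a plain sum and an index-weighted sum, singleton bins are identified by a ratio test, and their contributions are peeled from the remaining bins. This is an illustrative sketch, not the authors' implementation; the function names (`build_bins`, `measure`, `peel_decode`), the left-regular random graph, and the bin counts are all assumptions, and the paper's actual matrix design and degree distributions differ.

```python
import numpy as np

def build_bins(N, M, d=3, seed=0):
    """Left-regular bipartite graph: each index i is hashed into d of M bins.
    (Assumption: the paper uses optimized degree distributions instead.)"""
    rng = np.random.default_rng(seed)
    return [rng.choice(M, size=d, replace=False) for _ in range(N)]

def measure(x, bins, M):
    """Two measurements per bin: a plain sum and an index-weighted sum,
    for 2M numbers in total."""
    y = np.zeros((M, 2))
    for i, bs in enumerate(bins):
        if x[i] != 0.0:
            for b in bs:
                y[b, 0] += x[i]       # sum of coefficients hashed to bin b
                y[b, 1] += i * x[i]   # index-weighted sum for location recovery
    return y

def peel_decode(y, bins, N, max_rounds=50, tol=1e-9):
    """Iteratively detect singleton bins with a ratio test and peel them off.
    A false singleton (a multiton whose ratio lands on a clean integer) is
    possible in principle; practical designs add verification measurements."""
    y = y.copy()
    x_hat = {}
    for _ in range(max_rounds):
        progress = False
        for b in range(y.shape[0]):
            v = y[b, 0]
            if abs(v) < tol:          # zeroton or fully peeled bin
                continue
            ratio = y[b, 1] / v
            i = int(round(ratio))
            if abs(ratio - i) > 1e-6 or not 0 <= i < N or i in x_hat:
                continue              # not a (new) singleton
            x_hat[i] = v              # recovered coefficient at index i
            for bb in bins[i]:        # remove its contribution everywhere
                y[bb, 0] -= v
                y[bb, 1] -= i * v
            progress = True
        if not progress:
            break
    return x_hat

# Toy run: K-sparse signal, M = 3K bins (the paper shows ~2K total
# measurements suffice asymptotically with an optimized design).
N, K = 10_000, 20
M = 3 * K
rng = np.random.default_rng(1)
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.uniform(1.0, 2.0, size=K)    # magnitudes bounded away from 0
bins = build_bins(N, M)
x_hat = peel_decode(measure(x, bins, M), bins, N)
print(sorted(x_hat) == sorted(support.tolist()))   # True on most seeds
```

In this sketch the decoding work is roughly proportional to the number of graph edges and bins rather than to $N$, which mirrors the $O(K)$ noiseless decoding cost claimed in the summary; the noisy and continuous-alphabet settings in the paper replace the exact ratio test with more robust location and verification measurements.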