A non-convex blind calibration method for randomised sensing strategies
| Published in | 2016 4th International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa) pp. 16 - 20 |
|---|---|
| Main Authors | , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.09.2016 |
| DOI | 10.1109/CoSeRa.2016.7745690 |
| Summary: | The implementation of computational sensing strategies often faces calibration problems typically solved by means of multiple, accurately chosen training signals, an approach that can be resource-consuming and cumbersome. Conversely, blind calibration does not require any training, but corresponds to a bilinear inverse problem whose algorithmic solution is an open issue. Here we address blind calibration as a non-convex problem for linear random sensing models, in which we aim to recover an unknown signal from its projections on sub-Gaussian random vectors, each subject to an unknown multiplicative factor (gain). To solve this optimisation problem we resort to projected gradient descent starting from a suitable initialisation. An analysis of this algorithm shows that it converges to the global optimum provided a sample-complexity requirement is met, i.e., a condition relating convergence to the amount of information collected during the sensing process. Finally, we present numerical experiments in which our algorithm provides a simple solution to blind calibration of sensor gains in computational sensing applications. |
|---|---|
| DOI: | 10.1109/CoSeRa.2016.7745690 |
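The summary describes the approach at a high level: each measurement y_i = g_i ⟨a_i, x⟩ couples the unknown signal x with an unknown positive gain g_i, and the pair is estimated by projected gradient descent on the resulting non-convex least-squares objective, starting from a suitable initialisation. The sketch below illustrates this kind of scheme in Python/NumPy under simplifying assumptions (Gaussian sensing vectors, gains close to one, the bilinear scaling ambiguity removed by constraining the gains to have unit mean); the function name, initialisation, step size and gain projection are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def blind_calibration_pgd(y, A, n_iter=500):
    """Jointly estimate a signal x and per-measurement gains g from
    uncalibrated observations y_i = g_i * <a_i, x>.

    Sketch of projected gradient descent on the non-convex objective
    f(x, g) = (1/2m) * || diag(g) A x - y ||^2; the initialisation,
    step size and gain constraint are illustrative assumptions.
    """
    m, n = A.shape
    # Initialisation: back-projection of y for the signal, unit gains.
    x = A.T @ y / m
    g = np.ones(m)
    # Conservative step size from crude curvature estimates of both blocks.
    Lx = np.linalg.norm(A, 2) ** 2 / m   # signal block
    Lg = np.max((A @ x) ** 2) / m        # gain block
    step = 1.0 / (Lx + Lg)

    for _ in range(n_iter):
        r = g * (A @ x) - y              # residual of the bilinear model
        grad_x = A.T @ (g * r) / m       # gradient w.r.t. the signal
        grad_g = (A @ x) * r / m         # gradient w.r.t. the gains
        x -= step * grad_x
        g -= step * grad_g
        # Projection: the model is invariant to (x / a, a * g), so fix the
        # scale by projecting g onto the hyperplane {g : mean(g) = 1}.
        g += 1.0 - g.mean()
    return x, g

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n = 2000, 100                              # many more measurements than unknowns
    A = rng.standard_normal((m, n))               # Gaussian (sub-Gaussian) sensing vectors
    x_true = rng.standard_normal(n)
    g_true = 1.0 + 0.2 * rng.standard_normal(m)   # gains close to one
    g_true /= g_true.mean()                       # unit mean removes the scale ambiguity
    y = g_true * (A @ x_true)                     # uncalibrated measurements

    x_hat, g_hat = blind_calibration_pgd(y, A)
    print("relative signal error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    print("relative gain error:  ", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```

In the paper, the initialisation and the constraint set for the gains are chosen so that the sample-complexity condition guarantees convergence to the global optimum; the unit-mean projection used here simply removes the scaling ambiguity of the bilinear model, and the demo only checks empirically that the joint estimate recovers both the signal and the gains on synthetic data.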