Abstract
We study the Compressed Sensing (CS) problem, which is the problem of finding the sparsest vector that satisfies a set of linear measurements up to some numerical tolerance. CS is a central problem in statistics, operations research, and machine learning that arises in applications such as signal processing, data compression, image reconstruction, and multi-label learning. We introduce an $$\ell_2$$ regularized formulation of CS, which we reformulate as a mixed-integer second-order cone program. We derive a second-order cone relaxation of this problem and show that, under mild conditions on the regularization parameter, the resulting relaxation is equivalent to the well-studied basis pursuit denoising problem. We present a semidefinite relaxation that strengthens the second-order cone relaxation, and we develop a custom branch-and-bound algorithm that leverages the second-order cone relaxation to solve small-scale instances of CS to certifiable optimality. When compared against solutions produced by three state-of-the-art benchmark methods on synthetic data, our approach produces solutions that are on average $$6.22\%$$ sparser; compared only against the experiment-wise best-performing benchmark method, its solutions are on average $$3.10\%$$ sparser. On real-world ECG data, for a given $$\ell_2$$ reconstruction error our approach produces solutions that are on average $$9.95\%$$ sparser than those of the benchmark methods ($$3.88\%$$ sparser against the best-performing benchmark alone), while for a given sparsity level it produces solutions with on average $$10.77\%$$ lower reconstruction error ($$1.42\%$$ lower against the best-performing benchmark alone). When used as a component of a multi-label classification algorithm, our approach achieves greater classification accuracy than benchmark compressed sensing methods. This improved accuracy comes at the cost of an increase in computation time by several orders of magnitude. Thus, for applications where runtime is not of critical importance, leveraging integer optimization can yield sparser and lower-error solutions to CS than existing benchmarks.
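To make the formulation concrete, the display below sketches one standard way to write the problem the abstract describes. The notation ($$A \in \mathbb{R}^{m \times n}$$ for the measurement matrix, $$b$$ for the measurements, $$\epsilon$$ for the tolerance, $$\gamma$$ for the regularization parameter) and the perspective-based reformulation are our own rendering of a common construction, not necessarily the paper's exact model:

$$\min_{x \in \mathbb{R}^n} \;\; \|x\|_0 + \tfrac{1}{\gamma}\|x\|_2^2 \quad \text{s.t.} \quad \|Ax - b\|_2 \le \epsilon.$$

Introducing binary support indicators $$z \in \{0,1\}^n$$ and epigraph variables $$\theta_i$$ for the perspective terms $$x_i^2 / z_i$$ gives a mixed-integer second-order cone program,

$$\min_{x,\, \theta,\, z \in \{0,1\}^n} \;\; \sum_{i=1}^n z_i + \tfrac{1}{\gamma} \sum_{i=1}^n \theta_i \quad \text{s.t.} \quad \|Ax - b\|_2 \le \epsilon, \;\; x_i^2 \le \theta_i z_i \;\; \forall i,$$

and relaxing $$z \in \{0,1\}^n$$ to $$z \in [0,1]^n$$ yields a second-order cone relaxation of the kind the abstract refers to. The basis pursuit denoising problem, which such a relaxation is shown to match under mild conditions on $$\gamma$$, can be solved with off-the-shelf conic solvers. Below is a minimal sketch, assuming the cvxpy library and an illustrative synthetic instance of our own choosing (not the authors' code or data):

import cvxpy as cp
import numpy as np

# Illustrative synthetic instance; dimensions, sparsity level,
# and tolerance are assumptions made for this sketch.
rng = np.random.default_rng(0)
m, n, eps = 50, 200, 0.1
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = 1.0
noise = rng.standard_normal(m)
b = A @ x_true + 0.5 * eps * noise / np.linalg.norm(noise)  # keeps x_true feasible

# Basis pursuit denoising: minimize the l1 norm subject to a
# second-order cone constraint on the measurement residual.
x = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                     [cp.norm2(A @ x - b) <= eps])
problem.solve()
print("nonzeros in recovered x:", int(np.sum(np.abs(x.value) > 1e-6)))

In the branch-and-bound scheme the abstract describes, a convex relaxation of this kind supplies lower bounds at each node of the search tree, while integer-feasible solutions supply upper bounds; the gap between the two is what certifies optimality.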
Funder: Massachusetts Institute of Technology
Publisher: Springer Science and Business Media LLC