TUTCRIS - Tampere University of Technology


Sparse Overcomplete Denoising: Aggregation Versus Global Optimization

Research output: peer-reviewed

Standard

Sparse Overcomplete Denoising: Aggregation Versus Global Optimization. / Carrera, Diego; Boracchi, Giacomo; Foi, Alessandro; Wohlberg, Brendt.

In: IEEE Signal Processing Letters, Vol. 24, No. 10, 01.10.2017, p. 1468-1472.


Harvard

Carrera, D, Boracchi, G, Foi, A & Wohlberg, B 2017, 'Sparse Overcomplete Denoising: Aggregation Versus Global Optimization', IEEE Signal Processing Letters, vol. 24, no. 10, pp. 1468-1472. https://doi.org/10.1109/LSP.2017.2734119

APA

Carrera, D., Boracchi, G., Foi, A., & Wohlberg, B. (2017). Sparse Overcomplete Denoising: Aggregation Versus Global Optimization. IEEE Signal Processing Letters, 24(10), 1468-1472. https://doi.org/10.1109/LSP.2017.2734119

Vancouver

Carrera D, Boracchi G, Foi A, Wohlberg B. Sparse Overcomplete Denoising: Aggregation Versus Global Optimization. IEEE Signal Processing Letters. 2017 Oct 1;24(10):1468-1472. https://doi.org/10.1109/LSP.2017.2734119

Author

Carrera, Diego ; Boracchi, Giacomo ; Foi, Alessandro ; Wohlberg, Brendt. / Sparse Overcomplete Denoising: Aggregation Versus Global Optimization. In: IEEE Signal Processing Letters. 2017 ; Vol. 24, No. 10. pp. 1468-1472.

BibTeX - Download

@article{4e7b5528d0634d63b154084e13a72b14,
title = "Sparse Overcomplete Denoising: Aggregation Versus Global Optimization",
abstract = "Denoising is often addressed via sparse coding with respect to an overcomplete dictionary. There are two main approaches when the dictionary is composed of translates of an orthonormal basis. The first, traditionally employed by techniques such as wavelet cycle spinning, separately seeks sparsity w.r.t. each translate of the orthonormal basis, solving multiple partial optimizations and obtaining a collection of sparse approximations of the noise-free image, which are aggregated together to obtain a final estimate. The second approach, recently employed by convolutional sparse representations, instead seeks sparsity over the entire dictionary via a global optimization. It is tempting to view the former approach as providing a suboptimal solution of the latter. In this letter, we analyze whether global sparsity is a desirable property, and under what conditions the global optimization provides a better solution to the denoising problem. In particular, our experimental analysis shows that the two approaches attain comparable performance in case of natural images and global optimization outperforms the simpler aggregation of partial estimates only when the image admits an extremely sparse representation. We explain this phenomenon by separately studying the bias and variance of these solutions, and by noting that the variance of the global solution increases very rapidly as the original signal becomes less and less sparse.",
keywords = "Convolutional sparse coding, denoising, overcomplete representations, sparse representations",
author = "Diego Carrera and Giacomo Boracchi and Alessandro Foi and Brendt Wohlberg",
year = "2017",
month = "10",
day = "1",
doi = "10.1109/LSP.2017.2734119",
language = "English",
volume = "24",
pages = "1468--1472",
journal = "IEEE Signal Processing Letters",
issn = "1070-9908",
publisher = "Institute of Electrical and Electronics Engineers",
number = "10",

}
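The abstract describes the first, aggregation-style approach: sparse-code the noisy signal separately with respect to each translate of an orthonormal basis, then average the partial estimates (as in wavelet cycle spinning). The following is a minimal illustrative sketch of that idea, not the authors' code; the 1-D signal, the orthonormal DCT basis, the number of shifts, and the threshold value are all assumptions made here for demonstration:

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal transforms via norm="ortho"

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def cycle_spin_denoise(y, thresh, shifts=8):
    """Aggregation approach: seek sparsity separately w.r.t. each
    circular translate of an orthonormal basis (here the DCT), then
    average the partial estimates into the final one."""
    estimates = []
    for s in range(shifts):
        c = dct(np.roll(y, s), norm="ortho")         # analysis in translate s
        x_hat = idct(soft(c, thresh), norm="ortho")  # sparse partial estimate
        estimates.append(np.roll(x_hat, -s))         # undo the shift
    return np.mean(estimates, axis=0)                # aggregate

# Toy usage: a piecewise-constant signal corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(64), np.ones(64)])
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = cycle_spin_denoise(noisy, thresh=0.3)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```

Each translate yields its own sparse approximation of the noise-free signal; averaging them reduces the shift-dependent artifacts of any single orthonormal-basis estimate.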

RIS (suitable for import to EndNote) - Download

TY - JOUR

T1 - Sparse Overcomplete Denoising

T2 - Aggregation Versus Global Optimization

AU - Carrera, Diego

AU - Boracchi, Giacomo

AU - Foi, Alessandro

AU - Wohlberg, Brendt


PY - 2017/10/1

Y1 - 2017/10/1

N2 - Denoising is often addressed via sparse coding with respect to an overcomplete dictionary. There are two main approaches when the dictionary is composed of translates of an orthonormal basis. The first, traditionally employed by techniques such as wavelet cycle spinning, separately seeks sparsity w.r.t. each translate of the orthonormal basis, solving multiple partial optimizations and obtaining a collection of sparse approximations of the noise-free image, which are aggregated together to obtain a final estimate. The second approach, recently employed by convolutional sparse representations, instead seeks sparsity over the entire dictionary via a global optimization. It is tempting to view the former approach as providing a suboptimal solution of the latter. In this letter, we analyze whether global sparsity is a desirable property, and under what conditions the global optimization provides a better solution to the denoising problem. In particular, our experimental analysis shows that the two approaches attain comparable performance in case of natural images and global optimization outperforms the simpler aggregation of partial estimates only when the image admits an extremely sparse representation. We explain this phenomenon by separately studying the bias and variance of these solutions, and by noting that the variance of the global solution increases very rapidly as the original signal becomes less and less sparse.

AB - Denoising is often addressed via sparse coding with respect to an overcomplete dictionary. There are two main approaches when the dictionary is composed of translates of an orthonormal basis. The first, traditionally employed by techniques such as wavelet cycle spinning, separately seeks sparsity w.r.t. each translate of the orthonormal basis, solving multiple partial optimizations and obtaining a collection of sparse approximations of the noise-free image, which are aggregated together to obtain a final estimate. The second approach, recently employed by convolutional sparse representations, instead seeks sparsity over the entire dictionary via a global optimization. It is tempting to view the former approach as providing a suboptimal solution of the latter. In this letter, we analyze whether global sparsity is a desirable property, and under what conditions the global optimization provides a better solution to the denoising problem. In particular, our experimental analysis shows that the two approaches attain comparable performance in case of natural images and global optimization outperforms the simpler aggregation of partial estimates only when the image admits an extremely sparse representation. We explain this phenomenon by separately studying the bias and variance of these solutions, and by noting that the variance of the global solution increases very rapidly as the original signal becomes less and less sparse.

KW - Convolutional sparse coding

KW - denoising

KW - overcomplete representations

KW - sparse representations

U2 - 10.1109/LSP.2017.2734119

DO - 10.1109/LSP.2017.2734119

M3 - Article

VL - 24

SP - 1468

EP - 1472

JO - IEEE Signal Processing Letters

JF - IEEE Signal Processing Letters

SN - 1070-9908

IS - 10

ER -
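The second approach in the abstract seeks sparsity over the entire overcomplete dictionary via one global optimization. Under the same translated-orthonormal-basis setting, this can be sketched as an l1-regularized least-squares problem over the concatenated dictionary D = [P_0 B, ..., P_{S-1} B] (B the inverse orthonormal DCT, P_s a circular shift), solved here with plain ISTA. Every concrete choice below (DCT basis, toy 1-D signal, lambda, iteration count) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np
from scipy.fft import dct, idct  # orthonormal transforms via norm="ortho"

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def global_denoise(y, lam, shifts=8, iters=200):
    """Global approach: minimize 0.5*||y - D c||^2 + lam*||c||_1 over the
    whole overcomplete dictionary of basis translates, via ISTA.
    Since D D^T = S*I here (tight frame), step size 1/S is valid."""
    S = shifts
    C = np.zeros((S, y.size))            # one coefficient block per translate

    def synth(C):                        # x = D c
        return sum(np.roll(idct(C[s], norm="ortho"), -s) for s in range(S))

    for _ in range(iters):
        r = y - synth(C)                 # residual
        # adjoint D^T r: analyze the residual in every translated basis
        G = np.stack([dct(np.roll(r, s), norm="ortho") for s in range(S)])
        C = soft(C + G / S, lam / S)     # gradient step, then l1 prox
    return synth(C)

# Same toy setup: piecewise-constant signal plus Gaussian noise.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(64), np.ones(64)])
noisy = clean + 0.2 * rng.standard_normal(clean.size)
denoised = global_denoise(noisy, lam=0.3)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```

Unlike the aggregation approach, the coefficient blocks here compete within a single objective, so the solver distributes the representation sparsely across all translates at once; this is the setting in which the letter studies the bias and variance of the global solution.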