TUTCRIS - Tampere University of Technology


Joint Sparse Recovery of Misaligned Multimodal Images via Adaptive Local and Nonlocal Cross-Modal Regularization

Research output: peer-reviewed

Details

Original language: English
Title: 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, CAMSAP 2019 - Proceedings
Publisher: IEEE
Pages: 111-115
Number of pages: 5
ISBN (electronic): 9781728155494
DOI - permanent links
Status: Published - 1 December 2019
OKM publication type: A4 Article in conference proceedings
Event: IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing - Le Gosier, Guadeloupe
Duration: 15 December 2019 - 18 December 2019
Conference number: 8th

Conference

Conference: IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing
Abbreviation: CAMSAP
Country: Guadeloupe
City: Le Gosier
Period: 15/12/19 - 18/12/19

Abstract

Given a few noisy linear measurements of distinct, misaligned modalities, we aim to recover the underlying multimodal image using a sparsity-promoting algorithm. Unlike previous multimodal sparse recovery approaches that employ side information under the naive assumption of perfectly calibrated modalities or known deformation parameters, we adaptively estimate the deformation parameters from the images separately recovered from the incomplete measurements. We develop a multiscale dense registration method that alternates between finding block-wise intensity mapping models and a shift vector field, which is used to obtain and refine the deformation parameters through a weighted least-squares approximation. The co-registered images are then jointly recovered in a plug-and-play framework in which a collaborative filter leverages the local and nonlocal cross-modal correlations inherent to the multimodal image. Our experiments with this fully automatic registration and joint recovery pipeline show better detection and sharper recovery of fine details that could not be recovered separately.
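The weighted least-squares refinement of deformation parameters from a shift vector field can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a global affine deformation model, and the function name `fit_affine_wls` and the weighting scheme are ours. Given block centers x_i, estimated shifts d_i, and confidence weights w_i, it fits (A, t) minimizing the weighted sum of ||A x_i + t - (x_i + d_i)||^2:

```python
import numpy as np

def fit_affine_wls(points, shifts, weights):
    """Fit a 2-D affine deformation (A, t) to a shift vector field by
    weighted least squares, i.e. minimize
        sum_i w_i * || A x_i + t - (x_i + d_i) ||^2.

    points  : (N, 2) block centers x_i (hypothetical input layout)
    shifts  : (N, 2) estimated shift vectors d_i
    weights : (N,)   confidence weights w_i (e.g. matching quality)
    """
    targets = points + shifts                  # where each block should map
    # Design matrix rows [x, y, 1]; solve jointly for both output coords.
    X = np.hstack([points, np.ones((len(points), 1))])
    sw = np.sqrt(weights)[:, None]             # sqrt-weights turn WLS into OLS
    P, *_ = np.linalg.lstsq(sw * X, sw * targets, rcond=None)
    A, t = P[:2].T, P[2]                       # P is (3, 2): A^T stacked on t
    return A, t
```

Down-weighting blocks with poor intensity-model fits lets unreliable shifts contribute little to the global parameters, which is the usual motivation for the weighted (rather than ordinary) least-squares step.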