TUTCRIS - Tampere University of Technology

On the asymmetric view+depth 3D scene representation

Research output

Standard

On the asymmetric view+depth 3D scene representation. / Georgiev, Mihail; Gotchev, Atanas.

Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics: VPQM 2015. 2016. pp. 1-6.

Harvard

Georgiev, M & Gotchev, A 2016, On the asymmetric view+depth 3D scene representation. in Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics: VPQM 2015. pp. 1-6, International Workshop on Video Processing and Quality Metrics for Consumer Electronics.

APA

Georgiev, M., & Gotchev, A. (2016). On the asymmetric view+depth 3D scene representation. In Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics: VPQM 2015 (pp. 1-6).

Vancouver

Georgiev M, Gotchev A. On the asymmetric view+depth 3D scene representation. In: Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics: VPQM 2015. 2016. p. 1-6.

Author

Georgiev, Mihail ; Gotchev, Atanas. / On the asymmetric view+depth 3D scene representation. Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics: VPQM 2015. 2016. pp. 1-6

BibTeX - Download

@inproceedings{4c8738343fbb43a9a715c60980614f45,
title = "On the asymmetric view+depth 3D scene representation",
abstract = "In this work we promote the asymmetric view+depth representation as an efficient representation of 3D visual scenes. Recently, it has been proposed in the context of aligned view and depth images, specifically for depth compression. The representation employs two techniques for image analysis and filtering. A super-pixel segmentation of the color image is used to sparsify the depth map in the spatial domain, and a regularizing spatially adaptive filter is used to reconstruct it back to the input resolution. The relationship between the color and depth images established through these two procedures leads to a substantial reduction of the required depth data. In this work we modify the approach for representing 3D scenes captured by an RGB-Z setup formed by non-confocal RGB and range sensors with different spatial resolutions. We specifically quantify its performance for the case of a low-resolution range sensor operating in a low-sensing mode, which generates images impaired by rather extreme noise. We demonstrate its superiority over other upsampling methods in how it copes with the noise and reconstructs a good-quality depth map from a very low-resolution input range image.",
author = "Mihail Georgiev and Atanas Gotchev",
year = "2016",
month = "2",
day = "16",
language = "English",
pages = "1--6",
booktitle = "Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics: VPQM 2015",

}
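The abstract describes a two-step method: a super-pixel segmentation of the color image sparsifies the depth map, and a spatially adaptive filter reconstructs it at full resolution. Below is a minimal Python sketch of the first step only, assuming scikit-image's SLIC as the super-pixel method and a median-per-segment sampling rule; the paper's actual segmentation and sampling choices are not specified in this record, so the function name and parameters are illustrative.

import numpy as np
from skimage.segmentation import slic

def sparsify_depth(rgb, depth, n_segments=2000):
    """Keep one representative depth sample per super-pixel (illustrative)."""
    # Over-segment the color image into perceptually uniform regions.
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    sparse = np.zeros_like(depth)
    mask = np.zeros(depth.shape, dtype=bool)
    for lab in np.unique(labels):
        ys, xs = np.nonzero(labels == lab)
        # Assumption: store the segment's median depth at its centroid;
        # the paper may use a different per-segment sampling rule.
        cy, cx = int(ys.mean()), int(xs.mean())
        sparse[cy, cx] = np.median(depth[ys, xs])
        mask[cy, cx] = True
    return sparse, mask

With roughly 2000 segments on a full-resolution depth map, only one depth value per segment survives, which is the source of the "substantial reduction of the required depth data" claimed in the abstract.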

RIS (suitable for import to EndNote) - Download

TY - GEN

T1 - On the asymmetric view+depth 3D scene representation

AU - Georgiev, Mihail

AU - Gotchev, Atanas

PY - 2016/2/16

Y1 - 2016/2/16

N2 - In this work we promote the asymmetric view+depth representation as an efficient representation of 3D visual scenes. Recently, it has been proposed in the context of aligned view and depth images, specifically for depth compression. The representation employs two techniques for image analysis and filtering. A super-pixel segmentation of the color image is used to sparsify the depth map in the spatial domain, and a regularizing spatially adaptive filter is used to reconstruct it back to the input resolution. The relationship between the color and depth images established through these two procedures leads to a substantial reduction of the required depth data. In this work we modify the approach for representing 3D scenes captured by an RGB-Z setup formed by non-confocal RGB and range sensors with different spatial resolutions. We specifically quantify its performance for the case of a low-resolution range sensor operating in a low-sensing mode, which generates images impaired by rather extreme noise. We demonstrate its superiority over other upsampling methods in how it copes with the noise and reconstructs a good-quality depth map from a very low-resolution input range image.

AB - In this work we promote the asymmetric view+depth representation as an efficient representation of 3D visual scenes. Recently, it has been proposed in the context of aligned view and depth images, specifically for depth compression. The representation employs two techniques for image analysis and filtering. A super-pixel segmentation of the color image is used to sparsify the depth map in the spatial domain, and a regularizing spatially adaptive filter is used to reconstruct it back to the input resolution. The relationship between the color and depth images established through these two procedures leads to a substantial reduction of the required depth data. In this work we modify the approach for representing 3D scenes captured by an RGB-Z setup formed by non-confocal RGB and range sensors with different spatial resolutions. We specifically quantify its performance for the case of a low-resolution range sensor operating in a low-sensing mode, which generates images impaired by rather extreme noise. We demonstrate its superiority over other upsampling methods in how it copes with the noise and reconstructs a good-quality depth map from a very low-resolution input range image.

M3 - Conference contribution

SP - 1

EP - 6

BT - Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics: VPQM 2015

ER -
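For the reconstruction step, the record names only "a regularizing spatially adaptive filter" without further detail. The sketch below uses joint bilateral upsampling, a well-known color-guided depth upsampling technique, as a stand-in to illustrate the idea: spatial and color-range weights drawn from the high-resolution RGB image regularize how low-resolution (or sparse) depth samples are interpolated. All names and parameters are assumptions, not the authors' method.

import numpy as np

def joint_bilateral_upsample(rgb_hi, depth_lo, scale, sigma_s=4.0, sigma_r=12.0):
    """Color-guided upsampling of a low-resolution depth map (illustrative)."""
    H, W = rgb_hi.shape[:2]
    guide = rgb_hi.astype(np.float64)
    out = np.zeros((H, W), dtype=np.float64)
    r = int(2 * sigma_s)  # spatial support radius in high-res pixels
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            # Visit neighbors on the low-resolution sampling grid.
            for dy in range(-r, r + 1, scale):
                for dx in range(-r, r + 1, scale):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < H and 0 <= xx < W):
                        continue
                    # Spatial weight on the high-resolution grid.
                    ws = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
                    # Range weight from the high-resolution color guide:
                    # depth samples under similar colors count more.
                    diff = guide[y, x] - guide[yy, xx]
                    wr = np.exp(-np.dot(diff, diff) / (2.0 * sigma_r ** 2))
                    # Nearest low-resolution depth sample (index clamped).
                    ly = min(yy // scale, depth_lo.shape[0] - 1)
                    lx = min(xx // scale, depth_lo.shape[1] - 1)
                    num += ws * wr * depth_lo[ly, lx]
                    den += ws * wr
            out[y, x] = num / den if den > 0 else 0.0
    return out

Because the weights fall off with color difference, reconstructed depth edges snap to color edges, which is why filters of this family cope well with the noisy, very low-resolution range input discussed in the abstract; the quadruple loop is written for clarity, not speed.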