On the asymmetric view+depth 3D scene representation
Research output
Details
Original language | English |
---|---|
Title | Ninth International Workshop on Video Processing and Quality Metrics for Consumer Electronics |
Subtitle | VPQM 2015 |
Pages | 1-6 |
Number of pages | 6 |
Status | Published - 16 February 2016 |
OKM publication type | D3 Article in a professional conference publication |
Event | International Workshop on Video Processing and Quality Metrics for Consumer Electronics - Duration: 1 January 2000 → … |
Conference
Conference | International Workshop on Video Processing and Quality Metrics for Consumer Electronics |
---|---|
Period | 1/01/00 → … |
Abstract
In this work we promote the asymmetric view + depth representation as an efficient representation of 3D visual scenes. It was recently proposed in the context of aligned view and depth images, specifically for depth compression. The representation employs two techniques for image analysis and filtering: a super-pixel segmentation of the color image is used to sparsify the depth map in the spatial domain, and a regularizing, spatially adaptive filter is used to reconstruct it back to the input resolution. The relationship between the color and depth images established through these two procedures leads to a substantial reduction of the required depth data. In this work we modify the approach to represent 3D scenes captured by an RGB-Z setup formed by non-confocal RGB and range sensors with different spatial resolutions. We specifically quantify its performance for the case of a low-resolution range sensor operating in a low-sensing mode that produces images impaired by rather extreme noise. We demonstrate its superiority over other upsampling methods in how it copes with the noise and reconstructs a good-quality depth map from a very low-resolution input range image.
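The sparsify-then-reconstruct idea in the abstract (one depth sample kept per color-image superpixel, then a guided reconstruction back to full resolution) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: a fixed toy segmentation and a per-segment median stand in for the actual super-pixel segmentation, and a piecewise-constant fill stands in for the regularizing spatially adaptive filter.

```python
import numpy as np

def sparsify_depth(depth, segments):
    """Keep one depth sample per segment (here: the segment median),
    mimicking superpixel-guided sparsification of a dense depth map.
    A real pipeline would segment the color image (e.g. SLIC)."""
    samples = {}
    for label in np.unique(segments):
        samples[int(label)] = float(np.median(depth[segments == label]))
    return samples

def reconstruct_depth(segments, samples):
    """Piecewise-constant reconstruction: fill each segment with its
    stored sample (a crude stand-in for the adaptive filter)."""
    out = np.zeros(segments.shape, dtype=float)
    for label, value in samples.items():
        out[segments == label] = value
    return out

# Toy 4x4 depth map split into two hypothetical "superpixels".
depth = np.array([[1., 1., 5., 5.],
                  [1., 2., 5., 6.],
                  [1., 1., 5., 5.],
                  [2., 1., 6., 5.]])
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])

samples = sparsify_depth(depth, segments)   # only 2 values stored for 16 pixels
recon = reconstruct_depth(segments, samples)
```

The data reduction claimed in the abstract comes from storing only `samples` (one value per segment) instead of the dense `depth` map; the color-driven segmentation carries the spatial structure needed to put those values back.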