A General Framework for Depth Compression and Multi-Sensor Fusion in Asymmetric View-Plus-Depth 3D Representation
Research output › peer-reviewed
Details
Original language | English |
---|---|
Pages | 97516-97528 |
Number of pages | 13 |
Journal | IEEE Access |
Volume | 8 |
DOI - permanent links | |
Status | Published - 1 Jan 2020 |
OKM publication type | A1 Original research article |
Abstract
We present a general framework that handles different processing stages of the three-dimensional (3D) scene representation referred to as 'view-plus-depth' (V+Z). The main component of the framework is the relation between the depth map and the super-pixel segmentation of the color image. We propose a hierarchical super-pixel segmentation that preserves the same segment boundaries across hierarchy layers. Such a segmentation enables corresponding depth segmentation, decimation, and reconstruction at varying quality levels, and is instrumental in tasks such as depth compression and 3D data fusion. For the latter, we employ a cross-modality reconstruction filter that adapts to the size of the refining super-pixel segments. We also propose a novel depth encoding scheme that includes a dedicated arithmetic encoder and handles misalignment outliers. We demonstrate that our scheme is especially suitable for low bit-rate depth encoding and for fusing color and depth data, where the latter is noisy and of lower spatial resolution.
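The core idea of segment-guided depth decimation and reconstruction can be illustrated with a minimal sketch. This is not the paper's method: it replaces the boundary-preserving hierarchical super-pixel segmentation with a toy regular grid, and the cross-modality reconstruction filter with a piecewise-constant fill, just to show how one representative depth value per segment supports decimation and later reconstruction. All function names here are hypothetical.

```python
import numpy as np

def grid_superpixels(h, w, cell):
    """Toy stand-in for super-pixel segmentation: a regular grid of
    square segments (the paper uses hierarchical super-pixels computed
    on the color image, which this sketch does not implement)."""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    ncols = -(-w // cell)  # ceil(w / cell)
    return rows[:, None] * ncols + cols[None, :]

def decimate_depth(depth, labels):
    """Depth decimation: keep one representative value per segment.
    The median is a robust choice in the presence of outliers."""
    return {int(lab): float(np.median(depth[labels == lab]))
            for lab in np.unique(labels)}

def reconstruct_depth(labels, reps):
    """Piecewise-constant reconstruction from per-segment values
    (the paper instead refines this with a cross-modality filter)."""
    out = np.empty(labels.shape, dtype=float)
    for lab, value in reps.items():
        out[labels == lab] = value
    return out

# Example: a synthetic depth map with a foreground/background edge
# that happens to align with the toy segment boundaries.
depth = np.zeros((8, 8))
depth[:, 4:] = 10.0
labels = grid_superpixels(8, 8, cell=4)
rec = reconstruct_depth(labels, decimate_depth(depth, labels))
```

When segment boundaries follow depth discontinuities, as the color-driven segmentation is designed to achieve, the reconstruction from a handful of per-segment values recovers the depth map with little error, which is what makes the representation attractive for low bit-rate coding.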