A General Framework for Depth Compression and Multi-Sensor Fusion in Asymmetric View-Plus-Depth 3D Representation
Research output: Contribution to journal › Article › Scientific › peer-review
Number of pages: 13
Publication status: Published - 1 Jan 2020
Publication type: A1 Journal article-refereed
We present a general framework which can handle different processing stages of the three-dimensional (3D) scene representation referred to as 'view-plus-depth' (V+Z). The main component of the framework is the relation between the depth map and the super-pixel segmentation of the color image. We propose a hierarchical super-pixel segmentation which preserves segment boundaries across hierarchy layers. Such segmentation allows for a corresponding depth segmentation, decimation, and reconstruction at varying quality levels, and is instrumental in tasks such as depth compression and 3D data fusion. For the latter we utilize a cross-modality reconstruction filter which adapts to the size of the refining super-pixel segments. We also propose a novel depth encoding scheme, which includes a dedicated arithmetic encoder and handles misalignment outliers. We demonstrate that our scheme is especially applicable to low bit-rate depth encoding and to fusing color and depth data, where the latter is noisy and of lower spatial resolution.
- 3-D depth, 3D, compression, fusion, super-pixel, time-of-flight, ToF, V+D, V+Z, view-plus-depth