TUTCRIS - Tampere University of Technology


FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction

Research output: peer-reviewed

Standard

FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction. / Gao, Yuan; Koch, Reinhard; Bregovic, Robert; Gotchev, Atanas.

2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. pp. 3741-3745 (IEEE International Conference on Image Processing).


Harvard

Gao, Y, Koch, R, Bregovic, R & Gotchev, A 2019, FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction. in 2019 IEEE International Conference on Image Processing (ICIP). IEEE International Conference on Image Processing, IEEE, pp. 3741-3745. https://doi.org/10.1109/ICIP.2019.8803436

APA

Gao, Y., Koch, R., Bregovic, R., & Gotchev, A. (2019). FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction. In 2019 IEEE International Conference on Image Processing (ICIP) (pp. 3741-3745). (IEEE International Conference on Image Processing). IEEE. https://doi.org/10.1109/ICIP.2019.8803436

Vancouver

Gao Y, Koch R, Bregovic R, Gotchev A. FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction. In 2019 IEEE International Conference on Image Processing (ICIP). IEEE. 2019. p. 3741-3745. (IEEE International Conference on Image Processing). https://doi.org/10.1109/ICIP.2019.8803436

Author

Gao, Yuan ; Koch, Reinhard ; Bregovic, Robert ; Gotchev, Atanas. / FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction. 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. pp. 3741-3745 (IEEE International Conference on Image Processing).

BibTeX - Download

@inproceedings{71ff5cad22ee44f2a71ac1df2369e49b,
title = "FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction",
abstract = "Shearlet Transform (ST) is one of the most effective methods for Densely-Sampled Light Field (DSLF) reconstruction from a Sparsely-Sampled Light Field (SSLF). However, ST requires a precise disparity estimation of the SSLF. To this end, in this paper a state-of-the-art optical flow method, i.e. PWC-Net, is employed to estimate bidirectional disparity maps between neighboring views in the SSLF. Moreover, to take full advantage of optical flow and ST for DSLF reconstruction, a novel learning-based method, referred to as Flow-Assisted Shearlet Transform (FAST), is proposed in this paper. Specifically, FAST consists of two deep convolutional neural networks, i.e. disparity refinement network and view synthesis network, which fully leverage the disparity information to synthesize novel views via warping and blending and to improve the novel view synthesis performance of ST. Experimental results demonstrate the superiority of the proposed FAST method over the other state-of-the-art DSLF reconstruction methods on nine challenging real-world SSLF sub-datasets with large disparity ranges (up to 26 pixels).",
keywords = "Image reconstruction, Transforms, Optical imaging, Cameras, Adaptive optics, Training, Light fields, Densely-Sampled Light Field Reconstruction, Parallax View Generation, Novel View Synthesis, Shearlet Transform, Flow-Assisted Shearlet Transform",
author = "Yuan Gao and Reinhard Koch and Robert Bregovic and Atanas Gotchev",
year = "2019",
month = "9",
doi = "10.1109/ICIP.2019.8803436",
language = "English",
isbn = "978-1-5386-6250-2",
series = "IEEE International Conference on Image Processing",
publisher = "IEEE",
pages = "3741--3745",
booktitle = "2019 IEEE International Conference on Image Processing (ICIP)",
}
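
The abstract describes synthesizing a novel in-between view by warping the two neighboring views with their estimated disparity maps and then blending the results. As a rough illustration of that warp-and-blend idea only (not the authors' FAST networks, which use learned disparity refinement and synthesis; this sketch assumes grayscale views and purely horizontal, nearest-neighbour warping):

```python
import numpy as np

def warp_view(view, disparity, t):
    """Shift each pixel horizontally by t * disparity (nearest-neighbour).

    view: (H, W) grayscale image; disparity: (H, W) per-pixel horizontal
    disparity in pixels; t in [0, 1] is the fractional position of the
    novel view between the two input views. Occluded/unfilled targets
    are left as zeros (real methods inpaint or blend these holes).
    """
    h, w = view.shape
    out = np.zeros_like(view)
    cols = np.arange(w)
    for y in range(h):
        target = np.clip(np.round(cols + t * disparity[y]).astype(int), 0, w - 1)
        out[y, target] = view[y, cols]
    return out

def synthesize_intermediate(left, right, disp_lr, disp_rl, t):
    """Blend the forward-warped left view with the backward-warped right view."""
    warped_l = warp_view(left, disp_lr, t)
    warped_r = warp_view(right, disp_rl, 1.0 - t)
    return (1.0 - t) * warped_l + t * warped_r
```

With a synthetic stereo pair of constant disparity 2, `synthesize_intermediate(left, right, disp_lr, disp_rl, 0.5)` places content halfway between the two views; FAST replaces this naive blend with CNNs that refine the PWC-Net disparities and fuse the warped views before the shearlet-domain reconstruction.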

RIS (suitable for import to EndNote) - Download

TY - GEN

T1 - FAST: Flow-Assisted Shearlet Transform for Densely-Sampled Light Field Reconstruction

AU - Gao, Yuan

AU - Koch, Reinhard

AU - Bregovic, Robert

AU - Gotchev, Atanas

PY - 2019/9

Y1 - 2019/9

N2 - Shearlet Transform (ST) is one of the most effective methods for Densely-Sampled Light Field (DSLF) reconstruction from a Sparsely-Sampled Light Field (SSLF). However, ST requires a precise disparity estimation of the SSLF. To this end, in this paper a state-of-the-art optical flow method, i.e. PWC-Net, is employed to estimate bidirectional disparity maps between neighboring views in the SSLF. Moreover, to take full advantage of optical flow and ST for DSLF reconstruction, a novel learning-based method, referred to as Flow-Assisted Shearlet Transform (FAST), is proposed in this paper. Specifically, FAST consists of two deep convolutional neural networks, i.e. disparity refinement network and view synthesis network, which fully leverage the disparity information to synthesize novel views via warping and blending and to improve the novel view synthesis performance of ST. Experimental results demonstrate the superiority of the proposed FAST method over the other state-of-the-art DSLF reconstruction methods on nine challenging real-world SSLF sub-datasets with large disparity ranges (up to 26 pixels).

AB - Shearlet Transform (ST) is one of the most effective methods for Densely-Sampled Light Field (DSLF) reconstruction from a Sparsely-Sampled Light Field (SSLF). However, ST requires a precise disparity estimation of the SSLF. To this end, in this paper a state-of-the-art optical flow method, i.e. PWC-Net, is employed to estimate bidirectional disparity maps between neighboring views in the SSLF. Moreover, to take full advantage of optical flow and ST for DSLF reconstruction, a novel learning-based method, referred to as Flow-Assisted Shearlet Transform (FAST), is proposed in this paper. Specifically, FAST consists of two deep convolutional neural networks, i.e. disparity refinement network and view synthesis network, which fully leverage the disparity information to synthesize novel views via warping and blending and to improve the novel view synthesis performance of ST. Experimental results demonstrate the superiority of the proposed FAST method over the other state-of-the-art DSLF reconstruction methods on nine challenging real-world SSLF sub-datasets with large disparity ranges (up to 26 pixels).

KW - Image reconstruction

KW - Transforms

KW - Optical imaging

KW - Cameras

KW - Adaptive optics

KW - Training

KW - Light fields

KW - Densely-Sampled Light Field Reconstruction

KW - Parallax View Generation

KW - Novel View Synthesis

KW - Shearlet Transform

KW - Flow-Assisted Shearlet Transform

U2 - 10.1109/ICIP.2019.8803436

DO - 10.1109/ICIP.2019.8803436

M3 - Conference contribution

SN - 978-1-5386-6250-2

T3 - IEEE International Conference on Image Processing

SP - 3741

EP - 3745

BT - 2019 IEEE International Conference on Image Processing (ICIP)

PB - IEEE

ER -