Tampere University of Technology

TUTCRIS Research Portal

Convolutional neural network based inter-frame enhancement for 360-degree video streaming

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-reviewed


Original language: English
Title of host publication: Advances in Multimedia Information Processing – PCM 2018 - 19th Pacific-Rim Conference on Multimedia, 2018
Number of pages: 10
ISBN (Print): 9783030007669
Publication status: Published - 2018
Publication type: A4 Article in a conference publication
Event: Pacific-Rim Conference on Multimedia - Hefei, China
Duration: 21 Sep 2018 – 22 Sep 2018

Publication series

Name: Lecture Notes in Computer Science
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: Pacific-Rim Conference on Multimedia


360-degree video has attracted increasing attention in recent years. However, transmitting such high-resolution video within limited bandwidth remains highly challenging. In this paper, we first propose to compress the cubemaps in each frame of a 360-degree video unequally, reducing the total bitrate of the transmitted data. Specifically, a Group of Pictures (GOP) is used as the unit for alternately transmitting different versions of the video, each consisting of 3 high-quality cubemaps and 3 low-quality cubemaps. A convolutional neural network (CNN) is then introduced to enhance the low-quality cubemaps using the high-quality ones by exploiting inter-frame similarities. Experiments show that a single CNN model can be used for various videos. The results also show that the proposed method achieves a clear quality improvement over the benchmark in terms of PSNR, especially for videos with slow motion.
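The alternating-quality scheme in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the face names, GOP indexing, and the exact split of the six cubemap faces into two fixed halves are assumptions made for the example; the paper only states that each transmitted version contains 3 high-quality and 3 low-quality cubemaps and that versions alternate per GOP.

```python
# Hypothetical sketch of alternating cubemap quality across GOPs.
# Face names and the even/odd split are illustrative assumptions.
FACES = ["front", "back", "left", "right", "top", "bottom"]

def quality_plan(gop_index):
    """Return {face: 'high' | 'low'} for one GOP.

    Even-indexed GOPs send the first three faces at high quality;
    odd-indexed GOPs send the other three. Each face is therefore
    periodically refreshed with high-quality content, which the CNN
    can exploit to enhance the low-quality frames in between.
    """
    high = set(FACES[:3]) if gop_index % 2 == 0 else set(FACES[3:])
    return {face: ("high" if face in high else "low") for face in FACES}
```

For example, a face that is low quality in GOP 0 is high quality in GOP 1, so its most recent high-quality version is at most one GOP away.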


Keywords: 360-degree video streaming, Convolutional neural network, Inter-frame enhancement
