Tampere University of Technology

TUTCRIS Research Portal

Learning Image-to-Image Translation Using Paired and Unpaired Training Samples

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Details

Original language: English
Title of host publication: Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers
Editors: C.V. Jawahar, Konrad Schindler, Greg Mori, Hongdong Li
Publisher: Springer Verlag
Pages: 51-66
Number of pages: 16
ISBN (Print): 9783030208899
DOIs
Publication status: Published - 2019
Publication type: A4 Article in a conference publication
Event: Asian Conference on Computer Vision - Perth, Australia
Duration: 2 Dec 2018 - 6 Dec 2018

Publication series

Name: Lecture Notes in Computer Science
Volume: 11362
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Asian Conference on Computer Vision
Country: Australia
City: Perth
Period: 2/12/18 - 6/12/18

Abstract

Image-to-image translation is a general name for the task of converting an image from one domain into a corresponding image in another domain, given sufficient training data. Traditionally, different approaches have been proposed depending on whether aligned image pairs or two sets of (unaligned) examples from both domains are available for training. While paired training samples can be difficult to obtain, the unpaired setup leads to a highly under-constrained problem and inferior results. In this paper, we propose a new general-purpose image-to-image translation model that can utilize both paired and unpaired training data simultaneously. We compare our method with two strong baselines and obtain both qualitatively and quantitatively improved results. Our model also outperforms the baselines in the purely paired and purely unpaired settings. To our knowledge, this is the first work to consider such a hybrid setup in image-to-image translation.
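The hybrid idea in the abstract — combining a supervised term on aligned pairs with an unsupervised term on unaligned examples — can be illustrated with a minimal sketch. This is not the paper's actual objective; the function names, the L1 reconstruction term, the non-saturating adversarial term, and the weight `lam` are all illustrative assumptions:

```python
import numpy as np

def paired_l1_loss(fake, target):
    # Supervised reconstruction term; usable only when an aligned target exists.
    return np.mean(np.abs(fake - target))

def adversarial_loss(disc_scores_on_fake):
    # Non-saturating GAN term on discriminator outputs in (0, 1);
    # needs no aligned pairs, only examples from the target domain.
    eps = 1e-8
    return -np.mean(np.log(disc_scores_on_fake + eps))

def hybrid_loss(fake, disc_scores, target=None, lam=10.0):
    # Every sample contributes the unpaired adversarial term; paired
    # samples additionally contribute a weighted reconstruction term.
    loss = adversarial_loss(disc_scores)
    if target is not None:
        loss += lam * paired_l1_loss(fake, target)
    return loss
```

A single training batch could then mix both sample types, calling `hybrid_loss` with `target=None` for the unpaired ones — one plausible way such a setup avoids the under-constrained purely unpaired regime.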
