TUTCRIS - Tampere University of Technology


Learning Image-to-Image Translation Using Paired and Unpaired Training Samples

Research output: peer-reviewed

Details

Original language: English
Title: Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, Revised Selected Papers
Editors: C.V. Jawahar, Konrad Schindler, Greg Mori, Hongdong Li
Publisher: Springer Verlag
Pages: 51-66
Number of pages: 16
ISBN (print): 9783030208899
DOI - permanent links
Status: Published - 2019
Ministry of Education (OKM) publication type: A4 Article in conference proceedings
Event: Asian Conference on Computer Vision - Perth, Australia
Duration: 2 December 2018 - 6 December 2018

Publication series

Name: Lecture Notes in Computer Science
Volume: 11362
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Conference

Conference: Asian Conference on Computer Vision
Country: Australia
City: Perth
Period: 2/12/18 - 6/12/18

Abstract

Image-to-image translation is a general name for the task of converting an image from one domain into a corresponding image in another domain, given sufficient training data. Traditionally, different approaches have been proposed depending on whether aligned image pairs or two sets of (unaligned) examples from both domains are available for training. While paired training samples may be difficult to obtain, the unpaired setup leads to a highly under-constrained problem and inferior results. In this paper, we propose a new general-purpose image-to-image translation model that is able to utilize both paired and unpaired training data simultaneously. We compare our method with two strong baselines and obtain both qualitatively and quantitatively improved results. Our model also outperforms the baselines when trained on purely paired or purely unpaired data. To our knowledge, this is the first work to consider such a hybrid setup in image-to-image translation.
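The abstract does not spell out the training objective, but one common way to realize such a hybrid setup is to combine a supervised reconstruction loss on paired samples with adversarial and cycle-consistency losses on unpaired samples. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only; the toy architectures, the loss weights `lambda_sup` and `lambda_cyc`, and the omission of the discriminator update are assumptions for brevity, not the authors' exact model.

```python
# Hypothetical sketch: mixing paired (supervised) and unpaired (adversarial +
# cycle-consistency) objectives in one generator update. Illustrative only.
import torch
import torch.nn as nn

class SmallGenerator(nn.Module):
    """Toy convolutional generator, a stand-in for a real translation network."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class SmallDiscriminator(nn.Module):
    """Toy patch-style discriminator returning per-patch logits."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G = SmallGenerator()        # maps domain X -> Y
F = SmallGenerator()        # maps domain Y -> X
D_Y = SmallDiscriminator()  # judges realism in domain Y

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

def generator_step(x_paired, y_paired, x_unpaired, lambda_sup=10.0, lambda_cyc=10.0):
    """One generator update combining paired and unpaired loss terms."""
    opt_g.zero_grad()

    # Paired term: direct L1 reconstruction against the ground-truth target.
    loss_sup = l1(G(x_paired), y_paired)

    # Unpaired terms: adversarial realism in Y plus cycle consistency back to X.
    fake_y = G(x_unpaired)
    pred = D_Y(fake_y)
    loss_adv = bce(pred, torch.ones_like(pred))
    loss_cyc = l1(F(fake_y), x_unpaired)

    loss = lambda_sup * loss_sup + loss_adv + lambda_cyc * loss_cyc
    loss.backward()
    opt_g.step()
    return loss.item()

# Example usage with random tensors standing in for image batches.
x_p, y_p = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
x_u = torch.randn(2, 3, 64, 64)
print(generator_step(x_p, y_p, x_u))
```

In a full training loop the discriminator would be updated in an alternating step, and the relative weighting of the paired and unpaired terms would typically depend on how much paired data is available.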

Publication forum level