
Generalized Multi-view Embedding for Visual Recognition and Cross-modal Retrieval

Research output: Contribution to journal › Article › Scientific › peer-review

Details

Original language: English
Pages (from-to): 2542-2555
Journal: IEEE Transactions on Cybernetics
Volume: 48
Issue number: 9
Early online date: 6 Sep 2017
DOIs
Publication status: Published - Sep 2018
Publication type: A1 Journal article-refereed

Abstract

In this paper, the problem of multi-view embedding from different visual cues and modalities is considered. We propose a unified solution for subspace learning methods using the Rayleigh quotient, which is extensible to multiple views, supervised learning, and non-linear embeddings. Numerous methods, including Canonical Correlation Analysis, Partial Least Squares regression, and Linear Discriminant Analysis, are studied within the same framework using specific intrinsic and penalty graphs. Non-linear extensions based on kernels and (deep) neural networks are derived, achieving better performance than their linear counterparts. Moreover, a novel Multi-view Modular Discriminant Analysis (MvMDA) is proposed by taking the view difference into consideration. We demonstrate the effectiveness of the proposed multi-view embedding methods on visual object recognition and cross-modal image retrieval, and obtain superior results in both applications compared to related methods.
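The abstract does not reproduce the formulation itself, but the unifying idea is maximizing a Rayleigh quotient of the form tr(W^T S W) / tr(W^T D W), where S is built from an intrinsic (cross-view similarity) graph and D from a penalty or normalization graph. As a rough illustration only, the sketch below solves a two-view, CCA-style linear embedding as a generalized eigenvalue problem; the function name, the regularization parameter reg, and the specific block-matrix construction are illustrative assumptions, not the paper's MvMDA definition.

```python
import numpy as np
from scipy.linalg import eigh

def two_view_rayleigh_embedding(X1, X2, dim=2, reg=1e-6):
    """Sketch: two-view linear embedding via Rayleigh quotient maximization.

    Solves S w = lambda D w, where S holds cross-view covariance
    (an 'intrinsic' graph in the CCA case) and D holds within-view
    scatter (the normalization / 'penalty' term). Illustrative only.
    """
    # Center each view (samples in rows).
    X1 = X1 - X1.mean(axis=0)
    X2 = X2 - X2.mean(axis=0)
    n = X1.shape[0]
    d1, d2 = X1.shape[1], X2.shape[1]

    # Covariance blocks; small ridge term keeps D positive definite.
    C12 = X1.T @ X2 / n
    C11 = X1.T @ X1 / n + reg * np.eye(d1)
    C22 = X2.T @ X2 / n + reg * np.eye(d2)

    # Block matrices for the joint projection w = [w1; w2].
    S = np.zeros((d1 + d2, d1 + d2))
    S[:d1, d1:] = C12
    S[d1:, :d1] = C12.T
    D = np.zeros((d1 + d2, d1 + d2))
    D[:d1, :d1] = C11
    D[d1:, d1:] = C22

    # Maximize w^T S w / w^T D w: generalized symmetric eigenproblem.
    eigvals, eigvecs = eigh(S, D)
    order = np.argsort(eigvals)[::-1][:dim]
    W = eigvecs[:, order]
    return W[:d1], W[d1:]  # per-view projection matrices

# Example usage with random data standing in for two feature views:
# W1, W2 = two_view_rayleigh_embedding(np.random.randn(100, 20),
#                                      np.random.randn(100, 30), dim=5)
```

In this reading, swapping in different intrinsic and penalty graphs (e.g., class-based similarity instead of raw cross-view covariance) yields the other linear methods mentioned in the abstract, while the kernel and neural-network variants replace the linear projections with non-linear mappings.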
