Transfer learning using a nonparametric sparse topic model

Research output: Contribution to journal › Article › Scientific › peer-review

Details

Original language: English
Pages (from-to): 124-137
Number of pages: 14
Journal: Neurocomputing
Volume: 112
DOIs
Publication status: Published - 18 Jul 2013
Publication type: A1 Journal article-refereed

Abstract

In many domains, data items are represented by vectors of counts; count data arise, for example, in bioinformatics or in the analysis of text documents represented as word count vectors. However, the amount of data available from an interesting data source is often too small to model that source well. When several data sets are available from related sources, exploiting their similarities through transfer learning can improve the resulting models compared to modeling each source independently. We introduce a Bayesian generative transfer learning model that represents similarity across document collections by sparse sharing of latent topics, controlled by an Indian buffet process. Unlike a prominent previous model, hierarchical Dirichlet process (HDP) based multi-task learning, our model decouples the probability that a topic is shared from the topic's strength, making it easier to share low-strength topics. In experiments, our model outperforms the HDP approach both on synthetic data and in the first of two case studies on text collections, and achieves performance comparable to the HDP approach in the second case study.
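
To make the sharing mechanism concrete, the sketch below shows how an Indian buffet process can generate a binary matrix indicating which collections use which latent topics, with topic strengths drawn separately to illustrate the decoupling the abstract describes. This is a minimal illustration under assumed names and priors (sample_ibp, alpha, a Gamma strength prior), not the authors' implementation.

    # Minimal sketch: IBP-controlled sparse topic sharing across collections.
    # All names and hyperparameters here are illustrative assumptions.
    import numpy as np

    def sample_ibp(num_collections, alpha, rng):
        """Draw a binary sharing matrix Z (collections x topics) from an IBP.

        Z[c, k] = 1 means collection c uses topic k. The first collection
        takes Poisson(alpha) topics; each later collection c reuses topic k
        with probability m_k / (c + 1), where m_k counts earlier collections
        using k, and adds Poisson(alpha / (c + 1)) new topics of its own.
        """
        columns = []  # one binary column per topic
        for c in range(num_collections):
            for col in columns:
                if rng.random() < col.sum() / (c + 1):
                    col[c] = 1
            for _ in range(rng.poisson(alpha / (c + 1))):
                new_col = np.zeros(num_collections, dtype=int)
                new_col[c] = 1
                columns.append(new_col)
        if not columns:
            return np.zeros((num_collections, 0), dtype=int)
        return np.column_stack(columns)

    rng = np.random.default_rng(0)
    Z = sample_ibp(num_collections=4, alpha=2.0, rng=rng)
    # Strengths are drawn independently of the sharing pattern, so a topic
    # shared by many collections can still be weak (and vice versa) -- the
    # decoupling contrasted with HDP-based sharing in the abstract.
    strengths = rng.gamma(shape=1.0, scale=1.0, size=Z.shape[1])
    print(Z)
    print(strengths.round(2))

In the full model, Z would gate which topics a collection's documents may draw on; under HDP-based multi-task learning, by contrast, a topic's sharing probability is tied to its strength, which is what makes low-strength topics harder to share.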

Keywords

  • Latent Dirichlet allocation, Nonparametric Bayesian inference, Small sample size, Sparsity, Topic models, Transfer learning