Nyström-based approximate kernel subspace learning
Research output: Contribution to journal › Article › Scientific › peer-review
Number of pages: 8
Publication status: Published - Sep 2016
Publication type: A1 Journal article-refereed
In this paper, we describe a method for determining a subspace of the kernel feature space that is suited to large-scale learning problems. Learning a linear model in the obtained space corresponds to learning a nonlinear model in the input space. Since the obtained feature space is determined solely from properties of the training data, the approach can be used for generic nonlinear pattern recognition: the nonlinear data mapping can be regarded as a pre-processing step that exploits nonlinear relationships among the training data. Linear techniques can subsequently be applied in the new feature space and can thus model nonlinear properties of the problem at hand. To address the inherent time and memory complexity problems of kernel learning methods, we follow an approximate learning approach. We show that the method can yield considerable speed gains while achieving very good performance. Experimental results verify our analysis.
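The abstract does not give implementation details, so the following is only a rough sketch of the general Nyström idea it builds on, not the paper's exact method: a small set of landmark points is sampled, the small landmark kernel matrix is eigendecomposed, and each training sample is mapped to an explicit finite-dimensional feature vector in which inner products approximate the full kernel. Linear models can then be trained on these features. The function names, the RBF kernel choice, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2).
    # Kernel choice is an assumption; any positive-definite kernel works.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystroem_features(X, m=50, gamma=1.0, seed=0):
    # Sample m landmark points from the training data.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    landmarks = X[idx]
    K_mm = rbf_kernel(landmarks, landmarks, gamma)  # small m x m matrix
    K_nm = rbf_kernel(X, landmarks, gamma)          # n x m cross-kernel
    # Eigendecompose only the small matrix (this is the cost saving:
    # O(m^3) instead of O(n^3), with m << n).
    vals, vecs = np.linalg.eigh(K_mm)
    vals = np.maximum(vals, 1e-12)  # guard against numerical negatives
    # Explicit feature map: Z = K_nm @ K_mm^{-1/2},
    # so that Z @ Z.T approximates the full n x n kernel matrix.
    return K_nm @ vecs @ np.diag(vals ** -0.5) @ vecs.T
```

With `m` equal to the number of training samples, `Z @ Z.T` recovers the full kernel matrix exactly; smaller `m` trades accuracy for the time and memory savings the abstract refers to, and any linear learner applied to `Z` acts as a nonlinear model in the input space.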
Keywords
- Kernel methods, Nonlinear pattern recognition, Nonlinear projection trick, Nyström approximation