## MEG Decoding with Hierarchical Combination of Logistic Regression and Random Forests

Research output: Other contribution › Scientific

### Standard

**MEG Decoding with Hierarchical Combination of Logistic Regression and Random Forests.** / Huttunen, Heikki; Gencoglu, Oguzhan; Lehmusvaara, Johannes; Vartiainen, Teemu.


### Harvard

Huttunen, H, Gencoglu, O, Lehmusvaara, J & Vartiainen, T 2014, *MEG Decoding with Hierarchical Combination of Logistic Regression and Random Forests*.

### APA

Huttunen, H., Gencoglu, O., Lehmusvaara, J., & Vartiainen, T. (2014). *MEG Decoding with Hierarchical Combination of Logistic Regression and Random Forests*.

### Vancouver

Huttunen H, Gencoglu O, Lehmusvaara J, Vartiainen T. MEG Decoding with Hierarchical Combination of Logistic Regression and Random Forests. 2014.

### Author

Huttunen, Heikki ; Gencoglu, Oguzhan ; Lehmusvaara, Johannes ; Vartiainen, Teemu. / MEG Decoding with Hierarchical Combination of Logistic Regression and Random Forests. 2014.


### RIS (suitable for import to EndNote)

TY - GEN

T1 - MEG Decoding with Hierarchical Combination of Logistic Regression and Random Forests

AU - Huttunen, Heikki

AU - Gencoglu, Oguzhan

AU - Lehmusvaara, Johannes

AU - Vartiainen, Teemu

PY - 2014

Y1 - 2014

N2 - This document describes the solution of the second-place team in the DecMeg2014 brain decoding competition hosted at Kaggle.com. The model is a hierarchical combination of logistic regression and random forests. The first layer consists of a collection of 337 logistic regression classifiers, each using data from either a single sensor (31 features) or a single time point (306 features). The resulting probability estimates are fed to a 1000-tree random forest, which makes the final decision. To adapt the model to an unlabeled subject, the classifier is trained iteratively: after initial training, the model is retrained with the unlabeled samples in the test set using their predicted labels from the first iteration.

AB - This document describes the solution of the second-place team in the DecMeg2014 brain decoding competition hosted at Kaggle.com. The model is a hierarchical combination of logistic regression and random forests. The first layer consists of a collection of 337 logistic regression classifiers, each using data from either a single sensor (31 features) or a single time point (306 features). The resulting probability estimates are fed to a 1000-tree random forest, which makes the final decision. To adapt the model to an unlabeled subject, the classifier is trained iteratively: after initial training, the model is retrained with the unlabeled samples in the test set using their predicted labels from the first iteration.

KW - Machine learning

M3 - Other contribution

ER -
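The architecture described in the abstract can be sketched in scikit-learn. This is a scaled-down illustration under stated assumptions, not the authors' code: it uses 8 sensors, 6 time points, and a 100-tree forest as placeholders for the 306 sensors, 31 time points (337 first-layer models), and 1000 trees of the actual system, and it demonstrates the self-training step on synthetic data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier


def fit_stack(X, y, n_trees=100):
    """Fit the two-layer model on trials X of shape (n_trials, n_sensors, n_times).

    First layer: one logistic regression per sensor (using its time course)
    and one per time point (using all sensors). Second layer: a random
    forest trained on the stacked first-layer probability estimates.
    """
    _, S, T = X.shape
    slicers = [(lambda X, s=s: X[:, s, :]) for s in range(S)] + \
              [(lambda X, t=t: X[:, :, t]) for t in range(T)]
    lrs = [LogisticRegression(max_iter=1000).fit(f(X), y) for f in slicers]
    # Stack P(class=1) from every first-layer model: shape (n_trials, S + T).
    P = np.column_stack([m.predict_proba(f(X))[:, 1] for f, m in zip(slicers, lrs)])
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(P, y)
    return slicers, lrs, rf


def predict_stack(model, X):
    slicers, lrs, rf = model
    P = np.column_stack([m.predict_proba(f(X))[:, 1] for f, m in zip(slicers, lrs)])
    return rf.predict(P)


def self_train(X_train, y_train, X_test, rounds=1):
    """Adapt to an unlabeled test subject: retrain on pseudo-labeled test trials."""
    model = fit_stack(X_train, y_train)
    for _ in range(rounds):
        y_pseudo = predict_stack(model, X_test)  # predicted labels, first pass
        model = fit_stack(np.concatenate([X_train, X_test]),
                          np.concatenate([y_train, y_pseudo]))
    return model


# Demo on synthetic "MEG" trials with a class-dependent mean shift.
rng = np.random.default_rng(0)
n, S, T = 120, 8, 6
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, S, T)) + 0.8 * y[:, None, None]

model = self_train(X[:80], y[:80], X[80:])
acc = (predict_stack(model, X[80:]) == y[80:]).mean()
print(f"test accuracy: {acc:.2f}")
```

Note that the first-layer probabilities fed to the forest here are in-sample estimates; a careful implementation would produce them with cross-validation to avoid overfitting the second layer.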