Tampere University of Technology

TUTCRIS Research Portal

Supervised subspace learning based on deep randomized networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Standard

Supervised subspace learning based on deep randomized networks. / Iosifidis, Alexandros; Gabbouj, Moncef.

2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). The Institute of Electrical and Electronics Engineers, Inc., 2016. p. 2584-2588.


Harvard

Iosifidis, A & Gabbouj, M 2016, Supervised subspace learning based on deep randomized networks. in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). The Institute of Electrical and Electronics Engineers, Inc., pp. 2584-2588, IEEE International Conference on Acoustics, Speech and Signal Processing. https://doi.org/10.1109/ICASSP.2016.7472144

APA

Iosifidis, A., & Gabbouj, M. (2016). Supervised subspace learning based on deep randomized networks. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2584-2588). The Institute of Electrical and Electronics Engineers, Inc. https://doi.org/10.1109/ICASSP.2016.7472144

Vancouver

Iosifidis A, Gabbouj M. Supervised subspace learning based on deep randomized networks. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). The Institute of Electrical and Electronics Engineers, Inc. 2016. p. 2584-2588. https://doi.org/10.1109/ICASSP.2016.7472144

Author

Iosifidis, Alexandros ; Gabbouj, Moncef. / Supervised subspace learning based on deep randomized networks. 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). The Institute of Electrical and Electronics Engineers, Inc., 2016. pp. 2584-2588

BibTeX

@inproceedings{ea347d953ed44492a54a90f5339d61c7,
title = "Supervised subspace learning based on deep randomized networks",
abstract = "In this paper, we propose a supervised subspace learning method that exploits the rich representation power of deep feedforward networks. In order to derive a fast, yet efficient, learning scheme we employ deep randomized neural networks that have been recently shown to provide a good compromise between training speed and performance. For optimally determining the learnt subspace, we formulate a regression problem where we employ target vectors designed to encode both the labeling information available for the training data and geometric properties of the training data, when represented in the feature space determined by the network's last hidden layer outputs. We experimentally show that the proposed approach is able to outperform deep randomized neural networks trained by using the standard network target vectors.",
keywords = "Deep Neural Networks, Network targets calculation, Supervised Subspace Learning",
author = "Alexandros Iosifidis and Moncef Gabbouj",
year = "2016",
month = "5",
day = "18",
doi = "10.1109/ICASSP.2016.7472144",
language = "English",
isbn = "9781479999880",
publisher = "The Institute of Electrical and Electronics Engineers, Inc.",
pages = "2584--2588",
booktitle = "2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",

}
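The abstract describes an ELM-style scheme: the hidden layers of the network are drawn at random and kept fixed, and only the output mapping is learned by solving a regression problem against designed target vectors. A minimal sketch of that pipeline follows, assuming plain one-hot class targets and a tanh nonlinearity; the paper's actual contribution, target vectors that also encode the geometry of the training data in the last hidden layer's feature space, is not reproduced here.

```python
import numpy as np

def train_randomized_net(X, y, n_hidden=64, depth=2, ridge=1e-2, seed=0):
    """ELM-style sketch: stack randomly initialized (untrained) hidden
    layers, then solve a ridge regression mapping the last hidden layer's
    outputs to class targets. One-hot targets stand in here for the
    paper's geometry-aware target vectors (an assumption, not the
    authors' design)."""
    rng = np.random.default_rng(seed)
    layers, H = [], X
    for _ in range(depth):
        W = rng.standard_normal((H.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        layers.append((W, b))
        H = np.tanh(H @ W + b)          # random, fixed hidden mapping
    classes = np.unique(y)
    T = (y[:, None] == classes[None, :]).astype(float)   # one-hot targets
    # Closed-form ridge solution: the output weights are the only
    # parameters actually trained.
    W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return layers, W_out, classes

def predict(X, layers, W_out, classes):
    H = X
    for W, b in layers:
        H = np.tanh(H @ W + b)
    return classes[np.argmax(H @ W_out, axis=1)]
```

Because the hidden weights are never updated, training reduces to a single linear solve, which is the speed/performance trade-off the abstract refers to.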

RIS (suitable for import to EndNote)

TY - GEN

T1 - Supervised subspace learning based on deep randomized networks

AU - Iosifidis, Alexandros

AU - Gabbouj, Moncef

PY - 2016/5/18

Y1 - 2016/5/18

N2 - In this paper, we propose a supervised subspace learning method that exploits the rich representation power of deep feedforward networks. In order to derive a fast, yet efficient, learning scheme we employ deep randomized neural networks that have been recently shown to provide a good compromise between training speed and performance. For optimally determining the learnt subspace, we formulate a regression problem where we employ target vectors designed to encode both the labeling information available for the training data and geometric properties of the training data, when represented in the feature space determined by the network's last hidden layer outputs. We experimentally show that the proposed approach is able to outperform deep randomized neural networks trained by using the standard network target vectors.

AB - In this paper, we propose a supervised subspace learning method that exploits the rich representation power of deep feedforward networks. In order to derive a fast, yet efficient, learning scheme we employ deep randomized neural networks that have been recently shown to provide a good compromise between training speed and performance. For optimally determining the learnt subspace, we formulate a regression problem where we employ target vectors designed to encode both the labeling information available for the training data and geometric properties of the training data, when represented in the feature space determined by the network's last hidden layer outputs. We experimentally show that the proposed approach is able to outperform deep randomized neural networks trained by using the standard network target vectors.

KW - Deep Neural Networks

KW - Network targets calculation

KW - Supervised Subspace Learning

U2 - 10.1109/ICASSP.2016.7472144

DO - 10.1109/ICASSP.2016.7472144

M3 - Conference contribution

SN - 9781479999880

SP - 2584

EP - 2588

BT - 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

PB - The Institute of Electrical and Electronics Engineers, Inc.

ER -