TUTCRIS - Tampere University of Technology


K-Subspaces Quantization for Approximate Nearest Neighbor Search

Research output: peer-reviewed

Standard

K-Subspaces Quantization for Approximate Nearest Neighbor Search. / Ozan, Ezgi Can; Kiranyaz, Serkan; Gabbouj, Moncef.

In: IEEE Transactions on Knowledge and Data Engineering, Vol. 28, No. 7, 2016, pp. 1722-1733.


Harvard

Ozan, EC, Kiranyaz, S & Gabbouj, M 2016, 'K-Subspaces Quantization for Approximate Nearest Neighbor Search', IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 7, pp. 1722-1733. https://doi.org/10.1109/TKDE.2016.2535287

APA

Ozan, E. C., Kiranyaz, S., & Gabbouj, M. (2016). K-Subspaces Quantization for Approximate Nearest Neighbor Search. IEEE Transactions on Knowledge and Data Engineering, 28(7), 1722-1733. https://doi.org/10.1109/TKDE.2016.2535287

Vancouver

Ozan EC, Kiranyaz S, Gabbouj M. K-Subspaces Quantization for Approximate Nearest Neighbor Search. IEEE Transactions on Knowledge and Data Engineering. 2016;28(7):1722-1733. https://doi.org/10.1109/TKDE.2016.2535287

Author

Ozan, Ezgi Can ; Kiranyaz, Serkan ; Gabbouj, Moncef. / K-Subspaces Quantization for Approximate Nearest Neighbor Search. In: IEEE Transactions on Knowledge and Data Engineering. 2016 ; Vol. 28, No. 7. pp. 1722-1733.

BibTeX - Download

@article{1e0d44411b6e4140b68b9d19a5cf0389,
title = "K-Subspaces Quantization for Approximate Nearest Neighbor Search",
abstract = "Approximate Nearest Neighbor (ANN) search has become a popular approach for performing fast and efficient retrieval on very large-scale datasets in recent years, as the size and dimension of data grow continuously. In this paper, we propose a novel vector quantization method for ANN search which enables faster and more accurate retrieval on publicly available datasets. We define vector quantization as a multiple affine subspace learning problem and explore the quantization centroids on multiple affine subspaces. We propose an iterative approach to minimize the quantization error in order to create a novel quantization scheme, which outperforms the state-of-the-art algorithms. The computational cost of our method is also comparable to that of the competing methods.",
keywords = "Approximate Nearest Neighbor Search, Vector Quantization, Large-scale learning, Big data",
author = "Ozan, {Ezgi Can} and Serkan Kiranyaz and Moncef Gabbouj",
year = "2016",
doi = "10.1109/TKDE.2016.2535287",
language = "English",
volume = "28",
pages = "1722--1733",
journal = "IEEE Transactions on Knowledge and Data Engineering",
issn = "1041-4347",
publisher = "Institute of Electrical and Electronics Engineers",
number = "7",
}

RIS (suitable for import to EndNote) - Download

TY - JOUR

T1 - K-Subspaces Quantization for Approximate Nearest Neighbor Search

AU - Ozan, Ezgi Can

AU - Kiranyaz, Serkan

AU - Gabbouj, Moncef

PY - 2016

Y1 - 2016

N2 - Approximate Nearest Neighbor (ANN) search has become a popular approach for performing fast and efficient retrieval on very large-scale datasets in recent years, as the size and dimension of data grow continuously. In this paper, we propose a novel vector quantization method for ANN search which enables faster and more accurate retrieval on publicly available datasets. We define vector quantization as a multiple affine subspace learning problem and explore the quantization centroids on multiple affine subspaces. We propose an iterative approach to minimize the quantization error in order to create a novel quantization scheme, which outperforms the state-of-the-art algorithms. The computational cost of our method is also comparable to that of the competing methods.

AB - Approximate Nearest Neighbor (ANN) search has become a popular approach for performing fast and efficient retrieval on very large-scale datasets in recent years, as the size and dimension of data grow continuously. In this paper, we propose a novel vector quantization method for ANN search which enables faster and more accurate retrieval on publicly available datasets. We define vector quantization as a multiple affine subspace learning problem and explore the quantization centroids on multiple affine subspaces. We propose an iterative approach to minimize the quantization error in order to create a novel quantization scheme, which outperforms the state-of-the-art algorithms. The computational cost of our method is also comparable to that of the competing methods.

KW - Approximate Nearest Neighbor Search

KW - Vector Quantization

KW - Large-scale learning

KW - Big data

U2 - 10.1109/TKDE.2016.2535287

DO - 10.1109/TKDE.2016.2535287

M3 - Article

VL - 28

SP - 1722

EP - 1733

JO - IEEE Transactions on Knowledge and Data Engineering

JF - IEEE Transactions on Knowledge and Data Engineering

SN - 1041-4347

IS - 7

ER -
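
The abstract above frames vector quantization as a multiple affine subspace learning problem: each centroid set lives on its own affine subspace, and an iterative procedure alternates between assigning points to subspaces and refitting them to reduce quantization error. The sketch below illustrates that general idea only; it is not the authors' algorithm from the paper, and all function names and parameters (`k_subspaces_quantize`, `dim`, `n_iter`) are hypothetical.

```python
import numpy as np

def k_subspaces_quantize(X, k=4, dim=2, n_iter=10, seed=0):
    """Toy k-subspaces learning (illustrative, not the paper's method).

    Alternates two steps: fit an affine subspace (mean + top `dim`
    principal directions) to each cluster, then reassign every point
    to the subspace that reconstructs it with the least error."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(0, k, size=n)
    means = np.zeros((k, d))
    bases = np.zeros((k, dim, d))
    for _ in range(n_iter):
        # Update step: PCA per cluster gives an affine subspace.
        for j in range(k):
            pts = X[labels == j]
            if len(pts) < dim + 1:  # re-seed empty/tiny clusters
                pts = X[rng.choice(n, dim + 1, replace=False)]
            means[j] = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - means[j], full_matrices=False)
            bases[j] = vt[:dim]
        # Assignment step: pick the subspace with least residual.
        errs = np.empty((n, k))
        for j in range(k):
            c = X - means[j]
            recon = c @ bases[j].T @ bases[j]
            errs[:, j] = ((c - recon) ** 2).sum(axis=1)
        labels = errs.argmin(axis=1)
    return means, bases, labels

def reconstruct(X, means, bases, labels):
    """Quantize each point by projecting it onto its assigned subspace."""
    out = np.empty_like(X)
    for i, j in enumerate(labels):
        c = X[i] - means[j]
        out[i] = means[j] + c @ bases[j].T @ bases[j]
    return out
```

The total reconstruction error after a few iterations should fall below the error of a single global mean, since each point keeps only its residual orthogonal to the best-fitting local subspace.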