K-Subspaces Quantization for Approximate Nearest Neighbor Search
Research output › peer-reviewed
K-Subspaces Quantization for Approximate Nearest Neighbor Search. / Ozan, Ezgi Can; Kiranyaz, Serkan; Gabbouj, Moncef.
In: IEEE Transactions on Knowledge and Data Engineering, Vol. 28, No. 7, 2016, p. 1722-1733.
RIS (suitable for import to EndNote)
TY - JOUR
T1 - K-Subspaces Quantization for Approximate Nearest Neighbor Search
AU - Ozan, Ezgi Can
AU - Kiranyaz, Serkan
AU - Gabbouj, Moncef
N1 - EXT="Kiranyaz, Serkan"
PY - 2016
Y1 - 2016
N2 - Approximate Nearest Neighbor (ANN) search has become a popular approach for performing fast and efficient retrieval on very large-scale datasets in recent years, as the size and dimension of data grow continuously. In this paper, we propose a novel vector quantization method for ANN search which enables faster and more accurate retrieval on publicly available datasets. We define vector quantization as a multiple affine subspace learning problem and explore the quantization centroids on multiple affine subspaces. We propose an iterative approach to minimize the quantization error in order to create a novel quantization scheme, which outperforms the state-of-the-art algorithms. The computational cost of our method is also comparable to that of the competing methods.
AB - Approximate Nearest Neighbor (ANN) search has become a popular approach for performing fast and efficient retrieval on very large-scale datasets in recent years, as the size and dimension of data grow continuously. In this paper, we propose a novel vector quantization method for ANN search which enables faster and more accurate retrieval on publicly available datasets. We define vector quantization as a multiple affine subspace learning problem and explore the quantization centroids on multiple affine subspaces. We propose an iterative approach to minimize the quantization error in order to create a novel quantization scheme, which outperforms the state-of-the-art algorithms. The computational cost of our method is also comparable to that of the competing methods.
KW - Approximate Nearest Neighbor Search
KW - Vector Quantization
KW - Large-scale learning
KW - Big data
U2 - 10.1109/TKDE.2016.2535287
DO - 10.1109/TKDE.2016.2535287
M3 - Article
VL - 28
SP - 1722
EP - 1733
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
SN - 1041-4347
IS - 7
ER -
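The abstract describes vector quantization as a multiple affine subspace learning problem, solved by iteratively minimizing the quantization error. The sketch below is a minimal, generic illustration of that idea: alternate between assigning each vector to the affine subspace that reconstructs it best and refitting each subspace by PCA. It is not the authors' algorithm from the paper; the function names, parameters, and re-seeding heuristic are all invented for illustration.

```python
import numpy as np

def k_subspaces(X, K=4, r=2, iters=10, seed=0):
    """Toy alternating fit of K affine subspaces of dimension r.

    Illustrative sketch only: each point is assigned to the subspace
    with the smallest reconstruction error, then every subspace is
    refit as (cluster mean + top-r PCA basis of the centered cluster).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(0, K, size=n)       # random initial assignment
    mus = np.zeros((K, d))                    # affine offsets (means)
    bases = np.zeros((K, d, r))               # orthonormal subspace bases
    for _ in range(iters):
        # Update step: refit each affine subspace from its members
        for k in range(K):
            pts = X[labels == k]
            if len(pts) < r + 1:              # re-seed tiny/empty clusters
                pts = X[rng.choice(n, r + 1, replace=False)]
            mus[k] = pts.mean(axis=0)
            _, _, Vt = np.linalg.svd(pts - mus[k], full_matrices=False)
            bases[k] = Vt[:r].T               # top-r principal directions
        # Assignment step: pick the subspace that reconstructs best
        errs = np.empty((n, K))
        for k in range(K):
            C = X - mus[k]
            proj = C @ bases[k] @ bases[k].T  # projection onto subspace k
            errs[:, k] = np.square(C - proj).sum(axis=1)
        labels = errs.argmin(axis=1)
    return labels, mus, bases

def reconstruct(X, labels, mus, bases):
    """Quantize-and-reconstruct each point via its assigned subspace."""
    out = np.empty_like(X)
    for i, k in enumerate(labels):
        c = X[i] - mus[k]
        out[i] = mus[k] + bases[k] @ (bases[k].T @ c)
    return out
```

On data that genuinely lies near a few low-dimensional affine pieces, the per-point reconstruction error of such a fit drops well below that of a single global mean or subspace, which is the intuition behind using the projection coefficients as compact quantization codes.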