Vector quantization: a review

Ze-bin WU, Jun-qing YU

Front. Inform. Technol. Electron. Eng, 2019, Vol. 20, Issue 4: 507-524. DOI: 10.1631/FITEE.1700833
Review

Abstract

Vector quantization (VQ) is an effective way to save bandwidth and storage in speech and image coding. According to their codebook generation procedures, traditional VQ methods can be divided into seven main types: tree-structured VQ, direct sum VQ, Cartesian product VQ, lattice VQ, classified VQ, feedback VQ, and fuzzy VQ. Over the past decade, quantization-based approximate nearest neighbor (ANN) search has developed rapidly, and many methods have emerged for searching images with binary codes in memory over large-scale datasets. Their most striking characteristic is the use of multiple codebooks, which has led to two kinds of codebook: the linear combination codebook and the joint codebook. This may be a trend for the future. However, these methods merely trade off speed, accuracy, and memory consumption for ANN search, and sometimes one of the three suffers. Finding a VQ method that balances speed and accuracy while consuming a moderate amount of memory therefore remains an open problem.
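To make the core idea concrete, here is a minimal illustrative sketch of vector quantization, not code from the paper itself: a codebook maps each input vector to the index of its nearest codeword, and storing that short index instead of the full vector is what saves bandwidth and storage. A toy product-quantization step (in the spirit of the Cartesian product VQ the abstract mentions) is also shown; all names and the toy codebook are hypothetical.

```python
# Illustrative VQ sketch (hypothetical example, not the paper's method).

def squared_dist(a, b):
    # Squared Euclidean distance between two equal-length vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def encode(vec, codebook):
    """Quantize: return the index of the nearest codeword."""
    return min(range(len(codebook)), key=lambda i: squared_dist(vec, codebook[i]))

def decode(idx, codebook):
    """Dequantize: reconstruct the approximate vector from its code."""
    return codebook[idx]

def pq_encode(vec, codebooks):
    """Cartesian-product (product) quantization: split the vector into equal
    subvectors and encode each with its own sub-codebook, one index per part."""
    m = len(codebooks)
    d = len(vec) // m
    return tuple(encode(vec[i * d:(i + 1) * d], codebooks[i]) for i in range(m))

# Toy 2-D codebook with 4 codewords: each vector compresses to a 2-bit index.
codebook = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
v = (0.9, 0.1)
code = encode(v, codebook)          # -> 2, since (1.0, 0.0) is nearest
approx = decode(code, codebook)     # -> (1.0, 0.0), the lossy reconstruction

# Product quantization over a 4-D vector with two 1-D sub-codebooks.
sub_codebooks = [[(0.0,), (1.0,)], [(0.0,), (1.0,)]]
pq_code = pq_encode((0.9, 0.1), sub_codebooks)   # -> (1, 0)
```

The quantization error here is the distance between `v` and `approx`; ANN methods built on multiple codebooks shrink that error without growing any single codebook, which is the speed/accuracy/memory trade-off the review discusses.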

Keywords

Approximate nearest neighbor search / Image coding / Vector quantization

Cite this article

Ze-bin WU, Jun-qing YU. Vector quantization: a review. Front. Inform. Technol. Electron. Eng, 2019, 20(4): 507‒524 https://doi.org/10.1631/FITEE.1700833

Supplementary files

FITEE-0507-19007-ZBW_suppl_1 (836 KB)

FITEE-0507-19007-ZBW_suppl_2 (91 KB)
