Vector quantization: a review

Ze-bin WU , Jun-qing YU

Front. Inform. Technol. Electron. Eng, 2019, 20(4): 507-524. DOI: 10.1631/FITEE.1700833

Review

Abstract

Vector quantization (VQ) is a very effective way to save bandwidth and storage in speech and image coding. Traditional VQ methods can be divided into seven main types according to their codebook generation procedures: tree-structured VQ, direct sum VQ, Cartesian product VQ, lattice VQ, classified VQ, feedback VQ, and fuzzy VQ. Over the past decade, quantization-based approximate nearest neighbor (ANN) search has developed rapidly, and many methods have emerged for searching large-scale image datasets in memory with binary codes. Their most distinctive characteristic is the use of multiple codebooks, which has given rise to two kinds of codebook: the linear combination codebook and the joint codebook. This may be a trend for the future. However, these methods merely strike a balance among speed, accuracy, and memory consumption for ANN search, and sometimes one of the three suffers. Finding a vector quantization method that balances speed and accuracy while consuming only moderate memory therefore remains an open problem.
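The multiple-codebook idea mentioned above can be illustrated with Cartesian product quantization, where a vector is split into subvectors and each subvector is encoded by its own small codebook. The following is a minimal sketch with toy, hand-picked codebooks (the codebook values, function names, and dimensions are illustrative assumptions, not taken from the paper):

```python
# Minimal product-quantization sketch: each subvector gets its own codebook,
# so a D-dimensional vector is stored as a few small codeword indices.
# Codebooks here are toy, hand-picked values for illustration only.

def nearest(codebook, v):
    """Index of the codeword closest to v (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def pq_encode(codebooks, v):
    """Split v into equal-length subvectors; quantize each independently."""
    m = len(codebooks)           # number of subquantizers
    d = len(v) // m              # dimension of each subvector
    return [nearest(codebooks[j], v[j * d:(j + 1) * d]) for j in range(m)]

def pq_decode(codebooks, codes):
    """Approximate the vector by concatenating the selected codewords."""
    return [x for j, c in enumerate(codes) for x in codebooks[j][c]]

# Two codebooks of four 2-D codewords each -> 4-D vectors in 2 x 2 = 4 bits.
codebooks = [
    [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)],
    [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)],
]
codes = pq_encode(codebooks, (0.9, 0.1, 0.2, 0.8))   # -> [2, 1]
approx = pq_decode(codebooks, codes)                  # -> [1.0, 0.0, 0.0, 1.0]
```

With m codebooks of k codewords each, the effective codebook size is k^m while storage and training cost grow only linearly in m, which is why multiple-codebook schemes dominate large-scale ANN search.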

Keywords

Approximate nearest neighbor search / Image coding / Vector quantization

Cite this article

Ze-bin WU, Jun-qing YU. Vector quantization: a review. Front. Inform. Technol. Electron. Eng, 2019, 20(4): 507-524. DOI: 10.1631/FITEE.1700833


Supplementary files

FITEE-0507-19007-ZBW_suppl_1

FITEE-0507-19007-ZBW_suppl_2
