Optimization for data de-duplication algorithm based on file content

Xuejun NIE, Leihua QIN, Jingli ZHOU, Ke LIU, Jianfeng ZHU, Yu WANG

Front. Optoelectron., 2010, 3(3): 308–316. DOI: 10.1007/s12200-010-0103-z
Research articles

Abstract

Content defined chunking (CDC) is a prevalent data de-duplication algorithm for removing redundant data segments in archival storage systems. Current research on CDC does not consider the unique content characteristics of different file types: chunk boundaries are determined in an essentially random way, and a single strategy is applied to all file types. Such a method has been shown to fall short of optimal performance for compound archival data. We analyze the content characteristics of different file types and propose the candidate anchor histogram (CAH) to capture them. We propose an improved strategy for determining chunk boundaries based on CAH and tune key parameters of CDC according to the data layout of the underlying data de-duplication file system (TriDFS), which can efficiently store variable-sized chunks on fixed-sized physical blocks. These strategies are evaluated with representative archival data; the results indicate that they increase the compression ratio by 16.3% and the write throughput by 13.7% on average, while decreasing the read throughput by only 2.5%.
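The abstract refers to the core CDC mechanism: a window slides over the file content and a chunk boundary is declared wherever a rolling hash of the window satisfies a fixed condition, subject to minimum and maximum chunk sizes. The following Python sketch shows only that generic boundary-selection loop; the window size, hash base, mask, and chunk-size limits below are illustrative assumptions, and the paper's CAH-based anchor selection and TriDFS-aware parameter tuning are not reproduced here.

    WINDOW = 48            # sliding-window size in bytes (assumed)
    PRIME = 257            # base of the polynomial rolling hash (assumed)
    MOD = 1 << 32          # keep the hash bounded to 32 bits
    MASK = (1 << 13) - 1   # 13-bit mask -> expected average chunk ~8 KB (assumed)
    MAGIC = 0x78           # boundary condition: (hash & MASK) == MAGIC (assumed)
    MIN_CHUNK = 2 * 1024   # minimum chunk size (assumed)
    MAX_CHUNK = 64 * 1024  # maximum chunk size (assumed)

    def cdc_chunks(data: bytes):
        """Return (offset, length) pairs for content-defined chunks of data."""
        chunks = []
        start = 0
        h = 0
        pow_out = pow(PRIME, WINDOW - 1, MOD)  # weight of the byte leaving the window
        for i, b in enumerate(data):
            j = i - start                      # position within the current chunk
            if j >= WINDOW:
                # Slide the window: remove the outgoing byte, add the incoming one.
                h = ((h - data[i - WINDOW] * pow_out) * PRIME + b) % MOD
            else:
                # Still filling the first window of the current chunk.
                h = (h * PRIME + b) % MOD
            size = j + 1
            # Cut when the hash hits the magic value (content-defined boundary),
            # respecting the minimum and maximum chunk sizes.
            if (size >= MIN_CHUNK and (h & MASK) == MAGIC) or size >= MAX_CHUNK:
                chunks.append((start, size))
                start = i + 1
                h = 0
        if start < len(data):
            chunks.append((start, len(data) - start))
        return chunks

In a de-duplication pipeline, chunks produced this way would typically be fingerprinted (e.g., with a cryptographic hash) and looked up in an index so that duplicate chunks are stored only once.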

Cite this article

Xuejun NIE, Leihua QIN, Jingli ZHOU, Ke LIU, Jianfeng ZHU, Yu WANG. Optimization for data de-duplication algorithm based on file content. Front. Optoelectron., 2010, 3(3): 308–316. https://doi.org/10.1007/s12200-010-0103-z