School of Computer Science and Technology, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
Published: 05 Sep 2010
Issue Date: 05 Sep 2010
Abstract
Content-defined chunking (CDC) is a prevalent data de-duplication algorithm for removing redundant data segments in archival storage systems. Current research on CDC does not consider the unique content characteristics of different file types: chunk boundaries are determined in an effectively random way, and a single strategy is applied to all file types. It has been shown that such a method cannot achieve optimal performance on compound archival data. We analyze the content characteristics of different file types and propose the candidate anchor histogram (CAH) to capture them. We propose an improved strategy for determining chunk boundaries based on the CAH, and we tune key parameters of CDC to the data layout of the underlying data de-duplication file system (TriDFS), which can efficiently store variable-sized chunks in fixed-sized physical blocks. These strategies are evaluated on representative archival data; the results indicate that they increase the compression ratio by 16.3% and write throughput by 13.7% on average, while decreasing read throughput by only 2.5%.
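For readers unfamiliar with the baseline, the following is a minimal sketch of plain CDC boundary detection, the "single strategy" that the paper's CAH-based approach refines. The hash function, mask, and chunk-size limits are illustrative assumptions, not the paper's parameters or code.

```python
# Baseline content-defined chunking (CDC): scan the byte stream with a
# rolling hash and place a chunk boundary wherever the hash matches a
# target bit pattern (an "anchor"). All constants here are assumptions.

MASK = 0x1FFF          # test 13 low bits -> expected average chunk ~8 KiB
MIN_CHUNK = 2 * 1024   # do not test for anchors before this size
MAX_CHUNK = 64 * 1024  # force a boundary if no anchor is found by this size

def chunk_boundaries(data: bytes):
    """Yield (start, end) byte offsets of content-defined chunks.

    Uses a shift-and-add (Gear-style) rolling hash as a simple stand-in
    for the Rabin fingerprints typically used in CDC; with a 32-bit hash,
    bytes more than 32 positions back stop influencing the value.
    """
    start = 0
    h = 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF
        size = i - start + 1
        if size < MIN_CHUNK:
            continue
        # The same anchor test is applied regardless of file type --
        # the content-agnostic behavior the abstract criticizes.
        if (h & MASK) == 0 or size >= MAX_CHUNK:
            yield (start, i + 1)
            start = i + 1
            h = 0
    if start < len(data):  # emit the trailing partial chunk
        yield (start, len(data))
```

Per the abstract, the paper's contribution replaces this uniform anchor test with a selection strategy weighted by the candidate anchor histogram of each file type, and tunes the chunk-size parameters to match TriDFS's fixed-sized physical blocks.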
Xuejun NIE, Leihua QIN, Jingli ZHOU, Ke LIU, Jianfeng ZHU, Yu WANG. Optimization for data de-duplication algorithm based on file content. Front. Optoelectron., 2010, 3(3): 308‒316. https://doi.org/10.1007/s12200-010-0103-z