A reliable power management scheme for consistent hashing based distributed key value storage systems

Nan-nan ZHAO, Ji-guang WAN, Jun WANG, Chang-sheng XIE

Front. Inform. Technol. Electron. Eng., 2016, 17(10): 994–1007. DOI: 10.1631/FITEE.1601162
Article



Abstract

Distributed key value storage systems are among the most important types of distributed storage systems currently deployed in data centers. At the same time, enterprise data centers face growing pressure to reduce their power consumption. In this paper, we propose GreenCHT, a reliable power management scheme for consistent hashing based distributed key value storage systems. It consists of a multi-tier replication scheme, a reliable distributed log store, and a predictive power mode scheduler (PMS). Instead of randomly placing the replicas of each object on a number of nodes in the consistent hash ring, we arrange the replicas of objects on nonoverlapping tiers of nodes in the ring. This allows the system to enter various power modes by powering down subsets of servers without violating data availability. The predictive PMS forecasts workloads and adapts to load fluctuation; it cooperates with the multi-tier replication strategy to provide power proportionality for the system. To preserve reliability while replicas are powered down, writes destined for standby replicas are redirected to active servers, which maintains the failure tolerance of the system. GreenCHT is implemented on top of Sheepdog, a distributed key value storage system that uses consistent hashing as its underlying distributed hash table. By replaying 12 typical real workload traces collected from Microsoft, we show that GreenCHT provides significant power savings while maintaining the desired performance: GreenCHT reduces power consumption by 35%–61%.
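The core idea of the multi-tier replication scheme can be illustrated with a small sketch. The code below is a toy model, not the paper's implementation: node names, the round-robin tier partitioning, and the MD5-based ring hash are all illustrative assumptions. It shows how placing replica i of every object in tier i lets whole tiers be powered down while the tier-0 primary remains reachable on the same node.

```python
import hashlib
from bisect import bisect_right

class TieredHashRing:
    """Toy sketch of tiered replica placement on a consistent hash
    ring (illustrative only; names and partitioning are assumptions,
    not GreenCHT's actual implementation)."""

    def __init__(self, nodes, num_tiers):
        # Partition nodes round-robin into non-overlapping tiers.
        self.tiers = [nodes[i::num_tiers] for i in range(num_tiers)]
        # Build one sub-ring (sorted hash positions) per tier.
        self.rings = [sorted((self._hash(n), n) for n in tier)
                      for tier in self.tiers]

    @staticmethod
    def _hash(key):
        # Any uniform hash works for the sketch; MD5 is an assumption.
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def replicas(self, obj_key, active_tiers=None):
        """Return one node per active tier; replica i lives in tier i."""
        active = range(len(self.rings)) if active_tiers is None else active_tiers
        h = self._hash(obj_key)
        placement = []
        for t in active:
            ring = self.rings[t]
            keys = [k for k, _ in ring]
            # Walk clockwise to the first node at or after h, wrapping.
            idx = bisect_right(keys, h) % len(ring)
            placement.append(ring[idx][1])
        return placement

ring = TieredHashRing([f"node{i}" for i in range(9)], num_tiers=3)
full = ring.replicas("object-42")            # one replica per tier
low_power = ring.replicas("object-42", [0])  # tiers 1 and 2 powered down
assert low_power[0] == full[0]               # primary stays on the same node
```

Because the tiers are disjoint, powering down tier 2 (or tiers 1 and 2) removes only secondary replicas; lookups for any object still succeed through tier 0, which is what allows the PMS to trade replica availability for power without losing data.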

Keywords

Consistent hash table (CHT) / Replication / Power management / Key value storage system / Reliability

Cite this article

Nan-nan ZHAO, Ji-guang WAN, Jun WANG, Chang-sheng XIE. A reliable power management scheme for consistent hashing based distributed key value storage systems. Front. Inform. Technol. Electron. Eng., 2016, 17(10): 994–1007. https://doi.org/10.1631/FITEE.1601162


RIGHTS & PERMISSIONS

© 2016 Zhejiang University and Springer-Verlag Berlin Heidelberg