Abstract
Data items are usually replicated in modern distributed data stores to achieve high performance and availability. However, replication entails availability-consistency and latency-consistency trade-offs, so system designers tend to choose weak consistency models, such as eventual consistency, which may result in stale reads. Since stale data items can lead to serious application-level semantic problems, we consider how to increase the probability of data recency, which provides all clients with a uniform view of recent versions of data items. In this work, we propose HARP, a framework that enhances the data recency of eventually consistent distributed data stores in an efficient and highly available way. By detecting possible stale reads, whether failures occur or not, HARP performs re-read operations to eliminate stale results only when needed, based on our analysis of write/read processes. We also present solutions for handling practical anomalies in HARP, including delayed, reordered, and dropped messages as well as clock drift, and show how to extend HARP to multiple datacenters. Finally, we implement HARP on top of Cassandra, and our experiments show that HARP effectively eliminates stale reads with low overhead (less than 6.9%) compared with the original eventually consistent Cassandra.
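The re-read idea in the abstract can be pictured with a small client-side sketch. The snippet below is only a minimal illustration under assumed APIs (a store exposing `read_with_timestamp` and a `write` that returns a version timestamp, names invented here); it is not HARP's actual staleness-detection algorithm, which the paper derives from an analysis of write/read processes and which also copes with failures, message anomalies, and clock drift across clients rather than within a single session.

```python
# Illustrative sketch only: detect a possibly stale reply and re-read just in
# that case, instead of strengthening every read. The store API and the per-key
# timestamp cache are assumptions for this sketch, not HARP's mechanism.
import time


class RecencyGuardedClient:
    def __init__(self, store, max_rereads=3, retry_interval_s=0.01):
        self.store = store                      # underlying eventually consistent store
        self.max_rereads = max_rereads          # bound on extra reads per request
        self.retry_interval_s = retry_interval_s
        self.latest_seen = {}                   # key -> newest write timestamp observed so far

    def write(self, key, value):
        ts = self.store.write(key, value)       # assume the store returns the version timestamp
        self.latest_seen[key] = max(self.latest_seen.get(key, 0), ts)
        return ts

    def read(self, key):
        value, ts = self.store.read_with_timestamp(key)
        rereads = 0
        # Possible stale read: the reply is older than a version this client already knows exists.
        while ts < self.latest_seen.get(key, 0) and rereads < self.max_rereads:
            time.sleep(self.retry_interval_s)   # give replication a chance to catch up
            value, ts = self.store.read_with_timestamp(key)
            rereads += 1
        self.latest_seen[key] = max(self.latest_seen.get(key, 0), ts)
        return value
```

Even this naive version shows the cost model the paper targets: the common-case read path is untouched, and extra work is paid only when a reply is suspected to be stale.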
Keywords
eventual consistency / high availability / data recency / stale read
Cite this article
Yu TANG, Hailong SUN, Xu WANG, Xudong LIU.
An efficient and highly available framework of data recency enhancement for eventually consistent data stores.
Front. Comput. Sci., 2017, 11(1): 88-104. DOI: 10.1007/s11704-016-6041-1
RIGHTS & PERMISSIONS
Higher Education Press and Springer-Verlag Berlin Heidelberg