
A bi-metric autoscaling approach for n-tier web applications on Kubernetes
Changpeng ZHU, Bo HAN, Yinliang ZHAO
Front. Comput. Sci., 2022, Vol. 16, Issue 3: 163101.
Container-based virtualization is becoming an alternative to traditional virtual machines because of its lower overhead and better scalability. As one of the most widely used open-source container orchestration systems, Kubernetes provides a built-in mechanism, the horizontal pod autoscaler (HPA), for dynamic resource provisioning. Because it scales pods based only on a single performance metric, CPU utilization, HPA may by default create more pods than are actually needed. Through extensive measurements of a containerized n-tier application benchmark, RUBBoS, we find that these excess pods consume additional CPU and memory and, due to interference, can even degrade application response times. Furthermore, a Kubernetes service does not balance incoming requests between old pods and the new pods created by HPA, due to stateful HTTP sessions. In this paper, we propose a bi-metric approach that scales pods based on both CPU utilization and the utilization of a thread pool, an important kind of soft resource in Httpd and Tomcat. Our approach collects the CPU and memory utilization of pods; meanwhile, it uses ELBA, a milli-bottleneck detector, to calculate the queue lengths of Httpd and Tomcat pods and thereby estimate the utilization of their thread pools. Based on the utilization of both CPU and thread pools, our approach scales up fewer Httpd and Tomcat replicas, reducing hardware resource consumption. At the same time, it leverages the preStop hook together with liveness and readiness probes to relieve the load imbalance between old and new Tomcat pods. On the containerized RUBBoS benchmark, our experimental results show that the proposed approach not only reduces CPU and memory usage by as much as 14% and 24%, respectively, compared with HPA, but also relieves the load imbalance, reducing the average response time of requests by as much as 80%. Our approach also demonstrates that it is better to scale pods on multiple metrics rather than a single one.
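To make the bi-metric idea concrete, the sketch below contrasts the standard HPA replica formula with one plausible reading of the abstract's approach: scaling out only as far as both signals (CPU utilization and thread-pool utilization) justify, which yields fewer replicas than CPU alone. The function names, targets, and the use of `min` over the two single-metric estimates are illustrative assumptions, not the paper's actual algorithm.

```python
import math

def hpa_replicas(current_replicas, utilization, target):
    """Standard Kubernetes HPA rule:
    desired = ceil(current * observed / target)."""
    return math.ceil(current_replicas * utilization / target)

def bi_metric_replicas(current_replicas, cpu_util, cpu_target,
                       pool_util, pool_target):
    """Hypothetical bi-metric rule: compute a single-metric estimate
    for CPU and for thread-pool utilization (e.g. derived from queue
    lengths, as ELBA does), then scale out only as far as the more
    conservative of the two estimates."""
    by_cpu = hpa_replicas(current_replicas, cpu_util, cpu_target)
    by_pool = hpa_replicas(current_replicas, pool_util, pool_target)
    return min(by_cpu, by_pool)

# Example: CPU looks saturated (90% vs. a 50% target) but the thread
# pool is only moderately loaded, so fewer new pods are created than
# a CPU-only autoscaler would request.
print(hpa_replicas(4, 0.9, 0.5))                    # CPU-only estimate
print(bi_metric_replicas(4, 0.9, 0.5, 0.6, 0.5))    # bi-metric estimate
```

Under these assumptions, a transient CPU spike (e.g. a milli-bottleneck) no longer triggers a scale-out on its own unless the thread pool is also under pressure, which matches the abstract's observation that HPA's single-metric rule over-provisions pods.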
Keywords: autoscaling / container / Kubernetes / n-tier web application / ELBA