Resolving the GPU responsiveness dilemma through program transformations

Qi ZHU, Bo WU, Xipeng SHEN, Kai SHEN, Li SHEN, Zhiying WANG

Front. Comput. Sci., 2018, Vol. 12, Issue 3: 545–559. DOI: 10.1007/s11704-016-6206-y
RESEARCH ARTICLE


Abstract

The emerging integrated CPU–GPU architectures make it practical for short computational kernels to exploit GPU acceleration. Evidence has shown that, on such systems, GPU control responsiveness (how soon the host program learns that a GPU kernel has completed) is essential to overall performance. This study identifies the GPU responsiveness dilemma: host busy polling responds quickly, but at the expense of high energy consumption and interference with co-running CPU programs; interrupt-based notification minimizes energy and CPU interference costs, but suffers from substantial response delays. We present a program-level solution that wakes up the host program in anticipation of GPU kernel completion. We systematically explore the design space of anticipatory wakeup schemes, realized through either a timer-delayed wakeup or kernel-splitting-based pre-completion notification. Experiments show that the proposed technique achieves the best of both worlds, high responsiveness with low power and CPU costs, for a wide range of GPU workloads.
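The trade-off the abstract describes can be illustrated with a minimal host-side sketch. This is a hypothetical CPU-only simulation, not code from the paper: a worker thread stands in for the GPU kernel, and `expected_runtime` stands in for the anticipated kernel completion time that the timer-delayed wakeup scheme relies on.

```python
import threading
import time

def fake_kernel(done_event, runtime):
    """Stands in for a GPU kernel with an approximately known duration."""
    time.sleep(runtime)
    done_event.set()

def busy_poll(done_event):
    """Responds immediately on completion, but keeps the host core busy
    (high energy cost and interference with co-running CPU programs)."""
    while not done_event.is_set():
        pass

def interrupt_wait(done_event):
    """Blocks until notified: cheap in CPU and energy, but the wakeup
    latency depends on the OS scheduler (the response-delay side of the
    dilemma)."""
    done_event.wait()

def anticipatory_wakeup(done_event, expected_runtime, margin=0.01):
    """Anticipatory scheme: sleep for most of the expected kernel time
    (timer-delayed wakeup), then busy-poll only for the short residue."""
    time.sleep(max(0.0, expected_runtime - margin))
    while not done_event.is_set():
        pass

if __name__ == "__main__":
    runtime = 0.05
    done = threading.Event()
    worker = threading.Thread(target=fake_kernel, args=(done, runtime))
    start = time.perf_counter()
    worker.start()
    anticipatory_wakeup(done, expected_runtime=runtime)
    elapsed = time.perf_counter() - start
    worker.join()
    print(f"kernel completion observed after {elapsed * 1000:.1f} ms")
```

In this sketch the host spends most of the kernel's runtime asleep and polls only near the expected completion point, approximating busy polling's responsiveness at close to interrupt-level CPU cost. The paper's kernel-splitting variant instead has the kernel itself signal shortly before it finishes, which removes the dependence on an accurate runtime estimate.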

Keywords

program transformation / GPU / integrated architecture / responsiveness

Cite this article

Qi ZHU, Bo WU, Xipeng SHEN, Kai SHEN, Li SHEN, Zhiying WANG. Resolving the GPU responsiveness dilemma through program transformations. Front. Comput. Sci., 2018, 12(3): 545‒559 https://doi.org/10.1007/s11704-016-6206-y


RIGHTS & PERMISSIONS

2018 Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature