FunctionFlow: coordinating parallel tasks

Xuepeng FAN, Xiaofei LIAO, Hai JIN

Front. Comput. Sci., 2019, 13(1): 73-85. DOI: 10.1007/s11704-016-6286-8
RESEARCH ARTICLE


Abstract

Despite the growing popularity of task-based parallel programming, today's task-parallel programming libraries and languages still provide limited support for coordinating parallel tasks. This limitation forces programmers to use additional independent components to coordinate the parallel tasks, whether third-party libraries or extra components within the same programming library or language. Moreover, mixing tasks with coordination components increases the difficulty of task-based programming and blinds schedulers to the dependencies among tasks.

In this paper, we propose FunctionFlow, a task-based parallel programming library that coordinates tasks itself, avoiding the need for additional independent coordination components. First, we use dependency expressions to represent the ubiquitous patterns of waiting for task termination. The key idea behind dependency expressions is to use && to wait for the termination of both tasks and || to wait for the termination of either task, and to allow such expressions to be combined. Second, as runtime support, we use a lightweight representation for dependency expressions, together with a suspended-task queue that schedules tasks whose prerequisites have not yet been satisfied.
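To make the && and || semantics concrete, the following minimal sketch emulates them with standard C++14 futures. The helpers wait_both and wait_any are hypothetical names introduced here only for illustration; they are not FunctionFlow's actual interface, which builds dependency expressions directly from the operators.

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int work(int id, int ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
    return id;
}

// "&&" semantics: a successor runs only after BOTH tasks terminate.
void wait_both(std::future<int>& a, std::future<int>& b) {
    a.wait();
    b.wait();
}

// "||" semantics: a successor runs once EITHER task terminates.
// Standard C++ lacks when_any, so we poll the two futures in turn.
int wait_any(std::future<int>& a, std::future<int>& b) {
    using namespace std::chrono_literals;
    for (;;) {
        if (a.wait_for(1ms) == std::future_status::ready) return a.get();
        if (b.wait_for(1ms) == std::future_status::ready) return b.get();
    }
}

int main() {
    auto t1 = std::async(std::launch::async, work, 1, 50);
    auto t2 = std::async(std::launch::async, work, 2, 20);
    wait_both(t1, t2);                        // successor of (t1 && t2)
    std::cout << "both finished\n";

    auto t3 = std::async(std::launch::async, work, 3, 50);
    auto t4 = std::async(std::launch::async, work, 4, 20);
    std::cout << "first done: " << wait_any(t3, t4) << "\n";  // (t3 || t4)
    if (t3.valid()) t3.wait();                // drain the slower task
    if (t4.valid()) t4.wait();
    return 0;
}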

Finally, we demonstrate FunctionFlow's effectiveness in two respects: a case study implementing popular parallel patterns with FunctionFlow, and a performance comparison with the state-of-the-art practice, TBB. Our demonstration shows that FunctionFlow can coordinate parallel tasks in general without involving additional components, while delivering performance comparable to TBB.
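As one illustration of the kind of parallel pattern covered by the case study, the sketch below expresses a wavefront computation, where cell (i, j) may start only after its top and left neighbours finish, i.e., the dependency (i-1, j) && (i, j-1). It is emulated here with standard C++ shared_futures under assumed semantics; FunctionFlow would state the same constraint directly as a dependency expression.

#include <future>
#include <iostream>
#include <vector>

int main() {
    const int N = 4;
    // cell[i][j] becomes ready when the task for cell (i, j) terminates.
    std::vector<std::vector<std::shared_future<void>>> cell(
        N, std::vector<std::shared_future<void>>(N));

    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            auto up   = (i > 0) ? cell[i - 1][j] : std::shared_future<void>();
            auto left = (j > 0) ? cell[i][j - 1] : std::shared_future<void>();
            cell[i][j] = std::async(std::launch::async, [=] {
                if (up.valid())   up.wait();    // "&&": both prerequisites
                if (left.valid()) left.wait();  //  must terminate first
                // ... process cell (i, j) here ...
            }).share();
        }
    cell[N - 1][N - 1].wait();  // the last cell transitively waits for all
    std::cout << "wavefront complete\n";
    return 0;
}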

Keywords

task parallel programming / task dependency / FunctionFlow / coordination patterns

Cite this article

Xuepeng FAN, Xiaofei LIAO, Hai JIN. FunctionFlow: coordinating parallel tasks. Front. Comput. Sci., 2019, 13(1): 73‒85. https://doi.org/10.1007/s11704-016-6286-8


RIGHTS & PERMISSIONS

© 2018 Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature