Compiler testing: a systematic literature analysis

Yixuan TANG, Zhilei REN, Weiqiang KONG, He JIANG

Front. Comput. Sci. ›› 2020, Vol. 14 ›› Issue (1) : 1-20. DOI: 10.1007/s11704-019-8231-0
RESEARCH ARTICLE



Abstract

Compilers are widely used infrastructures that accelerate software development, and they are expected to be trustworthy. In the literature, various testing techniques have been proposed to guarantee the quality of compilers. However, it remains difficult to comprehensively characterize and understand compiler testing. To overcome this obstacle, we propose a literature analysis framework to gain insights into the compiler testing area. First, we perform an extensive search to construct a dataset of compiler testing papers. Then, we conduct a bibliometric analysis of this dataset to identify the most productive authors, the most influential papers, and the most frequently tested compilers. Finally, we utilize association rules and collaboration networks to mine authorship patterns and the communities of interest among researchers and keywords. Several valuable results are reported. We find that the USA is the leading country, hosting the most influential researchers and institutions. The most active keyword is “random testing”. We also find that most researchers in the compiler testing area have broad interests but collaborate within small groups.
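The association rule mining step described above can be illustrated with a minimal Apriori-style sketch. The keyword sets below are hypothetical toy data, not the paper's actual corpus; the two-level counting (single keywords, then keyword pairs) mirrors how frequent co-occurring keywords and rule confidence would be computed.

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for the paper's corpus (hypothetical data, not the
# authors' actual dataset): each entry is one paper's keyword set.
papers = [
    {"random testing", "fuzzing", "C compilers"},
    {"random testing", "test-case reduction"},
    {"fuzzing", "C compilers", "random testing"},
    {"metamorphic testing", "graphics compilers"},
]

n = len(papers)
min_support = 0.5  # keep itemsets appearing in at least half the papers

# First two Apriori levels: count single keywords and keyword pairs.
item_counts = Counter()
pair_counts = Counter()
for kw in papers:
    item_counts.update(kw)
    pair_counts.update(combinations(sorted(kw), 2))

frequent_pairs = {p: c / n for p, c in pair_counts.items() if c / n >= min_support}

# Confidence of the rule "fuzzing -> random testing":
# support(fuzzing AND random testing) / support(fuzzing).
confidence = pair_counts[("fuzzing", "random testing")] / item_counts["fuzzing"]
print(frequent_pairs)
print(confidence)  # 1.0 on this toy data
```

On a real corpus the same counting would run over the full keyword vocabulary, and the frequent pairs would also serve as weighted edges of the keyword co-occurrence network.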

Keywords

software engineering / compiler theory and techniques / literature analysis / collaboration network / bibliometric analysis

Cite this article

Yixuan TANG, Zhilei REN, Weiqiang KONG, He JIANG. Compiler testing: a systematic literature analysis. Front. Comput. Sci., 2020, 14(1): 1‒20 https://doi.org/10.1007/s11704-019-8231-0


RIGHTS & PERMISSIONS

2019 Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature