Improving students’ programming quality with the continuous inspection process: a social coding perspective

Yao LU, Xinjun MAO, Tao WANG, Gang YIN, Zude LI

Front. Comput. Sci., 2020, Vol. 14, Issue 5: 145205. DOI: 10.1007/s11704-019-9023-2
RESEARCH ARTICLE


Abstract

College students majoring in computer science and software engineering need to master skills for high-quality programming. However, a rich body of research has shown that both the teaching and learning of high-quality programming are challenging and deficient in most college education systems. Recently, the continuous inspection paradigm has been widely adopted by developers on social coding sites (e.g., GitHub) as an important method to ensure the internal quality of massive code contributions. This paper presents a case in which continuous inspection is introduced into the classroom setting to improve students' programming quality. In the study, we first designed a specific continuous inspection process for students' collaborative projects and built an execution environment for the process. We then conducted a controlled experiment with 48 students from the same course over two school years to evaluate how the process affects their programming quality. Our results show that continuous inspection can help students identify their bad coding habits, master a set of good coding rules, and significantly reduce the density of code quality issues introduced into their code. Furthermore, we describe the lessons learned during the study and propose ideas for replicating and improving the process and its execution platform.
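A continuous inspection gate of the kind described above is typically wired into the build pipeline by querying the analysis server after each commit and blocking contributions that fail the quality gate. As a minimal illustrative sketch (not the paper's actual platform, and assuming SonarQube's `project_status` Web API, which reports the quality-gate verdict as JSON under `projectStatus.status`), a CI hook might decide pass/fail like this:

```python
import json

def quality_gate_passed(payload: str) -> bool:
    """Return True if a SonarQube quality-gate report has status "OK".

    `payload` is assumed to be the JSON body returned by SonarQube's
    /api/qualitygates/project_status Web API endpoint.
    """
    status = json.loads(payload)["projectStatus"]["status"]
    return status == "OK"

# Abbreviated example of the response shape; a failing gate ("ERROR")
# would cause the CI job to reject the student's commit.
sample = '{"projectStatus": {"status": "ERROR", "conditions": []}}'
if not quality_gate_passed(sample):
    print("Quality gate failed: fix reported issues before merging.")
```

In a classroom deployment, such a check would run automatically on every push, so students receive feedback on coding-rule violations before their code is merged.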

Keywords

continuous inspection / programming quality / SonarQube

Cite this article

Yao LU, Xinjun MAO, Tao WANG, Gang YIN, Zude LI. Improving students’ programming quality with the continuous inspection process: a social coding perspective. Front. Comput. Sci., 2020, 14(5): 145205 https://doi.org/10.1007/s11704-019-9023-2


RIGHTS & PERMISSIONS

© 2019 Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature