Quality assessment in competition-based software crowdsourcing

Zhenghui HU, Wenjun WU, Jie LUO, Xin WANG, Boshu LI

Front. Comput. Sci. ›› 2020, Vol. 14 ›› Issue (6) : 146207 DOI: 10.1007/s11704-019-8418-4
RESEARCH ARTICLE

Abstract

Quality assessment is a critical component of crowdsourcing-based software engineering (CBSE), as software products are developed by a crowd with unknown or varied skills and motivations. In this paper, we propose a novel metric called the project score to measure the performance of projects and the quality of products in competition-based software crowdsourcing development (CBSCD) activities. To the best of our knowledge, this is the first work to address the quality issue of CBSE from the perspective of projects rather than individual contests. In particular, we develop a hierarchical quality evaluation framework for CBSCD projects and propose two metric aggregation models for project scores. The first is a modified Squale model that can locate software modules of poor quality; the second is a clustering-based aggregation model that takes the different impacts of development phases into account. To test the effectiveness of the proposed metrics, we conduct an empirical study on TopCoder, a well-known CBSCD platform. Results show that the proposed project score is a strong indicator of the performance and product quality of CBSCD projects. We also find that the clustering-based aggregation model outperforms the Squale model, increasing the percentage-based performance evaluation criterion for aggregation models by an additional 29%. Our approach to quality assessment for CBSCD projects could help software managers assess the overall quality of a crowdsourced project consisting of programming contests.
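
The full paper details the two aggregation models; as a rough illustration of the Squale-style idea of penalizing low marks, the Python sketch below aggregates per-contest quality marks on the [0, 3] scale used by the original Squale model. The function name squale_aggregate, the lambda value of 9, and the example marks are assumptions chosen for illustration and do not reproduce the authors' modified model or their clustering-based alternative.

import math

# Illustrative sketch only (not the paper's modified model): a Squale-style
# "hard mean" that aggregates quality marks on a [0, 3] scale so that low
# marks dominate the result, which is what makes this family of models
# useful for locating modules of poor quality.

def squale_aggregate(marks, lam=9.0):
    # Map each mark m to lam ** (3 - m), average the transformed values,
    # then map the average back to the [0, 3] scale. A single poor mark
    # pulls the aggregate down far more than an arithmetic mean would.
    if not marks:
        raise ValueError("need at least one mark")
    mean_transformed = sum(lam ** (3.0 - m) for m in marks) / len(marks)
    return 3.0 - math.log(mean_transformed, lam)

# Example: three strong contest marks and one weak one.
marks = [2.8, 2.9, 2.7, 0.5]
print(squale_aggregate(marks))      # well below the plain arithmetic mean
print(sum(marks) / len(marks))      # plain mean, for comparison

Here lam = 9 is only an illustrative choice; larger values penalize low marks more heavily.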

Keywords

crowdsourcing / software engineering / product quality / competition / evaluation framework / metric aggregation

Cite this article

Zhenghui HU, Wenjun WU, Jie LUO, Xin WANG, Boshu LI. Quality assessment in competition-based software crowdsourcing. Front. Comput. Sci., 2020, 14(6): 146207. DOI: 10.1007/s11704-019-8418-4

RIGHTS & PERMISSIONS

Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature
