An evaluation framework for software crowdsourcing

Wenjun WU, Wei-Tek TSAI, Wei LI

Front. Comput. Sci., 2013, Vol. 7, Issue 5: 694–709. DOI: 10.1007/s11704-013-2320-2
RESEARCH ARTICLE


Abstract

Software crowdsourcing has recently emerged as a new area of software engineering, yet few papers have presented a systematic analysis of its practices. This paper first presents a framework for evaluating software crowdsourcing projects with respect to software quality, cost, diversity of solutions, and the competitive nature of crowdsourcing. Specifically, competitions are evaluated through the min-max relationship from game theory, in which one party tries to minimize an objective function while the other party tries to maximize the same function. The paper then defines a game-theoretic model to analyze the primary factors in these min-max competition rules that affect both the nature of participation and the resulting software quality. Finally, using the proposed evaluation framework, the paper examines two crowdsourcing processes, Harvard-TopCoder and AppStori. The framework reveals sharp contrasts between the two processes, as participants exhibit drastically different behaviors when engaging in the two projects.
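
To make the game-theoretic setting concrete, a minimal sketch of the min-max competition in standard notation (the symbols below are illustrative assumptions, not taken from the paper itself):

\min_{x \in X} \; \max_{y \in Y} \; f(x, y)

Here one party selects a strategy x \in X to minimize the shared objective f, while the opposing party selects y \in Y to maximize the same f; the equilibrium value of f characterizes the outcome of the competition. The strategy sets X, Y and the objective f are placeholders for whatever strategies and payoff a concrete crowdsourcing contest defines (e.g., a requester minimizing cost against workers maximizing reward).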

Keywords

crowdsourcing / software engineering / competition rules / game theory

Cite this article

Wenjun WU, Wei-Tek TSAI, Wei LI. An evaluation framework for software crowdsourcing. Front. Comput. Sci., 2013, 7(5): 694‒709. https://doi.org/10.1007/s11704-013-2320-2


RIGHTS & PERMISSIONS

© 2013 Higher Education Press and Springer-Verlag Berlin Heidelberg