The Surprising Power of Minimal Contributors in Crowdsourcing

TL;DR Summary
A study by researchers from multiple institutions suggests that rewarding individuals for contributing to a virtual public good, such as online ratings, can improve the accuracy and overall quality of the resource. In a simulation involving more than 500 participants, incentivizing contributions raised the share of individuals who left ratings from 35% to 70%. Free riders who responded to the incentives provided higher-quality evaluations and offset over-optimistic ratings from intrinsically motivated contributors. The findings have implications for online rating systems and other collective-action problems.
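As a rough intuition for the balancing effect described above, the toy simulation below (not the study's actual model) assumes intrinsically motivated raters carry an optimistic bias while incentivized free riders do not, so adding the latter pulls the aggregate rating back toward the true quality. Only the 35% and 70% participation shares come from the summary; every other number is an illustrative assumption.

```python
"""Toy illustration, not the study's model: how newly incentivized
free riders can offset optimistic bias from intrinsic contributors.
All parameters except the 35%/70% participation shares are assumptions."""
import random

random.seed(0)

TRUE_QUALITY = 3.0          # assumed "ground truth" score of the rated item
POPULATION = 500            # roughly matches the participant count in the summary
INTRINSIC_SHARE = 0.35      # share who rate without incentives (from summary)
INCENTIVIZED_SHARE = 0.70   # share who rate once incentives are added (from summary)

def intrinsic_rating():
    # Intrinsically motivated raters are modeled with an optimistic bias.
    return TRUE_QUALITY + 0.8 + random.gauss(0, 0.5)

def free_rider_rating():
    # Free riders who respond to incentives are modeled as unbiased.
    return TRUE_QUALITY + random.gauss(0, 0.5)

def mean(xs):
    return sum(xs) / len(xs)

# Baseline: only intrinsically motivated users leave ratings.
baseline = [intrinsic_rating() for _ in range(int(POPULATION * INTRINSIC_SHARE))]

# With incentives: the same intrinsic raters plus newly activated free riders.
extra = int(POPULATION * (INCENTIVIZED_SHARE - INTRINSIC_SHARE))
with_incentives = baseline + [free_rider_rating() for _ in range(extra)]

print(f"true quality          : {TRUE_QUALITY:.2f}")
print(f"mean, no incentives   : {mean(baseline):.2f}")
print(f"mean, with incentives : {mean(with_incentives):.2f}")
```

Under these assumed parameters, the mean rating with incentives sits noticeably closer to the true quality than the baseline mean, mirroring the balancing effect the study reports.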
Read the original article on Phys.org.