
The Surprising Power of Minimal Contributors in Crowdsourcing
A study conducted by researchers from multiple institutions suggests that rewarding individuals for contributing to a virtual public good, such as online ratings, can improve the accuracy and overall quality of the resource. In a simulation involving more than 500 participants, incentivizing contributions raised the share of individuals who left ratings from 35% to 70%. The free riders who responded to the incentives provided higher-quality evaluations and balanced out the over-optimistic ratings of intrinsically motivated contributors. The findings have implications for online rating systems and other collective-action problems.
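To make the balancing effect concrete, here is a minimal sketch of the intuition, not the study's actual design or data: if intrinsically motivated raters skew positive while incentivized former free riders rate closer to an item's true quality, then raising participation from roughly 35% to 70% by adding the latter group pulls the average rating toward the truth. The bias values, noise level, rating scale, and random seed below are all hypothetical; only the participation percentages and the rough participant count come from the summary above.

```python
import random

random.seed(42)

TRUE_QUALITY = 3.0    # assumed "ground truth" score on a 1-5 scale (hypothetical)
N_PARTICIPANTS = 500  # roughly the scale reported in the study


def rate(bias, noise=0.7):
    """Draw one rating: true quality plus a group-specific bias and noise."""
    r = random.gauss(TRUE_QUALITY + bias, noise)
    return min(5.0, max(1.0, r))  # clamp to the 1-5 rating scale


def mean(xs):
    return sum(xs) / len(xs)


# Baseline: only ~35% of participants rate, and they are the enthusiasts
# (assumed positive bias of +0.8 stars).
volunteers = [rate(bias=0.8) for _ in range(int(0.35 * N_PARTICIPANTS))]

# With incentives: participation rises to ~70%; the additional raters are
# former free riders (assumed to rate with near-zero bias).
free_riders = [rate(bias=0.0) for _ in range(int(0.35 * N_PARTICIPANTS))]
incentivized = volunteers + free_riders

print(f"true quality:           {TRUE_QUALITY:.2f}")
print(f"volunteers only (35%):  {mean(volunteers):.2f}")
print(f"with incentives (70%):  {mean(incentivized):.2f}")
```

Under these assumptions, the volunteer-only average overshoots the true quality, while the mixed pool of volunteers and incentivized raters lands noticeably closer to it, which is the pattern the study attributes to responsive free riders.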