Abstract. The idea of the wisdom of the crowd is that integrating multiple estimates from a group of individuals provides an outcome that is often better than most of the underlying estimates, or even better than the best individual estimate. In this paper, we examine the wisdom of the crowd principle using the example of spatial data collection by paid crowdworkers. We developed a web-based user interface for the collection of vehicles from rasterized shadings derived from 3D point clouds and executed different data collection campaigns on the crowdsourcing marketplace microWorkers. Our main question is: how large must the crowd be for the quality of the outcome to fulfil the quality requirements of a specific application? To answer this question, we computed precision, recall, F1 score, and geometric quality measures for different crowd sizes. We found that increasing the crowd size improves the quality of the outcome. This improvement is quite large at the beginning and gradually decreases with larger crowd sizes. These findings confirm the wisdom of the crowd principle and help to find an optimal crowd size, which is ultimately a compromise between data quality and the cost and time required to perform the data collection.
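The quality measures named above can be sketched as follows. This is a minimal illustration of how precision, recall, and F1 score are computed from detection counts; the counts in the example are invented for illustration and are not results from the paper.

```python
# Sketch: precision, recall, and F1 score for crowd-collected detections
# compared against a reference data set. All counts are illustrative.

def quality_metrics(true_positives: int, false_positives: int, false_negatives: int):
    """Return (precision, recall, f1) from detection counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical comparison of a small and a larger crowd:
print(quality_metrics(80, 20, 40))   # small crowd: more misses
print(quality_metrics(110, 15, 10))  # larger crowd: fewer misses
```

As the abstract notes, one would expect all three scores to rise with crowd size, with diminishing returns.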
Abstract. In this article, we present a two-level approach for the crowd-based collection of vehicles from 3D point clouds. In the first level, the crowdworkers are asked to identify the coarse positions of vehicles in 2D rasterized shadings that were derived from the 3D point cloud. In order to increase the quality of the results, we utilize the wisdom of the crowd principle, which says that averaging multiple estimates of a group of individuals provides an outcome that is often better than most of the underlying estimates, or even better than the best estimate. For this, each crowd job is duplicated 10 times and the multiple results are integrated with the DBSCAN clustering algorithm. In the second level, we use the integrated results as pre-information for extracting small subsets of the 3D point cloud that are then presented to crowdworkers for approximating the included vehicle by means of a Minimum Bounding Box (MBB). Again, the crowd jobs are duplicated 10 times and an average bounding box is calculated from the individual bounding boxes. We will discuss the quality of the results of both steps and show that the wisdom of the crowd significantly improves the completeness as well as the geometric quality. With a tenfold acquisition, we achieved a completeness of 93.3 percent and a geometric deviation of less than 1 m for 95 percent of the collected vehicles.
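The first-level integration described above can be sketched as follows: duplicated worker annotations are clustered with DBSCAN, and each cluster is averaged into a single vehicle position, while isolated stray clicks are discarded as noise. The function names, parameters (`eps`, `min_pts`), and click coordinates below are illustrative assumptions, not taken from the paper; the paper's actual implementation and parameter choices may differ.

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2D points; returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    next_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1                 # provisional noise
            continue
        labels[i] = next_id                # start a new cluster at a core point
        seeds = [j for j in neigh if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:            # noise reachable from a core point -> border
                labels[j] = next_id
            if labels[j] is not None:
                continue
            labels[j] = next_id
            j_neigh = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(j_neigh) >= min_pts:    # expand only from core points
                seeds.extend(k for k in j_neigh if labels[k] is None)
        next_id += 1
    return labels

def integrate(points, eps=1.0, min_pts=3):
    """Average the positions inside each cluster to one vehicle estimate."""
    labels = dbscan(points, eps, min_pts)
    clusters = {}
    for p, label in zip(points, labels):
        if label >= 0:
            clusters.setdefault(label, []).append(p)
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in clusters.values()]

# Ten simulated worker clicks: two vehicles plus one stray click
clicks = [(10.0, 5.1), (10.2, 5.0), (9.9, 4.9), (10.1, 5.2), (10.0, 5.0),
          (30.0, 8.0), (30.1, 8.2), (29.9, 7.9), (30.2, 8.1),
          (55.0, 1.0)]
print(integrate(clicks))  # two averaged positions; the stray click is dropped as noise
```

The same averaging idea carries over to the second level, where the individual bounding boxes of a vehicle are averaged into one MBB.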
Abstract. Non-commercial, unpaid crowdsourcing is the basis of many non-profit projects on the Internet such as Wikipedia or OpenStreetMap. A prerequisite for such projects to be successful is to find a sufficient number of volunteer crowdworkers who are intrinsically motivated to participate. In the field of geodata collection, many tasks exist that could in principle be solved with crowdsourcing; however, finding a large number of volunteers is problematic. In addition to crowdsourcing based on voluntary collaboration, there is also paid crowdsourcing, where the main incentive for participation is payment for the work. Thus, intrinsic motivation is replaced by extrinsic motivation. However, simply replacing intrinsic motivation with extrinsic motivation can lead to reduced performance: if monetary payment is not complemented by additional intrinsic incentives, crowdworkers may perform only as much work as is necessary to satisfy the employer. Gamification may positively influence the motivation of paid crowdworkers. The goal of this paper is to investigate whether it is possible to increase the performance of paid crowdworkers with gamification. To this end, we have developed a web-based tool for the labelling of 3D triangle meshes. We presented this tool with and without game elements to paid crowdworkers and investigated to what extent gamification influenced the quality and quantity of the collected data.