2018
DOI: 10.15346/hc.v5i1.2

Human Computation vs. Machine Learning: an Experimental Comparison for Image Classification

Abstract: Image classification is a classical task heavily studied in computer vision and widely required in many concrete scientific and industrial scenarios. Is it better to rely on human eyes, thus asking people to classify pictures, or to train a machine learning system to automatically solve the task? The answer largely depends on the specific case and the required accuracy: humans may be more reliable - especially if they are domain experts - but automatic processing can be cheaper, even if less capable to demonst…

Cited by 8 publications (11 citation statements)
References 18 publications

Citation statements:
“…2018). Several groups have used the "Cities at Night" catalogue as a training sample for computer vision purposes, including Minh Hieu (2016), Calegari et al (2018) and Sadler (2018). Based on this catalogue it can be seen that ISS night-time photos do not represent all parts of the world, and are more common in the urban areas of North America, Europe, the Middle East, eastern China and Japan (Figure 15a).…”
Section: Citizen Science: Cities At Night
confidence: 99%
“…Therefore, whenever an assessment of the task difficulty is required, the number of collected contributions can be adopted as a proxy measure. In our previous work [18] we indeed demonstrated that this empirical measure of difficulty is highly correlated with the (lack of) confidence value resulting from machine learning classifiers applied to the same data.…”
Section: Requirement Satisfaction
confidence: 61%
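The correlation asserted in this excerpt can be checked directly once each item's contribution count (the empirical difficulty proxy) is aligned with the classifier's confidence on the same item. A minimal sketch in Python, using Spearman rank correlation on illustrative numbers rather than the paper's data:

    # Sketch: correlating the empirical difficulty proxy (contributions
    # an item needed) with classifier confidence. Data is illustrative.
    from scipy.stats import spearmanr

    # contributions[i]: player judgments item i required before consensus;
    # confidence[i]: the classifier's confidence on the same item.
    contributions = [3, 3, 4, 7, 9, 12, 5, 4]
    confidence = [0.97, 0.97, 0.93, 0.70, 0.55, 0.40, 0.84, 0.90]

    rho, p = spearmanr(contributions, confidence)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    # A strongly negative rho matches the reported pattern: items that
    # need more human contributions get lower machine confidence.

Rank correlation fits here because the claim concerns a monotonic relationship (more contributions, lower confidence), not a linear one.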
“…To evaluate the proposed truth inference algorithm we performed a comparative assessment with alternative solutions, on the basis of the data collected through two different GWAPs: the LCV Game [19] and Night Knights [18].…”
Section: Discussion
confidence: 99%
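The truth inference algorithm itself is not quoted here; the baseline such comparative assessments commonly include is plain majority voting over player answers. A minimal sketch with hypothetical GWAP data (item names and labels are made up, not drawn from the LCV Game or Night Knights):

    # Sketch: majority-vote truth inference, the usual baseline in
    # comparisons like the one described above. Answers are hypothetical.
    from collections import Counter

    def majority_vote(answers):
        """Infer one label per item from crowd answers {item: [labels]}."""
        return {item: Counter(labels).most_common(1)[0][0]
                for item, labels in answers.items()}

    answers = {
        "img_01": ["city", "city", "stars"],
        "img_02": ["stars", "stars", "stars"],
    }
    print(majority_vote(answers))  # {'img_01': 'city', 'img_02': 'stars'}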
“…Indeed, it can happen that what is "difficult" to predict for an algorithm (i.e., predictions with low confidence metrics) is also difficult for humans to judge; the case of questionable image classification is illustrated in [26], where a correspondence is shown between low-confidence machine classifications and user disagreement. The correlation between human and machine predictions and their respective confidence/reliability can be exploited to understand the reasons behind a model, and can therefore improve both the modelling phase (by incorporating additional human knowledge in training) and the generation of explanations (which can be closer to human understanding).…”
Section: Human and Machine Confidence
confidence: 99%
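One concrete way to act on the correspondence described in [26] is to flag items where the model's confidence is low and the human votes disagree, with disagreement measured as the entropy of the vote distribution. A sketch under assumed data and thresholds (neither comes from the paper):

    # Sketch: flag items where low machine confidence coincides with
    # human disagreement (vote entropy). Data and thresholds are assumed.
    import math
    from collections import Counter

    def vote_entropy(votes):
        """Shannon entropy (bits) of a list of categorical votes."""
        n = len(votes)
        return -sum((c / n) * math.log2(c / n) for c in Counter(votes).values())

    items = {
        "img_01": {"model_conf": 0.42, "votes": ["city", "stars", "city", "stars"]},
        "img_02": {"model_conf": 0.95, "votes": ["stars"] * 4},
    }

    for name, it in items.items():
        if it["model_conf"] < 0.5 and vote_entropy(it["votes"]) > 0.8:
            print(f"{name}: hard for both the classifier and the crowd")

Items flagged this way are exactly the ones the excerpt suggests feeding back into training or into human-readable explanations.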