2011
DOI: 10.1177/0165551511415584

A comparative assessment of answer quality on four question answering sites

Abstract: Question answering (Q&A) sites, where communities of volunteers answer questions, may provide faster, cheaper, and better services than traditional institutions. However, like other Web 2.0 platforms, user-created content raises concerns about information quality. At the same time, Q&A sites may provide answers of different quality because they have different communities and technological platforms. This paper compares answer quality on four Q&A sites: Askville, WikiAnswers, Wikipedia Reference Desk, and Yahoo! Answers. …

Cited by 77 publications (84 citation statements)
References 26 publications (78 reference statements)
“…First, prior research has theorized or assumed that ratings are indicators of quality (Fichman, 2011; Sun, 2012). The underlying logic behind this conjecture is that crowds of individuals have the ability to make quality determinations or make decisions more accurately than experts even though the individuals in the crowd may use their own criteria and non-standardized processes to make ratings (Surowiecki, 2004).…”
Section: Online Ratings (mentioning)
confidence: 99%
“…Using a three-point scale (-1 for low quality, 0 for neither high nor low quality (or not applicable), and 1 for high quality), three PhD researchers coded each post along each of Fichman's (2011) dimensions of quality (accuracy, completeness, and verifiability). After coding, I summed the values across all three dimensions, which resulted in a scale from -3 (low quality on all three dimensions) to +3 (high quality on all three dimensions).…”
Section: Contribution Quality (mentioning)
confidence: 99%
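
As a reading aid, here is a minimal Python sketch of the scoring arithmetic described in the quoted passage; the data structure and function name are hypothetical illustrations, not code from the cited study.

```python
# Hypothetical illustration of the coding scheme quoted above: each post is
# coded -1, 0, or +1 on accuracy, completeness, and verifiability, and the
# three codes are summed into a single score ranging from -3 to +3.

DIMENSIONS = ("accuracy", "completeness", "verifiability")

def quality_score(codes):
    """Sum per-dimension codes (-1, 0, or +1) into a -3..+3 quality score."""
    for dim in DIMENSIONS:
        if codes[dim] not in (-1, 0, 1):
            raise ValueError(f"invalid code for {dim}: {codes[dim]}")
    return sum(codes[dim] for dim in DIMENSIONS)

# Example: high on accuracy and completeness, low on verifiability -> score 1.
print(quality_score({"accuracy": 1, "completeness": 1, "verifiability": -1}))
```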
“…In situations in which there are labels derived from multiple contributors, it is possible to use, for example, the majority vote system to determine a label, but this treats all annotators equally and is of little value if the vast majority of the contributions are of low quality (Raykar and Yu 2012). If the identity of a contributor is known, it may be possible to build up levels of trust in their work over time and rate by reputation (Fichman 2011). There are still, however, concerns such as how a rated contributor copes with cases of new, previously unseen classes.…”
Section: Introduction (mentioning)
confidence: 99%
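
For context, a minimal Python sketch of the majority-vote label aggregation the quoted passage refers to; the helper name is hypothetical. It treats every annotator equally, which is precisely the limitation the passage raises.

```python
from collections import Counter

def majority_label(labels):
    """Return the most frequent label among annotators' votes.
    Ties are broken arbitrarily here, and every vote counts equally
    regardless of annotator quality or reputation."""
    if not labels:
        raise ValueError("no labels provided")
    return Counter(labels).most_common(1)[0][0]

# Example: three annotators label the same item; the majority label wins.
print(majority_label(["high quality", "low quality", "high quality"]))
```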