2011
DOI: 10.1016/j.dss.2010.11.009
Exploring determinants of voting for the “helpfulness” of online user reviews: A text mining approach

Cited by 536 publications (411 citation statements)
References 22 publications
“…These mixed findings might stem from (1) methodological shortcomings, such as a cross-sectional context and an inability to control for unobserved differences, including product quality (Zhu and Zhang 2010), or (2) the inability of numeric cues to do justice to the nuanced, fine-grained, and expressive nature of verbatim reviews (Cao, Duan, and Gan 2011; Pavlou and Dimoka 2006; Singh, Hillmer, and Ze 2011). Making use of recent advances in text analytics to systematically analyze large collections of customer review verbatim, and taking a dynamic perspective that is more reflective of the rapid, continual changes in user-generated content (Tirunillai and Tellis 2012), may clarify the impacts of review content on conversion rates (Chevalier and Mayzlin 2006; Mudambi and Schuff 2010).…”
Section: Conceptual Foundations (mentioning)
confidence: 99%
“…We aggregated all review scores for the same product to derive a mean level for each week, that is, an intensity percentage or summary score between -1 and 1, depending on relative intensity of negative or positive affective content across all reviews for that product in a given week. Because review titles are particularly prominent, we mined and conducted separate calculations for title and text intensities similar to Cao, Duan, and Gan (2011). The aggregation is as follows:…”
Section: Measurement Development (mentioning)
confidence: 99%
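The aggregation step that the excerpt describes, a per-product weekly mean of signed affect scores, computed separately for review titles and review bodies, can be sketched as follows. This is an illustrative reconstruction, not the authors' actual formula; the input keys (`product`, `week`, `title_score`, `text_score`) are hypothetical names, and each score is assumed to already lie in [-1, 1].

```python
from collections import defaultdict
from statistics import mean

def weekly_intensity(reviews):
    """Aggregate per-review affect scores into a weekly mean for each
    (product, week) pair, separately for title and text intensities.

    `reviews` is an iterable of dicts with hypothetical keys:
    'product', 'week', 'title_score', 'text_score', where each score
    is a signed value in [-1, 1] (negative = negative affect).
    """
    buckets = defaultdict(lambda: {"title": [], "text": []})
    for r in reviews:
        key = (r["product"], r["week"])
        buckets[key]["title"].append(r["title_score"])
        buckets[key]["text"].append(r["text_score"])
    # The weekly summary score stays in [-1, 1], since it is a mean
    # of values that are each in [-1, 1].
    return {
        key: {
            "title_intensity": mean(scores["title"]),
            "text_intensity": mean(scores["text"]),
        }
        for key, scores in buckets.items()
    }
```

Keeping title and text intensities separate mirrors the excerpt's point that titles are especially prominent and therefore worth measuring on their own.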
“…This may seem like a weak measure of quality, but the history of automatic quality assessment is saturated with findings that length is the best predictor of quality. This holds true both for answers to questions (Surdeanu et al., 2011; Beygelzimer et al., 2015) and for e-commerce reviews (Cao et al., 2011; Racherla and Friske, 2012). Of course, length is not as shallow as it may seem at first; given no strong incentive for authors to leave long comments, length is likely a proxy for the thoroughness of the comment.…”
Section: Discussion (mentioning)
confidence: 99%
“…Understandability was measured using surface-level characteristics. Structural features were calculated as the number of characters per word, the number of words, and the fraction of long words, i.e., words containing 7 or more characters [4].…”
Section: Proposed System (mentioning)
confidence: 99%
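The surface-level features named in the excerpt are straightforward to compute. A minimal sketch, assuming whitespace tokenization (a simplification; the cited work does not specify its tokenizer):

```python
def surface_features(text):
    """Compute the surface-level structural features described above:
    mean characters per word, total word count, and the fraction of
    'long' words (words with 7 or more characters).

    Whitespace tokenization is an illustrative simplification.
    """
    words = text.split()
    n = len(words)
    if n == 0:
        return {"chars_per_word": 0.0, "num_words": 0, "long_word_fraction": 0.0}
    return {
        "chars_per_word": sum(len(w) for w in words) / n,
        "num_words": n,
        "long_word_fraction": sum(1 for w in words if len(w) >= 7) / n,
    }
```

For example, `surface_features("helpful reviews are long")` counts 4 words, of which 2 ("helpful", "reviews") are long, giving a long-word fraction of 0.5.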