Abstract: With the rapid growth of the Internet, the ability of users to create and publish content has created active electronic communities that provide a wealth of product information. However, the high volume of reviews typically published for a single product makes it harder for individuals as well as manufacturers to locate the best reviews and understand the true underlying quality of a product. In this paper, we re-examine the impact of reviews on economic outcomes like product sales and see how d…
“…For instance, Amazon.com provides a service that displays the top two most helpful, favourable, and critical reviews posted by online users in order to help its customers evaluate each displayed product easily. These useful votes are generally believed not only to be an indicator of review diagnosticity to separate the useful reviews from the rest (Mudambi & Schuff, 2010), but also to be a signalling cue for users to filter numerous reviews efficiently (Ghose & Ipeirotis, 2008). In other words, the useful information in a review may assist customers to evaluate the attributes of the service so as to build confidence in the source (Gupta & Harris, 2010).…”
Section: Perceived Usefulness Of Online Reviews
confidence: 99%
“…To be more specific, the dependent variable is online review usefulness (PU) measured by counting the number of online users who voted that the reviews were useful in response to the reviews posted (Ghose & Ipeirotis, 2010; Jin, Lu & Shi, 2002). The independent variables, including the messenger's name, address, and real photo, are binary variables to be measured as '1' if they disclose information and '0' otherwise.…”
Section: Operationalization Of Data Variables
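The coding scheme quoted above (a vote count as the dependent variable, binary disclosure indicators as independent variables) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; the field names (`votes_useful`, `reviewer_name`, and so on) are hypothetical placeholders.

```python
def operationalize(review: dict) -> dict:
    """Map a raw review record to the study's variables, as described above.

    Field names are illustrative assumptions, not taken from the paper.
    """
    return {
        # Dependent variable PU: count of "useful" votes the review received
        "PU": int(review.get("votes_useful", 0)),
        # Independent variables: 1 if the reviewer discloses the cue, 0 otherwise
        "name_disclosed": 1 if review.get("reviewer_name") else 0,
        "address_disclosed": 1 if review.get("reviewer_address") else 0,
        "photo_disclosed": 1 if review.get("reviewer_photo") else 0,
    }


sample = {"votes_useful": 12, "reviewer_name": "A. Smith", "reviewer_photo": None}
print(operationalize(sample))
# {'PU': 12, 'name_disclosed': 1, 'address_disclosed': 0, 'photo_disclosed': 0}
```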
“…In a context in which evaluation objects are ideas represented through text, the readability of that text is critical, as readability has been defined as the ease of understanding or comprehension of text due to the style of writing (Klare 1963). Readability relates directly to the cognitive effort of understanding text (Ghose and Ipeirotis 2011). Existing research has shown that readability strongly influences human decision processes.…”
Section: Readability Of Ideas
confidence: 99%
“…Existing research has shown that readability strongly influences human decision processes. For instance, Ghose and Ipeirotis (2011) show that higher readability of user-generated online reviews positively influences purchase decisions. Similarly, Tan et al. (2014) found that readability moderates the effect between the presentation of financial reports and the investors' earning evaluations.…”
Section: Readability Of Ideas
confidence: 99%
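The readability construct discussed in these excerpts is typically operationalized with a standard formula. The sketch below computes the Flesch Reading Ease score, one common such metric (higher scores mean easier text); it is offered as an illustration, not as the specific measure used in the cited studies, and the syllable counter is a deliberately naive vowel-group heuristic.

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: count runs of consecutive vowels (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease = 206.835 - 1.015*(words/sentences)
                                     - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * len(words) / len(sentences)
            - 84.6 * syllables / len(words))


print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```

A production analysis would use a tested library rather than this heuristic, since syllable counting drives most of the error in Flesch-style scores.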
“…In this context, ideas are often poorly written, contain spelling errors, and are difficult to understand (Blohm et al. 2011a, Ghose and Ipeirotis 2011). During the phase of information evaluation, blurry mental representations of an idea may lead users to ignore important information cues or to misinterpret them.…”
Information technology (IT) has created new patterns of digitally mediated collaboration that allow open-sourcing of ideas for new products and services. These novel sociotechnical arrangements afford fine-grained manipulation of how tasks can be represented and have changed the way organizations ideate. In this paper, we investigate differences in behavioral decision-making resulting from IT-based support of open idea evaluation. We report results from a randomized experiment of 120 participants comparing IT-based decision-making support using a rating scale (representing a judgment task) and a preference market (representing a choice task). We find that the rating scale-based task invokes significantly higher perceived ease of use than the preference market-based task and that perceived ease of use mediates the effect of the task representation treatment on the users' decision quality. Furthermore, we find that the understandability of ideas being evaluated, which we assess through the ideas' readability, and the perception of the task's variability moderate the strength of this mediation effect, which becomes stronger with increasing perceived task variability and decreasing understandability of the ideas. We contribute to the literature by explaining how perceptual differences of task representations for open idea evaluation affect the decision quality of users and translate into differences in mechanism accuracy. These results enhance our understanding of how crowdsourcing as a novel mode of value creation may effectively complement traditional work structures.
Online word‐of‐mouth (WOM) can impact consumers’ product evaluations, purchase intentions, and choices—but when does it do so? How do those receiving WOM know whether to rely on a particular message? This article suggests that the multiple players involved in online WOM (receivers, senders, sellers, platforms, and other consumers) each have their own interests, which are often in conflict. Thus, receivers of WOM are faced with a judgment task in deciding what information to rely on: They must make inferences about the product in question and about the players who provide or present WOM. To do so, they use signals embedded in various components of WOM, such as average star ratings, message content, or sender characteristics. The product and player information provided by these signals shapes the impact of WOM by allowing receivers to make inferences about (a) their likelihood of product satisfaction, and (b) the trustworthiness of WOM players, and therefore the trustworthiness of their content. This article summarizes how each player changes the impact of online WOM, providing a lens for understanding the current literature in online WOM, offering insights for theory in this context, and opening up pathways for future research.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.