2015
DOI: 10.1007/s10640-015-9905-1
Are Fast Responses More Random? Testing the Effect of Response Time on Scale in an Online Choice Experiment

Abstract: Scepticism over stated preference surveys conducted online revolves around concerns over "professional respondents" who might rush through the questionnaire without sufficiently considering the information provided. To gain insight into the validity of this phenomenon and test the effect of response time on choice randomness, this study makes use of a recently conducted choice experiment survey on the ecological and amenity effects of an offshore windfarm in the UK. The positive relationship between self-rated a…

Cited by 70 publications (49 citation statements)
References 48 publications
“…Haaijer, Kamakura, and Wedel 2000; Rose and Black 2006; Otter, Allenby, and van Zandt 2008; Vista, Rosenberger, and Collins 2009; Börger 2015), we show that accounting for the role of response latency is important for goodness-of-fit measures. Our results reinforce previous findings (e.g.…”
Section: Results
confidence: 79%
“…It is, however, somewhat surprising that the potential impacts of response latency in choice experiments have been the subject of relatively few investigations (e.g. see Holmes et al 1998; Haaijer, Kamakura, and Wedel 2000; Rose and Black 2006; Otter, Allenby, and van Zandt 2008; Brown et al 2008; Bonsall and Lythgoe 2009; Vista, Rosenberger, and Collins 2009; Hess and Stathopoulos 2011; Börger 2015). Though interest in this topic has clearly increased recently, Bonsall and Lythgoe (2009) note that there is considerable scope for more research.…”
Section: Introduction
confidence: 99%
“…This has led several researchers to voice concerns regarding respondent and response quality in such panels (see e.g. Börger 2016; Börjesson and Fosgerau 2015; Gao et al 2016b; Hess et al 2013; Lindhjem and Navrud 2011a; Windle and Rolfe 2011; Baker et al 2010; Heerwegh and Loosveldt 2008). These concerns range from speeding behavior and inattentiveness to respondents trying to qualify for panel membership or survey participation by acting dishonestly (fraudulently).…”
Section: Introduction
confidence: 99%
“…These concerns range from speeding behavior and inattentiveness to respondents trying to qualify for panel membership or survey participation by acting dishonestly (fraudulently). These "low-quality" respondents have been identified through low-probability screening questions (Gao et al 2016b; Jones et al 2015b), "trap questions" or "red herrings" (Baker and Downes-Le Guin 2007; Baker et al 2010), and hidden speeding metrics (Börger 2016). Once identified, these respondents have commonly been excluded from the panel or analysis (Jones et al 2015b; Baker et al 2010).…”
Section: Introduction
confidence: 99%
“…Also, neglecting decision uncertainty results in underestimation of welfare measures (Dekker et al, 2016). A few studies have quantified decision uncertainty by asking follow-up questions after each choice task and incorporating these self-reported responses as explanatory variables or in other structural forms (Lundhede et al, 2009; Olsen et al, 2011; Beck et al, 2013; Börger, 2016).…”
Section: Behavioral Implications
confidence: 99%