2018
DOI: 10.1177/0894439317750089
Measurement Reliability, Validity, and Quality of Slider Versus Radio Button Scales in an Online Probability-Based Panel in Norway

Abstract: Little is known about reliability and validity in web surveys, although this information is crucial for evaluating how accurate the results might be and/or for correcting measurement errors. In particular, few studies of web surveys are based on probability-based samples, examine web-specific response scales, and consider the impact of having smartphone respondents. In this article, we start filling these gaps by estimating the measurement quality of sliders compared to radio button scales control…

Cited by 24 publications (25 citation statements). References 33 publications (36 reference statements).
“…However, in terms of answer distributions, Buskirk and Andrus (2014) found no differences between PC and smartphone completion on slider bar questions in a randomized experiment in an online opt-in panel. Bosch et al (2019) found that slider bars do not threaten data quality even if smartphone respondents are included in the sample.…”
Section: Device Effects
confidence: 93%
“…Some studies advocate for slider bars [45], some advocate for radio buttons [46], and some present them as having equally good characteristics [42]. Finally and most importantly, some scholars have argued that their relative merits depend on the circumstances, including, for instance, questionnaire topic or screen size, which demands a case-by-case assessment of measurement quality [25]. This is what we have done in the present article, based on the topic of entrepreneurial competences and desktop computers.…”
Section: Measurement Scales
confidence: 99%
“…Although it would seem reasonable to expect more response alternatives to lead to better solutions, there are indications that the practical number of response alternatives is limited by our cognitive abilities [20,21], even if it is possible to find solutions to some cognitive limitations [22,23]. Comparing response scales still constitutes an open research problem [24], and evidence that takes into account the multiple facets of their measurement quality (e.g., reliability, validity, and bias) is still lacking [25]. In this respect, it must also be taken into account that answering questions on a screen, as opposed to using paper and pencil, tends to change the rules of the game, and that computer-assisted questionnaires are extremely diverse in nature [25,26].…”
Section: Introduction
confidence: 99%
“…A slider format was adopted for this population of adolescent participants because of its potential to increase engagement through a more interactive approach to responding (Roster, Lucianetti, & Albaum, 2015). The starting point for the marker was the middle of the scale, which Bosch, Revilla, DeCastellarnau, and Weber (2019) note is useful for reducing response style bias and extreme responding. To reinforce the meaning and polarity of the opposing statements, numerical labels appeared at each point along the 11-point continuum as the participant moved the marker along the slider.…”
Section: Instrument
confidence: 99%