2012
DOI: 10.1111/1475-6773.12002

Evaluating Survey Quality in Health Services Research: A Decision Framework for Assessing Nonresponse Bias

Abstract: Objective. To address the issue of survey nonresponse and offer appropriate strategies for assessing nonresponse bias. Study Design. A review of current strategies used to assess the quality of survey data, and of the challenges associated with these strategies, is provided along with appropriate post-data-collection techniques that researchers should consider. Principal Findings. Response rates are an incomplete assessment of survey data quality, and quick reactions to response rate should be avoided. Base…
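As a point of reference for why response rate alone is an incomplete quality measure, the standard textbook decomposition from the survey-methodology literature (not a formula quoted from this paper; the notation below is introduced only for illustration) writes the bias of the unadjusted respondent mean as the product of the nonresponse rate and the respondent-nonrespondent difference:

% Standard deterministic decomposition of nonresponse bias for a sample mean.
% Assumed notation: n = selected sample size, m = number of nonrespondents,
% \bar{y}_r = mean among respondents, \bar{y}_m = (unobserved) mean among nonrespondents.
\[
  \operatorname{Bias}(\bar{y}_r) \;\approx\; \frac{m}{n}\,\bigl(\bar{y}_r - \bar{y}_m\bigr)
\]

Read this way, a low response rate produces little bias when respondents and nonrespondents are similar on the variable of interest, while a high response rate does not guarantee low bias when they differ, which is consistent with the paper's caution against relying on response rates as the sole indicator of data quality.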

Cited by 259 publications (247 citation statements); references 37 publications. Citing publications span 2014-2024.
“…However, recent reviews of the survey methodology literature suggest that among probability samples conducted with a standardized process that adheres to typical survey methodology standards (as in the present study), response rates are only weakly associated with nonresponse bias and may not be a strong indicator of survey data quality. [36-39] These limitations notwithstanding, the current analyses fill several key gaps in the literature. First, ours is the first community-based probability sample study to examine the relationship between several domains of sleep and key indicators of QOL in a large, representative sample of women with IC/BPS symptoms.…”
Section: Discussion (mentioning)
confidence: 99%
“…To date, there is no validated method to measure the influence of nonresponse bias on the representativeness of study results. 34,36 However, nonresponse analyses can provide some insight into the likely direction of a possible nonresponse bias. 34 All nonresponse analyses performed by the individual studies suggested that recruited participants were similar or healthier in comparison to nonparticipants.…”
Section: Heterogeneity in Study Populations (mentioning)
confidence: 99%
“…On the other hand, if some service members were unhappy about the way DoD and the Coast Guard handled some specific aspect of health or health care (e.g., quality of mental health treatment) and wanted a mechanism to provide feedback, the results could possibly underestimate the health of the force. Some research suggests that passive nonresponders, that is, those individuals who want to participate but simply forget to complete the survey, cannot access it because of technical problems, or miss the deadline because of competing demands, are more closely aligned with responders than those who actively ignore survey requests (Halbesleben and Whitman, 2013). Nonresponse bias is a common problem in survey research (Brick and Williams, 2013; Halbesleben and Whitman, 2013; Sax, Gilmartin, and Bryant, 2003), and as noted earlier, we took the available steps to remedy this potential problem. Nonetheless, because it is not possible to identify nonrespondents in an anonymous survey, the ability to address nonresponse bias and reasons for nonresponse is limited.…”
Section: Limitations (mentioning)
confidence: 99%