2014
DOI: 10.1108/ijpdlm-10-2012-0314

Non-response bias assessment in logistics survey research: use fewer tests?

Abstract: Purpose: The purpose of this paper is to consider the concepts of individual and complete statistical power used for multiple testing and to show their relevance for determining the number of statistical tests to perform when assessing non-response bias. Design/methodology/approach: A statistical power analysis of 55 survey-based research papers published in three prestigious logistics journals (International Journal of Physical Distribution and Logistics Management, Journal of Business Logistics, Transportation…
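The abstract's central idea is that complete power (the probability that every test in a battery detects a real respondent/non-respondent difference) falls as more tests are run. The sketch below is a minimal illustration under simplifying assumptions (independent t-tests, equal effect sizes, hypothetical sample sizes), not the paper's exact procedure:

```python
# Minimal sketch, assuming k independent t-tests with equal effect size:
# complete power (the probability that all k tests detect a true difference
# between respondents and non-respondents) shrinks as more key items are tested.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Hypothetical numbers: 200 early vs. 50 late respondents, a medium
# standardized effect size of 0.5, and alpha = 0.05 per test.
individual_power = analysis.power(effect_size=0.5, nobs1=200, alpha=0.05, ratio=50 / 200)

for k in (1, 2, 5, 10):
    complete_power = individual_power ** k  # independence assumption
    print(f"{k:2d} tests: individual power {individual_power:.2f}, "
          f"complete power {complete_power:.2f}")
```

Because complete power is (roughly) the product of the individual powers, testing fewer key items keeps the overall chance of detecting nonresponse bias higher, which is the motivation behind the "use fewer tests?" question in the title.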

Cited by 49 publications (32 citation statements)
References 31 publications
“…Our results also indicate that a researcher should include as many key items as their sample size and number of nonrespondents to be contacted will allow (see Appendix and Table 6) when assessing for potential unit nonresponse bias with the proposed two‐way MANOVA approach. This is somewhat contrary to the advice given to SCM researchers, in previous studies (Mentzer & Flint, 1997; Clottey & Grawe, 2014), to perform unit nonresponse bias assessment on only two or five key survey items.…”
Section: Results (contrasting)
confidence: 74%
“…This is an intriguing and important finding as it suggests that the inclusion of more survey items, in the assessment of potential nonresponse bias, is desirable in the MANOVA approach. The prevalent advice given to SCM researchers for the number of survey items to include in an assessment for potential unit nonresponse bias is two key items, to maximize complete power (Clottey & Grawe, 2014), or five, which is considered a reasonable number of key items (Mentzer & Flint, 1997). The results of our study suggest that with dyadic data structures, the number of key items to use could be expanded past five due to effect size (ES) considerations.…”
Section: Discussion (mentioning)
confidence: 99%
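The statement above refers to a MANOVA-based approach in which several key survey items are tested jointly for respondent/non-respondent differences. The following is only a rough sketch of such a joint test; the factors `wave` and `respondent_role`, the item names, and the simulated data are hypothetical and not taken from the citing study:

```python
# Hedged sketch of a MANOVA-style nonresponse check (not the citing study's
# exact two-way dyadic design): several key survey items are tested jointly
# against a wave factor (early vs. late) and a second grouping factor.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "wave": rng.choice(["early", "late"], size=n),
    "respondent_role": rng.choice(["buyer", "supplier"], size=n),
    "item1": rng.normal(4.0, 1.0, size=n),  # simulated scale items
    "item2": rng.normal(3.8, 1.1, size=n),
    "item3": rng.normal(4.2, 0.9, size=n),
})

# Joint test of all key items against both factors and their interaction.
fit = MANOVA.from_formula("item1 + item2 + item3 ~ wave * respondent_role", data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each effect
```

Because all items enter a single multivariate test, adding items changes the effect-size and power picture differently than running one univariate test per item, which is the point the citing authors raise.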
“…For surveys administered by e-mail, it is recommended to test for differences between the responses of early respondents and those of late respondents or non-respondents (Armstrong and Overton, 1977; Clottey and Grawe, 2014; Rogelberg and Stanton, 2007). To verify whether the answers obtained presented significant differences in item averages, a t-test for independent samples was applied to the first 247 respondents, considered early (Group 1), and to the last 73, considered late (Group 2).…”
Section: Non-response Bias (mentioning)
confidence: 99%
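The early-versus-late comparison quoted above (first 247 respondents vs. last 73) can be sketched as follows. The DataFrame layout, the `submitted_at` column, and the item names are assumptions for illustration, and Welch's unequal-variance t-test is used here rather than whatever exact variant the citing authors applied:

```python
# Sketch of an early-vs-late respondent comparison on item averages.
import pandas as pd
from scipy import stats

def early_vs_late_ttests(responses: pd.DataFrame, item_cols, n_early=247):
    """Independent-samples t-test per item: early responders vs. the rest."""
    responses = responses.sort_values("submitted_at")  # assumed timestamp column
    early, late = responses.iloc[:n_early], responses.iloc[n_early:]
    results = {}
    for col in item_cols:
        t, p = stats.ttest_ind(early[col], late[col], equal_var=False)  # Welch's test
        results[col] = {"t": t, "p": p}
    return pd.DataFrame(results).T

# Example (hypothetical data): early_vs_late_ttests(survey_df, ["item_1", "item_2", "item_3"])
```

Non-significant differences across the items are then taken as evidence that late respondents (a proxy for non-respondents) do not differ systematically from early ones.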
“…To assess non-response bias, we compared early and late responses (Armstrong and Overton, 1977; Rogelberg and Stanton, 2007; Clottey and Grawe, 2014). Data from individuals who completed the survey between the initial contact and the second contact were compared against the data from the individuals who completed the survey between the second contact and the time the survey was closed.…”
Section: Data Preparation (mentioning)
confidence: 99%
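The wave-based split described above (responses received between the first and second contact versus those received after the second contact) could be implemented along these lines; the field names and the cutoff date are hypothetical:

```python
# Sketch of a contact-wave split for nonresponse bias checks.
import pandas as pd
from scipy import stats

def wave_split_test(responses: pd.DataFrame, item_cols, second_contact_date):
    """Compare wave 1 (before the reminder) with wave 2 (after it) per item."""
    wave1 = responses[responses["submitted_at"] < second_contact_date]
    wave2 = responses[responses["submitted_at"] >= second_contact_date]
    rows = []
    for col in item_cols:
        t, p = stats.ttest_ind(wave1[col], wave2[col], equal_var=False)
        rows.append({"item": col, "t": t, "p": p})
    return pd.DataFrame(rows)

# Example (hypothetical data): wave_split_test(survey_df, ["item_1", "item_2"], pd.Timestamp("2023-03-15"))
```

The only difference from the previous sketch is the splitting rule: waves are defined by the contact dates rather than by a fixed count of early responders.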