List-assisted random digit dialing (RDD) with telephone data collection and address-based sampling (ABS) with mail questionnaires are two survey designs that both yield probability-based inference, yet they differ enough to produce entirely different results. The 2007 Health Information National Trends Survey (HINTS) provides a unique opportunity to evaluate the effect of these designs on a variety of survey estimates and, even more importantly, on individual sources of survey error. Understanding the difference in error structure between the two designs helps survey practitioners select the optimal design and helps data users anticipate which results may be affected and how. We first compared estimates between the two designs and then estimated the different sources of error. Beyond the identified differences in estimates, we found that for some estimates the two designs can yield similar results merely because of similar biases. The error components were quite different between the two designs: while the ABS design yields almost complete coverage of the population compared with the RDD design, it was subject to substantially higher nonresponse bias.
Cannabis legalization has spread rapidly in the United States. Although national surveys provide robust information on the prevalence of cannabis use, cannabis disorders, and related outcomes, information on knowledge, attitudes, and beliefs (KABs) about cannabis is lacking. To inform the relationship between cannabis legalization and cannabis-related KABs, RTI International launched the National Cannabis Climate Survey (NCCS) in 2016. The survey sampled US residents 18 years or older via mail (n = 2,102), mail-to-web (n = 1,046), and two social media data collections (n = 11,957). This report outlines two techniques we used to address several challenges with the resulting data: (1) developing a model for detecting fraudulent cases among social media completes after standard fraud detection measures proved insufficient and (2) designing a weighting scheme to pool multiple probability and nonprobability samples. We also describe our approach for validating the pooled dataset. The fraud prevention and detection processes, the predictive model of fraud, and the methods used to weight the probability and nonprobability samples can be applied to current and future complex data collections and to analysis of existing datasets.
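The abstract above mentions pooling probability and nonprobability samples under a single weighting scheme. A minimal sketch of one common approach is shown below, assuming inverse-propensity pseudo-weights for the nonprobability cases and a simple composite scaling factor; all function names, the `lam` mixing parameter, and the compositing rule are illustrative assumptions, not the NCCS weighting specification.

```python
# Sketch: pooling a probability sample with a nonprobability sample.
# Nonprobability cases get pseudo-weights from estimated inclusion
# propensities; both samples are then rescaled so the pooled weights
# sum to the probability sample's estimate of the population total.

def pseudo_weights(propensities):
    """Inverse of estimated inclusion propensity for nonprobability cases."""
    return [1.0 / p for p in propensities]

def pool_weights(prob_weights, nonprob_weights, lam=0.5):
    """Composite the two samples: the probability sample contributes a
    share lam of the estimated population total, the nonprobability
    sample the remaining (1 - lam)."""
    pop_total = sum(prob_weights)  # design-based population estimate
    prob_scaled = [lam * pop_total * w / sum(prob_weights)
                   for w in prob_weights]
    nonprob_scaled = [(1.0 - lam) * pop_total * w / sum(nonprob_weights)
                      for w in nonprob_weights]
    return prob_scaled, nonprob_scaled

# Example: three mail-sample design weights and three social-media
# pseudo-weights derived from (hypothetical) estimated propensities.
prob_w = [120.0, 95.0, 110.0]
nonprob_w = pseudo_weights([0.002, 0.004, 0.0025])
p_scaled, np_scaled = pool_weights(prob_w, nonprob_w, lam=0.6)
```

The choice of `lam` is a design decision (it can be set by relative effective sample sizes, for example); the sketch only shows the mechanics of keeping the pooled weight sum tied to the probability-based population estimate.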
This PDF document was made available from www.rti.org as a public service of RTI International. More information about RTI Press can be found at http://www.rti.org/rtipress. RTI International is an independent, nonprofit research organization dedicated to improving the human condition by turning knowledge into practice. The RTI Press mission is to disseminate information about RTI research, analytic tools, and technical expertise to a national and international audience. RTI Press publications are peer-reviewed by at least two independent substantive experts and one or more Press editors.

Contents: Abstract; Introduction; Types of Supplementation Procedures; The CHUM Methodology; CHUM Operational Issues; Benefits and Limitations of Using ABS with CHUM; Summary; References; About the Authors.

About the Authors: Bonnie E. Shook-Sa, MAS, is a research statistician at RTI International. Her research focuses on sampling frame development and evaluation, sample design and optimization, and the analysis of complex data for household and establishment surveys. Rachel M. Harter, PhD, is a senior research statistician and program director at RTI. Her areas of interest include household and establishment surveys, area probability survey designs, address-based sampling, imputation, and small area estimation. Joseph McMichael, BS, is a research statistician in RTI's Division for Statistical and Data Sciences. Jamie Ridenhour, MStat, is a research statistician at RTI. Her research interests are sample design, weighting, and methodological challenges associated with address-based sampling and dual-frame random-digit-dial surveys. Jill A. Dever, PhD, is a senior research statistician at RTI.
Her current research interests are variance estimation with calibrated analysis weights for complex survey designs and statistical issues related to samples drawn without a defined probabilistic structure.

Abstract: RTI developed the check for housing units missed (CHUM) methodology to compensate for housing unit undercoverage of address-based sampling (ABS) frames in in-person, area probability surveys. The CHUM systematically identifies housing units missing from the ABS frame, giving each housing unit a chance of selection with known probability. The CHUM offers several advantages over alternative supplementation approaches. Because only a subset of housing units within selected areas must be evaluated, the CHUM is less costly than supplementation techniques that require verifying all addresses within selected areas. Because it is conducted after housing units are selected rather than at the frame-building stage, the CHUM provides timelier frame updates. This paper presents details for designing ABS studies that incorporate the CHUM, appropriately incorporating missed units into area probability samples, and training field personnel to implement the CHUM. It also compares the CHUM with other frame supplementation approaches and discusses the advantages and limitations of each approach.
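The abstract notes that the CHUM gives each missed housing unit a known selection probability. A minimal sketch of how such a probability and its base weight combine is shown below, assuming a two-stage design in which an area segment is selected first and a missed unit is then selected within the checked subset; the function names and the specific probabilities are illustrative assumptions, not RTI's CHUM specification.

```python
# Sketch: the overall inclusion probability of a housing unit found
# through a missed-unit check is the product of the segment's selection
# probability and the conditional selection probability within the
# checked subset. The base design weight is its inverse.

def chum_inclusion_prob(p_segment, p_within_check):
    """Overall inclusion probability for a frame-missed housing unit."""
    return p_segment * p_within_check

def design_weight(p_inclusion):
    """Base weight is the inverse of the inclusion probability."""
    return 1.0 / p_inclusion

# Hypothetical values: a 1-in-100 segment and a 1-in-2 within-check draw.
p = chum_inclusion_prob(p_segment=0.01, p_within_check=0.5)
w = design_weight(p)
```

Because every missed unit's probability is known, missed units can be weighted alongside frame units in the same design-based estimator, which is what distinguishes this approach from ad hoc frame patching.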