Measurement invariance (MI) is a prerequisite for comparing latent variable scores across groups. The current paper introduces the concept of approximate MI, building on the work of Muthén and Asparouhov and their application of Bayesian Structural Equation Modeling (BSEM) in the software Mplus. They showed that with BSEM, exact zero constraints can be replaced with approximate zeros, allowing minimal steps away from strict MI while still yielding a well-fitting model. This new opportunity enables researchers to make explicit trade-offs between the degree of MI on the one hand and the degree of model fit on the other. Throughout the paper we discuss the topic of approximate MI, followed by an empirical illustration where the test for MI fails but allowing for approximate MI results in a well-fitting model. Using simulated data, we investigate in which situations approximate MI can be applied and when it leads to unbiased results. Both our empirical illustration and the simulation study show that approximate MI outperforms full or partial MI in recovering the true latent mean difference when there are (many) small differences in the intercepts and factor loadings across groups. In the discussion we provide a step-by-step guide indicating which type of MI is preferred in which situation. Our paper provides a first step in the new research area of (partial) approximate MI and shows that it can be a good alternative when strict MI leads to a poorly fitting model and partial MI cannot be applied.
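To make the core idea concrete, here is a minimal sketch of the prior specification in our own notation (the symbols and the prior variance value are illustrative, not taken verbatim from the paper). Exact scalar invariance fixes cross-group differences in item intercepts and loadings to zero; the BSEM approach replaces those exact zeros with zero-mean, small-variance priors:

$$
\nu_{jg} - \nu_{j} \sim \mathcal{N}\left(0, \sigma^{2}\right), \qquad
\lambda_{jg} - \lambda_{j} \sim \mathcal{N}\left(0, \sigma^{2}\right),
$$

where \(\nu_{jg}\) and \(\lambda_{jg}\) are the intercept and loading of item \(j\) in group \(g\). A small prior variance such as \(\sigma^{2} = 0.01\) shrinks the group-specific deviations toward zero without fixing them at exactly zero, which is what permits "minimal steps away" from strict MI while retaining acceptable model fit.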
This article reports on a pilot study conducted in a probability-based online panel in the Netherlands. Two parallel surveys were fielded: one in the traditional questionnaire layout of the panel, and one optimized for mobile completion with new software that uses a responsive design (i.e., it adapts the layout to the chosen device). Respondents in the mobile-optimized condition could choose whether to complete the survey on their mobile phone or on a regular desktop. Results show that a substantial share of respondents (57%) used their mobile phone for survey completion. No differences were found between mobile and desktop users with regard to break-offs, item nonresponse, survey completion time, or response effects such as the length of answers to an open-ended question and the number of responses in a check-all-that-apply question. A considerable number of respondents gave permission to record their GPS coordinates, which help determine where the survey was completed. Income, household size, and household composition were found to predict mobile completion. In addition, younger respondents, who typically form a hard-to-reach group, showed higher mobile completion rates.
Respondents in an Internet panel survey can often choose which device they use to complete questionnaires: a traditional PC, a laptop, a tablet computer, or a smartphone. Because these devices differ in screen size and mode of data entry, measurement errors may differ between devices. Using data from the Dutch LISS panel, we evaluate which devices respondents use over time. We study the measurement error associated with each device and show that measurement errors are larger on tablets and smartphones than on PCs. To gain insight into the causes of these differences, we study changes in measurement error over time associated with a switch of devices across two consecutive waves of the panel. We show that within individuals, measurement errors do not change with a switch in device. We therefore conclude that the higher measurement error on tablets and smartphones is associated with self-selection into using a particular device.
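The device-switch analysis rests on a within-person comparison: if devices caused the extra error, a switch should change a respondent's error score; if self-selection is at work, it should not. Below is a minimal simulation sketch of that logic (variable names and effect sizes are our own illustration, not the authors' code or data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
careless = rng.normal(0, 1, n)            # stable respondent trait
switch = rng.random(n) < 0.2              # switched PC -> smartphone at wave 2
device_effect = 0.0                       # selection-only scenario: no causal effect

# error scores in two consecutive waves
err_w1 = careless + rng.normal(0, 0.5, n)
err_w2 = careless + device_effect * switch + rng.normal(0, 0.5, n)

# first-difference estimate: regress within-person change on the switch indicator
d_err = err_w2 - err_w1
x = switch.astype(float)
beta = np.cov(d_err, x, ddof=1)[0, 1] / np.var(x, ddof=1)
print(f"within-person switch effect: {beta:.3f}")  # ~0 under pure self-selection
```

Setting device_effect to a nonzero value flips the conclusion, which is exactly the contrast the within-person design exploits.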
The growing smartphone penetration and the integration of smartphones into people’s everyday practices offer researchers opportunities to augment survey measurement with smartphone-sensor measurement or to replace self-reports. Potential benefits include lower measurement error, a widening of research questions, collection of in situ data, and lower respondent burden. However, privacy considerations and other concerns may lead to nonparticipation. To date, little is known about the mechanisms behind the general population’s willingness to share sensor data, and no evidence is available concerning the stability of that willingness. The present study focuses on survey respondents’ willingness to share data collected using smartphone sensors (GPS, camera, and wearables) in a probability-based online panel of the general population of the Netherlands. A randomized experiment varied the study sponsor, the framing of the request, the emphasis on control over the data collection process, and the assurance of privacy and confidentiality. Respondents were asked repeatedly about their willingness to share the data collected using smartphone sensors, with varying periods before the second request. Willingness to participate in sensor-based data collection varied by type of sensor, study sponsor, order of the request, respondents’ familiarity with the device, previous experience with participating in research involving smartphone sensors, and privacy concerns. Willingness increased when respondents were asked repeatedly and varied by sensor and task. The timing of the repeated request, one month or six months after the initial request, had no significant effect on willingness.
Missing data form a ubiquitous problem in scientific research, especially since most statistical analyses require complete data. To evaluate the performance of methods for dealing with missing data, researchers perform simulation studies. An important aspect of these studies is the generation of missing values in a simulated, complete data set: the amputation procedure. We investigated the methodological validity and statistical nature of both current amputation practice and a newly developed and implemented multivariate amputation procedure. We found that current practice may not be appropriate for generating intuitive and reliable missing data problems. The multivariate amputation procedure, on the other hand, generates reliable amputations and allows for precise control over the missing data problem. The procedure has additional features to generate any missing data scenario exactly as intended. Hence, the multivariate amputation procedure is an efficient method for accurately evaluating missing data methodology.
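As a compact sketch of what a multivariate amputation step involves (the procedure the abstract refers to is implemented as ampute() in the R package mice; this Python translation, including all names and defaults, is our own illustration, not the authors' code): each case is allocated to one missing-data pattern, and its probability of becoming incomplete depends on a weighted sum score of its own values, here through a right-tailed logistic probability function, i.e., a MAR mechanism.

```python
import numpy as np

def ampute(data, prop=0.3, patterns=None, weights=None, rng=None):
    """Assign each case to a missing-data pattern, then ampute cases with a
    probability driven by a weighted sum score (right-tailed logistic, MAR)."""
    rng = rng or np.random.default_rng()
    n, p = data.shape
    if patterns is None:                      # default: one pattern per variable,
        patterns = 1 - np.eye(p, dtype=int)   # 0 = amputed, 1 = observed
    if weights is None:
        weights = patterns.astype(float)      # MAR: score uses observed vars only
    assignment = rng.integers(len(patterns), size=n)   # allocate cases to patterns
    out = data.astype(float).copy()
    for i, (pattern, w) in enumerate(zip(patterns, weights)):
        cases = np.where(assignment == i)[0]
        scores = data[cases] @ w                             # weighted sum scores
        z = (scores - scores.mean()) / (scores.std() + 1e-9)
        p_miss = 1.0 / (1.0 + np.exp(-z))                    # high score -> missing
        p_miss = np.clip(p_miss * prop / p_miss.mean(), 0, 1)  # hit target proportion
        hit = rng.random(len(cases)) < p_miss
        out[np.ix_(cases[hit], np.where(pattern == 0)[0])] = np.nan
    return out
```

Because the missingness probabilities are computed from the still-complete data, the researcher controls the mechanism, the proportion, and the patterns exactly, which is the precise control the abstract refers to.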
Attrition is the process of dropout from a panel study. Earlier studies of the determinants of attrition compare respondents who are still in the survey with those who have attrited at any given wave of data collection. In many panel surveys, however, the process of attrition is more subtle than being either in or out of the study. Respondents often miss one or more waves but return afterwards, or they start off responding infrequently and respond more often later in the course of the study. With current analytical models, it is difficult to incorporate such response patterns in analyses of attrition. This article shows how to study attrition in a latent class framework, which allows the separation of respondents into different groups that each follow a distinct process of attrition. Classifying attriting respondents enables us to formally test substantive theories of attrition and its effects on data accuracy more effectively.
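As an illustration of the latent class idea (a minimal sketch under our own simplifying assumptions, not the authors' model): code each respondent's participation as a 0/1 indicator per wave and fit a mixture of independent Bernoulli profiles with EM; the estimated class profiles then describe distinct attrition processes, for example consistent stayers versus gradual attriters.

```python
import numpy as np

def lca_em(R, n_classes=3, n_iter=200, seed=0):
    """EM for a latent class model of panel participation.
    R: (n, T) 0/1 matrix of wave-response indicators; each class c has its
    own per-wave response probability theta[c, t] (local independence)."""
    rng = np.random.default_rng(seed)
    n, T = R.shape
    pi = np.full(n_classes, 1.0 / n_classes)           # class proportions
    theta = rng.uniform(0.25, 0.75, (n_classes, T))    # response probabilities
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent
        loglik = (R[:, None, :] * np.log(theta) +
                  (1 - R[:, None, :]) * np.log(1 - theta)).sum(axis=2)
        logpost = np.log(pi) + loglik
        logpost -= logpost.max(axis=1, keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class proportions and response probabilities
        pi = post.mean(axis=0)
        theta = np.clip((post.T @ R) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, post

# toy usage: 500 respondents, 8 waves of simulated participation indicators
R = (np.random.default_rng(2).random((500, 8)) < 0.7).astype(int)
pi, theta, post = lca_em(R)
```

Each respondent's posterior class membership (post) can then be related to covariates or used to compare substantive estimates across attrition classes.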