High data quality is an important prerequisite for sound empirical research. Meade and Craig (2012) and Huang, Curran, Keeney, Poposki, and DeShon (2012) discussed methods to detect unmotivated or careless respondents in large web-based questionnaires. We first discuss these methods and present multi-test extensions of person-fit statistics as alternatives. Second, we applied these methods to data collected through a web-based questionnaire in which some respondents received instructions to respond quickly, which can result in more careless responding. In addition, we conducted a simulation study. We compared the sensitivity and specificity of the different methods and concluded that multi-test extensions of person-fit statistics are a good alternative to the other methods, although their sensitivity for detecting careless respondents was lower in the empirical data than in the simulated data.
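Among the screening indices discussed in this literature is the longstring statistic: the length of the longest run of identical consecutive answers, which tends to be large for straight-lining, careless respondents. The following is a minimal Python sketch of that idea, for illustration only; it is not the authors' implementation, and the flagging threshold would in practice depend on the questionnaire.

```python
def longstring(responses):
    """Length of the longest run of identical consecutive answers.

    Very long runs (e.g., the same Likert option chosen many times
    in a row) can indicate careless, straight-line responding.
    """
    if not responses:
        return 0
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best
```

A respondent's vector such as `[3, 3, 3, 3, 3, 2, 4]` yields a longstring of 5, whereas a varied vector yields a small value; flagged respondents can then be inspected or removed before analysis.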
Talent identification research in soccer comprises the prediction of elite soccer performance. While many studies in this field have aimed to empirically relate performance characteristics to subsequent soccer success, a critical evaluation of the methodology of these studies has mostly been absent from the literature. In this position paper, we discuss advantages and limitations of the design, validity, and utility of current soccer talent identification research. Specifically, we draw on principles from selection psychology that can contribute to best practices in the context of making selection decisions across domains. Based on an extensive search of the soccer literature, we identify four methodological issues from this framework that are relevant for talent identification research: (1) the operationalization of criterion variables (the performance to be predicted) as performance levels; (2) the focus on isolated performance indicators as predictors of soccer performance; (3) the effects of range restriction on the predictive validity of predictors used in talent identification; and (4) the effect of the base rate on the utility of talent identification procedures. Based on these four issues, we highlight opportunities and challenges for future soccer talent identification studies that may contribute to developing evidence-based selection procedures. We suggest that future research consider the use of individual soccer criterion measures, adopt representative, high-fidelity predictors of soccer performance, and take restriction of range and the base rate into account.
The selection of athletes has been a central topic in sports sciences for decades. Yet, little consideration has been given to the theoretical underpinnings and predictive validity of the procedures. In this paper, we evaluate current selection procedures in sports in light of what we know from the selection psychology literature. We contrast the popular clinical method (predictions based on overall impressions of experts) with the actuarial approach (predictions based on pre-defined decision rules), and we discuss why the latter approach often leads to superior performance predictions. Furthermore, we discuss the "signs" and the "samples" approaches. Under the prevailing signs approach, athletes' technical, tactical, physical, and psychological skills are often assessed separately in controlled settings. However, for predicting later sport performance, taking samples of athletes' behaviours in their sports environment may result in more valid assessments. We discuss the possible advantages and implications of making selection procedures in sports more actuarial and sample-based.
We studied the validity of two methods for predicting academic performance and student-program fit that were proximal to important study criteria. Applicants to an undergraduate psychology program participated in a selection procedure containing a trial-studying test based on a work sample approach, and specific skills tests in English and math. Test scores were used to predict academic achievement and progress after the first year, achievement in specific course types, enrollment, and dropout after the first year. All tests showed significant positive correlations with the criteria. The trial-studying test was consistently the best predictor in the admission procedure. We found no significant differences between the predictive validity of the trial-studying test and prior educational performance, and substantial shared explained variance between the two predictors. Only applicants with lower trial-studying scores were significantly less likely to enroll in the program. In conclusion, the trial-studying test yielded predictive validities similar to those of prior educational performance and possibly enabled self-selection. In admissions aimed at student-program fit, or in admissions in which past educational performance is difficult to use, a trial-studying test is a good instrument for predicting academic performance.
Checking the validity of test scores is important in both educational and psychological measurement. Person-fit analysis provides several statistics that help practitioners assess whether individual item score vectors conform to a prespecified item response theory model or, alternatively, to a group of test takers. Software enabling easy access to most person-fit statistics was lacking until now; the PerFit R package was written to fill this void. A theoretical overview of relatively simple person-fit statistics is provided, along with a practical guide showing how the main functions of PerFit can be used. Both numerical and graphical tools are described and illustrated using examples. The goal is to show how person-fit statistics can be easily applied to test and questionnaire data.
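One of the simpler person-fit statistics in this family counts Guttman errors: item pairs in which an easier item is failed while a harder item is passed, which a well-fitting respondent should rarely produce. The sketch below illustrates the idea in Python under the assumption of dichotomous (0/1) item scores with item difficulty summarized by proportion correct; it is an illustration of the general technique, not the PerFit API.

```python
def guttman_errors(scores, p_correct):
    """Count Guttman errors for one respondent's 0/1 item scores.

    A Guttman error is a pair of items where the easier item
    (higher proportion correct in the group) is answered incorrectly
    while the harder item is answered correctly. High counts flag
    aberrant response vectors.
    """
    # Reorder this respondent's scores from easiest to hardest item.
    x = [s for _, s in sorted(zip(p_correct, scores), key=lambda t: -t[0])]
    return sum(1 for i in range(len(x)) for j in range(i + 1, len(x))
               if x[i] == 0 and x[j] == 1)

def normed_guttman(scores, p_correct):
    """Normalize by the maximum possible count r * (n - r), where r is
    the total score over n items, yielding a value in [0, 1]."""
    n, r = len(scores), sum(scores)
    max_g = r * (n - r)
    return guttman_errors(scores, p_correct) / max_g if max_g else 0.0
```

A respondent who passes the two easiest of four items produces zero errors, while one who passes only the two hardest items attains the maximum normed value of 1.0; respondents with large values can then be examined more closely.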