Universities worldwide are pausing in-person instruction in an attempt to contain the spread of COVID-19. In February 2020, universities in China took the lead, cancelling all in-person classes and switching to virtual classrooms, and a wave of other institutions worldwide followed suit. The shift to online platforms poses serious challenges to medical education, so understanding the best practices shared by pilot institutions may help medical educators improve their teaching. We provide 12 tips highlighting strategies intended to help on-site medical classes move completely online during the pandemic. We collected 'best practices' reports from 40 medical schools in China that were submitted to the National Centre for Health Professions Education Development. An expert review-to-summary cycle was used to finalize the best practices for teaching medical students online that can most benefit peer institutions under the unprecedented circumstances of the COVID-19 outbreak. The 12 tips presented offer specific strategies to optimize teaching medical students online under COVID-19, highlighting tech-based pedagogy, counselling, motivation, and ethics, as well as assessment and modification. Learning from the experiences shared by pilot medical schools, properly customized, can help ensure a successful transition to e-learning.
Despite their increasing popularity, cognitive diagnosis models have been criticized for their limited utility in small samples. In this study, the authors proposed using Bayes modal (BM) estimation and monotonic constraints to stabilize item parameter estimation and facilitate person classification in small samples based on the generalized deterministic input noisy "and" gate (G-DINA) model. Both a simulation study and a real data analysis were used to assess the utility of BM estimation and monotonic constraints. Results showed that in small samples, (a) the G-DINA model with BM estimation is more likely to converge successfully, (b) when prior distributions are specified reasonably and monotonicity is not violated, BM estimation with monotonicity tends to produce more stable item parameter estimates and more accurate person classification, and (c) the G-DINA model using BM estimation with monotonicity is less likely to overfit the data and shows higher predictive power.
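The monotonicity constraint in question requires that mastering an additional measured attribute can never lower an item's success probability. A minimal Python sketch (the function names and the two-attribute item parameters below are hypothetical, chosen only for illustration) of an identity-link G-DINA item response function together with a brute-force monotonicity check:

```python
from itertools import product

def gdina_prob(alpha, delta):
    # G-DINA success probability under the identity link: the intercept
    # plus the effects of all attribute subsets the examinee has mastered.
    mastered = {k for k, a in enumerate(alpha) if a == 1}
    p = delta.get(frozenset(), 0.0)  # intercept (guessing) term
    for subset, effect in delta.items():
        if subset and subset <= mastered:
            p += effect
    return p

def is_monotonic(delta, n_attrs):
    # Check that flipping any attribute from non-mastered to mastered
    # never decreases the success probability, over all latent patterns.
    for alpha in product([0, 1], repeat=n_attrs):
        base = gdina_prob(alpha, delta)
        for k in range(n_attrs):
            if alpha[k] == 0:
                flipped = list(alpha)
                flipped[k] = 1
                if gdina_prob(tuple(flipped), delta) < base - 1e-12:
                    return False
    return True

# Hypothetical two-attribute item: guessing .2, main effects .3 and .2,
# interaction .1, so P ranges from .2 (no mastery) to .8 (full mastery).
delta = {frozenset(): 0.2, frozenset({0}): 0.3,
         frozenset({1}): 0.2, frozenset({0, 1}): 0.1}
```

Imposing such a constraint during estimation shrinks the parameter space, which is one intuition for why it stabilizes estimates when the sample is small.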
Extending classical test theory, G theory allows more sources of variation to be investigated and therefore quantifies the accuracy of generalizing observed scores to a broader universe. However, G theory has seen limited use, owing to the absence of analytic facilities for it in popular statistical software packages. Moreover, G theory is rarely introduced systematically in the context of linear mixed-effects models, a widely taught technique in statistical analysis curricula. The present paper casts G theory as a linear mixed-effects model and estimates the variance components via the well-known lme4 package in R. Concrete examples, modeling procedures, and R syntax are illustrated so that practitioners may apply G theory in their own studies. Realizing G theory estimation in R provides more flexible features than other platforms, so users need not rely on specialized software such as GENOVA and urGENOVA.
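Once the variance components have been estimated (e.g., from lme4's VarCorr output for a model such as score ~ 1 + (1 | person) + (1 | item)), the classical G-theory coefficients are simple ratios. A sketch in Python for a one-facet crossed p × i design (the function names and the numeric values in the tests are hypothetical, for illustration only):

```python
def g_coefficient(var_p, var_pi_e, n_i):
    # Generalizability coefficient (E-rho^2) for relative decisions:
    # person variance over person variance plus relative error, where
    # the residual sigma^2_{pi,e} is averaged over the n_i items.
    return var_p / (var_p + var_pi_e / n_i)

def phi_coefficient(var_p, var_i, var_pi_e, n_i):
    # Dependability coefficient (Phi) for absolute decisions:
    # absolute error additionally includes the item main-effect variance.
    return var_p / (var_p + (var_i + var_pi_e) / n_i)
```

Because Phi charges the item main effect to the error term, it can never exceed E-rho^2 for the same components, which is a quick sanity check on any implementation.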
In many behavioral research areas, multivariate generalizability theory (mG theory) has typically been used to investigate the reliability of multidimensional assessments. However, traditional mG-theory estimation, namely via frequentist approaches, has limits that prevent researchers from taking full advantage of the information mG theory can offer about the reliability of measurements. Alternatively, Bayesian methods provide more information than frequentist approaches can offer. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples that are closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.
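To give a flavor of the approach, here is a minimal sketch (not the article's own code) of what a BUGS model for the simplest case, a single-facet crossed p × i design, can look like, with normal priors on the random effects and vague gamma priors on the precisions:

```
model {
  for (p in 1:P) {
    for (i in 1:I) {
      X[p, i] ~ dnorm(mu[p, i], tau.e)
      mu[p, i] <- grand.mean + person[p] + item[i]
    }
    person[p] ~ dnorm(0, tau.p)
  }
  for (i in 1:I) { item[i] ~ dnorm(0, tau.i) }
  grand.mean ~ dnorm(0, 1.0E-4)
  tau.p ~ dgamma(0.001, 0.001)   # precision of person effects
  tau.i ~ dgamma(0.001, 0.001)   # precision of item effects
  tau.e ~ dgamma(0.001, 0.001)   # residual precision
  sigma2.p <- 1 / tau.p          # variance components of interest
  sigma2.i <- 1 / tau.i
  sigma2.e <- 1 / tau.e
}
```

The extra information the Bayesian approach buys is visible here: the posterior draws of sigma2.p, sigma2.i, and sigma2.e yield full credible intervals for the variance components, and for any G or dependability coefficient computed from them, rather than point estimates alone.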
This study investigates the performance of robust maximum likelihood (ML) estimators when fitting and evaluating small-sample latent growth models with non-normal missing data. Results showed that the robust ML methods could be used to account for non-normality even when the sample size is very small (e.g., N < 100). Among the robust ML estimators, "MLR" was the optimal choice, as it was robust to both non-normality and missing data while also yielding more accurate standard error estimates and better growth parameter coverage. However, "MLMV" produced the most accurate p values for the χ2 test statistic under the conditions studied. Regarding the goodness-of-fit indices, as sample size decreased, all three fit indices studied (i.e., the comparative fit index, the root mean square error of approximation, and the standardized root mean square residual) indicated worse fit. When the sample size was very small (e.g., N < 60), the fit indices could imply that a proposed model fits poorly when this is not actually the case in the population.
The Bayesian literature has shown that the Hamiltonian Monte Carlo (HMC) algorithm is powerful and efficient for statistical model estimation, especially for complicated models. Stan, a software program built upon HMC, has been introduced as a means of psychometric model estimation. However, there are no systematic guidelines for implementing Stan with the log-linear cognitive diagnosis model (LCDM), which is the saturated version of many cognitive diagnostic model (CDM) variants. This article bridges the gap between Stan and Bayesian LCDM estimation: both the modeling procedures and the Stan code are demonstrated in detail, so that the strategy can be extended to other CDMs straightforwardly.
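As an illustration (not the article's own code), a minimal Stan sketch of an LCDM for the simplest case of a single attribute, i.e., two latent classes, might look as follows; because HMC cannot sample discrete parameters, class membership is marginalized out of the likelihood:

```
data {
  int<lower=1> N;                       // respondents
  int<lower=1> J;                       // items
  array[N, J] int<lower=0, upper=1> Y;  // binary responses
}
parameters {
  simplex[2] nu;               // class proportions (non-master, master)
  vector[J] lambda0;           // item intercepts
  vector<lower=0>[J] lambda1;  // main effects, constrained positive
}
model {
  lambda0 ~ normal(0, 2);
  lambda1 ~ normal(0, 2);
  for (n in 1:N) {
    vector[2] lp = log(nu);  // mixture over the two latent classes
    for (c in 1:2)
      for (j in 1:J)
        lp[c] += bernoulli_logit_lpmf(Y[n, j] | lambda0[j] + lambda1[j] * (c - 1));
    target += log_sum_exp(lp);
  }
}
```

The positivity constraint on lambda1 reflects the LCDM's monotonicity requirement that masters outperform non-masters. With K attributes, the two classes generalize to 2^K attribute profiles and lp becomes a vector over all profiles, but the marginalization pattern stays the same.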