We propose a constitutive model to describe the nonlocality, hysteresis, and several flow features of dry granular materials. Taking the well-known inertial number I as a measure of shear-induced local fluidization, we derive a relaxation model for I according to the evolution of the microstructure during avalanche and dissipation processes. For a homogeneous flow, the model yields a nonmonotonic flow law, accounting for the hysteretic solid-fluid transition and intermittency in quasistatic flows. For an inhomogeneous flow, the model predicts a generalized Bagnold shear stress revealing the interplay of two microscopic nonlocal mechanisms: collisions among correlated structures and the diffusion of fluidization within the structures. In describing a uniform flow down an incline, the model reproduces the hysteretic starting and stopping heights and the Pouliquen flow rule for the mean velocity. Moreover, a dimensionless parameter reflecting the nonlocal effect on the flow is identified, which controls the transition between Bagnold and creeping flow dynamics.
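For reference, the inertial number I invoked above is conventionally defined in the granular rheology literature from the shear rate, grain diameter, confining pressure, and grain density (this is the standard definition, not a formula specific to the proposed model):

```latex
I = \frac{\dot{\gamma}\, d}{\sqrt{P/\rho_s}},
```

where $\dot{\gamma}$ is the shear rate, $d$ the grain diameter, $P$ the confining pressure, and $\rho_s$ the grain density. Small $I$ corresponds to the quasistatic regime; $I$ of order unity corresponds to the inertial (collisional) regime.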
Measurement invariance is a prerequisite for comparing measurement scores across groups. In medical education, multi-source feedback (MSF) is used to assess core competencies, including professionalism. However, little attention has been paid to the measurement invariance of assessment instruments, that is, whether an instrument holds the same meaning across different rater groups. This study examined the measurement invariance of the National Taiwan University professionalism MSF (NTU P-MSF) to determine whether medical students' self-ratings can be compared with their peers' ratings. An eight-factor model was specified for confirmatory factor analysis to examine the construct validity of the NTU P-MSF. Cronbach's alpha was computed for the items of each domain to evaluate internal consistency reliability. The same eight-factor model was used for multi-group confirmatory factor analyses. Four hierarchical models were specified to test configural (i.e., identical factor-item relationships), metric (i.e., identical factor loadings), scalar (i.e., identical intercepts), and error-variance invariance across the self-rating and peer-rating groups. One hundred and twenty second-year medical students from weekly discussion groups, conducted as part of a medical professionalism course, agreed to use the NTU P-MSF to assess themselves or their discussion-group peers. NTU P-MSF scores fit the eight-factor model well in both the self-rating and peer-rating groups. The Cronbach's alpha coefficients of students' self-rated NTU P-MSF scores and peer-rated scores ranged from 0.76 to 0.89 and from 0.84 to 0.91, respectively, indicating that the NTU P-MSF scores also have good internal consistency reliability in both groups. In addition, the same factor structure and similar factor loadings and intercepts of NTU P-MSF scores between the two groups indicate that the scores had configural, metric, and scalar invariance.
Thus, students' self-assessments and peer assessments can be compared in terms of the constructs underlying NTU P-MSF scores, changes in NTU P-MSF scores, and their factor scores. This study demonstrates how to investigate the measurement invariance of a professionalism MSF and contributes to the discussion on self- and peer assessment in medical education.
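As a concrete illustration of the internal-consistency statistic used above, here is a minimal sketch of Cronbach's alpha in Python (NumPy assumed available; the function and sample data are illustrative, not taken from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Perfectly consistent items (each respondent gives identical answers across items) yield alpha = 1, which is a quick sanity check for the formula.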
The findings of the present study provide strong evidence in support of the reliability and validity of the MOS-HIV health survey for the assessment of quality of life among HIV-infected patients in Taiwan. We find that the original factor structure of the MOS-HIV survey remains valid for patients from Chinese cultural backgrounds. This study therefore contributes to the extant literature on the cultural relevance of the MOS-HIV health survey (a measure originally developed within a Western culture) as a valid measure for cross-cultural comparative studies on health-related quality of life.
Background: No evidence addresses the effectiveness of patient-centered cultural competence training in non-Western settings.
This corrects the article DOI: 10.1103/PhysRevE.96.062909.
ability to recognise appropriate methods for action and risk assessment. Based on these findings, this course content was effective in significantly improving student awareness of patient safety culture and in developing the skills necessary to break the cycle of medical error. Context and setting: Since the Liaison Committee on Medical Education has required that all medical schools in the USA offer cultural competency training, several surveys have been developed to measure cultural competency. However, these cultural competency measurements have neither been compared nor applied in a non-English-speaking setting. This report presents the psychometrics of 3 instruments tested in a Taiwanese medical school, where a cultural competency curriculum was recently introduced. Why the idea was necessary: Due to globalisation, the ethnic make-up of Taiwan's population is becoming increasingly diverse. There is an urgent need for cultural competency training in Taiwan, and reliable and valid assessment tools are essential to evaluate its effectiveness. What was done: In May 2006 we recruited 90% (237/262) of our Year 3 and 4 medical students to fill out a survey containing the Inventory for Assessing the Process of Cultural Competence among Healthcare Professionals-Revised (IAPCC-R), which has 5 subscales (cultural awareness, cultural knowledge, cultural skill, cultural encounter and cultural desire); the California Brief Multicultural Competence Scale (CBMCS), which has 4 subscales (multicultural knowledge, awareness of cultural barriers, sensitivity to consumers, and sociocultural diversities); and an instrument designed to measure the preparedness of US residents to deliver cross-cultural care (CCC), which has 2 subscales (self-reported preparedness and self-reported skill levels). Within 3 weeks, 78 students volunteered to take the retests. These instruments were chosen because they are reported to have good psychometric properties with large testing sample sizes.
We analysed the data with SAS 9.1. Evaluation of results and impact: The values of Cronbach's alpha for the internal consistency coefficient ranged from 0.06 to 0.57 for the IAPCC-R subscales and from 0.76 to 0.92 for the CBMCS subscales, and were 0.95 and 0.96 for the CCC instrument's subscales. These results indicate that the IAPCC-R does not have good internal consistency reliability, whereas the CBMCS and the CCC instrument do. Regarding the results of the paired t-tests and the test-retest correlation coefficients, the IAPCC-R and the CBMCS scales do not have good test-retest reliability, whereas the CCC instrument does. The Kaiser-Meyer-Olkin measure of sampling adequacy was computed to examine construct validity, and the results indicated that factor analysis was appropriate for all 3 scales. We used parallel analysis to determine the number of factors. Exploratory factor analysis with iterated principal factor extraction and Promax oblique rotation showed that the IAPCC-R does not have an identifiable factor structure. The 2 factors of the CBMCS (correlation coefficient ...
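The parallel-analysis step mentioned above, retaining factors whose eigenvalues exceed those expected from purely random data, can be sketched as follows. This is a generic Horn's parallel analysis in Python/NumPy, not the authors' SAS implementation:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain factors whose observed correlation-matrix
    eigenvalues exceed the mean eigenvalues obtained from random normal data
    of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending.
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Average eigenvalues over many random datasets of identical shape.
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig += np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    rand_eig /= n_iter
    return int(np.sum(obs_eig > rand_eig))
```

On data generated from two strong latent factors, this procedure recovers two factors, matching the intuition behind its use for deciding how many factors to extract before an exploratory factor analysis.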
resulting in a pass rate of 51%. Scores ranged from 34.8% to 70.5%. Overall, 80% of students considered the OSCE to be a more objective clinical assessment tool, most appreciated the time-efficiency of the process, and all respondents were in favour of adopting the OSCE as a permanent tool. Future plans in the Department of Internal Medicine include the adoption of the OSCE in Years 4 and 6. In addition, the Departments of Psychiatry and Preventive Dentistry are investigating possible uses of the OSCE model. Can cultural competency self-assessment predict OSCE performance? Ming-Jung Ho, Keng-Lin Lee & Alexander R Green. Context and setting: Although cultural competency training is required by various accreditation bodies in medical education, there is no agreement on how best to evaluate such training programmes. According to systematic reviews, most studies evaluating the cultural competency training of health professionals use self-assessments. Few studies have employed objective structured clinical examinations (OSCEs), and no study has reported the relationship between self-assessments and OSCEs in evaluating cultural competency training. Why the idea was necessary: The OSCE is generally regarded as a better method of measuring competency in clinical skills than self-assessment. However, conducting an OSCE is so much more demanding than administering a self-efficacy survey that the latter is often chosen as the means to measure cultural competency skills. This study addresses the unanswered question of whether self-assessment can substitute for the OSCE in measuring cross-cultural communication skills. What was done: We recruited 57 Year 5 students at our medical school to participate in the study between January 2006 and June 2006.
Students filled out a survey containing the Inventory for Assessing the Process of Cultural Competence among Healthcare Professionals-Revised (IAPCC-R), which has 5 subscales; the California Brief Multicultural Competence Scale (CBMCS), which has 4 subscales; and a survey designed to measure US residents' preparedness to deliver cross-cultural care (CCC), which has 2 subscales. The Cronbach's alpha coefficients for the 3 scales were 0.71, 0.84 and 0.92, respectively. All students were then evaluated with an objective structured clinical examination (OSCE) which tested their ability to explore sociocultural factors influencing a standardised patient's adherence to chronic disease treatment. Multiple regressions were conducted to predict the patient perspectives (PP) and social factors (SF) subscales of the OSCE. All 11 subscales of the IAPCC-R, CBMCS and CCC were entered with the stepwise method. We analysed the data using SAS Version 9.1. Evaluation of results and impact: In the PP regression model, PP was predicted by the 'skill' subscale of the IAPCC-R and the 'awareness' subscale of the CBMCS. The overall model test was significant (F[2,56] = 4.59, P = 0.0145, R² = 0.15) and the parameter estimates were significant at the 0.05 level. The estimated model was PP = 1.32 - 0.18 × skill + 0.19 × ...
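The multiple-regression models reported above take the generic ordinary-least-squares form. A minimal sketch in Python/NumPy (the variable names and data below are illustrative, not the study's actual subscale scores):

```python
import numpy as np

def ols(X, y):
    """Fit y = b0 + b1*x1 + ... + bk*xk by least squares.

    Returns (coefficients including intercept, R^2)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])      # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares solution
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2
```

Feeding in predictor scores and an outcome yields an intercept and slope per predictor, which is the form of the fitted equation quoted in the abstract (e.g. PP = intercept + b1 × skill + b2 × awareness).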