Numerous instructional design models have been proposed over the past several decades. Instead of focusing on the design process (means), this study investigated how learners perceived the quality of instruction they experienced (ends). An electronic survey instrument containing nine a priori scales was developed. Students responded from 89 different undergraduate and graduate courses at multiple institutions (n = 140). Data analysis indicated strong correlations among student self-reports of academic learning time, how much they learned, their instructors' use of First Principles of Instruction, their satisfaction with the course, their perceived mastery of course objectives, and global course ratings. Most importantly, these scales measure principles with which instructional developers and teachers can evaluate their products and courses, regardless of the design processes used: provide authentic tasks for students to do; activate prior learning; demonstrate what is to be learned; provide repeated opportunities for students to successfully complete authentic tasks with coaching and feedback; and help students integrate what they have learned into their personal lives.
Recent research has touted the benefits of learner-centered instruction, problem-based learning, and a focus on complex learning. Instructors often struggle to put these goals into practice as well as to measure the effectiveness of these new teaching strategies in terms of mastery of course objectives. Enter the course evaluation, often a standardized tool that yields little practical information for an instructor but is nonetheless used in making high-level career decisions, such as tenure and monetary awards to faculty. The present researchers have developed a new instrument to measure teaching and learning quality (TALQ). In a study of 464 students in 12 courses, if students agreed that they experienced academic learning time (ALT) and that their instructors used First Principles of Instruction, then students were nearly 4 times more likely to achieve high levels of mastery of course objectives, according to independent instructor assessments. TALQ can measure improvements in the use of First Principles in teaching and course design. The feedback from this instrument can assist teachers who wish to implement the recommendation made by Kuh et al. (2006) that universities and colleges should focus their assessment efforts on factors that influence student success.
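The "nearly 4 times more likely" finding is the kind of result typically reported as an odds ratio from a 2×2 table (agreement with ALT/First Principles items versus independently assessed mastery). As a minimal sketch of that arithmetic, with invented counts chosen only to illustrate how a ratio near 4 arises (the study's actual counts and modeling are not given in this abstract):

```python
# Hypothetical 2x2 table -- counts are invented for illustration,
# not taken from the TALQ study.
# rows: students who agreed they experienced ALT and First Principles (yes/no)
# cols: achieved high mastery of course objectives (instructor-assessed)
agreed = {"high_mastery": 120, "not_high": 80}
disagreed = {"high_mastery": 40, "not_high": 107}

# Odds of high mastery within each group
odds_agreed = agreed["high_mastery"] / agreed["not_high"]          # 1.5
odds_disagreed = disagreed["high_mastery"] / disagreed["not_high"] # ~0.37

# Odds ratio: how much higher the odds are for the agreeing group
odds_ratio = odds_agreed / odds_disagreed
print(f"odds ratio = {odds_ratio:.2f}")
```

Whether the original analysis used an odds ratio, a risk ratio, or a logistic model is not stated in the abstract; the sketch only shows the simplest computation consistent with the claim.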
Purpose: To assess the value of the Physician Assistant Education Association's End of Curriculum™ exam and of formative and summative exams administered during the physician assistant program in predicting Physician Assistant National Certifying Exam (PANCE) scores. Methods: The value of the End of Curriculum exam, the Physician Assistant Clinical Knowledge Rating and Assessment Tool (PACKRAT I, PACKRAT II), a PANCE simulation (SUMM I), and an Objective Structured Clinical Examination in predicting future PANCE scores was assessed using correlation and regression analysis of data for 27 PA students from one cohort. Results: The End of Curriculum exam, PACKRAT I, PACKRAT II, and SUMM I are statistically significant predictors of PANCE score (p < 0.01). A combination of PACKRAT I and PACKRAT II was the best predictor of PANCE score and explained a large amount of variance (77.0%) in PANCE scores. Conclusion: PAEA's End of Curriculum exam is one of the strongest predictors of PANCE score (r = 0.78). It offers an additional opportunity for programs to provide PA students with another layer of academic advising and to guide their preparation for PANCE.
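The "variance explained" figure above comes from a regression of PANCE score on the two PACKRAT scores. A minimal sketch of that kind of analysis, using synthetic scores (the data below are fabricated for illustration; only the study has the real values):

```python
import numpy as np

# Synthetic, illustrative scores for a 27-student cohort -- NOT the study's data.
rng = np.random.default_rng(0)
n = 27
packrat1 = rng.normal(150, 15, n)
packrat2 = packrat1 + rng.normal(5, 8, n)   # retest tracks the first attempt
pance = 100 + 2.0 * packrat1 + 1.5 * packrat2 + rng.normal(0, 30, n)

# Ordinary least squares: PANCE ~ intercept + PACKRAT I + PACKRAT II
X = np.column_stack([np.ones(n), packrat1, packrat2])
beta, *_ = np.linalg.lstsq(X, pance, rcond=None)

# R^2: proportion of PANCE variance explained by the two predictors,
# analogous to the 77.0% the study reports for its data.
pred = X @ beta
r2 = 1 - np.sum((pance - pred) ** 2) / np.sum((pance - pance.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

With n = 27, an R² this high should be interpreted cautiously; small cohorts inflate explained variance, which is presumably why the abstract frames this as one cohort's result.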
Purpose: Physician Assistant (PA) programs often set minimum GPA and Graduate Record Examination (GRE) requirements for admission, citing that candidates with higher admission scores will perform better in the PA program. However, to date, the few published studies investigating the validity of using these preadmission characteristics to predict performance in PA programs or on the Physician Assistant National Certifying Exam (PANCE) have yielded inconsistent results. The development of a physician assistant college admission test (PA-CAT) with predictive validity for PANCE success would give PA admissions committees an additional resource for decision making. This study was conducted to determine the strength of the relationship between the PA-CAT and undergraduate cumulative and science GPA. Methods: The PA-CAT comprises 180 questions covering 12 subject areas, based on research identifying the relative importance of each subject to success in the PA curriculum. The exam was administered through a secure computer-based testing platform to 479 newly admitted PA students across the United States. Regression analysis was conducted with Rasch scale scores as the dependent variable and two independent variables (undergraduate GPA and undergraduate science GPA). Results: The PA-CAT Rasch scale scores are positively correlated with undergraduate GPA (r = 0.16) and undergraduate science GPA (r = 0.22). Although these correlation coefficients are statistically significant, they are weak (r < 0.3). Conclusion: Early results from this research study demonstrate a statistically significant relationship between the PA-CAT and undergraduate science GPA in newly admitted PA students. Limitations of the study include the fact that students voluntarily took this exam without consequence. Further study is needed to determine whether the exam can be generalized to the entire PA applicant pool, thereby providing a valid instrument for admissions decisions.
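A correlation can be both weak and statistically significant when the sample is large, as with n = 479 here. A minimal sketch of that pattern, using synthetic exam scores and GPAs (invented for illustration, not PA-CAT data) and the standard large-sample t statistic for testing r = 0:

```python
import numpy as np

# Synthetic illustration only -- these are not PA-CAT scores or real GPAs.
rng = np.random.default_rng(1)
n = 479
gpa = np.clip(rng.normal(3.5, 0.3, n), 2.0, 4.0)
exam = 40 * gpa + rng.normal(0, 60, n)   # deliberately weak positive relationship

# Pearson r: covariance of the two variables, normalized by their SDs
r = np.corrcoef(exam, gpa)[0, 1]

# t statistic for H0: r = 0; with n - 2 degrees of freedom, |t| > ~2
# corresponds to significance at roughly the 0.05 level.
t = r * np.sqrt((n - 2) / (1 - r**2))
print(f"r = {r:.2f}, t = {t:.2f}")
```

The point the abstract is making follows directly: an r near 0.2 explains only about 4% of score variance, yet with nearly 500 students the test easily rejects the null of zero correlation.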