Policy Pearls

Abstract

Obtaining sufficient survey responses to make course and instructor evaluation results meaningful is a challenge in many, if not most, health professions training programs. This paper describes a series of policy changes that significantly improved data quality at one college of veterinary medicine in the United States. The steps consisted of minimizing the number of items appearing on the instruments, providing students adequate time and space for completion, clearly explaining the purpose and value of the evaluations, simplifying data collection, collecting verbal feedback, and closing the l...

Introduction

Students' perspectives on their instructors and courses (variously called student ratings of instruction, student ratings of teaching, or student evaluations of teaching; here referred to as SETs) remain a topic of extensive research and debate. [1-4] The use of SETs is expanding globally, with increasing value placed on the data for promotion, tenure, and retention decisions. [5] While SETs are not infallible and should not be used in isolation, most agree that student input is essential for evaluating teaching and course effectiveness. [6-8] Many suggest that the problems associated with these surveys have only small effects on the final score as long as enough information is collected and interpretation is appropriate. [7-9] An adequate response rate for the population is a major key to valid results, minimizing the risk of sampling error and biased results. [8,10,11]

SETs are required at the University of Minnesota by university policy as well as by departmental and collegiate promotion and tenure guidelines. Each semester, students are asked to rate each of their courses and instructors. For many years, course evaluation forms included nine separate criteria and instructor evaluation forms included twelve, with comments requested for each category. For our class sizes (approximately 100 students per cohort) and survey design, a response rate above 50% was calculated as necessary for reasonable score validity when the average score variability (SD) was ≤1.0. [10,11] For many years, however, response rates were so low that score validity was in question. We attempted multiple methods to improve response rates, including using forms with fewer questions, requesting surveys only when an instructor taught more than three sessions in a course, opening surveys mid-semester for earlier input, offering rewards for high participation rates, holding prize drawings, and attempting to withhold grades until surveys were returned.

Despite these efforts, response rates continued to decline. Without an obvious alternative, however, the data were still used by the curriculum committee for course decision-making and were considered for promotion and salary decisions by departments. In recent years, we changed our practices and significantly improved both course and instructor evaluation response rates and data quality in a time-efficient manner. The purpose of this article is to describe this model to help educators at other institutions.
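The >50% threshold for a ~100-student cohort can be reproduced with a standard sample-size calculation for estimating a mean rating, using a finite population correction. The sketch below is illustrative only: the function name, the ±0.2 margin of error, and the 95% confidence level (z = 1.96) are assumptions chosen to match the figures quoted above, not details taken from references [10,11].

```python
import math

def required_responses(class_size: int, sd: float = 1.0,
                       margin: float = 0.2, z: float = 1.96) -> int:
    """Responses needed to estimate a mean rating within +/- margin,
    at the confidence level implied by z, for a cohort of class_size.

    sd is the assumed standard deviation of the ratings (the article
    cites SD <= 1.0); the finite population correction reduces the
    requirement because the cohort itself is small.
    """
    n0 = (z * sd / margin) ** 2            # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / class_size)   # finite population correction
    return math.ceil(n)
```

With the assumed parameters, a 100-student cohort with SD = 1.0 requires 50 responses, i.e. the 50% response rate cited in the text; a tighter score distribution (smaller SD) lowers the requirement.

```python
required_responses(100, sd=1.0)  # -> 50, i.e. a 50% response rate
required_responses(100, sd=0.5)  # -> 20, i.e. a 20% response rate
```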