Purpose Despite increasing awareness of the importance of leadership in healthcare, our understanding of the competencies of effective leadership remains limited. We used a concept mapping approach (a blend of qualitative and quantitative analysis of group processes to produce a visual composite of the group’s ideas) to identify stakeholders’ mental model of effective healthcare leadership, clarifying the underlying structure and importance of leadership competencies. Methods Literature review, focus groups, and consensus meetings were used to derive a representative set of healthcare leadership competency statements. Study participants subsequently sorted and rank-ordered these statements based on their perceived importance in contributing to effective healthcare leadership in real-world settings. Hierarchical cluster analysis of individual sortings was used to develop a coherent model of effective leadership in healthcare. Results A diverse group of 92 faculty and trainees individually rank-sorted 33 leadership competency statements. The highest-rated statements were “Acting with Personal Integrity”, “Communicating Effectively”, “Acting with Professional Ethical Values”, “Pursuing Excellence”, “Building and Maintaining Relationships”, and “Thinking Critically”. Combining the results from hierarchical cluster analysis with our qualitative data led to a healthcare leadership model based on the core principle of Patient Centeredness and the core competencies of Integrity, Teamwork, Critical Thinking, Emotional Intelligence, and Selfless Service. Conclusion Using a mixed qualitative-quantitative approach, we developed a graphical representation of a shared leadership model derived in the healthcare setting. This model may enhance learning, teaching, and patient care in this important area, as well as guide future research.
These findings support the assessment of TFA to understand how this personal characteristic may interact with the medical school experience and with specialty choice. Longitudinal work in this area will be critical to increase this understanding.
Purpose The authors conducted this scoping review to (1) provide a comprehensive evaluation and summation of published literature reporting on interprofessional substance use disorder (SUD) education for students in health professions education programs and (2) appraise the research quality and outcomes of interprofessional SUD education studies. Their goals were to inform health professions educators of interventions that may be useful to consider as they create their own interprofessional SUD courses and to identify areas of improvement for education and research. Method The authors searched 3 Ovid MEDLINE databases (MEDLINE, In-Process & Other Non-Indexed Citations, and Epub Ahead of Print), Embase.com, ERIC via FirstSearch, and Clarivate Analytics Web of Science from inception through December 7, 2018. The authors used the Medical Education Research Study Quality Instrument (MERSQI) to assess included studies’ quality. Results The authors screened 1,402 unique articles, and 14 met inclusion criteria. Publications dated from 2014 to 2018. Ten (71%) included students from at least 3 health professions education programs. The mean MERSQI score was 10.64 (SD = 1.73) (range, 7.5–15). Interventions varied by study, and topics included general substance use (n = 4, 29%), tobacco (n = 4, 29%), alcohol (n = 3, 21%), and opioids (n = 3, 21%). Two studies (14%) used a nonrandomized 2-group design. Four (29%) included patients in a clinical setting or panel discussion. Ten (71%) used an assessment tool with validity evidence. Studies reported interventions improved students’ educational outcomes related to SUDs and/or interprofessionalism. Conclusions Interprofessional SUD educational interventions improved health professions students’ knowledge, skills, and attitudes toward SUDs and interprofessional collaboration. Future SUD curriculum design should emphasize assessment and measure changes in students’ behaviors and patient or health care outcomes.
Interprofessional SUD education can be instrumental in preparing the future workforce to manage this pressing and complex public health threat.
Medical education teaching methods and assessment in the intensive care unit have changed little since the initiation of the Accreditation Council for Graduate Medical Education regulations, despite respondents' self-reported willingness to change. Instead, the Accreditation Council for Graduate Medical Education regulations are thought to have negatively impacted resident attitudes, continuity of care, and even availability for teaching. These concerns, coupled with a lack of protected time and funding, serve as barriers to change in critical care graduate medical education.
The Medical Student Performance Evaluation (MSPE) was introduced as a refinement of the prior "dean's letter" to provide residency program directors with a standardized comprehensive assessment of a medical student's performance throughout medical school. The author argues that, although the MSPE was created with good intentions, many have questioned its efficacy in predicting performance during residency. The author asserts that, despite decades of use and some acknowledged improvement, the MSPE remains a suboptimal tool for informing program directors' decisions about which applicants to interview and rank. In the current approach to MSPEs, there may even be some inherent conflicts of interest that cannot be overcome. In January 2015, an MSPE Task Force was created to review the MSPE over three years and recommend changes to its next iteration. The author believes, however, that expanding this collaborative effort between undergraduate and graduate medical education and other stakeholders could optimize the MSPE's standardization and transparency. The author offers six recommendations for achieving this goal: developing a truly standardized MSPE template; improving faculty accountability in student assessment; enhancing transparency in the MSPE; reconsidering the authorship responsibility of the MSPE; including assessment of compliance with administrative tasks and peer assessments in student evaluations; and embracing milestones for evaluation of medical student performance.
Step 1 of the United States Medical Licensing Examination (USMLE) is a multiple-choice exam primarily measuring knowledge about foundational sciences and organ systems. The test was psychometrically designed as pass/fail for licensing boards to decide whether physician candidates meet minimum standards they deem necessary to obtain the medical licensure necessary to practice. With an increasing number of applicants to review, Step 1 scores are commonly used by residency program directors to screen applicants, even though the exam was not intended for this purpose. Elsewhere in this issue, Chen and colleagues describe the “Step 1 climate” that has evolved in undergraduate medical education, affecting learning, diversity, and well-being. Addressing issues related to Step 1 is a challenge. Various stakeholders frequently spend more time demonizing one another rather than listening, addressing what lies under their respective control, and working collaboratively toward better long-term solutions. In this Invited Commentary, the author suggests how different constituencies can act now to improve this situation while aspirational future solutions are developed. One suggestion is to report Step 1 and Step 2 Clinical Knowledge scores as pass/fail and Step 2 Clinical Skills scores numerically. Any changes must be carefully implemented in a way that is mindful of the kind of unintended consequences that have befallen Step 1. The upcoming invitational conference on USMLE scoring (InCUS) will bring together representatives from all stakeholders. Until there is large-scale reform, all stakeholders should commit to taking (at least) one small step toward fixing Step 1 today.
Background Professionalism has been an important tenet of medical education, yet defining it is a challenge. Perceptions of professional behavior may vary by individual, medical specialty, demographic group, and institution. Understanding these differences should help institutions better clarify professionalism expectations and provide standards with which to evaluate resident behavior. Methods Duke University Hospital and Vidant Medical Center/East Carolina University surveyed entering PGY1 residents. Residents were queried on two issues: their perception of the professionalism of 46 specific behaviors related to training and patient care; and their own participation in those specified behaviors. The study reports data analyses for gender and institution based upon survey results in 2009 and 2010. The study received approval by the Institutional Review Boards of both institutions. Results 76% (375) of 495 PGY1 residents surveyed in 2009 and 2010 responded. A majority of responders rated all 46 specified behaviors as unprofessional, and a majority had either observed or participated in each behavior. For all 46 behaviors, a greater percentage of women rated the behaviors as unprofessional. Men were more likely than women to have participated in behaviors. There were several significant differences between institutions in both the perceptions of specified behaviors and in self-reported observation of and/or involvement in those behaviors. Respondents indicated the most important professionalism issues relevant to medical practice include: respect for colleagues/patients, relationships with pharmaceutical companies, balancing home/work life, and admitting mistakes. They reported that professionalism can best be assessed by peers, patients, observation of non-medical work, and timeliness/detail of paperwork. Conclusion Defining professionalism in measurable terms is a challenge, yet critical in order for it to be taught and assessed.
Recognition of the differences by gender and institution should allow for tailored teaching and assessment of professionalism so that it is most meaningful. A shared understanding of what constitutes professional behavior is an important first step.