What is the effect of landscape heterogeneity on the spread rate of populations? Several spatially explicit simulation models address this question for particular cases and yield qualitative insights (e.g., extinction thresholds) but no quantitative relationships. We use a discrete-time analytic model and find general quantitative relationships for the invasion threshold, i.e., the minimal percentage of suitable habitat required for population spread. We investigate how, on the relevant spatial scales, this threshold depends on the relationship between dispersal ability and fragmentation level. The invasion threshold increases with fragmentation level when there is no Allee effect, but it decreases with fragmentation in the presence of an Allee effect. Using averaging techniques, we obtain simple formulas for the approximate spread rate of a population in heterogeneous landscapes. Comparison with spatially explicit simulations shows excellent agreement between approximate and true values. We apply our results to the spread of trees and discuss implications for the control of invasive species.
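As a hedged illustration of the kind of averaging the abstract describes (the paper's actual formulas are not reproduced in the abstract, so the arithmetic-mean growth rate, the Fisher-type speed heuristic, and all parameter names below are assumptions for illustration only): with per-step growth factor r_good on the suitable fraction p of the landscape and r_bad elsewhere, a mean-field invasion condition p·r_good + (1−p)·r_bad > 1 yields a habitat threshold in closed form.

```python
import math

# Illustrative sketch only -- not the paper's derivation. Assumes a
# spatially averaged (arithmetic-mean) growth factor and a Fisher-type
# speed heuristic c = 2*sqrt(D * ln(R)) above the invasion threshold.

def habitat_threshold(r_good, r_bad):
    """Minimal suitable-habitat fraction p* solving p*r_good + (1-p)*r_bad = 1."""
    return (1.0 - r_bad) / (r_good - r_bad)

def approx_speed(p, r_good, r_bad, D=1.0):
    """Approximate spread rate in a landscape with suitable fraction p."""
    R = p * r_good + (1.0 - p) * r_bad  # landscape-averaged growth factor
    if R <= 1.0:
        return 0.0  # below the invasion threshold: the population cannot spread
    return 2.0 * math.sqrt(D * math.log(R))

# Example: growth factor 2 on suitable habitat, 0.5 elsewhere.
print(habitat_threshold(2.0, 0.5))  # ~0.33: a third of the habitat must be suitable
print(approx_speed(0.2, 2.0, 0.5))  # below threshold, so spread rate is 0.0
```

Under these assumptions the threshold falls out of simple algebra; the paper's contribution is that its averaging remains accurate across fragmentation scales, which this toy calculation does not capture.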
Objectives: The goal of emergency medicine (EM) training is to produce physicians who can competently run an emergency department (ED) shift. However, there are few tools with supporting validity evidence specifically designed to assess multiple key competencies across an entire shift. The investigators developed and gathered validity evidence for a novel entrustment-based tool to assess a resident's ability to safely run an ED shift. Methods: Through a nominal group technique, local and national stakeholders identified dimensions of performance that are reflective of a competent ED physician and are required to safely manage an ED shift. These were included as items in the Ottawa Emergency Department Shift Observation Tool (O-EDShOT), and each item was scored using an entrustment-based rating scale. The tool was implemented in 2018 at the
Purpose: Postgraduate training programs are incorporating feedback from registered nurses (RNs) to facilitate holistic assessments of resident performance. RNs are a potentially rich source of feedback because they often observe trainees during clinical encounters when physician supervisors are not present. However, RN perspectives about sharing feedback have not been deeply explored. This study investigated RN perspectives about providing feedback and explored the facilitators and barriers influencing their engagement.
Objectives: Conferences are designed for knowledge translation, but traditional conference evaluations are inadequate. Few studies have explored alternatives to traditional evaluation metrics. We sought to determine how traditional evaluation metrics and Twitter metrics performed using data from a conference of the Canadian Association of Emergency Physicians (CAEP). Methods: This study used a retrospective design to compare social media posts and traditional evaluations related to an annual specialty conference. A post ("tweet") on the social media platform Twitter was included if it was associated with a session. We differentiated original and discussion tweets from retweets. We weighted the numbers of tweets and retweets to compose a novel Twitter Discussion Index. We extracted the speaker score from the conference evaluation. We performed descriptive statistics and correlation analyses. Results: Of a total of 3,804 tweets, 2,218 (58.3%) were session-specific. Forty-eight percent of all sessions received tweets (mean = 11.7 tweets; 95% CI 0 to 57.5; range 0–401), with a median Twitter Discussion Index score of 8 (interquartile range 0 to 27). Of the 111 standard presentations, 85 had traditional evaluation metrics and 71 received tweets (p > 0.05); 57 received both. Twenty-eight percent (20 of 71) of moderated posters and 44% (40 of 92) of posters or oral abstracts received tweets without traditional evaluation metrics. We found no significant correlation between the Twitter Discussion Index and traditional evaluation metrics (R = 0.087). Conclusions: We found no correlation between traditional evaluation metrics and Twitter metrics. However, in many sessions with and without traditional evaluation metrics, the audience created real-time tweets to disseminate knowledge. Future conference organizers could use Twitter metrics as a complement to traditional evaluation metrics to evaluate knowledge translation and dissemination.
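The abstract does not state the weights behind the Twitter Discussion Index, so the weights in this sketch (2 per original or discussion tweet, 1 per retweet) are hypothetical, chosen only to show the shape of the computation; the Pearson correlation mirrors the correlation analysis reported (R = 0.087).

```python
# Hypothetical weighting -- the published index's actual weights are not
# given in the abstract.
def twitter_discussion_index(n_original, n_retweets, w_original=2, w_retweet=1):
    """Weighted combination of original/discussion tweets and retweets."""
    return w_original * n_original + w_retweet * n_retweets

def pearson_r(xs, ys):
    """Pearson correlation coefficient, dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: per-session index values vs. speaker evaluation scores.
idx = [twitter_discussion_index(3, 2), twitter_discussion_index(0, 1),
       twitter_discussion_index(10, 5)]
scores = [3.6, 3.4, 3.7]
print(pearson_r(idx, scores))
```

A near-zero r, as the study found, is unsurprising when one variable (evaluation scores) has almost no spread, which the companion abstract below notes explicitly.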
Background: Coaching is an important component of workplace‐based assessment in competency‐based medical education. Longitudinal coaching relationships have been proposed to enhance the trainee–supervisor relationship and promote high‐quality assessment. Objective: The objective of this study was to determine the influence of longitudinal coaching relationships on the quality of entrustable professional activity (EPA) assessments. Methods: EPAs (n = 174) completed by emergency medicine (EM) supervisors between July 2020 and June 2021 were extracted and divided into two groups: one consisting of EPAs completed by supervisors when a longitudinal coaching relationship existed (n = 87) and the other consisting of EPAs completed by the same supervisors when no coaching relationship existed (n = 87). Three physicians were recruited to rate the EPAs using the Quality of Assessment and Learning (QuAL) score, a previously published measure of EPA quality. An analysis of variance was performed to compare mean QuAL scores between the groups. Linear regression analysis was conducted to examine the relationship between trainee performance (EPA rating) and EPA assessment quality (QuAL score). Results: All raters completed the survey. The mean ± SD QuAL score in the coaching relationship group (3.63 ± 0.91) was higher than in the no-coaching-relationship group (3.51 ± 1.10), but the difference was not statistically significant (p = 0.40). Supervisor was a significant predictor of QuAL score (p = 0.012), and supervisor alone accounted for 26% of the variability in QuAL scores (R2 = 0.26). There was no significant relationship between trainee performance and EPA assessment quality. Conclusions: The presence of a longitudinal coaching relationship did not influence the quality of EPA assessments.
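The reported "supervisor alone accounted for 26% of the variability (R2 = 0.26)" is the between-group share of total variance for a single categorical factor. A minimal, dependency-free sketch of that quantity (the data below are made up for illustration):

```python
def variance_explained(groups):
    """R^2 for a one-way grouping factor: SS_between / SS_total."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_total = sum((v - grand) ** 2 for v in all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    return ss_between / ss_total

# Each inner list holds the QuAL scores awarded by one (hypothetical) supervisor.
by_supervisor = [[3.0, 3.5, 4.0], [4.5, 5.0, 4.0], [3.5, 3.0, 3.5]]
print(variance_explained(by_supervisor))  # fraction of variance due to supervisor
```

If supervisors cluster tightly around different means, this fraction is large; a value of 0.26, as in the study, means supervisor identity explains about a quarter of the score spread.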
Background: Work‐based assessments (WBAs) are increasingly used to inform decisions about trainee progression. Unfortunately, WBAs often fail to discriminate between trainees of differing abilities and have poor reliability. Entrustment‐supervision scales may improve WBA performance, but there is a paucity of literature directly comparing them to traditional WBA tools. Methods: The Ottawa Emergency Department Shift Observation Tool (O‐EDShOT) is a previously published WBA tool employing an entrustment‐supervision scale with strong validity evidence. This pre‐/post‐implementation study compares the performance of the O‐EDShOT with that of a traditional WBA tool using norm‐based anchors. All assessments completed in 12‐month periods before and after implementing the O‐EDShOT were collected, and generalisability analysis was conducted with year of training, trainees within year and forms within trainee as nested factors. Secondary analysis included assessor as a factor. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors, for 152 and 138 trainees in the pre‐ and post‐implementation phases respectively. The O‐EDShOT generated a wider range of awarded scores than the traditional WBA, and mean scores increased more with increasing level of training (0.32 vs. 0.14 points per year, p = 0.01). A significantly greater proportion of overall score variability was attributable to trainees using the O‐EDShOT (59%) compared with the traditional tool (21%, p < 0.001). Assessors contributed less to overall score variability for the O‐EDShOT than for the traditional WBA (16% vs. 37%). Moreover, the O‐EDShOT required fewer completed assessments than the traditional tool (27 vs. 51) for a reliability of 0.8. Conclusion: The O‐EDShOT outperformed a traditional norm‐referenced WBA in discriminating between trainees and required fewer assessments to generate a reliable estimate of trainee performance.
More broadly, this study adds to the body of literature suggesting that entrustment‐supervision scales generate more useful and reliable assessments in a variety of clinical settings.
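The "27 assessments for a reliability of 0.8" result is the kind of projection a generalizability decision study makes: with trainee variance V_p and per-form error variance V_e, projected reliability is E(n) = V_p / (V_p + V_e/n), which can be solved for the number of forms n. The variance components below are back-calculated purely for illustration (V_e = 6.75 × V_p reproduces n = 27); they are not the study's published components.

```python
import math

def reliability(var_trainee, var_error, n_forms):
    """Projected generalizability coefficient with n_forms averaged per trainee."""
    return var_trainee / (var_trainee + var_error / n_forms)

def forms_needed(var_trainee, var_error, target=0.8):
    """Smallest n with reliability(n) >= target (decision-study projection)."""
    n = (target / (1.0 - target)) * (var_error / var_trainee)
    return math.ceil(round(n, 9))  # round first to avoid float artifacts

# Illustrative components: per-form error variance 6.75x the trainee variance.
print(forms_needed(1.0, 6.75))     # -> 27 forms for reliability 0.8
print(reliability(1.0, 6.75, 27))  # -> 0.8
```

The formula makes the paper's comparison concrete: the larger the trainee share of score variance (59% for the O‐EDShOT vs. 21% for the traditional tool), the smaller the relative error term and the fewer forms needed to reach a given reliability.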
Introduction: Traditional post-conference speaker evaluations are inconsistently completed; meanwhile, real-time social media tools such as Twitter are increasingly used at conferences. We sought to determine whether a correlation exists between the traditional conference evaluation for a speaker and the number of real-time tweets the session generated, using data from a CAEP conference. Methods: This study utilized a retrospective design. The hashtag #CAEP14 was prospectively registered with Symplur, an online Twitter management tool, so that all tweets related to the 2014 CAEP conference were stored. A tweet was associated with a session if it mentioned the speaker's name, or if the tweet's content and timing closely matched those of the session in the schedule. A tweet classification system was developed to differentiate original tweets from retweets, and quotes from comments generating further discussion. Two authors assessed and coded the first 200 tweets together to ensure a uniform approach to coding, and then independently coded the remaining tweets. Discrepancies were resolved by consensus. One author reviewed the post-conference speaker evaluations and abstracted the value corresponding to the question "The speaker was an effective communicator". We present descriptive statistics and correlation analyses. Results: A total of 3,804 tweets were collected, with 2,218 (58.3%) associated with a session. Forty-eight percent (131 of 274) of sessions received at least one tweet, with a mean of 11.7 tweets per session (95% CI 0 to 57.5). In comparison, only 31% (85 of 274) of sessions received a formal post-conference speaker evaluation (p < 0.005). For sessions that received at least one traditional post-conference evaluation, there was no significant correlation between the number of tweets and evaluation scores (R = 0.087). This can be attributed to the minimal variation among evaluation scores (median = 3.6 out of 5, IQR 3.4 to 3.7).
Conclusion: There was no correlation between the number of real-time tweets and traditional post-conference speaker evaluation. However, many sessions which received no formal speaker evaluation generated tweets, and the number of tweets was highly variable between sessions. Thus, Twitter metrics might be useful for conference organizers to supplement formal speaker evaluations.