What is the effect of landscape heterogeneity on the spread rate of populations? Several spatially explicit simulation models address this question for particular cases and find qualitative insights (e.g., extinction thresholds) but no quantitative relationships. We use a discrete-time analytic model and find general quantitative relationships for the invasion threshold, i.e., the minimal percentage of suitable habitat required for population spread. We investigate how, on the relevant spatial scales, this threshold depends on the relationship between dispersal ability and fragmentation level. The invasion threshold increases with fragmentation level when there is no Allee effect, but it decreases with fragmentation in the presence of an Allee effect. Using averaging techniques, we obtain simple formulas for the approximate spread rate of a population in heterogeneous landscapes. Comparison with spatially explicit simulations shows excellent agreement between approximate and true values. We apply our results to the spread of trees and discuss implications for the control of invasive species.
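The averaging approach mentioned above can be illustrated with a classical result for reaction-diffusion invasions in a periodically fragmented, two-phase landscape (habitat fraction p with growth rate r_1 and diffusivity D_1, matrix fraction 1-p with growth rate r_2 < 0 and diffusivity D_2). This continuous-time approximation, in the spirit of Shigesada and Kawasaki's work on patchy environments, is a sketch for orientation only, not necessarily the exact formula derived in this discrete-time paper:

```latex
% Arithmetic mean of the growth rate, harmonic mean of the diffusivity,
% and the resulting Fisher-type approximation of the spread rate:
\bar{r}_a = p\, r_1 + (1-p)\, r_2, \qquad
\bar{D}_h = \left(\frac{p}{D_1} + \frac{1-p}{D_2}\right)^{-1}, \qquad
c \;\approx\; 2\sqrt{\bar{r}_a\, \bar{D}_h}
```

Under these assumptions, setting the averaged growth rate to zero recovers an invasion-threshold condition: spread is possible only when the habitat fraction satisfies p > -r_2 / (r_1 - r_2).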
Objectives: The outcome of emergency medicine (EM) training is to produce physicians who can competently run an emergency department (ED) shift. However, there are few tools with supporting validity evidence specifically designed to assess multiple key competencies across an entire shift. The investigators developed and gathered validity evidence for a novel entrustment-based tool to assess a resident's ability to safely run an ED shift. Methods: Through a nominal group technique, local and national stakeholders identified dimensions of performance that are reflective of a competent ED physician and are required to safely manage an ED shift. These were included as items in the Ottawa Emergency Department Shift Observation Tool (O-EDShOT), and each item was scored using an entrustment-based rating scale. The tool was implemented in 2018 at the
Purpose: Postgraduate training programs are incorporating feedback from registered nurses (RNs) to facilitate holistic assessments of resident performance. RNs are a potentially rich source of feedback because they often observe trainees during clinical encounters when physician supervisors are not present. However, RN perspectives about sharing feedback have not been deeply explored. This study investigated RN perspectives about providing feedback and explored the facilitators and barriers influencing their engagement.
Objectives: Conferences are designed for knowledge translation, but traditional conference evaluations are inadequate, and few studies have explored alternatives to traditional evaluation metrics. We sought to determine how traditional evaluation metrics and Twitter metrics performed using data from a conference of the Canadian Association of Emergency Physicians (CAEP). Methods: This study used a retrospective design to compare social media posts and traditional evaluations related to an annual specialty conference. A post (“tweet”) on the social media platform Twitter was included if it was associated with a session. We differentiated original and discussion tweets from retweets, and we weighted the numbers of tweets and retweets to compute a novel Twitter Discussion Index. We extracted the speaker score from the conference evaluation and performed descriptive statistics and correlation analyses. Results: Of a total of 3,804 tweets, 2,218 (58.3%) were session-specific. Forty-eight percent (48%) of all sessions received tweets (mean = 11.7 tweets; 95% CI 0 to 57.5; range 0–401), with a median Twitter Discussion Index score of 8 (interquartile range 0 to 27). Of the 111 standard presentations, 85 had traditional evaluation metrics and 71 received tweets (p > 0.05), while 57 received both. Twenty-eight percent (20 of 71) of moderated posters and 44% (40 of 92) of posters or oral abstracts received tweets without traditional evaluation metrics. We found no significant correlation between the Twitter Discussion Index and traditional evaluation metrics (R = 0.087). Conclusions: We found no correlation between traditional evaluation metrics and Twitter metrics. However, in many sessions with and without traditional evaluation metrics, the audience created real-time tweets to disseminate knowledge. Future conference organizers could use Twitter metrics as a complement to traditional evaluation metrics to evaluate knowledge translation and dissemination.
Introduction: Traditional post-conference speaker evaluations are inconsistently completed; meanwhile, real-time social media tools such as Twitter are increasingly used at conferences. We sought to determine whether a correlation exists between the traditional conference evaluation for a speaker and the number of real-time tweets the session generated, using data from a CAEP conference. Methods: This study utilized a retrospective design. The hashtag #CAEP14 was prospectively registered with Symplur, an online Twitter management tool, so that all tweets related to the CAEP 2014 conference were stored. A tweet was associated with a session if it mentioned the speaker's name, or if the tweet's content and timing closely matched those of the session in the schedule. A tweet classification system was developed to differentiate original tweets from retweets, and quotes from comments generating further discussion. Two authors assessed and coded the first 200 tweets together to ensure a uniform approach to coding, and then independently coded the remaining tweets. Discrepancies were resolved by consensus. One author reviewed post-conference speaker evaluations and abstracted the value corresponding to the question “The speaker was an effective communicator”. We present descriptive statistics and correlation analyses. Results: A total of 3,804 tweets were collected, with 2,218 (58.3%) associated with a session. Forty-eight percent (131 of 274) of sessions received at least one tweet, with a mean of 11.7 tweets per session (95% CI 0 to 57.5). In comparison, only 31% (85 of 274) of sessions received a formal post-conference speaker evaluation (p < 0.005). For sessions that received at least one traditional post-conference evaluation, there was no significant correlation between the number of tweets and evaluation scores (R = 0.087). This can be attributed to the minimal variation between evaluation scores (median = 3.6 out of 5, IQR 3.4 to 3.7).
Conclusion: There was no correlation between the number of real-time tweets and traditional post-conference speaker evaluation. However, many sessions which received no formal speaker evaluation generated tweets, and the number of tweets was highly variable between sessions. Thus, Twitter metrics might be useful for conference organizers to supplement formal speaker evaluations.
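The weighting and correlation analysis described in the two abstracts above can be sketched as follows. The weights (2 for an original or discussion tweet, 1 for a retweet) and the per-session counts are assumptions for illustration only, not values reported by the study:

```python
from statistics import mean

def twitter_discussion_index(originals, retweets, w_orig=2, w_rt=1):
    """Weighted tweet count per session; the weights are assumed,
    not the ones used in the study."""
    return w_orig * originals + w_rt * retweets

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-session data: (original tweets, retweets, speaker score /5)
sessions = [(3, 2, 3.6), (0, 0, 3.4), (10, 7, 3.7), (1, 1, 3.5), (6, 3, 3.6)]
tdi = [twitter_discussion_index(o, rt) for o, rt, _ in sessions]
scores = [s for _, _, s in sessions]
r = pearson_r(tdi, scores)
```

Note that when evaluation scores cluster tightly, as in the study (IQR 3.4 to 3.7), the denominator term for the scores is small and the correlation estimate becomes unstable, which is one reason a near-zero R is unsurprising.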
Background Coaching is an important component of workplace‐based assessment in competency‐based medical education. Longitudinal coaching relationships have been proposed to enhance the trainee–supervisor relationship and promote high‐quality assessment. Objective The objective of this study was to determine the influence of longitudinal coaching relationships on the quality of entrustable professional activity (EPA) assessments. Methods EPAs (n = 174) completed by emergency medicine (EM) supervisors between July 2020 and June 2021 were extracted and divided into two groups; one group consisted of EPAs completed by supervisors when a longitudinal coaching relationship existed (n = 87) and the other group consisted of EPAs completed by the same supervisors when no coaching relationship existed (n = 87). Three physicians were recruited to rate the EPAs using the Quality of Assessment and Learning (QuAL) score, a previously published measure of EPA quality. An analysis of variance was performed to compare mean QuAL scores between the groups. Linear regression analysis was conducted to examine the relationship between trainee performance (EPA rating) and EPA assessment quality (QuAL score). Results All raters completed the survey. The mean ± SD QuAL score in the coaching relationship group (3.63 ± 0.91) was higher than the no coaching relationship group (3.51 ± 1.10) but the difference was not statistically significant (p = 0.40). Supervisor was a significant predictor of QuAL score (p = 0.012) and supervisor alone accounted for 26% of the variability in QuAL scores (R2 = 0.26). There was no significant relationship between trainee performance and EPA assessment quality. Conclusions The presence of a longitudinal coaching relationship did not influence the quality of EPA assessments.
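The between-group comparison reported above can be sketched with a one-way ANOVA computed from first principles. The QuAL scores below are hypothetical placeholders, not the study's data:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical QuAL scores (0-5) for EPAs completed with and
# without a longitudinal coaching relationship
coaching = [4, 3, 5, 4, 3, 4, 2, 5]
no_coaching = [3, 4, 2, 5, 3, 4, 3, 4]
f_stat = one_way_anova_f(coaching, no_coaching)
```

With two groups, the F statistic is equivalent to the square of an independent-samples t statistic, so either test leads to the same p value here.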
Introduction: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) was recently developed to assess a resident's ability to safely run an ED shift and is supported by multiple sources of validity evidence. The O-EDShOT uses entrustability scales, which reflect the degree of supervision required for a given task. It was found to discriminate between learners of different levels and to differentiate between residents who were rated as able to safely run the shift and those who were not. In June 2018 we replaced norm-based daily encounter cards (DECs) with the O-EDShOT. With an ideal assessment tool, most of the score variability would be explained by variability in learners' performances; in reality, however, much of the observed variability is explained by other factors. The purpose of this study was to determine what proportion of total score variability is accounted for by learner variability when using norm-based DECs vs the O-EDShOT. Methods: This was a prospective pre-/post-implementation study including all daily assessments completed between July 2017 and June 2019 at The Ottawa Hospital ED. A generalizability analysis (G study) was performed to determine what proportion of total score variability is accounted for by the various factors in this study (learner, rater, form, PGY level) for both the pre- and post-implementation phases. We collected 12 months of data for each phase because we estimated that 6-12 months would be required to observe a measurable increase in entrustment scale scores within a learner. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. Our G study revealed that 21% of total score variance was explained by a combination of post-graduate year (PGY) level and the individual learner in the pre-implementation phase, compared to 59% in the post-implementation phase.
On average, 51 vs. 27 forms per learner were required to achieve a reliability of 0.80 in the pre- and post-implementation phases, respectively. Conclusion: A significantly greater proportion of total score variability is explained by variability in learners' performances with the O-EDShOT compared to norm-based DECs. The O-EDShOT also requires fewer assessments to generate a reliable estimate of a learner's ability. This study suggests that the O-EDShOT is a more useful assessment tool than norm-based DECs and could be adopted in other emergency medicine training programs.
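"Forms needed for reliability 0.80" figures like those above typically come from a decision (D) study, which projects a single-form generalizability coefficient to larger numbers of forms in Spearman-Brown fashion. A minimal sketch, where the single-form coefficient g is the ratio of learner variance to total variance and 0.13 is an illustrative value, not one reported by the study:

```python
import math

def forms_needed(g_single, target=0.80):
    """Number of forms per learner needed to reach a target reliability,
    solving target = n*g / (1 + (n - 1)*g) for n (Spearman-Brown)."""
    return math.ceil(target * (1 - g_single) / (g_single * (1 - target)))

# Illustrative single-form coefficient (assumed, not from the study)
print(forms_needed(0.13))  # -> 27
```

The formula makes the study's headline result intuitive: a tool that attributes more score variance to the learner has a higher g, and the required number of forms drops quickly as g rises.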