2018
DOI: 10.1097/acm.0000000000002238
Measuring Medical Housestaff Teamwork Performance Using Multiple Direct Observation Instruments: Comparing Apples and Apples

Abstract: There was substantial variation in the rating of individual teams assessed concurrently by a single observer using multiple instruments. Because existing teamwork observation tools do not yield concordant assessments, researchers should create better tools for measuring teamwork performance.

Cited by 10 publications (12 citation statements)
References 19 publications
“…30 Another group applied 9 different teamwork observation instruments designed for other settings to the context of daily rounds and found discordant assessments across instruments. 10 They concluded that existing frameworks may not be suitable for daily rounds and recommended developing novel instruments for rounding leadership and collecting validity evidence, which we have accomplished here.…”
Mentioning (100%) · Confidence: 97%
“…Approaches attempting to adapt current leadership assessments to daily rounds have been limited by problems of variability, consistency, and validity. 10 Further, alternative methods often rely on subjective measures and are prone to bias and the halo effect, 11,12 while others have not been used in the clinical setting to capture real-life performance. Finally, there is an absence of codified best practices 13 in rounding leadership to serve as guiding frameworks.…”
Section: Introduction · Mentioning · Confidence: 99%
“…Fewer authors appeared to leverage constructivist/interpretivist framings (e.g., Christensen et al., 2018; Pool et al., 2018). We noted instances in which observers were considered objective but fallible, interchangeable, and as contributing error that could be mitigated through training (e.g., Biagioli et al., 2017; Cameron et al., 2017; Dory et al., 2018; Naumann et al., 2016; Park et al., 2016, 2017; Roberts et al., 2017a, b; Turner et al., 2017; Weingart et al., 2018). By contrast, others appeared to value observer subjectivity and positioned the variation between observers as meaningful (e.g., Chahine et al., 2016; Christensen et al., 2018; Pool et al., 2018).…”
Section: Differences in the Way Assessment Features Are Enacted: Suggesting Variable Positionality · Mentioning · Confidence: 98%
“…Authors' descriptions of how assessment features were informed by philosophical positions were either vague, unclear, or not reported, and thus required a high degree of interpretation or inference for each feature (e.g., Biagioli et al., 2017; DeMuth et al., 2018; Gingerich et al., 2017; Ginsburg et al., 2017; Hauer et al., 2018; Li et al., 2017; Martin et al., 2018; Mink et al., 2018; Naidoo et al., 2017; Naumann et al., 2016; Weingart et al., 2018). This need to interpret led us to call multiple team meetings to discuss and debate authors' potential positions.…”
Section: Philosophical Positions as Vague, Unclear, or Not Reported · Mentioning · Confidence: 99%
“…Since the idea of a diagnostic team is relatively new, [29] there are substantial knowledge gaps, and research opportunities, with respect to how best to define and design these teams to optimize the diagnostic process and clinical reasoning outcomes. While much is known about the assessment of individuals [30] and teams, [31] we must develop and gather validity evidence for better tools that allow more meaningful measurement of diagnostic performance. With respect to team assessment, teamwork alone should not be the dependent variable of these assessments; instead, teamwork should be treated as an independent variable that leads to a clinically important outcome.…”
Section: The Structure of Teams in Health Care · Mentioning · Confidence: 99%