2019 | DOI: 10.1177/1460458218824705

The elephant in the record: On the multiplicity of data recording work

Abstract: This article focuses on the production side of clinical data work, or data recording work, and in particular, on its multiplicity in terms of data variability. We report the findings from two case studies aimed at assessing the multiplicity that can be observed when the same medical phenomenon is recorded by multiple competent experts, yet the recorded data enable the knowledgeable management of illness trajectories. Often framed in terms of the latent unreliability of medical data, and then treated as a probl…

Cited by 22 publications (19 citation statements). References 41 publications.
“…While data infrastructures are built to enable monitoring of populations and early prevention measures, 20 and to monitor the quality of services, 21,22 healthcare workers increasingly learn to maneuver playfully within these environments. 23 Thus, the papers collected here discuss the data work of different groups: doctors struggling to clarify data ambiguity and use predictive algorithms for personalized medicine; 24 coming to terms with data variability (varying data on the same phenomena); 25 clinicians' responses to patients' increasing data literacy; 26 nurses' interpretation of patient-generated data as a means of including patients in their own care; 27,28 and patients generating data about their health. 29,30 Finally, there is the work of producing data itself, including skillfully assessing messy charts to create structured datasets, 22 sanitizing and validating data, 31 and building data integrations between various information systems.…”
Section: What Is Data Work? (mentioning; confidence: 99%)
“…[46][47][48][49] Collectively, the papers in this special issue address these concerns by providing detailed empirical accounts of how the quest for data has local consequences. The papers also provide policy and design implications focused on how to better support individual and collaborative data work, 25 how to characterize the data activism of patients so that they might be more influential, 29 and how to support workers in adapting to data work and finding ways to thrive amid changing, data-centric work environments. 23 A critical line of ongoing research should concern the consequences of the widespread "valorization of data-oriented ways of knowing" 50 in healthcare.…”
Section: Implications Of Healthcare Data Work (mentioning; confidence: 99%)
“…The same patient could be assigned a Mild label by one doctor and a Severe label by another, even when both doctors intend to characterize the very same condition, which could be represented by the same numerical value on a 0-100 continuum. This observation concerns the phenomenon of inter-rater reliability which, although widely known in the medical field [47], is still little known and seldom considered in most fields of applied computer science [3, 48]. For these reasons we argue that any method for properly representing ordinal scales in numerical terms should be grounded in an empirical and human-centered approach, that is, in the subjective perceptions of the domain experts for whom the ordinal categories to be fuzzified are meaningful according to the context, precisely by virtue of their descriptive power and despite their ambiguity.…”
Section: Discussion (mentioning; confidence: 99%)
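A minimal sketch of the inter-rater reliability issue described in the statement above, using hypothetical severity labels and scikit-learn's cohen_kappa_score rather than any data or analysis from the cited papers: two raters label the same cases on the ordinal scale Mild < Moderate < Severe, and a quadratically weighted Cohen's kappa quantifies their agreement while penalizing Mild-versus-Severe disagreements more than disagreements between adjacent categories.

# Illustrative example: inter-rater agreement on ordinal severity labels.
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels assigned by two raters to the same ten cases.
rater_a = ["Mild", "Mild", "Moderate", "Severe", "Mild",
           "Moderate", "Severe", "Severe", "Mild", "Moderate"]
rater_b = ["Mild", "Moderate", "Moderate", "Severe", "Moderate",
           "Moderate", "Severe", "Moderate", "Mild", "Moderate"]

# Map the ordinal categories to integer ranks so that quadratic weighting
# reflects the ordering Mild < Moderate < Severe.
rank = {"Mild": 0, "Moderate": 1, "Severe": 2}
a = [rank[label] for label in rater_a]
b = [rank[label] for label in rater_b]

kappa = cohen_kappa_score(a, b, weights="quadratic")
print(f"Quadratically weighted Cohen's kappa: {kappa:.2f}")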
“…The same patient could be assigned a Mild label by one doctor and a Severe label by another, even when both doctors intend to characterize the very same condition, which could be represented by the same numerical value on a 0-100 continuum. This observation concerns the phenomenon of inter-rater reliability which, although widely known in the medical field [47], is still little known and seldom considered […]
Table 3. Regression performance of the four machine learning models considered in the comparative study, in terms of Mean Absolute Error (MAE) and related confidence intervals (CIs, at a 95% confidence level): the lower the value, the better the performance. The first column presents the CIs of the MAE of the models with the ordinal encoding; the second column presents the same accuracy indicators for the CoIED encoding.…”
Section: Perception Of HL7 Terminology (mentioning; confidence: 99%)
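A hedged sketch of the kind of accuracy indicator summarized in the quoted table caption, using synthetic predictions rather than the models or the CoIED encoding of the cited study: each candidate's predictions are scored with the Mean Absolute Error, and a percentile bootstrap yields a 95% confidence interval around it.

# Illustrative example: MAE with bootstrap 95% confidence intervals.
import numpy as np
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
y_true = rng.uniform(0, 100, size=200)             # severity on a 0-100 continuum
pred_ordinal = y_true + rng.normal(0, 12, 200)     # stand-in for the ordinal encoding
pred_alternative = y_true + rng.normal(0, 8, 200)  # stand-in for the alternative encoding

def mae_with_ci(y, y_hat, n_boot=2000, alpha=0.05):
    """Point MAE plus a percentile bootstrap confidence interval."""
    idx = np.arange(len(y))
    boot_maes = []
    for _ in range(n_boot):
        sample = rng.choice(idx, size=len(idx), replace=True)
        boot_maes.append(mean_absolute_error(y[sample], y_hat[sample]))
    lo, hi = np.quantile(boot_maes, [alpha / 2, 1 - alpha / 2])
    return mean_absolute_error(y, y_hat), (lo, hi)

for name, pred in [("ordinal", pred_ordinal), ("alternative", pred_alternative)]:
    mae, (lo, hi) = mae_with_ci(y_true, pred)
    print(f"{name:12s} MAE = {mae:5.2f}  95% CI [{lo:.2f}, {hi:.2f}]")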
“…According to the above perspective, we can speak of the reliability of a Gold Standard only in terms of the reliability of the Diamond Standard from which it has been derived by means of some specific reduction. In turn, the reliability of a Diamond Standard concerns the extent to which this set of annotations expresses a unitary interpretation of the individual cases observed, despite the multiplicity of views entailed by the different raters involved in interpreting each case [11]. If all of the raters agree on each and every case, then no disagreement among the cases' annotations is observed, and reliability is maximal.…”
Section: Reliability (mentioning; confidence: 99%)
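A minimal sketch of the Gold/Diamond Standard relation described above, with invented annotations and majority voting as the reduction (the cited work may use a different reduction): the Diamond Standard keeps every rater's label for each case, the Gold Standard collapses each case to a single label, and per-case agreement gives a simple, non-chance-corrected reliability reading.

# Illustrative example: deriving a Gold Standard from a Diamond Standard.
from collections import Counter

# Hypothetical Diamond Standard: one row per case, one label per rater.
diamond_standard = [
    ["Mild",   "Mild",     "Mild"],      # unanimous
    ["Severe", "Moderate", "Severe"],    # majority Severe
    ["Mild",   "Moderate", "Severe"],    # full disagreement
]

gold_standard = []
agreements = []
for case in diamond_standard:
    label, votes = Counter(case).most_common(1)[0]
    agreement = votes / len(case)        # 1.0 means unanimous agreement
    gold_standard.append(label)
    agreements.append(agreement)
    print(f"{case} -> gold: {label:8s} agreement: {agreement:.2f}")

# A crude overall reliability indicator; chance-corrected coefficients
# such as Fleiss' kappa refine this idea.
print(f"Mean per-case agreement: {sum(agreements) / len(agreements):.2f}")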