1995
DOI: 10.1111/j.1365-2044.1995.tb04554.x
An assessment of the consistency of ASA physical status classification allocation

Abstract: The American Society of Anesthesiologists' (ASA) Physical Status Classification was tested for consistency of use by anaesthetists. A postal questionnaire was sent to 113 anaesthetists of varying experience working in the Northern Region of England. They were asked to allot ASA grades to 10 hypothetical patients. Ninety-seven (85.8%) responded to two mailings. In no case was there complete agreement on ASA grade, and in only one case were responses restricted to two of the five possible grades. In one c…

Cited by 339 publications (219 citation statements); references 9 publications.
“…Although the anesthesiologist was not controlled in our study, because of its retrospective nature, the anesthesiologists who determined each patient's ASA score reported their findings at the time of surgery as part of normal clinical routine and not as part of our study. Studies of interrater consistency of the ASA score have observed its inconsistency [18,32] and imprecision [31,38]. As previously suggested [19], this poor interobserver reliability most likely is the result of difficulty in distinguishing a patient with normal health (ASA Class 1) from one with mild systemic disease (ASA Class 2) rather than discriminating between a relatively healthy patient (ASA Classes 1 and 2) and one with severe, even life-threatening, systemic disease (ASA Classes 3 and 4).…”
Section: Discussion
confidence: 99%
“…After evaluation by an anesthesiologist, patients are assigned a score between 1, which represents a normal healthy patient, and 5, which represents a moribund patient not expected to survive without the operation [1]. The ASA Physical Status Classification System is widely used, but cases must be manually reviewed by a clinician, which is time consuming, subjective, and highly variable [12]. Attempts have been made to adapt existing tools such as the Therapeutic Intervention Scoring System-28 [8] or to develop new metrics with site-specific analyses based on direct observation [14].…”
Section: Introduction
confidence: 99%
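The grading scheme described in the statement above can be captured as a simple lookup. This is an illustrative sketch, not code from any of the cited studies; the descriptions paraphrase the standard ASA-PS class definitions.

```python
# Hypothetical sketch: map ASA-PS grades (1-5) to their textual
# definitions, as summarised in the citation statement above.
ASA_PHYSICAL_STATUS = {
    1: "normal healthy patient",
    2: "patient with mild systemic disease",
    3: "patient with severe systemic disease",
    4: "patient with severe systemic disease that is a constant threat to life",
    5: "moribund patient not expected to survive without the operation",
}

def describe_asa(grade: int) -> str:
    """Return the textual definition for an ASA-PS grade (1-5)."""
    try:
        return ASA_PHYSICAL_STATUS[grade]
    except KeyError:
        raise ValueError(f"ASA grade must be 1-5, got {grade}")

print(describe_asa(1))  # normal healthy patient
```

The lookup makes the subjectivity problem concrete: the definitions are short free-text phrases, so the boundary between, say, "mild" and "severe" systemic disease is left to clinical judgement.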
“…13 Notably, some prior research has found limited inter-rater reliability when anesthesiologists applied the ASA-PS scale to hypothetical case scenarios or de-identified medical records. [14-19] Conversely, more recent research has shown the scale to have at least moderate inter-rater reliability in usual clinical practice, with more than 98% of paired ASA-PS ratings of individual patients being within one class of each other. 20 Additionally, despite these potential limitations, the ASA-PS scale has shown at least moderate accuracy in predicting postoperative mortality across a wide range of studies.…”
Section: Résumé
confidence: 99%
“…The graph was plotted using the R Statistical Language Version 3.2.1 (Vienna, Austria) reliability. [14-20] Thus, whenever possible, a risk score and its individual components should have good inter-rater reliability.…”
Section: Other Characteristics Needed in a Good Risk Score
confidence: 99%
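The inter-rater reliability these citation statements keep returning to is typically quantified with an agreement statistic such as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch follows; the two raters and their ASA grades are invented for illustration and do not come from the original study.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters grading the same set of cases.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's
    marginal grade frequencies.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_expected = sum(freq_a[g] * freq_b[g] for g in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Two hypothetical anaesthetists grading the same 10 patients (ASA 1-5).
rater_1 = [1, 2, 2, 3, 1, 4, 2, 3, 5, 2]
rater_2 = [2, 2, 1, 3, 1, 4, 3, 3, 5, 2]
print(round(cohens_kappa(rater_1, rater_2), 3))  # 0.605
```

Kappa near 1 indicates near-perfect agreement and values around 0.4-0.6 only moderate agreement, which is the range the inter-rater studies quoted above are debating; most of the disagreement in those studies sits at the ASA 1 versus ASA 2 boundary.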