BACKGROUND. Health care databases provide a widely used source of data for health care research, but their accuracy remains uncertain. We analyzed data from the 1985 National DRG Validation Study, which carefully reabstracted and reassigned ICD-9-CM diagnosis and procedure codes from a national sample of 7050 medical records, to determine whether coding accuracy had improved since the Institute of Medicine studies of the 1970s and to assess the current coding accuracy of specific diagnoses and procedures. METHODS. We defined agreement as the proportion of all reabstracted records that had the same principal diagnosis or procedure coded on both the original (hospital) record and the reabstracted record. We also evaluated coding accuracy in 1985 using the concepts of diagnostic test evaluation. RESULTS. Overall, the percentage of agreement between the principal diagnosis on the reabstracted record and the original hospital record, when analyzed at the third digit, improved from 73.2% in 1977 to 78.2% in 1985. However, analysis of the 1985 data demonstrated that the accuracy of diagnosis and procedure coding varies substantially across conditions. CONCLUSIONS. Although some diagnoses and all major surgical procedures that we examined were accurately coded, the variability in the accuracy of diagnosis coding poses a problem that must be overcome if claims-based research is to achieve its full potential.
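The agreement measure described above can be sketched in code. This is an illustrative example only, not the study's actual analysis: it compares each pair of principal-diagnosis codes on the first three characters (the ICD-9-CM three-digit category), and the example codes below are hypothetical.

```python
# Illustrative sketch (not from the study): percent agreement between
# originally coded and reabstracted principal diagnoses, compared at
# the third digit of the ICD-9-CM code (the 3-character category).

def third_digit_agreement(original, reabstracted):
    """Fraction of record pairs whose codes match on the first three characters."""
    assert len(original) == len(reabstracted) and original
    matches = sum(o[:3] == r[:3] for o, r in zip(original, reabstracted))
    return matches / len(original)

# Hypothetical codes: 428.0 vs 428.1 agree at the third digit (both 428),
# while 250.0 vs 486 do not.
orig = ["428.0", "410.9", "250.0"]
reab = ["428.1", "410.9", "486"]
print(round(third_digit_agreement(orig, reab), 3))  # 0.667
```

Comparing at the third digit treats codes in the same three-digit category as agreeing even when fourth- or fifth-digit subclassifications differ, which is why third-digit agreement rates run higher than exact-code agreement.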
Reimbursement of hospitals by Medicare under the prospective-payment system is based on patients' diagnoses as coded at discharge. During the period October 1984 through March 1985, we studied the accuracy of the coding for diagnosis-related groups (DRGs) in hospitals receiving Medicare reimbursement. We used a two-stage cluster method to sample 7050 medical records from 239 hospitals that were stratified according to size. Using blinded techniques with reliability checks, medical-record specialists reabstracted the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes to assign correct DRGs to discharged patients. The correct DRGs were then compared with those originally assigned by the physician and the hospital administration. The study revealed an error rate of 20.8 percent in DRG coding. Errors were distributed equally between physicians and hospitals. Small hospitals had significantly higher error rates. Previous studies had found that errors occurred randomly, so that half the errors benefited the hospital financially and half penalized the hospital. The present study found that a statistically significant 61.7 percent of coding errors favored the hospital. These errors caused the average hospital's case-mix index--a measure of the complexity of illness of the hospital's patients--to increase by 1.9 percent. As a result, hospitals received higher net reimbursement from Medicare than was supportable by the medical records. We conclude that "creep" does occur in the coding of DRGs, resulting in overpayment to hospitals for patients covered by Medicare.
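The mechanism behind "DRG creep" can be sketched numerically. The figures below are hypothetical, chosen only to illustrate the arithmetic: under prospective payment, a hospital's Medicare payment scales with its case-mix index (the mean DRG relative weight of its discharges), so coding errors that systematically raise DRG weights translate directly into higher reimbursement.

```python
# Illustrative sketch (assumed values, not study data): how upcoding
# inflates the case-mix index (CMI) and, with it, total PPS payment.

def case_mix_index(drg_weights):
    """Mean DRG relative weight across a hospital's discharges."""
    return sum(drg_weights) / len(drg_weights)

# Hypothetical DRG relative weights for four discharges.
true_weights = [1.0, 1.2, 0.8, 1.5]       # weights supported by the records
coded_weights = [1.0, 1.25, 0.8, 1.55]    # errors favoring the hospital

cmi_true = case_mix_index(true_weights)    # 1.125
cmi_coded = case_mix_index(coded_weights)  # 1.15
overpayment_pct = (cmi_coded / cmi_true - 1) * 100
print(round(overpayment_pct, 2))  # 2.22
```

In the study itself, the net effect of errors favoring the hospital was a 1.9 percent rise in the average case-mix index; the sketch simply shows why a CMI shift of that size maps one-for-one into a payment shift of the same percentage.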
Extensive debates exist in the literature on the indications, effectiveness, and risks of carotid endarterectomy. However, no investigations analyze the procedure's epidemiology. Medicare paid for essentially all carotid endarterectomies on patients over 65 years old, more than two thirds of all such surgery. Accordingly, we identified all 1985 to 1989 Medicare bills for ICD-9-CM code 38.12. This report found an average annual decrease of 6.4% in the frequency of carotid endarterectomies. Higher proportions and incidence rates occurred among 65- to 79-year-old people, men, and whites. Larger, urban, and nonprofit hospitals performed the procedure more often. The number of hospitals performing this procedure has increased over time. Mortality rates within 30 days decreased from 3.0% of procedures in 1985 to 2.5% in 1989. Higher than average death rates occurred among older, male, and black patients, and in low-volume hospitals. Clinical trials undertaken in large, urban, teaching, high-volume institutions reported mortality of only 1%. The institutions actually performing carotid endarterectomies differ from those in the clinical trials in their demography and perioperative mortality rates. This difference in community practice may limit the applicability of the clinical trials.
The attestation requirement may have deterred DRG creep due to attending physician upcoding, but the peer review organizations' sentinel effect and educational activities have not eliminated hospital resequencing.
As part of a controlled clinical trial of Health Hazard Appraisal's (HHA) efficacy in stimulating risk reduction, the reliability of the HHA questionnaire was evaluated. Of 203 subjects, only 30 (15 per cent) had no contradictions when comparing the responses of the follow-up with the baseline questionnaire. Overall, there was an average of 1.6 contradictions per subject. Failure to control for reliability may account for apparent reductions of risk reported in previous studies.
INTRODUCTION. How would a nonrespondent respond if a nonrespondent responded? While that sounds like a childhood rhyme, it is in fact a critical question for evaluators. Whether the methodology is a telephone survey, mailed questionnaire, or face-to-face interview, the evaluator must inevitably face the reality of nonrespondents. The extent to which these nonrespondents might differ in their answers, and the fraction of the sample they constitute, can have a significant impact on the validity of any findings, with subsequent implications for the costs and resources required to conduct such surveys. Because of concerns raised about an inverse relationship between response rate and satisfaction levels on an annual client satisfaction survey, a study was conducted to address the issue of how nonrespondents might have affected results. Specifically, following the mailed questionnaire survey and two follow-up attempts, hard-core nonrespondents were contacted to ask why they did not respond and to elicit responses to key questions from the original questionnaire. These were used to determine the effect these individuals might have on the overall results reported annually from the U.S.