Information on the stage of liver fibrosis is essential in managing chronic hepatitis C (CHC) patients. However, most models for predicting liver fibrosis are complicated, and separate formulas are needed to predict significant fibrosis and cirrhosis. The aim of our study was to construct one simple model consisting of routine laboratory data to predict both significant fibrosis and cirrhosis among patients with CHC. Consecutive treatment-naive CHC patients who underwent liver biopsy over a 25-month period were divided into 2 sequential cohorts: training set (n = 192) and validation set (n = 78). The best model for predicting both significant fibrosis (Ishak score ≥ 3) and cirrhosis in the training set included platelets, aspartate aminotransferase (AST), and alkaline phosphatase, with areas under the ROC curves (AUC) of 0.82 and 0.92, respectively. A novel index, the AST to platelet ratio index (APRI), was developed to amplify the opposing effects of liver fibrosis on AST and platelet count. The AUCs of APRI for predicting significant fibrosis and cirrhosis were 0.80 and 0.89, respectively, in the training set. Using optimized cut-off values, significant fibrosis could be predicted accurately in 51% of patients and cirrhosis in 81%. The AUCs of APRI for predicting significant fibrosis and cirrhosis in the validation set were 0.88 and 0.94, respectively. In conclusion, our study showed that a simple index using readily available laboratory results can identify CHC patients with significant fibrosis and cirrhosis with a high degree of accuracy. Application of this index may decrease the need for staging liver biopsy specimens among CHC patients. (HEPATOLOGY 2003;38:518-526.)
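For readers who want to apply the index: APRI is conventionally computed as the AST level, expressed as a ratio to its laboratory upper limit of normal, divided by the platelet count (10^9/L) and multiplied by 100. A minimal sketch follows; the function name and example values are illustrative, and since the abstract does not report the optimized cut-off values, none are hard-coded here.

    def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
        """AST to platelet ratio index (APRI).

        ast_iu_l: serum AST in IU/L
        ast_uln_iu_l: laboratory upper limit of normal for AST in IU/L
        platelets_10e9_l: platelet count in 10^9/L
        """
        return (ast_iu_l / ast_uln_iu_l) / platelets_10e9_l * 100

    # Illustrative values: AST 80 IU/L (ULN 40 IU/L), platelets 100 x 10^9/L
    print(apri(80, 40, 100))  # (80/40) / 100 * 100 = 2.0

An elevated AST and a depressed platelet count, the two opposing effects of advancing fibrosis named in the abstract, both push the index upward, which is what makes a single score usable for both endpoints.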
Distinct problems in the analysis of failure times with competing causes of failure include the estimation of treatment or exposure effects on specific failure types, the study of interrelations among failure types, and the estimation of failure rates for some causes given the removal of certain other failure types. The usual formulation of these problems is in terms of conceptual or latent failure times for each failure type. This approach is criticized on the basis of unwarranted assumptions, lack of physical interpretation, and identifiability problems. An alternative approach utilizing cause-specific hazard functions for observable quantities, including time-dependent covariates, is proposed. Cause-specific hazard functions are shown to be the basic estimable quantities in the competing risks framework. A method, involving the estimation of parameters that relate time-dependent risk indicators for some causes to cause-specific hazard functions for other causes, is proposed for the study of interrelations among failure types. Further, it is argued that the problem of estimating failure rates under the removal of certain causes is not well posed until a mechanism for cause removal is specified. Following such a specification, one will sometimes be in a position to make sensible extrapolations from available data to situations involving cause removal. A clinical program in bone marrow transplantation for leukemia provides a setting for discussion and illustration of each of these ideas. Failure due to censoring in a survivorship study leads to further discussion.
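For context, the cause-specific hazard function that the abstract treats as the basic estimable quantity is standardly defined, for failure time T, failure cause J among m causes, and covariate history x(t), as

    \lambda_j\{t;\, x(t)\} = \lim_{\Delta t \downarrow 0} \frac{\Pr\{t \le T < t + \Delta t,\ J = j \mid T \ge t,\ x(t)\}}{\Delta t}, \qquad j = 1, \dots, m,

so that the overall hazard is the sum \lambda\{t; x(t)\} = \sum_{j=1}^{m} \lambda_j\{t; x(t)\}. Each \lambda_j conditions only on observable events, which is what allows it to be estimated without reference to latent failure times.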
Standard methods for the regression analysis of clustered data postulate models relating covariates to the response without regard to between- and within-cluster covariate effects. Implicit in these analyses is the assumption that these effects are identical. Example data show that this is frequently not the case and that analyses that ignore differential between- and within-cluster covariate effects can be misleading. Consideration of between- and within-cluster effects also helps to explain observed and theoretical differences between mixture model analyses and those based on conditional likelihood methods. In particular, we show that conditional likelihood methods estimate purely within-cluster covariate effects, whereas mixture model approaches estimate a weighted average of between- and within-cluster covariate effects.
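To make the distinction concrete, a common way to allow the two effects to differ (a sketch of the standard decomposition, not necessarily the authors' exact parameterization) splits the covariate x_{ij} for member j of cluster i into its cluster mean and a within-cluster deviation:

    g\{E(Y_{ij} \mid x_{ij}, \bar{x}_i)\} = \beta_0 + \beta_B \bar{x}_i + \beta_W (x_{ij} - \bar{x}_i).

The implicit assumption described above is \beta_B = \beta_W. In this notation, conditional likelihood methods estimate \beta_W alone, while mixture (random-effects) model analyses estimate a weighted average of \beta_B and \beta_W when the two differ.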
Currently, patients awaiting deceased-donor liver transplantation are prioritized by medical urgency. Specifically, wait-listed chronic liver failure patients are sequenced in decreasing order of Model for End-Stage Liver Disease (MELD) score. To maximize lifetime gained through liver transplantation, posttransplant survival should be considered in prioritizing liver waiting list candidates. We evaluate a survival-benefit-based system for allocating deceased-donor livers to chronic liver failure patients. Under the proposed system, at the time of offer, the transplant survival benefit score would be computed for each patient active on the waiting list. The proposed score is based on the difference in 5-year mean lifetime (with vs. without a liver transplant) and accounts for patient and donor characteristics. The rank correlation between benefit score and MELD score is 0.67. There is great overlap in the distribution of benefit scores across MELD categories, since waiting list mortality is significantly affected by several factors other than MELD score. Simulation results indicate that over 2,000 life-years would be saved per year if benefit-based allocation were implemented. The shortage of donor livers increases the need to maximize the life-saving capacity of procured livers. Allocation of deceased-donor livers to chronic liver failure patients would be improved by prioritizing patients by transplant survival benefit.
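To illustrate the kind of quantity being ranked: a difference in 5-year mean lifetime is a difference of restricted mean survival times, i.e., the areas under the two predicted survival curves up to 5 years. A minimal sketch, assuming hypothetical predicted survival curves s_post (with transplant) and s_wl (without) already derived from patient and donor characteristics; the function names, trapezoidal integration, and example numbers are illustrative, not the authors' implementation.

    import numpy as np

    def rmst(times, surv, horizon=5.0):
        """Restricted mean survival time: area under the survival curve
        up to `horizon` (years)."""
        keep = times < horizon
        t = np.concatenate(([0.0], times[keep], [horizon]))
        s = np.concatenate(([1.0], surv[keep], [np.interp(horizon, times, surv)]))
        return float(np.sum(np.diff(t) * (s[:-1] + s[1:]) / 2.0))  # trapezoidal rule

    def benefit_score(times, s_post, s_wl, horizon=5.0):
        """Difference in mean lifetime over `horizon` years: with vs. without transplant."""
        return rmst(times, s_post, horizon) - rmst(times, s_wl, horizon)

    # Illustrative (made-up) yearly survival probabilities
    times = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    s_post = np.array([0.90, 0.85, 0.80, 0.76, 0.72])  # with transplant
    s_wl = np.array([0.70, 0.55, 0.45, 0.38, 0.32])    # without transplant
    print(benefit_score(times, s_post, s_wl))  # about 1.4 life-years gained

Under a scheme of this shape, a sicker patient with poor waiting-list survival but reasonable posttransplant prospects can outrank a patient with a higher MELD-style urgency but little expected gain, which is the behavior the abstract's simulation results quantify.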