A staged US and CT imaging protocol in which US is performed first in children suspected of having acute appendicitis is highly accurate and offers the opportunity to substantially reduce radiation.
Objectives: In the emergency department (ED), a significant amount of radiation exposure is due to computed tomography (CT) scans performed for the diagnosis of appendicitis. Children are at increased risk of developing cancer from low-dose radiation, and it is therefore desirable to use CT only when appropriate. Ultrasonography (US) eliminates radiation but has sensitivity inferior to that of CT. We describe an interdisciplinary initiative to use a staged US and CT pathway to maximize diagnostic accuracy while minimizing radiation exposure. Methods: This was a retrospective outcomes analysis of patients presenting after hours for suspected appendicitis at an academic children's hospital ED over a 6-year period. The pathway established US as the initial imaging modality; CT was recommended only if US was equivocal. Outcomes were determined from ED diagnosis and disposition, histopathology, and return visits, and were correlated with the US and CT results. Results: A total of 680 patients met the study criteria. Of these, 407 (60%) followed the pathway, and 200 of these (49%) were managed definitively without CT. A total of 106 patients (26%) had a positive US for appendicitis; 94 (23%) had a negative US; 207 had an equivocal US with follow-up CT. A total of 144 patients went to the operating room (OR); 10 (7%) had negative appendectomies. One case of appendicitis was missed (<0.5%). The sensitivity, specificity, negative predictive value, and positive predictive value of our staged US-CT pathway were 99%, 91%, 99%, and 85%, respectively. A total of 228 of 680 patients (34%) had an equivocal US with no follow-up CT; of these, 10 (4%) went to the OR, with one negative appendectomy. A total of 218 patients (32%) were observed clinically without complications.
Conclusions: Half of the patients who were treated using this pathway were managed with definitive US alone, with an acceptable negative appendectomy rate (7%) and a missed appendicitis rate of less than 0.5%. Visualization of a normal appendix (negative US) was sufficient to obviate the need for CT in the authors' experience. Emergency physicians (EPs) used an equivocal US in conjunction with clinical assessment to care for one-third of study patients without CT and with no known cases of missed appendicitis. These data suggest that by employing US first in all children needing diagnostic imaging for suspected acute appendicitis, radiation exposure may be substantially decreased without compromising safety or efficacy.
Objectives: The Broselow pediatric emergency weight estimation tape is an accurate method of estimating children's weights based on height-weight correlations and determining standardized medication dosages and equipment sizes using color-coded zones. The study objective was to determine the accuracy of the Broselow tape in the Indian pediatric population. Methods: The authors conducted a 6-week prospective cross-sectional study of 548 children at a government pediatric hospital in Chennai, India, in three weight-based groups: <10 kg (n = 175), 10-18 kg (n = 197), and >18 kg (n = 176). Measured weight was compared to Broselow-predicted weight, and the percentage difference was calculated. Accuracy was defined as agreement on Broselow color-coded zones, as well as agreement within 10% between the measured and Broselow-predicted weights. A cross-validated correction factor was also derived. Results: The mean percentage differences were −2.4%, −11.3%, and −12.9% for each weight-based group. The Broselow color-coded zone agreement was 70.8% in children weighing less than 10 kg, but only 56.3% in the 10- to 18-kg group and 37.5% in the >18-kg group. Agreement within 10% was 52.6% for the <10-kg group, but only 44.7% for the 10- to 18-kg group and 33.5% for the >18-kg group. Application of a 10% weight-correction factor improved the percentages to 77.1% for the 10- to 18-kg group and 63.0% for the >18-kg group. Conclusions: The Broselow tape overestimates weight by more than 10% in Indian children >10 kg. Weight overestimation increases the risk of medical errors due to incorrect dosing or equipment selection. Applying a 10% weight-correction factor may be advisable. ACADEMIC EMERGENCY MEDICINE 2008; 15:431-436.
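The accuracy criteria above reduce to simple arithmetic on paired weights. A sketch of the two computations (the weights are invented for illustration; the sign convention for the percentage difference is an assumption, chosen to be negative when the tape overestimates, consistent with the reported negative means; the 0.9 multiplier encodes the proposed 10% correction):

```python
def percent_difference(measured_kg, predicted_kg):
    """Percentage difference between measured and Broselow-predicted weight.
    Negative when the tape overestimates (denominator choice is an
    assumption; the abstract does not specify it)."""
    return 100.0 * (measured_kg - predicted_kg) / predicted_kg

def within_10_percent(measured_kg, predicted_kg, correction=1.0):
    """True if the (optionally corrected) prediction is within 10%
    of the measured weight."""
    corrected = predicted_kg * correction
    return abs(corrected - measured_kg) / measured_kg <= 0.10

# Invented example: a 20 kg child read as 23 kg on the tape.
print(percent_difference(20, 23))                  # about -13%
print(within_10_percent(20, 23))                   # False without correction
print(within_10_percent(20, 23, correction=0.9))   # True with the 10% correction
```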
We view our study as a fundamental part of the incremental progress to understand how best to use US and CT imaging to diagnose pediatric appendicitis while minimizing ionizing radiation. Children at low risk for appendicitis with equivocal US are amenable to observation and reassessment prior to reimaging with US or CT.
Introduction: Experts at tertiary care centers provide solutions to complex cases not addressed by high-quality evidence. They intuitively retrieve patterns from years of experience to make treatment decisions. Short of personal consultations, there is no way to access this vast “experience database.” Experience Engine (XE) is a machine learning solution to structure experiential knowledge relevant for decision making, derive a similarity metric for patients who have received similar treatments, and predict treatment decisions that experts are likely to recommend. Methods: 277 patient histories relating to 743 breast cancer tumor board decisions at two tertiary care centers were abstracted as the training set for machine learning. 161 distinct histories relating to 496 decisions from a separate expert opinion service at one of the centers were the holdout test set. Data were structured into 690 features based on a novel ontology designed specifically for breast cancer decision making. To uncover nonlinear similarities (for example, treatments for younger patients with multiple comorbidities and for elderly patients may be similar), treatment decisions were grouped by timing and modality into 13 groups, such as primary surgery, 1st-line palliative chemotherapy, etc. The similarity metric was derived using machine learning on the training set. The target for prediction was the specific treatment decision, e.g., TAC or another adjuvant regimen. The primary endpoint was the percent accuracy of agreement between XE's predicted decision and the experts' actual decision in the holdout test set. Multiple similarity distance metrics, including Bhattacharyya, Eskin, and Goodall, and multiclass classification algorithms, such as Extreme Gradient Boosted Trees and Support Vector Machines, were systematically evaluated to arrive at the algorithms that best fit each treatment group.
Results: The winning XE algorithms were 71% to 89% accurate across the various treatment groups in predicting the actual treatment decisions recommended by the experts. The most frequent treatments recommended across all groups were standard evidence-based therapies, as are often recommended by experts. For instance, when XE recommended standard adjuvant therapies for Her2- patients, it was 88% to 97% accurate. When XE recommended nonstandard therapies for the same treatment group, it was 72% to 90% accurate, reflecting the larger number of nonstandard therapies within each treatment group and the smaller samples of patients who underwent each type of nonstandard therapy. XE learned to weigh features relating to comorbidities and toxicities when recommending nonstandard therapies. Conclusion: Machine learning on a structured database of past treatment decisions made by experts can yield a predicted treatment decision that an expert is likely to recommend for a new patient. By including complex decisions that consider toxicities and comorbidities, a rich source of knowledge can be created. Despite the limited dataset, XE learned features that experts strongly consider when making decisions. XE has the potential to analyze variations in decision making at expert practices, assess when to recommend nonstandard therapies, and serve as a training tool for new oncologists to make expert-grade treatment decisions. Citation Format: Ramarajan N, Gupta S, Perry P, Srivastava G, Kumbla A, Miller J, Feldman N, Nair N, Badwe RA. Building an experience engine to make cancer treatment decisions using machine learning [abstract]. In: Proceedings of the 2016 San Antonio Breast Cancer Symposium; 2016 Dec 6-10; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2017;77(4 Suppl):Abstract nr P1-14-01.
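Of the categorical similarity measures the abstract names, the Eskin measure has a particularly compact form: matching attribute values score 1, and mismatches are penalized less on attributes that take many distinct values. A minimal sketch, assuming patient records are dicts of categorical features (the feature names and values below are invented, not from the XE ontology):

```python
def eskin_similarity(record_a, record_b, cardinality):
    """Eskin similarity averaged over attributes.
    cardinality[k] = number of distinct values attribute k takes
    in the training data; a mismatch on attribute k scores
    n_k^2 / (n_k^2 + 2) instead of 0."""
    total = 0.0
    for k in record_a:
        if record_a[k] == record_b[k]:
            total += 1.0
        else:
            n_k = cardinality[k]
            total += n_k ** 2 / (n_k ** 2 + 2)
    return total / len(record_a)

# Invented toy features for illustration:
card = {"menopausal_status": 2, "grade": 3, "her2": 2}
a = {"menopausal_status": "pre", "grade": "III", "her2": "positive"}
b = {"menopausal_status": "pre", "grade": "II",  "her2": "positive"}
print(eskin_similarity(a, b, card))  # mismatch only on grade: (1 + 9/11 + 1) / 3
```

In a nearest-neighbor setting, such a metric ranks previously treated patients by similarity to a new patient before a classifier predicts the likely expert recommendation.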
Introduction: Access to expert, evidence-based clinical decision making is crucial in maximizing the outcome of women with breast cancer, but is a scarce resource, especially in developing countries. The Navya Expert System is a patented, software-based clinical decision support system that exhaustively searches and assimilates relevant medical literature and guidelines to make specific therapeutic recommendations for individual patients based on their clinical data. This study is a retrospective validation of the Navya Expert System's output against tumor board decisions of a multidisciplinary group of expert breast cancer clinicians working in a tertiary care oncology center in India. Methods: Women with non-metastatic breast cancer who had already completed their loco-regional and systemic therapy based on the recommendations of the tumor board were included in the study. The protocol-specified clinical and pathology data of these women were retrospectively abstracted from their case charts and processed through the Navya Expert System. The output was classified into major (neo-adjuvant chemotherapy versus upfront surgery, and need for adjuvant chemotherapy, endocrine therapy, and radiation therapy, respectively) and minor (breast conservation versus mastectomy, taxane versus non-taxane adjuvant chemotherapy, and need for nodal radiation therapy) therapeutic decisions. Decisions discordant between the tumor board and the Navya Expert System were adjudicated by an expert panel of breast cancer clinicians from the same institution. Navya Expert System decisions were classified as discordant with appropriate clinical practice if they were in disagreement with both the tumor board and the expert panel. All other Navya Expert System decisions were classified as concordant. The primary outcome of the study was concordance between the Navya Expert System and the tumor board or expert panel for major and minor therapeutic decisions.
Results: A total of 76 patients, involving 224 major and 224 minor therapeutic decisions, were included in the study. The Navya Expert System's output was concordant with the tumor board or expert review in 224/224 major decisions (100%, 95% CI 99.6%-100%) and 221/224 minor decisions (98.6%, 95% CI 97.1%-100%). The Navya Expert System's output was concordant with the tumor board alone in 210/224 (93.75%, 95% CI 90.6%-96.9%) major decisions and 160/224 (71.4%, 95% CI 65.5%-77.3%) minor decisions. The most common reasons for discordance were non-prescription of HER2-targeted therapy by the tumor board due to financial constraints and non-use of nodal radiation for patients with 1-3 positive nodes. Of the 64/224 Navya Expert System decisions discordant with the tumor board, only 3 were finally deemed discordant after review by the expert panel. Conclusions: Navya Expert System treatment recommendations, requiring only the input of commonly available clinical data, are highly concordant with those of a tumor board composed of breast cancer experts. If these results can be prospectively validated, the Navya Expert System has the potential to increase global access to evidence-based clinical decision making in breast cancer. Citation Format: Nita Nair, Sudeep Gupta, Naresh Ramarajan, Gitika Srivastava, Vani Parmar, Anusheel Munshi, Shraddha Vanmali, Vaibhav Vanmali, Rohini Hawaldar, Rajendra A Badwe. Validation of a software based clinical decision support system for breast cancer treatment in a tertiary care cancer center in India [abstract]. In: Proceedings of the Thirty-Seventh Annual CTRC-AACR San Antonio Breast Cancer Symposium: 2014 Dec 9-13; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2015;75(9 Suppl):Abstract nr P4-16-01.
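The non-degenerate confidence intervals above are consistent with a normal-approximation (Wald) interval for a binomial proportion; for example, 210/224 concordant major decisions reproduces the reported 90.6%-96.9%. (The boundary cases such as 224/224 evidently use a different construction, since a Wald interval collapses at 100%.) A minimal sketch:

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) CI for a binomial proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

low, high = wald_ci(210, 224)
print(f"{low:.1%} to {high:.1%}")  # 90.6% to 96.9%, matching the abstract
```

For small samples or proportions near 0 or 1, a Wilson or exact (Clopper-Pearson) interval is generally preferred over Wald.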