Object The authors describe the artificial neural network (ANN) as an innovative and powerful modeling tool that is increasingly being applied to develop predictive models in neurosurgery. They aimed to demonstrate the utility of an ANN in predicting survival following traumatic brain injury and to compare its predictive ability with that of regression models and clinicians.

Methods The authors designed an ANN to predict in-hospital survival following traumatic brain injury. The model was generated with 11 clinical inputs and a single output. Using a subset of the National Trauma Database, the authors "trained" the model to predict outcome by providing it with patients for whom the 11 clinical inputs were paired with known outcomes, which allowed the ANN to "learn" the relationships that predict outcome. The model was then tested against actual outcomes in a new subset of 100 patients derived from the same database. For comparison with traditional forms of modeling, 2 regression models were developed using the same training set and evaluated on the same testing set. Lastly, the authors used the same 100-patient testing set to evaluate 5 neurosurgery residents and 4 neurosurgery staff physicians on their ability to predict survival on the basis of the same 11 data points that were provided to the ANN. The ANN was compared with the clinicians and the regression models in terms of accuracy, sensitivity, specificity, and discrimination.

Results Compared with regression models, the ANN was more accurate (p < 0.001), more sensitive (p < 0.001), as specific (p = 0.260), and more discriminating (p < 0.001). There was no difference between the neurosurgery residents and staff physicians, so all clinicians were pooled for comparison with the 5 best neural networks. The ANNs were more accurate (p < 0.0001), more sensitive (p < 0.0001), as specific (p = 0.743), and more discriminating (p < 0.0001) than the clinicians.
Conclusions When given the same limited clinical information, the ANN significantly outperformed regression models and clinicians on multiple performance measures. While this paradigm certainly does not adequately reflect a real clinical scenario, this form of modeling could ultimately serve as a useful clinical decision support tool. As the model evolves to include more complex clinical variables, it is hoped that the performance gap over clinicians and regression models will persist or, ideally, widen.
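The paradigm described above, pairing a fixed set of clinical inputs with known binary outcomes so the network "learns" the predictive relationships, can be sketched with a small feedforward network. Everything below is an illustrative assumption rather than the authors' actual model: the single hidden layer of 8 units, the hyperparameters, and the synthetic stand-in data (the National Trauma Database subset is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 11          # 11 clinical inputs, as in the abstract
n_hidden = 8           # hidden-layer size is an illustrative assumption

# Synthetic stand-in for the training subset: binary survival outcomes
# loosely tied to a noisy linear signal in the 11 features.
X = rng.normal(size=(500, n_inputs))
true_w = rng.normal(size=n_inputs)
y = (X @ true_w + 0.5 * rng.normal(size=500) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One tanh hidden layer and a single sigmoid output, trained by
# full-batch gradient descent on cross-entropy loss.
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
W2 = rng.normal(scale=0.1, size=n_hidden)
lr = 0.1
for _ in range(500):
    H = np.tanh(X @ W1)                      # hidden activations
    p = sigmoid(H @ W2)                      # predicted survival probability
    grad_out = p - y                         # dLoss/dlogit for sigmoid + CE
    grad_h = np.outer(grad_out, W2) * (1 - H ** 2)
    W2 -= lr * H.T @ grad_out / len(y)
    W1 -= lr * X.T @ grad_h / len(y)

pred = sigmoid(np.tanh(X @ W1) @ W2) > 0.5
accuracy = (pred == y.astype(bool)).mean()
```

In practice the trained network would be evaluated on a held-out testing set, as the authors do with their 100-patient subset; here only the training step is sketched.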
An ensemble is a set of learned models that make decisions collectively. Although an ensemble is usually more accurate than a single learner, existing ensemble methods often construct unnecessarily large ensembles, which increases memory consumption and computational cost. Ensemble pruning tackles this problem by selecting a subset of ensemble members to form subensembles with lower resource consumption and shorter response times, and with accuracy similar to or better than that of the original ensemble. In this paper, we analyze the accuracy/diversity trade-off and prove that classifiers that are more accurate and that make more predictions in the minority group are more important for subensemble construction. Based on the gained insights, a heuristic metric that considers both accuracy and diversity is proposed to explicitly evaluate each individual classifier's contribution to the whole ensemble. By incorporating ensemble members in decreasing order of their contributions, subensembles are formed such that users can select the top p percent of ensemble members for prediction, depending on their resource availability and tolerable waiting time. Experimental results on 26 UCI data sets show that subensembles formed by the proposed EPIC (Ensemble Pruning via Individual Contribution ordering) algorithm outperform the original ensemble and a state-of-the-art ensemble pruning method, Orientation Ordering (OO) [16].
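The core of this approach, scoring each member's contribution and keeping only the top p percent, can be sketched on a simulated vote matrix. The scoring rule below (correct votes count once, correct minority-group votes count double, since they can flip a wrong majority) is a simplified stand-in for the paper's exact contribution metric, and the member votes are synthetic rather than trained classifiers:

```python
import numpy as np

rng = np.random.default_rng(1)

n_members, n_samples = 20, 200
y = rng.integers(0, 2, size=n_samples)             # true labels
# Members of varying quality: each agrees with y with its own probability.
quality = rng.uniform(0.55, 0.9, size=n_members)
votes = np.where(rng.random((n_members, n_samples)) < quality[:, None],
                 y, 1 - y)

majority = (votes.sum(axis=0) > n_members / 2).astype(int)

def contribution(member_votes):
    # Simplified accuracy/diversity score: extra credit when a member
    # is both correct and in the minority vote group.
    correct = member_votes == y
    minority = member_votes != majority
    return correct.sum() + (correct & minority).sum()

scores = np.array([contribution(v) for v in votes])
order = np.argsort(-scores)                        # decreasing contribution

p = 0.3                                            # keep top 30% of members
sub = votes[order[: max(1, int(p * n_members))]]
sub_pred = (sub.sum(axis=0) > len(sub) / 2).astype(int)

full_acc = (majority == y).mean()
sub_acc = (sub_pred == y).mean()
```

Users with tighter resource budgets would simply lower p; the ordering itself is computed once.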
Derivatives are common preprocessing tools, typically implemented as Savitzky-Golay (SG) smoothing derivatives. This work discusses the implementation and optimization of fourth-order gap derivatives (GDs) as an alternative to SG derivatives for processing infrared spectra before multivariate calibration. Gap derivatives approximate the analytical derivative by calculating finite differences of spectra without curve fitting. They offer the advantage of tunability for spectral data, as the distance (gap) over which the finite difference is calculated can be varied. Gap selection is a compromise among signal attenuation, noise amplification, and spectral resolution. A method for fourth-derivative gap selection, and a discussion of its importance, are presented, along with a comparison to SG preprocessing and lower-order GDs in the context of multivariate calibration. In most cases, we found that optimized GDs led to calibration models performing comparably to or better than SG derivatives, and that optimized fourth-order GDs behaved similarly to matched filters.
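A fourth-order gap derivative can be sketched directly from its definition as a finite difference at spacing "gap". The stencil (1, -4, 6, -4, 1) is the standard fourth difference; the normalization by gap⁴ (assuming unit sample spacing) is our choice here so that the result approximates the analytical fourth derivative, and it is not necessarily how the paper scales its GDs:

```python
import numpy as np

def gap_derivative4(x, gap):
    # Fourth finite difference at spacing `gap`, using the standard
    # (1, -4, 6, -4, 1) stencil; edge points where the stencil does
    # not fit are dropped. Dividing by gap**4 (unit sample spacing
    # assumed) approximates the analytical fourth derivative.
    g = gap
    diff = (x[:-4 * g] - 4 * x[g:-3 * g] + 6 * x[2 * g:-2 * g]
            - 4 * x[3 * g:-g] + x[4 * g:])
    return diff / g ** 4

# Sanity check on a pure quartic: d^4/dt^4 of t^4 is the constant 24,
# which the gap derivative recovers exactly for any gap.
t = np.arange(0.0, 50.0)
d4 = gap_derivative4(t ** 4, gap=2)
```

The tunability described in the abstract corresponds to varying `gap`: larger gaps attenuate noise but blur narrow spectral features, which is exactly the compromise the authors optimize.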
Background: Two coronavirus disease 2019 (COVID-19) vaccines have received emergency use authorizations in the U.S. However, the safety of these vaccines in the real world remains unknown.

Methods: We reviewed adverse events (AEs) following COVID-19 vaccination among adults reported to the Vaccine Adverse Event Reporting System (VAERS) from December 14, 2020, through January 22, 2021. We compared the top 10 AEs and serious AEs, along with office and emergency room (ER) visits, by age (18–64 years, ≥65 years) and gender (female, male).

Results: There were age and gender disparities among adults with AEs following COVID-19 vaccination. Compared to younger adults aged 18–64 years, older adults were more likely to report serious AEs, death, permanent disability, and hospitalization. Males were more likely than females to report serious AEs, death, and hospitalization.

Conclusions: COVID-19 vaccines are generally safe, but possible age and gender disparities in reported AEs may exist.
Counting craters in remotely sensed images is the only tool that provides relative dating of remote planetary surfaces. Surveying craters requires counting a large number of small subkilometer craters, which calls for highly efficient automatic crater detection. In this article, we present an integrated framework for the automatic detection of subkilometer craters using boosting and transfer learning. The framework contains three key components. First, we utilize mathematical morphology to efficiently identify crater candidates, the regions of an image that can potentially contain craters. Only those regions, which occupy relatively small portions of the original image, are subject to further processing. Second, we extract and select image texture features, in combination with supervised boosting ensemble learning algorithms, to accurately classify crater candidates into craters and noncraters. Third, we integrate transfer learning into boosting to enhance detection performance in regions where surface morphology differs from what is characterized by the training set. Our framework is evaluated on a large test image of 37,500 × 56,250 m² on Mars, which exhibits a heavily cratered Martian terrain characterized by nonuniform surface morphology. Empirical studies demonstrate that the proposed crater detection framework can achieve an F1 score above 0.85, a significant improvement over the other crater detection algorithms.
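The second component, a supervised boosting ensemble over texture features, can be sketched as classic AdaBoost with decision stumps. The synthetic features and labels below stand in for the real texture features of crater candidates, and the morphology and transfer-learning stages are not reproduced; this only illustrates the boosting mechanics of reweighting misclassified candidates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for texture features of crater candidates:
# label +1 = crater, -1 = non-crater.
X = rng.normal(size=(300, 5))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

def best_stump(X, y, w):
    # Exhaustive search for the weighted-error-minimizing threshold
    # on a single feature.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

w = np.full(len(y), 1 / len(y))
stumps, alphas = [], []
for _ in range(10):                       # 10 boosting rounds
    err, j, thr, sign = best_stump(X, y, w)
    err = max(err, 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
    w *= np.exp(-alpha * y * pred)        # upweight misclassified candidates
    w /= w.sum()
    stumps.append((j, thr, sign))
    alphas.append(alpha)

# Ensemble prediction: sign of the alpha-weighted stump votes.
F = sum(a * np.where(s * (X[:, j] - thr) > 0, 1, -1)
        for a, (j, thr, s) in zip(alphas, stumps))
accuracy = (np.sign(F) == y).mean()
```

The transfer-learning extension in the third component modifies how `w` is updated for source-domain versus target-domain examples, but the reweighting skeleton stays the same.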
Background: Proxy responses are very common when surveys are conducted among elderly or disabled populations. Outcomes reported by proxy may be systematically different from those obtained from patients directly. The objective of this study is to examine the presence, direction, and magnitude of possible differences between proxy-reported and patient-reported outcomes in health and functional status measures among Medicare beneficiaries.

Methods: This study is a pooled cross-sectional study of a nationally representative sample of community-dwelling Medicare beneficiaries from 2006 to 2011. Survey respondents can respond to the Medicare Current Beneficiary Survey either by themselves or via proxies. Health and functional status was assessed across five domains: physical, affective, cognitive, social, and sensory. Propensity score matching was used to obtain matched pairs of patient-reports and proxy-reports.

Results: After applying propensity score matching, the study identified 7,780 person-years of patient-reports paired with 7,780 person-years of proxy-reports. Except for sensory limitations, differences between proxy-reported and patient-reported outcomes were present in physical, affective, cognitive, and social limitations. Compared to patient-reports, a question regarding survey respondents' difficulty in managing money was associated with the largest proxy response bias (relative risk, RR = 3.83). With few exceptions, the presence, direction, and magnitude of differences between proxy-reported and patient-reported outcomes did not vary much in the subgroup analyses.

Conclusions: When there is a difference between proxy-reported and patient-reported outcomes, proxies tend to report more health and functional limitations among the elderly and disabled population. The extent of proxy response bias depends on the domain being tested and the nature of the question being asked.
Researchers should accept proxy reports for sensory status and for objective, observable, or easy questions. For physical, affective, cognitive, or social status and for private, unobservable, or complex questions, proxy-reported outcomes should be used with caution when patient-reported outcomes are not available.
Generating models from large data sets -- and determining which subsets of data to mine -- is becoming increasingly automated. However, choosing what data to collect in the first place requires human intuition or experience, usually supplied by a domain expert. This paper describes a new approach to machine science that demonstrates for the first time that non-domain experts can collectively formulate features, and provide values for those features, such that they are predictive of some behavioral outcome of interest. This was accomplished by building a web platform in which human groups interact both to respond to questions likely to help predict a behavioral outcome and to pose new questions to their peers. This produces a dynamically growing online survey, and the cooperative behavior also yields models that can predict users' outcomes based on their responses to the user-generated survey questions. Here we describe two web-based experiments that instantiate this approach: the first site led to models that can predict users' monthly electric energy consumption; the other led to models that can predict users' body mass index. As exponential increases in content are often observed in successful online collaborative communities, the proposed methodology may, in the future, lead to similar exponential rises in discovery and insight into the causal factors of behavioral outcomes.