Objective: The accurate prediction of seizure freedom after epilepsy surgery remains challenging. We investigated whether (1) training more complex models, (2) recruiting larger sample sizes, or (3) using data-driven selection of clinical predictors would improve our ability to predict postoperative seizure outcome using clinical features. We also conducted the first substantial external validation of a machine learning model trained to predict postoperative seizure outcome.

Methods: We performed a retrospective cohort study of 797 children who had undergone resective or disconnective epilepsy surgery at a tertiary center. We extracted patient information from medical records and trained three models (a logistic regression, a multilayer perceptron, and an XGBoost model) to predict 1-year postoperative seizure outcome on our data set. We evaluated the performance of a recently published XGBoost model on the same patients. We further investigated the impact of sample size on model performance, using learning curve analysis to estimate performance at sample sizes up to N = 2000. Finally, we examined the impact of predictor selection on model performance.

Results: Our logistic regression achieved an accuracy of 72% (95% confidence interval [CI] = 68%–75%, area under the curve [AUC] = .72), whereas our multilayer perceptron and our XGBoost both achieved accuracies of 71% (95% CI = 67%–74%, AUC = .70 for the multilayer perceptron; 95% CI = 68%–75%, AUC = .70 for our XGBoost). There was no significant difference in performance between our three models (all p > .4), and all three performed better than the external XGBoost, which achieved an accuracy of 63% (95% CI = 59%–67%, AUC = .62) on our data (p = .005, .01, and .01 vs. the logistic regression, the multilayer perceptron, and our XGBoost, respectively). All models showed improved performance with increasing sample size, but limited improvements beyond our current sample. The best model performance was achieved with data-driven feature selection.

Significance: We show that neither the deployment of complex machine learning models nor the assembly of thousands of patients alone is likely to generate significant improvements in our ability to predict postoperative seizure freedom. We instead propose that improved feature selection alongside collaboration, data standardization, and model sharing is required to advance the field.
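The learning curve analysis described in the Methods can be sketched in a minimal form. The sketch below is an illustration only: the study's patient data, feature set, and exact modeling pipeline are not available from the abstract, so it substitutes synthetic data and scikit-learn's `learning_curve` utility with a logistic regression, one of the three model families the study compares. All names and parameters here are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Synthetic stand-in for clinical features and a binary seizure-freedom label;
# the real study used records from 797 children, which are not public.
X, y = make_classification(n_samples=800, n_features=20, n_informative=8,
                           random_state=0)

# Cross-validated accuracy at increasing training-set sizes. Extrapolating the
# resulting curve is how performance at larger hypothetical samples is estimated.
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5, scoring="accuracy")

mean_val = val_scores.mean(axis=1)
for n, acc in zip(sizes, mean_val):
    print(f"N={n:4d}  mean validation accuracy={acc:.2f}")
```

A flattening curve at the largest sizes is the pattern the study reports: accuracy gains diminish as the sample grows, suggesting that more patients alone would not substantially improve prediction.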
Purpose: Our objective was to review the outcomes of children with CIM and associated cerebrospinal fluid (CSF) disorders and ventriculomegaly undergoing endoscopic third ventriculostomy (ETV) as a primary intervention.

Materials and methods: A retrospective, single-centre, observational cohort study was conducted of consecutive children with CIM and associated CSF disorders and ventriculomegaly treated first by ETV between January 2014 and December 2020.

Results: Symptoms of raised intracranial pressure were the most frequent presentation (10 patients), followed by posterior fossa and syrinx symptoms (3 cases). One patient had later closure of the stoma and required shunt insertion. The success rate of ETV in the cohort was 92% (11/12). There was no surgical mortality in our series, and no other complications were reported. The median herniation of the tonsils did not differ significantly between the pre- and postoperative MRI (1.14 vs. 0.94, p = 0.1). However, the median Evans index (0.4 vs. 0.36, p < 0.01) and the median diameter of the third ventricle (1.35 vs. 0.76, p < 0.01) differed significantly between the two measurements. The preoperative length of the syrinx did not change significantly compared with the postoperative length (5 vs. 1, p = 0.052); nevertheless, the median transverse diameter of the syrinx improved significantly after surgery (0.75 vs. 0.32, p = 0.03).

Conclusions: Our study supports the safety and effectiveness of ETV for the management of children with CSF disorders, ventriculomegaly, and associated CIM.