Automated detection of premature delivery using empirical mode and wavelet packet decomposition techniques with uterine electromyogram signals (2017)
DOI: 10.1016/j.compbiomed.2017.04.013

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2

Citation Types

6
68
1

Year Published

2018
2018
2024
2024

Publication Types

Select...
3
3
1

Relationship

0
7

Authors

Journals

Cited by 93 publications (75 citation statements). References 48 publications.
“…108/09/09). Materials and methods. With the aim to develop a useful and improved automatic method for predicting preterm birth, we followed a general and widely accepted development process [29][30][31][32][33][34][35][36]: 1. select or construct a valid database for training and testing the model; 2. characterize the data and use effective mathematical expressions to formulate the features that reflect their correlation with the target classes;…”
mentioning
confidence: 99%
“…Assessing separability, feature selection, and feature ranking. In order to estimate the ability of individual features to separate between preterm and term, contraction and dummy intervals, to separate between the entire preterm and term EHG records, and to assess the rank of their ability to classify preterm and term deliveries, we used the two-sample t-test with a pooled variance estimate [45], the Bhattacharyya criterion, i.e., the minimum attainable classification error or Chernoff bound [35,46,47], and the relative entropy criterion, also known as Kullback-Leibler distance or divergence [48]. We dealt with a large number of potential features that could be used for classification.…”
mentioning
confidence: 99%
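The three separability criteria named in the statement above can be sketched for a single univariate feature. This is a minimal illustration on synthetic data, not the cited study's pipeline: the feature values, group sizes, and Gaussian fits are assumptions, and the Bhattacharyya/KL formulas below are the standard closed forms for univariate Gaussians.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical feature values for term vs. preterm records (synthetic,
# standing in for an EHG-derived feature such as a median frequency).
term = rng.normal(loc=0.45, scale=0.08, size=200)
preterm = rng.normal(loc=0.30, scale=0.10, size=60)

# Two-sample t-test with a pooled variance estimate (equal_var=True).
t_stat, p_val = stats.ttest_ind(term, preterm, equal_var=True)

def bhattacharyya_gauss(a, b):
    """Bhattacharyya distance between univariate Gaussian fits of a and b."""
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(ddof=1), b.var(ddof=1)
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2))))

def kl_gauss(a, b):
    """Kullback-Leibler divergence KL(a || b) for univariate Gaussian fits."""
    m1, m2 = a.mean(), b.mean()
    v1, v2 = a.var(ddof=1), b.var(ddof=1)
    return np.log(np.sqrt(v2 / v1)) + (v1 + (m1 - m2) ** 2) / (2 * v2) - 0.5

print(f"t = {t_stat:.2f}, p = {p_val:.2e}")
print(f"Bhattacharyya distance = {bhattacharyya_gauss(term, preterm):.3f}")
print(f"KL divergence          = {kl_gauss(term, preterm):.3f}")
```

Larger Bhattacharyya and KL values (and smaller p-values) indicate a feature that better separates the two classes, which is how such criteria can rank a large candidate feature set.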
“…The use of classification methods that take into account unbalanced data, such as the weighted extreme learning machine [48] or weighted decision trees [49], could also be explored. In the same context, we should like to point out that we applied SMOTE before splitting up the data subsets (training/validation), as has been done in several studies [50][51][52]. It was seen that when performing cross-validation after simple oversampling, the same samples can be included to build the prediction model and evaluate its performance [53].…”
Section: Discussion
mentioning
confidence: 99%
“…On the other hand, applying SMOTE to such a low minority class would yield samples similar to the original ones and would not solve this limitation. We thus opted to perform SMOTE on the entire database, as has been done in numerous other studies [50][51][52]. We hope to address this limitation in a future work with a larger database.…”
Section: Discussion
mentioning
confidence: 99%
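The leakage concern raised in the two statements above can be made concrete with a minimal pure-NumPy sketch of SMOTE-style oversampling. This is a simplified stand-in for the standard SMOTE algorithm (the cited studies likely used a library implementation such as imbalanced-learn); all variable names, the class labels, and the single train/test split are illustrative assumptions. The safer protocol shown here splits first and oversamples only the training part, so no synthetic point derived from a held-out sample leaks into model fitting.

```python
import numpy as np

def smote(X, y, minority_label, k=5, n_new=None, rng=None):
    """Minimal SMOTE-style sketch: synthesize minority samples by linear
    interpolation between a minority sample and one of its k nearest
    minority-class neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xm = X[y == minority_label]
    if n_new is None:
        n_new = len(Xm)          # double the minority class by default
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(Xm))
        d = np.linalg.norm(Xm - Xm[i], axis=1)   # distances to all minority samples
        nn = np.argsort(d)[1:k + 1]              # k nearest neighbours (skip self)
        j = rng.choice(nn)
        lam = rng.random()                       # interpolation factor in [0, 1]
        synth.append(Xm[i] + lam * (Xm[j] - Xm[i]))
    X_aug = np.vstack([X, np.asarray(synth)])
    y_aug = np.concatenate([y, np.full(n_new, minority_label)])
    return X_aug, y_aug

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))                    # 120 records, 4 features
y = np.array([1] * 20 + [0] * 100)               # 1 = preterm (minority class)

# Split FIRST, then oversample only the training part: synthetic samples
# are interpolated purely from training data, never from the test fold.
train_idx, test_idx = np.arange(0, 90), np.arange(90, 120)
X_tr, y_tr = smote(X[train_idx], y[train_idx], minority_label=1, rng=rng)
```

Reversing the order (oversampling the entire database before cross-validation) lets interpolants of a test sample appear in the training set, which optimistically biases the measured performance, exactly the effect the first statement cites [53].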