2021
DOI: 10.1016/j.spinee.2021.02.007
Using a national surgical database to predict complications following posterior lumbar surgery and comparing the area under the curve and F1-score for the assessment of prognostic capability

Cited by 47 publications (21 citation statements)
References 25 publications
“…Because the data used at this stage is still imbalanced, we determine which ratio to use as a baseline from the resulting F1-score. The F1-score can be used to assess a prediction algorithm on imbalanced data with a large true-negative class [19].…”
Section: Results (mentioning)
confidence: 99%
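The excerpt above argues that the F1-score, unlike accuracy, stays informative when the true-negative (majority) class dominates. A minimal Python sketch with hypothetical confusion-matrix counts illustrates the point; the counts and helper names below are illustrative and are not taken from the cited paper.

```python
# Minimal sketch (hypothetical counts): accuracy is inflated by a large
# true-negative class, while the F1-score ignores true negatives entirely:
# F1 = 2*TP / (2*TP + FP + FN).

def accuracy(tp, fp, fn, tn):
    return (tp + tn) / (tp + fp + fn + tn)

def f1_score(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

# Hypothetical split: 950 negatives, 50 positives, and a classifier
# that identifies only 10 of the 50 positive cases.
tp, fp, fn, tn = 10, 5, 40, 945

print(f"accuracy = {accuracy(tp, fp, fn, tn):.3f}")  # 0.955, dominated by TN
print(f"F1-score = {f1_score(tp, fp, fn):.3f}")      # 0.308, exposes poor recall
```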
“…The AUC [39] evaluates the merit of the obtained recommendation results from the perspective of sample probability: it is the probability that, for a randomly selected pair of samples (one positive and one negative), the trained model assigns the positive sample a higher predicted score than the negative one, as shown in Equation (18), where M denotes the number of positive samples, N the number of negative samples, M × N the total number of sample pairs, P_positive the predicted probability for the positive sample, and P_negative the predicted probability for the negative sample.…”
Section: Experiments and Results (mentioning)
confidence: 99%
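Equation (18) itself is not reproduced in the excerpt; assuming it is the standard pairwise formulation the text describes, AUC = Σ 1[P_positive > P_negative] / (M × N) over all M × N positive-negative pairs (ties counted as 1/2), a short Python sketch of that computation might look as follows. Function and variable names are illustrative, not from the cited paper.

```python
# Sketch of the pairwise AUC described in the excerpt (assumed form of Equation (18)):
# count the positive/negative pairs where the positive sample receives the higher
# predicted score (ties counted as 0.5), then divide by M * N.

def pairwise_auc(pos_scores, neg_scores):
    """pos_scores: predicted probabilities for the M positive samples;
    neg_scores: predicted probabilities for the N negative samples."""
    m, n = len(pos_scores), len(neg_scores)
    wins = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in pos_scores
        for q in neg_scores
    )
    return wins / (m * n)

# Illustrative scores (not from the cited paper): 10 of 12 pairs are ordered correctly.
print(pairwise_auc([0.9, 0.7, 0.6], [0.8, 0.4, 0.3, 0.2]))  # 0.8333...
```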
“…Secondly, the actual number of operons compared to single genes most likely results in an unbalanced dataset. In these two scenarios, the F1 score has been proven to be a better metric than the accuracy score for evaluating algorithm performance [49]. Similarly, when datasets are unbalanced, precision and recall were demonstrated to be better evaluators of a model's classification performance, and precision-recall curves were shown to be more useful and robust than ROC curves.…”
Section: Comparison To Existing Algorithms (mentioning)
confidence: 99%
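The contrast this excerpt draws between precision-recall and ROC curves on unbalanced data can be made concrete with a small sketch; the dataset, class ratio, and scorer below are hypothetical and not drawn from the cited operon paper.

```python
# Sketch comparing ROC-AUC with the precision-recall summary (average precision)
# on a heavily imbalanced synthetic dataset; all numbers are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_neg, n_pos = 9_900, 100                     # ~1% positive class
y_true = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])
# A mediocre scorer: positives score only slightly higher on average.
y_score = np.concatenate([rng.normal(0.0, 1.0, n_neg),
                          rng.normal(1.0, 1.0, n_pos)])

print("ROC-AUC          :", round(roc_auc_score(y_true, y_score), 3))
print("Average precision:", round(average_precision_score(y_true, y_score), 3))
# On such skewed data the ROC-AUC can look comfortable while the area under the
# precision-recall curve stays low, which is the excerpt's point about
# precision-recall curves being the more informative evaluator here.
```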