4th International IEEE EMBS Special Topic Conference on Information Technology Applications in Biomedicine, 2003.
DOI: 10.1109/itab.2003.1222512
Evaluation of the logarithmic-sensitivity index as a neural network stopping criterion for rare outcomes

Cited by 12 publications (10 citation statements)
References 7 publications
“…To be consistent with the previous research, the logarithmic-sensitivity index, defined in (1), was still adopted, although a maximum log-likelihood criterion could be employed. The logarithmic-sensitivity index attempts to achieve optimal sensitivity and specificity of a classifier while slightly favoring higher sensitivity [11]. log-sensitivity-index = -sensitivity^n * log10(1 - sensitivity * specificity) (1)…”
Section: Introduction
confidence: 99%
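The index quoted above can be sketched in a few lines of Python. This is a minimal illustration assembled from the quoted statements, assuming the exponent n = 0.75 reported in the thesis excerpt below; the function name is chosen for illustration only.

```python
import math

def log_sensitivity_index(sensitivity: float, specificity: float, n: float = 0.75) -> float:
    """Logarithmic-sensitivity index as quoted in the citation statements:
    maximises both sensitivity and specificity while weighting slightly
    toward sensitivity via the exponent n. Grows toward infinity as both
    sensitivity and specificity approach 1."""
    return -(sensitivity ** n) * math.log10(1.0 - sensitivity * specificity)

# The sensitivity weighting: for the same product sensitivity * specificity,
# the operating point with the higher sensitivity scores higher.
a = log_sensitivity_index(0.9, 0.8)  # higher sensitivity
b = log_sensitivity_index(0.8, 0.9)  # higher specificity
print(a > b)
```

Because sensitivity × specificity < 1, the log10 term is negative, so the leading minus sign makes the index positive; the exponent n < 1 dampens, but does not remove, the bias toward sensitivity.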
“…sensitivity and specificity of the neural network (Ennett, Frize, & Scales, 2003). The logarithmic sensitivity index, defined in Equation 4-4, expands towards infinity when both sensitivity and specificity are 1, but is slightly weighted in favor of sensitivity.…”
Section: 5 Logarithmic Sensitivity Index
confidence: 99%
“…The logarithmic sensitivity index, defined in Equation 4-4, expands towards infinity when both sensitivity and specificity are 1, but is slightly weighted in favor of sensitivity. The degree of weighting (or bias) towards sensitivity is controlled by the exponent n (Ennett, Frize, & Scales, 2003), which is equal to 0.75 for the experiments related to this thesis.…”
Section: 5 Logarithmic Sensitivity Index
confidence: 99%
“…The logarithmic sensitivity index is typically the chosen stopping criterion for MIRG's ANN research framework. The key attribute of this performance measure is that it maximises both sensitivity and specificity, but is more weighted towards the sensitivity of the experiment to encourage the ANN to better predict the less frequent outcome (hospitalisation) (Ennett 2002). The functional representation of the logarithmic sensitivity index is found below (equation 3.7): Index = -Sensitivity^n * log(1 - Sensitivity * Specificity) (3.7)…”
Section: Figure 12 Sample Receiver Operator Characteristics Curve
confidence: 99%
“…The ANN learns from one epoch to the next via nine different control parameters that modify the weights and biases of each node in the network to encourage optimal performance from the network. Table 12 lists the nine parameters and their function in the ANN's learning as explained by Ennett (2002), and Table 13 lists their default ranges found to be ideal by previous researchers in MIRG (Ennett 2002; Rybchynski 2005).…”
Section: Figure 12 Sample Receiver Operator Characteristics Curve
confidence: 99%